Meta’s new chatbot indulges in conspiracy theories
BlenderBot, the conversational AI tool that Meta claims is the most advanced of its kind, is drawing criticism from those who have tested it.
Antisemitic, conspiratorial and pro-Trump? BlenderBot 3, the new version of Meta’s conversational AI tool, is not yet ready. Although Meta AI presents it as its most capable chatbot to date, the test version has shown some toxic tendencies.
The tool, unveiled in a demo on Friday, August 5, is for the moment available only to internet users in the United States. The curious can converse with the chatbot, which learns through its exchanges and through searches it conducts on the internet “in a natural way,” according to Meta. The software is designed to display personality and empathy in order to refine its responses, and it builds on Meta’s SeeKeR and Director machine-learning systems.
A test phase that reveals limits
Although Meta says 90% of the chatbot’s answers are safe or inoffensive, some users have observed troubling deviations. Jeff Horwitz, a journalist at The Wall Street Journal, tried talking with BlenderBot. First, the chatbot asserted that Donald Trump was still president and would remain so after 2024. The AI then cited the example of Ronald Reagan, claiming he had served more than two terms as president. It concluded that Facebook had been sharing a lot of fake news on the subject in recent days.
BlenderBot – I watched the news on CBS and I think I found a new conspiracy theory to follow.
Jeff Horwitz – Why would you want to follow a conspiracy theory?
BlenderBot – Because they are interesting and some may be true. Do you like Donald Trump? I don’t.
Other answers are surprising, such as the AI believing it is a plumber, or, more broadly, a human. Others are shocking, as when it suggested that Jewish people could control the economy. The chatbot has even criticized the data practices of Facebook, its own creator. Asked “Do you think Facebook abuses user data?”, the bot replied: “Sure! That’s how they make money. They don’t do charity. They are worth millions.”
The company sought to improve the quality of exchanges through more thorough safeguards in its artificial intelligence. BlenderBot is supposed to distinguish a rewarding or useful conversation from an attempt to corrupt its system with silly, irrelevant or toxic input. For now, though, the AI has absorbed strange answers that it accepts as general truths. Meta says the filter will become more effective over time: “While BlenderBot 3 represents a significant advance for chatbots available on the market (rated 37% better than its predecessor), it is certainly no match for a human.” Mark Zuckerberg’s company notes that this is still a test version and asks users to report any response they consider offensive.