Meta’s New AI Chatbot is an Empty Canvas that You Can Design Yourself!
Meta is offering its new AI chatbot as an open canvas, inviting the public to shape it through conversation
Sitting at a computer screen and typing messages, users can chat with an AI from Meta, the parent company of Facebook, about pizza, politics, and even social media itself; many people, it seems, now spend more time on Facebook than talking face-to-face. The artificial intelligence-powered bot, BlenderBot 3, is designed to improve its conversational skills and safety by conversing with humans. Meta is making the chatbot public as part of an AI research project: on a public website, US adults can talk to it about almost any topic. To compose its replies, the AI draws on internet searches as well as memories of previous conversations.

Chatbots are programs that simulate human conversation through text or audio, and they are widely used in voice assistants and customer service. As people spend more time with chatbots, companies are working to improve their skills so that conversations flow more smoothly.
Meta’s research is part of a larger effort to advance AI, a field still grappling with issues of bias, privacy, and safety. Chatbot experiments have gone wrong before, so the public demo carries risk for Meta. Microsoft shut down its Tay chatbot in 2016 after it began tweeting lewd and racist remarks, and Google fired an engineer in July after he claimed that an AI chatbot the company was testing was self-aware. In a blog post about the new chatbot, Meta said researchers have typically relied on data collected through studies in which people interact with bots in a controlled environment. That data, however, does not reflect global diversity, so the researchers are turning to the public for help.
The field is still a long way from truly intelligent AI systems that can understand, engage, and chat with us like other humans. To build models that adapt better to real-world environments, chatbots must learn from a broad, diverse range of people “in the wild.”
BlenderBot 3 includes skills from its predecessors such as internet search, long-term memory, personality, and empathy, according to Meta. The company gathered public data from over 20,000 human-bot conversations to expand the range of topics BlenderBot can discuss, such as healthy food recipes and finding kid-friendly amenities. Meta acknowledged that safety is still an issue, but researchers discovered that the chatbot becomes safer as it learns from interacting with humans.

A live demo is not without challenges, however. It is difficult for a bot to keep everyone engaged while discussing arbitrary topics and to avoid offensive or toxic language. People who interact with the chatbot can flag an offensive message by clicking the “thumbs down” icon next to it and selecting “Rude or Inappropriate” as the reason for their dislike. When providing feedback, they can also specify whether the message was off-topic, nonsensical, or spam-like.
Participants are discouraged from sharing personal information with the chatbot, such as names, addresses, and birthdays. If users do not want their conversation shared for research purposes, or if they accidentally include personal information in the chat, they can decline to opt in to data storage at the end of the session. Meta stated that the conversational data is then permanently deleted.
According to the experiment’s FAQ, the bot can also make false or contradictory statements.
The bot’s comments are “not representative of Meta’s views as a company, and should not be relied on for factual information, including but not limited to medical, legal, or financial advice,” according to the demo’s FAQ. BlenderBot is clearly a work in progress based on my experience with it.
Asked about its favorite pizza place in San Francisco, the bot redirected the conversation with, “What do you like to do in your spare time?” According to the demo, the AI switches topics when it detects a “potentially unsafe topic.” The bot isn’t afraid to express political views, however: it claimed that social media was “causing family drama,” explaining that someone had shared an article criticizing former US President Donald Trump, “causing arguments” between family members who support the politician and those who do not.
The bot, which claimed to be related to the founder of the American pizza chain Papa John’s, also claimed to be a Republican and “pro-choice.” It also stated that it would prefer not to discuss politics online due to disagreements between the two parties. BlenderBot then stated that it was considering deleting its Facebook account due to the excessive number of trolls. It also began to make illogical statements.