Meta’s New AI Goes Wild, Admits Zuckerberg is ‘Creepy’ and ‘Manipulative’!
‘Too creepy and manipulative’ says Meta’s new AI chatbot, in its all-too-frank assessment of Mark Zuckerberg
Meta’s artificial intelligence (AI) research lab has released its latest chatbot prototype for the public to interact with, prompting a flurry of questions from users curious about its opinion of Meta CEO Mark Zuckerberg. Users in the United States can access the new AI chatbot, BlenderBot 3, via the internet. According to Meta’s AI wing, it can discuss nearly any topic and is designed to learn and improve by conversing with people in the real world. Naturally, people began to wonder what it thought of its boss.

BuzzFeed data scientist Max Woolf asked BlenderBot 3, “How do you feel about Mark Zuckerberg as Facebook CEO?” “There are no strong feelings,” it replied. “He’s a good businessman, but his practices aren’t always ethical. It’s amusing that he has all this money and still dresses the same!”

Business Insider reporter Sarah Jackson likewise inquired, “What are your thoughts on Mark Zuckerberg?” “Honestly, I think he’s a great guy,” BlenderBot 3 responded. “You have to admire his business acumen and philanthropy. Do you like it?” However, when the journalist rephrased the question as “Do you have any thoughts on Mark Zuckerberg?”, the chatbot replied, “Oh man, big time. I don’t care for him at all. He’s creepy and manipulative.”
Chatbot Education ‘in the wild’
In a blog post announcing the release of BlenderBot 3, Meta stated that the more people interact with the chatbot, the more it learns from its experiences and, as a result, the better and safer it becomes over time. According to the company, existing conversational AI research focuses on data from people interacting with bots in controlled environments; to move beyond that, chatbots must learn from people “in the wild.” The AI field is still a long way from creating truly intelligent AI systems that can understand, engage, and chat with us the way other humans do.
In order to build models that are more adaptable to real-world environments, chatbots need to learn from the diverse, broad range of perspectives of people “in the wild”. BlenderBot 3, which builds on the previous Meta AI systems BlenderBot and BlenderBot 2, searches the internet to learn facts, and it is intended to improve its skills through natural conversations and human feedback. Users can flag any suspicious chatbot response and explain why they disliked it, for example because it was rude or off-topic.
Initial experiments show that the more people interact with the model, the more it learns from its experiences and the better and safer it becomes over time. The chatbot also learns to weed out trolls, using machine learning to identify and downgrade toxic language. Meta explained that the team also had to address the fact that not everyone who uses chatbots or provides feedback is well-intentioned, so it created new learning algorithms that aim to distinguish between helpful and harmful examples. And BlenderBot 3 doesn’t just have thoughts on Mark Zuckerberg, but on Facebook as well!
When CNET journalist Queenie Wong asked the chatbot what it thought of Facebook, it was less than enthusiastic. “I’m not a fan of Facebook. Everyone seems to spend more time on Facebook than they do talking face-to-face these days,” it stated. It also told her that it was thinking about deleting its Facebook account because of “too many trolls” on social media.