
Google Will Either Bring an End to Soulless Artificial Intelligence or to Humanity Itself!
Why a senior Google engineer claimed the company's AI chatbot LaMDA is ‘sentient’ remains a huge question
The suspension of a Google engineer who claimed that a computer chatbot he was working on had become sentient, thinking and reasoning like a human being, has raised new questions about the capabilities of, and the secrecy around, artificial intelligence development at the company. Language Model for Dialogue Applications, or LaMDA, is a machine-learning language model developed by Google as a chatbot meant to simulate human dialogue. LaMDA, like BERT, GPT-3, and other language models, is built on Transformer, a neural network architecture that Google developed and open-sourced in 2017.
This architecture yields a model that can be trained to read a large number of words while paying attention to how they relate to one another, and then predict which words it thinks will come next. What makes LaMDA unique is that, unlike most models, it was trained on dialogue. While most conversations center on certain topics, they are frequently open-ended, meaning they might begin in one place and end in another, touching on a variety of topics and issues.
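To make the "paying attention" idea concrete, here is a minimal sketch of the scaled dot-product attention at the heart of the Transformer. The function name, toy dimensions, and random embeddings are illustrative assumptions, not LaMDA's actual configuration:

```python
# A minimal sketch of scaled dot-product attention, the core Transformer
# operation. Dimensions and inputs here are toy assumptions for illustration.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each token's query is compared against every token's key, producing
    weights that say how strongly each word should attend to the others."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # pairwise relevance of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                         # per-token blend of values

# Toy example: 4 tokens, each embedded in 8 dimensions. In a real model,
# Q, K, and V come from learned projections of the token embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): a context-aware representation for each token,
                  # from which the model predicts the next word
```

Stacking many such attention layers is what lets the model relate every word in a passage to every other word before predicting what comes next.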
When talking with a friend, for example, a conversation could start with a movie and then drift to the region where it was filmed.
This fluid character of dialogue is likely to displace conventional chatbots before long: because they are built to follow specific, pre-defined conversation segments, they cannot keep up with conversations that wander. LaMDA, on the other hand, is designed to allow free-flowing discussion on nearly any subject. This technology, Google believes, will be fantastic.
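The difference is easiest to see in code. Below is a hypothetical, deliberately simple rule-based bot of the kind described above; the intents, replies, and the commented-out language-model call are all assumptions for illustration, not any real product's API:

```python
# A hypothetical scripted chatbot: it only knows fixed intents, so any
# topic shift mid-conversation falls through to a canned fallback.
RULES = {
    "hours": "We're open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

def scripted_bot(utterance: str) -> str:
    """Matches the utterance against pre-defined intents; anything outside
    the script is rejected rather than followed."""
    for keyword, reply in RULES.items():
        if keyword in utterance.lower():
            return reply
    return "Sorry, I can only help with hours and refunds."

# A dialogue-trained model like LaMDA instead conditions its reply on the
# whole conversation so far, so the topic is free to drift (sketch only):
#   history = ["We watched a great movie last night.",
#              "Oh nice! Where was it filmed?"]
#   reply = language_model.generate(history)   # no fixed script required

print(scripted_bot("Can I get a refund?"))           # matches an intent
print(scripted_bot("Where was that movie filmed?"))  # falls off the script
```

The scripted bot fails the moment the movie conversation drifts to filming locations, which is precisely the kind of turn LaMDA is built to follow.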
Blake Lemoine, the suspended engineer, had collaborated with a colleague to present Google with evidence of LaMDA's sentience. In many of their exchanges, the language model appeared to show some sort of self-awareness, leading Lemoine to conclude that it had become sentient. After investigating the claims, however, Google vice president Blaise Aguera y Arcas and Jen Gennai, Google's head of Responsible Innovation, dismissed them. Lemoine sent an email to over 200 people with the subject “LaMDA is sentient” before being suspended and losing access to his Google account. Google, for its part, has stated that the evidence does not support his assertions. Yet even if LaMDA isn't sentient, the fact that it can appear sentient to a human should be cause for alarm.
However, Google claims that when developing technologies like LaMDA, the company's first priority is to minimize such risks. It has “scrutinized LaMDA at every step of its development,” according to the company, and has released open-source resources that researchers can use to analyze models and the data on which they are trained.