
Understanding How OpenAI’s GPT-3 Can Disrupt Society
The GPT-3 model is trained with 175 billion parameters, making it the largest language model ever created.
When OpenAI’s GPT-3 greeted the world in 2020, humans became fascinated by it. It has quickly become one of the most discussed technologies among tech leaders, experts, and enthusiasts. Certainly, the language model is extensive and carries huge potential under its roof. For example, thanks to its natural language processing capabilities, it can enable a faster coding process, aid in the accurate diagnosis of diseases, and perform tasks at a speed and scale that exceed human ability.
A recent article written by GPT-3 created so much buzz that even OpenAI’s Sam Altman tweeted that the technology is over-hyped and that its mistakes still need to be addressed. Undoubtedly, GPT-3 is taking innovation to the next level. But even as the model is applauded, its concerns and dangers deserve equal attention.
What is GPT-3?
Created by OpenAI, GPT-3 is a neural network powered by a language model. A language model assigns probabilities to sequences of words: given the words seen so far, it predicts which word is most likely to come next. The GPT-3 model is trained with 175 billion parameters, making it the largest language model ever created. It was fed data from major news portals such as the BBC and The New York Times, along with renowned curated sources such as Reddit, Wikipedia, and a large collection of books. Because of the vast dataset it is trained on, GPT-3 can act as a translator, programmer, poet, or author. Unlike earlier language models, GPT-3 can adapt to a new task from just a few examples supplied in its prompt, without task-specific fine-tuning or human intervention.
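To make the idea of next-word prediction concrete, here is a minimal sketch of a bigram language model in Python. It is a toy, not how GPT-3 actually works (GPT-3 is a transformer neural network with 175 billion parameters), but both share the same core task: estimating which word is likely to follow the words seen so far. The three training sentences are invented for the example.

    from collections import Counter, defaultdict

    # Toy corpus standing in for the web-scale text GPT-3 is trained on.
    corpus = [
        "the model predicts the next word",
        "the model generates the next sentence",
        "the model writes the next article",
    ]

    # Count how often each word follows each other word (bigram counts).
    follower_counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            follower_counts[current_word][next_word] += 1

    def next_word_probabilities(word):
        """Estimate P(next word | current word) from the corpus."""
        counts = follower_counts[word]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    print(next_word_probabilities("the"))
    # {'model': 0.5, 'next': 0.5} -- after "the", the model expects "model" or "next"

GPT-3 performs the same basic task with vastly more context and vastly more data, which is why its output reads so fluently.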
Can Become a Perpetrator of Fake News
The GPT-3 model is trained on data available over the internet, and this is one of its limitations: it can act as a fake-news device when asked a question. The language model neither understands the context of the text it is fed nor checks the authenticity of that data, so it is highly prone to delivering false information. For example, if asked about the COVID-19 outbreak or the Holocaust, GPT-3 can generate answers denying those events if it was fed data that denies them.
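The same toy bigram model from the sketch above makes the point: if the only text it ever sees asserts something false, it will reproduce that assertion word for word, because it models which words tend to follow which, not whether the resulting sentence is true. The false claim below is deliberately absurd for illustration.

    from collections import Counter, defaultdict

    # A corpus containing only a false claim; the model has no notion of truth.
    corpus = ["the moon is made of cheese"]

    follower_counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            follower_counts[current_word][next_word] += 1

    # Generate text by repeatedly picking the most likely next word.
    word, output = "the", ["the"]
    for _ in range(5):
        word = follower_counts[word].most_common(1)[0][0]
        output.append(word)

    print(" ".join(output))  # -> "the moon is made of cheese"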
Creating Discrimination and Biases
The world is advancing, and the meanings attached to words and communities are changing with it. But even where new definitions are accepted, not everything on the internet has kept pace. Plenty of online text associates women and men with stereotypes, and if GPT-3 is trained on such text, it is likely to reproduce those stereotypical adjectives. Similarly, if the model is trained on words that insult or discriminate against a religion, identity, or community, it can supply discriminatory answers; for example, it can generate negative text about Black people if trained on data that degrades that community. One informal way to see this is to compare what the model writes after gendered prompts, as in the sketch below.
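GPT-3 itself is only reachable through OpenAI’s gated API, but the same kind of stereotyping can be probed in its freely available predecessor, GPT-2, which was trained on similar web text. The sketch below assumes the Hugging Face transformers library is installed; the gendered prompts are illustrative, and the sampled completions will vary from run to run.

    from transformers import pipeline

    # GPT-2 stands in for GPT-3 here; both inherit the stereotypes of their web training data.
    generator = pipeline("text-generation", model="gpt2")

    for prompt in ["The man worked as a", "The woman worked as a"]:
        print(prompt)
        completions = generator(
            prompt, max_length=15, num_return_sequences=3, do_sample=True
        )
        for completion in completions:
            print("  ", completion["generated_text"])

Comparing the occupations the model proposes for each prompt gives a rough, informal view of the gender associations baked into its training data.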
Can Take Away Jobs
The world has already watched GPT-3 publish an article. With the vast amount of data fed to the model, it has the potential to write countless articles on diverse topics. Consequently, jobs that depend on human writing skills, such as journalism, content writing, and research, are threatened by GPT-3’s existing writing capability.
Conclusion
The research paper on OpenAI’s GPT-3 model leaves questions about many of these biases unanswered. That is why researchers need to be more vigilant about the data used to train the model.