
GPT-3: Is the Major Breakthrough from OpenAI a Blessing or a Curse?

Everything You Need to Know about OpenAI’s GPT-3 in Less than 5 Minutes!

Last year, the internet was abuzz with headlines when OpenAI released its Generative Pretrained Transformer-3 (GPT-3). According to OpenAI’s blog post, unlike most AI systems that are designed for one use case, GPT-3 offers a general-purpose “text in, text out” interface, allowing users to try it on virtually any English-language task.
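A minimal sketch of what that “text in, text out” interface looks like through OpenAI’s Python client (in the v0.x style current at GPT-3’s release). The engine name, prompt, and parameters here are illustrative assumptions, and access requires an API key from OpenAI:

# Sketch of GPT-3's "text in, text out" interface via the openai Python
# client (v0.x style). Engine name and prompt are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # access is gated behind an OpenAI key

response = openai.Completion.create(
    engine="davinci",  # the original GPT-3 base engine
    prompt="Summarize the plot of Hamlet in two sentences:",
    max_tokens=60,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())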

A Generative Pretrained Transformer is an unsupervised deep learning model that is first pre-trained on a large amount of unlabeled text. It can then be fine-tuned on a task-specific labeled dataset and tasked with inferring the most likely output for a given input. This makes it a sequence transduction model. In layman’s terms, sequence transduction is a technique that transforms an input sequence into an output sequence.
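To make sequence transduction concrete, here is a rough sketch using a small, openly available model. GPT-3 itself is not downloadable, so the Hugging Face pipeline and the t5-small model below are stand-ins chosen purely for illustration:

# Sequence transduction: transform an input sequence into an output sequence.
# GPT-3's weights are not public, so this uses a small open model (t5-small
# via Hugging Face transformers) only to illustrate the idea.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")
result = translator("The book is on the table.")
print(result[0]["translation_text"])  # e.g. "Le livre est sur la table."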

OpenAI’s original 2018 GPT had 110 million parameters, referring to the weights of the connections that enable a neural network to learn. In other words, a parameter is a learned weight that gives some part of the input greater or lesser importance in the model’s overall assessment of the data. Next came GPT-2 in February 2019, which drew attention after being labeled one of the most dangerous AI algorithms in history. It was trained on 40GB of text data and had 1.5 billion parameters.
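To see what “counting parameters” means in practice, here is a small sketch that loads the freely downloadable GPT-2 (small) checkpoint and tallies its learned weights; the full 1.5-billion-parameter GPT-2 and GPT-3 are counted the same way, just at a larger scale:

# Counting parameters: every learned weight in the network is a parameter.
# GPT-2 "small" (~124 million parameters) is used because it is freely
# downloadable; GPT-3's 175 billion are counted the same way.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
num_params = sum(p.numel() for p in model.parameters())
print(f"GPT-2 small has {num_params:,} parameters")  # roughly 124 million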

GPT-3, an autoregressive language model that uses deep learning to produce human-like text, is considered the largest artificial neural network created to date. It was trained on about 45TB of data, with the Common Crawl dataset alone making up roughly 60% of the training mix, alongside Wikipedia (about 0.6%) and other sources, and it has 175 billion parameters. That is ten times more parameters than the most complex model prior to GPT-3’s release, Microsoft’s Turing-NLG, and 117 times more than GPT-2. Its most notable feature is strong task-agnostic performance without any fine-tuning.
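Those scale comparisons can be checked with back-of-the-envelope arithmetic (Turing-NLG’s published size is 17 billion parameters):

# Back-of-the-envelope check of the scale comparisons above.
gpt3 = 175e9        # GPT-3 parameters
gpt2 = 1.5e9        # GPT-2 parameters
turing_nlg = 17e9   # Microsoft Turing-NLG parameters (published figure)

print(gpt3 / turing_nlg)  # ~10.3x Turing-NLG
print(gpt3 / gpt2)        # ~116.7x GPT-2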

According to Carbontracker, training GPT-3 just once consumes roughly as much energy as 126 Danish homes use in a year, or the equivalent of driving a car to the Moon and back. Higher performance, then, comes at a price: a heavy toll on the environment.

 

OpenAI GPT-3: Promises, Fears and Concerns

One of GPT-3’s main functions is creating anything that has a language structure (not to forget its famous op-ed in The Guardian). This means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even write computer code. It can also help with interface design, automatic email replies, guitar tablature generation, and writing SQL queries from plain-English prompts, among many other things. GPT-3 is even capable of scaffolding an app and writing code for custom functions.
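As a hedged sketch of how the plain-English-to-SQL use case works in practice, a few-shot prompt is sent to the completion endpoint and the model continues it. The table schema, example query, and engine name below are made up for illustration, not taken from the article:

# Sketch of English-to-SQL via few-shot prompting with the openai client
# (v0.x style). Schema, example, and engine name are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

prompt = (
    "Translate English to SQL.\n"
    "Table: orders(id, customer, amount, created_at)\n\n"
    "English: total amount of all orders placed in 2020\n"
    "SQL: SELECT SUM(amount) FROM orders WHERE created_at >= '2020-01-01' "
    "AND created_at < '2021-01-01';\n\n"
    "English: the five customers who spent the most\n"
    "SQL:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=80,
    temperature=0,
    stop=["\n\n"],  # stop at the end of the generated query
)

print(response["choices"][0]["text"].strip())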

Users around the world have also used GPT-3 to develop many interesting applications. For instance, a medical student from the UK used GPT-3 to answer health care questions; the program not only gave the right answer but also correctly explained the underlying biological mechanism. GPT-3 has likewise been used to build chatbots that let you talk to historical figures, and even to autocomplete images.

There is a major drawback, however: GPT-3 can act as a fake-news engine when asked a question. Because the language model neither understands the context of the text it is fed nor checks that text’s authenticity, it is highly prone to delivering false information. Another problem, highlighted in a recent study by researchers from Stanford and McMaster universities, is that GPT-3 does not merely repeat the bigotry in its training data but generates entirely new bigoted statements. The fear that the system can output toxic language and propagate harmful biases is thus becoming a reality. Moreover, if the model is trained on text that insults or discriminates against a religion, identity, or community, it can return discriminatory answers; for example, it can generate negative statements about Black people if trained on data that degrades that community.

Earlier, science researcher and writer Martin Robbins claimed that GPT-3 is overhyped, quipping that all it did was “cutting lines out of my last few dozen spam emails, pasting them together, and claiming the spammers composed Hamlet,” while Mozilla fellow Daniel Leufer called it “an absolute joke.”

Despite its shortcomings, the good news is that artificial intelligence experts believe such enormous language models could be a significant step toward versatile, general-purpose language systems. GPT-3 therefore remains a major breakthrough, producing results that are leaps and bounds ahead of anything humankind has seen before.