
The Future of AI: GPT-3

Artificial intelligence can be a lot of things today.

It can be a personal assistant that wakes you up in the morning and recites the news as you get ready to tackle the day.


It can be a customer service bot that seamlessly converses with you just like a human customer service representative and helps you with the questions you have.

It can also be a corporate system that helps streamline workflow by scheduling meetings, identifying fraud, and even offering insights into employee engagement and happiness levels.


For something that already feels so futuristic, what does the future hold? What will it look like as AI’s capabilities push past their current boundaries and gradually approach those of humans?


Of course, it’s impossible to predict exactly where we go from here. But we can already see signs of the future in some of the recent breakthroughs in AI, and one of the biggest has been language generation.


OpenAI Inc., a San Francisco-based AI research company, recently made its language-generating model, GPT-3, available for private beta testing, and the results have been incredible, to say the least.


At its core, GPT-3 is the third iteration of OpenAI’s Generative Pre-trained Transformer, a machine learning model that can translate text, answer questions, and even write its own prose. The model analyzes the sequence of text it’s given and extends it, predicting one likely word after another until it has produced an original passage that builds on the input.
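
To picture what “expanding the input data” means in practice, here is a minimal, purely illustrative sketch of that autoregressive loop. The “model” below is nothing more than a table of word-pair counts built from one short sample sentence (a stand-in for the enormous transformer GPT-3 actually uses); the only point is the generate-one-word-at-a-time mechanic.

```python
import random
from collections import defaultdict, Counter

# Toy "training data" standing in for the web-scale text GPT-3 learned from.
corpus = (
    "the future of ai is language generation and the future of "
    "language generation is learning from text"
).split()

# "Training": count which word tends to follow which (a bigram table),
# a stand-in for the billions of parameters a real transformer optimizes.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt_words, max_new_words=10):
    """Autoregressive loop: predict one word, append it, repeat."""
    words = list(prompt_words)
    for _ in range(max_new_words):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break  # no continuation ever seen for this word
        next_words = list(candidates.keys())
        weights = list(candidates.values())
        words.append(random.choices(next_words, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "future"]))
```

GPT-3 works on the same loop, only with a far richer sense of which continuation is likely, learned from vast amounts of text rather than a single sentence.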


With this third version now available, OpenAI’s text generator is the most powerful out there. Its predecessor, GPT-2, had already shown highly capable yet controversial results when it generated believable, cohesive “fake news” articles from as little as a single opening sentence. According to MIT Technology Review, GPT-3 has 175 billion parameters (the values the neural network optimizes during training), up from GPT-2’s 1.5 billion.


Given the risk of the technology being misused and its capabilities turned to the wrong purposes, OpenAI previously refused to make GPT-2 available to the public. The question now is this: with far more parameters and expanded limits on what the third-generation model can do, how ready is the world to adopt the technology and put its capabilities to good use?


For OpenAI, the solution for now is to give select individuals access to the model through an API. And within the first days of its use, a few astounding examples of the model’s abilities have already surfaced.


So far, GPT-3 has authored short stories, songs, press releases, technical manuals, and more. In many cases, all a user had to do was provide a title, an author’s name, and a few opening words, and GPT-3 would write a short story brilliantly imitating the style and tone of the renowned writer it was given.
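
As an illustration of what that interaction looks like, here is a rough sketch of a request a beta tester might send through the openai Python client from that period. The engine name, sampling settings, and the prompt itself are assumptions made for this example, not OpenAI’s recommended setup.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # access key granted to beta testers

# The prompt is simply the beginning of the piece: a title, an author
# credit, and a few opening words; the model continues from there.
prompt = (
    "The Old Lighthouse\n"
    "by Ernest Hemingway\n\n"
    "The sea was calm that morning, and"
)

# Hypothetical call: engine name and sampling parameters are
# illustrative choices, not documented recommendations.
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=200,
    temperature=0.7,
)

print(prompt + response["choices"][0]["text"])
```

The text that comes back simply continues the prompt, which is why a title and a famous byline are often enough to steer the style of what follows.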


The system’s abilities aren’t limited to prose, either; its talents extend to such types of text as guitar tabs and even computer code. Adding to its portfolio, the system has successfully generated web layouts in HTML from natural-language descriptions: all users have to do is describe what they want the website to look like, and the system begins coding away.
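
One hypothetical way such a prompt might be framed is shown below: a single description-to-HTML example followed by a new description for the model to complete. The example pair and wording are invented here purely to illustrate the pattern.

```python
# Hypothetical few-shot prompt for layout generation: show the model one
# description paired with its HTML, then leave a new description open-ended.
layout_prompt = (
    "description: a page with a large heading that says Welcome "
    "and a green button labeled Sign Up\n"
    'html: <h1>Welcome</h1><button style="background: green">Sign Up</button>\n'
    "\n"
    "description: a page with a centered photo of a sunset and a red "
    "Subscribe button underneath\n"
    "html:"
)

# The same Completion call sketched earlier could send this prompt off;
# whatever text comes back is the model's attempt at the markup.
print(layout_prompt)
```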

With AI taking on tasks that were previously fit only for the human brain, it’s clear that OpenAI has achieved something once thought inconceivable. But the question remains: to what extent does a system built by humans and trained on human-produced data also take on the subjectivity, biases, prejudices, and preconceptions that humans share?


There is certainly no single answer to this question. Whether the system will ever gain enough consciousness or independence to detach itself from the biases running through the texts and content real people produce is impossible to predict. As of right now, despite its undeniable achievements, GPT-3 is still likely to reproduce sexist or racist language when such language is hidden in the data it analyzes. And even though GPT-3 has been improved since GPT-2 to limit such outputs, they still find their way through the algorithm and into the results the system produces.


It’s clear that OpenAI is on the cusp of a breakthrough that will transform the role technology plays in our daily lives, perhaps more than any innovation to date. But until the day comes (if it ever does) when the technology matches the human brain in originality, emotion, judgment, and independent thinking, the tool is, at its core, a generator of content from input data: a skillful pastiche, a collage of many, many strings of text stitched together in captivating, thought-provoking, and sometimes odd ways.


The biggest challenge for OpenAI and GPT-3 in today’s context will be to shield the product from misuse or harmful repurposing and maximize the positive value it can bring to its users. At the end of the day, with any innovation, it’s in the hands of the users themselves to decide what impact it will have.


And with GPT-3, the choice is this: it can become a source of misinformation, propaganda, and cheating, or a tool for valuable content creation, streamlined education, and much more.


Written By: Gary Fowler, Co-Founder & CEO of GSDVS
