
Abolishing the Author: Literary AI

Written by: Dorrie Peters

 

The term ‘artificial intelligence’ is likely to conjure up images of research labs and advanced computer programs. However, recent developments have shown that AI has uses well beyond math and science. As artificial intelligence becomes more of a reality than a science-fiction fantasy, researchers are finding new applications for it in literature and the social sciences. One program, GPT-3, is being used to generate the most human-like writing ever produced by AI. GPT-3’s success at literary imitation provides a glimpse into the future of an AI-aided world.

 

What is GPT-3?

GPT-3, which stands for Generative Pre-trained Transformer 3, is an autoregressive language model created by OpenAI, an artificial intelligence research group. GPT-3 is the third generation in a series of highly realistic language-prediction programs. It is one of the largest language models to date, with roughly 175 billion parameters. Parameters in machine learning are loosely analogous to synapses in the human brain: both form connections between data points and relay information. For context, the human brain is estimated to contain on the order of 100 to 1,000 trillion synapses. This might make GPT-3 seem minuscule in comparison, but OpenAI’s next model has been rumored to have as many as 100 trillion parameters.
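
To make the analogy concrete, a parameter is simply a learned number inside the model: a connection weight or a bias. The toy sketch below (illustrative only, not OpenAI’s code) counts the parameters of a tiny two-layer network; GPT-3’s architecture is far more complex, but its 175 billion parameters are tallied the same way.

```python
# A minimal sketch, not OpenAI's code: "parameters" are the learned
# weights and biases of a model. Here we count them for a tiny fully
# connected network; GPT-3 does the same bookkeeping at a scale of
# roughly 175 billion.
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network: 512-dim input -> 1024 hidden -> 512 output.
layers = [(512, 1024), (1024, 512)]

num_params = 0
for fan_in, fan_out in layers:
    weights = rng.normal(size=(fan_in, fan_out))  # connection strengths
    biases = np.zeros(fan_out)                    # per-unit offsets
    num_params += weights.size + biases.size

print(f"Toy network parameters: {num_params:,}")  # 1,050,112
```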

[Figure: Webs of data that occur similarly in the brain and in artificial intelligence. Source: Scientific American]

How it works

To understand and replicate human writing, GPT-3 had to undergo a process called generative pre-training. During this step, the AI was fed large masses of human writing, which linguists call text corpora. These texts serve as examples for GPT-3, teaching the algorithm the many patterns and themes that appear in human language. Because of the way machine learning is structured, the scientists who created GPT-3 played a very small role in generative pre-training: after the text corpora were fed into the algorithm, GPT-3 taught itself by analyzing and internalizing the patterns it found. As previously mentioned, AI algorithms are structured similarly to the human brain. Growing networks of data and code mirror the brain’s networks of neurons and synapses, and presenting GPT-3 with examples of human writing expanded the AI’s data network in much the way that learning new things creates new connections in a human brain.
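
To illustrate what “learning patterns from a corpus” means, the toy sketch below (a deliberately simplified stand-in, not OpenAI’s training code) counts which word tends to follow which in a tiny corpus and uses those counts to predict the next word. GPT-3’s pre-training rests on the same idea, next-token prediction, scaled up enormously.

```python
# A toy "language model" that learns next-word patterns from a small
# corpus by counting bigrams. GPT-3's generative pre-training is the
# same idea -- predict the next token from context -- with billions of
# parameters and hundreds of billions of words.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Learn which word tends to follow which (the "patterns" in the text).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most common next word seen during 'pre-training'."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- the most frequent continuation seen
print(predict("sat"))  # 'on'
```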

GPT-3 has many uses, but the most successful so far is a program in which the AI completes a piece of writing started by a human. The user inputs as much text as they like – whether one sentence or a few paragraphs – and GPT-3 uses its knowledge of literature and writing patterns to continue it. The more text the user provides up front, the more realistic and accurate the continuation becomes. In some cases, even the scientists who created GPT-3 have had trouble distinguishing its imitation of human writing from the real thing.
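
In practice, that completion workflow looks roughly like the sketch below. It assumes the openai Python package’s original, pre-v1 Completion interface and a valid API key; parameter names may differ in later SDK versions.

```python
# A hedged sketch of the text-completion use case, assuming the openai
# Python package's pre-v1 interface. The API key is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

prompt = "It was a bright cold day in April, and"

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt=prompt,      # the human-written text to continue
    max_tokens=60,      # how much text the model should add
    temperature=0.7,    # higher values give more varied continuations
)

print(prompt + response.choices[0].text)
```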

 

Implications

So far, GPT-3 has been used only experimentally, in small-scale demonstrations of literary imitation. Its output has appeared in publications like The New York Times, which had it write three short stories from one-sentence prompts. Those stories were lighthearted and harmless, but critics have begun to question the potential risks of an AI as powerful as GPT-3.

One concern involves bias in GPT-3’s training data. In the same way that bias can arise in the human mind, bias can arise in an AI when the data used for generative pre-training is skewed in a particular direction. GPT-3 has been found to show prejudice in its correlations, one example being its strong association between the words “Islam” and “terrorism”. Cases like this raise questions about exactly what data GPT-3 was fed during generative pre-training.
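
A small sketch can show how skewed text produces skewed associations. The toy example below (hypothetical data, not GPT-3’s actual corpus) computes pointwise mutual information, a standard co-occurrence statistic: words that appear together more often than chance in the training text end up strongly associated, whether or not the association is fair.

```python
# A minimal sketch (hypothetical toy data) of how skewed training text
# yields skewed associations: pointwise mutual information (PMI)
# between two words rises when they co-occur more often than chance.
import math
from collections import Counter

# Deliberately skewed toy "corpus": one word keeps appearing
# next to one loaded word.
sentences = [
    "storks deliver babies".split(),
    "storks deliver babies".split(),
    "storks eat fish".split(),
    "planes deliver cargo".split(),
]

words = Counter(w for s in sentences for w in s)
pairs = Counter((s[i], s[j]) for s in sentences
                for i in range(len(s)) for j in range(len(s)) if i != j)
total_words = sum(words.values())
total_pairs = sum(pairs.values())

def pmi(a, b):
    """PMI: log ratio of observed co-occurrence to chance."""
    p_ab = pairs[(a, b)] / total_pairs
    p_a, p_b = words[a] / total_words, words[b] / total_words
    return math.log2(p_ab / (p_a * p_b))

# "storks" ends up tied to "babies" purely because of the skewed
# examples it "read"; it never co-occurs with "cargo" at all.
print(round(pmi("storks", "babies"), 2))   # 1.0
print(pairs[("storks", "cargo")])          # 0 co-occurrences
```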

Another point of concern is GPT-3’s increasing realism. As the AI is fine-tuned and refined, its imitations of human writing become ever more convincing. If GPT-3 becomes more widespread, how will readers differentiate human writing from artificially generated writing? Will GPT-3 eventually replace human authors and journalists? These moral and practical questions weigh heavily today as GPT-3 continues to grow in power and popularity.

 

References and Sources

Choi, C. Q. (2016, June 20). “Artificial Synapses” Could Let Supercomputers Mimic the Human Brain. Scientific American. Retrieved March 11, 2022, from https://www.scientificamerican.com/article/artificial-synapses-could-let-supercomputers-mimic-the-human-brain/

Metz, C. (2020, November 24). When AI Falls in Love. The New York Times. Retrieved March 8, 2022, from https://www.nytimes.com/2020/11/24/science/artificial-intelligence-gpt3-writing-love.html

Nyuytiymbiy, K. (n.d.). Parameters, Hyperparameters, Machine Learning. Towards Data Science. Retrieved March 12, 2022, from https://towardsdatascience.com/parameters-and-hyperparameters-aa609601a9ac

 
