The convincingness of current state-of-the-art language models like GPT-3 is dangerous for a number of reasons, its potential for mass-producing misinformation among them, which is why OpenAI requires developers to apply for access to the GPT-3 API. But language models don’t have to be perfect to have concerning implications for the future of reading and writing. While the researchers who design and program these models may implicitly understand that the “voice” coming through their output is disembodied, laypeople may not realize that GPT-3 is not an embodied program with a sense of self but an algorithm probabilistically imitating surface-level patterns in its training text. As the general public encounters AI-generated text more often, and as that text becomes more convincingly real, it becomes vitally important to distinguish between voice and the simulation of voice, which has no real source. However coherent a GPT-3-generated document may seem on the surface, any language that comes from it is being created by the reader, not by the machine.
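To make the "probabilistic imitation" claim concrete, here is a minimal sketch in Python: a toy bigram model that, like GPT-3 at an incomparably smaller scale, produces each next word by sampling from patterns observed in its training text. Nothing below reflects GPT-3's actual architecture (a neural transformer, not word counts); the function names and the tiny corpus are invented purely for illustration of how text can emerge from statistics with no speaker behind it.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which word follows which; these counts are the model's only 'knowledge'."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=12):
    """Emit words by sampling observed continuations; no intent, no self, no voice."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # no continuation was ever observed for this word
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A hypothetical training corpus; any sense of meaning in the output is the reader's.
corpus = (
    "the voice on the page is not a voice at all "
    "the page is a surface and the surface repeats the voice"
)
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The output can read as fluent phrases, yet the program has only ever shuffled observed word pairs; whatever "voice" a reader hears in it is the reader's own contribution.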

final essay for LITR1231 with Prof. John Cayley. summer 2021.

see pdf version of essay here.