
Literary Theory for Robots: How Computers Learned to Write by Dennis Yi Tenen

  Dennis Yi Tenen tricked me. His book sports an engaging cover and an interesting title, Literary Theory for Robots: How Computers Learned to Write, but I’d have to say that putting “literary theory” in the title is something of a misnomer. Really, the work is more of a critical history—a well-developed one, to be sure—of key moments and developments in technology, spanning from the ancients to today, that comprise our current understanding of large language models. It offers a persuasive case that the history of AI is intertwined with the practice of reading. I admit, though, I was really hoping to learn about how AI and literature might interact in new and surprising ways and that I’d leave the text with a new framework for literary analysis.

Along the historical route, Tenen is a compelling storyteller. It’s engaging to see the successive developments and to hear how the contributions of, say, Ada Lovelace helped lead towards what we now recognize as artificial intelligence. It was interesting, also, to read about the technical developments that made AI possible—starting from wheels that spun together and kept connected ideas linked, and moving towards a sort of predictive-text model that relies on the statistical likelihood of each subsequent word.
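For readers curious about what that kind of predictive-text model looks like in miniature, here is a toy sketch—not anything from Tenen’s book, just a bare-bones bigram predictor on a made-up corpus—that picks the statistically most likely next word:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model would be trained on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → cat ("cat" follows "the" most often here)
```

Large language models are enormously more sophisticated, but the underlying intuition—predicting what comes next from observed frequencies—is the same.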


Tenen offers a generally balanced view of what artificial intelligence is and what it is capable of. He points to some of its incredible possibilities, but also some of the missteps and misapplications to which it has been put. For instance, he suggests that meaning cannot be constructed simply by relying on word frequency; just because words often appear together does not mean that they are producing meaning. In fact, Tenen suggests that it is the unlikely combinations that are more likely to produce original literature.
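That point about frequency is easy to see with a quick count. In this hypothetical snippet of text (assembled purely for illustration, not drawn from the book), the most frequent word pair is an empty function-word pairing, while the striking, “original” combination occurs exactly once:

```python
from collections import Counter

# Hypothetical toy text, assembled just for illustration.
text = ("of the night of the day of the sea "
        "the luminous grief of the sea").split()

# Count adjacent word pairs (bigrams).
bigrams = Counter(zip(text, text[1:]))

print(bigrams.most_common(1))          # (('of', 'the'), 4): frequent, but says little
print(bigrams[("luminous", "grief")])  # 1: the unlikely pairing appears only once
```

Frequency rewards the commonplace; the combinations that feel literary sit in the long tail.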


Hence a central debate in the text: the role of originality when it comes to AI. Ultimately, Tenen asserts that AI is like any other tool and seems to accept its use, even in creative pursuits. He does so by persuasively arguing that writers have never worked from individual genius alone. For one, he points to the “template” culture that emerged in the Victorian era: books were published on the different kinds of conflicts that stage plays might have, for example, and collections of readymade storylines that people could remix for their mystery novels abounded. A little later in the book, he takes a more Marxist angle and observes that any literary production involves a whole range of contributors and collaborators—writers, editors, publishers, and so on.

The wholesale acceptance of AI, though, is challenged by Tenen’s theses in the final chapter. He notes that AI is a tool like any other, available for use or misuse at the hands of the user, but we also need to distinguish between AIs. AI is not uniform; it serves any number of purposes, some of which can be valuable while others can be disastrous. AI is not singular, so our conversations about it need to be more specific.


The last chapter of the book, which offers 9 “Big Ideas for an Effective Conclusion,” presents both opportunities and challenges for AI. Some of the opportunities might be promising—but I’d argue that they’re utopian. Tenen suggests that, rather than putting people out of work, AI will liberate us to do more creative work. I don’t buy it. Every advance in technology opens further avenues for the exploitation of workers—think of the Industrial Revolution. I think Tenen underestimates the capacity of capital to destroy people, and the optimistic spin of being “able” to work more creatively downplays the fact that this is a position forced on us.


Yet Tenen also seems to recognize that politics is particularly vulnerable to the egregious use of AI and its capacity for manipulating the truth. He’s right. I don’t think that politics or law have successfully kept pace with AI, and we need more lawmakers invested in learning about it. At the same time, I don’t want some kind of technocratic society where the best manipulators of AI are the only ones able to engage in politics.


The piece of the book that I actually find most surprising is its connection to linguistic debates. There are two camps when it comes to language: descriptivists and prescriptivists. The descriptivists argue that however language is used is the “correct” way of using it: if everyone understands contextually how the language is being used, it’s being used appropriately. For example, if someone says “axe” instead of “ask,” we all know the intention and there’s no need to call it out (especially because of the underlying racism of that correction). The prescriptivists suggest that there are particular rules that must be followed and that anything else is linguistically incorrect and thus inadmissible. For full transparency, I’m a descriptivist at heart, especially because of the cultural biases implicit in prescriptivism. However, Tenen discusses how, if AI is modelled on purely descriptivist principles, a number of flaws emerge in its use of language. If I recall, he doesn’t say it directly, but it makes me think of the Twitter chatbot (Microsoft’s Tay) that turned into a Nazi so hastily—frequency, in that case, created racism. So there need to be some prescriptivist principles embedded to give the system coherence and boundaries. It’s one of the most interesting arguments about descriptivism and prescriptivism that I’ve read in a long time.


My notes on Literary Theory for Robots are rather scarce, primarily because I anticipated a much different book, but also because the book is quite short. It would be a work well worth expanding. In fact, when Tenen identifies the nine angles worth exploring in the final chapter, I couldn’t help but feel that that should have been the book. Rather than giving a history, looking at the implications in more detail would have been more valuable—any one of the theses could have been its own book, I’m sure.


So while Literary Theory for Robots wasn’t what I wanted, it was still interesting enough to warrant reading. I just wish there were a little more meat on its bones—or, I guess, a few more megabytes on its circuitry? I’m still waiting to see how the very practice of interpretation changes in response to the gigantic epistemological shift taking place in the realm of programming.


I suspect I won’t have long to wait.

