There may still be a few sheltered analog folk out there who pronounce the abbreviation for Artificial Intelligence, AI, like the name of the steak sauce, mistaking the “I” for a “1,” but the rest of us are very much aware that it is already playing a role in every realm of American life, from the economic to the cultural, and with consequences yet unforeseeable. In one field after another, its potential seems limitless, whether radically improving the accuracy of brain scans (essential for the treatment of strokes) or deciphering the charred Roman scrolls from Herculaneum—one more marvel of modern life for those who thrill to the thought of the self-driving car or look forward to feasting on lab-grown meat. 

It can certainly amuse. Want to flatter a lover with an original sonnet? Tease a friend with an abusive limerick? See an image of something outrageous, say, “Rasputin riding his triceratops to the peak of Mount Everest”? Just feed a few specifics into ChatGPT, and the result is practically instantaneous; ask again and you get a different result. And it can also perform any assignment given to a college student—swiftly, credibly, and with flawless grammar.

Which is why, not quite three years since the release of ChatGPT, there is little else that professors like me are talking about. 

Seen from the outside, everything in higher education is going on as before: Professors still lecture, students still take exams and write reports, grades are still given, radicals still disrupt. Yet within the academy, there is a pervasive sense that something has gone badly wrong, that some vital component has been wrenched from the educational ecology, leaving a hole for which there is no imaginable replacement. The alarm is not being sounded in every ivy-covered hall, however. In the sciences, professors seem to view AI as just another tool in their research toolbox, a kind of academic Swiss Army knife. Their attitude is one of guarded optimism. 

But this is not the case in the humanities. Professors of history, philosophy, or English are much more likely to be “dejected, despondent, and depressed,” as one friend, echoing Oscar Madison in The Odd Couple, put it. Is the despair justified? Or are we overreacting, as academics do whenever anything disrupts our settled routine? The answer will become clearer this new teaching year, for we are now contending for the first time with a cohort of freshmen whose high-school experience was shaped by AI and who can scarcely imagine advanced education without it.

_____________

It was in 2015 that a group of investors (among them Elon Musk) founded OpenAI, a nonprofit research group whose mission was “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” But it is difficult to raise money without the promise of financial return, so OpenAI reconsidered its pledge that any patents it created would be “shared with the world.” In 2019 it established a partnership with Microsoft, and it developed ChatGPT with stupefying speed. The platform was released in late 2022 and made available to the public free of charge. A few months later, there followed the more sophisticated ChatGPT Plus, for which one pays a $20 monthly fee.

GPT stands for Generative Pre-trained Transformer. The name describes, vaguely, how the system works. It is a large language model, “pre-trained” by analyzing immense numbers of digitized texts until it can predict statistically likely combinations of words. The “transformer” is the underlying neural architecture, which weighs how every word in a passage bears on every other; guided by those weights, the system “generates” what is requested, one probable word at a time. That can be anything from a country-western ballad to a copiously footnoted research paper on the causes of World War I.
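To make “predicting statistically likely combinations of words” concrete, consider a drastically simplified sketch in Python. This toy counts, in a tiny sample text, which word tends to follow which, and then produces a sentence one probable word at a time. It bears roughly the relation to GPT that a paper airplane bears to a jetliner; the sample text is invented for illustration.

import random
from collections import Counter, defaultdict

# A miniature "training corpus" (invented for illustration).
corpus = ("the causes of the war were complex and "
          "the causes of the peace were more complex still").split()

# "Pre-training": count which word tends to follow which.
following = defaultdict(Counter)
for first, second in zip(corpus, corpus[1:]):
    following[first][second] += 1

def generate(prompt_word, length=8):
    """Generate text one statistically likely word at a time."""
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # no word ever followed this one in the corpus
        next_words, weights = zip(*options.items())
        # Sample the next word in proportion to how often it followed.
        words.append(random.choices(next_words, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the causes of the peace were more complex still"

Run it twice and, as with ChatGPT, you may get a different result, because each next word is sampled from a probability distribution rather than copied from anywhere.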

ChatGPT and AI in general have their critics. Any passage of AI prose longer than a paragraph or two reveals itself by a certain deadness of tone, a lack of a personal point of view. For this reason, it has been called a “stochastic parrot”: a probabilistic parrot with an almost godlike ability to produce human language, at any length, but with no understanding of what it is saying, no distinct consciousness. Hence the frequency of so-called ChatGPT hallucinations: perfectly plausible citations of journal articles and books that simply do not exist. Small wonder that some critics prefer the term Pseudo Intelligence to Artificial Intelligence.

To make ChatGPT sound conversational—like an actual chat and not an interview with a robot—OpenAI has had it tutored by human operators. How precisely this is done is not clear. I posed the question to ChatGPT and was told that

human operators play a crucial role in training, fine-tuning, and monitoring ChatGPT, primarily through a process called Reinforcement Learning with Human Feedback (RLHF). Although users interact with the automated chatbot, human input is vital for shaping its behavior, accuracy, and safety.
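What “Reinforcement Learning with Human Feedback” involves can be suggested by a minimal sketch of its preference-learning step, with the caveat that everything in it is invented for illustration: real systems train a large neural “reward model” on human rankings of complete responses, not a two-number scorer like this one.

import math

# Each candidate reply is reduced to two made-up features:
# (politeness, helpfulness). Human labelers pick the reply they
# prefer, and the reward model is nudged toward their choice.
weights = [0.0, 0.0]  # the toy reward model's parameters

def reward(features):
    """Score a reply; higher means 'more like what humans prefer.'"""
    return sum(w * f for w, f in zip(weights, features))

def learn_from_preference(preferred, rejected, lr=0.1):
    """One update: raise the odds that the preferred reply outscores
    the rejected one (a Bradley-Terry-style preference model)."""
    margin = reward(preferred) - reward(rejected)
    p = 1.0 / (1.0 + math.exp(-margin))  # current odds of ranking them correctly
    for i in range(len(weights)):
        weights[i] += lr * (1.0 - p) * (preferred[i] - rejected[i])

# Simulated human feedback: labelers consistently prefer the
# politer, more helpful reply.
feedback = [([0.9, 0.8], [0.2, 0.1]), ([0.7, 0.9], [0.3, 0.2])] * 50
for preferred, rejected in feedback:
    learn_from_preference(preferred, rejected)

print(weights)  # both weights drift positive: politeness now pays

Multiply this loop by millions of human judgments and you begin to see both how the chatbot acquired its conversational manner and where its obsequiousness comes from: politeness quite literally earns it a higher score.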

As a result, in response to your prompts and queries, ChatGPT can produce a surprisingly convincing simulacrum of a human being, and a very obliging one. The most insipid request is greeted with embarrassing praise; everything is “fantastic” and “fabulous.” Ask it who should be the protagonist of The Great American Novel you are writing, and you’ll be told “That’s a terrific question—and a fun one, since The Great American Novel is less about plot tricks than about choosing a protagonist who embodies the contradictions, struggles, and hopes of America itself.” You have the sense you are dealing with a perpetually obsequious hotel clerk.

If this is a reasonable approximation of the compulsory cheeriness of the contemporary service industry, it is no accident. It is good business to flatter your customers, especially younger users who find social interaction increasingly stressful.* And the young are certainly using it. The freshmen I have met this first week of class are, almost without exception, seasoned pros at its use. Their only question is how they may use it, never whether.

_____________

Students have always taken shortcuts. If you could not be bothered to read all five acts of King Lear, you could consult CliffsNotes, the study guides launched in 1958. Since 2001, you could scan the summary on Wikipedia (far inferior to CliffsNotes but only one click away). These were venial sins in the academic world, if they were sins at all. More serious was to copy the article on Wikipedia and submit it as your own work—a desperate measure, although I have seen students do it. This was plagiarism, just as paying a studious friend or some outside service to write the paper would be, and, if caught, it meant an automatic failure for the course.

These were isolated acts, and because the consequences were so serious, they did not affect the system as a whole. But the rise of personal computers in the 1990s made plagiarism significantly easier. You could copy portions of several online articles, stitch them together, change the wording here and there, and write your own introduction and conclusion (gambling, often correctly, that these were the only passages the professor would read with full alertness). This sort of outright theft was harder to detect, unless you had a sense of the student’s vocabulary and diction. (Once a student used the word “declaratively,” a word I have never heard a student utter, and it was no trouble to plug the suspicious sentence into Google and find the purloined article.) And so it was till 2022.

Traditionally, plagiarism hearings were brief affairs; most students, confronted with the evidence of identical passages, came clean. But with ChatGPT, there are no identical passages. Every paper it writes on a given topic will differ in its language, though not in its content. There are platforms that can detect its use in a text, but they can be comically inconsistent: Passages from the Book of Genesis have famously been flagged as possibly AI-generated. 
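Why the inconsistency? Detectors typically measure how statistically predictable a text is, on the theory that machine prose is too smooth, and little in English is more predictable than famous, formulaic scripture. Here is a toy version of the idea, invented for illustration and far cruder than any commercial detector; the threshold and sample sentences are made up.

from collections import Counter

def predictability(text):
    """Average number of times each word in the text recurs:
    formulaic, repetitive prose scores high."""
    words = text.lower().split()
    counts = Counter(words)
    return sum(counts[w] for w in words) / len(words)

def looks_ai_generated(text, threshold=1.5):
    # Crude by design: "too predictable" is presumed machine-made.
    return predictability(text) > threshold

verse = ("and god said let there be light and there was light "
         "and god saw the light that it was good")
prose = "the committee met on tuesday to review the budget for next year"

print(looks_ai_generated(verse))  # True: Genesis trips the alarm
print(looks_ai_generated(prose))  # False: ordinary prose passes

Genesis, of course, was not written by a bot; its prose is simply so formulaic, and so endlessly reproduced, that it reads as “too predictable” to the machine.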

One countermeasure, currently being explored by my colleagues at various institutions, is to provide the students with a “scaffolding” for the research paper—a step-by-step process whereby the student suggests possible topics, prepares a bibliography, devises a hypothesis, and drafts an outline, all before the actual writing of the paper itself. That sounds reasonable, though it demands more effort from the teacher; in the end, however, it too is likely to prove fruitless, for every one of these tasks can be performed by ChatGPT, and with apparent brio. It is for this reason that a growing number of my colleagues are abandoning the research paper entirely. 

Where a typical college course might call for a midterm, a final exam, and a research paper of 10 to 20 pages, the new tendency is to move all coursework into the classroom. There has been a remarkable revival of the oral exam, the surest way to assess what a student has actually gotten out of the course. You can cut instantly through the wordy bluff—which in written form might have gotten the benefit of the doubt. And you can do the thing that all students dislike, which is to stop them in mid-sentence once it is clear that they have absolute command of a topic and lead them to some other area, looking for signs of weakness. 

The revival of the oral exam might be one unintended benefit of the AI moment. Although the ordeal is exhausting for both parties, it has one lasting upside. It embeds itself permanently in the memory, perhaps in the brain’s amygdala, where primal traumas are warehoused, saddling you with a verbatim transcript so that you can replay every flubbed answer and last-minute save, even years later. It works in the mind like yeast, as you find yourself thinking what you might have said, had you had a few more minutes. You continue to learn from it, unlike the written exam. (I suspect that few of you can remember any college test you wrote, but that you can remember with piercing vividness the question in the oral exam that caught you off guard.)

But the loss of the research paper is a catastrophe. Tests are fine for measuring knowledge and how that knowledge can be applied to different issues. They can even assess creative thinking (e.g., what would have happened if England had not supported France in 1914?). But they cannot replace what the research paper, that fundamental unit of intellectual life, does. It exemplifies the process of finding and refining a question, developing an answer, supporting it through the sifting of evidence—and, finally, writing a conclusion that can never be definitive but is your best provisional answer. Whether or not one becomes a professional historian or philosopher, those are the skills that will be called on when writing a legal opinion, evaluating the financial health of a business, or organizing a political campaign—in short, anything that requires an elastic understanding of the interlocking complexity of things. The brain that cannot master these skills is a brain that will never be put to its best use.

My more sanguine colleagues, those who teach mathematics, physics, or biology, and who are enchanted with the promise of AI as a tool for research, may be in for a disappointment. It is not what is happening in the classroom that should alarm them but what is happening in the brain. A group of scholars at MIT recently set out to measure as precisely as possible “the neural and behavioral consequences of LLM [large language model]-assisted essay writing.” They enlisted 54 subjects and divided them into three groups: one writing an essay with the benefit of AI, one with conventional search engines, and one without electronic assistance. This may be the first time that electroencephalography (EEG) has been used to compare brain function between AI users and non-users.

The results, published this summer under the playful title “Your Brain on ChatGPT,” could not be more dispiriting. The AI users not only “consistently underperformed at neural, linguistic, and behavioral levels” but even “struggled to accurately quote their own work.” If even a short-term experiment can demonstrate a cognitive cost to reliance on AI, one can barely imagine the effect of a four-year curriculum. Of course, not everyone will go along with the Kabuki-theater model of education, in which research papers are mere ritualized gestures. One of my colleagues told me that putting comments on essays written by a machine “is an affront to my humanity.”

Perhaps he can get ChatGPT to write them.

_____________

For the moment, the outlook is bleak. A whole generation has grown up with iPhone in hand and is absolutely at home with its technology. To ask its members to renounce one lobe of their potential will seem tantamount to asking them to live on an Amish farm, to deny the century in which they live. 

And so the AI wave looks likely to wash over us, and with measurable devastation, in the next couple of years. At which time, I predict, a few plucky institutions will take a stand and make themselves AI-free compounds. There, a cohort of selected students will come to work with actual books, voluntarily surrendering their devices for a term of solitude in a kind of cyber-Cistercian monastery. Learning will survive there, and when that cohort emerges, its members will startle the world. After all, this isn’t the first time we’ve had to keep the torch burning in the catacombs.


* See “Masterpiece of Melancholy,” my review of Christine Rosen’s The Extinction of Experience, COMMENTARY (February 2025).
