The best AI prompters will be people who have been steeped in long-form writing
- dvollaro
- May 14
Updated: May 29

I have been using ChatGPT a lot more often these days for research. Already, I can see how it is changing my research methodology, and not for the better.
Because I was raised and trained in a book-based education system, new ideas and connections have always arisen from engagement with long-form writing—articles, essays, books, etc. I have a Ph.D. in nineteenth-century American literature, so I have made a deeper investment in books than most. Reading is still an act of discovery for me. I use a highlighter when I read, make copious notes in my books, and journal about what I read. This habitual engagement with whole texts produces an inner dialectic when I read—a conversation between competing reactions, a rich tableau for learning.
Here is a good example of how that tableau works: I recently plucked my well-worn copy of The Basic Writings of Carl Jung from my shelf and opened it to his essay "The Assimilation of the Unconscious." I quickly found myself immersed in it, and a burst of intellectual activity followed. Soon I was in conversation with the text, which led me to highlight passages and then open my journal to write about one of them. My brain was working overtime to create a unique response. Here is a passage from that journal entry:
Jung warns of the dangers of the individual being subsumed or bullied by society. He wrote, "The bigger the organization, the more unavoidable is its immorality and blind stupidity." And then, a few lines later: "Individuality will inevitably be driven to the wall. This process begins in school, continues at the university, and rules all departments in which the state has a hand. In a small social body, the individuality of its members is better safeguarded and the greater is their relative freedom and the possibility of conscious responsibility." This is Kirkpatrick Sale's Human Scale argument put in psychological terms. Big systems produce misery. This is the message.
There is nothing special about this journal entry, but my path to writing it is radically different from any chatbot interaction I've had recently. ChatGPT can summon an ocean of factual information, but most of it has the depth of a wading pool. On the other hand, my twenty-minute encounter with Jung burrowed deep. I can almost feel the new synapses forming.
Here are a few thoughts about my encounter with Jung:
Long-form reading stimulates many unpredictable, unanticipated connections
I have observed that my old-school, book-bound, time-intensive research process turns up connections that GPT would likely never have made. To test this theory, I prompted the chatbot with this question related to my journaling about Jung: "Did Carl Jung have opinions about the scale of human civilization? If so, how do they resonate with modern thinkers on the subject?" The response accurately identified Jung as a critic of modern mass society who worried about the alienation of the individual and touted the need for decentralization and psychological individuation. The chatbot mentioned E. F. Schumacher, Wendell Berry, Yuval Noah Harari, and Sherry Turkle in its response, but not Kirkpatrick Sale.
I tried several more times to create a prompt that would bring Jung and Sale into alignment, but I failed each time.
I was not surprised by this result. Long-form reading creates the space for rich connections between ideas that are just not possible with chatbots, which give me instantaneous answers to questions but present responses that are shallower and less complex. I learn things from chatbots in the same way I have always learned things from encyclopedias, as a place to begin research rather than a final destination.
This encyclopedia comparison is instructive for me. I think of chatbots now as learning companions that can prompt further exploration into topics. For example, I plan to pick up Sherry Turkle's book Alone Together because of my recent exchange with ChatGPT about decentralization. But I will then read the book, which is an important qualification. If the research pathway does not lead back into long-form reading, that is a big problem.
Chatbot output is only as good as the prompts, and people who were trained in long-form reading are better prompters
Some questions are better than others; this is true of prompts as well. Deep reading strengthens a person’s ability to ask good questions. Literature, for example, is often based on implicit or explicit questions, and people who read and enjoy it internalize this probative potential. Long-form reading also encourages reflection and metacognition, which is another internalized form of questioning. People who read literature, history, and philosophy encounter contradiction and complexity; they are likely more comfortable in spaces where questions are expected.
People with training in deep reading are better able to recognize how others think or feel, predict how they might behave, or understand their state of mind. Literature readers, for example, inhabit the interior lives of characters. This experience could potentially improve a prompter’s ability to imagine how an AI will interpret language.
Also, deep reading exposes a person to richer sources of information. The prompt I wrote about Jung's interest in societal scale, for instance, would have been impossible without my prior intellectual engagement with Sigmund Freud’s greatest student. Jung's interest in small social bodies is not part of the greatest hits list of Jungian topics that would emerge from ChatGPT if you asked it to summarize his ideas (I know because I tried it). It's a deeper cut. I thought to ask it because of my prior engagement with Jungian ideas. This prior exposure to Jung trained my instincts, which in turn told me that it was likely Jung cared about the issue of human scale. It is unlikely that a person with no previous knowledge of Jung would formulate such a prompt to begin with.
Why does this matter? AI developers are warning of something called "model collapse." This is the predicted outcome of large language models being fed AI-generated content. LLMs were originally trained on high-quality data that originated within human-based civilization (large caches of novels, for example). But as a system is increasingly fed AI-generated content of inferior quality, it begins to degrade, headed for eventual collapse.
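The mechanism behind model collapse can be seen in a toy simulation that is not from this essay but illustrates the same loop: fit a simple statistical model to some data, generate new "synthetic" data from the fit, refit on that output, and repeat. Here a normal distribution stands in for the model; the spread of the data stands in for the diversity of the training corpus. Over many generations, the spread collapses.

```python
import random
import statistics

def next_generation(samples):
    """Fit a normal distribution to the current data, then replace the
    data with samples drawn from the fitted model -- the analogue of
    training the next model on the previous model's output."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

random.seed(0)

# Generation 0: "human" data with plenty of spread (diversity).
data = [random.gauss(0, 10) for _ in range(20)]
initial_spread = statistics.stdev(data)

# Each generation is trained only on the previous generation's output.
for _ in range(300):
    data = next_generation(data)

final_spread = statistics.stdev(data)
print(f"spread: {initial_spread:.2f} -> {final_spread:.2f}")
```

The final spread is far smaller than the initial one: each refit-and-resample step loses a little of the tails, and the losses compound. Real LLM training is vastly more complicated, but the direction of the effect is the same.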
Model collapse is a problem for AI developers, but it could well be a metaphor for the effects of AI on civilization itself. If the people using AI to generate and consume information are themselves trained on AI-produced data, isn't it reasonable to conclude that a degradation will occur in the quality of that information, and perhaps in the intellectual abilities of humans in general? AI developers worry that AI-generated data lacks the richness and diversity found in data produced through human activity. The data is inferior. When humans are steeped in this inferior data, isn't it possible that our collective intellectual abilities will decline?
Chatbots encourage shortcuts that undermine the quality and integrity of writing.
Lately, I find myself taking shortcuts with research that never would have occurred to me in the pre-AI era. For example, the temptation is there to research a topic using a chatbot, ask it for a few quotations representing the idea, and then integrate them into the writing. This process seems efficient, but it removes any responsibility on the writer's part to know anything about the source of the quotation. Chatbots open the door for this kind of shortcut; stepping through it leads to an obvious loss of quality and integrity. Speed versus quality: that tension now confronts anyone who writes.
Why is the shorter path to knowledge inferior—reading a series of summaries spat out from a chatbot, for example? Some of my students believe it is not. They speak of learning as if we will very soon be able to simply download information into our brains like Trinity learning to fly a helicopter in ten seconds in The Matrix.
Another problem is the quality of research. I've been double-checking sources and facts from ChatGPT and find a significant percentage of them to be flawed. The chatbot regularly confabulates sources, pulls statistics from Reddit threads, and makes dubious numerical estimates by culling data from multiple websites. The labor required to correct these issues is considerable, but the chatbot's initial speed in producing these results creates a sense of its automatic value. I would not describe it as trust; it is more like a kind of slick packaging that resists dismantling. We want to believe in the instantaneous result because we have been indoctrinated to respect speed and efficiency. These qualities are accorded automatic value in our society.
I return to Marshall McLuhan's insistence that "the medium is the message." The message conveyed by chatbots is that learning should be instantaneous and therefore effortless, but this is a dangerous illusion. Learning still requires friction, and mastery comes with many hours of practice. There are no big shortcuts that come with AI in this regard. My biggest fear about generative AI is that overexposure to the technology will soften societal expectations for what learning should look like. That will be a tragedy.





