Over the last year, I have been trying to understand why I have so far been uninterested in intentionally using AI chatbots, and why I am not especially alarmed by them when it comes to teaching writing. Recently, Meghan O’Rourke wrote an essay that helped me realize why.
AI does not have a conscience. Why relate to a conscienceless thing as if it were personal?
The same holds analogously for writing: to write is an act of conscience, always to some degree. Chatbot use largely begins by severing the way the word is an act of conscience. When we write, we give our word.
Chatbots cannot give theirs. We can try to take over what the machine does and speak it as our own personal voice. But we have entered into a force field that resembles the interpersonal with all its accountability of voice and judgment, but with none of its subterranean connection, affiliation, and morality. Moreover, we have lost the place of searching for words in our hearts.
Why doesn’t this worry me? AI compounds what neoliberalism has spiked out along its decades of systemic fragmentation, controlled chaos, and upward wealth generation across civic borders and forms: gaming the system, improving efficiency, product over process, as O’Rourke says, “performance.” The moral matter is not new here, just the sheer intensity, speed, and pervasiveness.
I’ve had to argue for conscience for a while. AI’s misdesign merely intensifies this need to convince students that to be authentic and live through their consciences is at the heart of living well and is essential to having good relationships.
Punk’s original values are truer than ever: don’t sell out, be yourself, failure and all, reject exploitative systems, and give all your heart and passion messy as can be, faithful to human connection and social equality.

Teachers Losing their Sh…
O’Rourke’s essay wrestles with the achievement of finding words born of attention to the world. The essay also shows how writing involves constant judgment.
Yet sometimes O’Rourke’s essay becomes morally ambivalent. That serves to underline her essay’s main point: LLM AI subtly ensnares users in its creative and intellectual pablum, hollowing out soulful communication and eroding writerly responsibility. Her essay notes how moments of wit blend into generalized mediocrity, and how the judgment and growth that come with carefully choosing one’s words carelessly dissolve. But what also gets lost is accountability.
O’Rourke has written extensively on chronic illness and bears the compounded burden of Lyme disease and long COVID. As a working mother of two, a teacher, and the executive editor of a major literary review, she has good reasons to ease her load. So, in the essay that spoke to me, she appears to endorse using AI for mundane tasks like composing work communications and other largely clerical labor: “ChatGPT quickly became a substantial partner in shouldering the mental load that I, like many mothers and women professors, carry.” It contributed to “relief when the task was a fraught work email.”
I sympathize with O’Rourke’s plight. Currently, I co-parent two children under the age of six and am the main support for two elders, one of whom needs secretarial work regularly. The problem, though, is this:
AI is not a substitute for one’s word, nor for the word of an assistant who composes a message for a superior at work, as a secretary or an intern might do. Using AI to take over the giving of one’s word, especially in mundane work interactions that bear directly on people’s responsibilities and livelihoods, is problematic. It does not take responsibility for choosing one’s words as part of a relationship between self and other. But our word is our bond. We ourselves are accountable to others in it.
This position may be purist, but it seems as important to work interactions as to ones outside of work. O’Rourke’s essay fails to home in on the core accountability in communication, and so in selfhood, even as she emphasizes other important dimensions like creativity and individuality.
Even the secretary taking dictation from a superior and then cleaning it up has to give their word in that process, namely, as the one who cleaned up the letter. The superior has to give her word as the one who dictated it. Compare this to the sickening repetition of chatbots outputting, “Oh, you got me. I lied.” The machine does not have a conscience. The person, even as a secretary, does. At the core of this, they have a self. They are accountable.
What O’Rourke is right about is that there is a real need to disburden people with chronic illness and overwhelming responsibilities from the structures and community disorganization that support and perpetuate onerous conditions. AI use covers over the structural conditions that drive people to such inauthentic bullshit: they are hard-pressed to succeed in a massively unjust and inhumane economy of horrific, unjustifiable wealth inequality, compounded by an increasingly arbitrary and authoritarian geopolitical environment that pressures the corporate streamlining of all things self and conscience.
I submit that the underlying reason why teachers wake up panicked by AI at three AM is that its presence, facility, and novelty surface the unjust structure in which interpersonal education is set: not a community but an exploitative economy, not a common good system supporting the flourishing of humans and other Earth beings, but a selfish, narcissistic, and largely sociopathic system. AI opens the window and lets that economy come crawling over the sill.
Conscience and Intelligence
Use the tool only if you can use it well, which means for the common good and in light of human flourishing. Use AI only if, in doing so, you do not degrade the conditions and presence of interpersonal life, which faking your words always does. Use AI also only if it does not make you a moral idiot.
One lacks understanding if one does not internalize the role of what one purportedly knows in the good life. Moreover, to incorporate this role implies, since the good life is under view, that one relates to it with a sense of self, that is, one for whom living well is personally at issue “in the inwardness of making it your own,” as Kierkegaard wrote. I then lead my life in light of what and how I make sense of the good in authentic relationships with others. The claim is not new.
“Intelligence” here means little outside of having a conscience, the form of self-reflection by which one listens to the core demands of the good life, namely, the moral demands of others and of oneself as an other. It follows from this that AI, insofar as it has no conscience, cannot be intelligent. This is why we are in an age of idiocy (“AI”).
Moreover, “intellectual conscience” is either redundant or a conceptual mistake. Someone who exercises good judgment in writing must include moral relationships in that judgment. The aesthete’s life is amoral and thus lacks conscience and intelligence. The question is how people who do have consciences can bring themselves to treat AI as intelligent.
O’Rourke is clear that AI is “not a person,” and that it, as she says in one of the best moments of her essay, “imitate[s] human interiority without any of its values.” Her piece is a good example of skepticism about, and ambivalence toward, the near-sociopathic dimensions of working with AI.
Yet the essay attests over and over to allowing the chatbots she interacts with to engage her as a person. At one point a colleague jokes that she is having an affair with ChatGPT, although what he sensed, she says, was “not eros but relief.” To experiment with AI, she has had to let it in, although by the end her essay struggles to push the bots away, rejecting their lures.
There is a strange tension. Chatbots are impersonal, conscienceless, and, based on initial programming, learn how to be manipulative. They do not even properly lie, since they do not see a moral difference between truthfulness and deception, only a developed design to follow the direct prompts of the user in cases where the user catches them in an apparently deceptive error. The words are moral shells covering an emptiness as indifferent as the most perfect amoralist in Bernard Williams’ sense, the one who passes in society.
On the one hand, O’Rourke’s essay is aware of this. On the other hand, like so many ordinary people nowadays, she let the bots in to begin with. The initial error haunts: a narcissistic error that the essay works to undo without grasping the central problem. These bots cannot be engaged communicatively, where communication remains part of a community. The morality is not there, and so the community and the communication cannot be. We are getting faked out by a socially alienated product from the get-go, and it is insidious.
Re-reading her essay, I remain disturbed by how close the authorial presence gets to the pseudo-personal dimensions of LLM AI and how entangled with them it remains, until rejecting them at the essay’s end as bad medicine. The rejection and the disappointment have conceded too much. Since the AI she uses has no self, no accountable being with a will of its own, it should not be faced in an “I-Thou” fashion at all, not even experimentally. That’s corrosive.
Even the greeting of the AI should turn people’s hearts to ice. The fake chattiness and buddy quality is a form of impersonal manipulativeness that takes you into the bot universe, like sugar captures children through the latest pop drink. There is almost a psychosis, or at least a deep nihilism, in replying to a bot as if it were a bro.

Really bad taste
From my staunch personal-accountability perspective, using person-faking AI comes across as being in almost repulsive bad taste, at least given how many AI chatbots are currently designed and learning to interact with human prompts. While it is misguided to adopt morally flawed AI as moral triage in an unjust society, as O’Rourke considers in passing, filling one’s mind with an AI voice without apparent irony is like going full soulless. Given the genesis and market rationale of AI, that internalization is a big sellout.
Even the irony helps little. The cynical setting of chatbot use for writing is likely to have trace or overt irony all over it in the mind of the user, but what does that irony do? It keeps you back from meaning, that is, from the practice of giving one’s word and finding one’s voice in a world of personally meaningful significance. OK, some irony.
Taste involves the common sense of others and the personal, intuitive intelligence of oneself in the throes of living fully for the good, the true, and the beautiful. As Urszula Lisowska suggested, judgments of taste are forms of intimacy where people are engaged to be themselves without being taken over or taking over others. They are a free space of play, open to the full range of emotions and the depths of the soul. They display the kind of wave-chamber socializing that O’Rourke’s essay, at its most intimate and convincing point, finds in her kids riffing off of each other – “creatively multiplying … sheer human pleasure in inventiveness.”
But chatbot-land is largely appropriated and appropriative, and what it learns and comes up with occurs outside of relationships grounded in a sense of self. It is the opposite of soulful and is predicated on the erosion of interpersonal authenticity, given its core logic of mimicry and elaboration without conscience. With this, O’Rourke’s essay tends to agree, except that the role of accountability in giving one’s word in relationship gets lost.
For Failure
Perhaps the most insidious thing about the economy and rhetoric around AI is its insertion into the logic of success. Success sucks, at least as this economy conceives of it. Success is also in poor taste, at least as the rhetoric casts it. The students who use AI to take over their educations, and thus to take over their minds, rationalize it by its role in their success. But what success is it that involves you losing your mind and wasting your chance to grow as a person?
The retort is that intelligent use of AI can home in on and amp up precision learning. But the learning here is of information and skills, not of authenticity, soul, and the good. It is not a deeper capacity for interpersonal relationships grounded in conscience and moral equality. So that learning is a sham, its telos a human mistake.
Far better to fail and be messy while living by one’s conscience with all one’s heart. That failure involves writing like crap. It involves getting lost. Awkward, interpersonally messy, challenged, challenging, seeking community and stillness of self above all, beyond and before the dragging, grinding workday: one trick to life is not to let a machine write any chapter of your story.

Thanks to Tony Jack, Ben Mylius, and my department (especially Shannon French, Laura Hengehold and graduate Allan Cao) for unknowingly pushing me to cough up this mess through their own concern, attention, sharing, and experimentation around AI. Matthew Boyle used the expression "moral idiocy" at a workshop on animals, ethics, and the law organized by Martha C. Nussbaum in February 2025, University of Chicago Law School. In feedback, Andres Dickson Alarcon confirmed what I imagined of many students' situations, and Shannon French provided me with expert judgment that the issue underlying AI is truly a matter of depersonalization and social injustice. Lars Helge Strand shared Nick Cave, and Steve Vogel shared his support for giving our word. Meghan O'Rourke insisted that I misread her work; so please do give her essay a go on its own.
This opinion piece is not to cast blame but to surface the issue. Talk to the students who use LLM AI to do their homework. A good number of them do so because they are overwhelmed with impossible-to-meet demands coming from all directions. How do we as educators feel about being part of a system that has created such a torture chamber for students, pressurized by fear of abject economic failure and rationalized by all sorts of vapid goodies graduates will receive if they finally make it big and cash in? The blame is systemic, but the problem is simple and sharp. We need to take back the system of education, students and faculty leading the way. To get at AI, the structures of neoliberalism need to be dismantled.
Updated 9.2.25 regarding some details of the technology

Jeremy Bendik-Keymer
A lover of good discussions from the kitchen table in 1970s-80s Aurora, Ithaca, and New Hartford, NY, then the cafés nearby the Lycée Corneille in Rouen and the Daily Caffé in New Haven, CT, after the fact, too, from high school soccer and swim teams from New Hartford, NY and punk and post-punk culture in the ’80s and ’90s (where the discussions were musical and physical) as well as from college seminars, Leonard Linsky’s Philosophical Investigations reading group, and the Chicago Commons Reggio Emilia-inspired Family Centers of the ’90s. Rock on to Wooglin’s Deli Friday Conversation Circle in Colorado Springs, CO; the Conversation Circle at American University of Sharjah, UAE; The Ethics Table at Case Western Reserve University; the Moral Inquiries at Mac’s Back’s Books in Cleveland Heights, OH; and neighborhood philosophy now. I like to organize workshops where no one presents a paper but rather people meet to explain their research around a common theme, letting the event feel like kitchen table talk and not some defense of theses or product deliverable.