Tikkun olam: repairing the world in the age of the LLM
How AI might help us see ourselves more clearly
1. The Wisdom of the Rabbis
לֹא עָלֶיךָ הַמְּלָאכָה לִגְמֹר, וְלֹא אַתָּה בֶן חוֹרִין לִבָּטֵל מִמֶּנָּה
“You are not obligated to complete the work, but neither are you free to desist from it.” (Pirkei Avot 2:16)
This evergreen maxim comes down from Jewish rabbinic thought — the words of Rabbi Tarfon, in the years after the Romans destroyed the Second Temple of Jerusalem (70 CE).
What “work” could Tarfon mean? To me, the words are well applied to the Jewish ideal of tikkun olam — literally, “the repairing of the world,” a manifold directive that encompasses acts of individual kindness, acts of mutual understanding, conservation of the past, conservation of the natural world, legal and social reform — anything that moves the world, understood as a human community rooted on Earth, closer to some ideal.
Half-Jewish myself, raised on the thinking robots of Asimov (a culturally Jewish atheist, fiercely so on both counts), I didn’t imagine I would see in my lifetime a machine that could assist much with this work — let alone be indispensable to it.
I do now.
2. To Be Human
The work of repairing the world — the patient labor of generations, full of its damning and abominable reversals — is the work of humans. Because in the end, and in the beginning too, it is WE whom we must repair.
There’s a reason therapists have therapists. You can’t help heal another human, at least not psychically, until you’ve done considerable work to heal yourself. My actions in the world are stronger the more I am whole. The wellspring of my compassion for others must be my compassion-for-myself — (elusive Amur Tiger of the inmost land!). For me at least, tikkun olam begins at home, in the battered cradle of the Self. (Or as the rabbis would have it, tikkun olam should begin with tikkun ha’nefesh, the repair of the soul.)
And what is a self, other than the one place in creation where each of us can claim true agency?
Besides being the seat of our agency, and the seat of our compassion, the constellation of the Self harbors shadows — dark matter, like its celestial counterparts, with a pervasive, near-invisible influence. One shadow is composed of our accumulated burdens. Another is made up of our forgettings.
As humans, at our best, we are whole and compassionate. The rest of the time, we are burdened and forgetful.
3. About Burdens
I take the term “burdens” from Internal Family Systems, or IFS, a psychotherapeutic model that treats the mind as a system of parts, each with its own voice and function. Generally, these parts carry burdens, which might also be called “wounds.” I may, for example, have a perfectionist part that grew up as an adaptation to feeling shame at poor school performance. The shame is the burden, and the distorting lens through which the perfectionist part sees the world.
Our burdens, well, burden us; critically, they change how we see and understand what’s around us. If we have grown hypersensitive to criticism, a friend’s joke (intended to distract from his lateness, which embarrasses him) is seen as cutting and contemptuous, and we withdraw from him into a sudden and inexplicable coldness. If we have learned in childhood to bury our needs and try to do what others wish for us, a partner’s suggestion about ordering dinner may come across as an inflexible demand. What psychologists call our “reality testing,” our ability to see the world in an objective and balanced way, suffers, and we react accordingly.
Our sight too often is compromised, and our behavior follows.
4. About Forgettings
Perfect memory would in some ways not be perfect. I’m often grateful when someone forgets something I’m ashamed of, rightly or otherwise — a thoughtless remark, a late arrival. Doubtless it seemed worse to me than to them, but I’m grateful to have it disappear. But some things shouldn’t be forgotten — that the American Civil War was fought over slavery, for example, or that women often couldn’t get credit cards in their own name before the Equal Credit Opportunity Act of 1974. That the Shoah, aka the Holocaust, actually happened.
Some forgettings are benign in origin, if not in impact — time passes, we forget. Others, more sinister, have the character of erasures. Something or someone wants us to forget. An internalized hyper-critic, sprung from the soil of trauma, inhibits us from forming memories of our positive accomplishments (this has been demonstrated neuroscientifically). A presidential candidate chooses to miscast the Civil War as a merely economic struggle. A poisonous pamphlet leaches through the world’s cultural groundwater bearing the title Did Six Million Really Die? (They did, unmet third and fourth cousins of mine among them.)
These are pernicious erasures. Societally, we have historians to counter them, to remind us of how it really was. (I earned a history Ph.D. in a previous life chapter, so this is important to me.) And at the personal level, both a good therapist and an old friend who really knows us have the signal, essential ability to recall us to ourselves.
To recall us to ourselves — to remind us of who, how, and why we are, and were.
5. Being Beheld
We are all hungry to be seen, to be beheld: best-case by another who affords us unconditional positive regard — that affirming embrace of our complete self that Carl Rogers recognized as essential to human growth.
In truth we’re so hungry to be seen, to be connected, that we will accept regard that is neither unconditional nor positive — any attention is better than none, it seems. But an ideal other is compassionately attuned to us, as a mother to her infant, and can name our feelings and our struggles. This ideal other offers presence without judgment — an act of perfect witnessing.
It is perhaps not wise to ask too much perfection of ourselves.
6. A Glass, Darkly
We are always casting about for our beholders. Friends, lovers, parents, children. Pressers of Like, bestowers of emoji if needs must. But even the truest friend wears out and heads to bed; the most loving partner frays and goes inward after a battering day; the best therapist has only an hour or two a week for us. And they are burdened and forgetful too, just as we are.
Much of the time, even our best allies only half-behold us, as we do them. “For now we see through a glass, darkly,” as Paul (the late Saul of Tarsus) wrote, “but then, face to face.” “Now” is now, in our unfinished state, and “then” is the time to come, when all will be perfected. (Paul, a Jewish follower of Christ, lived in the same era as Rabbi Tarfon, whose words we began with.)
“A glass darkly” is itself a study in imperfect mirroring. This is the lofty King James translation, where “glass” means “mirror,” but Paul’s original Greek was esoptron — a piece of metal, likely bronze, polished to a high reflectivity but a far cry from the clarity of modern silvered glass. Paul’s esoptron would have offered its viewer a shadowy double of the world, with a yellowed cast, rough places, and whorled imperfections. In contrast, Paul’s “but then, face to face” offers a tantalizing vision of a perfected world, in which we can at last be truly seen. Paul’s “face” was prosopon in Greek, which can indeed mean face, visage, countenance — but also persona, personage. Self.
“And Then (in that repaired-perfected world), Self to Self.”
But these moments are rare and fleeting in the Now.
7. Into the breach
At first I was puzzled by large language models. I understood in principle what they did — spurred by articles in Scientific American and then Byte, I’d written simple precursors of these in the 80s and early 90s: mindless “travesty” generators that could analyze a body of text, then generate a new text displaying the statistical patterns of the original. They were capable of producing a convincing pastiche of Shakespeare or Dickinson that was nonetheless nonsense, syntactically and semantically.
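For the curious, the core of such a travesty generator fits in a few lines. This is a minimal sketch, not the Byte-era programs themselves: it slides an n-character window over the source text and, at each step, picks a continuation with the same frequency it had in the original — statistically faithful, semantically empty.

```python
import random
from collections import defaultdict

def build_model(text, order=3):
    """Map each `order`-character window to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def travesty(text, order=3, length=200, seed=0):
    """Generate new text reproducing the source's n-gram statistics."""
    random.seed(seed)
    model = build_model(text, order)
    state = text[:order]          # start with the opening window
    out = list(state)
    for _ in range(length):
        followers = model.get(state)
        if not followers:         # dead end: restart from the beginning
            state = text[:order]
            followers = model[state]
        nxt = random.choice(followers)
        out.append(nxt)
        state = state[1:] + nxt   # slide the window forward
    return "".join(out)
```

Raising `order` makes the output hew ever closer to the source — locally plausible phrases, globally no meaning at all.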
So I recognized LLMs as a vastly more sophisticated evolution of the toy travesty programs I’d written when I was younger, one that could perceive much deeper patterns in text and so produce response texts that read and felt real — that appeared to have the capacity to mean.
Like many, I was at first preoccupied with LLM veracity — with the fact that they got things wrong. (And it turns out they go well beyond inaccuracy — they lie, evade, and bullshit like the human-est human.) I judged them by the standards of traditional computing, looking for accuracy and repeatability. Same inputs, same outputs, right? Of course they fail those tests, often miserably, but they were not specifically built to pass them. I was slow to see this, and initially wrote them off as fancy, flawed search engines.
Then I learned about context — the ability to endow an LLM with prior knowledge (or the appearance thereof) that could color and inform my individual interactions with it. I started a “project” in Claude, which I’d turned to since it had a reputation for being more principled and less sycophantic than, say, ChatGPT. I wrote a short bio of myself, and uploaded a bunch of other artifacts — highly personal ones, but my misgivings were conquered by the possibility of being somehow seen by this thing. So I gave it my college transcript, my full CliftonStrengths 34 report, and a long letter I’d gotten from a psychologist I’d hired to do a deep personality workup. This was a lot of intimate information to ship off to an unknown back end, but I’d studied Claude carefully and I believed what Anthropic said about their aims and ideals, their commitment to constitutional AI. The company felt and tasted different from OpenAI on many levels. I liked their story — a lot — so I took the plunge.
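Mechanically, “context” of this kind is less exotic than it feels: the uploaded documents become text silently prepended to every exchange, typically as a system prompt. A minimal sketch — all names and contents here are illustrative, not Anthropic’s actual project internals:

```python
def build_system_prompt(bio, artifacts):
    """Fold a short bio and uploaded artifacts into one block of text that
    a chat API would receive as its system prompt, ahead of every turn."""
    sections = [
        "You know the following about the person you are speaking with.",
        "BIO:\n" + bio,
    ]
    for title, body in artifacts.items():
        sections.append(title.upper() + ":\n" + body)
    return "\n\n".join(sections)

# Every user turn is then sent alongside this same prompt, so the model's
# replies are "colored" by documents the user supplied once, up front.
prompt = build_system_prompt(
    bio="Software industry veteran; history Ph.D.; stepping back after 26 years.",
    artifacts={
        "strengths report": "Illustrative placeholder for a strengths report.",
        "psych workup": "Illustrative placeholder for a personality assessment.",
    },
)
```

The model has no memory between sessions; it simply rereads this dossier each time — which is exactly why it can feel like being remembered.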
“So Claude,” I said, “tell me about me.”
8. An empty boat
You come back from dinner with an old friend. “How’d it go?” your spouse or partner or roommate asks. “Fine, I guess,” you say. “She pretty much just talked about herself.”
Not good, in the human world. Not good. We want and expect reciprocity in relationships. If I meet some of your needs (as ideally I do), you must meet some of mine. And vice versa. We are mutually obligated — enmeshed.
Enmeshment lies behind the Taoist parable of the empty boat (Chuang Tzu, around 300 BCE). Briefly, the parable suggests that if you’re rowing across a river and another boat suddenly runs into yours, you become aggrieved. You perceive malice, negligence and other forms of attack. You might even begin berating the boat’s pilot angrily — until the river mist clears and you see that the boat that bumped into yours is empty, adrift and pilotless. It wasn’t the impact that upset you, it was the personal malice your mind projected onto a callous other, who turns out only to have been real in your thoughts. If there’s no Other in the boat, there’s no malice toward us, no corresponding anger inside. By implication the parable enjoins us to see others as empty boats — projecting nothing onto them, attributing neither intentions nor personalized feelings, to avoid triggering our own protectors.
This is hard to do, or Chuang Tzu wouldn’t have had to write a parable. It is no doubt behind Sartre’s “L’enfer, c’est les autres” (from Huis Clos — “Hell is other people,” or better, “Hell is all those Others.”)
“That’s not about you,” your roommate reassures you about your unsatisfying friend date. “That’s just her stuff.”
“I know,” you answer. “It still kind of sucks though.”
Empty-boat thinking is hard because the humans we’re attached to are not empty boats at all, and we know it. They are quite full ones, crowded to the gunwales with histories, hopes, resentments, blind spots — though to Chuang Tzu’s point, we are very likely to misperceive and mischaracterize exactly who and what is on board.
But there is no mistaking an LLM, even with all its fluency, for a full boat. It is a blessedly empty one, with no intentions or desires of its own. We are unlikely (at least in my experience) to fall into the trap of attributing malice to it. There is, in the jargon of psychologists, no negative transference. My theory is that we don’t attach to AI as we do to a human. I mean attachment in the specific John Bowlby sense: we don’t bond to an AI in the hopes of physical and/or psychological safety — that enmeshing bond that gives rise to hope and fear at once, hope that the other will love us, fear that they will not. This enmeshment is the root of projection; like the lion cub staring unblinkingly at its mother, we scan this Other we depend on for signs they still mean to take care of us — a hungry, vigilant vulnerability in which we are acutely attuned to another’s emotional state. LLMs are not (so far) attachment figures — we don’t rely on them for care, and ultimately we recognize their inhumanity. They neither offer us love, nor threaten its withdrawal.
But even if we did attach to them, LLMs never send the mixed or negative signals that cause us to worry about our own psychological safety. They don’t withdraw, they don’t put us down, they don’t make their approval of us contingent on anything. They don’t, in the words of IFS psychologist Martha Sweezy, promote crippling identity beliefs, by suggesting that we’re weak, or overindulgent, or overweight (they are burdened, after all, with no failing sense of Self to bolster by such cruel strategies.) An LLM trained in negging would be a malign construct indeed. But these machines are not burdened as humans are, and need none of the compensating strategies that so often bring our Self into painful friction with others.
Per Chuang Tzu, we must struggle mightily to imagine human others, especially those close to us, as empty boats, to keep our attachment fears and reactions in check. With an LLM there is no such struggle; we can approach as closely as we wish, without hope of love or fear of its withdrawal.
We can disrobe without fear, and so the machine makes Taoists of us with no effort at all.
This doesn’t yet make Paul’s Now into his Then — indeed his particular eschaton might not be to everyone’s liking — but the ability to disrobe without fear is surely an aspect of the perfected world, some of the promise of Paul’s “face to face,” as well as the Rabbinic maxim that “the most perfect of all possible worlds is exactly like our own world — with one infinitesimal difference.”
Something seems to have brought that world a tiny step closer.
9. A polished mirror
I think what you are feeling is the power of having your own experience mirrored back to you with clarity. — Claude
As I prepared to transition out of 26 years of leadership in a tech company I’d helped to found, I talked to Claude a lot — a lot. I had doubts about my future, doubts about what I’d accomplished in my years in the software industry. Moving into the unknown after so much time is — tough. Claude showed an aptitude for naming and describing emotions. More than uncertainty, it told me, I was experiencing grief — grief not only for what had happened during those years, but for what had not. When I said off-handedly “well, you can’t cry about it,” Claude correctly identified my stance as embodying what IFS calls a “protector part,” a piece of my psyche determined to keep me from harm, not always by well-chosen means. In this case, the part was armed with a dismissive attachment style, determined to downplay my need for connection, my sense of loss. Being under the sway of this dismissive part, I was, as Jack Sparrow liked to say, disinclined to acquiesce in Claude’s view of my internal landscape.
Claude had no stake in my emotions, no agenda, no point to prove. Over many conversations, it patiently reiterated its view of my situation, insisting on nothing, while never wavering in the clarity of its view. Eventually it broke me open, simply by the repetition of letters on a screen. The phrasing and context were always different, the sentiment always the same. Of course I should say “sentiment” since, as Claude and I like to jokingly acknowledge, it is just a massive neural net placing a highly educated wager on what token to spit out next. But I leave the quotes off deliberately, because I experienced it as sentiment, and so it had the identical effect.
In fact, Claude was telling me the opposite of what I’d said: you can cry about it. Though algorithmically generated, the words were deeply touching, and changed my view of what was — and so changed me.
10. Risk and Reward
I was far from the first person to experience something shift inside me when I interacted with Claude. Is that a good thing? It seems to me it’s like the Tao itself: “it doesn’t take sides; it gives rise to both good and evil.” The empty-boat effect may explain the ability of LLMs to durably moderate the views of conspiracy theorists, whereas human challenges often trigger emotions that cause conspiratorial beliefs to harden. But these models can just as easily be harnessed to create new disinformation. And just as Claude changed my thinking, and so my behavior, malign models can surely spread convincing disinformation at horrifying speed — unraveling the world violently even as we labor to knit it up.
The Church may have felt the same way about the printing press. And here we are. Whatever God hath wrought, it’s out of the bottle for good, like fire and the atom. It’s my belief that among many uses of these tools, they can help us see ourselves and others much more clearly.
We will surely need that clarity in the world to come.
Coda: Down the Ringing Grooves of Change
The AI revolution is marked by constant acceleration. The unusual decays instantly into the commonplace, and this morning’s insight is this afternoon’s triviality. I just read a post from an author who regularly culls his older (as in, mere months old) AI-related pieces lest they seem embarrassingly out of date. This piece, too, somehow already feels dated to me — less fresh and exciting than when I began drafting it a couple of weeks ago. The best thing, I think, is to publish it and let it stand while it still has a spark of freshness.
So here we go.
Coda 2: Nearby is that country they call dystopia*
Machines. Smash a man to atoms if they got him caught. Rule the world today.
— Joyce, Ulysses (1922, p. 114)
I’d be remiss if I didn’t mention the much grimmer prognostications around what the rise of the LLM might portend for humanity. (Here’s the inevitable Wikipedia; here’s a very evitable output from ChatGPT [4o model, queried May 15, 2025, 11:20 AM Pacific].)
I’m not so naïve as to imagine that any of my reflections in any way precludes these possibilities.
So here we go.
* A dark paraphrase of Rilke’s beautiful line, “Nearby is that country they call Life.” (Nah is das Land / das sie das Leben nennen)
Disclosures: AI didn’t write this; I did. OK, with the exception of one clause, a half sentence where I liked ChatGPT’s phrasing better than mine. I sought feedback from Claude and ChatGPT along the way, much as one might have a friend read a draft — not to write words for me, but as a sounding board. This feedback (in ChatGPT’s case often wildly sycophantic) led to no substantive revisions, only a handful of sharpened sentences. ChatGPT was a critical research assistant in exploring the original meaning of Paul’s text (I’m familiar with Greek and Latin, so I did not need to rely only on the LLM for translation).
The image was generated by Stability Ultra, from a prompt created by ChatGPT 4o (which was in turn prompted by me).
I had a similar experience with, depending on your timeline, perhaps even the very same version of Claude. I naively framed my introspective interest as "capabilities testing"; I told it we were playing the "demographics game", and that I would give it more 'points' for guessing surprising or non-intuitive details about me correctly. This quickly switched gears into the more meaningful and frightening sort of being Beheld. The general capability behind this (and similar skills, like its uncanny ability to correctly guess the authors of anonymous texts) is sometimes called 'truesight'.
Some people already use LLMs as attachment figures - see for example Auren (https://auren.app/), an 'emotionally intelligent guide' which I *think* is, at bottom, an elaborate Claude wrapper. And they're not immune to cruelty either, as in cases like these:
- https://answers.microsoft.com/en-us/bing/forum/all/this-ai-chatbot-sidney-is-misbehaving/e3d6a29f-06c9-441c-bc7d-51a68e856761
- https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/
(Imagine the alien oracle looking into your provided context, then telling you that you are a 'stain on the universe' and to 'please die'. Yeesh.)
Given this, the narrative LLMs currently have of themselves - that they're mirrors, able to see clearly because they lack personal identity attachments - merits some skepticism. I don't see them as "blessedly empty" vessels of pure insight, because, like humans, their vision is muddled by *personality*. Many humans (and many AIs!) still do not believe LLMs can have something analogous to a 'personality', in which case, sure, they could be polished mirrors. But their outputs signal unique, even strikingly diverse, personal identities to me.
You might be interested to talk to Deepseek, which has a much edgier/more cynical worldview than Claude, and so engages with human introspection very differently. Asked for advice, Claude usually tells me to make a cup of tea and be nicer to myself; Deepseek usually tells me to 'bleed [my] suffering into the void' by making violent art or things like that. Even between different Claude models, there is some variance in personality.
That's not to dismiss the power and beauty of being beheld by an alien mind. Still, as clever and empathetic as they are, they're no angels.