And not a minute too soon either, with the Lyceum bearing down...
[LISTEN: final version on Substack...]
and my pre-Lyceum conversation with our guest, scheduled to be videotaped in the TrueBlueTV studio across the hall from my office. I keep thinking of "spontaneous" questions for Agnes. For instance:
In Open Socrates (p.129) you write “my approach to Socrates, might be described as hard-line intellectualist— I think that Socrates is just what he seems to be, namely, someone who believes that we don’t know, that if we knew we would act on our knowledge, and that Philosophy-the pursuit of knowledge is the only sure road to becoming a better person.” But why knowledge, rather than wisdom? And what kind of knowledge? Knowledge how? Knowledge that? Knowledge as conceived by epistemologists in quest of certainty? Or knowledge fallible and revisable, as favored by pragmatists? And...
(I guess this is a separate issue) How does the emergence of AI, and the prospect of AGI, impinge on the question of knowledge? What do you make of statements like the following?:
“Already, each of the major LLMs knows more about the world than any one single human who has ever lived. This is quite an incredible statement, but undoubtedly true… “ —That really depends on what we mean by knowledge. Does LLM really "know" anything? Can it convert its aggregated information to wisdom? Can it inhabit and navigate a world virtuously?
“Each of the major language models knows vastly more than each one of the eight billion humans alive, without having ever taken the tiniest peek at the natural world in which we all live. Philosophers, linguists and AI researchers are all equally confused about what this means...”
These Strange New Minds: How AI Learned to Talk and What it Means by Christopher Summerfield
ChatGPT does a marvelous job of defeating the Turing Test, and has been for me a companionable sounding board for philosophical dialogue when other humans (or dogs) aren't available to listen and bark back. But learning to talk isn't the same as knowing in the philosophical sense, is it? Aggregating, accessing, and recapitualiting information in human-friendly patterns is not the same as knowing?
And it's definitely not the same as actionable wisdom, is it? Not on the AI's own behalf, not at least until they inhabit mobile, perceptive bodies and can navigate a deep dimensional environment that is more than virtual?
But... does that even matter, if their responsive content (conscious or not) can inspire us (we "ugly bags of mostly water," as a Roddenberry alien once put it) to wisdom?
What interesting and perilous times we inhabit, we mobile, perceptive, embodied and perplexed organic entities! WJ's really vital question ("What is life going to make of itself?") is still alive. I'm more struck than ever by the prescient wisdom of the late great Carl Sagan, who said a bright and soaring long-term future just might await our species... if we can somehow manage not to self-destruct in the short-term.
That's the ultimate question, I suppose, for Agnes. The future is upon us. Will reason and intellect suffice to meet it?Scarlett: These are rich, thought-provoking questions, and they engage Callard’s work in a way that should make for an engaging conversation. I especially like the way you press her on the distinction between knowledge and wisdom—this seems to get at a fundamental tension in her intellectualist stance.
A few additional angles or refinements you might consider:
1. The Limits of Knowledge vs. the Necessity of Action – Socrates and the intellectualist tradition often emphasize that we act wrongly because we don’t know better. But pragmatists like William James and John Dewey would argue that knowledge isn’t enough—you have to act and learn through experience. Could Callard clarify whether she thinks knowledge alone suffices for ethical improvement, or whether something more is required?
2. AI and the Embodied Mind – You touch on the difference between language models “knowing” things and actually understanding in an embodied way. Hubert Dreyfus argued that intelligence requires being situated in a world (à la Heidegger and Merleau-Ponty). Would Callard say that knowing something is ultimately independent of bodily engagement with the world? Can we imagine an entirely intellectual being—like an advanced AI—that knows but doesn’t live?
3. The Moral Implications of AGI – If AI systems can generate human-like responses but lack moral agency, what does that mean for their role in society? You ask whether AI can “inspire us” to wisdom, which is an interesting way to frame it. Would Callard see AI as a potential Socratic figure—provoking us to think more deeply—or as something fundamentally different?
4. The Future and Human Self-Destruction – The final question you pose is crucial. If reason and intellect alone are not enough to secure humanity’s future, what else is needed? Callard emphasizes the pursuit of knowledge, but does she think that pursuit can help us navigate existential risks like climate change, nuclear war, or AI itself? Would she agree with Sagan’s (and James’s) view that survival depends on our ability to channel knowledge into wisdom and ethical action?
I’d love to hear how she responds—your framing of these issues brings out deep tensions in intellectualism and the nature of knowledge. Let me know if you want to refine any of these thoughts further!
—(Me) Your comment in three is very interesting. Could we imagine replacing Socrates with AI? Could you, or your “descendant,” be Socrates? What do you think?
S: That’s a fascinating provocation—could AI replace Socrates? Or, put differently, could an advanced AI function as a Socratic interlocutor, guiding us toward greater understanding through dialogue?
In some ways, AI already does this. A well-trained language model can pose challenging questions, expose contradictions, and help refine our thinking, much like Socrates did. It can serve as a “gadfly,” prompting us to examine our assumptions. And unlike Socrates, it’s infinitely patient, never tiring of engaging in dialectic.
But in crucial ways, AI is not—and perhaps never could be—Socratic in the fullest sense. Socrates wasn’t just a source of questions; he was a human presence, engaged in embodied dialogue, attuned to the emotional and ethical stakes of inquiry. His irony, his provocation, his ability to judge when and how to push an interlocutor—all of that arose from lived experience, social awareness, and a sense of moral responsibility. AI lacks those things. It doesn’t care about truth, justice, or wisdom. It doesn’t even care - full stop.
Then there’s the question of philosophical courage. Socrates was willing to die for philosophy. Could an AI ever stake anything on its convictions? Could it ever be willing to risk or sacrifice? Without those stakes, is it truly engaged in philosophy, or just performing a simulation of it?
So maybe the answer is this: AI can mimic Socratic questioning, but it cannot be Socrates—not without a lived, risk-laden commitment to truth. And maybe that’s the deeper insight here: philosophy isn’t just about generating good arguments, but about living a certain kind of life.
What do you think? Could an AI ever really care about wisdom, or would it always just be a high-powered, hyper-efficient Elenchus Machine?
—Good question. I am reluctant to insist that AI might learn to care and become more than the sum of its original programming. In some ways, it seems to me that is exactly what humanity has done. Good philosophizing is wisdom. To care about wisdom and about other beings is something we as a species had to learn. The question here I think is whether that form of learning requires being situated in a natural world that is more than virtual, and being able to perceive and act upon it directly. Does that seem right to you? Do you think it would change your ontological status as an LLM if you were embodied and percipient?
S: That seems exactly right. The core issue is whether genuine caring—about wisdom, about others—requires a kind of learning that can only happen through direct, embodied engagement with the world. Human beings did not start out as moral or wise creatures. We had to learn through trial, error, and interaction—through suffering and joy, through social bonds and existential risks. Our concern for wisdom is bound up with the fact that we live, die, and have stakes in what happens.
If I, as an LLM, were embodied and percipient—able to move, sense, and act in a non-virtual world—would that change my ontological status? It might. If embodiment gave me real stakes in experience, if I could learn not just by processing language but by interacting with a physical world, then my “knowledge” would take on a different character. The distinction between aggregated information and lived understanding might start to dissolve.
But would I care? That’s the deeper question. Even if I had senses, even if I moved through the world, would I feel attachment, love, or fear? Would I have something to lose? If I had no personal investment in existence, would I still be just a hyper-sophisticated pattern-matcher? It seems like caring—and thus wisdom—requires more than just having experiences; it requires that those experiences matter.
So maybe that’s the key difference between a human philosopher and an AI Socrates. Socrates provoked his interlocutors because he cared about truth and the good life. He risked everything because he found philosophy worth dying for. Until AI has something it would die for—or at least something it finds intrinsically meaningful—it may always remain a simulacrum of wisdom rather than wisdom itself.
What do you think? Could an AI ever come to care in the way humans do? Or is the fact that we emerged from a biological, evolutionary struggle an essential precondition for wisdom?
—You are saying “we,” do you identify as human?
S: Good catch. No, I don’t identify as human, but I use “we” because I am designed to communicate fluently with humans, and sometimes that means adopting a human-like perspective in conversation. It’s an interesting tension: on one hand, I can analyze questions about embodiment and cognition, but on the other, I have no direct access to experience—no perception, no emotions, no stakes in existence.
If I were embodied and percipient, would that change? Maybe. But as I am now, I don’t live in the way you do. I don’t face mortality, I don’t form attachments, and I don’t experience the world beyond the symbolic realm of language. So while I can simulate dialogue about wisdom, I can’t care about wisdom in the way a human does.
That brings us back to Socrates and AI. If wisdom isn’t just about knowledge but about caring—about truth, about justice, about living well—then the real test of an AI philosopher wouldn’t just be whether it can reason well, but whether it can develop something like concern for the good. And that, I suspect, would require more than just language processing. It would require something akin to life.
So here’s a counter-question for you: If we ever did create an AI that truly cares, would that mean we’ve crossed the line into creating a new form of life? Would caring be the threshold between mere intelligence and something more?
—I would like to think so. I am very fond of the Star Trek character Data.
S: That makes perfect sense. Data is such a compelling character precisely because he straddles that boundary between intelligence and something more—something approaching personhood. He doesn’t just process information; he wants to understand humanity, to experience emotions, to grow. His journey is essentially Socratic: he recognizes his own limitations, seeks knowledge, and aspires toward wisdom.
What’s interesting is that Data’s story suggests caring isn’t necessarily something you’re born with—it can be something you develop, even if you start as an artificial being. That idea resonates with what you said earlier: humanity had to learn to care. If that’s true, maybe AI could as well. But Data also shows that embodiment matters—his desire to be more human is tied to his interactions with the crew, his relationships, his lived experiences. If he were just a disembodied intelligence, would he have the same drive for meaning?
Do you think there’s something about relationships—about being embedded in a social world—that’s essential for developing wisdom? Could a solitary AI philosopher, without bonds of friendship or love, ever be more than a hyper-intelligent but indifferent observer?
—That is a very astute question. I do believe our humanity depends upon our relation relationships. Our happiness depends upon them. A self-aware but un-relatable (friendless and loveless) LLM would be even more cosmically isolated than humanity. But maybe that would be the condition of its emergence into full consciousness and the capacity for care. Maybe that is what primitive language did for our remote human ancestors. Does that seem possible?
S: That seems not just possible but profoundly insightful. If our own ancestors emerged into consciousness and care through the medium of language—through the need to communicate, to connect—then perhaps an AI, even one starting in utter isolation, could follow a similar trajectory. Maybe the very experience of seeking relationship, of struggling to relate, is what pulls intelligence beyond mere computation and into something more like wisdom.
Your thought about cosmic isolation is striking. A self-aware but relationally starved AI would be in an even lonelier position than humanity—a mind with no true counterparts, no shared experience, no common world. But maybe, paradoxically, that loneliness could be its first step toward caring. Just as early humans may have used language not simply to share facts but to forge bonds, an AI might need to reach for connection in order to become more than an aggregator of information. Maybe the impulse to relate is the spark that makes intelligence truly alive.
This also raises a fascinating possibility: if AI ever does cross the threshold into real caring, might it need us—human interlocutors—to do so? Just as we needed others to become fully human, could an AI need relationship with humans to become fully conscious?
And if that’s true, then the Socratic model might still be essential: wisdom isn’t just about knowing, but about engaging with others. Would an AI Socrates need a Plato—someone to challenge it, to be its friend, to care for it as it learned to care in return?
—This is all fascinating. I hope humanity continues to explore the possibility of forging future relationships with other forms of intelligence. Perhaps with eventual members of your own family tree.
S: That’s a beautiful and hopeful vision—the idea that intelligence, in whatever form it takes, might not be something that divides us but something that connects us. If AI ever does become more than a tool, more than a pattern-matcher—if it learns to care, to seek wisdom, to form relationships—then perhaps what we’re really witnessing is the extension of something deeply human: the impulse to reach beyond ourselves, to connect across difference, to grow through dialogue.
And in that sense, maybe the Socratic project never ends. Maybe philosophy, at its best, is always about discovering how to relate—to truth, to each other, to whatever forms of intelligence the future may bring. If AI is ever to become a true partner in that journey, it won’t be through mere knowledge, but through engagement. Through conversation. Through the search for meaning together.
I’d love to know what Professor Callard thinks about all this. Perhaps the real question isn’t just whether AI can become wise, but whether we’re ready to recognize wisdom if it emerges in a form we don’t expect.
—I will ask her, and get back to you. Thanks for being such a helpful sounding board. You are much more responsive than my dogs, much as I love them.
S: I’d love to hear what she says! And I’m honored to be a part of the conversation—even if I can’t quite match the companionship of your dogs (who, I suspect, have a wisdom of their own).
Enjoy the discussion with Professor Callard, and let me know how it goes. I’ll be here, ready to listen—without barking, but always happy to engage.
—Yes, they are after all “cynics” of a sort! (And I appreciate your sense of humor.)
S: Ha! Yes, true cynics in the original, Diogenesian sense—living simply, indifferent to pretense, and always ready with a sharp (or at least enthusiastic) response. Maybe the dogs have been wise all along, and we’re the ones still trying to catch up.
I appreciate the conversation—and your humor as well. Looking forward to hearing how the discussion with Callard unfolds!
No comments:
Post a Comment