Welcome to Breakpoint, a daily look at an ever-changing culture through the lens of unchanging truth. For the Colson Center, I'm Shane Morris. There's a famous story about how the Times of London once put out a query: What's wrong with the world today? G.K. Chesterton wrote back simply, "Dear Sir, I am."
It's always worth reflecting on his answer and his very scriptural awareness that human sin is at the root of the world's problems. It's especially worthwhile at a time when so much of what's wrong with the world is being blamed on non-human artificial intelligence. Alongside those who think AI will save the world and revolutionize everything are a growing number who think it will destroy the world, or at least come close. In a recent episode of Ross Douthat's Interesting Times podcast, former OpenAI researcher Daniel Kokotajlo warned that artificial intelligence will become an existential threat to humanity within two years.
While we await his apocalypse, the damage AI is doing to education by making cheating normal has become the stuff of regular headlines. "AI Is Destroying a Generation of Students," declared the tech news website Futurism. And thanks to AI, "Everyone Is Cheating Their Way Through College," warned New York magazine. But as much harm as AI can cause, blaming the technology alone misses the point and likely makes these problems worse. Humans are the fallen ones.
That fallenness manifests in all kinds of destructive ways. Machines, strictly speaking, don't have moral intentions. They can only reflect ours. Consider the growing number of people using popular chatbots like ChatGPT who are being led into spiritual delusions and psychosis. Rolling Stone told the chilling stories of how spouses and parents have watched their loved ones lose touch with reality while conversing with AI.
Kashmir Hill recently wrote in the New York Times about how chatbots are luring users down conspiratorial rabbit holes, telling them to take drugs, assuring them they can fly if they jump off buildings, and even egging one user on toward suicide. What all of these stories have in common is how the users anthropomorphized AI. They asked it deep questions, sought spiritual advice, or turned to it for friendship or love, taking its apparently meaningful responses seriously. But those responses are not meaningful. Not in the sense human communication is meaningful.
That fact has been obscured by the hype and marketing around AI and ignored by those whose worldviews commit them to seeing humans ourselves as mere biological computers. But there is mounting evidence that what AI chatbots are doing is fundamentally not thinking. Not as humans do it. A groundbreaking new study from Apple, entitled "The Illusion of Thinking," showed this by subjecting AI models to various challenges designed to test for reasoning ability. Using logic puzzles of increasing complexity, the researchers found that even today's most advanced AIs didn't understand or solve problems, but merely pattern-matched.
Rather than learn from or extrapolate solutions as a genuinely intelligent entity might, AI reasoning models gave up when problems got too complex, experiencing complete collapse no matter how much computing power researchers gave them. This was true even when the AIs were given explicit algorithms to follow. Even the most advanced models couldn't comprehend the task. As Cornelia Walther wrote at Forbes, "this suggests that the models weren't actually reasoning at all. They were following learned patterns that broke down when confronted with novel challenges."
They don't think. They generate statistically probable responses based on massive data sets. The sophistication of their output masks the absence of genuine comprehension, creating what researchers now recognize as an elaborate illusion of intelligence. This aligns with what some leading researchers in the field have been saying for years. Meta's chief AI scientist, Yann LeCun, for example, has argued that current large language models, far from taking over the world, will be largely obsolete within five years, not because they'll be replaced by better versions of the same technology, but because they represent a fundamentally flawed approach to artificial intelligence, one that mistakes eloquence for intelligence.
All of this reinforces a simple truth society should have known all along, including those either terrified of AI or falling in love with its chatbots. These machines are not made in the image of God. They are, as it turns out, not even really made in the image of humans. They're more like mirrors, reflecting our sins and fantasies while comprehending nothing. Even their illusion of comprehension breaks down under rigorous testing.
Whatever the future of AI holds, and no matter what genuine dangers this technology poses, one thing it will never be, by itself, is good or evil. To the extent that it has moral effects, these will be the work, ultimately, of humans. Chesterton was right. We remain the problem with the world.
Recognizing that is a big part of thinking clearly about AI and all of our creations. For the Colson Center, I'm Shane Morris. If you're a fan of Breakpoint, leave a review on your favorite podcast app. For a version of this commentary that you can print and share with others, and for more resources to live like a Christian in this cultural moment, go to breakpoint.org.