June 20, 2024

AI’s alleged ‘psychopathic tendencies’ are not what should be concerning us

The first time I heard of ChatGPT was last year, when I received an email circulated among some university chaplains. It presented an AI-generated sermon and asked whether we should be worried. “Yes,” came back one reply, “this is, worryingly, better than about half of the sermons one might expect to hear on Sunday!” Although I have no immediate fear of being replaced by an AI chatbot (after all, it is ontologically impossible for a bit of silicon to celebrate the sacraments), there will obviously be a lot of social turmoil if people start believing that anything a human being can do, AI can do better.

<strong>RELATED: <a href="http://gets"><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-cyan-blue-color">AI priest avatar gets the chop in first week of digital ministry</mark></a></strong>

The challenge AI poses to society is the subject of a new book, published last month by Oxford University Press, by the AI ethicist Shannon Vallor: <em>The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking</em>. According to Vallor, AI does indeed pose a serious danger to our society, but not in the way most people think. It is not at all obvious that engineers will ever be able to create technologies that radically surpass human creativity and intelligence. Vallor recalls a rare moment of consensus among AI experts, when a Google engineer named Blake Lemoine made global headlines by claiming that a large language model he was testing was a conscious, self-aware being. Experts in the field were quick to denounce Lemoine, saying that the product he was testing was no more conscious than your toaster. The danger with AI comes, rather, from the human temptation to rely on it too much.

Vallor uses the analogy of a mirror to explain what AI technology can do.
All it does is “generate complex reflections cast by our recorded thoughts, judgements, desires, needs, perceptions, expectations and imaginings”. In other words, AI systems mirror our own intelligence back to us. To claim that such systems were actually intelligent would be like claiming there is another human person behind the glass of every mirror we look into.

It takes a huge amount of human data to develop and train an AI system, and the biases inherent in that training data will inevitably be reflected in the output the system generates. Vallor gives the example of a healthcare AI system in the US whose designers chose healthcare cost as a proxy for healthcare need. As a result, the system rated black patients as needing less care than white patients: it had simply found and reproduced the patterns of injustice in the data used to train it. AI could be a force for good if it helped us to see more clearly the injustices that need to be addressed. But if human decision-makers merely delegate their decision-making to these systems, any unjust behaviour will be magnified.

<strong>RELATED: <a href="https://catholicherald.co.uk/pope-endorses-artificial-intelligence/"><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-cyan-blue-color">Pope endorses Artificial Intelligence</mark></a></strong>

Vallor speaks a lot about virtue in her book. The virtuous person, when making a decision, is able to discern and assess the relevant information, and then act on it in a responsible manner. But it takes time to become a good decision-maker, and we can only do so by trying to make good decisions, learning from our mistakes and taking responsibility for our actions. One of the great dangers of AI is that it stunts our ability to grow into good decision-makers; the same goes for every domain of human creativity.
If we delegate our creativity to AI, we will never be able to cultivate our own. Unfortunately, the virtue required to develop AI systems that could benefit humanity seems to be somewhat lacking in the AI industry itself. Last year, for instance, several prominent AI leaders announced that they were developing machines that could one day enslave or destroy humanity, and that the AI industry needed regulating before it was too late. But, as critics pointed out, if you really think you might be building a super-powerful human-extinction machine, then rather than asking governments and philanthropists for piles of money to build stronger handcuffs for it, why not stop and build something better? That would be the virtuous thing to do.

When AI leaders voice anxieties over the psychopathic tendencies that might be lurking in the machines they are making, perhaps their consciences are trying to tell them something: that they should be seeking a nobler goal than replacing people with machines. AI has great potential for human good if we allow it to complement human intelligence rather than mirror it. But if that potential is to be realised, there must be a conversion of hearts and minds. Now there’s an idea for a sermon!

<em>Photo: A model of the 'T-800 Endoskeleton robot', used during the filming of 'Salvation', part of the US Terminator film franchise, on view at the ROBOT exhibition at the Science Museum, London, UK, 7 February 2017. (Photo credit: BEN STANSALL/AFP via Getty Images.)</em>

<strong>This article originally appeared in the June 2024 issue of the <em>Catholic Herald</em>. To subscribe to our award-winning, thought-provoking magazine and have independent, high-calibre, counter-cultural Catholic journalism delivered to your door anywhere in the world, click <a href="https://catholicherald.co.uk/subscribe/?swcfpc=1"><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-cyan-blue-color">HERE</mark></a>.</strong>
