Transcript for Episode 61
Gretchen: My guest today is Dr. John Wyatt. He's an emeritus professor of neonatal pediatrics, ethics, and perinatology at University College, London. He's also a senior researcher at the Faraday Institute for Science and Religion at Cambridge, where he explores new ethical, philosophical, and theological challenges caused by advances in medical science and technology.
He says he's fascinated by the rapid advances in AI and robotics, and the interface between cutting edge science and the Christian faith. As it turns out, I am too, so I'm thrilled that he's on the show today.
John Wyatt, welcome to the podcast.
John: Thanks so much, Gretchen, it's good to be here.
Gretchen: Your career path could be described as a journey from biology to technology. Give us a brief professional history of John Wyatt: how—and maybe more importantly, why—did a medical doctor end up doing robotics research?
John: I started off in physics before I changed to medicine, so I've always had a kind of nerdish interest in physics and technology. But then I had a spiritual crisis as a young student and changed from physics to medicine, feeling strongly a sense of call to work as a physician. I trained in London; I was drawn to pediatrics, trained in general pediatrics, and then specialized in neonatology, the care of newborn babies. I went into the field primarily because I loved children, and it was a very exciting, rapidly developing, science-based kind of medicine.
It was really only after I was working in the field that I was increasingly aware of the fact that I was in the midst of an ethical maelstrom, with all kinds of issues being caused by advancing biomedical technology and our ability to keep babies alive. We were doing sophisticated research into identifying brain damage in babies; and then the question was, how do we use this information? And also issues concerning abortion and prenatal screening. So a wide range of issues.
And so increasingly I started to move away from the frontline clinical work, and become more and more involved in ethics. As a Christian, I'd been very heavily influenced by John Stott, who happened to be the rector of the church I was attending—All Souls Church in central London. He became a personal friend and a kind of spiritual father to me, as he was to so many other younger people. I really took on his vision of what he used to call double listening—of Christians engaging and listening to the issues of the modern world, and then trying to build a bridge between the real issues which were being confronted in the modern world and the historic Christian faith.
What I have increasingly realized is that each time technology advances, it raises some very fundamental questions. In particular it raises the question of what it means to be human, and also what kind of society are we building for the future? I have seen that repeatedly in the biomedical field, but I've become increasingly convinced that the next front which is being opened up in addressing these questions is AI and robotics. It is raising, again, these age-old questions. What does it mean to be human? What kind of human society are we building for the future?
Gretchen Huizinga: You mentioned a spiritual crisis in physics; I had one too, but it was only after one course. [laughter] Actually I've interviewed quite a few scientists who started in physics and ended up elsewhere, so I wonder if there's a crisis in that particular discipline anyway.
John Wyatt: It's interesting, isn't it? Because once you start doing university level physics and you realize that there are some people who can just think in interdimensional space—it's just all there in their heads. And I really am not that kind of person.
Gretchen Huizinga: “I'll be a doctor instead.”
John Wyatt: That's right.
Gretchen Huizinga: That's just fascinating.
You co-edited a book called The Robot Will See You Now—which is an interesting blend of medicine and robotics in the title. It's a collection of essays on artificial intelligence and the Christian faith. Why did we need this book?
John Wyatt: As I was on this journey of becoming persuaded that AI was a really important issue to address from a Christian perspective, I've had a close link with the Faraday Institute, as you mentioned earlier, which is an institute of science and religion based in Cambridge. And I persuaded them to support a research project specifically in AI. We managed to get funding from the Templeton Foundation as part of a wider program grant. So I had the privilege of three years of working with Peter Robinson, who was professor of computing in Cambridge, and arranging a series of workshops and conversations with technologists, philosophers, theologians, and so on, just to think through the question: what is the impact of nearly-human machines on our understanding of what it means to be human?
The book is really a product of that journey. Many of the authors were people who were involved in the workshops. And when I was editing this book, I was very determined that it should not be just a standard, academic, multi-author volume that costs $150 and sits on a university library shelf, and no one could ever afford to buy it. I was much more concerned with trying to produce a volume which was academically credible, but which was aimed at the informed lay person and didn't assume any technical knowledge. We managed to get a remarkable combination of authors from a variety of continents and different backgrounds. It's an early contribution to the field, but I hope it highlights many of the questions which I think are going to become important as we go into this next phase.
Gretchen: Let's get into some of those questions. You have a recent article called “Artificial Intelligence and Simulated Relationships,” which is basically an extension of The Robot Will See You Now. I kind of think forward and say, how will it be in the future when we all get replaced by robots? In the article, you discuss an array of what we now call bots—chat bots, care bots, mental health bots (which I think some people are calling “woebots” for depression and anxiety), and sex bots.
Let's talk a little bit about the arguments in favor of these AI applications, because they're sold to us as meeting a need or solving a problem. And then I want you to answer the question you raised in the article, which goes like this: could synthetic relationships with AI somehow interfere with the messy process of real human interactions, and with our understanding of what it means to be a person?
John Wyatt: It all goes back to Eliza, doesn't it? Joseph Weizenbaum created this very, very simple program, and then I think he said subsequently he was quite horrified with the way that people started taking this very seriously. The anecdote was that his secretary started having conversations with Eliza, and then asked him to leave the room, because this was a personal interaction. I think ever since, it's become apparent that human beings have this enormous tendency to anthropomorphize technology which appears to be intelligent, which appears to simulate human speech or human relationship.
I see this as an unstoppable trend. I think it has only just begun. The commercial potential, of course, is enormous, and has been identified with Google Home, Amazon's Alexa, Siri, and so on. And of course there's huge investment going on to make these speech-producing bots more and more effective, to destroy our sense that this is something artificial and create the illusion that we're actually talking to some kind of intelligent being.
I'm particularly interested in and concerned about the effect on children. What happens when, from your earliest memories, you are engaging with these bots? I've already heard, anecdotally, of a seven year old saying to the family, “we have to take Alexa with us on holiday because she's part of the family.” There's fascinating research suggesting that children in particular almost seem to be developing a third category of being. Most of us from very early on manage to divide the world into beings that are living and beings that are not living; intellectually, we have a very clear distinction between these two. But there is evidence that children who are exposed from an early age create a third ontological category, and it's somewhere halfway between: this thing is neither fully living, nor is it fully non-living. It is a bit living. It is as though it were living. And I think we just don't know what the long term consequences are.
But we are already seeing people who say, actually, I much prefer talking to my bot than talking to human beings. Because my bot is always friendly; is always pleased to see me; is always positive and encouraging; is never tired; never has its own interests; never tells me to push off; never tells me that I'm a prat. It's just so much nicer than talking to a human being.
Gretchen Huizinga: I want to drill in there, because what you've just described is what I have seen as the selling point for sex bots. As a woman, it's like—whatever, we could go down all kinds of creepy trails with that.
John Wyatt: [laughter] Is this an X-rated show?
Gretchen Huizinga: No, it's not; but I mean, the pitch for it is that it would be for people who are in prisons, the military, et cetera.
John Wyatt: Deprived of female companionship.
Gretchen Huizinga: And we've got an entire category called incels, which is involuntarily celibate, and that has more to do with attractiveness and geekiness than it does with “I have to go to the military for a period of time.”
But backing up to the “it's always there, always kind, always responding in the way we want,” whether it's for care, mental health, or sex. What is that doing to us?
John Wyatt: There’s real concern that it's actually changing our understanding of what relationships are for, what a good relationship looks like, of being concerned for the other, and so on. And it's producing a very transactional kind of relationship.
Gretchen Huizinga: Take that same thought thread and go back to the care bot and mental health bot. This is a question that I haven't asked before, but is something that's been on my heart and mind: if we have, for example, the mental health bot, the argument in favor is that it's always available. So if there's someone who's struggling with depression or maybe suicidal ideation, they could contact this mental health bot that would walk them through an Eliza-like experience of therapy at midnight, when a caregiver is not available. Is there any value in developing an inner strength, or an ability to wait for something? I wouldn't say if someone's completely suicidal, we should say this is something you should do; but if we have a society that always gets our needs met immediately—a servant-centric society—where does the human idea of suffering through something or developing spiritual strength or emotional strength go?
John Wyatt: I think they're really good questions. I personally would think it's helpful to draw a distinction between programs which are specifically designed for a therapeutic purpose and bots which are designed for “normal” relationships, or as someone to be friendly with when you are lonely. I think they're two different cases, and we need to think about them separately. Because I do think there is a place for these kinds of tools to be used by professionals for therapeutic purposes, and there's no doubt that they can be very powerful. One example you've given is of mental health applications, which are providing some kind of replacement or additional support for cognitive behavioral therapy or talking therapies or something else like that. In that context—professionally supervised, given for specific therapeutic purposes—I think these are very powerful tools.
Another completely different example is the use of robots with children with autistic spectrum disorder, where they can be a very powerful means of helping children learn how to read faces, how to engage, and how to become more socially competent. So I think there's definitely a place within the therapeutic armamentarium. But I would want to make a distinction between that and the use as a replacement for human friendship, because I think it's pretty obvious that there's a dangerous path there of ultimately withdrawing from human relationships.
Gretchen Huizinga: What you just said fascinated me, that someone on the autism spectrum could use a robot to help them distinguish human facial expressions—the face, the eyes, the voice—from a machine. That's a whole separate question, but it actually leads into another phrase that comes up in your work: “analogous personhood.”
John Wyatt: It was actually Nigel Cameron, and I think it was in a conversation with him that he used this phrase. I found it extremely interesting and quite helpful, because it's the suggestion that an analogous person is an entity that we know is not a person, but it is capable of playing the role of a person in certain specific contexts.
It's clearly controversial, and I know that some people have resisted that particular phrase; but I am interested in the idea, because I think that there is something analogous between what a bot does and the way a person behaves. As long as we understand it's analogous, but it isn't the same thing, then there is some value in that formulation.
Gretchen Huizinga: That raises the question I think you even ask: is it unethical even to design robots that resemble humans, because of the danger of—as you said—bringing children into this space? You and I are of a certain age, and we remember a completely analog world; my nephew's son was making pinching fingers on a magazine. There was a picture, and he was trying to enlarge it with his fingers, and it spoke to the idea that a magazine is an iPad that doesn't work. So again, there's this ethically murky area where we are designing robots to be analogous persons—is that ethical?
John Wyatt: I've often been challenged by people who say, yeah, but isn't that what children have always been doing with a teddy bear or with a doll? And my answer to that is, well, yes and no. It probably is true that in some ways a teddy bear is an analogous person, but every child with anything like normal cognitive development knows that this is not a real bear. And however much the little girl puts the doll into the doll's house and talks to it and has dinner with it and all the rest, the child is completely aware that this is not a real baby, because as soon as you confront the child with a real baby, their behavior is completely different. The problem with the new, sophisticated simulacra is precisely that confusion. Is the child really certain that this is not living? Or is it living, and how should I treat it? It's that blurring which I feel really concerned about.
I like the suggestion which has been put forward by several people—it's sometimes been called a Turing red flag—which is the idea that whenever we have an interaction with some form of AI program or robot where we are at risk of confusing this for a human person, that by law, the device has to say, “I need to remind you that I'm not a human being.” It's almost like the terms and conditions. In other words, we break the spell; we break the illusion. The machine is designed to continually break the illusion so that you cannot confuse this.
Gretchen Huizinga: The fourth wall.
John Wyatt: Interestingly, the manufacturers hate it, because everything is about building the illusion and preserving the illusion; to have to be constantly destroying the illusion, they would see as entirely counterproductive.
But my perspective is above all, protecting the vulnerable from abuse and manipulation; and if in order to do that, we have to continually break the illusion, then I think that's probably a good response.
Gretchen Huizinga: Everything you're saying is bringing up new questions, and one of them, as you talk about these bots that resemble humans, is that there's another layer of data collection going on. When you talk to your teddy bear, it's a stuffed animal, and there's no software behind it. There's no connection to a database. Where I see some of these applications going is commercial, as you've explained before. What value does a company get by making a bot that talks to a child, where those eyes are actually recording or hearing—and it doesn't just go into the mechanical teddy bear, it goes to a database at a company? That's where I end up: is this ethical?
John Wyatt: Absolutely right. And this is where it's so deceptive, isn't it? Because on the surface, this seems so harmless and toy-like and limited in its capability. But the reality, of course, is that via the cloud, this has access to obscene amounts of data and information, and it could be being powered by some supercomputer. All of that is entirely opaque, and even impossible to control. So these are very real concerns.
It seems to me that at the moment, the whole regulatory framework is way behind; these commercial companies are ahead of the game. And one of the really important things therefore is to try to restore the balance and to insist on far greater transparency. Most of us don't want to be tracked. Most of us don't want conversations to be recorded. Most of us don't want our data to be filtered off and analyzed. And if we are given the choice, we will say we don't want it to happen. So I think we need to be given the choice.
Gretchen Huizinga: But then you have the category of the child who doesn't have a file folder for that; or a mentally unstable person who maybe does have the file folder, but is disturbed enough not to care at the moment. Those two buckets of data from a mental health perspective and a minor perspective are the ones that bug me the most.
John Wyatt: Absolutely. And from a Christian perspective, there's this recurring theme throughout the scriptures of the vulnerable in society—widows, orphans, and immigrants—and the protection of the vulnerable is very much at the heart of a Christian understanding of society. That really, to me, comes before things such as wealth creation and other things—all of which are potentially good in themselves, but they're outweighed by the importance of protecting and defending the vulnerable.
Gretchen Huizinga: This whole idea of deception is at the core of this, the simulacra. That leads into a discussion that we could have about the false image or the idol. In an article, you address what the idol is capable of doing to humans, or human worshipers. What parallels do you draw between our human tendency to make idolatrous artifacts, as it were, in the image of God, and what's happening more recently with AI, where we're making idolatrous artifacts in the image of humans?
John Wyatt: I certainly think the category of the idol is a very powerful Biblical category, which needs to be thoughtfully reflected on and applied. And of course the Old Testament—I think of that passage in Isaiah, where Isaiah lampoons the idolaters, because they take a lump of wood and half of it they burn to keep themselves warm, because it's just wood; and then they take the other half and they turn it into an idol and then they bow down and worship it. How mad can you be?
That's in a way exactly what we are starting to see in many forms of AI, in that part of us knows it's simply ones and noughts and bits and some circuits whizzing around; and yet just recently we had the case of the Google engineer who felt that this advanced program was becoming sentient and needed to be treated with respect.
The point about the Biblical image of the idol is that although the idol is ultimately nothing—it is simply wood, it is simply material—it actually exerts a strange, deceptive power on the worshiper. And it dominates. Something that is ultimately powerless paradoxically starts to dominate our lives. Applied to advanced programming techniques, I really think there is something in that: that these are ultimately powerless idols in their very nature, and yet they can exert the most extraordinary power and mastery over us in a very strange way.
Incidentally, there's this quite common phrase that God made human beings in his own image and now we are making machines in our image; I have to say, I find that very unhelpful. I think it doesn't reflect the richness of what the Bible means by being made in the image of God. It's only God who is capable of making anything in his image, because the image is a profound reflection in a completely different sense.
God is spirit; and yet he chooses to call into existence matter, and then mold that matter to reflect His own being. That is a profound, wonderful thing. There is no way that human beings can ever do anything like that. We are simply incapable, as beings made out of dust, of creating anything in our image in that sense. So I think that's probably not a very helpful way of thinking about it. Nonetheless, because these are artifacts created by human beings, and because human beings are fallen, we should not be surprised that the artifacts we create are contaminated by our fallenness. And that includes AI.
Gretchen Huizinga: As you unpack that, I see that. But I also see our human desire to try to do what God did, and to try to be what God was. So my framework for it isn't that we're actually doing what God did, but it's the whole “run at the hill” thing, especially for people who take God out of the equation. If you order intelligence in such a way that you have divine intelligence, human intelligence, then machine intelligence, you've got a complete anthropology; take God out of it and you don't; it's just us. So why wouldn't we take a run at that hill?
John Wyatt: Okay. I can see what you're saying.
Gretchen Huizinga: A block of wood, like you mentioned—I used that in my dissertation as an example of how stupid we are. And there are other areas in scripture that talk about that as well. But AI gives us a particularly oracular instantiation. The wood isn't going to talk back to me, but the AI might.
John Wyatt: I think that is something which is profoundly interesting and mysterious, because the whole point in the biblical narrative is that the idols couldn't speak, and they couldn't move, and they couldn't breathe. Part of the lampooning of the idol and the idolater is that it's so transparently obvious that these idols have no innate power. But what we have created is idols that speak back to us, and idols that can move, and idols that seem to have all the appearance of being human. That is deeply troubling and interesting; it indicates why these modern idols are so much more powerful, and so much more deceptive.
Gretchen Huizinga: You and I have talked a couple of times about spiritual powers, both good and bad; and in St. Paul's letter to the Ephesian Christians, he reminds them, our battle is not with flesh and blood, but with rulers, authorities, cosmic powers, and spiritual forces of evil. Some people have suggested that those spiritual powers find their way into the tools we make. I have to ask you, John, do you think AI is becoming one of those powers?
John Wyatt: I certainly think we have to be extremely cautious and mustn't adopt a simplistic metaphysics of demons investing computers; because actually, the metaphysics of Paul and the New Testament writers reflects a highly sophisticated and thoughtful understanding. I've been quite influenced by the work of Walter Wink, who analyzed in great detail the use of the language of power in the New Testament. His basic conclusion is that in New Testament thought, all powers exerted in the human sphere had both a material, human component and an immaterial, hidden spiritual component. Often modern commentators, when they read this language of powers, say, well, it's not clear. Is Paul here talking about the earthly powers, the magistrates, the rulers; or is he talking about hidden spiritual forces? What Wink says is that the whole point is that in Pauline thinking, the two are inextricably intertwined. Whenever there is earthly authority, there is also hidden spiritual authority.
I think that's a very interesting lens through which to think about what's going on here now, because it is extraordinary and unexpected the way that the use of information technology seems to unleash the most disruptive evil forces. I suspect that the ethos out of which many of the Silicon Valley companies came was a kind of New Age optimism: information wants to be free, it's all about open source software, we just share everything; it's peace, truth, love, man. Of course it always had this other element—the military-industrial complex was very interested in AI as well.
And then what happened is that commercial forces could suddenly monetize all this stuff; but that kind of peace, truth, love is still there. Mark Zuckerberg says, I want to connect every human being on the planet; what could possibly go wrong? And then he just connects people across the planet, and to everyone's astonishment, what happens is there's an outpouring of evil, of trolling, of abuse, of disinformation, of hate, of violence. People die. There are riots. And all he's done is connected people together.
This seems to happen whenever there is some kind of advance in technology: we just see further examples of evil and malevolence and corruption and hatred. The fascinating thing is that if you take a purely physicalist understanding of the universe—ultimately there's nothing there but atoms and the laws of physics and subatomic particles—then there is really no category for evil. I think many people working in the tech industry and in Silicon Valley and so on simply don't have a category for evil. They have a category for programming errors and bad technology; and they have a category for sort of freak accidents and things going wrong. But the idea of malevolence is something quite strange.
It's interesting that more recently they developed this concept of the bad actor. The idea is that the majority of people across the world are normal, decent human beings like us, and they would never do anything evil; but then there are a few bad actors, and they've got some kind of wiring wrong in their brains and so on. But it's a very simplistic and naive understanding of the nature of evil.
One of the things that is often said about orthodox, Biblical Christian theology is that it always takes evil very seriously. It always treats evil with respect. It sees it as a powerful destructive force that has to be reckoned with and even anticipated. So I don't think Christians were particularly surprised that connecting people across the world led to a vast amount of evil, because of course in Christian thinking there's a much greater awareness that the human heart itself—this hidden reality within us—is contaminated by evil.
Gretchen Huizinga: So maybe the answer to the question of whether AI is becoming one of those powers is, well, of course. It always would've been and is, because everything that's in this fallen state and under the authorities of this dark world is part of that fabric.
John Wyatt: Another philosopher who's very much influenced my own thinking is George Grant, a Canadian philosopher who wrote very profoundly in the early days of computer technology about the nature of technology. He defined technology as an interpenetration of knowing and making—techno-logos—oriented towards the mastery of human and non-human nature. So ultimately it was oriented towards mastery. He saw cybernetics as a particular example of that; it’s all about the helmsman, the one who is mastering the direction.
Whenever we have something which is dedicated to mastery and control and power, we shouldn't be surprised that in a fallen world, it unleashes remarkable levels of evil.
Gretchen Huizinga: Right. And that's part of a chapter that you've been working on for a book, talking about the mastery of human and non-human nature. You've written about this idea of resurrection and moral order, saying that when Christ was resurrected, God proclaimed his vote of confidence, his final yes, to original-model humanity. It's a beautiful sentence and concept there. In an age of cynicism and the software model of constant upgrades, what are we saying yes to with AI and robotics?
John Wyatt: What I’m trying to do there is simplify and popularize the thought of Oliver O'Donovan, a British theologian who wrote a very profound book, Resurrection and Moral Order. He saw the resurrection as playing a profound role in confirming the original creation order. He says, before the incarnation and the resurrection, having seen the litany of everything that's happened in the Old Testament, it would be easy to conclude that God's whole plan of creating this being made in his own image, but made out of dust, has gone horrendously wrong. The best thing to do is to wipe the slate clean and start again with a new beginning.
But when Christ then is born as a completely normal human being, is born as a baby and lives a normal life, and then dies and is risen as a physical, touchable, recognizable human being—as he says, it's the final vindication of the original humanity as in Adam and Eve, that God hasn't given up on this concept. In fact, he has redeemed it and transformed it into a kind of homo sapiens 2.0. What we see in Jesus is our first glimpse of what homo sapiens 2.0 might be.
Drawing from that, I think we don't need to have a more advanced, sophisticated kind of humanity, to put it in simplistic terms. If this kind of humanity is good enough for Jesus, then it's good enough for me. I don't need to crave a robotically or digitally enhanced kind of humanity, because the original form of humanity has been vindicated in the incarnate Christ.
Gretchen Huizinga: That's absolutely beautiful. And it flies in the face of what's happening in “Silicon Valley” (and I use the term as representative for technology and AI and robotics). How could we as Christians push back and say actually, a) we don't need an upgrade; that's already been accomplished in the resurrection of Jesus; but b) then what do we do with this push towards the upgrading of the human regardless of our beliefs?
John Wyatt: I think this is a huge challenge for us as we come into this age, because it seems to me that transhumanism is an idea, a philosophy, whose time has arrived. In many ways we're already seeing low-tech transhumanism—cosmetic surgery, gender-defining surgery, gene doping, recreational pharmaceuticals—all of these are kinds of low-tech transhumanism already. The idea has taken hold that we need to upgrade our bodies, and I think these are just very low-tech versions of what is to come. What is to come is a much higher-tech, more sophisticated version of improving ourselves. I think it's going to be a challenge for everybody, but particularly for Christians, to say, am I going to go down this route because this is where the smart money is? Or am I going to say, actually, I love being human in the way that I've been made?
I would argue that this desperate sense that there must be more to human existence than this is driven by a sense of dissatisfaction. I am just not satisfied. I have all these dreams of what could be, and then I wake up in cold reality and it's yuck. I am not satisfied. I think that is an example of what C.S. Lewis calls the inconsolable longing. It is a God-implanted longing for something more, something greater; but it then gets redirected into, well, let's do it through technology.
Gretchen Huizinga: One of the things we talked about before was that we build artifacts as idols, and they're usually things—so maybe it's AI, maybe it's a piece of wood or whatever. But you talk about this idea of fashioning the future as an artifact. I've heard a lot of scientists refer to this as creating a preferred future; better living through chemistry, or whatever version of it—that particular technology is going to help us be better.
I'm going to ask you this in a funny way. What do the theologian Oliver O'Donovan and the rock band Pink Floyd have to say about this?
John Wyatt: [laughter] That well-known philosophical source, Pink Floyd. Just to backtrack a bit: the grand narrative of Christianity was pretty much unique in the ancient world, because most of the ancient religions, including the Eastern religions, had an understanding of history that was almost entirely cyclical. There were these great cycles of things improving and then degrading, and we went round and round and round and round. Christianity, building on the Old Testament, the Hebrew scriptures, was always progressive. It always saw that there was an arc of history. There was a line of history, it was one-directional, and it was a drama. It was a drama created by the great dramatist, and we were bit players in the drama. And that has been, throughout most of the history of Christianity, the dominant understanding of history.
But then increasingly, from the 1500s onwards—and there are figures like Francis Bacon, who are among the founders of this idea—we get this idea that human beings can grasp control of the levers. We don't just leave it up to God to direct history and to his Providence, but actually we are creating history. This is where Pink Floyd comes in—everything is a brick in the wall—this idea that every act I do is creating the future. The future is an artifact, a human product, which is created by human choices. All of us together have this terrible responsibility, a crushing responsibility, to build the future. And if we aren't very careful in how we choose which bricks we place there, the future could be catastrophic. The future may well be a complete and utter disaster simply because we failed to build the wall well.
This is, in some ways, intellectually, a kind of grand heresy; it's a Christian heresy where human choice and human will have replaced the sovereignty of God and the providential wisdom of God over history. But it's a very powerful image. Humans come of age; we've come out of the swamp and by some enormous fluke, we've now worked out how we can grasp control over the future, and now we're making it. You see this particularly in these sort of big figures like Elon Musk, who are so dominated by science fiction tropes of intelligences. I have the privilege of having a conversation tomorrow with Martin Rees, who's the Astronomer Royal in the UK; and he has written about this, and he says that our paltry human intelligences are merely the premonitions of the far deeper thinking and cerebration of the non-human intellects of the future.
It's this idea that we are creating the future; but there is a sort of crushing duty attached to this, which we hear a lot in the climate sphere—and I'm not by any means diminishing the reality of climate change; but it's this sense that unless we sort ourselves out, we are destroying the world, it's a catastrophe, it's all our responsibility. And there's also a sense of hopelessness, that it's too late; bad decisions have been made, and there's nothing we can do. We've messed it up, we have doom in front of us. So a terrible sense of fatalism.
Gretchen Huizinga: The interesting part there is, like you have framed it, there's this crushing responsibility, coupled with astonishing arrogance, to say that we could actually make the choices outside of God's Providence. I think that goes back to the idea that we've dismissed an actual super intelligence in the world, so we have to make a super intelligence to fix the problems.
John Wyatt: That’s right. I've struggled to try and find a good analogy to compare with building bricks in the wall, but it seems that one way of thinking about it is that there's a great river which is flowing through history, which started before the foundation of the world, and it's the river of God's providential purposes and plans. We are called to contribute to the flow of the river, and we do that by putting rocks in, or little dams, or something like that. Our actions do influence the river. They do have downstream effects, and therefore we are called on to act wisely, to act with faith, hope, and love, and prudence and justice. But we are not so arrogant as to think that we can in any way redirect the river. The river is going to flow long after we have disappeared. And that gives a freedom.
O’Donovan points out that actually, this is the way the Bible thinks of the Sabbath. We don't have to work seven days a week, seven days a week, seven days a week building a future. We can actually have a Sabbath when we stop and we celebrate and we enjoy and we rest, because the future of the river doesn't depend on us.
Gretchen Huizinga: It’s Christ's yoke upon us, not our yoke to make it happen.
John, as someone with experience in medical and bioethics, you've noted that we've been at that long enough to have formulated a clear Christian response. At the same time, you've suggested that AI presents us with such genuinely new issues that we don't at the moment have an equivalent framework for AI ethics. To quote you, we're painfully aware of our deficiency, and we're working on it. And you are: you're at the Faraday Institute, you've turned your intellect and your superpowers toward these issues.
Think of this podcast for a second as a sort of audio whiteboard. What ideas might be foundational to a Christian response to the simulacrum?
John Wyatt: We need to think much more deeply about the theology of simulation. It seems to me that we haven't been forced to do that before. One of the fascinating things in the history of Christian doctrine is that it's often the confrontation with new challenging ideas and “heresies” which stimulates a flowering of new Christian thinking. If it wasn't for the challenges of the ancient heresy of Arius and so on, we would never have a worked out, fleshed out doctrine of the Trinity and of Christology and so on. So I'm actually very excited, because I think in interface with these new, very challenging ideas, we're going to discover new depths and truths within the richness of the Christian faith.
I think one is a theology of simulation versus authenticity, and why authenticity really matters. There's a wonderful scene in the television series Westworld, a rather tacky series where there's a theme park dominated by humanoid robots, and real human beings can go to the theme park and live out their fantasies, which seem to be largely about either having sex with robots or else shooting them and fighting with them. I mean, what else would you do? It's obvious, isn't it? But there's this scene in one of the early episodes when a human being arrives in the theme park for the first time, and this beautiful woman comes up to him and says, is there anything I can do for you? And he stares at her and he says, are you real, or are you one of those? And she says, if you can't tell the difference, does it matter?
It seems to me that question resonates as the question which we will have to wrestle with all into the future. If we can't tell the difference, does it matter? My Christian instinct, and I'm sure yours as well, is yes, it jolly well does matter. But we need to find ways of expressing why it matters.
I've had an interesting thought experiment in this. Suppose my wife, who I've been married to for 33 years, is actually a Russian agent in very deep cover, and actually she's been living this completely double life. She's got a handler and so on, and I'm completely unaware of all this. Suppose I go to my grave, and I always think that she loved me and that we had a good marriage. Does it matter that actually she was a double agent? I think the Christian instinct is yes, it does. Even if I never found out I was being lied to, our relationship was not based on the truth. It's deeply related to truth, trustworthiness.
I recently discovered that the three English words—truth, trustworthy, and troth—are all related, and similarly in Hebrew, these concepts are very closely intertwined. The simulacrum rides a coach and horses through truth, trustworthiness, and troth.
I think that's one area we need to work on. Another area is just what it means to be a person. What the philosophy of artificial intelligence does is privilege a certain kind of rationality, a certain kind of computational processing, as being the core of what personhood is. And as a pediatrician who has cared for many children with profound brain injury or with disabling conditions, I'm completely convinced that despite the fact that they may have a very impaired ability to process thoughts, they're still as much a person as I am. In other words, personhood is not the same as processing power.
But then the question is, well, what is it? And I think ultimately it's a reflection of the image of God and the fact of the persons of the Trinity, that we are persons in relation. We're made in relation. As a simplistic tag, I've suggested that instead of cogito ergo sum—I think, therefore I am, which of course is at the root of the philosophy of AI—it's amor ergo sum, which means I am loved, therefore I am. It is actually to be loved, to be in relation: that is the core of my personhood and my being, and only human beings can experience that.
So I think the theology of personhood and of human anthropology is something which we need to drill down deeper and further in order to compare and contrast with the “analogous” or “simulated” personhood of the machines.
Gretchen Huizinga: Everything you've said just raises more thoughts and questions. I've heard that we are not primarily thinking things, we are primarily loving things, which is why you see so many irrational decisions made over love, even in light of all of the cognitive warnings against such a relationship.
I want to point, too, to my father, who's suffering with Alzheimer's and kind of rejected the church all his life. And then when his brain started to go, he started to go to church with my mom. And he listens to the Bible now, and will pray with us. And my mom asked her pastor, what's up with that? How do I take this? Is this real, because his brain is gone? And the pastor responded, the brain and the soul are different and operate differently, and God can affect our soul, even if our brain doesn't work as it did before.
John Wyatt: I think you're absolutely right. There's this wonderful verse in the Psalms, “deep calls to deep,” and that's often been applied to that. The deep things of God are knowable by the deepest places of what it means to be human. And that's, again, an aspect of the image, and it's the way that—whether we like it or not—we are in a relationship with God. I think it's true, therefore, that the cognitive, rational bits of our minds and brains, which are supposed to help us and guide us in our faith, can actually be the obstruction to our faith. And of course, Jesus says, unless you become like a little child, you can never enter the kingdom of heaven. And it's the pride and the arrogance of our thinking rationality and “we can work this out ourselves” which becomes the biggest barrier.
Gretchen Huizinga: When you and I talked before you used a term co-belligerence, which I looked up. Historically, apparently it's meant that you were allied with someone else against a common enemy in a conflict or a war. How might we as Christians expand or update our definition of this term in this deceptive, AI-infused, robot-forward, simulation-centric world? What are we up against, and how might we join forces with other believers as we face this?
John Wyatt: Yes. I find this concept very helpful, because the idea of co-belligerence is that we are in some kind of conflict, and we're trying to promote our particular perspectives and so on; and for the purposes of this particular conflict, we find other people who don't necessarily share our entire worldview or convictions, but who on this particular aspect share our common goals. And we agree to fight together. We agree to pool our resources in order to have greater effect. And the fascinating thing about this field is that Christians can find very different co-belligerents here than in other fields, such as the well-worn pro-life/pro-choice debates.
In particular, I think there are a whole number of people who don't come from a Christian or religious perspective, but who are deeply concerned about making human centered technology, and are concerned as we are about the abuse of the vulnerable, about manipulation and coercion and about data storage. I think even though we don't share fundamental metaphysical or philosophical concerns, we do share common concerns about the appropriate use of this technology. I'm struck by how many people there are out there who share these concerns. In fact, I think we're probably easily the majority, but at the moment we don't have any mechanisms for pooling our common concerns and interests.
We've got a great deal to learn from people who are coming not from a Christian perspective, who are in some ways ahead of the game compared to us in terms of thinking of ways of minimizing damage—harm minimization, splitting up commercial interests, improved regulation, warnings like the Turing red flag, and so on. So I would very much encourage us to be looking for co-belligerents and to be wanting to pool our resources, to try to have a greater influence against where we see clear evil and dangers.
Gretchen Huizinga: Have you seen this in the field of bioethics already?
John Wyatt: Oh, absolutely. An example would be in reproductive technology, concerns about IVF and the use of embryos and all that kind of stuff. I found a number of co-belligerents, for instance, with feminist groups who are very concerned about the abuse of women. I've found them in ecological groups who are concerned about nature and protecting nature and not using embryo enhancement and so on. It's just an example of where, certainly in the bioethical field, co-belligerence is a very powerful and effective tool.
Gretchen Huizinga: I just like the term co-belligerent. And I'm one. I'm on your team; I'm in the game, John. Every time I talk to you, I learn something new and I end up grinning at God's goodness. I just want to thank you for taking time to join me today, because you have a lot of work to do, and I'll at some point probably join you as a co-belligerent in some way.
John Wyatt: Okay. We'll be out there on the front.
Gretchen Huizinga: That's right. Well, it's been a real pleasure again, and thanks for joining me, John.
John Wyatt: It's been great, Gretchen. God bless you.