Transcript for Episode 67

Gretchen: My guest today is Dr. Noreen Herzfeld. She's the Nicholas and Bernice Reuter Professor of Science and Religion at St. John's University and the College of St. Benedict in Collegeville, Minnesota. Dr. Herzfeld holds degrees in computer science and mathematics from Penn State and a PhD in theology from the Graduate Theological Union in Berkeley.

She teaches courses in both computer science and theology, conducts research at the intersection of religion and technology, and has spoken and written extensively on how new technologies like AI challenge our relationship to God and to each other. Noreen Herzfeld, welcome to the podcast.

Noreen: Thank you very much, Gretchen. It's my pleasure to be here.

Gretchen: You've been described—or have perhaps described yourself—as an upright Quaker with Lutheran leanings or Lutheran roots. What does that mean to you theologically, and why do you think it matters where we land personally on the Christian faith spectrum in the context of our roles in the larger body of Christ?

Noreen: The combination of Quakerism and Lutheranism has really informed how I approach theology and how my own theological views have changed over time. I was raised as a Lutheran, and I went off to St. Olaf College to get a degree in sacred music and was trained there as a classical musician in organ, voice, and conducting. I also got a math major while I was there.

When I went to graduate school, I went to the local Lutheran church and I found two problems. One was that they were engaging in one of these disputes Lutherans like to do where half the congregation thought the pastor walked on water and the other half thought he was the devil incarnate. But the other was that this was just the time when a lot of churches were moving from sacred organ and choral music to praise bands. Having gotten a degree in sacred music and just been steeped in the classical choral and organ tradition, I thought, if you can't do liturgy well, just don't do it at all. And that's what led me to the Quakers. 

I soon found with the Quakers that God truly spoke to me out of the silence. What that represented for me, I think, was a movement from a cataphatic view of God, where God was found in superfluity. When I was in college, one of my favorite poems was by Robinson Jeffers, and it begins, “Is it not by his great superfluousness that we know our God?” But as I moved both into studying mathematics for a graduate degree and worshiping with the Quakers, I found God in the silence; and I moved to a more apophatic view of faith, that as we find in the first line of the Tao Te Ching, the Tao that can be named is not the Tao. To me, it became the God that can be named is not God, and that we find God in that still small voice that speaks out of the silence.

Gretchen: Wow. That's an incredibly evocative journey that you've just described, and it sounds like it started with what they call the worship wars right now. Either the music's too loud or it's too quiet; it's too modern or it's too old-fashioned; it's too hymny or it's too praisy; it's the seven phrases repeated 11 times as opposed to meaty theological hymns and music.

Noreen: Exactly. And that wasn't the end of my journey, because after I got my degrees in math and computer science, I came to St. John's University, which at the time had the largest Benedictine monastery associated with it. And so the Benedictines became a very large influence as well on my faith journey, particularly their understanding of hospitality and the bringing together of prayer and work; that worship is also found very much in what we do in each little action throughout the day.

Gretchen: It sounds like you have a rich tapestry of influence from different aspects of the Christian faith that's informed you.

Noreen: Yeah, I think I do. And then when I was at the Graduate Theological Union, I was living right across from a Presbyterian church, so the Reformed tradition came creeping in. It was at that time that I was introduced to the theological works of Reinhold Niebuhr and Karl Barth, and those became also very influential. Anyone who's read my work will see Niebuhr all over it.

Gretchen: Anyone who's talked to you actually, because I can say the same for both. 

The idea of computer science, mathematics, and theology as a combination for a life's work might seem odd to some people, but explain why it's not.

Noreen: This was another journey that I took. As an undergraduate, as I moved into mathematics and particularly went off to Penn State to get a degree, the part of mathematics I was interested in was formal logic. What I loved about mathematics and logic in particular was its cleanness. You could prove something; it was either provable or it wasn't. Answers were either right or they were wrong. A computer program either ran or didn't. And I really found a lot of comfort in that. 

I came to St. John's and I began teaching computer science. And at a certain point I started teaching artificial intelligence and started bringing some questions of computer ethics into my classes. And I realized that I had run away from the human messiness that one finds in the humanities or the social sciences. And yet, it was precisely in that human messiness where the most interesting questions lay. That got me thinking; and in artificial intelligence, the question I got to thinking about was: why do we want to make the computer in our own image? Because computers are the most useful to us as tools when they do the things we don't do very well. They crunch the big numbers; they dive through the huge databases that we couldn't take the time to get through; and that makes them useful.

But there has always been this impetus, this dream—we find it in science fiction and we find it in computer science and computer scientists—to make the computer like us: to make it in our image; to make robots look like us; to make the computer think like us. 

And I was thinking about this question and I thought, well, great: I'm getting ready to go on a sabbatical. I'll attack this question. And I went off on my sabbatical, and it has to be in history one of the least productive sabbaticals that anybody ever went on, because I got nowhere. And the reason was that this wasn't a question I could approach as a computer scientist or a mathematician. This was a question about human values, about human predispositions and interests, and it had to be approached from the humanities.

So I came back to St. John's and I thought, well, okay, I need to get the background, because I was still thinking about this question. I thought, well, I could approach this philosophically, theologically, maybe psychologically; but at St. John's we have a graduate school in theology. And also as I thought about this question—why are we trying to make the computer in our image—I thought, whoa, where have I heard that language before? Well, Genesis 1, right? We say we were created in the image of God. 

And so the practical side of me said, Hey, there's a school of theology here. I'm gonna study some theology. And also I thought, well, that's interesting; is the image we're trying to give to the computer the same as the image we think we reflect from God? And that became the question that I pursued as I went to the Graduate Theological Union and studied theology, and the question I'm still struggling with. 

AI has become such a buzzword these days. It's almost like if you want to get any notice of what you're doing in computer science right now, you have to call it AI. And it seems to me to be a resurgence of this idea that the computer should be in our image, that robots should look like us and act like us and computers should talk like us and think like us. But I'm not sure that's really the most productive avenue for us to be going down right now.

Gretchen: No. And the idea of optimizing, as you've said, for intelligence versus optimizing for love is an interesting thought experiment. Although if you drill down, pretty quickly you find out that a computer can't do that optimization for love or the self-sacrificial nature of the kind of love that God gives us.

But I want to take a little detour to something you've said recently in reference to Jesus' command that we love the Lord our God with all our hearts and love our neighbor as ourselves. You've asked a provocative question around this: in an age of artificial intelligence—in other words, machines that are created to look, talk, and act like us—is AI our new neighbor? And if it is, what does it look like? What does the neighborhood look like?

Noreen: There are many who do think that developing AI is simply enlarging the neighborhood, that we are simply adding to the neighbors that we have and that we should consider giving robots the same sorts of rights as humans, and bringing them in as a part of our human community.

First of all, I would say that a lot of that is wishful thinking. Just recently an engineer at Google, looking at the answers that GPT-3 was putting out, said “it's intelligent.” And it's not. We are so quick to project. And in many ways, I am guilty of this myself: I have a dog at home. And when I see the dog have a certain expression on her face or do something, I'm very quick to project the kind of motivations that I would have if I had that expression or was doing what the dog is doing. And I have to remind myself, wait a minute—she's a dog. She's not a human. She's probably not thinking the way you're thinking.

But we do that with the computer as well. When we see something that shows the least bit of what looks like agency; when we see something that shows the ability to express itself, particularly in human language, we're very quick to attribute agency and intelligence when it really isn't there. It's a good mimic; it's a good puppet; but it's still only that.

So do computers enlarge the neighborhood? Not yet, certainly. And not in the way that I think many people wish that they did. What I fear they do instead is narrow the neighborhood. Most of us see that with the algorithms that control our Twitter feed or our Facebook feed or whatever; they're putting us into bubbles, because they're trying to show us what they think we're going to like so that we stay engaged with the program and so that our eyes are caught by the advertisers' ads.

This is the real AI. When we think of AI, we tend to have this science fiction mindset that, oh, it's robots, you know? It's voices like GPT-3, game-playing programs like AlphaGo; but the real AI right now that is influencing our neighborhood and ourselves is all of these algorithms that are working behind the scenes and that are trying to influence us, here in the West, mostly to buy things. If we were in China, it would be to be good members of society.

Gretchen: And I think to some degree you find that the manipulation that we see in social media tends to make us try to be good citizens of society, at least as far as some people want the world to be, on either side of the spectrum.

Noreen: Unfortunately, I fear that here in America, it is making us very bad citizens, because it is amplifying our emotions and our emotional responses to political ideas. And it is very much dividing us into two camps that hate each other.

Gretchen: I'm fascinated, because I didn't know you were going to go in that direction in terms of how AI changes the neighborhood. I'm thinking many people are saying it expands it, and you're actually saying it narrows the neighborhood. And I see that. I think that's fascinating. And I think that's something we ought to really be thinking deeply about—is it good that I get more of what I want? Does that make me a better person or a more shallow person?

Noreen: And we can extend that to the idea of—as AI and robotics move into more parts of our lives—will this keep us from being with other humans? Because it might be easier to be with the AI. Sherry Turkle has a lovely sentence in which she's thinking about robot companions and particularly sex bots, which are coming and are already here in some ways. She says, what we're trying to have is love that's safe and made to measure. And to me, I want to say, but love isn't safe, and it shouldn't be made to measure; because the relationships that we are in, the people that we love, are what challenge us to grow.

The monks here at St. John's often say that the whole point of living in community is to be living with people that you don't necessarily get along with all the time, but bumping up against them every day eventually wears all of the rough spots off of you. At least that's the ideal. And I've heard people say similar things about marriage—that you just annoy each other into heaven.

There is something that we need to think about there: that if we devise robotic companions who are always cheerful, are always telling us what we want to hear … this isn't the way a neighborhood should be.

Gretchen: No. And I've heard the phrase, “Marriage is not there to make you happy, it's there to make you holy.” We could say the same thing about other relationships as well. 

But you've just hit on a question that I want you to go a little deeper on in this idea of a recent book chapter you've written called “Religious Perspectives on Sex with Robots,” which I find absolutely fascinating. You do note that not all religions have the same sexual ethic, but AI does pose profound questions about our conceptions of love and companionship and relationship and even sex. So go a little deeper on what we need to be thinking and asking about sex robots, specifically from a Christian perspective.

Noreen: Many have said, is a sex robot any different than just a blow-up doll? The idea is that if you could take essentially a sexual doll, but then you add artificial intelligence to it, that you are adding the relational component. The question you have to ask here is, what does this say about our approach to sex? One of the questions that I would ask about having a robotic sexual companion is, first of all, can the robot say no? Because if it can't, what you've got is a sex slave. I'd say no better than a prostitute, but even worse than a prostitute, because you're not paying it. So then we have to ask, is this healthy, or what attitudes toward sex will this foster in us?

Coming back to Sherry Turkle's love that's safe and made to measure, people have also said sex bots would be a safe way to have sex for people who have disabilities or are in difficult conditions, like being incarcerated. This does not fit the Christian view of sexuality, because in the end, it's closer to masturbation; the computer isn't truly relational. The computer may look relational. It may act relational in certain ways. But the ideal for sex is that it's a relational bond between two people. And of course, in the Catholic Church and among Orthodox Jews, the sex act should also always be open to procreation. And there's not going to be any procreation doing it with a robot.

Gretchen: I think this raises a lot of deep questions, not just about the actions that we might take with a machine, but also what it does to us in terms of shaping our minds and our hearts: either toward God or away from God, toward people or away from people, as you've mentioned before.

It's an interesting dichotomy here: we can talk about love and war, because that's another area that you've probed in your work. In a recent article in the Journal of Moral Theology, sort of switching streams here rather radically, you talk about lethal autonomous weapons, LAWs. And you ask, can they be just? Which is an interesting and deep question.

We asked similar questions about nuclear weapons in the 20th century. Talk a little bit about how, in terms of applying the just war theory, these two types of weapons are both the same and different; and what new challenges true autonomy poses for us now in the 21st century.

Noreen: The first thing that we need to recognize is that there are levels of autonomy in our weapon systems. The military in the United States has categorized these as human in the loop, human on the loop, and human off the loop. If a human is in the loop, they're in a sense working in partnership with the weapon. The weapon may have certain capabilities, but the human being is perhaps guiding it or making certain decisions while the weapon itself makes others. Human on the loop simply means the human being has the ability to override the decisions. So all the decisions are being made by the weapon itself, but a human has the ability to step in and make changes. And then of course, if a human is off the loop, then the weapon is simply making all of the decisions fully autonomously.

I coauthored an article for Peace Review with a retired U.S. general, Robert Latiff; I also spoke at a conference at the law school at Penn State that had a lot of military personnel there, both current and retired; and they don't want humans off the loop. They're even very wary about just human-on-the-loop weapons. Their concern, of course, is that this takes strategic command out of their hands—out of human hands. And the biggest worry is speed: just as we've seen on Wall Street, where automated trading led to the big crash about 20 years ago because the computers were simply trading at too fast a speed for the human traders to keep up with, their concern is that the tempo of war could also start to go too fast for human decision makers to keep up with what's going on.

If we look just at where humans are at least on the loop or in the loop, there are people, like the roboticist Ron Arkin at Georgia Tech, who have argued that these kinds of autonomous decision-making weapons would actually be a good thing, because they would not react emotionally. So many war crimes are committed in the heat of battle, in the fury of human emotion, and the argument is that this would actually be lessened with autonomous weapons. I am not sure that this is the case; it would really all depend on what kind of an ethic we give our weapons.

But when you really think about warfare, isn't our ethic essentially going to boil down to: we want to win? I fear that anything greater than that, any deeper ethical issues about saving human life, would be window dressing in a sense, rather than truly, deeply embedded in the decision-making process for these weapons.

I also worry that if we can deploy autonomous weapons on the field rather than human boots on the ground, that this is going to make it too easy to go to war. It's difficult for a commander in chief to explain to people why their sons and daughters have to get killed on the battlefield, but it's not difficult to deploy weapons.

But my greatest fear comes from a quotation by a Marine general who said, when we think of autonomous weapons, we think of these big tanks; we think of guided missiles; but he said what we really want are weapons that are smart, small, cheap, and ubiquitous.

Think about that for a minute. If we have weapons that are smart, small, cheap, and ubiquitous, they're going to be ubiquitous. They're going to fall into the hands of terrorists. They're going to be deployed easily. I saw a concept on Twitter the other day of a smartphone with a little tiny drone that would pop out of it, pitched as the new way to take selfies. You can send that tiny drone up in the air and take your selfies, and then call it back to your smartphone. And I thought, hmm.

Gretchen: What else could you do with that?

Noreen: What else could you do with that? Think about the possibilities for assassination, with tiny drones that have facial recognition that could be sent out; or think about the possibilities for genocide if you could put certain characteristics into these drones, weaponize them, and send them out in swarms.

Gretchen: I'm thinking of the book of Revelation and some of the things that St. John was trying to describe, and we've all over the years tried to figure out what it is that he's talking about in terms of the apocalyptic end of humanity. I just feel like we're hitting this critical mass of things that could be destructive on a massive scale, and so easily.

Some of the things that you're talking about are really, really provoking thoughts, because we've designed these weapons for speed and efficiency. And that's what's good about a machine, right? Tools are better for us at the things we're not that good at; being fast and efficient and accurate is where we fall short, where human messiness comes in. So we've gotten a worldview that says that's good. I would even allude to Julie Carpenter's work on the bomb-defusing robots she studied, in terms of the soldiers' relationships with them and the emotional attachment to these machines that went and basically did the dirty work of defusing a bomb instead of a person. When you extend that, what does it do to your mind and heart to say, well, isn't it better that a machine does that than a human?

Noreen: I believe that we certainly need technology. We need tools. There are things that machines do much better than we do. What I want to come back to is something I mentioned earlier: this idea that machines should be in our image, this idea that they should be like us. The problem is that we can only take a piece of our image, and the piece we're trying to take is instrumental reason. Joseph Weizenbaum wrote about this way back in the 1970s in his book, Computer Power and Human Reason. He pointed out that when you take instrumental reason and you separate it from human love, companionship, and community, it becomes monstrous. The Nazis were very reasonable as they approached how to carry out the Final Solution.

Reason by itself is wrong. As we try to make computers in our image, I fear that we will change ourselves to be more like them. Microsoft guru Jaron Lanier has said, if a computer ever passes the Turing test, it's not going to be that the computer got that much more human; it's going to be that we got that much more computer-like, because we are the more flexible of the pair.

And I want to say something about flexibility, because think about the stories of Jesus and Jesus’ actions in the Gospels. In a lot of those stories, he gets into a dispute with Pharisees. And what are those disputes always about? They're always about flexibility. For the Pharisees, it was: we need to keep the law. We need to be on the straight and narrow; we need to follow a protocol. You could even say, we have a program. And Jesus was always breaking the boundaries of that program, and saying: No, your law is too narrow. Your neighborhood is too narrow. Your thinking is too much about process and not enough about love. 

And so as we try to make computers in our image, we really need to guard against remaking ourselves in the computer's image, because that is going to be far too narrow, far too legalistic, far too process-bound, and not at all what Jesus tried to teach us.

Gretchen: This isn't even a question I had on my list to ask you, but it really comes up right now. Let's step into the computer ethics or the AI ethics lane for a second and discuss this concept of the law versus grace, the algorithm versus love. I think what I'm seeing in the AI ethics world is a similar pharisaical bent towards getting a system of rules and laws that people will just obey, so that we can have good, beneficial, and benevolent AI. But in a sense, it's trying to make us toe the line and obey the rules and not be bad people. How would you apply what you've just been talking about regarding our concept of machines to our concept of machine ethics?

Noreen: We're hitting a wall that we've hit before, and this was back in the early days of AI when people approached it through symbolic reasoning. They thought, if we can just come up with the right reasoning process, we'll come up with computers that do what we do. And of course they had early successes in playing chess, solving calculus problems, that kind of thing, because these are very limited, rule-bound worlds. Symbolic logic and processing worked very well in those worlds. It didn't work well in the larger worlds of human communication, of even just navigating around a natural environment.

Well, now we're managing those worlds mostly with big data. But there's still that limitation. And as we approach computer ethics, we're almost going back to the old symbolic reasoning and saying, well, if we can just come up with the right set of rules, the right processes, we can make computers ethical. But don't you see, that was exactly what the Pharisees were trying to do, and that was exactly what Jesus was fighting against, because love doesn't work that way. Even if we go back to this question of autonomous weapons, if there were a system of rules that made warfare work, we'd have found it by now. There isn't. If there were a system of ethics that made human society work, we'd have found it by now; but we haven't, because there isn't. It's not rule-bound, in the same way that human intelligence is not; it doesn't work like a computer program. It's not a system of symbolic reasoning—at least not in general, only in certain areas.

Gretchen: Oh my goodness. I always say this, but nobody can see how big I'm grinning on a podcast because of what I'm listening to from the brilliant guest. 

Noreen, I think one of the biggest challenges AI presents us with is one of definition, and this is kind of what we're talking about right now. Some people say it's just a tool. Others say it's a partner or a collaborator. Still others call it a surrogate. And the most extreme might even say it's a savior—in fact, we've seen big hype around that. So what do you say, Noreen? How should we view AI, and how can we hope to control the narrative in a world where hype both makes headlines and secures funding?

Noreen: Exactly. And that's a problem, particularly the latter. It's said that one of the researchers at MIT used to have a sign that hung over his desk that said “we shall overclaim,” and AI has always been an overhyped field. It's always been just around the corner. If you go back to predictions made by some of the earliest pioneers in AI, like Marvin Minsky back in the 1960s, he was saying, oh, within 8 to 10 years, we're gonna have computers that think as deeply as we do. Ray Kurzweil has been predicting that the singularity is going to happen 10 years from whenever. And it becomes a sliding scale: for a while, it was going to be by the 21st century; for a while it was by 2020, then it was 2030. Now he's saying 2045. It just keeps being pushed back.

People want to believe the hype of AI. They want to think that it's just around the corner. But they also have to promise that it's just around the corner to get funding. Right now I would say to people, don't believe that everything you hear called AI is AI. 80 to 90% of it is just regular computer programming; there's nothing AI about it. But AI has become the buzzword, the catchword, the way to get customers, the way to get funding. I think we have to recognize that much of this is hype; it's not just around the corner. GPT-3 did not become intelligent or human-like overnight, or sentient, as I think the engineer said it was.

Gretchen: So we're landing more on, it's a tool. I think that people would agree that it's a tool, but it gets back to what you're talking about: this hope—and I think it is hope for some people. Does it replace other kinds of hope in solving the world's biggest problems, answering the world's biggest questions? And where does religion come into this? This is something you've thought long and deeply about.

Noreen: I think that in many ways, we as a society and as individuals have become less religious, as we do not believe in God or angels or anything other than ourselves. As somebody put it, it can be lonely being the only sentient beings in the cosmos. We are built for relationship. We are built to be in relationship with each other, and with our creator, with God. And when we are not in satisfying relationships with each other, when we are not in relationship with God—with the being that is other to us, that is not exactly like us—we miss that, and we start looking for it.

Now we are looking towards AI to fill both of these gaps. We've already talked quite a bit about how we're looking for AI to be the perfect companion, the perfect sexual partner, the perfect comrade in warfare. But we are also looking for AI to fill in where God used to be. And in that sense, we're looking for some sentience that is not human sentience. Well, we look for sentience in other animals, and I think we are finding that they are more sentient than we thought they were, certainly more than Descartes gave them credit for being. We find that we're looking for extraterrestrials, hoping that there might be some sentience out there that we can communicate with. And we're also looking for AI—that we'll build a sentience ourselves that will be other to us.

And I think we're looking in the wrong direction. The same way that we look for AI to solve problems like climate change for us—we're saying, well, maybe artificial intelligence will come up with solutions. It's a tool. We need to come up with the solutions. We can use computers to model the climate—we have been doing that, and our models are becoming better and better, although I think we're finding that they've been pretty far off the mark, in that changes they suggested would happen in 2050 or 2080 or 2180 are happening already. So we have to recognize that, and I think we have to take responsibility. In some ways, when we think AI will solve our problems for us, we're abdicating responsibility for solving our own problems.

Gretchen: As a side note here, there's been recent discussion, especially in Kate Crawford's new book, Atlas of AI, that these giant compute resources are actually contributing to some of the climate problem, with the vast mining for batteries and how much electricity it takes to run a data center that does AI modeling, which is just fantastic in terms of the large sense of that word.

Let's go a little deeper on what you've just been talking about, because you have a new book coming out, titled The Artifice of Intelligence, and the subtitle is Human and Divine Relationships in a Robotic World. You've talked about this a little bit just now, but I want you to go a little deeper into how this idea of artifice—not artificial, but artifice, that's an interesting distinction there—impacts our relationship with God and with each other; specifically in terms of how the Christian doctrines of the Incarnation and Resurrection impact our current fascination with AI.

Noreen: Well, there's a lot to unpack in that question.

Gretchen: I have a habit of doing that. I'm sorry.

Noreen: Thinking about artifice, if you go back to the Bible, let's just think about the golden calf. Idols are gods made by human hands. This was artifice; this is something we are warned against time and time again in the Bible. It's this idea that through our own artifice, through the works of our own hands, we can make a god for ourselves. There are very few that would call AI a god—although there was briefly an attempt in Silicon Valley to have a Church of AI; it did not find a lot of adherents. However, we act like AI is a god. We take its answers, its proclamations, as if they came down from on high, and in that sense we make of it an idol. And the thing about idols—the way this comes back again to Sherry Turkle's love that's safe and made to measure—is that idols are gods that are safe and made to measure. If we make them ourselves, then in a sense, we think we can control them; but it never works out. It's always a substitute for the real thing, and a distraction, a dead end that we wander down. So that's the first part of your question.

The second part of your question was about the Resurrection and the Incarnation. This is a major theme in this new book that's coming out—the importance of embodiment. When we think about intelligence, we would like to think that it's something that can exist outside of our body. It's this nice, abstract thing, the same way that the cloud is this nice, abstract, beautiful, clean place. And then we say, no, actually it's very dirty server farms that are using fossil fuels. Intelligence seems like this nice disembodied thing, but it isn't. Everything about our intelligence is rooted in our body, and in our bodily experience within our environment; and everything about how we relate to each other is rooted in our body.

In this book, I ask the question: what makes for an authentic relationship? Karl Barth had four criteria for that. The first one was, look the other in the eye. Well, that is certainly embodied. The second was, speak to and hear the other. You could say, well, that's not so embodied. We're doing that right now, across the airwaves, through computers; but you have to recognize that the ability to not be authentic is compounded as we get more physically distant in our communications. And then Barth said, you need to aid the other. Now that aid can be physical; it can also be mental or emotional, and be mediated by computers.

But his final one was, you have to do it gladly. Aiding another, if you're coerced—well, that's slavery. You have to do it gladly. Can a computer do anything gladly? To do that, it has to have emotions; but emotions are completely rooted in our body. In fact, if you look at what psychologists have written about emotion, they say it's a four-step process. We have a stimulus; that stimulus elicits in us a physical response. Think about fear: you're out hiking in the Minnesota Boundary Waters. You hear rustling in the underbrush. You know there are black bears around. And the first thing that happens when you hear that rustling is your heart starts racing—you're ready for fight or flight; then your brain kicks in and analyzes both things. What was the stimulus that I heard/saw/felt, and what's going on in my body? And that then produces the emotion.

So the computer can perceive a stimulus. It can jump then right to step three and analyze that stimulus and think of the correct response. But it hasn't felt an emotion. So when we say can a computer show love back to us, the answer is no.

I'm old enough to remember when Bill Clinton was president, and people made a little bit of fun out of the fact that he would say to people, “I feel your pain.” But that is what we need with love: to actually feel empathy, to feel another person's pain. People who don't have that feeling, who might observe someone in trouble, calculate what would be the socially correct response, and act on that, we call sociopaths. So a computer without a body is going to eventually feel to us empty, sociopathic, because it doesn't really feel.

And the other thing is, to establish relationship in the most authentic and the deepest sense, you need to be there. You need to touch each other. We are physical creatures. Researchers have certainly shown that children who were raised in Romanian orphanages, where nobody ever touched them, just could not develop properly. And so we need the body. We need bodily experience of ourselves and our world and of being with each other. And I think Christianity brings that to the table by sanctifying the body through the Incarnation: that even God Godself came and took on human flesh, and with that human pain and suffering and feelings and death, to truly establish a full and authentic relationship with us.

And we carry that a step further with what we believe about resurrection in the Apostles' Creed. We say we believe in the resurrection of the body and the life everlasting, that we don't just get subsumed into some kind of Godhead. We're not even just souls that flit off to some disembodied existence in heaven. I think a lot of Christians think the latter, or talk like they think the latter: “Oh yeah, but her soul went to heaven.” But that is not what Christianity teaches. What Christianity teaches is that when the resurrection comes, we will be resurrected in the body, the same way that Jesus was resurrected. It was a changed body, but he remained an embodied soul.

Gretchen: This is just a powerful refutation of the whole idea of “human enough,” which even Westworld has raised: well, if you can't tell, does it matter? And I think what you're saying is, yeah, it does. And ultimately, we get there through the hope in Christ and the incarnational ministry that he modeled for us.

Noreen: On the other hand, now that you bring up Westworld, I do want to bring up the caveat that it does matter how we treat robots. And here's why I say that. It's interesting, because people have found at conferences that if they leave the robots alone, they will get destroyed. Many years ago, there was this really simplistic robot called hitchBOT that people were taking on trips. The idea was that it was going to hitchhike its way across the world; and it made it across Europe, and it made it all across Canada, and when it got to Philadelphia, the city of brotherly love, it was torn apart. Poor hitchBOT. Now hitchBOT didn't suffer. So in that sense, I'm not saying it matters to the robot; the robot doesn't care. It's not sentient; it doesn't feel pain. But it matters to the people who destroy the robot. How we treat things that are humanlike is going to form us. 

Here I would quote St. Benedict in his Rule for monasteries. He has a chapter on the cellarer—this is the person who is in charge of all the goods in the monastery. And he says, first of all, about the cellarer: if someone comes and asks you for something, insofar as you can meet their request, do it. If you can't, at least give them a good word. So he's saying, people come first. Deal with people, meet their needs as much as you can. And if you can't meet their physical needs, at least try to meet their mental and emotional needs and treat them well. Then he says, and treat all the goods of the monastery as if they were the vessels of the altar.

What that says is we need to treat even things with respect, because things stand in proximity to the divine the same way we do. To me, that says we need to do a much better job of treating nature, of treating the earth with respect; but we need to treat our computers with respect too. And we need to treat robots with respect, because ultimately how we treat things shapes us. Aristotle said the same thing: he said how we get virtue is by doing virtuous acts. How we get virtue is by doing good things over and over and over again, and how we get less virtue is by doing bad acts over and over and over again. We are formed by everything we do, and that everything extends beyond just how we treat each other. It also extends into how we treat the earth, and that means how we treat everything. So that includes AI. 

So yes, we need to treat AI with respect as well. But we shouldn't make the category error of saying, well, then that puts them in the same category as human beings.

Gretchen: That takes deep thought, because you can easily go in that direction of robot rights and personhood for machines. But you can do that with everything—animal rights, et cetera.

Noreen: Yes, you can.

Gretchen: It's that fine line of respecting creation, including our creations, and not calling them us and not deifying them.

Noreen: Here I come back to my dog. I love my dog. I need to respect my dog. I need to treat my dog with all love and graciousness. But I shouldn't start thinking my dog is my child or is another human being, because it's not. That is not respecting the otherness of the other. And I think we need to respect that, whether it's in animals, whether it's in God, whether it's in our creations like AI.

Gretchen: When we talked earlier, you said, and I quote, “I'm almost done with AI” in terms of what you're thinking and writing about. So what's next for Noreen Herzfeld? What areas can we look forward to hearing about from you in the next phase of your incredible writing and thinking?

Noreen: Well, you'll certainly still hear me talking about AI. I have this new book coming out in [February], and I still continue to have thoughts about what this is about. However, I am starting to move on in my writing; the next couple of projects will deal with technology at large. So it will be broader than just AI.

I think I'm moving more towards thinking about what we need as far as a new philosophy of technology and of the role it plays in our lives. In particular, I want to look at that from the stance of climate change: how technologies might help us mitigate climate change, but also how technologies got us into this difficulty in the first place, and how our wrong thinking about technology may be exacerbating the situation we find ourselves in. But I fear that if we cannot get a handle on climate change, technologies like AI are going to become a moot question, because we're just going to be dealing with one natural catastrophe after another. And we're just not going to have the resources; we're not going to have the intellectual bandwidth. We're going to start fighting each other for what resources remain.

So right now, climate change is the question we need to deal with before we can deal with more subsidiary questions. It's going to be the driver of what happens to humanity over the next 50 years.

Gretchen: Noreen Herzfeld, as always, it's been illuminating and inspiring to talk to you, and expanding in terms of what my own thoughts are about these big topics. So thank you so much for taking time to join us today.

Noreen: Oh, it's been my pleasure, Gretchen. I've really enjoyed it.