Transcript for Episode 53

Gretchen:

My guest today is Dr. Brian Green, and he's the Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University. Brian has a background in genetics and did his graduate work in ethics and social theory at the Graduate Theological Union in Berkeley. And he now teaches ethics, AI ethics to be specific, in Santa Clara's Graduate School of Engineering.

He spends a lot of time thinking and writing about ethics in real-world situations, and has an impressive list of articles published in both academic and popular journals. Now, I don't know if Brian just plays his life at 2x speed or if he never sleeps, but he's also just published two books back to back, last year and this year, exploring, respectively, space ethics and transhumanism.

We'll talk about both shortly, but first: Brian Green, it's hard to keep up. Welcome to the podcast. 

Brian:

Thank you. It's a pleasure to be here. 

Gretchen:

Well, let's start with geography, AKA, where you work. You're at the Markkula Center for Applied Ethics in sunny Santa Clara. Is it sunny there right now? 

Brian:

It is sunny here, yes. 

Gretchen:

I hate that! Walk us briefly through the history of the Markkula Center, sort of how it started/how it's going, and tell us a bit about your place within it.

Brian:

The Markkula Center was founded back in 1986. Mike Markkula was the third employee at Apple; sometimes people referred to him as the adult supervision, or the first employee not named Steve. His daughter came to Santa Clara University, and at some point he caught the president and said, “Hey, give me a call. I think you need to have an ethics center here.” And of course the president didn't know who he was, so he just put the card in his pocket and wandered off, found it a few months later, and said, “Oh, I was supposed to call that guy.” So he called him up, and, long story short, Mike Markkula gave some money to the university, and now the Markkula Center exists.

It's been around for quite a while now: since 1986, so 30-something years?

Gretchen:

Don't make me do math on the fly, Brian. 

Brian:

I know. 

Gretchen:

Even if it's addition. A long time.

Brian:

It's been around. I started there in 2013. The Markkula Center's been getting bigger over time. For maybe the first 15 years or so, it wasn't that big, but we started getting bigger faster around 2001. Since then, we have a whole bunch of different areas. We're the largest university-based applied ethics center in the world; you add enough qualifiers onto anything and you become the best at it, so that's what we did there! But it actually is fairly significant. We have between 15 and 20 full-time employees now. We work in government ethics, business ethics, nonprofit ethics, internet ethics, technology ethics, media ethics, and bioethics. Working across all those different areas gives us an interesting perspective, and of course very interesting conversations about what is going on in ethics and how these problems relate to each other.

Gretchen:

Right. And even as you said those areas of ethical interest, there's a lot of Venn diagram overlap between internet ethics and tech ethics, which are kind of the same thing.

Just out of curiosity, did Mike Markkula have some special affinity to ethics? Back in 1986, we certainly weren't facing as many of the problems we are now, as technology has advanced in many ways. So what was the genesis of his desire to get this thing going?

Brian:

That's a great question. Basically, as an entrepreneur and a co-founder of a company, he was better at seeing the future than the rest of us. He saw that this was happening, that we need to be talking more about ethics, otherwise it's going to get away from us. I think it’s great that he saw these things, and too bad that he was right. 

Gretchen:

Well, from a religious perspective, we all know something's gonna go wrong. So good for him for thinking about it. 

Let's talk a little bit about your work. It's situated at what I would call the intersection of technology and humanity; but you have five distinct areas of interest. Again, it's hard to keep up, Brian! Give us the 10,000 foot view—because I want to zoom in on a couple of them more specifically later—of these large topics, and why you think they're important in the world of applied ethics.

Brian:

I used to say I had three things that I specialized in, and then I had to expand it to five; and then sometimes I think, well, it's really all just one thing. It's applied ethics. But I think the five is what I'm going with these days. 

I work on transhumanism, which is the question of the application of technology towards human nature to try to turn us into something beyond human, basically. So that's transhumanism. 

I work on space ethics. Space ethics is basically applied ethics—so thinking about what happens, what we should do, what we should not do—but applied towards issues involving space. 

And then I also work on artificial intelligence and ethics, and of course AI ethics has become a huge thing recently just because AI has suddenly become so very, very powerful.

I also work on existential risk or catastrophic risk and ethics, which is kind of a different category. A lot of people have talked about the ethics of nuclear war, the ethics of artificial intelligence; and we don't want to produce artificial intelligence that's going to destroy the world, obviously. But I take kind of a more comprehensive approach. In other words, what are the risks, what do we need to be thinking about, and how can we make ourselves better people so that we can avoid having these terrible things happen to us in the future?

And then the last one that I work on is corporate technology ethics. So I work with corporations on very, very applied ethics. How does a company make better ethical choices when it comes to developing technology? 

Gretchen:

This was one of the areas I wanted to zoom in on. We're going to talk a lot about all these different areas and how they interact and intersect, but I really want to zoom in on space ethics for a minute, because I never thought of that as an area where we need to think about making good choices. (Aside from my personal vendetta to make Pluto a planet again.) I'm particularly intrigued: what is it that we have to worry about or think about in terms of space ethics?

Brian:

There are lots of things to think about. The first one would be, should we go at all? Is this the right way to spend money, or is it just a waste of money when there are plenty of things going wrong on earth that we could be taking care of right now? So that's really the first question.

The second question involves a lot of risk. In other words, should we be taking this risk? If we're going into space, it's hugely risky. It's bad for human health. Your spaceship can blow up. It can depressurize. You get exposed to radiation. You have problems with lack of gravity. So there are just lots and lots of health and risk things that can go wrong.

Then it turns out that the issues actually become bigger. In fact, the next issue is bigger than the planet earth itself: the problem of space debris. We're kind of wrapping the entire planet in this whirling cloud of death. Everything in orbit is going at miles per second, in terms of orbital velocity, depending on how close it is to the earth; the farther away you get, the slower things go. But even a tiny thing like a fleck of paint is going at three miles per second. One of these actually hit the space shuttle Challenger, years before the Challenger was destroyed: on an earlier mission, it was hit by just a fleck of paint on the windshield, and it went most of the way through the windshield. If it had been bigger than a fleck of paint, the entire space shuttle might've depressurized and everyone on it could have died. We don't think about these things very much, but right now it is getting crowded up there. There are lots of pieces of old leftover junk. A couple of satellites have crashed into each other, and when they hit they blasted themselves into a million pieces. And of course there are also countries like Russia that decide to blow up one of their own satellites and thereby create more debris around the earth. Eventually this is going to be a problem that gets out of control.
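
[For reference, a minimal back-of-the-envelope sketch of those orbital speeds, assuming a 400 km altitude (roughly the ISS's orbit) as an example:

\[ v = \sqrt{\frac{GM_{\oplus}}{r}} = \sqrt{\frac{3.986\times 10^{5}\ \mathrm{km^{3}/s^{2}}}{(6378+400)\ \mathrm{km}}} \approx 7.7\ \mathrm{km/s} \approx 4.8\ \mathrm{miles/s}. \]

Since \(v\) falls off as \(1/\sqrt{r}\), higher orbits are indeed slower, as Brian describes.]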

Gretchen:

Not to go too far down this tangent, but I'm super interested: Elon Musk is talking about going to Mars. We also have people doing space tourism. So aside from governmental programs that will take us to explore space, you've got people going up there for fun for a few minutes. Talk about crowding!

What are the ideas for potentially offloading humans to space, like in the movie Wall-E, once we've made too much of a mess down here? Are there any other areas of ethics that you're digging into there?

Brian:

Yeah—that was just the first three issues in the book. The book goes on and on. I do think it is important for us to consider getting self-sustaining cities off of the earth: settlements in orbit, on the moon, in free-floating habitats, or over on Mars. But it's very, very dangerous. It's very risky. It's good to start with something closer first; probably the moon would be my first place to go to.

The reason that's important is because there are a lot of risks here on earth. One of the other things that I said I was interested in is the idea of existential or catastrophic risk. When I think about this, I often think about monks after the fall of the Roman empire. You might think to yourself, okay, the moon as a colony is pretty different than monks after the fall of the Roman empire; what are you talking about? To me, the connection is clear: these monasteries with these monks were some of the few places where civilization endured, where they actually had libraries and just a memory of what an educated, civilized society was like, while the rest of the world very much simplified—and simplified in a bad way. Because earth is vulnerable to these things, it would be very helpful if there were self-sustaining settlements off of earth, where if that sort of disaster befell the earth, then there could be outsiders who could come back to earth and help us get back on our feet.

Gretchen:

It makes me think of that Star Trek movie, where the Vulcan planet was imploded, and they had cultural guardians that sort of kept all of the Vulcan culture in a small little colony or settlement. Last question on the space thing: where do we live? Let's say that the moon is one place, but you're referring to free floating settlements or colonies. What does that even look like?

Brian:

This kind of research was started way back in the 1970s. You can make a giant ball, a sphere, and you can pressurize the inside of it. You can get a source of light in there, warm it up, have dirt and things like that. And you can spin it so you get some artificial gravity, and that can be one form of a colony. There are other colonies that are shaped like doughnuts, and you can spin them. If you painted it like a doughnut, I'm sure it would be much more attractive.

Gretchen:

I would go.

Brian:

You can have a cylinder also, and the cylinder will just spin around. Anyone who's seen Babylon 5, the science-fiction show, will remember that it was a cylinder. So there are different options for what these settlements might look like if they are free-floating. 
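
[For reference, a minimal sketch of how spinning produces the artificial gravity Brian mentions, assuming a 100 m radius as an example. The centripetal acceleration at the rim is \( a = \omega^{2} r \), so one Earth gravity requires

\[ \omega = \sqrt{\frac{9.81\ \mathrm{m/s^{2}}}{100\ \mathrm{m}}} \approx 0.31\ \mathrm{rad/s}, \]

about 3 revolutions per minute; a larger habitat can spin more slowly for the same effect.]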

Gretchen:

Got it. Let's get back to AI and humanity for a while; but I'm going to read your book. It just came out last year, right? Space Ethics?

Brian:

That’s right. It came out in October. 

Gretchen:

Brian, on the spectrum of technophobe to technophile, where do you fall? Should we approach technology cautiously, or should we be more optimistic? How does the field of tech ethics inform our approach on this topic?

Brian:

This is a really interesting one to me. I wrote a paper about five years ago about the Catholic Church’s approach towards technology. The Catholic Church has always been in general very, very pro-technology. There are only a few technologies that they didn't like. They didn't like weapons technology, and they don't like things that interfere with reproduction. If you want to categorize them, they both have kind of an anti-life quality to them; they're either killing people or they're preventing people from coming into existence. 

I think that kind of approach towards technology, which is generally positive, is good; but you also always have to have in the back of your mind that something could go wrong, and we need to think about what that could be. In other words, ethics always needs to be there evaluating technology. And that's basically been the historical approach of the Catholic Church. That's the approach that I am well disposed towards as well. Every time a new technology comes out, I say, wow, that's really interesting. What is the technology? What does it do? What purposes might it be used for? Ultimately, I think it comes down to: what does this do to us as people? Does this help people? Does it harm people?

Lots of medical technology, for example, is clearly beneficial. But if you look back at the history of humanity, you can see plenty of examples, especially with weapons technology like nuclear weapons or biological weapons or things like that, where you can say wouldn't it have been better if we hadn't developed that? Maybe the world would be a safer, a better place without those sorts of things. 

Gretchen:

Drilling in on that a little bit: in one of your earliest articles on tech ethics, you argue that tool use and religion occupy the same faculty of thought in the brain, and both speak to our conception of purpose in life. So we've been talking about the “what could possibly go wrong.” Putting aside the obvious danger of reducing God to a neurological impulse, expand on that idea in light of new frontiers in technology like AI, the Neuralink brain-computer interface, CRISPR gene editing, even augmented reality, virtual reality, and psychedelic drugs; these literally challenge the boundaries of humanity. What's your take? How does your idea of tool use and religious purpose hold up in light of these technologies, and what could possibly go wrong?

Brian:

This is something that's been fascinating me ever since I started doing technology ethics. So this is something that I've been thinking about for close to 20 years now. The basic idea is that humans are tool users. Every time you have a tool, you're thinking to yourself, “This is what this tool is for.” It's important to recognize that that “for-ness,” or that idea of purposiveness or what that tool is for, is not in the object itself. It's in our head. So it has kind of a spiritual quality, if you want to think about it that way. We see a glass, and we know that that glass is for drinking out of; its purpose is to hold water so that a person can drink from it. When an animal sees a glass, it has no conception of what that purpose is. It's not something that registers in its mind. 

So this is something that makes us fundamentally different from a lot of the created world. And interestingly enough, I argue in that paper, it makes us capable of thinking religious thoughts. As soon as you think about an object or a tool having a purpose—the hammer has a purpose, the glass has a purpose—you can think to yourself: well, what's my purpose in life? Why am I here? What is the purpose of my group, the group that I'm a member of? What is the purpose of the universe? And if humans create tools, and we create tools with purposes, then is there a God who also created this universe that we are now living in? What is God's purpose for my life?

Gretchen:

That's kind of backwards retrofitting. We make tools, and then wonder if somebody made us; kind of an existential question that we've been asking here.

But even more finely tuned here: I want to talk a little bit about this Neuralink brain-computer interface and the idea you've just given us an overview of: how our faculty of thought engages with this. And the idea of being able to upload our thoughts—our brain, ourselves—into the cloud, or, I don't know, a free-floating space settlement. Where are we going to upload it? Into a computer?

Brian:

Then the question is, where's the computer? There is no cloud; it's just somebody else's computer. One of the things I would say about Neuralink: Neuralink is a really interesting one, because their objective of course is to have direct brain-computer interfaces, where they stick electrodes into your brain and poke those little wires down in there to try to touch groups of neurons, or even individual neurons, in your head, and get you to be able to interface with the computer. If you are a paralyzed person, you think to yourself, I want to move my hand; but because you're paralyzed, you can't do it. It doesn't work anymore. But if you think it, and there's a brain-computer interface there, it'll go to the computer; and if you have some sort of exoskeleton or something like that, it'll raise your hand for you and you can use your hand again. Or if you had a broken neck or spinal injury or something like that, perhaps you could bypass it by having other electrodes on the other side of the damage, which would then permit you to just think, and the thoughts would go through a computer and then go back into your body again.

There's lots of really interesting, wonderful therapeutic applications there. But of course the next question is, well, what if I just want to surf the internet and watch YouTube in my mind? And of course these are going to be things that we have to consider going forward. We'll have this fantastic and powerful technology; then the question becomes, how do we use it correctly in order to help people? 

Gretchen:

Which raises questions about the metaverse right now. Do we really want to put on goggles and just live online? The ethics of engaging in the development of this kind of technology is something we've talked about on this podcast: should we even conceive of it? You said that about space earlier: should we even go there? It would be hard to argue against it, though; if someone who's paralyzed could actually walk again because of this technology, who's to say no to them? But how far do we go, and no further? I've often said the Amish say the button goes far enough; the zipper goes too far.

Brian:

It's a fascinating question, right? How far is too far; what's the acceptable amount of technology that we want in our lives? I think the answer is that we want to be able to help people. We need to be able to explore this technology in order to help people get the wonderful benefits that we can get. I mean, it's literally commanded by Jesus: His followers are supposed to help the lame to walk. Jesus says, you will do these things that I have done and more. We really need to think to ourselves, how do we do that? We've been commissioned to perform miracles on behalf of God, and technology is a way that we can perform those miracles.

Then the question is: there's still probably a point that's beyond appropriate when it comes to miracles. For example, just plugging into the metaverse and living in an imaginary world, which is not the world that we're supposed to be living in, doing the other beneficial things like loving God and loving neighbor.

It's really a question of dual-use technologies: the technology can be used for good, or it can be used for bad. And if that's the case, we need to figure out what the bad uses are. We need to regulate them. We need to agree that we're not going to do those things. And it turns out that this is really, really difficult to do in the world right now, because we have competing geopolitical forces. We're in a state of global competition, and we have a long-standing understanding in science, which is that anybody can research anything they want to, and any engineer can build anything that they want to, as long as they can do it.

Now, that's not the case for everything. You're not allowed to build a nuclear reactor in your home; there've been a few prominent examples of people trying to do this, and they get in trouble for it. But in general, that's the sentiment. We're currently in this cultural situation where we have to figure out how to navigate this space, given that we are now dealing with such high stakes and so many technologies out there that can be used for so many different sorts of things.

Gretchen:

That's clearly occupational security for people like you, Brian. The ethicists of the world unite!

Brian:

Yes, exactly. I'm not going to run out of work. 

Gretchen:

As long as there are people around doing stuff with technology. Interestingly, with one of the people I interviewed for my doctorate, I was going down this trail of: if we reach AGI—artificial general intelligence, “strong AI” is what some people call it—then aren't we slaves to electricity? Because, like you said, the cloud is just someone else's computer; well, AI goes down if the grid goes down. And he said, “I'm not worried about that. If we've conquered AGI, we'll probably have little personal nuclear reactors, our own little suns, going.” I thought, you know what? He may not be wrong.

Anyway, along this line, I want to talk about the next book that you've published, called Religious Transhumanism and Its Critics. In it, you claim that transhumanists are hijacking Christianity and replacing God with technology. Ironically, my first guest on this podcast was Micah Redding, a Christian transhumanist, who made a strong case (and he backed it up with Scripture) for why we should embrace it. It kind of goes to what you just said about our mandate to make the world better, to love God and love neighbor. And who's to say that technology isn't part of the miracle spectrum, if we can make the lame walk? Peter and John could do that with the Holy Spirit, and technology is part of our way of helping people, like medical science.

Where, and why, are people like Micah Redding wrong?

Brian:

I am very familiar with  Micah’s work. He does some really, really interesting stuff. I like the fact that he’s pro-technology in terms of looking at Christian history and Scripture and seeing that technology and Christianity over time generally have gone together quite strongly.

However, I differ from him in his optimism about technology, because technology is very, very powerful. It's also very, very dangerous. And of course, if you ask him about this, he'll say, oh, of course, I know that. I mean, we need to think about ethics. We need to have ethics that's proportionately as strong as our technology. But then I think he still spends a little bit too much time emphasizing the optimistic side of things; maybe he's intrinsically just too optimistic or idealistic about the world. 

I also differ from him in the fact that he likes to use the word transhumanism. I don't think transhumanism is a word that we should be using to refer to this. It's been taken over by a group of people who are pretty extreme, and I don't think it's a form of rhetoric that Christians ought to be engaging in. In Religious Transhumanism and Its Critics, my chapter is called “Technological Progress Yes, Transhumanism No.” I'm very much in favor of continuing technological progress, but I'm not in favor of transhumanism, because I think that transhumanism is an ideology, fundamentally. It's not so much about the technology; it's about a way of viewing technology, and it's a fundamentally exploitative understanding of how technology works in the world. And it also has a lot of other associated ideological and even religious aspects to it that I just don't think are compatible with Christianity. 

Gretchen:

On some level it's semantic. I think Micah might argue, we're just taking back the word; but your argument here, if I hear you right, is that it's dangerous to take back a word that's got such connotations in our culture right now. And for the less nuanced among us, the people associated with it are generally not going to be detached from it.

Brian:

Right. I agree completely. And one thing I will say is that I know lots of transhumanists who are good people, so I'm not trying to say bad things about them at all. In general a lot of them are interested in this because they have very personal reasons: they know someone who died, or they have a loved one who has a sickness. They want to find out how to solve these problems and help people. But in a lot of ways, it's just not a word that we can take back. Even though the original word is found in Dante’s Paradiso—which is kind of fascinating to think about—it's just not an appropriate word right now.

Gretchen:

Well, that's part of the joy of doing a podcast like this; you can have Christians who agree on a lot of things, and disagree on some things, and still love each other. You'll know them by their fruit and by unity. 

Brian:

Absolutely.

Gretchen:

You've helped convene some academic discussion groups on AI and human nature, which is kind of what we're talking about right now, involving the Vatican's Pontifical Council for Culture. So talk about the relationship between technology and religion, particularly technology and the Catholic Church, and maybe dig into the monastic roots of the phrase, “There's gotta be an app for that.”

Brian:

Yes, I have been working with the Pontifical Council for Culture. It turns out that the Pope is very interested in AI, of all things, and so of course the whole Vatican is. The reason he's interested in AI, of course, is that it has immense ethical implications, and these ethical implications are important for us to think about.

That's exactly what these groups with the Pontifical Council for Culture are working on. They're having these conversations: how should we be thinking about AI? And not just the simple issues, either. Everyone knows that bias and loss of jobs and all these other things are going to come along with AI, but the Pontifical Council is really thinking about, what does it mean to be human? What should we be thinking about in terms of embodiment, in terms of people who want to upload their minds into a computer? Which is something that could be possible if you start sticking enough wires into people's heads.

So it really becomes a fundamental question of who we are as human beings.

If we can externalize everything about humanity that makes us particular and special and who we are, and put it outside of us, and then that outside object is better at that than we are … What's good about us anymore? That becomes the fundamental question that's going on there.

By the way, I'll just answer that question because it does have an answer: humans love. Humans love, right? This is something that we can do that nothing else can do. We are commanded to love God and neighbor, and that is really what makes us distinct in all of creation. It doesn't matter how much we offload onto machines; it's still going to be love and compassion and caring that makes us different. 

Now I'm going to go one more step, to what you were asking about monks making an app for that. So it turns out, returning to the fall of the Roman empire, these monks were stuck in various strange locations, at the tops of mountains, or on faraway shorelines in Ireland, and places like that; and there weren't that many of them. The traditional thing to do in a farming culture is to make your kids do a lot of the work. They didn't have a bunch of kids around to do the work, so they had to do the work themselves. And so they were constantly thinking to themselves, how do I make this more efficient? How do I make my work easier? Because it's tiring to grind flour by hand.

So they came up with things: well, we have tidal water passing our monastery a couple of times a day. So we'll put a water wheel in the water, and it will spin, and that will turn the grinding wheel for us. And so we see in Ireland the very first example of a tidal-powered water wheel, back in the sixth century, which is a long time ago. There are no other examples of that previously in history, so they invented that, brand new.

Over time they got more and more involved with the development of technology. Whether it was central heating—that was something that monasteries developed because they needed to keep their libraries warm enough so that the books didn't get moldy—or things like metallurgy; there was a lot of work with metallurgy. A lot of work with draining swamps, actually; for example, some of the early work draining the Netherlands and turning that into a country with more land was done by monasteries that said, we need more land to farm. So they started draining that land.

Gretchen:

You know, it sounds like the monks were the original lazy engineers: there's gotta be a better way to do this, and I'm gonna make it. You forgot cheese and booze! I think the monks have paved the way to some really good food and drink options for us as well.

Brian:

You're absolutely right. And that should not be underestimated either, because food technology is actually really important: understanding how food can be preserved and how to prevent it from spoiling. Dom Perignon, for example—Dom Perignon was a real person. He was a priest. Somebody said, Hey, fix this problem. Our bottles keep exploding. So he went out there and he said, well, this is what your problem is, and this is how you fix it. And they said, right; we’re naming the champagne after you. 

Gretchen:

He's probably my favorite monk of all time. 

Going back just a little bit: the concept of the Pope being interested in this from a spiritual perspective, and maybe combating the idea of computational reductionism—that we're not computationally reducible. You can't code love; you can only give it and do it and receive it. I thought that's a really interesting place for the Pontifical Council for Culture to dig in.

You wear a lot of hats, Brian, and one of the many hats you wear is co-chair of the Responsible Use of Technology group at the World Economic Forum, where you have the opportunity, as you say, to influence the influencers. You're saying, I don't have a lot of influence, but I can influence the influencers. These are people from companies who set the agenda—so this goes back to your corporate ethics area of interest. So where's the current balance? Do we need an ethics police or an ethics mafia, or are we simply appealing to people's better angels? What's the balance between carrot and stick here right now?

Brian:

It's mostly carrot, at least from my perspective. But you're right; there actually is a stick, which is that people get hauled in front of Congress, and there are bad headlines, and there are all sorts of other negative things that happen. The technology industry has really gotten hit by a whole bunch of stuff in the last five years. If we back up to 2015, I wasn't yet in my current job as director of technology ethics, but I was working on technology ethics at the Markkula center at Santa Clara. We talked to companies, but there wasn't really a whole lot of interest. People said, eh, the tech industry is fine. Everybody likes what we're doing; it's not a big problem. 

And then, things started happening. Whether it was the Tay chatbot at Microsoft, or the Gender Shades study showing that face recognition software was extremely biased against dark-skinned women, all these things started coming out. And of course everything around the election, and so many other things; a laundry list of things has happened. Eventually, the shine starts wearing off the industry.

To their credit, several companies recognized that this was happening early on and said, “We need to do something about this.” So we worked with them at the World Economic Forum. Our first paper was with Microsoft; Microsoft worked with us to lay out all the things that they're doing to support ethical thinking in product development. So we have that paper. Then we went to IBM, and we have a paper with them that came out in September of last year. It talks about everything that IBM is doing in order to try to promote ethical thinking in their own organization. And of course, by putting these things out there, they're hoping that other people will see what's being done and then adopt it on their own. They're trying to make it easier for other people to see what can be done in order to make better ethical decisions at their companies.

Gretchen:

So like you said though, it’s mostly carrot right now; but that tide is changing, I think. 

Brian:

At some point, if the carrot doesn't work, then the stick comes out. Yes.

Gretchen:

And we see a lot of examples of sticks with GDPR in Europe; they're kind of ahead of the curve there. And I wonder what we'll see coming down the pike. 

Let's switch to another hat: you also have done work with an organization called the Partnership on Artificial Intelligence to Benefit People and Society. It's also known as the Partnership on AI, and it's a nonprofit that organizes its work under several programs, including media integrity; labor and the economy; and what they used to call FAT ML (fairness, accountability, and transparency), though they changed that name for a lot of reasons. You're in the fourth program, which is called Safety Critical AI. I want you to talk about that for a minute, maybe by way of some concrete examples, and tell us how safety-critical AI differs from the other flavors of AI that we might encounter.

Brian:

The interesting thing about Safety Critical AI is that the group was founded with about half the people in it very interested in the AGI problem—in other words, we're going to create superintelligence and it's going to do something terrible because it's misaligned with human intentions. The other half of the people in the group were thinking very much about the present: right now, self-driving cars are crashing and killing people. This kind of split between the people wanting to think about right now and the people wanting to think farther into the future is interesting.

Sometimes people look at that and they say, those are very different, and therefore they shouldn't even be in the same group. I think it's important, actually, that they are in the same group, because they're both varieties of the same problem. They're fundamentally an issue where you've automated something and it doesn't do what you want it to do. It hurts people instead of helping them. When self-driving cars crash, it's because they don't know where they're supposed to be driving. (Of course we say they don't know where they're supposed to be driving, but there's nothing there. It's a machine, it's a computer; it just made a mistake.)

And of course there are other examples. Right now, there are AI assisted clinical decision support systems. So these AI clinical decision support systems—or AICDSS—will watch the surgery happen with visual recognition technology and say, you should be cutting here and not over there. Then the surgeon has to decide whether they trust the machine or not. And of course, if they make a mistake, whether it's following the machine's advice or not, that's going to have an outcome; and of course, there's a strong incentive to just do what the machine tells you. In which case the surgeon is no longer a surgeon; the surgeon has just become a technician following the commands of a machine. That's a big, big shift in the way that medicine would operate.

Gretchen:

This takes us into the thorny bushes of who's responsible. I know that this has been a conversation regarding self-driving vehicles or autonomous agents: who do you blame? Because everyone's going to point the finger. Is it the company? Is it the engineers? The person who's using the autonomous agent? Where are we with that right now?

Brian:

Right now, if you're driving a Tesla down the road and it crashes, it's your fault. The reason they want that to be the case is because Tesla doesn’t want to pay for you doing something terrible and injuring people or getting killed. It defends them from lawsuits; it's a legal strategy.

So you have a big book of things that you are not supposed to do, or known failure modes of the Tesla. For example, as the car is driving down the road, if it sees a big white truck against a big white background—which would be an overcast sky, for example—it might not register that the truck is there, and so you crash right into it. One of the first fatalities in a Tesla was the car plowing right into the side of a truck because it couldn't see the white truck against the white background. It was the driver's fault, because he signed a piece of paper that said, this is a known failure mode, and I will not let my Tesla crash under these circumstances.

Now there are more and more of these sorts of accidents happening. After a while, you have to ask yourself: yes, the person driving the car should be responsible in one sense, because they are the operator of the machine, and it's a dangerous machine, and they should be paying attention to it. On the other hand, if you sell your car as having autopilot, people think it's going to be able to autopilot. That kind of expectation is setting people up to fail. And there really needs to be more work done thinking about exactly where responsibility lies. Yes, it's partly with the operator; but it's also with a company that makes you sign a legal form that says you're at fault, not them, because there could be a dispute about it. If there weren't a chance of there being a dispute, then they wouldn't have you sign that form. Even if they're not legally responsible, there is some sort of moral responsibility there that they are trying to offload from themselves. But you can't just offload a moral responsibility when you create a product that creates these sorts of fatal circumstances.

Gretchen:

Bob Marks, who was on our show in the second of my episodes in this new season, has talked a lot about the levels of autonomy and about seductive semantics: marketing language that causes us to not really understand what we're buying and what we're getting into.

If you had to put on your oracle glasses and go a few years down the line, when these autonomous agents are more and more a part of our lives, where do you think the law is going to go? I mean, you're the ethicist at the party here; do you have any thoughts about this? I'm really interested, because ultimately, we are God-created and we're responsible for our moral decisions. We're trying to build machines that are not just autonomous agents, but moral agents. How do they make a decision? Can that ever be, and should we stop calling it autonomous?

Brian:

There are lots and lots of questions there, for sure. The way I would approach this is to say that responsibility used to be a lot simpler; now it's become vastly more complicated. When you're in this complicated sort of situation, sometimes you can trace it back to individual choices and individual people who are at fault in making certain decisions. But very often, even if you do that, you discover that five different people each made the same mistake or contributed to the same mistake; or they each did something different, and only because all five of them did those different things did the thing go wrong.

You have to start asking yourself: the responsibility is plainly diffused across an organization; how do you hold an organization responsible? You have to figure out the right incentive structure, which is that if they produce a good product, they're rewarded for it; and if they produce a bad product, they have to pay some sort of penalty that disincentivizes them from making future bad choices. Generally in the United States that involves paying money when people have made a mistake, because we honestly don't have another good mechanism for it.

Gretchen:

That's actually one of the biggest sticks out there: loss of revenue, or a lawsuit that rewards the victim. You have no chance of losing your job in the coming decades, as long as you live!

Brian, digital technologies have arguably moved us further into what media scholar Marshall McLuhan referred to as a post-literate society. But I still read a lot; and no one can see this, but you're sitting in front of a lot of books, including your Space Ethics book, which I think is pretty cool. I like to get book recommendations from people I like and admire, and I like and admire you, Brian. What are two or three of the books that have made a big impact on you and or your work, and why would you recommend them to our listeners?

Brian:

There are a couple of books that I would recommend a lot. This is a difficult recommendation for me to make, because if you don't like German philosophy, then you're going to have trouble reading this book. This is a hard book for me to read; I had to read it three times over several different years: it's Hans Jonas's The Imperative of Responsibility. Just on the very first page he says, look, technology has completely changed the way human society is operating, and we need to look at this from an ethical perspective because it's a very big deal. From then on, it goes for 200, 300 pages; I can't remember how much. It's not easy to read, because it is German philosophy; it was written in the 1980s. We should read it and think to ourselves, okay, this is from a different time, but actually he was being extremely prophetic. He saw this as a problem way before anybody else did. Of course, in our minds, all of this is hitting us right now, over and over again, and we're finally figuring out that technology's important. If we had just listened to him 30 years ago, we could have done something about it and maybe done a better job of it.

So that book has been highly influential on me. I've read it several times. One thing that made it easier is that I discovered there's an audio tape of it in the GTU library. I listened to that and I said, ah, now it makes sense.

Gretchen:

Just the fact that you said audio tape in the library. I mean, who does that anymore? You do!

Brian:

That's the thing. I'm sure there are more copies of it out there besides just in the Graduate Theological Union’s library. It's really golden material, and not a lot of people know about it. 

Another good book that I would recommend would actually be a book by a former colleague of mine, Shannon Vallor. She's a professor of philosophy now at the University of Edinburgh; she was at Santa Clara University. She's one of the most prominent philosophers of technology in the whole world, and her book Technology and the Virtues is a fantastic book.

Basically, she looks at technology from a virtue ethics perspective. In other words, what are we doing to ourselves with our technology, and how can we become better people in order to create better technology? She looks at it from the perspective of not just a Western Aristotelian virtue ethics, but also Confucianism and Buddhism, just to try to get a more global perspective.

She really weaves together a book that's been really, really eye-opening, I think, to a lot of people. So I recommend it to anybody who's looking for another philosophy type of book to read. The nice thing about her book is that it's not terribly expensive, and it's not nearly as hard to read as German philosophy; but I've tried to get my engineers to read it, and certainly some people still have a little trouble with it.

Gretchen:

That's cool. So German philosophy and Shannon Vallor. I'm reading that book right now, and her name is spelled V as in Victor, A-L-L-O-R. It's an interesting way of spelling it; but yeah, it's a good one.

Brian:

A very virtuous name. 

Gretchen:

Any others, Brian, or are those two going to keep people busy for a while?

Brian:

There are really so many of them. I think those are the main two for now.

Gretchen:

And you should also read Space Ethics by Brian Green.

Brian:

I like my book also. So thank you for recommending it.

Gretchen:

And Religious Transhumanism and Its Critics, which will be available shortly.

Brian:

I will say that the price point on Religious Transhumanism and Its Critics is really high, so you might want to go to an academic library for that one.

Gretchen:

I was just going to say, a lot of these books are academic, some more than others. 

At the end of every podcast, Brian, I like to give my guests a chance to share something of their own personal vision and vocation. How do you see your mission or call in life? What kind of legacy would you like to leave, both on your field and on the world, maybe before you Neuralink yourself into a free-floating settlement with a Venus-front property and a great view of the moon?

Brian:

First I'll say that I'm probably not going to be doing that, but thank you for the wonderful idea and wonderful image of that. 

Gretchen:

No charge.

Brian:

What I would say is that, fundamentally, I want to give people the tools to make better ethical decisions, particularly having to do with these new technologies that we're getting, because they're getting to be very high stakes. These tools are basically just ways of thinking about things; a lot of them come from the Christian tradition, or from Catholic moral theology. They're historical tools that people have developed over thousands of years, and maybe people have forgotten them. They might not even know that they exist anymore. This really gives us the opportunity to bring up these old tools and say, look, this is applicable to the situation that we're in right now. People have forgotten about this, but now we can think about this again.

And we have pretty clear examples of this. Double-effect reasoning: anytime somebody is thinking about medicine and side effects, they're using double-effect reasoning. They just don't recognize that they're using it. Also things like just war theory. Just war theory was something very specifically Catholic for a long period of time; now it's become something that people in the U.S. government talk about: is this a just war? Is this something that we should actually be doing?

We have these tools in the past. It's just a matter of getting them out there and helping people know what they are and how to use them in particular situations. There's so much now with technology, so much that we really need to be thinking about more deeply. My goal is just to make sure that the people who have to make these decisions have the things that they need in order to make the best decision that they can. 

Gretchen:

I really love that, because we tend to think—even the longer we live—well, that's already been written about. And half the time it's simply not known. How much do you have to dig to find things? And you wouldn't even know where to begin looking to dig into these issues. I'm really glad that you're serving your purpose in your generation, Brian, on the tech ethics front.

Brian:

Every generation has to do it again, is what I would say. Actually, this is a point Hans Jonas makes: the more something has to do with ethics, the less likely there is to be progress in it. That's because if you're doing science or technology, you can build on the past, because you're working with things that have been externalized, and those things are still there. But moral knowledge is something that has to be learned all over again every single generation. We can build political systems or social institutions that try to support or facilitate that cultivation of virtue in people, but even those break down over time. So it's constantly a struggle. Every generation has to build these things over again.

I do think we have the ability to get better at it in the future, but we really need to start now because the situation is dire. I don't think there's any other way to look at it.

But by saying that we're in a dire situation, that's not to say that it's hopeless. There's always hope. God is here to help us. God wants us to have a better future, I would argue. There are two hypotheses for how the future could go: either badly, or well. And there are two ways that we can choose to approach that problem: we can either try to make the better future happen, or we can do nothing. If we do nothing, then of course the bad future is going to happen, because we haven't done anything. If we do something, if we actually try—if we follow the nobler hypothesis—then we will either succeed or fail. If we fail, at least we tried. And if we succeed, then we've done a wonderful thing. We've saved human civilization and we've gone on to a bright future together. 

This is why we are born in this generation. We are here because this is our job. We've got to do it.

Gretchen:

And the call to evangelism is one that resonates there. There are no second generation Christians; each generation has to make its own decision to follow Jesus or not, to do good or not. It's not a predetermined thing; we have agency.

Brian:

I agree. Absolutely. Sometimes I say we need to evangelize like our lives depend on it, because in some sense they do.

Gretchen:

The further technology goes, the more we need it. 

Brian Green, I'm a big fan. Thanks for joining me today and coming on the podcast.

Brian:

Gretchen, this has really been wonderful. Thank you so much.