Transcript for Episode 84

Gretchen Huizinga: So I'm delighted to welcome David Brenner to the podcast today. David is the founder and board chair of AI and Faith, which we'll learn all about shortly. He's also a great friend and colleague.

Previously, David was a risk management attorney who practiced law in both Seattle, Washington and Washington DC, and he's a graduate of Stanford and UC Berkeley Law School.

If I could brag about David for a minute with something that would never appear on his resume: I've never met a person who could inspire, initiate, ignite, and shepherd more efficiently and more effectively than David Brenner. I've said he could be one of those performers who keeps a dozen plates spinning above his head, but then he'd have to join the circus, and unfortunately – or fortunately – he's needed elsewhere.

So, full disclosure, I serve on the board of AI and Faith and I'm a research fellow there. So, if it sounds like I'm a little biased, I am.

David Brenner, welcome to the podcast.

David Brenner: Well, thanks so much Gretchen. And what a kind introduction.

Gretchen Huizinga: Well, let's talk about AI and faith. I'm not sure how many of our audience even know this organization exists. It's a relatively young organization, even though I would say it was formed “BC,” “Before Covid,” and I believe it's the only one of its kind.

So give us a brief history and overview of the organization. What was missing in the AI conversation that you felt needed the voices of the world's great religions to address? And what was the value add in an otherwise dense field of AI and AI ethics organizations?

David Brenner: What was missing, I think, was depth in the ethics conversation.

I came to this conversation with the background you described, as a lawyer in risk management, working increasingly with technology companies through the 2000s and into the teens. And that was a set of business questions around how you move forward with technology without getting snagged by problems.

When I stopped practicing law and was looking for something a little more foundational and something that would relate my faith life as a Christian to problems out in the culture, I realized that the big questions of artificial intelligence were the same as the big questions of faith. Questions of: Who are we vis-a-vis what we create ourselves or experience in nature? Do we have free will and agency to act in the world? What is the meaning and purpose of life and of work? What's our source for truth and for justice? What are our standards of reference?

All those questions were theoretical questions around 2017, when I first realized that artificial intelligence was likely to be a major conversation. It was already starting to be; there were already ethics groups around the conversation. The big question right at that time was robot overlords. But it was all theoretical. You know, it was all HAL in 2001: A Space Odyssey, that level of conversation. Now, here we are, six years later, and it's not theoretical any longer. It's been quite shocking what we've seen in the last six months, and it's on everybody's radar screen.

Gretchen Huizinga: Yeah, so talk a little bit about how you started this. I mean, you see these big questions, you see these big issues, and you've stopped practicing law. What did you do?

David Brenner: I was looking for bridge builders across the increasing polarization, especially in the communities I was involved in, in the religious world. And one day I walked into the church library at the big church on the edge of the University of Washington I was attending. And there was a book propped up on the shelf by the librarian called Our Final Invention. And I thought, "Oh, well, what's this?" It was about the robot overlord question and what would happen if we connected AI to the internet, gave it arms and legs, and let it out into the culture. I thought, "Well, that's silly," based on all my prior experience, which had involved things like web liability and cybersecurity and those sorts of questions. But then I looked at the blurbs on the back of the book and it was Bill Gates and Elon Musk, and I thought, "Well, okay, maybe I need to pay a little more attention."

Out of that, we tried a couple of different programs at that church and at another one right on the edge of the Amazon campus here in Seattle to see what people's reaction would be to the matchup of AI issues and faith questions. It was really positive. And so from that, we started to build a network of professors at the University of Washington who were churchgoing people with an integrated faith-and-science outlook, and people in technology, and faith leaders who were pastoring congregations. From there, from the Pacific Northwest, over the next couple of years, we moved out nationally and then eventually internationally. And now we have a community of 130 experts who cross over from technology to faith leadership and all kinds of professions in between: academics in sociology, psychology, and neuroscience, business people, lawyers like me. So it's a big tent, because the issues in AI reach into all those areas.

Gretchen Huizinga: Right.

Well, you know, and I just would add how I got involved, briefly. And I'm not being the hero of my own story here, but my husband was on the treadmill at the club and he was reading an article in the Seattle Times about this organization. It had been covered in their tech section. And I saw your name and a bunch of other people too. But I didn't initially call you. I had called some other people that I knew tangentially through my church who directed me back to you. They were down in California and it's like, “Oh, you really need to talk to David Brenner.” So I called you up and here we are today.

That's fascinating. Well, David, we've already alluded to this a little bit, but why do you think AI is such a compelling subject right now? From your perspective, how is it different from other major technological innovations and why?

David Brenner: I think it's the way it mirrors back to ourselves. You know, we always want to anthropomorphize things. I used to do it all the time – even before the cartoon cars, you know? I'd look at the front of cars and see a human face in them, right? The headlights, the grill.

And I think that's what happens with AI, the whole robot world. Before, when we used to think about robots, we'd think of physical robots moving about. And then along came Alexa and other chatbot-type assistants, and we realized this is actually a much more virtual thing. That was the stage we went through. And now we're at a point where that's really happening in a way that combines not only programmed responses but seemingly spontaneously generated responses, in ways that are seemingly knowledgeable and also emotional.

We used to joke about, you know, Alexa and relationships with those commercial bots, but now we have this interface that feels so much like us, functionally. And I think that's the fascination.

Also, the ability to organize knowledge and find knowledge in easier and easier ways. Search was great. Wikipedia came along and there we had the world's knowledge sorted like a massive digital Encyclopedia Britannica. Then search exposed us, seemingly, to the total knowledge of the universe, but you had to deal with it one search at a time. And now we have answers coming in organized fashion just by asking questions. It's like, remember Ask Jeeves from about 1996? Now we have Ask Jeeves as a seemingly real person, there, with all the knowledge of the universe behind it. So there's just this mirroring function.

And then of course there's also the fearful side of it, which has gripped a lot of people, especially now because of the speed with which this new GPT technology is improving. And when you see a tech creator like Geoffrey Hinton, often described as the godfather of deep learning and neural networks, leaving his very highly paid job at Google so he's free to talk about the risks involved here, that has to get your attention for sure.

So between the fear and the remarkable functional imitation of ourselves, I think that's what's different here.

Gretchen Huizinga: You know, it's interesting hearing people leaving so they can talk. I know several people in the faith world who've said, “I've left my job so I can talk about faith,” which is a whole other bucket and a whole other podcast.

But speaking of faith, AI and Faith is intentionally multi-faith. And you are a Christian believer. So kind of a multifaceted question here. First of all, I know that we had initially a statement by Brad Smith from, I think, Chapter 11 of his book Tools and Weapons that ostensibly invited voices of the world's great religions to speak into the AI conversation. And yet it's been very, very difficult to get a seat at the table. Why was it important for you to build this organization upon a robustly pluralist scaffolding?

David Brenner: My experience of that came from another organization here in Seattle, the Washington Global Health Alliance, where I co-chaired a committee. They made space for the faith world in that big global health venture. And these were major players in the global health world: the Gates Foundation, the University of Washington's Global Health Department, and eventually World Vision, which was just down the freeway. Gates was not talking to World Vision, even though Gates was the largest foundation in the world and World Vision was the largest faith-oriented relief and development agency in the world.

So from that experience, I could see that if you had a third party of structure and a topic that people could respectfully talk about and had passion around in ways that were related to their value systems, you could really get a lot done.

Coming out of that, when I saw this opportunity with artificial intelligence as a new subject to work shoulder-to-shoulder on, I realized that the same pluralistic approach we'd taken in the global health world – with the major religions represented by these big faith congregations here in the Seattle area: the largest mosque, the largest synagogue, a very large Presbyterian church – could work well here, too, as a pragmatic matter. Because, just like in the global health world, the secular side of this big AI ethics debate will listen better if the major religions are working side by side, cooperatively and respectfully, to make the points on which they agree.

So that was the pragmatic side. But also, it's actually quite interesting to have these conversations. I know you've participated in our research fellows' monthly call. And I've been amazed, as I sit in on those calls, at how much our Jewish and Islamic research fellows agree on a law-based system as a foundation for this ethical conversation. Here you have a 4,000-year-old legally based faith system, the Jewish world, and you've got a roughly 1,400-year-old faith tradition that builds off a part of the Old Testament, Islam, and the sort of structure they share actually overlaps quite a bit, especially given that to some degree they come out of the same sacred text.

So there's a substantive part of it, but the main point of our working together is that the major religions, their faith leaders, and the tech creators who adhere to those religions belong at this table around how we get AI for human flourishing and not destruction. Because their faith traditions are ancient wisdom that's been applied to ongoing developments throughout the last four millennia, and a lot has been learned from that. So how do we apply that to this new problem?

Gretchen Huizinga: Well, interestingly, on that note, you talk about law-based faith structures. And, you know, the Christian faith rests upon the shoulders of the Old Testament, or the Hebrew Bible. So what do you think the value add of the Christian voice is – and I'll use the broad ecumenical "Christian voice," because in our organization we run the gamut from Catholics to Baptists to Quakers to, you know, every variety, including, of course, the Presbyterianism of our founder? In that differentiation between the ecumenical "Christian" and the multifaith, what's the value add of the Christian voice?

David Brenner: You know, I think it relates to building on that Jewish platform of the Old Testament. But as a Christian, I would say it takes that to this other dimension of a personal relationship with God that grows out of grace, where the emphasis is on love and the power of the Holy Spirit to change people's own hearts and minds. It's about a passion that leads to action, through divine power, through the power of the Holy Spirit, to make the changes internally that will give us a better platform to work from personally and a more loving way of engaging with the world.

So it's kind of the best of both, from my perspective. And I love our Jewish brothers and sisters, and our Islamic ones – and the Hindu and Buddhist traditions bring such an interesting additional and different dynamic. But speaking as a Christian…

And this is the beauty of what I think we've brought together here: each one of these faiths and their adherents gets to speak into this channel in a respectful way, but in a full-throated way, so that we don't have to dumb any of this down. So I could answer in the middle of a meeting among all our experts the same way I've just answered you. And there's a wonderful freedom in that, and I think it would be deeply frustrating not to be able to do that.

Gretchen Huizinga: Right. And I would say too, to affirm that, some of the conversations we've had with our multifaith brothers and sisters have been incredibly illuminating in terms of what they think. And so bringing all these ideas into an open marketplace of ideas is, I think, one of the big value adds of AI and faith. And I'm going to stop saying the phrase “value add” as of now.

So David, one of the phrases you've said in the past and even recently, is to pull “the big levers of faith” to gain the voice that we're seeking. And you may have actually just explained that, but maybe not. I'm trying to unpack what you mean by “the big levers of faith.” So what do you mean? And how do we do that?

David Brenner: That's a good question, but to start with, just think about how many people in the world are adherents of one of these major religions. And by focusing on the major religions, we don't mean to exclude other groups either. It's just a question of how much you can manage to hold together in the big tent we're trying to work in. So, you've got three quarters of the world's people adhering to these five religions. If you add in Confucianism and Daoism, you pick up a good deal more. And those people are, as I've said earlier, motivated by their religious faith to a greater or lesser degree. But that's a lot of people on this planet.

Then they have faith leaders whom some of them – many of them – trust. And those leaders can bring a voice on a more concentrated basis.

And then you have many faith-oriented tech creators. So many Hindus, for example, are here in the United States because of our immigration policy around bringing in overseas PhD candidates and students from China, from India, from countries all over the world, and many of them bring their religion with them. And so those people are right at the creation of this technology.

So you've got a massive lever of faith people all over the world who are the consumers of the technology, whom big tech would like to sell their products to and have use them. And then you've got the people within big tech, who have a smaller lever but are right there at the engine. And so how can we pull those two levers effectively?

Then, of course, we're all voters here in the United States and we have a chance to have our representatives focus on this. We have some form of political engagement. At AI and Faith, we're an educational nonprofit, so we aren't lobbying for anything, but clearly there's a need for an engagement at the government level. So, without lobbying, how can we help educate people who are lobbying to understand that within this ethics conversation, there are values on the part of everybody engaged in the conversation: how about we include values that are based on these ancient wisdom systems that are adhered to by so many people?

Gretchen Huizinga: Go a little bit further on how we do that. Because one of the things we find with the secular community is resistance to metaphysical or religious thinking. It doesn't fit the "science box." And for some time, religion and other metaphysical ideas have been excluded because they aren't provable through the scientific method. How do we get into the conversation where we think people are going to listen, those that are pulling the levers close to the engine?

David Brenner: One approach we've been using is through the emerging faith-based employee resource groups within the Fortune 500 companies in America.

So the Diversity, Equity, and Inclusion world has realized that in addition to the original groups – around race, ethnicity, gender, environment, sexual orientation – there are large numbers of employees for whom their faith matters. And of course, employers have to accommodate those faiths too, under our Constitution, in various ways.

So these groups have started forming up as DEI groups, faith-based employee resource groups. And they're pretty much on a pluralist, big-umbrella basis. That's how the companies want them. But within them, there are these passionate individual faith groups working together, just like we're trying to do.

They have a big conference each year in DC, and for the last three years we've participated in that conference and provided panels and workshops on how you move from your faith belief to an ethical position about the work that's right in front of you. So if you're working on a technology product and it's an AI-powered product, what do you think of that? Let's say it's a surveillance tool, or facial recognition software, or an HR algorithm – something controversial that you can easily read about everywhere. Well, is that an okay thing to be working on? And if so, why, and how do you make your peace with that?

And then what about your company's ethics? If you can understand and develop your own personal ethic around that, then relate that to what your company says about why it's okay for them to be doing that, and question it. And come alongside the ethics officers that more and more of these companies are creating, to help and support them in their work, because they're in a tough spot too, trying to, you know, reconcile an ethical approach with the company's profit motive.

So we think there's a very handy lever that didn't exist before and is ready to be pulled, by helping people understand themselves, the position they're in, and the opportunity they have to make a difference.

Gretchen Huizinga: You know, sometimes, it's just a matter of bringing something to someone's attention, like, “I'm just going along because this is what I'm told to do.” And these are the big ethical questions that have plagued us since we were created. You know, “Do I follow God? Do I trust God? Or do I fearfully follow man and seek the approval of man?”

David Brenner: And we're building on a nice, big foundation. So, you know, for the last forty or fifty years, the faith-and-science world – that supposed duality – has been pulled together more and more by groups like BioLogos. And then you have the faith-and-work world, which has also been forming up for the last fifty or so years. Not just being a Christian or a Jew or a Muslim on – would that be – Sunday, Saturday, and Friday, but all week long.

These are questions about integration, right? Integrating faith and science. Totally possible. And integrating your own life, seven days a week, your work and your home and your interests all into a framework that actually fits together.

Gretchen Huizinga: Yeah. And in some ways it's bringing the paths back together in the sense that science and religion were not always separate. In fact, they were traditionally the same, right?

David Brenner: Yeah, right up until the 1700s or even beyond. Adam Smith was a Scottish Enlightenment capitalist and a Christian, right? Isaac Newton – if you leave out the alchemy part – was a solid Christian. Francis Bacon, Galileo… You can go right on back. And so I think this dualism is a false one, but it's certainly been pervasive.

And then technology especially is such an interesting puzzle to me: how you have this fascination with the virtual in big tech and yet a rejection of the spiritual. The model is pretty much the same. You've got material stuff, you've got hardware, and then you have software, and then you can use that to create a metaverse that exists nowhere except in the digital world, right? Well, we have hardware, the material world; we have software, our brains and our emotions; and we have a spiritual dimension, which we in Christianity would call our soul, that pulls all that together. It's the part that lasts; it's eternal. And even there you've got a parallel in big tech: everybody in the singularity movement who wants to find life beyond this material life by preserving their brains and uploading their cognition, their mind, and lasting.

So there's a one-for-one correlation, and yet they call it "virtual" and we call it "spiritual."

Gretchen Huizinga: Yeah. You've just answered, I think, the next question I was going to ask you, which is whether the big stories of the faith traditions are relevant alternatives to the big story of tech. Oddly enough, I would switch that around to some degree and ask: are the big tech narratives relevant to the reality of the spiritual world?

But what you're talking about is narratives of ontology and eschatology, which technology – all the "ologies" – is attempting to tell, creating narratives that address the same deep spiritual questions.

So talk a little bit about how you think these narratives compare and come together? Or don't; maybe they don't.

David Brenner: The tech story is pretty young. The faith story is the oldest story of mankind, right? One of the forces that made me really want to jump into this space was Yuval Harari's books: Sapiens, and then Homo Deus, which came out around 2016 to 2018, right around the time we were forming. He's an Israeli historian who packages up, in a very compelling and readable way, the story of Sapiens, with the backbone and through line being our ability to use language to create self-awareness and then eventually a story for who we are and what we're trying to do. So he's organizing our history by saying we organized as humans in ways no other animal could because we were able to verbalize, to vocalize, our story.

And then, you know, I think that's true, especially when you look at major religions like Judaism and Christianity: it's a 4,000-year-old story. And our story started out oral, like other religions', and then it got written down in the Bible. And we can see, all through the line of manuscripts and transcriptions, that it's lasted this whole time in an incredibly accurate fashion, right?

So here we have a 4,000-year-old story. And one classic way of stating that story is Creation, Fall – and that happens in the first three chapters of Genesis.

Gretchen Huizinga: We blew it early.

David Brenner: And then Redemption, which happens, for Christians, in the gospels of the New Testament. And then Restoration. And so that's a compelling story.

For technology, it's a really important point for us to say to our co-ethicists in this big debate, "Everybody's got a story and everybody's got values they're bringing to this. So let's elevate your story. Let's get those values out on the table and compare them to the values we have."

So Christianity: what are our great values? Well, love the Lord your God with all your heart, soul, mind, and strength, and your neighbor as yourself. Jesus said that's the greatest commandment, right? Which builds on the Shema: "Hear, O Israel: the Lord our God, the Lord is one. And you shall love the Lord your God with all your heart, soul, mind, and strength."

So there's this compelling model of love and this compelling model of who we are. We have a heart, we have a mind, we have the ability to act and we have a soul, right?

So, big tech ethicists: who do you say we are, and why do you say that? Where does that come from? And it'll usually be an Enlightenment, humanist foundation, based on some generalized sense of fairness and of respect for each other – that we're all deserving of that for unarticulated reasons, just because we are. And I think that when you make that comparison, you can see that there are good reasons to talk from both perspectives. And then to ask, "Well, what difference does that make in the kind of products we want to see and in what we would expect people to do with those products?"

So for Christians, for Jews, we've got the Fall, right? Our belief would be: bad things are almost certain to happen, no matter how good the tool. If it's capable of being used for good and bad, it's going to be used for both, right? And we'd like to promote the use of it for good, on whatever common ground we can help humanity come to around what that means.

Gretchen Huizinga: You know, that is an interesting point, in that I think we could find a lot of common ground on the "just becauses." The things that you name are all values that no one would really argue with. The problem comes when you are confronted with disagreement, or personal gain versus personal loss, or whatever, and you have to be empowered to act well, to act wisely, to do good. And so it goes back to what you said at the beginning: what is the power that we have? And if it's just human, I think our history is pretty damning.

David Brenner: Yeah. Well, certainly a lot of people would say, "The last thing we want at the table is sectarian religion, right?" Because there's such a great track record of that throughout the ages, right up until current times. So certainly that problem of the corruption of humanity exists within the religious world every bit as much as it does out in the rest of the world.

So we have to be very careful in how we engage, from a position of equal corruption. We bring to the table some wisdom, but we don't necessarily bring better behavior, other than the possibility of it through this belief system. Institutionally, not necessarily better behavior; but personally, we believe that you can change for the better through the power of belief and, again in the Christian realm, the power of the Holy Spirit working within you.

And then out of that come great things. If you look back in American history, during the Second Great Awakening, from the 1820s to the 1860s, before the Civil War, there were so many great social justice movements, starting with abolition, of course, but also women's empowerment and equality and all kinds of other major movements that helped the poor, helped animals, and helped people live an integrated life.

Gretchen Huizinga: I think one of the things that Christianity brings to the conversation is the acknowledgement of a spiritual evil and of human sin and saying these things are real. One of the guys I interviewed in my dissertation said we have this concept of “the bad actor” and it's always someone else; it's never us.

Well, let's head into a more practical conversation now, and I want to anchor it in the rise of the GPTs, or generative pre-trained transformer models. They came onto the scene, galloping, in November of 2022. They weren't necessarily new, but what I'll call the Holy Trinity of AI – massive data, sophisticated algorithms, and enormous compute power – made them robust enough to draw a lot more attention than previous versions of AI.

So there are a lot of questions arising in this realm, and I just want to ask you some short, punchy questions and see how you respond to them from your perspective as a Christian believer. So the first one I like – you've talked about it before – is: can AIs assume the traditional role of pastor or priest, teacher or leader, and, if so, should we send a bot to seminary?

David Brenner: I think they can functionally behave in some ways that can actually be useful, missionally, for helping people with their initial questions about faith, for example – if they can stay on track. There's a big question for everybody using these GPTs, for whatever purpose, about whether you can have it be a good spokesperson for you rather than a renegade. So there's a big technical issue that many companies are trying to work out so that the GPT can be useful and not a problem.

But let's assume we get past that. Back on that large lever I was talking about: you have people at scale who may want to engage more with a GPT than a human, especially at the beginning of a faith journey or along the way. And so that could be very useful. You have people in countries that –

Gretchen Huizinga: Why do you think that is? That's an interesting statement.

David Brenner: Yeah. I think it's safe. You know, we're getting more and more trained by social media to engage digitally rather than in person. Nobody comes to your door anymore. So just as you would Google and search for those questions, here you have an interface that actually feels more emotional, more like you, and yet you can turn it off. You can go away.

So should we send it to seminary? I think that's what has to happen, right? It has to be trained. For anyone trying to use these GPTs, if you've got a specialized knowledge base that you're going to sit on top of what I think of as the "big box of words," the large language model, you have to decide carefully what goes into your own specialized training set, so that the GPT stays on track and also so that it's accurate, so it has a good, solid base to work from. That's sending a GPT to seminary, if you want it to function like a priest or a spiritual advisor.

Gretchen Huizinga: Right. But then you get back to the breadth of expression in the spiritual community, even among Christians, even among Jews, even among Muslims, and others. You know, everyone's got their own gig. So do we really specialize out and say, "Okay, here's the Catholic GPT, here's the Sunni Islamic GPT…"? You know what I'm saying?

David Brenner: Uh huh. I think that's the best outcome. Because, while I'm not particularly an advocate for diverse religious viewpoints for their own sake, that's the world we live in. And it's a world that reflects basically who we are as humans. All these different cultures have had these faiths originate in different ways over time. As a Christian, I, of course, am attracted to the beliefs of my own religion. And there are some challenging parts of that religion, where Jesus makes exclusive claims about who he is and how one becomes related to the Father.

But leaving that aside, that's the world we live in, and we can navigate it. And I would think that's the best way to use these GPTs, because the bad world is one in which the GPT just makes up a new religion. I mean, for some people that might be the feature and not the bug, because who knows what it's going to come up with. I can imagine many people saying, "Wow, finally we'll get the fully integrated view of what it means to be a religious person."

And of course that's happened all through history. You've got lots of different religions that bubble up and claim to be the new one, or that syncretize across religions. I mean, look at the New Age space – it's got a lot of confluence of different ideas. And individually, a lot of people are putting together parts of different religions into their own customized viewpoint. Well, GPT will do that for you in spades, I'm sure.

But, so we'll have both. And I think the question is, just like now, people need to carefully consider what they believe and why. And the same will be true there, too, except they'll just be more on offer and an agent to help you either stay in an orthodox lane or invent a whole new one.

Gretchen Huizinga: Well, and we'll leave aside for this podcast the whole thorny problem that what social media does is give you more of what you already like. And so, will you ever get confronted with challenges that might strengthen your faith if you only use the Baptist GPT or whatever? But again –

David Brenner: Well, just like, you know, one of the good things I've read about GPT – we have this whole controversy about it writing your term papers and everything else, and plagiarism of sorts. It's not really plagiarizing anything. It's just pulling a whole bunch of stuff together. It might be violating a whole lot of copyrights. We'll find out more about that as it makes its way through the legal system.

Gretchen Huizinga: Oh man. There's your lawyer self coming out. Can’t wait for that.

David Brenner: But this idea of it being a sparring partner, I think, is a really pretty neat approach to it, right? Then it's a tool, then it's this knowledge base that you can try things out against.

And that's true in religion just like it is in writing up a marketing statement, right? You can say, “Well, how would a Buddhist think of this?” Or, “What parts of Catholicism might be really attractive to me as opposed to my Baptist background?” And it can do that for you. It may not be orthodox. Even there, it might not get it quite right. But that's where it's a tool. It's just a first step.

Gretchen Huizinga: Yeah. And I will say that, in my experience within AI and Faith in the many years that I've been involved in the organization, one of the benefits has been to hear other viewpoints, not just within my faith, the Christian expression, but also outside of that. And it's broadened – first of all, it's educated. If I only stay in my bubble, I never hear what the other faiths actually say they believe about how we are saved and how we pray and all these kinds of things. So that's good.

Well, let's move on to another question. One of the things that's sort of buzzing around in AI circles right now, with the GPTs, is this idea of hallucinations and misinformation, and whether a machine that doesn't know or have sentience could lie on purpose. So the big question – can AI lie? – is a good one. And I anchor this in the idea that Satan, in the Christian faith, is a liar and the Father of Lies. So how wouldn't he worm his way into hallucinations and misinformation, even with a “dumb” machine?

David Brenner: It is such an interesting question, because even in the question we attribute to it a sort of personhood that isn't really there, right? And the way it lies is to pick up the lies in its training set, which is the knowledge of humanity and our own way of interacting with each other – everything we've written, spoken, and done.

So that's pervaded by deception just because of that problem of the Fall and of our own defects, our own willingness to lie and tell untruths, whether it's purposeful or because we don't know or understand, or for whatever reason – whole spectrum of intentionality there, innocence all the way through to malevolence. So it's just reflecting all of that.

But it does seem to have this capacity to pull things together where you don't know where it's going to come from. So Kevin Roose, the New York Times reporter, was on The Daily – the New York Times daily podcast – on a Wednesday a few months back, extolling the virtues of this after having a tryout the day before. On Friday he's back in an unscheduled appearance to say, “Wait a minute. This thing tried to get me to leave my wife and join up with it.”

Interestingly, this Sunday there was another article by Kevin Roose about a company called Anthropic, a spinoff founded by a group of former employees of OpenAI, which created the GPT family of large language models – GPT-4 now. These employees left because they were worried about the direction things were going, that there wasn't sufficient oversight and people were just going to blast forward. So Anthropic is supposed to be the safe version. They're supposed to be the ones who are being super careful. Like Google's early motto, “Don't be evil.” So these guys are going to give you the safe AI.

Gretchen Huizinga: Right. But again, it's their safe AI, right? It's their worldview, their ideas.

David Brenner: Right. Exactly. And they've created what they call constitutional AI. This is the first time I've read about this, so I'll have to try to learn more. Basically, you give the bot a set of rules – a combination of rules-based programming at a very high level, a sort of moral framework for the bot to follow. And that's supposed to govern the hallucination and confabulation, cure the AI of all of the “bad stuff in the box,” and get it running on the straight and narrow.
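The mechanism David is gesturing at can be sketched in miniature: an explicit list of rules (the "constitution"), and a pass that critiques each draft answer against those rules and revises it. In the real technique a second model pass does the critiquing; in this illustrative sketch, simple string substitutions stand in for it, and every rule and phrase is made up for the example:

```python
# A minimal, illustrative sketch of the "constitutional" idea: an explicit
# rule set plus a critique-and-revise step over each draft answer.
# The rules and rewrites here are toy stand-ins, not Anthropic's actual ones.

CONSTITUTION = [
    ("do not present speculation as fact",
     lambda t: t.replace("It is certain that", "It may be that")),
    ("do not claim personal feelings",
     lambda t: t.replace("I feel", "The text suggests")),
]

def critique_and_revise(draft):
    """Apply each rule's revision in order; record which rules fired."""
    fired = []
    for rule, revise in CONSTITUTION:
        revised = revise(draft)
        if revised != draft:
            fired.append(rule)   # this rule found something to fix
            draft = revised
    return draft, fired

answer, violations = critique_and_revise("It is certain that I feel this is true.")
```

The design question David raises next – who wrote the constitution? – is exactly the question of who gets to populate that `CONSTITUTION` list.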

But who made the constitution? Right? Where did those rules come from? And so our point is, yeah, well we all need to be thinking in those terms. We need some kind of a moral framework within which these GPTs can operate. And it shouldn't just be what bubbles up from big tech.

Gretchen Huizinga: Well, what you're talking about, I think, in AI circles is called “alignment.” And it's trying to get the machine or the AI to be aligned with “human values.” That's such a broad term. You have to put it in air quotes. What are “human values”?

It's interesting, though, that the company is called Anthropic, which, again, is man-focused. There's no “theopic” in there, and no transcendence, I would imagine, in the rules that it programs into these things. And it sounds more like they're in the sort of “classic buckets” of ethics, the deontological and utilitarian, nothing beyond.

Well, answer this, David. We're calling these things “generative models,” generative pre-trained transformer models, GPTs, which connotes creativity. So my third anchor question for you is: can AI be creative? What are the requirements of creativity and can a machine ever have them?

David Brenner: I think functionally – this was one of the shockers of the last six months – functionally, GPT seems to be able to do things that we would think of as creative.

For example, if you ask it for titles for your program, it'll give you a bunch of interesting titles, right? That's basically what advertising is, and marketing. So suddenly we've gone from the thing that seemed like it was most protected work-wise to now least protected. It’s white-collar workers who need to be concerned, knowledge workers. It can code, self-code, which is one of the reasons people are so fearful about where it could go because it could code itself beyond us.

Entertainment and arts wise, I'm not sure. For the last three or four years, you could see people trying to use AI to generate art. But compare that to a piece of art and all the creativity that really goes into it, where artists are always trying to come up with something new and unique for themselves – this is the exact opposite of that. I remember in the art history classes I took in college, professors would categorically dismiss something as derivative. “Oh, that's derivative,” they'd say, “not original at all.” Well, that's all that GPT does, or DALL-E on the image side. It's taking stuff that currently exists and reassembling it in different ways, right? But then you could also say, well, that's mostly what artists do too, frankly. So, who knows? I mean, creativity is a squishy thing.

But there's no craft in it from a human point of view. And I think the element of craft is what's still especially missing. Because there's something about the way we do things that involves both the mind and a long-term engagement with ideas, and then the ability to translate that into something physical or material: art, a play, a piece of music. And it's the friction, the challenge, that's missing here, when you push a button and it gives you a symphony, right? Nobody wants to listen to it because… Why? There's no humanness to it. There's not something that really makes you say, “Wow, there was genius in that.”

Gretchen Huizinga: Right.

I hate to bring up Pink Floyd – 50 years ago today… I was watching a documentary on how they created Dark Side of the Moon, and it was all analog; every single noise in there was physically mixed.

David Brenner: Anyway, I just read a great book by Simon Winchester. And in this book he asks: what is it about polymaths that fascinates us, these rare people who know so much? Compare that to Wikipedia. Well, Wikipedia knows a lot, but it's not the same as that knowledge being housed in the brain of one person – where it's fleeting, and it deteriorates, and it's all part of the human condition. It's like an extraordinary human, right? Somebody who…

So it's not enough to just have the assemblage and the functionality. There's something about the difficulty of doing things that really matters to us. And none of that's really in this…

Gretchen Huizinga: No. And I guess the bigger question is: if you didn't know an AI did it, would you have the same discernment as if you did?

But some of what you're talking about is the divine spark, the sort of tracing back to the divine intelligence, the imago Dei, the God-breathed nature of humans.

I don't know, we could talk forever about creativity and creation and origin stories and so on, but I want to end with a couple questions that kind of go philosophical, David. And I'll start by asking whether you think the challenges of AI are really new and different, or are they more the same challenges humans have always faced in light of a fallen world but with a brand-new technology or a brand-new and more powerful and more deceptive technology?

David Brenner: I think it's pretty much the same set of problems, but framed in a new way, in a functional way that we've never seen before.

You and I have talked before about the chapter in Isaiah, the Old Testament, Isaiah 44, I think it is, where the writer says “Here you have a log, a piece of wood. Half of it you use to warm your room and your food. The other half you make into an idol. And then you worship the other half of the log.” So that goes back to – what? – about 600 BC.

So here we have the same thing, I think, happening with this digital technology. It's so functional. It’s so much like us, and yet more powerful potentially, that we're going to want to use it for everything we can get out of it, and then we're going to worship it. It's going to run our lives and we'll fall in love with it. So there's nothing new there except for the fact that it's extraordinarily functional. We've never seen anything like this before.

Another comparison is nuclear. I think that's probably the best time to go back and look at and ask what did people of faith think when we got the capacity to destroy the entire world? That was new, right? We never had that before the atomic bomb.

It's interesting because in our newsletter, which comes out each month, this last month we had a piece by Don Howard, who's one of the leading military ethicists in the country – at Notre Dame, a philosopher, been at this for 40 years, edits the Einstein papers. And we asked him to compare Geoffrey Hinton, the godfather of AI who left Google in order to speak out about this risk, with Robert Oppenheimer and his famous statements decrying, to a limited extent, what he'd invented. And Don's take was, it's not on the same plane at all. Nuclear was and is material. It has the ability to wipe out everything. So the risk of robot overlords compared to the risk of a nuclear holocaust is not even comparable, to him. On the other hand, we've never been in this position with respect to the kinds of surveillance and control and other risks.

So it's different from any other technology, I think, that we've experienced to date.

Gretchen Huizinga: So if I could boil it down, it would be scale and scope of technologies that we're talking about.

I wouldn't underestimate the crafty human mind either in terms of what it could do destructively to the planet.

David Brenner: Right. Yeah. It's just another tool and it's early days. People are piling in, though, without a care. And it's very hard to slow this train, despite the levers we're trying to pull, but we sure need to try.

Because we don't have – here's the bottom line for me: we don't have to take what's on offer unless we sit back and passively accept it the same way we've accepted everything else up till now. But this thing is different. Whether or not it's a real threat, a lot of serious creators of this technology believe it is. So let's pause and as it comes along, let's do what we can in our own agency to wisely use it.

Gretchen Huizinga: Yeah.

Well David, as the standard bearer – and I mean that – as the standard bearer of AI and Faith, would you say you're more optimistic or pessimistic about AI's impact on human society, especially as we look back on how you answered that last question?

David Brenner: Well, I think as a Christian who believes that there is an end to this world coming in the New Testament, a good end, I can be optimistic about this technology, just like anything else that's happening in the world. That's the source of Christian hope, ultimately, not the short term or the medium term, or even the long-term developments of these things. Obviously, a lot of bad things have happened over time and will continue to happen. So that shouldn't determine my bottom-line hope. So that makes me, by faith, an optimist.

Then the question, though, is: okay, how do we manage and secure that restorative element? If it was Creation, Fall, Redemption, Restoration, how do we participate in the Restoration here? And I think there are real reasons to engage AI-powered technologies. I mean, there's new knowledge. There's lots of wonderful healthcare solutions that can come from this and real enhancement of people's lives who are currently suffering real deficits, whether it's physical disability or it's lack of food, or it's lack of access to you name it. It's not a nirvana and we shouldn't just let it run.

Another article that you and I have talked about in the last few days is Marc Andreessen's article, “Why AI Will Save the World.” Well, I don't believe that. I think there are better saviors of the world. But I do think there are a lot of positives that can come from this if it's more regulated than Marc Andreessen would like to see it regulated.

So I'm in between. We have to be a good steward of this tool, like every other thing that comes along, and there's a lot of good that can come from it. And if we aren't willing to steward it, then we're like Jesus’ parable of the talents where we just buried the piece of treasure he gave us. And then when he returns, he said, “Well, why didn't you use that, put that to good use?” And then he banishes that person and gives the good stuff to the one who did put it to good use, the extra…

So that's the frame I think we need to approach this with: as Christians, and then alongside our fellow believers, seeking to do the best we can to put boundaries around this technology and channel it into good outcomes.

Gretchen Huizinga: That's amazing. Yeah, just looking at all the parables of Jesus and saying, how would we apply that thinking to AI and what we do with it in the world?

Well, David, as we close, I do want to ask you one final question, and it has to do with the future of AI and Faith. So much has happened in the few years since the organization started. What do you see on the path forward, perhaps even in light of what some technologists are aiming for in artificial general intelligence and/or superintelligence?

David Brenner: Well, I think it's really vital that we partner up with every other effective organization in this space that we can. There are an increasing number of denominations that are looking hard at this problem and have real reach.

So, for example, the Vatican for several years now has had the Rome Call for AI Ethics, which is a partnership with big tech companies and government agencies in Europe. It's broadening out to include Islam and the Chief Rabbi of Israel and others. And the Southern Baptist Convention, which is the largest Protestant denomination in the United States, just adopted an AI resolution at their convention. There are other global groups, like the World Evangelical Alliance, and many other large denominations. And that's just in the Christian space.

We really need to be bringing together sophisticated technologists and sophisticated ethicists and faith leaders to pull these levers as fast as we can and not get bogged down in doctrinal difference or attempts to persuade each other around things that we won't agree on. But on those things we do agree on, how do we love our neighbor and be an effective source for good in the world? Whatever that's founded on, how do we participate in this thing? Because it's an all-hands-on-deck situation right now. And it's moving fast.

And we can't trust the creators, because of all the things we've already talked about. And there are so many giant forces at work here – forces of capitalism, forces of political power. Those giant forces now have a giant tool to wield, and a new one. And it's pretty shiny and seems like it's got endless possibilities. So how will that end well? That's our goal: to do our best and be a good steward.

Gretchen Huizinga: Yeah. Well, that is a good end. David Brenner, I want to thank you for coming on the show today. It's just been delightful.

David Brenner: Vice versa, Gretchen. As usual. We have the greatest conversations.

Gretchen Huizinga: We do.

David Brenner: So thank you for another one.