Transcript for Episode 78

Gretchen Huizinga: I'm delighted to welcome Dr. Shanen Boettcher to the show today. Shanen is a former manager at a little software company in Seattle called Microsoft. He worked in both the Office and Windows groups and ended up in Microsoft Research, bringing innovations and inventions to market. He's now consulting for labs working on AI and large language models.

But my connection to Shanen is that he recently earned a PhD from the University of St. Andrews in Scotland, and his dissertation was on artificial intelligence and spirituality. And that's what we're going to be talking about today. Shanen Boettcher, welcome to the podcast.

Shanen Boettcher: Thank you for having me.

Gretchen Huizinga: So you and I share a similar story in that we both did PhDs after a significant time in the workplace, I'll say later in life, but mine was way later in life than yours was.

But we're both interested in the intersection of AI and religion. So, just as we start off here, how did this particular path unfold for you and what prompted your transition?

Shanen Boettcher: Yeah, it's really a pretty organic and meandering story. It's going to sound more linear than it really was, but it really started for me with my family. My wife and I both had big careers in software, and we got to the point where one of us either needed to take some time away and focus on raising our three boys, or we needed to hire a second nanny to be with them on nights and weekends. The boys were in their preteen years, and I thought it was a great opportunity to spend time with them, being a dad, and focusing on that lead parent role.

And one of the things that was important in that time for me was to teach them about world religions, teach them about all different kinds of perspectives. My wife and I grew up Catholic. We did not raise our kids in the religion, but we thought it was important for them to have a broad perspective.

So first I needed to learn myself. I started looking for a master's degree program where I could learn about world religions and how to teach them, and I found that program at the University of Warwick and did it as a three-year program. Warwick is known in the UK for developing the compulsory curriculum they have there, where all students learn about world religions as they grow up.

And when I finished up that degree, I looked ahead to what was next. My kids were still not grown and I wanted to keep up with them, and this idea of studying and going to school as they did really was a great connection for us. Also, in my search for a master's degree program, I had met Eric Stoddart at St. Andrews and found him to be a brilliant and fascinating person, and I sort of filed that away as an opportunity to engage later, brainstorming and working with him on ideas for a dissertation. And that's what led me to the work that I did here.

Gretchen Huizinga: So Shanen, your dissertation research centers around concepts of religion, spirituality, and artificial intelligence.

So before we get into the weeds, could you briefly, in academic terms, operationalize those terms for us as you define them for use in your study?

Shanen Boettcher: Yeah, this is really important for how I looked at something we'll talk about later: how information becomes knowledge.

For me, spirituality is rooted in the individual. It's about how individuals make meaning of encounters in life and the information that they encounter there. And influences can come from many places: from lived experience, from friends, from family. This is very much a “religious studies” view of spirituality, where there are influences from sociology, anthropology, history and philosophy.

When I talk about religion in my thesis, it's rooted in the idea of a group, where the group is providing the structure. And oftentimes that will come through sacred texts, historical context, systematics, practical theology. And for spirituality and religion, the Venn diagram is almost completely overlapped, and that's the root, or the desired root, of what spirituality is.

Gretchen Huizinga: Okay. So what about defining artificial intelligence?

Shanen Boettcher: Yeah. This is an area too that I think is debated, and so it's important to have a clear vision of what we're talking about. For artificial intelligence, I'm really focused on the idea of a machine appearing to be intelligent to a human. So it's really rooted in this idea of human-machine communication and the human's perception of that machine.

Poole and Mackworth are one source of definition. And basically, if the machine acts intelligently and is seen as being intelligent by the human, it's intelligent in that context.

Another definition that's helpful for me in this study is by Russell and Norvig. And here it's really when a machine is mimicking the way of thinking or communicating that a human would. And so those things together come to bear: if a human is interacting with a machine – and most of my research, the experiments, focused on that exchange between a human and a machine – and the human felt the exchange was intelligent, then it was intelligent, or it was meaningful, for them.

Gretchen Huizinga: Okay, the whole idea of artificial intelligence is kind of founded on that, that human intelligence can be so precisely described and coded that a machine could think and learn and feel like a human. But for the purposes of your study, it didn't matter whether that was true or not, it just mattered if people perceived it as such. Is that fair?

Shanen Boettcher: Right. That's correct. Yeah. It was really, you know, I was looking at the potential for these interactions to have an impact on the way that people made meaning and developed their spirituality. And so, for me, it was really focused on that perception from the human.

Gretchen Huizinga: Okay, that's cool.

Well, the central hypothesis of your study is interesting to me, and that is that AI has the potential to play a significant role – these are your words – in the distribution of religious information and its transformation into religious or spiritual knowledge. So that's a really interesting segue.

So first, how do you differentiate religious information from religious knowledge? What's the transfer there? And then second – I know this is a two-part question and I do this all the time and it's quite annoying sometimes – but first, how do you differentiate the two? Second, why does it matter what role AI plays in the distribution and transformation of these things?

Shanen Boettcher: Yeah, the first question: the difference between information and knowledge, in my definition here, is that information becomes knowledge when a person makes decisions or takes action based on information they receive. And so you could say – there's different ways of looking at this – you could say it's making meaning, translating and putting into action the things that you hear.

So, you know, we're all in a process of filtering, and more and more, right? So much information's coming at us. We need to pick and choose what's meaningful, what we put to use in our lives. And so for me, when we think about religious information – and for the study, it was focused on answers to existential and ontological questions, the big questions, meanings of life and these kinds of things. And here it's about the answers that people received and the information that they received – were they making meaning from it? Were they showing signs of potentially taking action based on the experiences that we had? And those were the clues that we were on the road of information turning into knowledge.

Gretchen Huizinga: Okay. Well, and then I want to stop there for a second before I have you go on and say, “why does it matter?” I kind of feel like I know why it matters. I mean, that's the thing. Anytime we take in information, what do we do with it? You know, it's like, you can't say “I don't know” anymore, right? It's the big thing about – when you find new evidence of something, you're no longer off the hook for not doing something about it.

But, on that note, there's kind of a – I like your Venn diagram thing – there's sort of a middle phase where knowledge moves into what I would call faith and action, or wisdom and action. How would you say that knowledge moves into acting? I think knowledge is actionable, but maybe not enough to put you over into the “I'm going to put my faith in this.” Do you have any differentiation there?

Shanen Boettcher: Well, for me it was really about observed behaviors. And so if people made statements like, “Based on this interaction I just had with the machine, I am going to have a deeper conversation with my family about what we believe and what we practice and what we do,” or “I really need to go back to church and spend some time there and get in touch with this,” or “This has stimulated bigger questions for me and I need to go find the answers to them in some way,” and we'll get into sort of how memories played a role here as well for people, but those were the things that I was looking for.

This study was, you know – there are finite amounts of time that you have with people. And so it's hard to say, “Hey, I witnessed the transformation of data to information to knowledge to wisdom all in that short period of time.” So I was really looking for clues that the information or the data that was exchanged made it through the filter for people and that they felt inspired in some way to take action.

And for me, in this study and in this format, the methodology, that's what I was looking for. And we will talk later about what we could do after this – something longitudinal, where you check back in with people over time to see, “Hey, did you do those things you said you were going to do?” And then, “Did that have an impact on your life? And did it transform into wisdom?” So, I think I was able to witness some of the beginning formative stages of that transformation, and it was sort of out of scope to look at the wisdom end.

But I do believe you're right. I think, especially given a lot of the questions that were asked and answered here, people might act a certain way in their lives, and that might impact their interactions. But you're asking a deeper question, which is: what about their faith long term? Their wisdom, their perspective, throughout their life and beyond.

And so that was something that was out of scope. That's always a convenient thing to say when you're defending your dissertation. But I would say, yeah, I was really looking for an earlier-stage sign. Because there's a real possibility that people would have rejected this and said, “This is just gibberish. This is silly. Why would I talk to a machine about these kinds of things? Why would I look for the answers to these kinds of questions in this way and with this technology?”

And that kind of gets to the second part of your question, which is why would it matter? And I was very open to the idea that it might not, that people might say, this is not how I want to look for my spirituality. This is not something that is compatible with my religion or the way that I want to think about religion. And that was definitely a valid answer in the research findings, if that were to come to bear.

Gretchen Huizinga: Yeah, I love the fact that you note that it's one of those things you can say in a defense: “that was not the scope of my…” And you have to do a doable size of research for your dissertation that can springboard into other things. One of them might have been thinking about a longitudinal study, coming back to these people in 10 years and saying…

But this actually speaks to an interesting role, and I think this is what you're exploring and we'll get into this in a bit, but it's sort of: how could the machine step into the role of pastor or priest or spiritual guide, and maybe expand the scope or scale of the ability to propagate religious information that might become… and then if the AI is effective enough, or efficacious, as they say in religious circles, you could say it's a good thing to use.

Talk for a second about what you referred to as the duality between religious thinkers and technical thinkers and that there's a lot of HCI, Human-Computer Interaction research, but religion's excluded from that stuff.

Shanen Boettcher: Yeah. I think it's probably not excluded by design, but that was sort of the opening for my contribution in this dissertation, I felt. There's a lot written – and I relied a lot on scholars in the space of human-machine communication: things that explore anthropomorphism of machines, that explore how people tend to be deferential towards machines and the internet – but a lot of that content, when they did the research, had to do with news and driving directions and weather and facts on Wikipedia, these kinds of more secular topics. And so they didn't explore what happens when you inject religious information, spiritual information, answers to existential questions, things that are meaning-of-life types of questions for people.

So, there was a duality in that on the theological side – and when I looked at those scholars, there's a lot of exploration and research around how do people form their worldviews? Where does it come from? What are the different voices or influences that affect them and develop them? And in that body of work, technology was often left out, and certainly artificial intelligence was left out. And if it was mentioned, oftentimes it was mentioned in a way that was prescriptive about excluding it, like, “Let's not pay attention to sources that aren't from our officials, that aren't from our normative values, that aren't from our sacred texts.”

And in a sense there was this debate that I explore among the scholars about whether you should exclude all of the new information coming in from technology, versus looking deep into it and having it be a potential source of influence.

Gretchen Huizinga: Yeah. What you're talking about is theological reflection on that side of things. And you reference the four voices of theological reflection. But these are ways in which we conceptualize and internalize our own religious beliefs. So tell us what they are and why they're important reference points for your research.

Shanen Boettcher: Yeah, this is work that came out of a group called Theological Action Research. It's often credited to Helen Cameron – it's “Cameron et al.”; there's a group that works together on this. But they were exploring how churches and organizations create their worldviews and implement them.

And so they outlined four voices. A Normative Voice, they called it, which had to do with sacred texts and official texts from the religion.

They had a voice called the Formal Voice, which is that from religious officials. So this would be rabbis or imams or monks or gurus interpreting those texts or the canon or whatever the rules of the group are. And again, this is all very much group-based or religion-based, in my definition. So you have Normative and Formal, which tend to be the top-down hierarchy within these organizations.

And then there were two that were focused on group dynamics. One they called the Espoused Voice, which is how a group views itself and views its own spirituality. So this would be me thinking about what it means to be a good person, and the members of my religious community – my church or mosque or whatever it is – how do we together interpret the Normative and Formal influences, and what do we believe as a group?

And then the fourth voice, they call it the Operant Voice, which is really the observable practices of the group. And so this is where maybe you could say it's similar to the transition between information and knowledge. It's sort of this transition point where, okay, you've got all of these different voices talking at you, and here's how you actually act.

And so those are the four voices: Normative, Formal, Espoused and Operant. And one way of thinking, you could say, hey, maybe there's another voice that could be represented by technology or AI, where a lot of people are getting lots and lots of information – it tends to be in a secular context, but certainly religious information is accessible in that way, and perhaps in the future it will be pushed to people. That's one way to view it: “Hey, maybe there's a fifth voice and it could be technology or AI.”

I would think that the ARCS group would probably argue that you could basically plant anything in the Espoused and Operant voices. So you could say, “Hey, part of the Operant voice, and what the community thinks about itself or how it sees itself, is influenced by these external forces.” That could be any number of technologies. And they might argue that it would be housed within the Operant voice.

And I didn't so much argue that there's a separate voice, that there should be a fifth voice added into the mix here, but certainly what I observed was that there was the opportunity for influence in this way. And if you just look at where people spend their time, relative to consuming information versus being in a religious context, it most certainly would have an effect, were there to be religious information coming through this channel.

Gretchen Huizinga: I want to go back to something you mentioned earlier about the broad net you laid out for the work that you brought into your research, and it was from sociologists to technologists to futurists and theologians and so on. Why did you cast such a wide net and how did these voices from different disciplines inform the overall direction of your research on AI and religion?

Shanen Boettcher: Yeah, I mean, again, this was more of an organic process than a very thoughtful one. I would read about different concepts and then of course you would see new thinkers injected as you go. And I just kept pulling on those threads and looking and looking and looking. I think as a student, as a researcher – at least for me – I'm always feeling like, “Well, somebody's certainly written about this before, or this has been done,” or “I definitely need to be on top of anything new that's coming out.” And so I sort of geeked out in this area in my dissertation, in terms of really pulling in lots and lots of different thinkers from lots of different disciplines, maybe just because I was searching for someone to have really looked into this or to have had strong opinions one way or another.

And so, you're right, I mean, it was a pretty wide-ranging set of scholars. My literature review is really significant – you could say it was even kind of bloated relative to the rest of the paper. But for me this was a work of passion and curiosity, and so that part of learning from others was really, really important.

And I think what it did is it brought together – it really exposed for me – the different kinds of thinking that you would see from religious studies, which is a wide-ranging discipline anyway, from sociologists to anthropologists to historians and the like, versus the theological side, which tended to be a little bit more – I don't want to say insular, but a little bit more concise – in its thinking, particularly around theological reflection and technology in particular.

And so, for me it was interesting to look at those different influences. It came with a set of assumptions that I was able to bring to the research, where most of the time my assumptions were: “Here's some thinking from the sociological and anthropological side of things that people have found relative to technology. Do these apply when religious information is involved?” “Here's a set of assumptions about theological reflection and how people build a view of their own theology – how they do theology, essentially – and does that hold true when those influences are coming from technology or artificial intelligence?”

And so it kind of set up this duality that we were talking about earlier where it really brought together for me a lot of different ways of thinking and looking at human interaction, and how people do theology.

Gretchen Huizinga: You know, I feel like that's really important, because I wouldn't call it bloated. I think that's funny. But it's not exclusive.

And as you referred to earlier, sometimes people in their various lanes can say, “I don't want to think about what those people think about. We need to stick to our guns and do what we do.” But if you look at people like Jacques Ellul, who was a devout Christian, he actually says the two most important influences in his life were Jesus and Karl Marx. And that's an interesting dichotomy there because a lot of times Christians would say, “Well, Marxism is about godlessness, so we can't even entertain any of the things that Karl Marx wrote about,” and vice versa. So I really like that you've taken that approach, Shanen.

Let's talk about the structure of your study for a second, because that is really interesting, what you did with your participants. Tell us how you interrogated the influence of AI on religious information and knowledge.

Shanen Boettcher: For me, it started with a little bit of that earlier work that I had done in my master's program about how do you teach kids about religion, how do you teach about the big questions that they have. And I started with this list of big questions that most religions address or try to address. So this can be everything from individual personal meaning and happiness: How do you achieve happiness? What are my life-orienting commitments to society and the world we live in? Why is there evil and suffering? What can I do to help others in future generations? Why are humans special? – among other things. Then interpersonal relationships: How do I know right and wrong? How should I treat other people? And then transcendent, sort of other-worldly questions: How did this all come about? Is there a god? Is there something bigger than all of us? What happens when we die?

And so these were at the core of – let's assume there's an interaction between a human and machine: this was the fodder. These were the core questions that we were going to go after together.

And I assembled people from all different backgrounds. So I had five different religious backgrounds: the Abrahamic traditions, Hinduism, Buddhism, and then atheists as well participating in the research. They all went through these questions and had a conversation with different technology entities – a voice assistant, an SMS chat, a web-based chat and internet search – in a quest to find answers. And then after they had those conversations with those entities, we debriefed for an hour or two, depending on the person, and discussed what their experience was like, what they thought, how they felt about the interactions, what thoughts came up for them. And in each of the sessions, against the ten questions – I knew what religious affiliation they had told me they were – about 60 to 70% of the answers were consistent with their religious affiliation. So if you came in as a Christian, six or seven of the answers would be consistent with Christianity, a couple of them would be from different traditions, and one might be from a non-religious perspective. So people had the opportunity to experience answers that would have been consistent with their affiliation and those that were not. And so this was exploring: How does that work? What happens when the machine is telling you something that is in conflict with a belief that you hold?

And so with each of the participants, as I said, we got to see different technology experiences, different ways of presenting the information through AI. How did that feel to them and did it have an influence? Did the devices that they used in the interaction have an influence? Did the voice – like the Normative voice, if there's sacred text involved in the answer – did that have an impact? Did the gender of the voice have an impact for them as well? And then also, how did it work when they got answers that didn't correlate to their own experience?

Gretchen Huizinga: So the structure: you had different content – well, you had a list of content, the core questions – and then you had different medium types, or media, technically. Did you ask the set of questions to each different medium, or was it random, or how did that work?

Shanen Boettcher: Yeah, for that one, it was split evenly among the different experiences. I didn't go through all the questions on all the devices; that just would've taken too long. So, people had the opportunity to ask one or two questions in each of those contexts.

Gretchen Huizinga: I want to position this because it's post-ELIZA, which is one of the earliest AI interaction machines, where it was just basically psychotherapy on a machine, and people had a very personal reaction to that. It wasn't about religion; it was about my problems. But your study is also pre-ChatGPT, which is a huge – and maybe it's not as huge, maybe it isn't really; that'd be an interesting follow-up: does this make a difference? –

But let's talk about what you did find. You went in with assumptions. How did they match up with your findings? Did anything surprise you? What were your big takeaways?

Shanen Boettcher: One thing that was very consistent with the assumptions going in is that people did highly anthropomorphize the experience. And that's been written about – Andrea Guzman and her work, Heidi Campbell and her work. This is pretty consistent in human-machine communication; you'll see people have this tendency to do that. To the point where, if they were interacting with an Amazon Echo device and they were talking to Alexa through this experience, they would do things like, in their way of making meaning, say, “Well, this came from Amazon. This is probably something Jeff Bezos believes. They're very logical over there, so this makes sense.” And this is all in the backdrop of knowing that this was a simulation that I had created, because I was very clear about that in the discussion. And yet, those were things that just naturally occurred for people. If they were talking to the Google Home device, then they had a different kind of perception: “Well, this is coming from Google and Google is like this.”

Gretchen Huizinga: Wow. Wait, stop there for a second because I did not know this, that corporate brand names had an impact on the perception of the information.

Shanen Boettcher: Yeah, absolutely. And also, the celebrity CEOs of those companies had an impact as well. And I think it's just that there was this very, very strong desire to want to make meaning. Almost like a horoscope. You want to try to understand why you got this information, and in particular, if you got an answer that wasn't necessarily consistent with your views, people would be like, “Well, I could see maybe why it gave me this answer, because I've had questions about this area and I'm not totally sure about it.” And so there was a lot of that kind of thinking through the why of what answer they got.

And really one of the more fascinating things in the study for me was to hear how people made meaning for themselves and how they answered these questions on their own. So there's a component, certainly, of their religious beliefs, and then there is this lived part of the spirituality. And so to hear that, and to hear it from Muslims and to hear it from Christians and to hear it from Buddhists and – it is just really interesting to hear how they make meaning and how they employ that in their lives.

And so anthropomorphism – that was a big finding that was kind of expected, but maybe deeper and more nuanced than I anticipated.

And then another big finding, and this was not expected at all, was the triggering of memories that people had by the answers they got. And I think this was very much related to religious information, to spiritual information. Very often people would say something like, “Well, that's something that my grandmother used to say to me all the time,” or “That's something my father always talked about.” And very often, if those people had passed away, it was an emotional experience. And so there was a zone that people got into – these are big questions for sure, but it got very personal and very nostalgic for a number of people.

And it's when they had those memories that people tended to show emotion in these conversations and in the debriefs. And so that for me was a sign that it's a bit cautionary, right? Now we're into a zone where people are very open, they are feeling emotion, they are open to influence. And so this was an interesting finding along the way.

So those two were probably the lead findings, in terms of how people related to the machines and the information. And then there were a number of interesting findings around gendered voices. That was pretty interesting. Most of the existing research here says that people far prefer female voices, and demure female voices in particular. And if you think about all of the default voices across cars and home assistants, whether it's Siri or Alexa or Cortana…

Gretchen Huizinga: Even the names. Even the names are female.

Shanen Boettcher: Yeah. All of these are female, and that's from research that those companies have done and what they've found. And one of our research fellows, Robert Geraci – he's done a lot of work on robotics and human-machine communication. One of his claims is that westerners are more apt to respond well to disembodied experiences, like voices from the cloud, whereas eastern religions tend to be more connected to physical objects, with the idea that everything has a soul or an energy to it. And he would say that possibly a male voice for western religions, or Abrahamic religions, might be more powerful than a female voice, as you think about the Normative and Formal voices that typically come in those religions. And I found that not to be true at all – people did not want to hear male voices coming out of these devices. Nobody in the study, male or female, regardless of their religious background and affiliation, wanted that. It sounded too authoritative. It sounded too bossy. It sounded too dictatorial.

Gretchen Huizinga: How much of that, though, is that chicken-and-egg thing, where you say that these large companies have done research and have put in female voices, and therefore we've become accustomed to female voices in our devices and defaults? And so is there any – well, we can't answer the question, but I raise it – how much of that is, you get fed AI by what you've already done and liked, and so you become accustomed? What do you think?

Shanen Boettcher: Yeah, I think it's a really good question. There was some research that I looked into – Da Costa is a name that comes to mind as far as some work there. And the theory is that it has to do with the fact that we prefer these devices to be our assistants. We prefer them to be subordinate to us. We prefer them to be helpers. And unfortunately, there are gender stereotypes around those kinds of roles and what people expect.

What's really interesting about this point is another finding around deference – deference to information that's coming from these devices. So, on the one hand, we have this idea that there's a preference for these devices to be our helpers and subordinate to us; on the other hand, we have a strong tendency to defer to the information that's coming from them as being true and accurate. And so this is a really interesting dichotomy, right? On the one hand, they're subordinate, and on the other hand, we think they're superordinate. And when we come to religious information, that's particularly important because… I mean, how many times have you been in a conversation where it's like, “Well, who holds this record or that record?” “Oh, let's look it up, and that will break the tie of our conversation,” right?

And so that carried over to answers to questions in this realm as well, in the existential realm where people say, “Well, this must be right. This is coming from something; this is coming from somewhere. This has been thought through. I might just not understand this fully.” And so, again, if you think about the opportunity for influence and, unfortunately, manipulation, there is this tendency for us to want to defer to the information that's coming from AI.

Gretchen Huizinga: Yeah man, Shanen, everything you're saying is raising other questions in my mind, not the least of which is: these devices and platforms and apps are positioned as augmenting, not replacing – as assistants, as collaborators, et cetera – and we put them in female voices because that's what we want to hear. And that speaks volumes about our implicit biases. And… anyway, I don't want to go there because there's a thousand things we could say.

I do want to address something else before we drill into that influence question. You had mentioned that, interestingly, people had a “feeling of privacy” while they were answering these questions, even though it wasn't private – everything they were saying into a machine was going into a giant database, probably training someone else's AI model as they spoke. And I think Joseph Weizenbaum saw this in his ELIZA configuration, where the woman said, “Please leave the room. I'm going to have a private conversation with this computer.” So, what did you find there and what do you make of that?

Shanen Boettcher: It's a really important dynamic, and again, there is a dichotomy. I mean, we all know that information is being tracked as we do anything online, and yet there is this notion that in some ways a conversation with a machine is the most private conversation that you could have.

As it relates to the work here, the thing that really came to the fore was that people felt like, even if there was tracking going on, that the machine wasn't going to judge them. This was important relative to what they would otherwise experience.

And so in particular, people felt more comfortable having conversations about the big questions, about spirituality, about religion, with the machine, because they didn't feel embarrassed to ask a question. They didn't have to face that social uncomfortableness of addressing a person, and the fear of judgment, based on their questions and the discussion. And so there was this intimacy, or this safety, that came out in the discussion. And many of my participants said – because I'm a former product manager, I always listen to all of the different ideas they have for products – “Well, what I really want to do is put this device into,” they'll say, “Buddha mode,” for example, “and just talk to it about Buddhism,” or “put it in a Jesus mode,” or what have you, from a perspective of learning about different views: “Hey, I just wouldn't be comfortable talking to someone of a different religion and asking them these kinds of things.”

And this is something that goes as far back as Sherry Turkle, and then Andrea Guzman, whom I mentioned earlier – this idea of a metaphysical aspect to machines, and in particular a personal device, like a computer, and now, especially, a phone. People feel like this is something they have a connection with. It's like you were saying, an augmentation or an extension of myself. And they felt like it occupied a space for them that was safe, that was without judgment. And again, if you think about the potential for reaching people, for answering some of their deepest questions, for providing guidance to them, or manipulating them, this would be a place of great power.

Gretchen Huizinga: Almost like a confessional booth. Or – I know it was around for a while; I'm not sure it's still up – there was an app where you could go and tell your crime or sin or deep-seated secret, and just get it off your chest to some unknown entity, without having to actually pay for your sins or your crimes in society or to the person you hurt.

So there's a kind of efficacious nature of talking into something that you feel like won't judge you. And I get that. The scariest thing for me is that it isn't private and that this data is collected and that's a whole other podcast.

I also want to say I love this idea that potentially, as your findings are made known, companies might say, “Hey, we can do an application that goes into Buddha mode or Jesus mode,” so I know I'm not going to get somebody else's religious – that's both dangerous and assuring. I don't know.

Shanen, we know that AI is already influencing things like shopping and buying and even political beliefs and ideologies, but your research suggests that AI holds a similar level of influence in the realm of religious information and knowledge. So it's sort of a “duh,” but tell me what you think the implications are for religious leaders and institutions as to how they might use this power for good, not evil. Because to be honest, if I'm a potential cult leader, this is a very interesting finding.

Shanen Boettcher: I think one thing that's clear is that attempts by religious organizations to restrict access to information and technology have largely failed. So I think the perspective has to be: What are we doing? How are our beliefs showing up in this realm? And what experiences could we provide as well? I think these are questions that religious leaders should address and be aware of, if nothing else. In assembling the materials to create the simulation that I did, some faiths were easier than others, I'll say, to put together a set of answers and train a model and all that kind of stuff. And I would say the most important thing is for religious leaders to become aware of what the presence of their faith is online, and my advice would be to be proactive in this space and to provide experiences. We've seen some groups do this, but I think the approach of trying to steer people away from AI in general is going to fail. I mean, we've seen how quickly ChatGPT has come on, and how many hundreds of millions of people are using it so quickly, and they've done some very hard thinking about what types of content they provide and what they don't. So right now you have the avoidance strategy there: if you ask something about religion, if you ask something about even existential questions, you often get the “Hey, I'm just a demo for OpenAI. We're not talking about that kind of stuff.” And so they've avoided it, but that information is in their model, in their large language model. They just happen to be quarantining off different pieces, based on the questions that are coming in.

And so, there is a model that would answer these kinds of questions in different contexts, and I think religious leaders ought to be proactive in understanding how this will unfold for them.

Gretchen Huizinga: Yeah. And that raises the issue of all the different iterations and applications of generative, large language model AI coming from different companies. To go back to the brand name: if you Google it, is it different from what you get when you put it into Bing, or into some generic website – somebody's Buddhist website or Christian website or Jewish website? What gets fed into the model is what comes out, right? And so there's this behind-the-scenes, or under-the-hood, programming – we'll call it programming, but it's basically data that you put in.

And interestingly, Shanen, I've tried a couple of ChatGPT prompts on the Gospel, for example, and got back basically what I would've said. It didn't dodge the question in any way, and it gave a solid Christian answer, but I know that people have said, “I put this in ChatGPT and it wouldn't answer it.” I've never had that experience. Maybe I'm not using it enough or my prompts aren't weird enough, but…

Shanen Boettcher: Maybe, maybe your questions are so good-natured that you don't have a problem.

Gretchen Huizinga: I'm not threatening in a political way. Oh my gosh.

Well, so let's land on the many ways that we can receive religious information. And I would say AI is perhaps, at least lately, the most intriguing, especially in light of those large-language models and other generative AI. But Christians have always believed in the reality and power of divine revelation through direct communication from God, Scripture, traditions, prophecy, the incarnation of Jesus, apostles, et cetera, and finally, of the Church.

Is AI a new way for God to speak to humanity, or is it just an old way with a new tool?

Shanen Boettcher: I think you could argue it either way, in the same way you could say, “Hey, as there are lots and lots of books written, that you could read or not read, there's access to the theological thinking and the ideas in many ways.”

I think what is new is the unstructured way in which people can ask these questions and the limitless potential for that. Like you could spend a lot of time talking to your religious leader, but that's always going to be finite. This is not. And so I think people will tend to get deeper. They will tend to get more intimate. They'll tend to feel safer in these discussions.

And what I would predict is that it's a way for people to explore more than they have before because it feels like it's less conforming to some structure that someone's made, which is a little bit deceptive because… you can even see it if you play with ChatGPT or with the Bing interface now.

Bing is particularly interesting to play with because you get to see the traditional web answers and the new way of AI answering. On the left-hand side, you get the list of links and all of the sourcing, right? And on the right-hand side you just get some text, as a sort of human-readable answer.

And so those things have a different feel to them, right? With one, you can see clearly there's some kind of hierarchy – you kind of understand that there's some ordering being applied to it, there are ads interlaced in it, so you know something's going on there. With the other one, you're less certain where it's coming from. In some ways, it feels more authoritative. And so again, I think you run into this idea of deference. Your agency is a little bit diminished, in a sense, in that you feel like this is the one; this is the answer.

Gretchen Huizinga: Yeah, that's actually very interesting. And it also triggers a thought in my mind about prayer, which is a way that people have gone to God to ask questions before. With AI you can ask those questions and get an immediate answer, kind of like an eight ball, which is a metaphor that's been used before, and that takes, in some ways, the mystery of God out of the equation. I think we've swapped in our idea of needing efficiency and answers and solid information, but God has always held back a lot from humanity. He can't give it all to us. It's like a “you can't handle the truth” kind of thing.

But where does that sort of mystery and prayer and obedience and faith get moved when we put our religious questions to AI instead of God – or, I don't know, maybe it's not instead of God… That gets to the root of the question “Is AI a new way for God to speak to humanity?” Does he have control over that too, or are we suddenly moving into a “now we control all the answers”?

Shanen Boettcher: I think it's a really interesting question. I mean, in a sense, ritual is really important to a lot of the ways that people experience their spirituality. Some of the work that I read talks about how people, when they pray or they meditate, and they are with a sacred book that they have with them, that that adds a significant dimension to the experience for them.

And so reading the same thing on a digital device is not the same experience for people. But where is that coming from? Is it because books have been around for thousands of years and phones have not? Is there a point in time where that experience with a digital device, or information coming from a digital device, is similar? But, yeah…

Gretchen Huizinga: But it does kind of get back to translations of the Bible in the Christian faith too, in terms of, like, some people just say the King James Bible is the only Bible you should read because it's authoritative and so on.

But, real quick before we go on – and we've got to wrap up pretty soon; I just want to talk to you all day, Shanen – was there a favorite or preferred device or medium that you found people responded to more favorably?

Shanen Boettcher: Yeah, this is interesting. I mean, definitely the phone was the preferred device. The phone was something that transcended business or work, personal or impersonal contexts.

In large part, computers were very much seen as a productivity device, a work device, that was a professional space. And so it was a little bit tough for people to use the computer and the computer screen in this context.

The voice assistants were interesting – they were very polarizing. People either really liked them or they didn't. There was a great deal of mistrust about these home devices: the Echo devices, the Google Home devices. And it's interesting because, from a technical perspective, your phone has a microphone, it's always on, it could be… But people don't think about the phone as being as much of a surveillance device as, say, a home assistant, which I found to be quite fascinating.

But yeah, to answer your question, the phone felt like the thing that was always with you. Muslims in my study, for example, felt like, “Hey, this requirement of praying five times a day – this could help in this way. I could take a few minutes, even if I'm at work, and do my prayer in this way. This could be very helpful for me.” Other devices did not seem to fall into that capability.

Gretchen Huizinga: Now, when you say “the phone,” would that be talking to the phone or texting on the phone or searching?

Shanen Boettcher: In my experiment it was texting, but you could easily… And I think for some reason that texting interface, that SMS interface, also felt to people like the most useful – and this was an interesting finding as well – in some ways the simplest interface, the one most commonly associated with people texting you, was the thing that felt most intimate and most useful for people.

Gretchen Huizinga: Okay. That’s fascinating to me. It raises a whole bunch of questions that we do not have time to talk about.

What's next for you, Shanen? You've had an interesting past. This was an interlude; now you have your PhD. Did this study prompt anything that you want to look at next, or is there more research on the horizon, or what are you going to do next?

Shanen Boettcher: Yeah. I'm sort of sorting that out right now. I don't have a specific answer. I do think teaching has been a passion for me as well. I think that's a potential direction.

If I were to extend the research, like I said, maybe longitudinally, taking a look at some of the people again – the space of memories, and how people form memories, even in the space of dementia, really is interesting. I think it would be interesting to look at religious memories and the way that technology interacts with them. Those are a couple of ideas for future projects for me.

But I'm also spending a lot of time with responsible innovation labs and we're creating guidelines for startups, technology startups mainly, on how to develop artificial intelligence responsibly and ways to think about your business models early and the data that you're using early and the biases that could be involved in it.

And so I've got my hands in a few different places as I'm thinking about what's next.

Gretchen Huizinga: Yeah. This seems like a perfect dissertation to bring to the party, as it were. It isn't specific on technical innovation, but it certainly does give a window into influence and meaning making and these kinds of things, which are all important in other AI applications besides religion and spirituality. So, I think this is awesome.

I like to end with book recommendations these days, both for me and for our listeners, because I'm constantly amazed at what I didn't know about and therefore should have read, but didn't. As you alluded to earlier, I started on a rabbit trail; I started pulling threads and realized, “Hey, I should read that person…”

So what are two or three books that have influenced you and your work that you could recommend to our audience and why?

Shanen Boettcher: Sure, sure. Can I do a few more? Does it have to be two or three?

Gretchen Huizinga: No, you can do as many as you want. Everyone says that. It's like, “What's your favorite song?” “I don't have one favorite song.”

Shanen Boettcher: Yeah. I think, relative to the work, specific to the work here: Meredith McGuire and her book called Lived Religion was pretty influential for me, in terms of this view of spirituality rooted in the individual, weaving a fabric of spirituality throughout your life. That's very much her view – a sociologist's view – of how people develop this. And it's also influenced quite a lot by feminist theology: the idea of how you reconcile things that come from what can be a heavily patriarchal organization, and how you make meaning in your own life and implement it in your life. So I think Meredith McGuire, Lived Religion.

The other side of that coin for me in the work was Donald Carson and his book called The Gagging of God, which looks at pluralism and Christianity in particular: how do we think about external influences? Should we just go back to Scripture and look only at Scripture, and all the answers are there? And so he provides a great counterpoint to McGuire.

Moving into the technology space: Andrea Guzman has a book called Human-Machine Communication. This is really core to a lot of the assumptions that I had going in about how people will interact with these machines, and then how do we inject religious information?

We talked a lot about the ARCS group, and so Helen Cameron, Talking About God in Practice – that's the book that she and her colleagues put out about the four voices, and it encapsulates that. I think that's really an interesting view into theological reflection.

And then I'll put in a plug for my advisor as well. Eric Stoddart, he has a new book called The Common Gaze. And Eric, his focus is on surveillance and spirituality in religion. And so he does a beautiful job, I think, of talking about, as we were mentioning, data collection in this world, and how does it relate to Scripture, how does it relate to ideas, ancient ideas that we've had? And at the highest level, the idea of the gaze of God being this very far-reaching, omnipotent surveillance. And so, how do people behave in that context and how do they behave in the digital gaze context?

So we'll end with those.

Gretchen Huizinga: And that ending is interesting because one of the things I've often thought about AI and the companies that make it is they know everything about us and they might use it for ill. God knows everything about us and he loves us. And that's comforting to me.

But Shanen Boettcher, this has been so fascinating. It's prompted a lot of deep thinking on AI and religion for me and I hope our listeners as well. So thank you for joining us today.

Shanen Boettcher: My pleasure. Thanks for having me, Gretchen.