Transcript for Episode 47

Gretchen:

My guest today is Dr. Robert J. Marks, Bob to his friends and fans, often spelled differently as well. He's a Distinguished Professor of Electrical and Computer Engineering at Baylor University, and is a Senior Fellow and Director of the Walter Bradley Center for Natural and Artificial Intelligence.

Bob has such a remarkable resume that if I took time to list all of his accomplishments, I'd have no time to ask him any questions. So I'll direct you to centerforintelligence.org if you want to check out his CV. But in short, Bob is a scientist, a professor, an engineer, an author, an expert in computational intelligence and neural networks, and depending on the day, the amiable or full-figured host of his own podcast Mind Matters. Bob Marks—the notorious RJM—welcome to the podcast. 

Robert:

Thank you. You hit a lot of things! Every time I introduce my podcast, I introduce myself as a new person. The last one said, "I'm your full-figured host, Robert J. Marks." So that's what you picked up. That's hilarious.

Gretchen:

I've only heard those two, which I love. Before we get going, I want to just drill in a little bit on this idea of natural and artificial intelligence. You've named the Center with those two words. This is a question I've been asking: in an age of farm-to-table, non-GMO, back to nature, why are we so excited about artificial intelligence? Why did you pick those two contrasts for your name? 

Robert:

The reason that we chose both natural and artificial intelligence is because we want to explore both. Artificial intelligence purports to imitate natural intelligence; but natural intelligence needs to be explored on its own and compared to AI. One of the things that we have found is that natural intelligence, in many ways, is far superior to artificial intelligence. So not only do we explore artificial intelligence, but one of the things we're looking at right now is the mind-brain problem, for example. 

Is the mind an emergent property of the brain? This question has been debated in philosophy for a long time. But only recently do we have some scientific evidence that the mind is greater than the brain. We're putting together a book—so this is my plug number one, if you will—with the working title Minding the Brain. And we have brought in computer scientists; we have brought in a physicist; we have brought in a cosmologist; we have a brain surgeon; we have neurologists; we have a psychologist; and a number of different people who address this question: whether the mind is the same as the brain.

So this is the outcome of some of the work in so-called natural intelligence. And of course, as things develop, we have to juxtapose it with artificial intelligence and see what can be done in duplicating these remarkable things that humans do.

Gretchen:

And neuroscience isn't a new field. People have been looking at the brain, and its derivatives in machines, for quite a while. What I keep hearing is they're not getting much purchase, as it were. 

Robert:

You should talk to our brain surgeon about neuroscience. He says it's all over the map. 

But one of the things that we're experiencing in scientific development is [questions] such as: are you your brain? Are we computers made of meat?

Well, if we're computers made of meat, then if we divide our brain into the right and left hemispheres, we should become two people. But in fact, this is an operation which is performed by one of our neurosurgeon participants in the Bradley Center—Michael Egnor, just a brilliant neurosurgeon—and he does this. And why does he do this? It's because of epileptics. Sometimes a signal for an epileptic fit originates on one side of the brain and is communicated to the other side of the brain, so they thought that if they split the brain in two, down the hemispheres, that this would disrupt the communication. And indeed it works. Now what happens if you cut the brain into two pieces? Well, what happens is you remain the same person. That's an example of some of the scientific evidence that is accumulating supporting the idea that the mind is more than just the brain.

But anyway, this is an answer to your question: why do we have natural and artificial intelligence? Because we are interested in natural intelligence and how the brain works; in artificial intelligence, we would like to in some way duplicate that. But what does the brain do? We have to understand that from a perspective different from that of computer scientists and engineers.

Gretchen:

Bob, you're not just a Senior Fellow and Director of the Walter Bradley Center, but you're also a co-founder, and were instrumental in defining its purpose and mission—which as I read, challenges both the techno-utopians, who think smart machines will be our savior, and the techno-dystopians, who think machines will inevitably replace us.

When we talked before I referred to this as the "Goldilocks position" on the technophile/technophobe spectrum. Unpack that position for us, and tell us why it's important to get this just right.

Robert:

This is a forecast about the future. I think it was Niels Bohr, the famous quantum physicist, who said forecasting is dangerous, especially if it's about the future. We have to be careful in what we forecast. We have people that are accurate forecasters—I think George Gilder, who is one of the co-founders of Discovery Institute, has a brilliant track record in forecasting the technical future. And then you have people that are terrible at doing this. One of the people that I believe is terrible at doing this is Ray Kurzweil, who wrote The Singularity Is Near, who says that computers are someday going to become a duplicate of the human being, and then continue to write creative software that increases their intelligence again and again and again. It's this never-ending staircase to superiority, and pretty soon we're going to be the pets of this AI. I think that that's irresponsible, indefensible, and was written just for the visibility and the hype value.

Gretchen:

One of AI's big selling points right now is that it can help us transcend human limitations. But you and others have pointed out that AI has its own limitations, and some of them are pretty big.

From the perspective of a computer scientist, what do we need to understand about those limitations to have a realistic view of artificial intelligence? What can't a computer do that humans can, and why does it matter in the debate about AGI, or artificial general intelligence, and what I might call the AI challenge to human exceptionalism? 

Robert:

Let's look at what AI can do first of all. I was challenged by AI when I picked up my first calculator. That calculator could add a heck of a lot faster than I could. I used to use a slide rule—yeah, I'm old enough to remember a slide rule—and that four-banger calculator could just bang out things instantaneously. So all of a sudden, this electronic calculator was surpassing my capability as a human. And we know now, reading the headlines, that computers can do things such as beat us at the world's most difficult board game—which is Go—and beat us at chess.

So yeah, there are some things that it can do better than us. That's true of all technology. But there are certain things that artificial intelligence will never achieve. These would include things like sentience, creativity, understanding. And so far, artificial intelligence doesn't have any common sense. 

Gretchen:

That goes back to your Ray Kurzweil thing: there are people who believe if we just have enough time, enough compute power, enough sophistication, we will get to that point. So you're challenging that view with the Center and some of the work you're doing?

Robert:

Let me elaborate on that a little bit. It's very clear that in this new age, we have more data. We have greater computing power, and computers are doing a lot more interesting things than they ever did before because of the availability of that data. However, there is a fundamental computer science principle called the Church-Turing thesis. The Church-Turing thesis says that anything the super-duper computers of today can do, you could also do on Turing's original machine from the 1930s. Now it would take you billions or trillions of times as long, but that's an interesting thesis. It's named after Alonzo Church and Alan Turing.

Alonzo Church created something called the lambda calculus, which was kind of a computing language. Alan Turing generated the Turing machine, the first general-purpose computer. And they worked together—in fact, Turing worked under Church for a while—and they went back and forth, and they said, you know what? Both of these techniques do the same thing. They have the same capabilities, and that generalized into this idea of the Church-Turing thesis.

The bottom line is that if we can look at Turing's original machine and show, from fundamental science and mathematics, that it can't do something, then that something can't be done on today's bigger computers either.
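To make that concrete, here is a minimal sketch of a Turing machine simulator in Python. Everything in it (the rule format, the state names, the binary-increment example) is an illustrative assumption rather than anything from the episode, but it shows the kind of machine Turing described: a head reading and writing symbols on a tape, driven by a finite table of rules. The thesis's claim is that, given enough time and tape, a table like this can carry out any computation a modern machine can.

```python
# A minimal one-tape Turing machine simulator. The rules below increment a
# binary number by one; states and symbols are invented for illustration.

def run_turing_machine(rules, tape_str, state, blank="_", max_steps=10_000):
    """Run the machine until it reaches the 'halt' state; return the tape."""
    tape = dict(enumerate(tape_str))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# (state, symbol read) -> (next state, symbol to write, head movement)
increment_rules = {
    ("seek_end", "0"): ("seek_end", "0", +1),  # scan right to the last digit
    ("seek_end", "1"): ("seek_end", "1", +1),
    ("seek_end", "_"): ("carry",    "_", -1),  # past the end: start the carry
    ("carry",    "1"): ("carry",    "0", -1),  # 1 plus carry is 0, carry on
    ("carry",    "0"): ("halt",     "1",  0),  # 0 plus carry is 1, done
    ("carry",    "_"): ("halt",     "1",  0),  # ran off the left: prepend a 1
}

print(run_turing_machine(increment_rules, "1011", "seek_end"))  # prints 1100
```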

Gretchen:

And potentially not even on the future's bigger computers?

Robert:

On the future computers. That's correct. If they are built on the same principle that today's computers are. 

Gretchen:

I think it's important right now to actually interject that artificial intelligence is a computer. 

Robert:

Yeah, that's very important because anything a computer can't do, AI can't do. 

Gretchen:

No matter how many wands we wave in front of it.

I want to mention that the Walter Bradley Center is part of the Discovery Institute, which has been at the forefront of scholarly research on intelligent design and challenging the hegemony of materialism and evolutionary theory that constitutes the prevailing worldview in academia.

Talk a little bit about the research that's going on in your particular branch of the Institute, and how you are tackling the fundamental difference between natural and artificial intelligence. 

Robert:

I first became interested in this maybe ten years ago. I've been involved with artificial intelligence for a long time, many decades. I started to become aware of artificial intelligence in terms of evolutionary programs being purported to support Darwinian evolution.

The people who were proponents of Darwinian evolution became excited when computers were invented, because they said, we can't go to the lab and reproduce this work, because it would take too long to do any evolution in the lab. But they were excited about the idea of taking this algorithm, placing it on a computer, and performing this evolution in an accelerated fashion. And they did this; they came up with a bunch of different algorithms to do it. 

Fundamentally, Darwinian evolution has three steps: one is mutation; another one is survival of the fittest; and another one is repopulation. If you repeat those—mutation, survival of the fittest, repopulation—again and again, and again, the idea is that you will develop super-duper things. There were some proponents of Darwinian evolution that wrote programs such as Avida. Avida is a really, really big one. It was big in the so-called Dover trial, when intelligent design was put on trial, and they had proponents of Avida testifying. 
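For a sense of what those three steps look like in code, here is a minimal evolutionary loop in Python, in the spirit of Dawkins' well-known "weasel" program rather than Avida; the target string, population size, and mutation rate are invented for illustration. Notice that the fitness function is handed the target up front, which previews the point about programmer-supplied information that follows.

```python
# A minimal mutation / selection / repopulation loop (illustrative only).
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    """Survival score: how many characters already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Step 1, mutation: each character has a small chance of changing."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

# Start from a random population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(1_000):
    # Step 2, survival of the fittest: keep only the best ten strings.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    if fitness(survivors[0]) == len(TARGET):
        break
    # Step 3, repopulation: refill the population with mutated survivors.
    population = [mutate(random.choice(survivors)) for _ in range(100)]

print(generation, survivors[0])
```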

When I came to Baylor in 2003 or so, I met William Dembski, who was also at Baylor, and we were in this apologetics group together. I was into evolutionary computing, which is a legitimate side of electrical engineering, and we started meeting together and began to resonate. One of the things that we both agreed on was that evolution was unable to create information. A computer program can create information no more than an iPod can create music. There just isn't the creativity; it is a tool for the programmer to do something.

If that is the case, then these evolutionary programs that were purporting to generate information ex nihilo, from nothing, were wrong. So we developed a theory called active information, which showed that the information placed into these computer programs, which predisposed them to get to the solution they wanted to get to, was infused into those programs by the programmer. In fact, we not only argued that philosophically, we did it mathematically. We can literally measure the degree of information that is added to an evolutionary computer program that allows it to work. We published a number of papers on this active information idea, and it is something which has caught on. Our paper has been referenced decently, picked up by some other people. I think the big success is that we've seen no one recently, at least in the last five or ten years, coming out and saying, "I have a computer program that can be creative and generate evolution." It's because of that work. So I'm pretty excited and proud about that.
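For readers who want the flavor of the measurement, the published definition is roughly this: if a blind search finds the target with probability p, and the programmer-assisted search finds it with probability q, the active information is log2(q/p) bits. A few lines of Python make the bookkeeping explicit; the numbers below are invented for illustration.

```python
import math

def active_information(p_blind, q_assisted):
    """Active information in bits: log2(q/p), the improvement the
    programmer's added structure buys over blind search."""
    return math.log2(q_assisted / p_blind)

# Invented example: blind search succeeds one time in a billion, while the
# "assisted" program succeeds half the time. The gap between the two is
# information the programmer supplied, not information created ex nihilo.
print(active_information(1e-9, 0.5))  # roughly 28.9 bits
```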

Gretchen:

This seems to be worldview driven as well, which brings us nicely into what I want to talk about next: the namesake of the institution, Walter Bradley. I want you to talk about him just for a minute, because I know you've said he's your hero. I want to clarify one thing: Dr. Bradley is alive and well. 

Robert:

He's alive and well. 

Gretchen:

It's not a posthumously named center. And he's still contributing to the field in many ways. Why is he your hero? What qualities does he possess, both as a scientist and a man of faith, that led you and others to want to emulate him and go so far as to name a center after him?

Robert:

The center was named after him because of the admiration that both William Dembski and I had for Walter Bradley. We were so enamored with Walter Bradley as our hero that we recently wrote a biography of him. So this is my second book plug. The name of the book is For a Greater Purpose, and that's the way Walter Bradley has led his life. He's my hero because, like me, he's an engineer. He has also done incredible work in the area of Christian ministry. He was very active with Cru. I first met him when he was at Texas A&M, and he joined me at the University of Washington, my old digs.

Gretchen:

And mine currently. 

Robert:

Oh, that's right, you are there! I was enamored by him because he was a proponent of academicians living out loud as Christians. This was something that he promoted. He eventually moved to Baylor. I'm at Baylor today because Walter Bradley said, "Baylor, you should hire this full-figured guy." I'm here because of that.

The number of lives that this man has impacted is incredible. If you read some of the endorsements of the book, these are people that have interacted with Walter Bradley and done great things in their own right. William Lane Craig, who is one of the greatest Christian apologists and philosophers alive today, said of Walter Bradley, "This is the most remarkable man I ever met." We also have endorsements from other people whose lives Walter has touched, including Doug Axe, Hugh Ross, J. P. Moreland, Jay Richards, and a number of other people.

So this is his ministry, this is his life. In the area of engineering and his technical contributions, he co-wrote a classic book in 1984—The Mystery of Life's Origin, with co-authors Thaxton and Olsen. There, he looked at the implausibility that life was first created by chance. Now this is not an evolutionary argument—he's not talking about evolution, he's talking about the origin of life and when life first appeared on earth. How did that happen? One of the things the Bradley Center did was to reissue this. We reissued it a year or so ago with updated chapters by Steve Meyer, Jim Tour, Guillermo Gonzalez and others, and they demonstrated that Walter's arguments—along with Thaxton and Olsen's—in the original 1984 book were still very solid.

So this is one of the components that Walter generated for apologetics. Now, he never talked about faith until the last chapter. In the last chapter, he says, well, how did life get here? Well, maybe it was planted by aliens. Maybe we're simulated. And he did mention at the end of the chapter that maybe it was God Almighty who created life. So he introduced the theistic argument also, but the rest of the book was solid science. And for those people of faith—Christians, Jews, et cetera—if what they believe is true, then what they believe should stand up to the scrutiny of science. That's what Walter was showing.

Gretchen:

It's not a new argument either. I mean, that's what the entire world believed until Darwin et al. If people want to know a little bit more about that, they can read that book. But there's also a bit of an homage to Walter Bradley on the website, so they can read more about why Walter is your hero if they go to the Walter Bradley Center's centerforintelligence.org. And I loved that piece. 

Now let's talk about the science of AI a bit. I want to double-click—I hate myself for saying that word—on this concept of computational intelligence. It's one of your areas of expertise. In fact, you wrote a book on it, and the subtitle is "Imitating Life." We've talked a little bit about this, but tell us a bit more about what computational intelligence is and how it's different from other flavors of artificial intelligence. 

Robert:

That's an excellent question. It has to be placed in a historical context. There was artificial intelligence in the 1960s. What was artificial intelligence in the 1960s? It was an area which was championed by people like Marvin Minsky and Seymour Papert at MIT. They were trying to write AI based on expert systems.

Now what's an expert system? That would be somebody that would come to an expert and try to tease out of them the rules that they used in order to accomplish whatever they accomplish. So you might go, for example, to a person that traded the stock market. And they would say, "Well, if the S&P goes up three points, and the Dow goes down a point, and the NASDAQ goes up three points, I would buy Apple," or something like that. They would try to copy down all of these rules from these experts and reduce them to code.

Along came the connectionists. The connectionists were people like Bernie Widrow at Stanford and Frank Rosenblatt at Cornell. They came along and they said, "You know what? We could probably model this as neural networks, as all of these nodes which are connected together." And then, as often happens in academia, they came into conflict. It ended up with Minsky and Papert writing this book called Perceptrons, which totally derailed artificial intelligence; and they got hit by their own ricochet, because it also ended their work in artificial intelligence. So when the term computational intelligence was created, there was a stigma. In the resurrection of neural networks, which happened maybe a decade after the Minsky-Papert book, there needed to be this separation from artificial intelligence, which was always recognized as a rule-based system.

[When] we got together, I was in a leadership position in the IEEE. IEEE, by the way, is the world's largest professional society—the Institute of Electrical and Electronics Engineers—it has over 400,000 members, so that's pretty big. So I was in a leadership role of the arm of that dealing with neural networks. We wanted to come up with a name that differentiated us from artificial intelligence, which at the time was associated with Minsky and Papert. In a back-and-forth email exchange between the leadership people, we came up with the idea of computational intelligence as being kind of a separation. Now that was then, but this is now, and there has been—if you'll excuse my phraseology—an evolution in the definition of the name. Today you hear terms thrown around like artificial intelligence, computational intelligence, machine intelligence. I think artificial intelligence dominates; but all of them basically mean the same thing today. At least in the media they do.

Part of the area of computational intelligence is an area a lot of people aren't knowledgeable about, which is called fuzzy logic. Fuzzy logic comes from the way that humans communicate. If I'm telling you to back up the car, I will say, "Back, back, faster, fast, slow, slow, slow down, slow down, stop." I'm communicating no numbers to you; I'm communicating to you in vague, fuzzy terms. What fuzzy logic allows you to do is to take those terms and translate them directly into computer code. This was pioneered by a guy at Berkeley named Lotfi Zadeh, who passed away just a few years ago. It made an incredible impact and was a way that expert systems could work, because it took this biological, wherever-it-came-from communication of fuzzy linguistics and allowed computer code to be written around this fuzzy communication.
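Here is a minimal sketch in Python of what that translation can look like for the back-up-the-car example. The membership functions, thresholds, and speeds are invented for illustration; real fuzzy controllers are carefully tuned, but the shape of the idea is the same: vague words like "close" and "far" become overlapping membership functions, and the rules written over those words blend into one crisp command.

```python
# Fuzzy terms as membership functions (all numbers invented for illustration).

def close(distance_m):
    """Degree to which the wall is 'close': 1 at 0 m, fading to 0 by 2 m."""
    return max(0.0, min(1.0, (2.0 - distance_m) / 2.0))

def far(distance_m):
    """Degree to which the wall is 'far': 0 below 1 m, saturating at 4 m."""
    return max(0.0, min(1.0, (distance_m - 1.0) / 3.0))

def backup_speed(distance_m):
    """Two fuzzy rules, blended by weighted average (Sugeno style):
    IF close THEN crawl at 0.1 m/s; IF far THEN back up at 1.0 m/s."""
    w_close, w_far = close(distance_m), far(distance_m)
    if w_close + w_far == 0.0:
        return 0.0  # no rule fires; stay put
    return (w_close * 0.1 + w_far * 1.0) / (w_close + w_far)

for d in (0.3, 1.5, 3.0, 5.0):
    print(f"wall at {d} m -> back up at {backup_speed(d):.2f} m/s")
```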

Gretchen:

That is a hard problem as far as I'm concerned—how do you make concrete the very abstract that humans seem to have no problem with, that machines have more of a problem with? 

Robert:

One of the things I talked about was the inability of artificial intelligence to have common sense thus far. There are things that we can do in terms of common sense that computers have a rough, rough time doing.

There's something called the Winograd schema, which AI has failed to conquer over many years. A Winograd schema is, for example, "I can't cut down that tree with this axe because it's too small." The question is, is the axe too small, or is the tree too small? We know immediately it's the axe that's too small, but a computer would have a difficult time figuring that out.

Gretchen:

You know, as an ex-English teacher, this is the problem with misplaced modifiers. It's a very similar way, just in language, to confuse people.

Robert:

I call them vague pronouns, at least in the Winograd schema. But there are other ones that don't require pronouns, and these are called "flubbed headlines." For example, "Milk drinkers return to powder." Now, if you think about that, you know that there's a funny interpretation of that, and you know that there's the true interpretation, what the writer of that headline meant. We know that immediately, and computers would have a difficult time doing that. They don't have that common sense.

Gretchen:

Bob, one question I keep asking people who work on AI is: what's hope and what's hype? And I keep asking because the chains keep moving with technological breakthroughs. In many cases, what was hype is now hope; but there are still, as we've talked about, many grand challenges, maybe some insurmountable challenges. What's your take on hope and hype in AI, based on what you've called "seductive semantics" and "seductive optics," and how do they impact our view of AI?

Robert:

We are, through social media and even so-called news media, seduced into clicking on web pages. And how do they get us to click on these web pages? They use what I refer to as "seductive semantics." They want something there that's sexy, that makes you want to click on that. And of course, if you click on that, you go to the webpage where there's lots of ads, and whoever is writing the article gets paid according to how many times those ads are visited. They're trying to make money. So one of the sources of this is clickbait.

Another one I maintain is uninformed journalists. I think there's a lot of journalists who think they know what AI is, but when they write about AI, they have no clue. 

The other one is promotion. Everybody wants to give their product or whatever they're doing a brand. Even as professors we're guilty of doing that. 

Gretchen:

Oh yeah. 

Robert:

We are supposed to go out there and drum up support and get money from the government and industry to support what we do. So those are the motivations for some of the hype, I think, in general. 

Now what is "seductive semantics"? Especially with AI, we use terms all the time that aren't defined. There will be a headline that says "New AI is Self-Aware." And the word "self-aware" is used without definition.

Now, my car, when I back it up, is aware of its environment. If I get too close to a wall as I'm backing up, it starts beeping at me. Does that make my car self-aware? No; I think self-aware[ness], if you examine it closely, is like Descartes: that you're aware that you are. "I think therefore I am." I mean, that's what self-awareness is; but I think you can see in seductive semantics, self-aware could be used that way. 

Therefore it's always important to define terms before you use them in a hyped up story. That's seductive semantics. 

Gretchen:

What about seductive optics? 

Robert:

Seductive optics is when, for example, AI is put in a package which enhances and amplifies its view of being human-like. An example of that is the robot Sophia. She has been on late night talk shows and other places, and she was even granted full citizenship in the country of Saudi Arabia, believe it or not.

Gretchen:

Oh my gosh. 

Robert:

The interesting thing is that they have this lady, she can make facial expressions, her lips are synced with her words; but really, what she is doing has little to do with the AI. She is simply a package for the AI, and she is a package that gives the impression that the AI is more human than it is. And so that would be an example of seductive optics.

Gretchen:

Bob, the last time we talked, and this is kind of related to this question, you gave me a word that I was unaware of. You told me about claquing. It's a French word, and it refers to a paid cheering section for French operas. I dug around a little bit on that word and found out that not only did they get paid to cheer, they actually figured out how much power they had and said, you have to also pay us not to boo.

Robert:

That's right.

Gretchen:

I thought, I've missed my calling in life! 

Do you feel like there's an AI equivalent of a cheering section to the claquers of the French opera here? 

Robert:

I think certainly we have individuals that are claquers. These are people that promote AI, and promote it inappropriately. But even today we have the claquers, not [just] in AI. We have people that are promoters. We have people that are publicists. We have people that are producers. All of these people are basically claquers. They're jumping up and down for a client and saying, "Hey, look at this client!" I think that any time that we have that, we are experiencing this idea of 19th century claquing.

Gretchen:

In my previous podcast, I interviewed a lot of people about the latest developments in computer science and AI, and I started each interview with, "What gets you up in the morning?" And a bit later I asked the bookend question, "What keeps you up at night?"

I want to ask you that question, Bob, because you wrote a book called The Case for Killer Robots. I figured something must be keeping you up at night in order to write a book with that title. So make the case for our listeners: why do we need killer robots, also known as autonomous weapons, when so many people are saying that's one area AI should not go? 

Robert:

I think that's a terrible mistake. We only need to look at history to see the foolishness in not developing everything that we can technologically. The reason for this is because technical superiority in nations wins hot conflict. And even in the absence of hot conflict, it gives pause to people who would do us harm.

I think this is true in the United States. There is, for example, a "Stop Killer Robots" movement. These are people trying to diminish the development of autonomous killer robots as they call them. They put out this scary video, and the scary video has been picked up by Hollywood in the movie Angel Has Fallen, where in the beginning there is an attack of swarm drones. Swarm drones are a manifestation of these “killer robots.” 

I tell you, swarm drones used to really bother me, because if you have a swarm of say, a thousand drones which are attacking you, you can take out 90% of them and that drone swarm can still complete its mission. That to me is very scary. Swarms are very resilient to attack. So that gave me pause for a long time; but the beauty of the technology in the United States, as long as it's supported, is we can come up with ways of countering that. 

How do you take out a swarm of drones? Israel came up with the idea of shooting a laser and taking out drones one at a time. Well, that would just take too long. Then Russia has come out with an EMP cannon. We're all familiar with EMP: it's where an EMP explosion goes off. 

Gretchen:

I'm not sure that we are all familiar with that.

Robert:

EMP stands for electromagnetic pulse. If they put up a thermonuclear bomb sufficiently high above the United States, above Kansas, it could disable our entire power grid. EMPs fry your electronics. Why is that? Because if you think of your cell phone, what happens to the cell phone? The cell phone receives these very, very small microwave signals and turns them into audio and video and everything of that sort. It takes microwave signals from the air, converts them to electricity, and does its magic.

Imagine, though, that that signal is increased a billion times. That energy, which is going to be introduced to the little wires in your cell phone, is going to be so large that it literally fries your cell phone. That's the problem with EMPs. So EMP has given me pause, especially with China's recent launch of this hypersonic missile. That means that with a push of a button, they could take out our entire power grid.

But getting back to swarms and our resilience towards swarms: Russia came out with this EMP cannon, which is a big antenna that generates an EMP pulse. Imagine this EMP pulse going into an attacking swarm. It would be like spraying bug spray into a swarm of gnats that is attacking you, and all the gnats would fall down. This swarm idea bugged me for a long time until I heard of Russia's solution. All of a sudden I feel a lot better, and I hope that the United States is pursuing a similar sort of technology.

This is the problem with the arms race. People come up with different devices, including AI, but then there need to be countermeasures. This is terrible. I wish it didn't exist, but it has existed throughout history, and will exist forever because of man's fallen nature. That's something that is unfortunate, but it needs to be pursued. That's the reason that I support the development of so-called "killer robots."

One last thing: there are autonomous killer robots available. Israel has something called the Harpy missile that can fly totally autonomous missions. They launch this Harpy, and it goes over a battlefield, and it loiters. It kind of flies around and waits until it's illuminated by radar; and when it's illuminated by radar, it follows the radar beam back to the installation and, as a kamikaze act, takes out the radar installation. All of this can be done autonomously, without human intervention. Make no mistake: these autonomous robots are already among us.

Gretchen:

I think that goes back to our worldview point as well: the people that are saying we shouldn't be making them are not factoring in that other people are. And again, with the fallen nature—we're not going to change human nature, so this whole defense system is necessary. 

Robert:

For example, even with the best treaties in the world, do you think that the North Korean dictator is going to follow a nuclear arms treaty? No. They're not going to. What about the dictator in Syria? Is he going to follow a biological weapons ban? No, he's not. No matter what we do in terms of treaties, it isn't going to work.

And many of these people, including Iran, are developing these weapons behind the scenes, covertly. It's important we stay on top of it. 

Gretchen:

As you know, I'm also part of an organization called AI and Faith. We've mostly been talking about AI, but I'd like to focus for a minute on the faith side of things. The prevailing worldview in high-tech is notably skeptical, if not openly hostile, to the Christian faith. I know there are many Christians working in AI, and they care deeply about AI from an ethical perspective.

You've talked a bit about the different ends of the ethical discussion. How should Christians inform both design ethics and end-user ethics? And can we even?

Robert:

I think you divided it well. There is design ethics—this is for the engineer. I think that design ethics is pretty well agnostic. The design engineer is supposed to develop AI. The question is not whether it's autonomous or not; the question is whether this AI, autonomous or not, does what it was designed to do, and no more.

That's design in general. You want whatever you designed to do what you designed it to do, and no more. This takes a lot of domain expertise; it takes testing, because you have to take this AI and you have to test it; and that testing must be done with a degree of design expertise. Once your product is done, you can go to your customer and say, "I have developed AI, and this AI does what you asked it to do, and no more." That's design ethics. 

This is in contrast with end-user ethics. End-user [ethics] would be, for example, a commander in the field who sees a certain scenario and wonders whether or not he should use his AI weapons in the conflict, according to the circumstance. It also has to do with a lot of ethical questions.

I have attended a lot of ethics lectures. Some were secular ethics, and I always found that the secular ethics were built on sand, because they use things such as community standards and things of that sort. If you go back to Nazi Germany, it was considered ethical there to do terrible things to the Jews, right? That was part of their ethics. I think that any ethics which is built on a secular view is going to be built on sand.

I think if you go to the Judeo-Christian sort of foundation, you're building it more on a rock. Even there, I see disagreements as an ethical end user, but I see a lot greater consistency, and at least a reference which you can give to substantiate your end use of whatever you're using.

Gretchen:

This podcast is part of a larger project called Being Human in an Age of AI. We ask what makes humans special, and what does it mean to flourish on the frontier of a technological future? We've already talked about what makes humans special, so I want to interrogate this idea of flourishing. Since a sort of frictionless existence is a common selling point of AI technologies, what does it mean to flourish, in your mind, Bob? And how might a Christian view differ from, say, a humanist or materialist view? Is there a point where flourishing might be bad for us and a little friction good? What does the Bible say about it? What do you think about this? 

Robert:

I think on the negative side, too much technology is kind of numbing. There is the distraction that technology provides; it gives credence to the old saying that the idle mind is the devil's playground. We grow weary, I think, because we have too much play time. It isn't that we don't have enough play time; we have too much play time.

One of the things that Ecclesiastes in the Bible says is that one of God's gifts to us is toil. This was recently pointed out to me by one of my PhD students who is a Christian, who has literally memorized the entire book of Ecclesiastes. But this is a gift to us—it is toil. 

John Tamny wrote a great book called The End of Work. It's [about] how AI and the technology of the future are going to make life, for many, many people, much more enjoyable. I'm old enough to remember my grandmother saying, "You gotta go out, Bob, and you have to hoe the weeds in the corn." So I would go out with a hoe and I would chop down these weeds and things like that. That doesn't happen on modern farms, and it's because of technology. AI is going to add to this.

Now, are there going to be dangers? Heck yes, there are dangers. Anytime you add a new technology, there is a danger. We have the good and the bad. All of these things are no more than tools, and what matters is the way that these tools are used. It's going to be the same exact thing with AI. It's not going to be our master; we are going to be the masters of AI, and we're the ones that are going to have to decide how this AI is ultimately used.

Gretchen:

So you have a positive view of the impact of AI on human flourishing, more than a negative view.

Robert:

I would say yes. We started out asking whether I was a dystopian or a utopian; I guess I lean more toward John Tamny's view. Tamny's idea is tracing through history the things which developed technologically and resulted in our lives being much more enjoyable. And I think that AI is going to do that. We are going to have problems. We are going to have job losses—everybody talks about the impact on jobs. But on the other hand, I'm a great believer in free enterprise, if people get out of the way and let free enterprise and capitalism flourish. Because right now, today, we have things which we never had before: we have web designers, we have social media stars, we have graphic designers, we have podcasters, right?

Gretchen:

Ad infinitum. 

Robert:

Ad infinitum. And we have the guy that has to be hired to ban your video on Google. That created his job.

Gretchen:

Fact checkers. 

Robert:

Fact checkers. Right. 

All of these are going to be new things happening, and there is going to be quite a transition. But I think overall it's going to be incredibly positive. 

Gretchen:

I'm going to continue to interrogate this in future talks with people, because I wonder if there isn't a computational version of the prosperity gospel that Christians should try to avoid. Looking at the film WALL-E, you see at the end of the film that the people who were taken off the planet have become so given over to leisure that computers make all their decisions, and they lose their agency.

As we close, I want to give you a chance to share your personal vision, Bob. In your role as the director of the Walter Bradley Center, how do you see your personal mission, and what kind of stamp or legacy would you like to leave on the field? 

Robert:

We talked about my hero, Walter Bradley. I would like to leave a legacy much like his. I am a Christian. I am a follower of Christ, and have been since I was a junior in college. I guess my end result is how I please the Lord, God Almighty. I want to be like Paul at the end, where he says in 2 Timothy, "I have fought the good fight. I have finished the race. I have kept the faith." I want to do that with whatever I'm doing.

We're all given gifts. My gift is being a nerd. I'm really good at AI, and God has gifted that to me. I want to use that as much as I can for his glory. Part of that is a pushback against atheist and secular conclusions about what happens with AI.

The same is true with evolution. I don't care if somebody believes in evolution or not. I don't care if they're theistic evolutionists, young-earthers, or proponents of intelligent design. But I do get troubled when somebody comes along and says: evolution is the description of our origin, and therefore there is no reason for a God. All of a sudden, that's something that needs pushback. I do want to push back against those who say that AI will be smarter than us. I think that both computer science and the Bible talk against this.

That is, I think, the goal of the Bradley Center: to show that we, as humans, have exceptionalism above AI. Creativity, sentience, understanding; but there are also other things we can do. We interact with each other. We feel emotions. AI is never going to do that. This is an exceptionalism that we enjoy and that needs to be celebrated. All of this is consistent with the Judeo-Christian teachings.

Gretchen:

I forget who said it, but: AI might beat me at chess, but it's not going to be happy that it did. 

Robert:

That's right. Emo Philips is maybe the greatest comedian of all time. He said, "The computer can beat me at chess, but it never wins a contest of kickboxing." So there are things that we're always going to dominate in, and computers will never beat us at kickboxing.

Gretchen:

Well, Boston Dynamics may come up with something! 

Robert:

Oh, Boston Dynamics does incredible things. Lots of seductive optics, but it's still very impressive. 

Gretchen:

I'm afraid of their dogs. 

Robert:

You know why you're afraid of their dogs? There's something called the Frankenstein complex. It was coined by the science fiction author Isaac Asimov, and it refers to being afraid of things that are very close to things you're familiar with in real life. That's the reason we were afraid of the Frankenstein monster when it first came out. And that's the reason that we're afraid of these things that look like dogs. It's because we relate to something that's very close to what we're familiar with, and the Frankenstein complex gives us pause on that. Really fascinating.

Gretchen:

I think they call that a field trip into the uncanny valley. 

Robert:

Yes, it is the uncanny valley! You're exactly right. The uncanny valley, if I remember right, is just a dip in a regression curve or something like that. 

Gretchen:

Just a dip in a regression curve. Bob Marks, the notorious RJM—thank you for joining me today. This has been super fun.

Robert:

This has been fun. Thank you, Gretchen.