From weaponized drones to dancing robots, artificial intelligence has become the locus of many hopes and anxieties about humanity’s future. In the face of rapid technological development, finding the golden mean between utopian daydreams and dystopian forecasts can seem an impossible project.
Robert J. Marks—professor of Electrical and Computer Engineering at Baylor University and Director of the Walter Bradley Center for Natural and Artificial Intelligence—sits down with Gretchen to dive into this question. He walks through the history of both the development of AI and the evolution of how we think and talk about it. He reminds us that it is the very human qualities that AI can’t replicate which bring it into being and give it purpose: our natural intelligence, creativity, and common sense. While any new technology comes with its own dangers, he contends that it is up to those who use it to make ethical choices, and that Christians specifically are called to make sure those choices are based on the rock of God’s law rather than the shifting sands of community standards.
Join Robert and Gretchen as they discuss AI, what it can and can’t do, and our relationship to it.
Since artificial intelligence purports to imitate natural intelligence, it requires an understanding of the brain to make it work
The mind-brain problem: is the mind the same as the brain? “Are we computers made of meat?” Brain surgery that divides the hemispheres without changing the person suggests otherwise
Techno-dystopians vs. techno-utopians: we must take care to find the right position between the two; “we have to be careful what we forecast”
Although AI can do many things better than humans can, it still lacks sentience, creativity, understanding, and common sense.
The Church-Turing thesis: “anything that we can look at Turing's original machine, and we could say that Turing machine can't do it—because of fundamental physics, because of fundamental science and mathematics—that can't be done on today's bigger computers.” In short, AI = computers; if computers can’t do it, AI can’t do it.
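To illustrate the kind of fundamental limit Marks is pointing at, here is a minimal sketch (my illustration, not code from the episode) of the halting problem, the textbook example of a task no Turing machine—and therefore, by the Church-Turing thesis, no present-day computer—can perform in general:

```python
# Sketch of the halting-problem argument. Suppose a general-purpose
# oracle `halts(program, data)` existed that always correctly reported
# whether program(data) eventually halts. The `paradox` function below
# would then contradict it, so no such oracle can exist.

def halts(program, data):
    """Hypothetical oracle: True if program(data) eventually halts."""
    raise NotImplementedError("No such general-purpose oracle can exist.")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    else:
        return           # oracle said "loops" -> halt immediately

# Consider paradox(paradox): if it halts, the oracle said it loops;
# if it loops, the oracle said it halts. Either way the oracle is wrong,
# a hard limit that bigger or faster hardware cannot remove.
```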
Because Darwinian evolution can’t be reproduced in a lab setting, researchers turn to algorithms to model it
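As a rough illustration of what such modeling looks like, here is a toy evolutionary search in the style of Dawkins’s well-known “weasel” program, one of the examples Marks and his colleagues have analyzed; the parameters and population size are my own choices, not the episode’s:

```python
# Minimal evolutionary algorithm: random mutation plus selection
# toward a fitness target chosen in advance by the programmer.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    """Number of characters matching the target string."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Randomly replace each character with probability `rate`."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while fitness(parent) < len(TARGET):
    # Generate offspring and keep the fittest (selection step).
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)
print(parent)
```

Note that the program converges only because the programmer wrote a fitness function that already “knows” the target, which is exactly the point Marks presses in the next two items.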
“A computer program can create information no more than an iPod can create music. There just isn't the creativity; it is a tool for the programmer to do something.”
Active information theory: computer programs are predisposed toward certain solutions by information the programmer infuses into them
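For the curious: the quantity Marks and his colleagues call active information is usually written I+ = log2(q/p), where p is a blind search’s per-query probability of success and q is the assisted search’s. A toy calculation (my numbers, not the episode’s) shows how large the programmer’s contribution can be:

```python
# Toy active-information bookkeeping. Endogenous information measures
# how hard a blind search is; active information measures how much the
# programmer's choices boost the odds of success.
import math

def active_information(p_blind, p_assisted):
    """I+ = log2(q / p): the advantage infused into the assisted search."""
    return math.log2(p_assisted / p_blind)

# Example: finding one 8-character string out of 27**8 possibilities.
p = (1 / 27) ** 8   # blind search success probability per query
q = 0.5             # assumed success rate of the "smart" search
print(f"Endogenous information: {-math.log2(p):.1f} bits")
print(f"Active information:     {active_information(p, q):.1f} bits")
# Nearly all the bits come from the fitness function the programmer
# wrote, which is the sense in which the program creates no information.
```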
Walter Bradley’s legacy: he argues that if what people of faith believe is true, then it will stand up to the scrutiny of science
Humans can convey information in vague, imprecise language, without numerical values, and still be understood; fuzzy logic models allow computers to translate that speech into code (Gretchen: translating the abstract into the concrete)
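A minimal sketch of the idea, assuming a made-up “warm” predicate rather than anything discussed in the episode: a vague word becomes a membership function mapping a crisp number to a degree of truth between 0 and 1.

```python
# Fuzzy membership function for the vague term "warm" (triangular shape).
def warm(temp_f):
    """Degree to which a Fahrenheit temperature counts as 'warm'."""
    if temp_f <= 60 or temp_f >= 90:
        return 0.0
    if temp_f <= 75:
        return (temp_f - 60) / 15    # rising edge: 60F -> 75F
    return (90 - temp_f) / 15        # falling edge: 75F -> 90F

for t in (55, 68, 75, 85):
    print(f"{t}F is warm to degree {warm(t):.2f}")
# Rules like "IF warm THEN fan at medium" combine such degrees, letting
# a controller act on imprecise language instead of hard thresholds.
```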
Winograd schemas: tests of common sense that computers have trouble passing, e.g. “I can’t cut that tree with this ax because it’s too small” (common sense tells us “it” refers to the ax, not the tree)
Seductive semantics: using exciting language without defining terms, e.g. calling AI “self-aware”: “Now, my car, when I back it up, is aware of its environment. If I get too close to a wall as I'm backing up, it starts beeping at me. Does that make my car self-aware? No; [...] but I think you can see in seductive semantics, “self-aware” could be used that way.”
Seductive optics: “when AI is put in a package which enhances and amplifies its view of being human-like,” for example the robot Sophia, which has “little to do with AI”
Despite the Stop Killer Robots movement, we need to support the development of weapons technology so we can defend ourselves against those who would do us harm
Ethical questions arise more often around the end use of technology than around its design
Secular ethics based only on community standards are “built on sand”
“Too much technology is kind of numbing [...] it gives credence to the old saying that the idle mind is the devil's playground. We grow weary, I think, because we have too much play time. It isn't that we don't have enough play time; we have too much play time.”
“[AI is] not going to be our master; we are going to be the masters of AI, and we're the ones that are going to have to decide how this AI is ultimately used.”
Links:
The Brain Is Not a “Meat Computer”
How Does the Brain Work with Half of it Removed? Pretty Well, Actually
Google Does Not Believe in Life After Google (George Gilder)
Walter Bradley: Why He Is a Hero
Making AI Look More Human Makes it More Human Like
When Paid Applauders Ruled the Opera House
U.N. Chief Urges Action on “Killer Robots” as Geneva Talks Open
The Trouble with Trying to Ban “Killer Robots”
AI: Design Ethics vs. End User Ethics—the Difference is Important