Can Tech Ethics Shape Our Future?
As technology develops at an ever more rapid pace, it can seem that ethics struggles to keep up. While science and technology advance by building on the discoveries of the past, virtue and moral knowledge must be cultivated afresh in every individual and each generation.
This is where Brian Green comes in. As director of technology ethics at the Markkula Center for Applied Ethics, he researches many areas, ranging from transhumanism and artificial intelligence to catastrophic risk and the ethics of outer space. These diverse interests all pivot on the intersection between technology and humanity.
In this episode, Brian and Gretchen dive into many areas of tech ethics that both impact our present lives and promise to shape our future. From immediate ethical dilemmas like self-driving car crashes and responsible tech development to long-view issues like the establishment of extraterrestrial colonies and the achievement of artificial general intelligence, they reflect on a wide range of themes that can affect human lives for both good and ill. Listen in as they discuss old and forgotten tools for answering ethical questions, the Christian commission to work miracles, which human qualities can’t be programmed into machines, and more. Together they ask: should our predictions about technology and ethics be dire, or hopeful? What choices are we making now that will shape coming generations?
The Markkula Center for Applied Ethics was founded in 1986
Space ethics asks: should we be going to space at all? Is it worth the risk? What about issues like space debris?
“I think [an] approach towards technology which is generally positive is good; but you also always have to have in the back of your mind [that] something could go wrong, and we need to think about what that could be. In other words, ethics always needs to be there evaluating technology.”
Humans see tools and know what they’re for, but that “for-ness” is not present in the thing itself but in our minds—other animals don’t recognize it
Because we can conceive of things having a purpose, we are capable of religious thought—asking the purpose of our own lives, and of the world around us
“We've been commissioned to perform miracles on behalf of God, and technology is a way that we can perform those miracles.”
It’s hard to regulate technology that can be used for good or ill, because of a long-standing implicit agreement that scientists and engineers can build or research anything they’re able to
“[Transhumanism is] not so much about the technology; it's a way of viewing technology, and it's a fundamentally exploitative understanding of how technology works in the world. And it also has a lot of other associated ideological and even religious aspects to it that I just don't think are compatible with Christianity.”
“If we can externalize everything about humanity that makes us particular and special and who we are, and put it outside of us, and then that outside object is better at that than we are … What's good about us anymore?”
“Humans love. This is something that we can do that nothing else can do. We are commanded to love God and neighbor, and that is really what makes us distinct in all of creation. It doesn't matter how much we offload onto machines; it's still going to be love and compassion and caring that makes us different.”
Tech companies can be motivated to ethical action by “carrot” or “stick”—the stick being legal action and loss of reputation
Making sure AI technologies are safe means addressing both current problems (self-driving cars, etc.) and concerns about the future
Responsibility for AI-related mistakes (e.g., self-driving car crashes) has become complicated, because it is often diffused across multiple people in an organization
“I want to give people the tools to make better ethical decisions [about technology]. We have these tools [from] the past; it's just a matter of getting them out there and helping people know what they are and how to use them in particular situations.”
“If you're doing science or technology, you can build on the past, because you're working with things that have been externalized, and those things are still there. But moral knowledge is something that has to be learned, every single generation, over again.”
Even if the future looks dire, we are called to try to make it better: “If we fail, at least we tried. And if we succeed, then we've done a wonderful thing. [...] This is why we are born in this generation. We are here because this is our job.”
Links
Markkula Center for Applied Ethics: What is Tech Ethics?
The Ethics of Sending Humans to Mars
The ethics of brain-computer interfaces
Religious Transhumanism and its Critics
Can Christians Embrace Transhumanism on Their Own Terms?
Pontifical Council for Culture
Responsible Use of Technology—World Economic Forum
Safety Critical AI (Partnership on Artificial Intelligence)
A Tesla on autopilot killed two people in Gardena. Is the driver guilty of manslaughter?