Keeping Tech Ethics Grounded: A Discussion with Stephanie Hare

Dec 1, 2023 • 34 min listen

In this discussion with Senior Fellow Arthur Holland Michel, researcher and author Stephanie Hare describes the fundamental dimensions of technology ethics. She explains the importance of keeping the AI ethics discourse grounded in the needs and rights of those who will ultimately be most affected by the technology, and offers a few thoughts on how to brace—and empower—ourselves for the work that lies ahead.

ARTHUR HOLLAND MICHEL: Hello. My name is Arthur Holland Michel, and I am a senior fellow at Carnegie Council for Ethics in International Affairs. This episode of the Carnegie Council podcast is brought to you in collaboration with the Peace Research Institute Oslo as part of its RegulAIR project. RegulAIR is a multiyear research initiative about the integration of drones and other emerging technologies into everyday life.

I am very excited to be joined today by Dr. Stephanie Hare. Stephanie is a researcher, broadcaster, and author focused on technology, politics, and history. She is the author of Technology Is Not Neutral: A Short Guide to Technology Ethics, and if you follow the tech space with any regularity, you have likely seen her on one of the various news programs that she regularly appears on or read her writing in publications such as The Washington Post, The Guardian, and Wired. Stephanie joins us today from London.

Stephanie, welcome to the show.

STEPHANIE HARE: Thank you so much for having me.

ARTHUR HOLLAND MICHEL: First, tell me, what would you say is the overall thesis of your work?

STEPHANIE HARE: Ooh. I don't know if there is an overall thesis of my work other than every now and then I find something that motivates me enough to get off the couch or close whatever book I am reading and take a closer look at it, and there is no shortage of that when it comes to technology ethics.

ARTHUR HOLLAND MICHEL: Maybe I will meet you halfway: Is there an overall theme, if you will, to the types of topics that interest you?

STEPHANIE HARE: I think I approach technology from a pragmatic perspective. In my book, the question I had was, how do we build technology, use it, and invest in it in a way that maximizes benefits and minimizes harm? That was the overriding thesis for that book: Is that possible, and if so, how?

For me the triggers, if I take a closer look, are privacy, civil liberties, and human rights, particularly regarding children, women, and groups that tend to be excluded from power. On the positive side, I am motivated by things that can make the world a better place and by what the corporate governance structure looks like.

I do not always assume that technology is doing something new. Oftentimes things look new, but when you look at them they are not in fact all that new—the genuinely new is pretty rare. In a lot of cases the science might be new but not the way that the companies building it are going about it. It is often a clear power grab or a pursuit of market share, but that is what makes it interesting. If you are somebody who is interested in business, you might be studying technologies just from a business or national security perspective.

ARTHUR HOLLAND MICHEL: Something that is interesting to me about your work is that within the technology space you have a very broad scope of verticals that you look at. I am wondering in that sense whether, in your experience, having looked at such a range of technologies through an ethical lens, all emerging technologies share a common set of ethical challenges. Is there a fundamental base layer of ethical challenges that you see being universal to emerging technologies, or are the challenges really unique to each system, each application, and each domain?

STEPHANIE HARE: That is a challenging question because I think you have to always look at the technology that is in front of you or the tool that is in front of you—a product, service, or object—and ask, what problem is this trying to solve? The problem might be a negative one—a harm or an inconvenience that the technology makes better—or it might be that there is this opportunity for us to do something that we could not do before, and so, isn't that great? So you look at it and go, "Does it do what it says on the can? Does it solve the problem it is trying to solve, and how would I know that, so what metrics can we use to measure that and have a discussion?"

But then you also have to step back and go: “Okay, cool. It might be defining success correctly in that it is solving the problem that it is trying to solve and we have metrics that can measure that, but in doing so is it creating new problems or other problems? Is it making something worse or simply having an impact elsewhere?” It might not be linear. It might be in a completely nonlinear way, but it is still important. Unintended consequences is probably the best way of thinking about that.

I think I try to look at it pretty calmly and forensically and to go through it and also to look around it. I will not just analyze a technology head-on. You might parse it through the lens of politics, economics, society, or the climate—the climate impact is often understudied when we are looking at technology. I do not look at it necessarily in the same way that you might from the perspective simply of money or return on investment and shareholder value. People who do that are going to have different answers to whether or not a technology is a good or bad thing versus somebody who is looking at it from a rights perspective.

What I try to do is embrace all of those different lenses in my analysis depending on whether I am doing this for a broadcast, I am doing it for a private client who has asked me to do some assessment for them or due diligence if they are looking to invest, or I am talking to concerned parents, teachers, or children. It all depends on who you are talking with and what skin they have in the game and their stakeholder position. That is different depending on who you are talking to.

ARTHUR HOLLAND MICHEL: That is fascinating, Stephanie. It is almost as though there is a formula that can be applied for determining ethical implications.

If we were to dig into this notion of the formula a little bit more, I am wondering how one calculates for uncertainty. Oftentimes when a new technology is presented it is at a very early stage. It might only be at the concept stage or the prototype stage, and there is going to be a lot of uncertainty about how it will actually perform in the real world: Who will use it? What kind of policies will exist around it?

I am wondering, in this kind of ethical calculus, how one factors in that uncertainty, and how that plays out. If we have uncertainty about, say, the net benefits of the technology, does that detract from the calculus of its net ethical balance more broadly?

STEPHANIE HARE: I think there are probably several different components to that question. It is totally fine to release a tool in beta as long as you say it is in beta. I think where you get into trouble is if you put something out onto the market and do not tell people that or you don’t manage expectations because then what happens—and we can come up with all sorts of examples, I am sure, and we will all have them in our minds as I say what I am about to say—is investors get very excited and you can get a hype bubble.

It is often fanned by the media, which is motivated by different incentives than helping and protecting the public. Some journalists are doing a great job of that, but others are just looking for clicks, so they will be like: "How can I be the newspaper or the journalist who tells a story that everyone is going to want to read? I will hype this up and make it sound super-exciting and not actually tell people: 'Here is where this tool or technology is in its lifecycle. Here are the risks and here are the uncertainties. You would want to use this with caution.'" What this does is create a lot of problems for companies that do not know if they need to immediately get on the bandwagon with a certain type of tool or risk being left behind, and that could be very dangerous for their business, so they are all having fear of missing out.

Then you get the public, which starts freaking out about what this means for my job or for my society, or how I help my children deal with it, simply because everybody has gotten so burned by that particularly in the past 20-something years. That can be a real problem.

It is kind of like playing whack-a-mole. Somebody introduces a product, service, tool, or technology, and then you get the aftermath. All the critics pile in. They might be journalists, they might be academics, they might be rivals to the people who put out the product, or it might be that finally the regulator steps up or the National Cyber Security Centre in the United Kingdom will put out a blog post at 5:30 in the morning on the 30th of August—which is actually what they did—about generative artificial intelligence (AI) and large language models, saying: “These technologies are in beta. They have a lot of problems. We cannot spot those problems easily, and there are no mitigations available.” This is the sort of information that frankly should have been on the front page of every newspaper and was not. It becomes tricky to pull all of those threads together and get them to the public in a digestible, clear, actionable way, and that is the gap that I try to fill.

ARTHUR HOLLAND MICHEL: The way you are talking about it makes me think about how in the tech sector people speak very confidently about topics where perhaps there is less certainty than they let on. I wonder if, in an idealized sense, one might even go so far as to say that there is an ethical imperative to highlight uncertainty—an ethical imperative, when talking about these technologies, to be candid about what we do and do not know. Would you say that is a fair characterization?

STEPHANIE HARE: Yes, but I think it comes down to how we understand confidence. You have highlighted the fact that a lot of people, when they talk about technology, like to make these big, bold statements, and what you never see is people holding their hands up and saying, "I got that wrong," or anyone pointing to somebody who is speaking very confidently on television, radio, or social media, acting as if they know stuff, and showing the track record of their calls—do they get their calls right or wrong? There is no follow-up. What I am saying is that there is no penalty in the marketplace of ideas for being confident and wrong.

That is a massive statement. What I mean by that is, if you want to be first and get it right, I don't know, five times out of ten, you are still going to get away with that because nobody is going to hold it against you and go, "Yes, but half of your calls are completely rubbish," whereas if you are cautious, if you are slower, if you are taking your time, you face a penalty as an analyst for doing that. You might actually have a higher batting average in terms of accurate calls and you are speaking a more measured language, so the people in the know will follow you, but if you are looking for people who just want to make big, grandiose statements and use histrionic language, the marketplace of ideas rewards that, particularly social media.

ARTHUR HOLLAND MICHEL: As you say that it also gets me thinking about how it is not only uncertainty that is often glossed over in this space but that things are often discussed also as binaries, which does not cohere with the vocabulary of ethics, which is not so much about binaries as it is about looking at those in-between spaces, finding tradeoffs, and trying to understand the coexistence of positives and negatives in the same space.

I want to move on a little bit to look at the AI ethics discourse specifically. So much in the AI space has evolved so rapidly in the last few years. I was wondering from your perspective if the AI ethics discourse has itself evolved. Is AI ethics discussed or acted upon differently now than it was, say, five years ago?

STEPHANIE HARE: I think it is more just that people are looking for action, like it is great to have talk, but at a certain point talk needs to evolve into action or it just becomes irrelevant to decision makers. What I think is changing is that we have the EU AI Act potentially coming online, although we will have a better idea closer to the time, around the 6th of December. That is when they are doing the final vote, so we will know whether or not it gets through in this parliamentary cycle. There is very little action coming in the United States. There will be no action in terms of regulation or legislation coming from the United Kingdom. Prime Minister Rishi Sunak has been very clear on that.

What we have at the moment is a commitment to have more summits, a commitment to talk more, and that is important. You want to keep a dialogue going, so that is great, but if this does not lead to regulators being able to regulate, if it does not lead to empowering consumers to potentially file lawsuits and change operating models of some of these companies, then ultimately it is all just a lot of talk.

ARTHUR HOLLAND MICHEL: There has been so much talk recently, as you alluded to, around AI safety. There was the UK AI Safety Summit just a couple of weeks ago. In a way it almost feels like AI safety has gained a profile that has superseded that of AI ethics as a concept. Do you see that happening yourself, and if so, if the focus is in one way or another shifting from AI ethics as a concept to AI safety as a concept, what kind of material effects do you see that having on how AI is controlled?

STEPHANIE HARE: I think it is important to keep ourselves grounded and ask, what is all this about? If you ask most people on the street, if you go down to your local coffee shop, your local parent/teacher organization, or call your parents and ask them, what is the difference between AI safety and AI ethics, they will probably shrug. It will have no meaning to their lives right now, so this is very much an “inside baseball” conversation that we are having. It does not mean that it is not important, but I think it is important to put it into context.

Most people on the planet do not care about this, and if they care about it, they want to know if it is going to take their jobs. That is the first thing I hear when I talk to ordinary people: “What does this do for my job?” Is that a safety thing or an ethics thing? I don’t know if people would care what term you want to use. They just want to know if you are going to be creating mass unemployment or if this is somehow going to help them be better paid, better employed, and also what this means for their kids if their kids are in education or potentially going to university or making career choices.

That is something that every political leader around the world is thinking about quite seriously: What is this technology going to be doing for employment and what is it going to be doing for productivity? What is it going to be doing for election interference? I am less interested in the abstract discourse, the people-who-go-to-academic-conferences chat—although I love listening to that. My contribution to this is keeping it real for most people.

Is AI safety about existential threats to humanity or is AI safety about deep fakes being used to take little girls’ pictures in school, which has been happening in several school districts around the world now, and turn them into deep fake porn? The little girls have no protection, their parents have no protection, and the police don’t even know if it is a criminal matter or not because the technology has completely eclipsed existing laws.

This question of what we even mean by AI safety, which Vice President Kamala Harris of the United States highlighted in her speech when she came to the United Kingdom a couple of weeks ago, is interesting. Existential risk for whom? What do we mean by existential risk? Are we really talking about AI as a potential risk equivalent to nuclear war or pandemics—which is what some people have said—or are we talking about AI destroying the fabric of society and rendering elections useless, et cetera?

I think it is important for those of us who work in this space to remember that the vast majority of people on the planet do not care about definitional arguments. They care about very practical things: How is this affecting them and how can they protect themselves and their loved ones and use this technology for good and not for harm?

ARTHUR HOLLAND MICHEL: From the way you describe it, it sounds like we are in perhaps a bit of a transitional or in-between stage where governments are talking seriously about guardrails and regulation but we do not actually have anything super-concrete of that nature yet in place.

How do you see things evolving in the years ahead, either in the way people talk about AI or the ways that institutions actually action it—with a caveat, which I can preempt, which is that we do not know how the technology itself is going to evolve, whether it is going to continue at this same pace of progress or whether it will slow down. What are, if you will, some of your predictions in this space for the years ahead?

STEPHANIE HARE: I am not a great maker of predictions. That is a bit of a fool’s game in the sense that if we had been talking last October you would not have predicted any of the stuff that has been happening in generative AI the whole past year. I would not have either. Ultimately I am not sure how useful it is.

I think it is more a question of whether we are going to be really empowering the population to understand what is at stake here. For instance, do most people know how AI is already shaping their lives? Could they say with confidence: "I know how many departments within my government are using AI already. I know the areas where it is making decisions, such as health care and my bank, whether or not I get a loan or a mortgage, how it is being used in our judicial system already, how it is being used in elections, and what is happening on data protection and privacy"? The United States does not even have a federal privacy law yet. Sometimes I think we want to run before we can even walk. The whole piece around child protection is massive, and we do not have an answer for that.

The question is more whether we are going to beef up the enforcement of existing legislation and regulation—consumer protection, antitrust, data protection, and privacy—or whether we are going to see new legislation created so that regulators and consumers can take action to protect themselves. That is point one.

There is an argument that you do not necessarily want to be relying on regulation, and that is where this very vague term of “guardrails” comes into play. Whenever you hear somebody talking about how we need guardrails around AI, ask them specifically what they mean by that. If it is principles, guidelines, ethics committees, and the like, it does not work. We have seen that. We have watched that. It is better than nothing—an umbrella is better than nothing in a hurricane, but it is not going to keep you dry.

There is this question of what we mean by guardrails. What I have been seeing is that when we use the word “guardrails” we are hearing it from people who do not want new regulation. The easy way to think about it is, is something coming into place that would allow me to sue someone? Who can I hold liable? How can I get accountability? It is not going to be guardrails.

ARTHUR HOLLAND MICHEL: As you say that, one of the parts I am curious about, because you are speaking so much about the actions that need to be taken, is bridging the gap between public awareness and actual regulatory action. It sounds like you see yourself as almost like an interpreter or translator of these complex technical issues for people who maybe do not have the time to follow this stuff as closely as we do.

Does that extend in your mind to policymakers? Is part of the issue here—as others have commented, and I still do not know where I stand on this—that policymakers just do not understand the technology, so part of the work that still needs to be done is giving them a more technically accurate sense of the stakes?

STEPHANIE HARE: I don’t know that necessarily policymakers don’t understand it; it is that they are busy. They have a lot on their plate. We are still working through the aftermath of the pandemic. In a lot of countries there is a cost-of-living crisis. We have several wars going on around the world. There is a lot going on. Some of them are very informed and have great teams. For others it is not their strength or area of interest because they are focusing on climate change or healthcare, and that is good. We want that.

The real question, I guess, is, how do we empower not just policymakers but also the population who elects policymakers to make sure that we have enough people keeping an eye on all of these issues and protecting the things that we all think are important? Of course that is going to depend. If you live in the United States, you might have a different idea of what you think needs protecting than you would in the United Kingdom, France, or over in China, one of the great AI superpowers. They have a very different discussion around this, but that does not mean that they do not think things are important or that they do not have things they want to protect. They do. It is more of that.

I think that is the case around educating the general population around things like privacy, data protection, and copyright and intellectual property (IP). If you are one of the people who has had your IP stolen in order to train large language models, if you are an actor, writer, or musician, you are just as involved in this if not more so than a policymaker because it is your work that was stolen to make other people very, very wealthy. So what does that mean?

Anybody who is trying to protect election integrity is looking at that. That is not just policymakers; it is ordinary citizens too. It is journalists even. How do we make sure that our journalists are empowered to hold power to account? I think it is a whole-system problem.

I almost view it like a public health issue because in cybersecurity there is a saying, "You are only as strong as your weakest link," and I think we all saw a nice demonstration of that during the pandemic. All you needed was for certain people to be doing certain things and you could get super-spreading events; certain people's behavior could either contribute to the problem or help shut it down and control it, and then we got vaccines. How would you apply that to empowering your population to be more constructively critical, mindful, and thoughtful about the role of technology, and specifically AI, in their personal life, their business, and their community?

ARTHUR HOLLAND MICHEL: Your point about so many of the stakeholders here having very little time, given everything that is on everyone's plate at the moment—to me that is a problem that extends to regular citizens as well, who have so much to deal with and who might not share the same sense of urgency or priority around AI issues that people like you and me, who have our heads stuck in this stuff every day, certainly hold.

With this sense of being efficient about surfacing the issues that are most critical, what would you say are some of the ethical issues around AI that you think we need to be focused on more? I am alluding perhaps directly or indirectly to this question of artificial superintelligence or replacing people in their jobs, which I have written takes a lot of the oxygen out of the room for other issues. Whether you agree with that or not, what would you like to see surfaced more in the mainstream discussion around AI ethics? If there is one thing around AI ethics that you feel like someone who has a lot on their plate already should be focusing on, what would it be?

STEPHANIE HARE: I guess I would like to see everybody who considers themselves if not an expert then certainly active in the world of AI to do everything that they can to help share their perspective, their research, and their analysis with their community because I think the more people who have knowledge and skills to look at AI the better off we are going to be.

I say that for a number of reasons. One is this question of diversity. There is a very small pool of people on the planet looking at this technology right now, and they are by no means representative. That could be from a gender perspective, it could be from a nationality perspective, a geography perspective, all of it. That is not great for the planet. If we are truly going to take this idea that AI is one of the most important technologies for the 21st century, it cannot be shaped by a minority of people on the planet. That is just not healthy.

I think it is great to see it being discussed at the United Nations and at different government levels and intergovernmental discussions, all that high-level stuff. That is great, but it should also be discussed in your community. Go to your school, talk to small businesses about it, offer your services as much as you can, and answer the call anytime a journalist calls and asks, “Can you come and explain something,” because we need these different points of view because what is happening is a real battle of narratives, and that is often linked to people who have inside interests. They want to see certain narratives win over others, and that could be for financial gain or whatever. It is not healthy. So we want to have those different perspectives.

As a researcher I personally welcome that and need it. I need people to challenge me as a researcher to make sure that I am thinking about things in a different way. The question that you and I were discussing earlier, the AI discourse, ethics versus safety, is that being discussed, for instance, in China? Is that being discussed in France, or is it an Anglo-Saxon English-language discussion because it is easy for all of us to be having a conversation in this way? I wonder if we went to Peru or Kenya and were discussing how people there see AI and risks and opportunities, would they frame it in the same way, and that involves curiosity to be hearing what everybody else is thinking about it, what they want, what they are scared of, and what their potential solutions are.

I would like to see all eyes on the prize. Some people have more information than others. With great power comes great responsibility, as the saying goes. If you have the knowledge or the exposure, sharing it with the public and inviting as many people into the discussion as possible will hopefully help us get to a better outcome.

ARTHUR HOLLAND MICHEL: As I am very well aware, working in the tech space, covering technology every day can be taxing, and it can feel like there is an overwhelming amount of grim or difficult news to process on a daily basis. In closing, I want to ask you a question I ask all my guests, which is: What gets you out of bed in the morning? What are you optimistic about in this area?

STEPHANIE HARE: I don’t know necessarily that I am optimistic or pessimistic. I think I am probably at a stage in my life where I am like: “You are here. You can either try to make a positive impact or at least be neutral.” Being silent is also an option. There are days where I do not have anything to say about a particular topic, and there are more of those days than there are days when I feel the need to put my oar in.

I trained as an historian, so I think a lot about the evolution of technology over time and that there were all sorts of moments in human history right up until the present day where some people had to decide what they were going to do when they got out of bed. Yes, things can feel really grim right now, but I think they probably felt very grim at other moments in human history—we can all think of our favorite ones—and they still decided to do something amazing or just do something decent. It does not always have to be amazing. It could just be a nice, kind, or compassionate thing to do. I sometimes feel if I do not know exactly why I am getting out of bed, I think, Well, why don’t you just try to do something that is okay? Net neutral to net good would be great. I do not put the pressure on myself.

It can be tough working in tech. I am not going to minimize that. I know exactly what you mean about the burnout, exhaustion, and despair, but talk to a nurse, teacher, doctor, a climate change scientist, somebody who serves in our military, or a firefighter and you get a different perspective. All sorts of people are facing all sorts of pressures, and they still have to get up every day, so I do not feel like I deserve personally any special consideration. It is my job to manage myself. If I need to go for a walk or take a night off, I do it, and we should look after each other. This is a marathon, not a sprint. AI and other tech issues are not going anywhere.

And it means just being a bit more mindful about it. What are we ultimately doing all of this for? You and I are probably very aligned on the fact that we would like to see this be a net good for humanity. It is not there yet, but that's okay. Rome was not built in a day. This is not either.

ARTHUR HOLLAND MICHEL: Patience is a rare commodity these days.

STEPHANIE HARE: One step at a time.

ARTHUR HOLLAND MICHEL: Stephanie, I appreciate the very grounded perspective that you have. Lord knows these days we all need it. I am certainly going to go to bed tonight and wake up tomorrow morning thinking about your words. Let's leave it at that.

I want to end by saying that I really, really appreciate the discussion. To our listeners, I highly encourage you to follow Stephanie's wonderful work—a rare voice of reason in this chaotic space that we operate in. Stephanie, we hope we can have you back at some point; keep up the good work, and also the occasional moments of self-care.

Thanks so much, Stephanie.

STEPHANIE HARE: Thank you so much.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
