Surveillance Tech's Infinite Loop of Harms, with Chris Gilliard

Apr 12, 2022

In this discussion with Senior Fellow Arthur Holland Michel, Chris Gilliard explains why the arc of surveillance technology and novel AI bends toward failures that disproportionately hurt society's most vulnerable groups.

ARTHUR HOLLAND MICHEL: Hi, everybody. My name is Arthur Holland Michel, and I am a senior fellow here at Carnegie Council for Ethics in International Affairs. It is my great pleasure to be joined today by someone whose work I have admired for a very, very long time, Chris Gilliard.

Chris, hi. I thought it might be best if you could actually introduce yourself. Tell me, what is it that you actually do every day?

CHRIS GILLIARD: That's an interesting question. It is probably separated into two main things.

I am an English professor by day and by night I am a privacy researcher. I hesitate to use the word "expert," but I am a scholar who researches surveillance and privacy and particularly how that affects marginalized communities. I spend a lot of time cataloging ridiculous, absurd, and obscene examples of invasive and extractive technologies, most of which I then put on Twitter and make snarky comments about.

ARTHUR HOLLAND MICHEL: To those listening, if you are on Twitter, I highly encourage you to follow Chris's account, which is @hypervisible. It's that side of your work I wanted to focus on today. We have spoken offline about what it's like to wake up every morning to news stories about proposed artificial intelligence (AI) or other emerging technologies and the feeling of just being extremely tired when we see these stories. I wanted you to explain why that is and perhaps to give some examples of recent stories that have given you that old familiar feeling.

CHRIS GILLIARD: I think the reason it is interesting is that so much of this is just a gigantic hype cycle. There is a tremendous amount of money and resources put into claiming that technologies can do a certain thing, whether or not they actually can, so the real investment is in making people—institutions, governments, schools—believe that such a thing is possible and also desirable. In many cases it's neither, but there is a tremendous investment in making it seem like a reality so that these companies, and a select few individuals, can profit from it.

In terms of what that's like, I don't take joy in it, in the sense that I wish I could be doing something else. I wish I didn't wake up to stories of AI that claims to be able to detect whether or not you're lying. But I do take joy in exposing it and pointing out how ridiculous it is. I just wish that these things weren't continuing to harm the communities that I care about, and democracy, and the world as a whole.

ARTHUR HOLLAND MICHEL: Another side of it is that you have no shortage of material to expose. It seems like every single day there are more stories than the previous day and that these stories somehow are able to outdo the previous day's story in terms of the claims that they are making or the potential harms.

I am wondering if you could maybe share a couple of recent stories that have particularly caught your attention that you have been pondering in the last couple of days, again just to give a sense that this is happening nonstop. We are not just talking about one story a week. We are talking about multiple stories a day that catch our attention.

CHRIS GILLIARD: Yes, it's a literal firehose. A couple that I noticed just from looking at my own tweets: the discussions about an "edit" button on Twitter; the move by Meta, formerly known as Facebook, to develop its second form of currency after the first one failed, lovingly referred to as "Zucker Bucks"; and the Worldcoin process, where some venture capitalist is dispensing "orbs" that scan people's eyeballs, and in return for their biometric information he is promising these people X amount of cryptocurrency that they are encouraged not to spend now but that they will supposedly be able to spend in the future. Of course, they are trialing this in poor countries.

Then, as I think I referred to earlier, there are the people who are making apps that they say use artificial intelligence to diagnose mental illness based solely on either the movements of someone's face or the sound of their voice. The claim is that this is going to revolutionize mental health, but the article makes no mention at all of the ways this may not work or of all the ways it will be leveraged to further harm marginalized communities.

ARTHUR HOLLAND MICHEL: Why don't we run with those last two stories that you shared? I have certainly been losing a bit of sleep over them myself over the last couple of days.

The question I am about to ask may seem to have an obvious answer, but because these things keep happening and keep getting reported on in a way that doesn't answer this question, what are the specific harms that you could anticipate from apps that claim to use AI to diagnose certain disorders or these orbs that collect biometric information in return for an as-yet-nonexistent cryptocurrency? How are people going to get hurt?

CHRIS GILLIARD: The list is so long that it's hard to even know where to start. I will just say that there is a long history around the world of mental illness and disabilities being used to stigmatize people, set them apart, marginalize them, and institutionalize them. The article starts out with the claim that it would be better if people had better access to therapy and health care, but the solution it poses is that we could now use AI to diagnose people.

There are two really big problems with it. One is that, depending on who you ask, and I am in the company of the skeptics, not only does this not work, it's never going to work. The other is, again, the ways in which this could be used—I shouldn't even frame it as potential; not the ways in which it could be used but the ways in which it will most definitely be used—to stigmatize, to marginalize, and to institutionalize. Not only that, but we can look at some less harmful but still really problematic uses, where we get to the point where companies are "diagnosing you" and using that to target ads, to tailor specific kinds of misinformation and disinformation toward you, to encourage you to buy certain things, and where law enforcement is using it. There are all kinds of ways this will definitely happen, and much of it will often be without people's consent and many times without even their knowledge.

If we just think about what the potential harms are for a thing that may or may not even work, a really useful question would be: Is it worth it? People often talk about these things in terms of tradeoffs, and here there is a mountain of harms. For something that may or may not be effective, I think that is a really important discussion to have.

ARTHUR HOLLAND MICHEL: To say nothing of the risks that that data could be stolen by nefarious actors who have all the wrong reasons to access it.

What about these orbs, this Worldcoin?

CHRIS GILLIARD: Again, where to start? Any time a company starts by trialing its product in poor countries, on black and brown people, and it operates sort of as a pyramid or multilevel marketing scheme, I think that immediately raises a bunch of red flags. Even the point that you just mentioned, about what happens to this data: we could start there.

I think that accumulating massive troves of biometric data is, as a general rule, a really bad idea and carries a lot of potential harms. We can look at all the biometric information that the United States left behind in Afghanistan and at how that information is being used against, say, the people who helped the United States when U.S. soldiers were there.

There are a variety of ways in which just the existence of this information makes the potential that it falls into "the wrong hands" a real and persistent problem. But using the phrase "the wrong hands" sort of presupposes that there are "right hands," and I don't think there are any right hands. Even if there were, they wouldn't be some of the actors behind the Worldcoin scheme. But I don't really think there are right hands, because one of the things we have seen over and over again with any extractive data scheme is that a big part of the grift is just to amass a bunch of data and then figure out what to do with it, who to sell it to, how to target people, how to use it to make more money, and how to use it to gain more data. That is the lifecycle of tech companies: amass a bunch of data and then figure out what to do with it.

Again, I think it is especially pernicious and gross frankly to pilot it—I almost said "pioneer," which is a word I typically don't use, but it's actually kind of appropriate in this case—in poor countries and use the promise of future wealth on people who are very often struggling and looking for ways just to survive. I have so many problems with this.

ARTHUR HOLLAND MICHEL: We could almost fill a whole episode just running through the reasons that this is definitely not a good idea. Also, this cryptocurrency hasn't even been launched yet. The people who have given up their data haven't even gotten anything in return yet. According to the reporting it keeps getting delayed, and they might not ever see a benefit.

CHRIS GILLIARD: Yes.

ARTHUR HOLLAND MICHEL: I guess the question that is really lurking behind all of these individual anecdotes is: Why the hell does this keep happening? Why is it so repetitive? Why is it that even if I cannot anticipate the exact story that I'm going to wake up to tomorrow, I can anticipate very accurately the contours of that story? In a way it is the repetitive nature of this issue that is wearing me thin. Why does it keep happening?

CHRIS GILLIARD: That's a really interesting question. From my perspective there are a couple of answers. One is that the people driving this are always the same: the select few venture capitalists and the select few millionaires and billionaires who are almost always behind these things. The investment structure that we currently operate under means that the same people are funding these kinds of schemes over and over, so of course they are going to follow the same arc.

The other thing is that, at least for the last 20 years, we have operated under the assertion that we are a data economy, that everything flows through the extraction and moving of data. The way I put it is that "every company that exists now is a data company that also does X." What I mean by that is that Ford Motor Company is basically a data company that makes cars, and Amazon is a data company that also sells items. Whole Foods is a data company—they are part of Amazon—that sells food, and on and on and on.

I am not making that up. Ford has said that. General Motors has said that. Because that is the prevailing notion of not only how companies make money but how they think they should make money, and because that is the deal we are supposed to accept as consumers and users, or citizens for that matter, it drives so much of the contours of what business, and ultimately society, looks like, unfortunately.

ARTHUR HOLLAND MICHEL: If you zoom out a bit from that, it also feels like these stories challenge this oft-repeated notion that "Oh, technology is either good or bad; it really just is a matter of how you use it," the idea that any technology is a neutral thing, and if a person uses it for ill, then it will cause harm, and if a person chooses to use it for the benefit of humanity, then it will have a net positive.

Given how many stories of harm we are seeing and how few stories of net gain we are seeing, despite, shall we say, the best intentions of some actors who are perhaps—one would hope—acting with some amount of good faith, it really calls that into question. Your writing has been very influential to me in thinking about that, because you point to the bigger issues that are lurking behind this notion.

You have written, for example, that surveillance technology always "finds its level," which is a haunting phrase because it really runs very much against this notion that surveillance technology is neutral and does not gravitate toward one particular application or another. So I am wondering, why is it that these technologies always seem to find their level? And beyond just the nature of the economy that we live in what does this say about, I don't know, inequality? It seems that ultimately we come back to these non-technical issues that are at the root of it.

CHRIS GILLIARD: My fundamental assertion is that most of these things are assertions and instantiations of power, so that a particular surveillance technology, for instance, or a particular predictive technology is ultimately going to be used to surveil and predict against the less powerful, the people against whom these devices are leveraged. To make that clearer: even if everyone is surveilled all the time, from the unhoused person all the way up to millionaires and billionaires—imagine a society where everyone is constantly surveilled, which we are unfortunately moving closer and closer to—the way that power will be exerted on the most vulnerable is far different from the way it will be exerted on the more powerful.

As you move further and further up that level, there will be no accountability and no consequences. Because these technologies are often instances of power, it is really important to understand them in that regard and to question the idea or assertion that somehow, if we have enough of a technology or if it is in the hands of everyone, it is going to make things more equal. Again, I fundamentally reject that assertion.

ARTHUR HOLLAND MICHEL: Here at Carnegie Council we talk a lot about AI ethics. What you are talking about raises some very uncomfortable questions for the notion of AI ethics, because if every technology is an instrument or vessel of power, then just slapping a couple of rules, in most cases voluntary ones, on that technology or its use, or even making the claim that, "Oh, this technology will be accountable or transparent," seems fundamentally incompatible with that. If a tool is an instrument of power, then there is no way that it will be accountable. Is that in part what you're getting at, that perhaps even the way we talk about AI ethics misses a bigger issue?

CHRIS GILLIARD: Yes. I absolutely agree. I think that we have very much moved away from discussions about whether or not something should even exist. Again, we have been fed a line of reasoning. I will use a very concrete example. There is sort of a brouhaha going on on Twitter right now, where a computer scientist argued that there were positive use cases for an AI that could determine whether or not someone was gay.

Let's just get this out of the way. First of all, it's not possible to do that. Second, even if it were possible, it's not desirable.

One of the claims that someone made—and it's not unique to this person—is a consistent one: "Well, someone's going to build it." As we get further down the line with these things, that "someone is going to build it" claim, and even the proof-of-concept idea, where someone builds a thing just to prove that it's possible, are two of the most harmful ideas that we have been burdened with over the last 20, 30, or 40 years.

But we really need to have discussions about whether or not certain things should even exist. I think that has been lost. There is the famous line about how we got so enamored with whether or not we could do something that we forgot to ask whether we should do it. It's like a cliché from a movie, but it's also accurate. There are so many things we probably shouldn't build, but that has gotten lost. Because these things are so often instruments of power, I think there should be a lot of questions about whether or not we should build them in the first place.

ARTHUR HOLLAND MICHEL: That gives us an opportunity to do a little bit of an exercise here in how to actually anticipate the harms of a new technology. As you mentioned earlier in the discussion, often these stories are not about a technology that exists today but really about the claims for a technology that could potentially exist in the future. That potentially gives us an opportunity to get ahead of some of the on-the-ground harms that such technologies, if they were created, would no doubt inflict mostly on, as you mention, vulnerable populations.

I am wondering if you and I could perhaps give listeners a bit of a rubric for when we see a story about some new AI technology: how to figure out whether this is something we should actually build or something that should exist at all, what the specific harms would be, and how, if at all, any of the claimed benefits would weigh against those harms.

CHRIS GILLIARD: I think there are a couple of questions that I typically start with. One of them is: Is it possible?

ARTHUR HOLLAND MICHEL: That's a big one.

CHRIS GILLIARD: For so many of these things the central claim is that there can be a degree of certainty about things where there is no such certainty—being able to predict a person's trajectory, being able to predict a person's inner state, being able to predict events that will occur in the future, things like that. Again, many of these are based on predictions or assessment. "Is that thing possible?" is a really important question to ask.

And if it's possible, is it desirable? If it works, is the thing that it claims to do an ideal thing, something we want? What communities were involved in making that thing? Are those the communities that will be affected? Under the ideal use case, who is going to be harmed, whether the thing is accurate or not? In what ways is it going to be "misused," because it most certainly will be? Those are a couple of questions I start with. What problem does it solve?

ARTHUR HOLLAND MICHEL: That's a big one.

CHRIS GILLIARD: Is it better than the existing thing that we have, or does it create more problems than it solves? A very specific one that has come up a lot: Apple is now making it possible, and states are cooperating with Apple, to put people's driver's licenses on their phones. So the ideal use case is for people who consistently forget their wallets but remember their phones.

There are many ways to solve this. You could stick a magnetic wallet on the back of your phone. Problem solved. But it creates a whole bunch of other problems. Even staying with Apple, there are AirTags. The ideal use case is for people who lose their keys, but as we have seen and as people have predicted—accurately—it has also created a climate that helps stalkers and others who wish individuals harm to track people without their knowledge.

On the one hand we have the ideal use case of figuring out where your keys are. That is like the best thing they can come up with: "I lost my keys." Again, there are simple analog ways to solve this: Leave them in your pants. Put a hook by your door. There are ways to solve that.

ARTHUR HOLLAND MICHEL: I think we're smart enough to figure that one out without an engineering degree.

CHRIS GILLIARD: And people have been solving that problem in a variety of ways—make multiple copies—and none of those enable stalking. I think we need to figure out, even for the ideal use case, what the tradeoff is for having this thing that we're saying is better. If the best use case is finding my keys and the other prominent case is people being stalked, I think people should probably find a better way to keep track of their keys.

ARTHUR HOLLAND MICHEL: There are even stories where the net positive of the technology is unquestionable, but the risks are commensurately massive. I am thinking about one recent story in which researchers developed an AI system for coming up with new medical drugs with reduced toxicity, something we could probably all agree solves a pretty significant problem: coming up with new drugs and making sure they don't poison people. In an experiment, the researchers essentially flipped the logic of the system to design drugs with increased toxicity, and in a couple of hours it had produced thousands of designs for new toxic compounds, including known chemical warfare agents like VX. So even in cases where a system, if it were to perform as desired, would have an unquestionably positive benefit to society, the harms are still there, and they are massive.

CHRIS GILLIARD: What troubled me about that story—I read at least six accounts—is a line, not verbatim, that came up consistently in all the accounts, where the researcher said, "We had never thought about this until now." This is a little bit of the problem with some of the ethics discussions. I am not in that field, and that is a thing that would have immediately occurred to me. So I wonder a little bit about the insight of the people who use these systems, design them, employ them, and work with them, because this seems to be a very obvious thing—and it's easy for me to play Monday-morning quarterback—that you would think of pretty early on: What are the potential harms?

ARTHUR HOLLAND MICHEL: Perhaps it's partly that these perceived use cases have such an allure to them, and that allure is so kind of intoxicating that it potentially could blind one to any of the downsides. The idea of finding a technical solution to this vexing challenge is so irresistible that it's hard to see it as being anything but.

At the same time, as you mentioned, there are also these stories of a system providing another kind of irresistible solution: solving a problem that nobody really asked for a solution to. I was spending a lot of time today thinking about this new OpenAI system that can make paintings or images if you give it a prompt. You say, "a koala playing basketball," and it will produce a picture of a koala playing basketball. What I have been trying to get people to tell me today is: How does this benefit humanity? That is the mission statement of the organization that created this system.

One person—in good faith and earnestly, and I appreciated this interaction—said to me on Twitter: "Well, you know, it would be really useful if the night before you have to give a PowerPoint presentation you need to create some graphics." Okay. Just don't procrastinate? I don't know. Make your PowerPoints maybe more than a few days in advance.

It is so lovely speaking to you, and we have been laughing a lot in this conversation, in part because there is no other coping mechanism I can really think of for this stuff. But we have also been talking about some really uncomfortable questions: the notion that AI ethics as broadly understood is kind of meaningless in the absence of measures that address underlying structural inequality, something that AI ethics, at least on a policy level, doesn't address; and the issue, as you mentioned earlier, that these stories keep recurring in part because that is how companies make money these days, because it's a data economy.

All of which is enough to give one very little hope for the future, yet I was wondering if in the last couple of minutes you could tell me: Are you optimistic? Is there anything that does give you hope or reason to be optimistic about the future?

CHRIS GILLIARD: I am by nature a pessimist, but there are very worthwhile reasons to consider that the way things are is not the way they have to be. I point to a couple of things.

I point to the ways in which people overall in the last several years have become much more aware of how these systems touch our lives in so many different ways. Whether it's what kind of job you get, if you can get a loan, if you are considered a criminal, and on and on, there is no one whose life is unaffected by these systems.

As people have begun to see that, they have become more active in not only recognizing it but using the levers available to them to question these systems and, in some cases, overturn them, get rid of them, and things like that. That is a thing to be optimistic about. An example I point to consistently is Atlantic Plaza Towers in Brooklyn, where the landlords installed a facial recognition system in the apartment complex, and the people who lived there did not want that. They banded together and got rid of it.

The other thing is that there is a history of recognizing that systems, compounds, chemicals, and products are harmful and then regulating them. We have seen it with cars. We have seen it with asbestos. We have seen it with food. So many of these things that are called AI, artificial intelligence, and machine learning (ML) are new to many of us, including the people who make laws, pass legislation, and things like that. But there has been movement toward developing mechanisms to regulate these things better than we do now.

Again, the example I consistently use is food and drugs. For the most part, if I invented a new kind of food, opened a restaurant, and poisoned a bunch of people with my first set of new meals, I couldn't just say: "Okay, now I have this burger 2.0 that poisons 30 percent fewer people." There are mechanisms in place that would hopefully prevent me from doing that in the first place and, failing that, would prevent me from continuing to do it.

In terms of algorithms, in terms of AI and ML, there are not a lot of those kinds of mechanisms in place, but there could be. Even though I am not an optimist, I still recognize that that is a possibility, and I think it is a very real one as people understand more and more how these systems work and influence our lives.

ARTHUR HOLLAND MICHEL: That also points to, I guess, an underlying source of potential optimism in all this: we have no choice but to be optimistic. Just as society had no choice but to do something when all these new chemicals were coming out, or when cars were hitting the roads and people were dying needlessly, we are at a point now where it feels like we have no choice but to act on these issues. It is no longer just an imaginary set of issues. These are real. That is perhaps a strange way of couching what keeps one optimistic or hopeful, but it certainly is what gets us out of bed in the morning.

CHRIS GILLIARD: Yes. Unfortunately we're stuck with a lot of people who are basically saying seatbelts shouldn't exist, speed limits shouldn't exist, food safety regulations shouldn't exist, and water cleanliness regulations shouldn't exist. There are actually people who believe that in those specific cases, but there are also people who are arguing for the equivalent of that in algorithms. It is a battle, and it is not one that I think can be definitively won or lost, but it is a battle that we definitely need to have.

ARTHUR HOLLAND MICHEL: I will just add that a source of optimism for me and I think what should be a source of optimism for our dear listeners and for a lot of people out there is that there are people like yourself who are working day and night on these issues, often losing sleep over them, and I think if we do find a way to work things out it will be in no small part thanks to your efforts.

I want to thank you for that and also thank you for joining us today for this discussion. I have really enjoyed it. We will hope to have you back sometime soon with more stories of this dystopian future that is being engineered for us every day.

CHRIS GILLIARD: Absolutely my pleasure. Thank you very much.
