AI & Equality Initiative: Algorithmic Bias & the Ethical Implications

Dec 21, 2020

In this AI & Equality Initiative podcast, Senior Fellow Anja Kaspersen speaks with three researchers working with the University of Melbourne's Centre for AI and Digital Ethics about bias in data and algorithms. How can these types of biases have adverse effects on health and employment? What are some legal and ethical tools that can be used to confront these challenges?

ANJA KASPERSEN: Welcome back to our listeners. My name is Anja Kaspersen. I am a senior fellow with the Carnegie Council and co-lead and co-host with Wendell Wallach for the new Artificial Intelligence & Equality Initiative. In this podcast we are tapping into some of the important insights and deep thinking going on globally directed at understanding the innumerable ways in which artificial intelligence (AI) impacts equality for better and worse, and how we can address and respond to both the opportunities and potential risks of our increasing reliance on new digital and algorithmic technologies.

This week I am joined by a fabulous team of experts to help us better understand the depth and breadth of the impact of AI on equality, continuing our mission to unpack what exactly we mean by equality, especially in the digital context, as well as equality of life itself.

Let me quickly introduce my guests on this episode. Leah Ruppanner is an associate professor of sociology and co-director of The Policy Lab at the University of Melbourne. Piers Gooding is a Mozilla Foundation fellow and researcher, also at the Melbourne Social Equity Institute and University of Melbourne Law School. And Kobi Leins is a legal scholar and a senior research fellow in digital ethics at the School of Engineering as well as a nonresident fellow of the United Nations Institute for Disarmament Research.

Thank you, everyone, for joining me today.

Kobi, you are all working with the new Centre for Artificial Intelligence and Digital Ethics (CAIDE). On your website—I did a little bit of homework—you state that your core mission is "to facilitate cross-disciplinary research, teaching, and leadership on the ethical, regulatory, and legal issues relating to AI and digital technologies." Can you tell us more about who you are and what you research and teach within the Centre and its activities?

KOBI LEINS: Thanks so much for the generous introduction, Anja. It is a delight to be here despite the time. Thanks for hosting us.

The Centre for AI and Digital Ethics was set up last year by Jeannie Paterson and Tim Miller, who were trying to bridge a gap between the worlds of computer science and the ethics conversations that were happening. In doing so, they have created an umbrella under which Leah and Piers and a number of other fellows fall where we are all doing work on digital ethics from different perspectives and often interweaving our expertise. So alongside me sit two colleagues who are actual ethicists, Marc Cheong, who is a tech expert but also an ethicist, and Simon Coghlan, who unfortunately could not join us today, who is a former vet and also a philosopher. Through our work we aim to respond to some of the problems that are arising but not from a single discipline.

In terms of the teaching, we have rolled out an interdisciplinary subject in the law school, and we are planning to roll out many more, including in the computer science department, teaching computer scientists how to program ethically and even just to contemplate what they are building and what impact it might have, which is not necessarily happening in a lot of computer science schools.

ANJA KASPERSEN: Thank you so much, Kobi, for that quick introduction.

Piers or Leah, anything you would like to add in terms of your involvement in the Centre's activities and your line of work?

PIERS GOODING: I am happy to start. I have a background looking at the law and politics of disability and an interest in international human rights law related to disability and a strong focus on the politics of mental health.

In recent years I have been interested to see the developments relating to the use of AI, machine learning, and so on in mental health settings. That could relate to clinical settings, to community-based service provision, to the provision of Social Security, or even to criminal justice settings.

ANJA KASPERSEN: You mentioned your work in disabilities and particularly in mental health, Piers. I read a recent article in which you stated that the COVID-19 pandemic is not just a physical health crisis: the duress and the measures taken to respond to the pandemic have, in some ways, also caused a mental health crisis, with online support for people suffering this distress about to become more important than ever. The infrastructure for digital mental health care has accelerated as a result, and with it, you say, come steep risks. Can you share with us what you mean by that?

PIERS GOODING: Sure. Mental health, like many other areas of life, has radically digitized and virtualized under the COVID-19 pandemic conditions. In the mental health context we are seeing a lot of regulatory authorities loosen their rules on digital technologies being used in the provision of health and social care services, and that is leading to something of a rush to get people support via telecounseling and digital platforms, whether apps or websites. It has been a sped-up process that has seen governance for those technologies organized in a matter of weeks and months, when it would otherwise have taken years.

There are steep risks in terms of people's sensitive personal information relating to their distress, which could have significant implications in many other areas of their lives. With highly personal, sensitive data, people face risks of discrimination in the financial context, in relation to loans and insurance, as well as health care discrimination and employment discrimination. It is extremely important to start discussing these matters, because we are seeing mental health websites receive record numbers of visitors, not necessarily with the protections that the public might expect from mental health services.

For example, Privacy International released a report showing that over 130 of the most popular mental health websites in Europe contained third-party elements that shared personal information with multiple data brokers, transferred information to Big Tech providers like Google, Amazon, and Facebook, and were likely in breach of the General Data Protection Regulation (GDPR). So I think there is a lot of work to be done in this area. That is just one area in which algorithmic technologies are being used in a mental health context, but I am very interested in drawing these issues out and promoting public debate.

ANJA KASPERSEN: You are saying there is essentially a risk because we are not guaranteed that the data we submit remains anonymous. As you said, it can be used to build patterns around our searches and our behaviors. But there is also, of course, a risk embedded in the situation you are describing: it might discourage people from seeking out online health providers at a time when many people find that their usual health providers are not able to see them given the pandemic. That could create its own set of risks, with people not getting the assistance they need, because there is a revenue stream built around them that goes beyond what most people would expect or even know about. That is what you are referring to.

PIERS GOODING: Indeed. You put it really well. I would highlight, too, the real potential to improve equity by using technologies of this nature to reach people who might otherwise have difficulty accessing services: not just people under pandemic conditions, where lockdown might mean they are not able to leave their homes, but people who have mobility issues, people who are in rural or remote areas, and people for whom it is otherwise difficult to leave their houses.

There is enormous potential, but there are also risks, and this is being seen in multiple countries. In Canada there was an example of police breaching privacy laws by sharing non-criminal-related mental health data about citizens with a national database that was in turn shared with U.S. Homeland Security and essentially used to profile people at the border and refuse entry to Canadians with a history of engagement with mental health services.

But there are also developments in places like the United States, where the major providers of telecounseling are considering whether to arrange geolocation capability for anyone who calls the major suicide help lines. Ostensibly that is to get help to people in crisis, but there has been very little public debate about the implications, and exactly as you say, people may be less inclined to reach out for support if they are concerned that their sensitive personal information will be leveraged against them in the future in some way.

ANJA KASPERSEN: And that marketing will be targeted against them as well, responding to their specific health needs.

PIERS GOODING: Indeed. We have seen Amazon start up its online pharmacy in the United States, and it is offering medications at well below market price. There have been arguments that the reason they are doing this is that it is not the money they will make off the medication that is important; it is the value of the personal data about the users and consumers of these medications.

ANJA KASPERSEN: Very important research here.

I will come to you now, Leah, because in addition to your affiliation and your work with the Centre together with Kobi and Piers, you bring deep expertise on the impact of digital transformation and on how we think about factors such as gender in a digital context, particularly biases around gender. You recently co-authored a study which clearly demonstrated that human gender biases are mimicked and exacerbated by AI, in this case AI used for sorting résumés, raising the entry barriers for women in particular. The implications of your findings are obviously much more far-reaching, especially as we tend to speak of these AI models as impartial. Can you share with us more from this research, and also your involvement with the Centre and how it all comes together?

LEAH RUPPANNER: I am an expert in gender. I am less of an expert in AI, but what has happened is that I had a student who was really interested in doing this project around the ways in which hiring algorithms may discriminate against women.

We approached CAIDE because when you don't know what you don't know you need to find the people who have clear expertise in those areas, and this was a location that was clearly emerging as a world leader in bringing together an interdisciplinary group of scholars who can share their expertise to tackle big questions.

We came together with a group of computer scientists and asked the question: How does gender bias get introduced into a hiring algorithm? We know from previous research that companies like Amazon have had high-profile cases in which their own data showed that their hiring algorithm was effectively discriminating against women, but what we don't know is the mechanism through which this happens, because the problem with algorithms is that they sit behind black boxes. We don't know the proprietary decision-making logic at work in, say, Monster.com, Seek, or any of these big search engines that many people engage with day to day to look for jobs. This is the future. The future is using this kind of algorithmic association to identify and match employers and employees.

We wanted to work through the process of how this could happen and then see how quickly or how easily we could introduce gender bias into our hiring algorithm. We brought together a group of experts, people who had experience hiring, and we gave them real CVs. Half the CVs had the real genders of the people, and in half of them we switched the gender, based on the idea that if you see a man's CV but with a woman's name it might trigger some sort of unconscious bias that you may introduce into your decision-making process.

We brought together this group of human panels and we asked them: "Okay, what do you like about that?" We just listened: "What do you like when you are sorting through these CVs? How did you make your decision?"

They did not talk about anything controversial. They said: "When I look at a CV, I am ranking it based on experience, whether the CV matches the keywords, and the level of qualifications. So these three things are the ways in which I am ranking the CVs."

That seems totally reasonable. If I said to you, "Okay, people are going to look at CVs and rank them by experience, keyword match, and qualifications," you would probably say to me: "Okay, that seems really fair and equitable."

We then built a classifier. We actually built an algorithm to sort the data, and what that allowed us to do was see if we could match the human decision-making process. Could we get the classifier, the algorithm, the hiring model, to match what the humans did?

For some of it, it did well, and for others it didn't. What we found was that as the jobs had more men in them—we picked jobs that were male-dominated like data analyst, finance officer, which is actually gender-neutral in Australia, and human resources professional—our classifier did a worse job of predicting what the humans decided, particularly around that finance job.

While our human panel was saying, "Okay, we're using experience, qualifications, and keyword match to rank the CVs," when we actually created an algorithm that ranked the data, ranked the CVs, took out the information, created a classifier, and ranked them based on that, we did not find that we could match the human panel. In fact, those three characteristics did not predict how the humans ranked them.

What that suggested to us is that they were looking at finance and thinking male because our male candidates got bumped up. So they are going finance-man, finance-man, and ranking the CVs higher as a result. They are not doing it consciously. They are not sitting in the groups going: "You know what? This is a finance job. We really think men are good at math, so let's bump the men up." They are not doing it in a conscious way. What is happening is that in their minds they are thinking they are using these three characteristics, but what actually is happening is subconsciously they are pushing that male CV up, even when it has female characteristics. They are moving that up the chain without even being conscious.
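To make the method concrete, here is a minimal, hypothetical sketch in Python of the kind of comparison Leah describes: train a simple model on the panel's stated criteria and check how well it reproduces the panel's decisions when an unstated gender signal is, or is not, influencing them. The feature names, synthetic data, and library choices (NumPy, scikit-learn) are illustrative assumptions, not the study's actual code.

```python
# Illustrative sketch only (hypothetical data and feature names, not the study's code).
# Question: can a classifier that sees only the panel's stated criteria
# (experience, keyword match, qualifications) reproduce the panel's decisions?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400

# Stated criteria, each scaled 0-1.
X = rng.random((n, 3))                 # experience, keyword match, qualifications
male_name = rng.integers(0, 2, n)      # 1 = CV presented with a male name

def panel_labels(bias_weight):
    """Simulate panel shortlisting driven by the stated criteria plus an
    unstated gender signal of strength `bias_weight`."""
    score = X @ np.array([0.4, 0.3, 0.3]) + bias_weight * male_name
    return (score > np.median(score)).astype(int)

for bias_weight in (0.0, 0.5):
    y = panel_labels(bias_weight)
    # The classifier never sees the name or gender, only the stated criteria.
    acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
    print(f"unstated gender weight {bias_weight}: "
          f"stated criteria reproduce panel decisions with accuracy {acc:.2f}")
```

When the unstated gender signal is present, a model built on the stated criteria alone reproduces the panel's decisions noticeably less well, which is the kind of mismatch used to infer that something other than the stated criteria was driving the rankings.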

I will take a big world perspective. I promise I am almost done.

What this shows us is quite easily how you can introduce gender bias. If we have humans on the panel, they can quite easily introduce gender bias unconsciously into the design of the algorithm.

But we also know that algorithms are not just using humans. They are using data from a world that is biased, so there are two ways in which women, and minorities more broadly, are going to get knocked out or disadvantaged: through human bias that people are either aware or unaware of, and through machine learning on the data of a world that is stratified by race, gender, and class.

ANJA KASPERSEN: Can I ask you a follow-up question? What is really interesting, of course, is the transference to other domains. You spoke specifically about how this impacts recruitment and how human biases, conscious or subconscious, even through the coding itself, get exacerbated or amplified when they are added into an algorithmic process.

There have been some studies from the medical field, for example in clinical diagnostics, suggesting that keeping the classifiers you mentioned earlier, and even tampering or tinkering with them, does not always help in managing this issue of bias. In fact, by modeling data on old classifiers, even where the bias does not show through, or simply by sticking to the classifiers you are used to, you amplify and exacerbate inequalities through the algorithmic process. To use AI in this context we really need to question or challenge the entire framework of how we classify data, by humans and by machines.

Is that something you also found in your research? It is very interesting that your research moves from the theoretical question of what algorithms could potentially be used or misused for to what actually happened in your case, but I am assuming you also looked more widely at different kinds of domains.

LEAH RUPPANNER: We have a Ph.D. student, Sheila Nejev [phonetic], who is doing exactly this type of work, testing some of these theories empirically with the big search engines. We are using one example to show you: here is how it can happen; we build this model, the model does not predict the human decisions, and the result is that the bias could become amplified.

Keep in mind, too, that if we treat qualifications or experience as a key indicator of success in a job, something we think is logical and fair, and women inherently have less experience because they take gaps in employment for caregiving, then using that metric can amplify the bias, because women and men have very different careers based on the caregiving demands that women carry and men, per se, do not.

But we are looking at this on a bigger scale with the bigger algorithms, and we are finding that gender bias is emerging in algorithms like Seek's and in hiring algorithms in the United States, so these are pretty clear indicators that this is not a small problem isolated to our one domain, and that it is exacerbated, as you say, Anja.

I think what is interesting about this is that there have been pretty high profile cases showing that there is some degree of bias emerging even amongst the big search engines, say, for example, Google and Amazon. Exactly as you said, they are using their existing classifiers, assuming that they are neutral, and finding that they are actually not. They discriminate or have bias in certain concrete ways.

This is not a small-scale problem. It is a big problem, and it seems to be something that people are aware of even at the highest levels of the companies developing the technology, which shows you that this will continue to be a problem. It is perhaps one of the biggest problems for AI and for the movement toward machine learning and face-recognition software, and it needs to be addressed into the future to ensure these systems do not come at the expense of certain groups.

ANJA KASPERSEN: Leah, to extend what Piers was saying about how some online mental health providers' data, even sensitive data, has been used and monetized: there is obviously a big economy behind the search engines you are referring to, so you could assume that the way the search engines are set up, the way the classifiers are organized, and the way they are used perpetuate certain patterns. What is driving this? I am not saying this was necessarily reflected in your report, but my impression is that you reflected a lot upon it in your own research. Where is the government? Where is the industry? Who is benefiting from this? What is needed to be able to challenge some of this framework and maybe rethink where we go from here?

LEAH RUPPANNER: The number one thing that is likely driving this is clicks. The algorithm is just trying to match information to what the consumer wants: What are people clicking on? What have people clicked on in the past?

I was speaking to an expert at the University of Zurich, and she said: "We have to assume the algorithms are just kind of stupid." They are not smart. They are just trying to match previous predicted behaviors to the outcomes. They want to connect an employee to an employer based on a previous history of clicks.

If we assume that algorithms are not thinking and acting beings but are responding to two things—(1) human behavior, and (2) the data within our world—and we know that the data within our world has bias, and we know from our study and many others that humans can introduce bias in meaningful and less meaningful ways, intentional and unintentional ways, then it could just be the clicks; it could be that people think they are being logical and are using this kind of behavior to create a classifier. I don't think anyone is necessarily coming out with bad intentions and thinking: You know what? Today I am going to develop a classifier that discriminates against women. That's not a conscious component; it's just part of the way we behave in ways we don't even anticipate. If we know that this exists, if we know that this is the way the world works, then what are we going to do about it? Then we have a call to action.
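As a toy illustration of the click-driven dynamic Leah describes, the following hypothetical simulation (not from the study; all numbers invented) shows how a system that simply recommends whatever was clicked most in the past can turn a small historical imbalance into a large one, even when the two candidate pools are equally good.

```python
# Illustrative sketch only: a "show what got clicked before" feedback loop.
import numpy as np

rng = np.random.default_rng(1)
clicks = np.array([105.0, 95.0])        # historical clicks for pool A vs. pool B
click_rate = np.array([0.5, 0.5])       # both pools are equally likely to be clicked

for _ in range(5000):
    # 90% of the time, show the pool with more historical clicks.
    shown = int(np.argmax(clicks)) if rng.random() < 0.9 else int(np.argmin(clicks))
    # Users click at the same underlying rate regardless of which pool is shown.
    clicks[shown] += rng.random() < click_rate[shown]

share = clicks / clicks.sum()
print(f"exposure share after the loop: A = {share[0]:.2f}, B = {share[1]:.2f}")
```

The starting split was roughly 52/48; after the loop the exposure is heavily skewed toward A, purely because exposure followed past clicks rather than any difference in quality.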

The first thing is to understand that it does exist, to acknowledge it exists, just to say: "Okay, hey, there is disability discrimination. There are ways in which people's data are being monetized, there is unconscious bias coming in, there is machine learning on biased data that is amplifying bias, and we are using classifiers that did not work in the past. We need to understand all the mechanisms through which we are doing that."

Second, we need to be able to measure that in a concrete and empirical way.

A third thing is that there needs to be transparency in algorithms so that people across disciplines can work on them and understand what is going on. They need to no longer be black-boxed but opened up in a way that lets people understand how the bias emerges.

Then you actually need guardrails, and Kobi would probably have great insight into this. We have a legal infrastructure that says we cannot discriminate in Australia and in the United States as well, but has that legal infrastructure been adapted to the new ways in which we mine data or interact with the world? I would say probably not in a way that is meaningful, that has teeth, and that allows there to be serious consequences if we keep documenting a degree of discrimination. How do you create a legal structure that actually has an enforcement capability that is meaningful so we are not saying: "Oh, this is a problem, but it's too big to address," or "This is a problem, but we don't know what's happening," or "This is a problem for Google and the big companies but not for governments."

There are alarm bells going off all over within industry saying: "Big problem, will continue to be a problem, amplification of a problem, certain groups are disadvantaged." So we can't say we don't know. We do know, and now what do we need to do to solve it?

ANJA KASPERSEN: That's a very good segue to Kobi.

For those of our listeners who are not aware of your work, in addition to your work on the legal aspects of digital ethics you are also a world-renowned expert in cybersecurity, biosecurity, and nanotechnology, just to add all of these new and emerging technology fields.

Beyond the personal privacy issues that both Piers and Leah referred to, there is also a bigger question at play, which is the data about our biological selves, which are being shared in ways and at a pace unthinkable only months ago, while the big data and platform companies increasingly shift their focus to become the health care providers of tomorrow. As Leah said, there may not be bad intent going into the creation of these algorithms, but I have heard you say many times that you may create an algorithm to be neutral, yet once you embed it or deploy it, it carries a certain political agenda.

Can you share some of your reflections on what we have discussed until now but also based on your own work and looking also into the security aspects of this?

KOBI LEINS: Thank you, Anja. That's a very generous introduction. As you know, I come to this from a really unusual angle. My background was looking at biological and chemical weapons and thinking about risks and misuse and dual use, so I come to this new technology with a different perspective. I am particularly interested in the national security implications but then more broadly how society is changing, as you said.

Interestingly, I think you have touched on one of the issues that is dear to my heart. So often when we are talking about these topics, we are actually talking straight past each other. I was in an international standards organization meeting the other day where we were talking about ground truths, and someone asked, "How can you establish what a ground truth is?" And the technical expert said, "But no, a 'ground truth' is a technical term." So with terms like fairness, transparency, or ground truth, even if we think we understand each other, a lot of the time we don't.

So when technologists and those outside of the technological fields are trying to engage and trying to speak meaningfully, it's not that they are not hearing each other, but they are not understanding each other, so there is a big gap in the knowledge that those different groups have about each other's work.

There is also a difference in how they are valued. One of the more recent pieces I worked on, again with my colleagues Marc Cheong and Simon Coghlan, looked at diversity, specifically gender diversity, across computer science communities. Another misapprehension is that computer scientists are one homogeneous group. They are not. We broke them down into approximately nine different subcommunities of computer scientists, and they had very differing degrees of awareness of the potential risks of what they build.

My hypothesis is that it is potentially based on the diversity within those groups. We are exploring that now. We have crunched some numbers, allowing for a margin of error obviously, but what we are really interested in doing is talking to those communities about what it is that triggers concern.

In some communities, such as the natural language processing community—when you think about a word search or a lot of the chatbots, the tools that Piers is talking about, that is natural language processing—a lot of them are thinking about how their data can be reverse-engineered and how you can actually identify people using the data, just by your typing pattern or certain phrases that you use. If you are thinking about putting data into a chatbot, there is a fair chance it can be reverse-engineered to figure out who you are, even if all the safeguards are in place—which Piers said they often are not—so there is that next level of protection to think about. Some of the communities are actively thinking about these risks, and some of them are not.
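To make the reverse-engineering point concrete, here is a minimal, hypothetical sketch (invented snippets, standard scikit-learn tools) of how even a crude model over character n-grams can link an "anonymous" message back to a known writer by style alone. Real re-identification techniques are far more sophisticated, but the principle is the same.

```python
# Illustrative sketch only: linking text back to a writer from style alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented snippets from two "known" users with distinctive writing habits.
known = [
    ("user_a", "honestly i reckon its fine, cant complain tbh"),
    ("user_a", "reckon ill just wait n see, cant be bothered tbh"),
    ("user_b", "I believe this is acceptable; however, I remain uncertain."),
    ("user_b", "However, I believe we should wait; the outcome is uncertain."),
]
texts = [text for _, text in known]
authors = [author for author, _ in known]

# Character n-grams capture typing and phrasing habits rather than topic.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, authors)

anonymous_message = "cant say tbh, reckon its fine"
print(model.predict([anonymous_message])[0])   # most likely "user_a"
```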

In not just the health sphere but in any of these spheres, what we tend to think about and focus on is the immediate problem. We saw protests on the street in the United Kingdom about the International Baccalaureate, where people's marks were being established by an algorithm. We have seen protests in Australia about Robodebt, which wasn't even really AI; it was just a dumb algorithm that did what it was designed to do. But what we are not talking about are the structural inequities. What a lot of these tools are doing is doing at speed and scale what they are meant to do, which is maintaining existing power structures.

There are a whole lot of questions, but also, who is going to get these tools? On the one hand, remote communities will have better access, but are we going to have further class systems, where those who are remote and less privileged will receive the technology but the privileged classes will still have access to humans and better care? Those sorts of concerns remain, even if you have a lot of the safeguards that Piers was talking about.

In the life sciences I could not agree more, Anja. The amount of data that is being collected, collated, and connected is terrifying. We are not having the debates, as Piers said, about where that data should be going, how it is being stored, and what it is connected with. The differentiator for genetic information is that it is also relevant for our children. It is not just us, it's the next generations. We might be sharing data about ourselves that we cannot retract and that cannot be modified.

One of the things I think is the most obvious but is often forgotten is that these tools are just tools. They are tools that amplify the power that exists, and the fact that they are inscrutable or difficult to inspect makes them all the more powerful, because you have a very small elite. I am going to refer to Foucault: "The power is in the knowledge." You have to have knowledge of how algorithms work to even interrogate them.

If you take that tool and wield it, again at speed and at scale, you end up with a power imbalance. Take policing, for example. If you are automating policing, you are going to find more people committing crimes because you are there seeing them. But we are only seeing these tools rolled out within certain demographics, and that is deeply concerning because it is entrenching the existing inequalities, and it is amplifying them without recourse and without the ability to criticize them.

I cannot help but refer to Sheila Jasanoff, thinking of the science, technology, and society community: We are lacking sociotechnical imaginaries. We are not sitting back and saying: "Well, what is the world we want to have? What are the tools that we have at our fingertips? What could we be doing?" We are plugging tiny holes, and so many of the technological solutions that we are providing are really just quick fixes that continue to support a system that is not equitable and that is not fair.

For example, when we are talking about chatbots or apps, maybe what we need is more funding to the health system. In Australia we have been able to basically stop COVID-19 because we have employment benefits. We have benefits that have meant people have not had to go to work and have been able to stay home, and we have seen the upside of that sort of societal decision, that sort of culture, a community that can respond to a health crisis.

So from a national security perspective what I would like people to do is step back and say: "What is the world that we want? Who is making these decisions? Are they making them on our behalf, or are they making them based on some sort of philosophy that we don't support?"

That comes to another point that we have not touched on but that I have to mention, which is misinformation and disinformation. I was reading this week that so much of the misinformation out there is spread not just by companies or private actors but actually by governments, which are hiring marketing managers to spread misinformation and disinformation, and that is fundamentally threatening to democracy. I think it is a question we might have to save for another podcast: how these technologies amplify misinformation and disinformation to the point where a lot of people can't engage with this topic because it feels too overwhelming and too huge. Kudos to you for trying to crack it open.

ANJA KASPERSEN: Your last point is very interesting, and I agree with you. It maybe is for another podcast, but it is nevertheless a very important point and one that often gets overlooked: although, as Piers was saying, there are ample ways these new technologies can be used to change the world for the better, there is also a high risk that they might just replicate existing power patterns. To be able to challenge and engage with those patterns requires certain knowledge.

As Leah was saying, it is not necessarily that those replicating these power structures tell you that this is what they seek to do; it becomes the inevitable consequence of not fully grasping and understanding the sheer power of these technologies to replicate our inherent biases and the power structures that lie in our historical data, in the way we do things, and in the way we automate decision-making.

Is that something you have thought about and brought into your work as well?

KOBI LEINS: Yes, absolutely. I would go a step further and say I think there is a baked-in assumption that we are actually going to challenge societal inequities. AI in a way is just a lens being used to have a lot of conversations that we arguably should have had a long time ago. We know that there has been discrimination, inequity, and lack of equality, to frame it in terms of the work that you are doing, but the use of these technological systems is forcing us to have difficult conversations about what equality actually should look like, and unfortunately a lot of those conversations are not being had, with the technologies being pushed through to either maintain or exacerbate the status quo.

I think that is where the tension lies. I think that is why for so many years there have been so many ethics strategies, many of which as we know are inconsistent, and the language isn't necessarily enforceable. Now we have standards. The standards world is waking up and saying: "Hold on. We need to be having standards around these tools." The next step will be regulation.

All of which is useful, and as Leah said we need all of these tools. The governance aspects are all important, but I cannot help coming back to the point, in the work that I am doing, that if you don't get it right in the build, if you don't have those who will be affected by a system in the room, contemplating its use and having input from the very beginning, these systems will inevitably do harm.

I know that is going to be controversial, but I think when we are talking about diversity we are not just talking about diversity in writing papers, we are not talking about diversity in different aspects and different perspectives, we are talking about inclusive design, which Piers will be very familiar with from a medical perspective. But we actually need the people in the room who will be affected.

So we need those people there designing the tools. We need ethical frameworks of those creating the tools. We need standards that govern the tools. We need laws that can respond when they go wrong. And we need people to have the right to know when they are being used. We have not even touched on this.

So many times in so much of our interactions we do not even know where we are engaging with AI. We do not know when it is being used. I know in a number of governments it is being rolled out and has already been rolled out in many departments for many years, and people do not even know it's there. So how can you question or challenge a decision that is being made?

Then we get into the other small rabbit hole: even when people know decisions are being made by AI, they tend to believe them. There is magical thinking around computers. "The computer says no" is a joke because it's kind of true—people are much more likely to take no from a computer than they are from a human. That whole spectrum needs to be addressed, and I think the international community is slowly coming around to it.

But the relationship between companies and governments remains tense. A lot of these companies have far bigger budgets than the states they are dealing with do, and that shift in power globally is a tectonic one. Whoever has the technology will have the power. It's important.

ANJA KASPERSEN: And the data.

KOBI LEINS: And the data.

ANJA KASPERSEN: Leah, come in. I also wanted to go back to Piers because of this issue around inclusive design.

LEAH RUPPANNER: Kobi raises these interesting points. The technology is there to make money, and that is the goal. Reinforcing inequality is an artifact. It's like: How do they get from A to B as fast as possible, automating systems to make money? The challenge is that thinking about what the world will be is secondary to the main priority, which is generating data and generating income. I think you raise a great point about that.

One of the things I have been grappling with is there is this educational gap even in what AI is. We are sitting in a room talking about machine learning or talking about facial recognition or algorithms, and there is a whole segment of the population that does not even understand those concepts. There is a knowledge gap, and women often sit in that knowledge gap, so they are sitting within automated systems but may or may not know it, and they cannot advocate for themselves because they do not have the knowledge base. Not that they aren't smart, but the technology came, and we are living within that world.

One of the things we have been working on is a set of MicroCerts where we talk about what AI means for women, bringing it back to step one: "Okay, what is automation? How is that different from artificial intelligence? What is machine learning?" Women in particular, who are often outside the science, technology, engineering, and math (STEM) fields, or who often are not thinking about these concepts, then have the vocabulary to understand the world in which they are living.

AI is not in the future, it is here in the now, but you may not even be aware. You cannot advocate for yourself if you do not have the basic knowledge of what you are living in, and that often is gendered and classed too, so that knowledge gap sits at the intersection of race, class, and gender, and often the bigger inequality is there.

ANJA KASPERSEN: I absolutely agree with you, Leah. Those are very important points. Who are we designing this for? Who do we want to empower to raise the right questions? When do those questions need to be raised in terms of the process of deploying any form of technology because they too might differ depending on where you are in the lifecycle of the algorithm?

Piers, what does equality look like, and how do we get to that inclusive-design thinking to overcome shifts in power and skewed patterns of power that get exacerbated or replicated through these algorithms? Is this something that is featured in your work? What are your thoughts on this? What are the big tradeoffs that we need to come to grips with?

PIERS GOODING: In the mental health context there is a huge risk that the people most affected will be left out. My colleague Tim Kariotis and I recently undertook a survey of all scholarship concerning algorithmic technology in the mental health context, particularly online initiatives. We found about 130 papers that fit our survey and identified that only 10 to 15 percent of them actually engaged with ethical and legal issues, and that was anything from one sentence noting them in passing to one or two paragraphs. Not a single one had engaged with those who use services, those for whom the technologies are ostensibly designed: people who are experiencing distress and seeking support.

I think that is a massive problem. I don't know if it is a problem of the way journal articles are structured at the moment, where ethical requirements simply require sign-off from committees that are more concerned with covering liability of the university. I don't mean to be cynical, but I just mean to say that participatory ethical principles are not necessarily the top priority of ethics committees and institutional review boards. I do think that is a serious problem.

That is not to say that inclusive research is not occurring in industry and in scholarship; it is just to say that the survey revealed something that was shocking even to me about the scale of overlooking the people who are most affected, in my area at least. Drawing the frame outward to some of the structural issues that Kobi and Leah have described, many of the algorithmic and data-driven technologies in mental health involve monitoring, and they do so in ways that could affect high-stakes decisions for people, for example concerning welfare or Social Security payments, sentencing, medical treatment, and access to services. These technologies are being applied in areas where there are complex constraints on people's actions, including limited choices and other cumulative effects of disadvantage. That could be families or individuals facing housing insecurity, returned veterans, people with addiction, ethnic and racial minorities, and so on.

One researcher I have been looking at is Lana James from Canada, who points to these concerns. She asks things like: "How will data that is labeled as 'Black' and 'poor' and 'disabled' or all three impact a person's insurance rates?"

As Leah said, I do think some of the Big Tech companies are grappling with this. I know several natural language processing experts at Google are grappling with some of the labeling of data concerning disability. They identified biases that assigned what was described as "toxicity" to terms that referred to disability, particularly terms like "mental illness," and that would therefore lower the outcomes of searches involving those terms for people. I think that is a real risk in an employment context.

As Leah and Kobi have pointed out, current legislation in many places around the world does not appear to protect people in this area, certainly not health service recipients or patients, from this type of algorithmic discrimination.

I think health data is fundamentally changing to the point where inferred data is what is being used in these kinds of interactions or what some computer scientists and clinical experts are describing as "invisible" data, where a person's swiping patterns, typing patterns, or mobility detected by devices is being used to infer some kind of health condition. I think that brings enormous risks as well as difficult questions about how knowledge is produced, which goes back to Kobi's point.

One last thing on how to involve affected communities. What Kobi pointed to is the kind of work being done on algorithmic accountability, where the questions being asked are not just about how to involve affected communities in auditing algorithms and understanding how they work, but about the fundamental questions of whether algorithmic technology should be used for particular purposes at all, and if so, who gets to govern it. That is some interesting work that Frank Pasquale and others are doing which I think has great implications for my area and for all areas concerning algorithmic technologies.

PIERS GOODING: One thing I could say about the rise of the big technologies in the mental health context is that it brings together two highly individualizing forces. I think the mental health sciences have traditionally drilled the focus down to the individual, and similarly these algorithmic technologies and a lot of the discussion about privacy drill the focus down to an individualist, possessive account of data.

But I think a great potential for an emancipatory approach or something to draw the focus out to the social level is to think about how these technologies could be affecting the mental well-being or health of communities. By doing so, it draws the focus away from efforts to monetize the data of individuals and draws the focus toward efforts that could be taken collectively to make sure that technology is being used for the good of all.

You point out that a lack of access to online domains could actually impact people's well-being, and there are valid proposals at the policy level to ensure that people have access to good Internet simply as a matter of general well-being. That could apply to people experiencing various levels of disadvantage. I do think there is real potential there to think about it more in social terms.

ANJA KASPERSEN: What would be the key takeaways that you would like the listeners to reflect on going forward?

I will start with Leah, then Piers, and then Kobi.

LEAH RUPPANNER: From my research I think the key takeaways are twofold. The first is that you are already in the digital AI world, so thinking that this is something coming in the future—it's not; it's the present. The second is that basically our digital world is reflecting our lived world, and our lived world is an unequal place.

I think Kobi actually said this best. What you are seeing is that the digital world is putting in sharp relief all of the structural inequalities that have been around for decades, and it is just replicating that, amplifying it, and it is happening more quickly with bigger consequences as our world shifts digitally.

I think Piers raises some interesting points about the ways in which we are connecting data across systems, the connection of your personal data to policing data and that this is being triangulated not just to sell you Q-tips or to sell you some sort of product but is being triangulated to make decisions about where you can and cannot be, who you can and cannot be seen with, the types of jobs you are going to get and loans you are going to get, etc. So there are big implications.

So there are three main points: You are living in a digital world; you need to understand the ways in which the digital world is amplifying inequality that already exists; and third, the ways in which that data is being triangulated serve not just to push you products but to actually limit your decision making, limit your life choices, and affect the ways in which you are policed and the ways in which you will or will not get health services. That actually is not a small question. It is not just clicking. It's a big question.

ANJA KASPERSEN: Thank you so much for those very important key takeaways, Leah. I like the poetic touch to it, that your digital reality and your lived world somehow have to come together to manage those tradeoffs that are baked into this conundrum that we are facing looking ahead.

PIERS GOODING: "Public governance" are the two words that come to mind. I am coming to this again as a newcomer in the technological space, but in the mental health realm and the disability realm there has been a long movement for strong public governance with the voices of those most affected really driving the agenda.

I think given that AI and algorithmic technologies affect us all—as Leah said, we are just in them now, there is no question about that—determining ways to create robust forms of public governance is for me the most pressing and cross-cutting question, and I think it is something that everyone is interested in. The reason I use "public" is because I think for too long it has been a technocratic, expert-centric, and capital-driven endeavor, and it needs to be brought down to a level where people can contribute meaningfully through deliberative democratic processes and in ways that are robust and have some regulatory teeth.

ANJA KASPERSEN: So national governments and municipal levels, is that what you are thinking of?

PIERS GOODING: Absolutely as well as knowledge production in the university sector and in tech development of course, at all levels.

ANJA KASPERSEN: You in some ways describe connectivity as a basic public good, as an essential service, which is obviously a discussion that has been going on for quite some time: that to make connectivity meaningful it needs to be viewed essentially as a basic public good, an essential service, which it is not now. It is kept separate from electricity, water, sanitation, and the other things that we expect to come out of public governance. Is that something you see as important as well when you speak about public governance and inclusivity?

PIERS GOODING: I do. I don't think it is going to solve the problems. I mean, electricity is an intractable problem in the current world, given particularly the environmental crisis. So you see this huge public debate about it certainly in Australia where it is an acute political issue.

But I see that as the logical endpoint for algorithmic technology, to be used as a public utility. But that requires a whole lot of demystification about how they work and all of the things that we have been talking about in this discussion—making knowledge accessible and knowledge production accessible as well as those political levels of governance and legislative change and so on.

ANJA KASPERSEN: Thank you for that, Piers.

Kobi, your final reflections and key takeaways for our listeners.

KOBI LEINS: It really depends who we are talking to. If we are talking to governments, governments need to become familiar with the technologies and be better equipped to communicate with the companies. If we are talking to academics, they need to start listening. They need to start listening to the target audiences they are talking about and not treat technology as the answer. And if we are talking to those who want to see change, we need funding.

Piers is 100 percent correct. Governance on every level is a toolbox, from ethical codes through to law, but also co-design, some kind of regular auditing of research, production, and industry, and ways to show what we are doing. Looking at an algorithm or an automated system once is not enough. For those who are running companies, the risks are enormous: you can comply with the law and still face huge reputational risks that can cause you harm.

So the range of things to contemplate—sorry I have not answered with just one thing to think about—is enormous, and I hope that by having this conversation we have triggered some thoughts about more interdisciplinary ways, and more ways into the problem, than people might have thought about before.

ANJA KASPERSEN: As all of you mentioned, sometimes we all run the risk of staying in the abstract because it is difficult to take it down to tangibles—"What's next? What do we actually do about it?" This is also very much at the heart of the work of Carnegie Council for Ethics in International Affairs. We have to get it down to a level where meaningful action happens, and we need to be able and willing to also confront whether we are perpetuating unhelpful frameworks.

This has been an absolutely fascinating conversation, and I want to express my huge thanks to Leah, Piers, and Kobi for sharing their time, expertise, and insights with all of us, and of course the fab team at Carnegie Council for Ethics in International Affairs for doing all the hard work recording this podcast. I hope you enjoy this podcast and conversation as much as I did.
