CREDIT: Pixabay (CC).

The Technical Limits of AI Ethics

Dec 17, 2020

In recent years, the global discussion on "AI ethics" has succeeded in mainstreaming key principles to limit the risks that would otherwise arise from the unrestricted and unconsidered use of artificial intelligence, particularly with regard to privacy, safety, and equality. But it may have overlooked a much more fundamental and uncomfortable question: What are the limits of "AI ethics"? This panel discussion, hosted by Senior Fellow Arthur Holland Michel, looks at this question and much more.

ARTHUR HOLLAND MICHEL: Good afternoon, everybody. My name is Arthur Holland Michel, and I am a Senior Fellow here at Carnegie Council. I am delighted to welcome you to this event, "The Technical Limits of AI Ethics."

To get us started, I will just set the scene a little bit. Essentially this is the goal we are trying to get to with this event. As I am sure you are all very well aware, over the past few years there has been a very vigorous debate on what we broadly refer to as "artificial intelligence (AI) ethics." This discourse has been very, very fruitful in producing a range of pretty much universally accepted principles to ensure that AI does not do more harm than good once released into the wild. We have declared, for example, that AI should be fair, that its effects should be spread evenly; that it should be safe, that it should not cause any unintended harm; and that it should be transparent, that we can look into the machinery of AI itself as well as the societal and organizational structures that go behind it and find out how these systems came to be and how they operate.

But in a way, coming up with those principles was the easy part. It is all very well to say that AI should be ethical, but that declaration, those principles, might belie a murkier, perhaps less optimistic technical reality. It is perhaps, to offer a bit of a loose analogy, like declaring that the brakes on all cars should never fail or that electrical grids should never go dark. It sounds nice in theory, but there are certain inescapable technical realities that get in the way, and we are here to talk about those realities today.

I should say from the outset that the title of this event makes it sound like this could be something of a smackdown of AI ethics. That is by no means the intention here. Instead, our goal is to develop a more grounded, realistic vision of AI ethics, informed first and foremost by technical fact.

To that end, we are tremendously lucky to have three absolute superstars in the field to help us do just that—Deborah Raji, currently a Mozilla Fellow, Kristian Lum from the University of Pennsylvania, and Zachary Lipton from Carnegie Mellon. All three in their own distinct ways stand at the very forefront of efforts to bridge the gap between the "idealism," if you will, of AI ethics and the facts of life.

Here is a bit of the run of show. We will have about 45 minutes of panel discussion followed by open questions from the audience, which you are very welcome to submit through the Chat function at absolutely any time. Please bring your questions forward. Do not be shy. We are going to keep this non-technical, but if there is any area of clarification, anything that you are looking for, we are there to have a rich conversation around it.

I should also say that a transcript and recording of this event will be available on our website, carnegiecouncil.org, along with an archive from all of our previous events and announcements of upcoming events. A recording will also be available soon on our YouTube channel.

Before we get into it with each of the speakers, I am going to give one small piece of technical clarification, which is that in terms of AI what we are largely going to be talking about today is machine learning, a subset of AI that essentially applies probabilistic models, systems that train themselves on data that we provide in order to achieve a certain desired result. That is perhaps an oversimplification, but for our purposes that is what you need to know.

We always start with data, the data that we train machine learning systems on. With that in mind, I want to turn to Kristian first to talk a little bit about data.

Kristian, we hear a lot about notions of data bias. I was just wondering if you could, first and foremost, explain to us what is meant by the term "data bias" and why it is an issue for the ethical application of AI and machine learning systems.

KRISTIAN LUM: I think that is exactly the right place to start. When people talk about data bias, different people mean different things.

My background is in statistics, and as a statistician when I hear "data bias" the first thing I think is representativeness, or sampling bias or selection bias, which simply is to ask: Is everything in the population you are trying to sample from equally represented in the data? To put a technical spin on that, does everything from the population you are interested in have the same probability of ending up in your data set?

There are a lot of reasons why that might not happen. One example where that condition isn't met is if you use a police database of crimes to measure all of the crimes that occur in a city. The population of interest there would be the city, and the data would be the police database. If the police are collecting data on these crimes in a way that, for example, is emphasizing enforcement in minority communities, then the crimes that occur in those locations will be more likely to appear in your data set than the crimes that occur elsewhere, so that data would be unrepresentative or would exhibit something like sampling bias. That is one type of bias, sampling or selection bias. Like I said, that is the first place I go to as a statistician.
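To make that concrete, here is a minimal sketch in Python, with entirely made-up numbers, of the kind of sampling bias Kristian describes: two neighborhoods with the same true amount of crime, but different probabilities of a crime being recorded.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" crime counts: identical in the two neighborhoods.
true_crimes = {"neighborhood_A": 1000, "neighborhood_B": 1000}

# Assumed probability that any given crime ends up in the police database,
# driven by where enforcement is concentrated.
recording_prob = {"neighborhood_A": 0.7, "neighborhood_B": 0.2}

# Simulate which crimes are recorded.
recorded = {hood: rng.binomial(n=count, p=recording_prob[hood])
            for hood, count in true_crimes.items()}

total = sum(recorded.values())
for hood, count in recorded.items():
    print(f"{hood}: {count} recorded crimes "
          f"({count / total:.0%} of the database vs. 50% of true crime)")
```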

There are other types of bias as well, and these might be more what people are thinking about when they speak colloquially about bias in data. Another version of this might be measurement bias. That is to say, the concept or idea that you are trying to measure is systematically measured too high or too low by the data you are actually collecting. One example of this might be if you are using teaching evaluations to measure teaching effectiveness. Many studies have noted that women, for example, are more likely to get negative teaching reviews, even when their students learn the material equally well. In that sense, the women in that sample would be systematically measured as worse than the men or other people, relative to some other notion of the thing you are trying to measure.
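A similarly hedged sketch of measurement bias: the simulated "true" effectiveness is drawn from the same distribution for both groups, but the observed evaluation score carries a systematic penalty for one of them.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical true teaching effectiveness, same distribution for both groups.
true_effectiveness = rng.normal(loc=7.0, scale=1.0, size=n)
group = rng.choice(["men", "women"], size=n)

# Measurement bias: observed evaluations systematically run lower for one
# group, even though the underlying effectiveness does not differ.
observed_score = (true_effectiveness
                  - 0.5 * (group == "women")
                  + rng.normal(scale=0.5, size=n))

for g in ["men", "women"]:
    mask = group == g
    print(f"{g}: true mean {true_effectiveness[mask].mean():.2f}, "
          f"observed mean {observed_score[mask].mean():.2f}")
```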

This is especially problematic when there is differential measurement bias, the thing I just described. If everybody is measured a little too high or a little too low, depending on the application area that might not be that big of a deal. But when you have certain groups that are systematically measured too low relative to other groups, then you can induce real problems in the data.

The third type of bias that I think we should talk about—and I am going to talk about a fourth in a second—is societal bias. This is to say, is society reflecting some sort of bias, or is society really unfair in some way? The data you have could accurately measure reality, but reality is unfair. For example, you could have a data set that measures the salaries of all the people in a company. You have all of those salaries, so we don't have to worry about sampling bias. Everything has probability one of ending up in your data set. You have every single person in that company. And maybe you have an accurate measurement of what everyone has been paid. You have access to payroll. You have a very accurate reflection of the underlying system. But if there is some sort of unfairness in the way that salaries are awarded in that company, then that data set will reflect that sort of societal bias.

One thing you will notice in at least the first two cases is that I am talking about bias with respect to what? I have talked about this in a technical and maybe more concrete way: there is some real thing we want to measure—say the population of crimes—and the thing we are using to measure it is a police database. I would say that exhibits sampling bias with respect to the population of crimes. But if the police are accurately recording all of the crimes that they encounter, I would say that does not reflect sampling bias with respect to the population of things that are policed or things that police know about. In that sense, it is bias with respect to what? If you are talking about the population of crimes, it is biased. If you are talking about the population of crimes that are enforced by police, it's not.

When we are talking about this measurement bias, I mentioned the measurement bias with respect to what the students actually learn, so there is a sort of mismatch. If we are talking about societal bias, we can still have that problem: Is it right or is it wrong if there is a gender disparity in wages? Some people might argue that the disparity that we observe is not a problem. In that sense, they might argue that it is not biased.

The last type of bias that I think bears mentioning—and I think every data set from the beginning of time until the end will exhibit this sort of bias, whereas you can potentially fix these other ones, and we can talk about that in a minute—I don't think there is a great term for this, but "feature bias." What do you actually choose to record?

When we are talking about something like manufacturing errors for screws in a factory, maybe everyone will be in agreement that the right thing to record is the length of the screw, and if it deviates by some amount, then there is a problem. But when we are talking about data about people in particular a lot of decisions go into determining which data you are going to collect, how you are going to record it, how you are going to measure different things, and even if the data set you have doesn't exhibit sampling bias, doesn't exhibit measurement bias, and you are sampling from a world that is totally 100 percent fair, there is still somebody deciding what things you are going to measure.

That in itself I think is a form of bias because that will then determine the uses of the data downstream and the ways you evaluate that data. We often see that in certain situations people do not want to collect information on sensitive attributes, say, race or gender, but that in itself is a decision that makes it difficult to evaluate whether your data is representative or whether a model you have built from that data is reflecting some sort of disparity or bias with respect to those variables.

All of these are different ways we can talk about bias in a data set.

ARTHUR HOLLAND MICHEL: That is tremendously helpful as an overview. Just to recap, remember when we talk about machine learning systems, the training data that they are fed on becomes that system's worldview. If there are inequalities reflected in that data, then to the machine learning system—not to humanize it too much—those inequalities will be normal. That will be how the world works as far as it is concerned, and it will perpetuate those biases.

With that in mind, and particularly with this notion that all data sets are going to be biased in one way or another—which I think is a key takeaway, that you cannot cleanly separate non-biased data sets from biased data sets along any of these dimensions—technically speaking, how do you root out those "bad" biases, if you will, the kind of biases that will go against the principle of AI fairness, the biases that will cause AI to be unfair? Is there a technical way of finding those and doing something about them?

KRISTIAN LUM: It goes back to which sort of bias you are concerned about. Again, starting where I start as a statistician, let's start talking about sampling or selection bias. There are ways to figure out whether certain groups are over- or underrepresented, especially if you have access to some sort of ground truth, that you know who everybody in the full population is. In old school statistics, you have the phone book for some regions, so you know who everybody is in that region. That's probably a little inaccurate. Certainly there are some people who don't appear in the phone book, but let's say that's the population we want to sample from. If you know some characteristics about those people, then certainly you can figure out if certain types of people are less likely to appear in your data set than others relative to this sort of ground truth total population that you have.

Of course, that requires, like I said, access to some kind of ground truth. It requires access to certain covariates or information about those people so you can see if group A is underrepresented relative to group B. That is certainly one thing people do. There are statistical solutions to overcoming that sort of bias, like re-weighting, for example. That is one thing you could think about doing.
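As a rough illustration of the re-weighting Kristian mentions (inverse-probability weighting, with invented numbers and an assumed ground-truth population that is half group A, half group B):

```python
import pandas as pd

# Illustrative sample in which group B is underrepresented relative to a
# ground-truth population assumed to be 50% A and 50% B.
sample = pd.DataFrame({
    "group":   ["A"] * 80 + ["B"] * 20,
    "outcome": [1] * 40 + [0] * 40 + [1] * 15 + [0] * 5,
})
population_share = {"A": 0.5, "B": 0.5}

# Inverse-probability weights: population share divided by sample share.
sample_share = sample["group"].value_counts(normalize=True)
sample["weight"] = sample["group"].map(
    lambda g: population_share[g] / sample_share[g])

naive = sample["outcome"].mean()
reweighted = (sample["outcome"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"naive estimate: {naive:.3f}, reweighted estimate: {reweighted:.3f}")
```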

When we are talking about measurement bias, I think it again comes back to thinking about what the thing you are trying to measure is and determining whether what you have is a good measurement of that. A good first pass at thinking about that from a technical point of view is looking for disparities that you think shouldn't be there in the underlying concept you are trying to measure. So if you do see large disparities between genders or between any other number of sensitive attributes, that might be a good place to start to try to do a more qualitative analysis of how that data is being measured and where those sorts of disparities might be generated.

When we come to societal bias, there are statistical methods you can use to find disparities, to see if what you are seeing—differences between different groups or different individuals—is statistically significant and not just random chance. But at the end of the day, I think your best bet is a more qualitative approach to understanding where unfairness in society lies, how it is generated, and how that ends up being encoded in your data.
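One standard version of that kind of significance check is a chi-square test on a contingency table; here is a small sketch with invented approval counts for two groups. The test only says whether the gap is bigger than chance would explain, not why it exists or whether it is fair.

```python
from scipy.stats import chi2_contingency

# Illustrative counts of [approved, denied] decisions for two groups.
table = [[180, 320],   # group A: 36% approved
         [120, 380]]   # group B: 24% approved

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p-value = {p_value:.4f}")
```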

ARTHUR HOLLAND MICHEL: This to me gives rise to what seems like a bit of a catch-22. If we start with the assumption that all data sets are going to be biased in some way or another and may have negative biases that we don't want, then mitigating those biases is going to depend, in one way or another, on very subjective human decisions to counteract them.

But if you are relying on humans, then you are back at the beginning where humans have this capacity potentially for societal bias. It seems to me like a circular problem perhaps. You cannot rely purely on the numbers, but you also cannot rely purely on the squishy human bits either.

I know that does not feel like a satisfying way of looking at it, but would you say that is a technical inherency that we are going to have to continue to contend with?

KRISTIAN LUM: I understand where you're coming from. I think that is essentially right. The data is always going to reflect somebody's worldview, so I think the idea that we are going to completely un-bias data in some way is a fool's errand because it is always going to be biased with respect to something.

If we are going in the direction of solutions and not trying to leave it at this place where we feel like "let's throw our hands up and not do anything because data is always going to be biased," let's think about what we can do, and I think that is more about awareness, being very clear about documenting whose worldview is encoded in this data, why decisions were made, and what sorts of checks were made to verify that certain disparities aren't there that might have detrimental effects downstream, but coming into it with the idea that, yes, somebody's worldview is going to be encoded, somebody's bias is going to be encoded in some way, and being cognizant and careful with that bias as you move forward and use it for different purposes.

ARTHUR HOLLAND MICHEL: Got it. That's fantastic.

I want to move over to Zach after that fantastic overview of the data set piece.

Zach, let's assume that you have a data set that is perfect in every way, and as a result one might imagine that the machine learning system that you train on that data set is also going to be perfect. As Kristian just explained, that is a pretty lofty assumption, but let's roll with it for a second. Can you talk to us a little bit about the challenges of what one might call the "gap" between a machine learning system's training data, its worldview, and the real world? You have written about concepts like "data shift." Can you explain some of these issues and give us some examples of what they mean for AI and its applications?

ZACHARY LIPTON: Yes. I think there are a couple of distinctions that need to be made from the outset. When you use the word "AI," it is an aspirational term that refers to a broader field beyond machine learning, not necessarily just data-driven approaches but the broader field of inquiry of people looking to build systems that reproduce certain cognitive faculties in silico.

The technology we are talking about, at least in this conversation, is quite narrow. We are talking about data-driven systems, specifically pattern-recognition systems. There is an impulse to anthropomorphize them. Even disciplined researchers have trouble avoiding it, undisciplined researchers are all over the map, and the press is somehow even more ridiculous. Even here there are subtle ways we start doing it. We start saying—I'll pick on you—

ARTHUR HOLLAND MICHEL: Please do.

ZACHARY LIPTON: —maybe I will draw a little contrast. You said, when you have a data set, this is the model's "worldview," whereas Kristian offered a subtly more reasonable characterization, which was to say that the data "is reflective" of somebody's worldview, that somebody's set of normative commitments and beliefs informs the data that you have, so this data is reflective of the decisions that someone has made, which is different from saying this is a "worldview." There is something a bit anthropomorphic about it. What do we mean by a worldview? I think we are referring to a collection of beliefs or normative commitments, the sorts of things that these models do not have, things that involve structure, assumptions, and mechanisms about how the world works.

The model doesn't know anything about how the world works. It doesn't know that the world exists. There is no such thing as a "world." We are talking about pattern-recognition systems.

If you remember from high school class, you took some measurements, like you turned up the heat and looked at the volume of water that evaporated or whatever kind of pattern you were doing, and you drew a curve. You are doing the same thing. Almost all the technology we are talking about is just curve-fitting.

So it doesn't have any worldview. It doesn't know what temperature is. It doesn't know what volume is. It doesn't know what income is. It doesn't know what race is. It is just that you have presented a machine with some data. Typically, the systems that we are actually deploying in the world—almost all of them—are coming from this paradigm of "I have collected some set of examples"—we call it a data set—"of inputs and corresponding outputs," and the idea is that for training purposes you can see both the inputs and the targets. At deployment time the targets will be unknown, so you are going to want to infer something unknown from something known.
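In code, that paradigm is little more than curve-fitting on example input/output pairs and then applying the fitted curve to new inputs whose outputs are unknown. A minimal, purely illustrative sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Training data: inputs X with known targets y (collected historical examples).
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1]
           + rng.normal(scale=0.5, size=500) > 0).astype(int)

# The "learning" step is just fitting a curve to these pairs.
model = LogisticRegression().fit(X_train, y_train)

# "Deployment": new inputs arrive, the targets are unknown, and we infer them.
X_new = rng.normal(size=(5, 3))
print(model.predict_proba(X_new)[:, 1])  # predicted probabilities, nothing more
```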

Drilling down on this, before you get to distribution shift, the next step is that we are using the AI, using machine learning—we are deploying these systems in the world—and the first important elision that happens here is between decisions and predictions. So you say: "Well, if I collected the perfect data, then it will do everything perfectly."

What is this "everything?" What is this thing we are asking the model to do? There are settings where you just want to make a prediction, like you are a passive observer to the system. You have negligible ability to influence it, at least in the short run.

What is a good example here? There is no meteorologist in the house, so nobody is here to call bullsh*t on me, but let's say weather forecasting would be such an example where, unless you are in Looney Tunes or in the future you probably don't have that much ability to influence the weather just by forecasting it, at least not like the seven-day forecast or something. Your goal is just to make a prediction. But in many of these systems that we are talking about the goal is actually to guide a decision, either to make it autonomously or to influence a decision-making process.

What do you mean if you "have the perfect data?" The point is, what do you do with the data when you have it? You train a predictive model. You do some curve-fitting, but that is an exercise in prediction; what you are ultimately, principally concerned with is the impact of decisions. What binds the prediction to a decision is often spit and mud and whatever, duct tape.

Here would be an example. I work at Netflix, and I want to build a better customer experience, so I want to curate engaging content. How do I do this? I will build an AI to do it. What do I do? I collect data. What data do I have? I have who saw what movie on what day, and did they give it a thumbs up, did they skip it, did they continue, or whatever?

Now I am going to construct some kind of prediction goal. I am going to say, "I want to predict which things people are going to say they like," or predict which movies people are likely to click on. I give examples of movies that were suggested to people, and I have ground truth of what they clicked on, and I am going to train a predictive model. The actual decision we want to make is, "What should I show someone in the future?"
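A toy sketch of that pipeline makes the seam visible (none of this is Netflix's actual system; the data and features here are invented): the fitted object only predicts clicks on logged interactions, and the decision of what to show is a separate step bolted on top.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Logged interactions from the *previous* recommendation policy:
# feature vectors for (user, item) pairs that were shown, and whether
# the user clicked.
n_logged = 10_000
features = rng.normal(size=(n_logged, 5))
clicked = (features[:, 0] + rng.normal(size=n_logged) > 1).astype(int)

# Prediction step: fit P(click | shown) on the logged data.
click_model = LogisticRegression().fit(features, clicked)

# Decision step: rank new candidate items by predicted click probability.
# Nothing in the logged data certifies that this is a sensible curation policy.
candidates = rng.normal(size=(100, 5))
scores = click_model.predict_proba(candidates)[:, 1]
top_items = np.argsort(scores)[::-1][:10]
print(top_items)
```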

Are certain groups of people overrepresented in the data? Are you deferring to the dominant tastes of some people and missing out on or under-serving some minority of people because you don't capture in resolution that there are these distinct demographics? There are those problems.

There is also the broader problem, the bigger reckoning that is coming to most of the applications of machine learning in the world, which is that there is a pipeline. Prediction is part of it, and there are huge problems just with prediction, and I will get into that, but there is also part of the pipeline that consists mostly of bullsh*t, and this is what is binding the prediction to the decisions you ultimately plan on making.

For example, in the recommender system I could say now I'm going to try to predict who is going to click on what: among the items that are shown, click or no click? I could have data. The data could be comprehensive. The data could be enormous. I could capture data for everyone who has been on the service. But what it doesn't tell me is that showing people the items they are most likely to click on—given that those items were presented to them according to the previous recommender policy in place—is a sensible way to actually perform curation.

When you start talking about any of these kinds of decisions, I think you will always find that there is some kind of a loop where there is something missing. I can look at the set of people who were arrested and then try to train some kind of predictive model that is going to predict some target of interest. What it doesn't tell me is that this is a coherent way of now guiding what sort of decisions the police should be making in the future.

If you don't account for this difference between the prediction and the decision, you could come up with all kinds of behaviors that are like—yes, you maybe train a predictive model. Maybe you train a predictive model effectively. Maybe you train the most effective predictive model that could be trained up to some very small approximation factor given the set of inputs you have and the target, but it doesn't tell you what actually is the relationship between this prediction that you are making and the actions that it is supposed to guide and the ultimate impact that you hope to have in the world, which is mediated by some kind of real-world dynamics.

If you say that based on the features available you can predict a higher probability of crime for certain individuals, you have to account for what you are actually planning to do. Are you planning to allocate where a police officer should be stationed, knowing that if you look for crime you will find more crime? There is this coupling of a model to a decision, through the data that you subsequently see, that is almost always completely ignored.

I think that initial level of sobriety matters: it is not only that we are not creating the perfect model or the perfect data set, but that largely we are engaged in an extremely ad hoc process of automating various decisions that really ought to be guided by reasoning, on a lark that prediction is going to do magic and that the benefits we get from scale will outweigh the consequences of doing something fundamentally incoherent.

I think you see this all the time. YouTube curates videos based on people who like similar content to this, who are therefore similar to you and tended to like this other content that was recommended to you. That seems great if you are like, I like Wolfpac, and then I see Nowhere or some other hipster virtuoso band, I'm like, this is okay.

On the other hand, you have these well-documented cases where YouTube is curating playlists of naked baby videos for pedophiles because people who like naked baby videos tend to like other naked baby videos or something like this. There is a failure to account for the actual enterprise we are engaged in. That's all about the "everything we are doing is wrong" level.

If you just zoom in on prediction, which is the thing we sort of know how to talk about, even if you just consider yourself a passive observer trying to do prediction, all of our technology is proceeding on a very naïve assumption: that there is some distribution—the distribution, a fixed distribution that God knows but we don't; that we have some procedure to collect samples drawn from this probability distribution; that the data we have seen has been collected by drawing a bunch of samples completely independently from this fixed underlying distribution; and that the data we will encounter at deployment time will also be created by drawing more examples, also independently, from the same underlying distribution.

The problem there is that that is just not how the world works. If you were trying to train some text classifier based on Twitter data up through 2016 and then you are deploying it on data past 2016, people are talking about different topics, and hopefully again in 2021 we will get to talk about different topics. People will use language differently. There will be slang that wasn't there before.

So there will be a shift in distribution over inputs that we are encountering, and it is occurring in a haphazard way. You have gradual changes in the use of language, you have sudden changes in the emergence of topics, the fact that overnight everyone is talking about COVID-19 20 percent of the time, which never happened before March 2020.
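One simple way to notice that kind of input shift, sketched here with toy corpora rather than real Twitter data, is to compare the word distribution of recent inputs against the training corpus and flag a large divergence:

```python
from collections import Counter

import numpy as np
from scipy.spatial.distance import jensenshannon

def word_distribution(texts, vocab):
    counts = Counter(word for text in texts for word in text.lower().split())
    freqs = np.array([counts[w] for w in vocab], dtype=float) + 1.0  # smoothing
    return freqs / freqs.sum()

# Toy "training era" and "deployment era" corpora.
old_texts = ["the election is coming up", "watching the game tonight"]
new_texts = ["covid cases are rising again", "lockdown and vaccines in the news"]

vocab = sorted({w for t in old_texts + new_texts for w in t.lower().split()})
p_old = word_distribution(old_texts, vocab)
p_new = word_distribution(new_texts, vocab)

# A large Jensen-Shannon distance flags that deployment inputs no longer
# look like the training data, even before accuracy visibly degrades.
print(f"Jensen-Shannon distance: {jensenshannon(p_old, p_new):.3f}")
```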

In these cases there is the question of: What can we do? How can we make valid predictions? How can we have some kind of faith in the reliability of the predictive technology that we have, even if we are just focusing on predictions, when we are faced with this kind of dynamic environment even in a setting where there is no feedback loop and we are not worried about decisions?

The answer is that this is in general impossible. So basically, if we assume nothing about the way that the world can change, we are always acting under some amount of ignorance. If anything can change in any way, there is no reason to believe that the historical data is representative of the data that we will see in the future, and all bets are off.

I think this is something that people like Taleb argue more informally, at greater length, and with more histrionics. You can read whole books about our reluctance to accept ignorance and what we can't predict.

Technically there are a few different paths forward. One is that we actually can make technical progress on this problem of learning when the data distribution is shifting if we can make some assumptions. We don't have to assume that everything is exactly the same, that there is some fixed underlying distribution. But we can make certain types of assumptions, like, okay, the frequencies of the categories that I am trying to predict are changing, but what the distribution of instances looks like, given the category, is relatively the same.

An example would be like a disease, like COVID-19 is trending up and down, and I might want to track what is the frequency of COVID-19 in the moment. If I believe that what COVID-19 looks like in terms of its manifestation and symptoms is not changing in the short term, but the prevalence of it, the incidence of the disease, is changing dramatically in the short term, then using this structural assumption I can develop some reasonably elegant machinery around it to leverage this invariance and say, okay, now I can actually identify this parameter of what is the incidence in the moment. So I can assume this invariance of a conditional probability, like the symptoms given the disease aren't changing, but the prevalence of the disease is changing.
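One standard way to exploit that invariance (a sketch with made-up numbers, not a clinical tool) is to summarize a fixed classifier's behavior on the old labeled data as a confusion matrix and then solve a small linear system using only the rate of positive predictions on new, unlabeled data:

```python
import numpy as np

# Classifier behavior on old labeled data, as P(predicted = i | true = j).
# Columns: true class (0 = no disease, 1 = disease); rows: predicted class.
conf_given_true = np.array([[0.9, 0.3],    # predicted negative
                            [0.1, 0.7]])   # predicted positive

# On new, unlabeled data we only observe the rate of positive predictions.
predicted_pos_rate = 0.22
mu_target = np.array([1 - predicted_pos_rate, predicted_pos_rate])

# Label-shift assumption: P(x | y) is unchanged, so
#   P_new(pred = i) = sum_j P(pred = i | true = j) * P_new(true = j).
# Solve for the new label distribution and clean it up.
q_target = np.linalg.solve(conf_given_true, mu_target)
q_target = np.clip(q_target, 0, None)
q_target /= q_target.sum()
print(f"estimated current prevalence: {q_target[1]:.1%}")
```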

You could also sometimes find scenarios where you could work with the opposite assumption: that the distribution over the inputs is changing, but the probability of the category given the inputs—the disease given the symptoms, though disease is a bad example here—is not changing.

I would point out, though, that this requires some prior knowledge. You need to know which assumption applies, and if you make the wrong assumption you will come to the wrong conclusions. This is a common thread through causal inference. I know Kristian has done a lot of great work in applied causal inference in the context of criminal justice. There is a basic tenet there: data plus assumptions can lead to causal identification of parameters of interest, but the data alone is often insufficient to tell you which assumptions are valid, and if you make different assumptions, you will come to different conclusions. So these might be things you need to experimentally validate.

I will give a simple example. People often assume when they deal with shifting data distributions that probability of Y given X, the label given the inputs, is not changing. This is invariant. The easy counterexample is: Imagine that you are looking at symptoms data, and the only feature that you have is cough, and the label of interest is coronavirus. If you look in December 2019, the probability of coronavirus given cough is about 0 percent.

In the future all that has changed is the amount of coronavirus, but now the probability of coronavirus given cough is maybe quite high. Maybe it's 5 percent or something. You get the idea.
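The arithmetic behind that counterexample is just Bayes' rule: hold the symptom distribution given the label fixed and move the prevalence, and the conditional probability of the label given the symptom moves with it. Illustrative numbers only:

```python
def p_disease_given_cough(prevalence,
                          p_cough_given_disease=0.6,
                          p_cough_given_healthy=0.05):
    # Bayes' rule with the class-conditional symptom probabilities held fixed.
    numerator = p_cough_given_disease * prevalence
    denominator = numerator + p_cough_given_healthy * (1 - prevalence)
    return numerator / denominator

for prevalence in [0.00001, 0.01, 0.05]:
    print(f"prevalence {prevalence:.3%} -> "
          f"P(disease | cough) = {p_disease_given_cough(prevalence):.1%}")
```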

These are very hard problems. You have two worlds. You can make some kind of assumption that there is some kind of invariance in the conditional probabilities, or you can assume something about smoothness—that the distribution is changing, but it's changing only very gradually. If I can assume that between any two nearby time periods the divergence in distribution is limited, there is something I can do.

Then you have the world of deep learning, which some of you might have seen—the world in which I grew up academically—which is the world of training large-scale neural networks, and which has thrived largely on an ethos of "try shit and see what happens." This is very effective when you have a well-defined, supervised learning goal. You assume no distribution shift, and you basically say: "Hey, I'm going to get to verify if the model works on holdout data, so I can license you to try whatever you want and see if it works."

People have tried using that mindset of "let's just try stuff and see whether it works" to approach hard problems of distribution shift. They will say: "Okay, I have domain A and I have domain B, and they are different from each other. I am going to throw spaghetti at the wall and see what works."

The problem here is that they are trying 9 million methods, and they only have like three different data sets. What they don't realize is that the relevant effective data size is not the number of examples you have in each data set but the number of instances of shift that you have seen. Just because you found one method that you trained on handwritten digits and that then worked well on colored handwritten digits does not mean that it is going to work for other kinds of shifts.

I will leave it there.

ARTHUR HOLLAND MICHEL: Thanks so much, Zach. That is a tremendously helpful overview.

If I can extract from that, perhaps a second inescapable technical fact is that we necessarily build AI systems based on the past, assuming that the future will be exactly the same as the past they were built upon. Obviously the future is not exactly like the past, and these systems are not capable of accommodating that.

With that, I would like to turn to Deb because now we turn to systems once they are out in the wild or about to be out in the wild. If you think about these serious challenges that Kristian and Zach have described, to what extent can you get ahead of the kind of ethical issues that may arise? Can you look at the pipeline of development and identify these kinds of issues and figure out where it is going wrong and perhaps how they can be fixed?

DEBORAH RAJI: I want to go back very quickly to something that Zach mentioned. One challenge is that the data of the past doesn't represent the present. But something else he also alluded to was that at times the way that we represent problems with the data does not actually reflect the kinds of problems that we want to solve with the model. This is at the core of a lot of the issues that we see.

The other thing I wanted to mention quickly before answering your question is that a lot of the work I do is on algorithmic auditing. What we are doing is assessing models that have already been deployed. There are a lot of interesting things you observe in those kinds of situations, one being—and this is again to Zach's and Kristian's points—because we often tend to ascribe humanlike characteristics to these models for whatever reason, there is this reduced sense of basic responsibility that happens with people who actually build these models.

So, to the theme of this entire workshop around the limits of what kind of ethical expectations we can have for these models, we expect things from these models in the same way we would expect our child to behave. We would say: "Oh, yes, we want the model to be responsible," or we want the model to be transparent or fair, not understanding that it is truly the humans making decisions about the model that we want to act responsibly and adhere to some of these ideals.

It doesn't seem like a big deal except for the fact that you often have very simple decisions that people are making about the model—the data source, how the data is labeled (to Zach's point earlier around how effectively the training data set reflects the real-world problem), or simple decisions about which algorithm you are using. In a typical situation, if you were building a car or building a bridge, you would record these calculations, you would record these decisions, but in the machine learning/AI context the engineer does not feel like that is under the purview of their responsibility.

I think Zach also gave a good example. In the AI space we have three big data sets that represent very specific types of problems, and people think that is sufficient to say: "If I do well on this data set, that's enough for me to deploy it in this other situation or this other context."

With the work that we did auditing facial recognition it was very clear, first of all, that the data sets we carry as a research community represent a very specific worldview of the people who created them, and also very specific biases. But it also demonstrates a high level of neglect. People are not doing a good job analyzing the data sets that they are using to develop models that they deploy broadly, but also they don't even feel a sense of responsibility. They don't feel like it is one of their jobs to pay attention to these decisions, record them, and communicate them. That has been a big challenge.

To speak to the title of this workshop or conversation, which is, how does that affect what we can expect with respect to how well these models can adhere to our ethical expectations as a society, I think it does mean something very interesting. I have been recently reading a lot about the automobile industry and how engineering responsibly plays out in that space. It is fascinating because if you are designing a car or building a car, it is very clear what kinds of decisions you are making. So people are very meticulous in terms of how they communicate to each other and to the public about the details of the car's design. If you have a car where the brakes don't work, you are able to understand that that is an error that was made, and you understand that it is something that you can fix.

But when someone comes to you and starts talking to you about, "Oh, cars are inherently awful because they lead to roads that ruin our cities" or "Cars are inherently awful because they cause pollution," people can differentiate between this is a make of a car that has dysfunctional brakes, so we can recall the entire make of this car, and they can differentiate that conversation from the conversation of "Cars are bad for the planet, so we need to completely reinvent the way that we do cars or we need to completely get rid of cars."

I feel like in the AI/machine learning space that conversation is not very clear cut, so people will have ethical expectations. This came up a lot with facial recognition. Facial recognition is incredibly harmful when it doesn't work, when it misidentifies someone and is a threat to their life. They can be misidentified, falsely accused, and then falsely arrested.

But it is also a huge threat when it does work. There are these huge privacy risks that are inherent to collecting millions of examples of biometric data and storing them in a way that does not necessarily always reflect the highest security standards. There are certain characteristics of deep learning—the data requirement, the resource requirement, the basic characteristics of deep learning models that Zach and Kristian have already mentioned, the inherent biases in the model—that come with the territory: "This is the way deep learning works right now. It requires you to collect all of this information."

There is this tension that now exists where, if you want to use deep learning today and you value privacy, it is impossible to build a deep learning facial recognition system, because building one requires you to violate the privacy of millions of people by collecting their biometric data. Or, if you want to do it differently, it requires you to completely reinvent the wheel of how it is done.

I think the distinction there is that those issues are structurally inherent: based on the very definition of what we say this thing is, there are inherent ethical limitations to it, and that makes it difficult to adhere to specific ethical expectations or ethical ideals. I think that conversation is something that the field has not gotten to quite yet, and it might be because we still have a lot of cars where the brakes don't work—we still have a lot of very simple things we are not doing, like evaluating for performance on different demographic groups.

To Kristian's point earlier, we are not even paying attention necessarily to measurement bias and things that we should be paying attention to, so it is easy sometimes to get caught up on the fact that: "Oh, we're not even doing these very small things. We have so many cars we need to recall, so how can we think about the environmental impact?" I do think at some point we will have to reckon with the reality that if you want to use this method, there are going to be inherent limitations to it.

I think that is where auditing plays a role—to your actual question now. I have found that there are a lot of inherent challenges to auditing like you are alluding to, where if you are not an internal auditor it is very difficult to access any information about the system that can inform your understanding of how it works and what those limits actually are.

Something I personally enjoy about auditing is that it is a great way to articulate those limitations, where you can talk about the fact that here is what the data requirement is for this particular model, or here are the decisions that were made about this particular model. It makes it easier to have conversations around questions like: if they are going to use this particular type of model and apply it to this particular specified context for this particular intended use, are those things actually compatible with what you, as an organization or as an individual, say your ethical expectations or principles are? Often that is where a lot of these tensions arise. You can see things very visibly.

Machine learning is a method where, because you are using data rather than explicitly defining rules for prediction, the outcome is going to be inherently cloudy. You cannot fully meet certain expectations around transparency, around interpretability and explainability. That is inherent to the method. I think those kinds of conversations become interesting when you are trying to apply it to a high-risk healthcare context. Maybe this is not the compatible method for that particular application.

Kristian has done a lot of great work in the criminal justice context to highlight that if you are going to build a model using these particular methods where it is impossible for us to figure out how this result came about, maybe you shouldn't be using it to determine how someone is going to spend the next 25 years of their life.

I want to see the field move in that direction. That is hopefully where we are going. The way I would frame it is there is some ethical risk in using specific methods at all, and we are beginning to slowly recognize that. As a result of that, I think there is now a movement or a push to say that as a result of the fact that by virtue of using deep learning or by virtue of having these big data sets or whatever it may be, however you want to characterize your model, by virtue of specific characteristics of your model you cannot use it in particular contexts, or you cannot use the current version we have of it in particular contexts. I think audits have done a good job of exposing this fact.

The last thing I am going to speak to is distribution shift, something that machine learning models are very sensitive to. It is a known challenge in the field: my training data has a very specific scope of examples, and the real world has a slightly different scope of examples. I think the field has come to the point where there is enough evidence to say: "Yes, deep learning is a method that is incredibly sensitive to distribution shift."

So my question is: Why are we then trying to use deep learning methods in self-driving cars, where we know there will be distribution shift—the training data is from California, and people will drive to Montana? These are the kinds of questions I am hoping get raised the more we examine specific products. But I do think it is a much larger conversation than "the brakes don't work and we need to recall this particular make of vehicle." It is a much larger discussion that I hope we have.

ARTHUR HOLLAND MICHEL: I would love to ask a follow up on some of your fantastic work on the audit process. To me this seems like a massively complex challenge. Over the course of the conversation we have talked about a lot of human decisions that have to get made through the pipeline, and these human decisions have enormous ramifications. Is the idea of an audit to try to identify and map every single one of those decisions to make sure they were the right decision? Presumably even for an insider that is going to be hard to do, but for an outsider looking in it is going to be challenging.

DEBORAH RAJI: Yes. That is one of the big challenges with auditing. Auditing is like an attempt to get some sense of either justice or accountability in the sense that we are trying to figure out what did someone do that they maybe shouldn't have done or should have done differently in order to protect this particular population we care about.

Yes, at the heart of it is an attempt to try to map out the decision making of different stakeholders to particular outcomes and then map those outcomes out to the experience of those who are impacted and trace this journey in an attempt to say if the impact to the affected population is something like I am denied a loan or I can't get access to housing, then we can actually trace that to the decision to use this particular data set or to care about this particular feature or to characterize the task in this particular way or to use whatever particular algorithm or to evaluate it in this particular way. The whole point of auditing is to get to accountability, to be able to bring forth that lawsuit or run that campaign or pull that model off so that it doesn't affect people in the same way.

I think it is really challenging because, yes, you are right, there are so many little decisions being made, but even more importantly, to my earlier point, because people like to see AI systems as their "babies," as humans. Machine learning engineers do not want to take responsibility, and that has probably been one of the biggest challenges with audit and accountability work, getting people to write things down and admit the fact that they are involved in these very critical decisions.

Decisions as simple as data curation or data labeling are ones that a machine learning engineer is likely involved in or even makes themselves, but because they might, for example, be sourcing the information from the web or from a popular discussion forum, the framing of that decision of "I'm getting this particular data source" comes from this place of "Here's what society is saying" or "I'm just sourcing my information from the way things are," this sample of society that is presented as a neutral sample. In reality, you went on the Internet and found a particular source with a particular perspective, and you made a series of decisions on how you would frame the problem and how you would label that data set. You were actually quite involved in shaping the outcome of what that data was. I think there is a lack of willingness to admit that.

The other thing as well is that, like I mentioned earlier, the difference between machine learning and software is that with software—if I'm writing a computer program—I am articulating every single step. Not only is the code better written, but I am also very explicit in terms of the steps to get from input to output. With a machine learning model those steps are actually defined by the training data: through examples the steps are inferred, and that is what will define my output model.

Because of that, because it is a data-driven technology rather than something that is explicitly defined, that is another excuse for engineers to walk away from responsibility. They feel like: Oh, I'm not actually picking these features to define my output. The model just learned those features. That is something I have heard in my ear many times: "Yes, it's using ZIP code, and that is a proxy for race and it is awfully racist, but it just learned those features. We didn't control it." Well, who picked the data set that it was trained on? I think a lot of auditing work has now devolved into understanding which humans made certain decisions that resulted in particular outcomes.

The very, very last thing I will say is that data itself is another way of distancing engineers from responsibility. I recently had a conversation with a friend. If you are a health care worker, teacher, or doctor, you are working with people directly, so if you are a doctor and you make a mistake, you know who that patient is, and you have seen that it is a human, and the connection is clear. Whereas with machine learning, if you build a model that is systematically oppressing or impacting a particular population, you just see it as a data point, maybe one of a thousand people you are oppressing, which is categorically worse, but it feels much more distant because that person is represented digitally in a way that doesn't resonate, and in a way that is difficult to grasp.

People always try to write papers about data science or the machine learning field adopting ethical codes in the same way that a doctor has ethical codes, but we are so much further from the people we impact than a doctor is from their patients. We do not see anyone's face usually unless you are doing facial recognition, but even in facial recognition you are seeing a million faces, so you are not registering the 10,000 faces you failed on. One of them is a father with a family and three kids whose life is ruined because of something you built. That does affect how responsible or accountable these people feel.

All of this is to say that, yes, it is a huge challenge on one hand to trace that path, but it is an even bigger challenge to convince people about their own role in things. When we actually try to pursue some level of accountability we often have to pull the product off the market in order to get people to stop and reflect a little bit.

ARTHUR HOLLAND MICHEL: With that in mind, I am going to hold onto you, Deb, and ask you one last question. I am then going to punt over to Kristian and Zach in a final round-robin before we go over to the wonderful questions that have been coming in from the audience.

In your opinion, does this all suggest that there are certain applications of machine learning that, at least as the technology stands today and for the foreseeable future, we shouldn't touch—that there are certain things we shouldn't get machine learning systems to do because, given these technical inherencies, there is no fully ethically "perfect," if you will, way of doing them? I would love quickly your thoughts on that, and then we will get Zach's and Kristian's take on the same question.

DEBORAH RAJI: I definitely think there are a lot of premature deployments. Like I was mentioning earlier, a lot of our approach to audit work has been capturing information in terms of documenting things so that we can communicate amongst ourselves and to the public so that the public can actually have a conversation about it. One of the big challenges or tragedies with a lot of deployed machine learning systems is that they happen without any level of disclosure or consultation with the public, so there is no opportunity to participate in terms of defining an algorithm that might actually impact your life or affect you in a specific way.

If we keep thinking about ways to capture information around the decisions being made throughout the whole machine learning development pipeline and afterwards and if we can figure out ways to communicate that to the people who should be the important decision makers, like domain experts, the public, government officials, or whoever we trust to understand and evaluate that risk appropriately, if we can do a good job with that, then those people will be able to say: "Based off of fundamentally what this is, this is not appropriate for what we want to do or what this decision should be about or the impact this decision will have."

I think that might change from institution to institution. Some schools might really value equity in their admissions process, and as a result of that, if they all get a good sense of what the automatic admissions filter algorithm is doing, some of them might agree with it and some of them might not. That is a conversation that they deserve to have rather than a decision that is made for them on behalf of one or two decision makers without any kind of clarity on what's going on.

I think it depends on the people impacted. If there is a way for them to have a say, that would be the ideal.

ARTHUR HOLLAND MICHEL: That's a tremendously important point.

Kristian, over to you. Given these technological realities, are there certain applications that we should not touch for the time being?

KRISTIAN LUM: I will be brief here. I think there are certain ways that certain technologies shouldn't be used. Unless we have the controls in place to make sure that the technologies are not used in ways they shouldn't be, then they shouldn't be built either. I will leave it there.

ARTHUR HOLLAND MICHEL: Very succinctly put. I appreciate it.

Zach, over to you. Quickly, anything that we shouldn't touch given these technological limits?

ZACHARY LIPTON: Yes. I think that's a great question. I have been to some extent banging on that can for a number of years. I won't speak as an ideologue on the Luddite side.

To start, part of why I would say it is a great question is because it is often the overlooked option. We assume that certain decisions are going to be guided by a certain kind of incoherent technology, and the question becomes "what is the fix?" rather than "is it ready for primetime at all?"

Questions where there is a serious concern of procedural justice are the kinds of things where it seems totally absurd to me that we are okay using it. My ethics on this are—I don't know if "left" is the right word—maybe a bit more aggressive than even the concerns about the technology, in that I don't think incarcerating people for risk is fundamentally coherent. If you could go out and look at the population of people who have not committed crimes and say this person has a 5 percent chance of committing a crime and this person has a 20 percent chance, and on the basis of my oracular premonition I am going to incarcerate some of these people, you would say that it's crazy. But somehow we are willing to entertain the idea that this is a reasonable thing to do the moment they get arrested. If they have been arrested, now we are willing to say: "We presume that they are innocent, but we are going to make this decision."

You could make those decisions not based on some oracular likelihood but on the gravity of the crime. There are certain categories. If someone has been accused of rape, you are not going to release them on bail at all—not because of the likelihood but because of the seriousness of the offense or something. I see a more coherent route to that.

I do think it is an overlooked option in many of these cases. I think it should be on the table. I don't think these systems do anything that is procedurally just. In a setting where that is a requirement, I think it should be off the table, at least this generation of technologies.

That said, I think there is an interesting counterargument that is a familiar counterargument that comes up. The counterargument is: "Well, the relevant comparison should be to the status quo, and judges are messed up in all these different ways, and look at these data." I think it is not a trivial counterargument. It is something that should be taken seriously. You could imagine a world where this technology would be half-baked and awful and whatever, and it would still result in 50 percent higher arrest rates for Blacks versus whites. It would be hard to completely rule out. You could imagine a system that is constitutionally messed up but has significantly less disparity than the status quo. I think someone like Kristian is better positioned to do the analysis on actual criminology data to say whether there is any evidence to believe that is actually the case, but it is something reasonable that you can imagine.

Then there is a counter-counterargument that often comes up, which also needs to be considered: even if it is a little bit better, there are other costs associated with automation, and I think one of them is the abdication of responsibility. Right now there is a process by which you can say, "I think these judges are doing things wrong and I am going to appeal their decisions," or "I think this police department is doing something wrong," and the law plays in this murky space where you can air and re-air these cases over generations. There is a process for revisiting judicial opinion on these things and re-legislating these issues. The worry is that once you say, "Okay, this is the system, and it is the equivalent of FDA-approved to make all of these decisions," you remove individual accountability and an important incremental process for challenging the system. The "too long; didn't read" version is: yes, but it's complicated.

KRISTIAN LUM: Can I jump in really quick? I want to add one thing to that.

ARTHUR HOLLAND MICHEL: Of course, Kristian.

KRISTIAN LUM: I think the question of "Is it bad but better than humans?", like Zach said, always comes up. I think it is a relevant thing to ask, but it misses the possibility of other interventions. It presupposes that there are only two possible states: the status quo, or adding on top of the status quo this particular technology or a similar one, some sort of prediction-based technology that is being deliberated.

But in fact there are many other possibilities one could consider. You could consider policy changes. You could consider new laws. You could consider any number of different changes that might be better than both of those scenarios and that come with fewer downsides, some of which Zach enumerated just now.

That is always where I go when that comes up: "This is bad, but it's better than humans." Yes, but what is better than both? Let's not limit ourselves in the scope of possibilities. We are humans. We are creative. There are all sorts of levers for us to pull. There are all sorts of options at our disposal. Let's not just assume that a predictive model is the only alternative to the status quo. There are many other options we could consider.

DEBORAH RAJI: That's a great point, because I remember on November 3 one of the proposals on the California ballot was something like that—abolish cash bail or install a predictive policing algorithm—setting up that false dichotomy of "we have to do things the way they are" versus "we have to use this machine learning model to solve all of our problems," when there is actually a full range of possibilities.

The other thing I was going to add is that actually I do think there are certain situations where deep learning is completely unacceptable. There is this current trend that we see of these data-driven technologies that require an insane amount of data. I think there are some situations where that is never going to be okay, where the privacy risk is too much, and it is completely inappropriate, facial recognition being one of those. To collect all the information, the face data that you need in order to build those systems, requires a lot of faith in whichever authority figure happens to be controlling that data and how securely they are going to control it. That is sensitive information to gather en masse for the sake of a technology that has not proven itself yet. I would say that is one of those situations where I am not disheartened when it gets banned. It makes a lot of sense to me to move away from that direction.

There are a lot more ambiguous cases of that, though. I see a lot of situations in health care where people are collecting a lot of private information for the sake of health care applications. That is a more ambiguous case because we are not sure whether the payoff is actually going to materialize, and it becomes a much more nuanced conversation. So I am not sure it is a hard-and-fast rule, but there are certain situations, especially in the surveillance context, where it makes a lot of sense to step away from machine learning or other automated methods.

ARTHUR HOLLAND MICHEL: That's an important point to add, and I appreciate it, Deb.

I would like now to turn to some of our fantastic questions in the last 17 minutes or so that we have. I would like to start with a question from Joanna Bryson, who we are lucky enough to have joining us today.

Joanna asks: "To ensure people don't do harm with AI, one thing we have agreed is to talk about AI as human-centered and not as a technical system that is"—and I take full blame for this verbiage—"'released into the wild' rather than the property of its developer, owner, or operator. I would love the panelists' perspective on this notion of whether having a human in the loop mitigates some of the considerations, the technical limitations, that we have been talking about."

ZACHARY LIPTON: My answer is of a varied sort. I think there is a little bit of a word game here, in that there is a question of what we are characterizing. I agree with Joanna from a research perspective about adopting that posture toward how we think about these systems as we develop them. What should be the aspirational way we characterize them? I very much work in and support that sort of human-centered focus.

But at the same time, there is also the question of characterizing the systems as they exist in the status quo, which I think is largely technology developed in a sandbox, not in a human-centered way, and because of that, in a sense, released into the wild, often with some embarrassing consequences. So it is a question of whether you are describing the way we should think about the aspirational goals of the field and conceive of our mission, or characterizing what people are actually doing for the purpose of criticizing its weaknesses, and I don't think those are incompatible.

DEBORAH RAJI: There is a popular misconception in the machine learning field, I find, that the bigger you can make the sandbox, the closer you are to an open problem, as if making this a bigger and bigger closed problem magically transforms it into an open problem. I find that to be completely untrue.

Making the sandbox bigger does help in the sense that it becomes more difficult to identify the failure mode or the situation where it won't work, but it is still a closed problem, and I think that is something the field is in the process of realizing with this very recent wave of work looking at how sensitive these machine learning models are to distribution shift.
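As an illustrative aside, not something shown on the panel, here is a toy sketch of the kind of distribution-shift sensitivity being described: a model that leans on a spurious correlation looks accurate inside its training "sandbox" and degrades sharply once that correlation changes in deployment. All data, features, and numbers are invented for the example.

```python
# Toy illustration of distribution shift (all data and numbers are invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """Feature x0 (observed noisily) determines the label; x1 is a spurious
    feature that agrees with the label with probability `spurious_corr`."""
    x0 = rng.normal(size=n)
    y = (x0 > 0).astype(int)
    agree = rng.random(n) < spurious_corr
    x1 = np.where(agree, y, 1 - y) + 0.1 * rng.normal(size=n)
    X = np.column_stack([x0 + rng.normal(size=n), x1])  # noisy core feature
    return X, y

X_train, y_train = make_data(10_000, spurious_corr=0.95)  # the "sandbox"
X_iid, y_iid = make_data(2_000, spurious_corr=0.95)       # same distribution
X_shift, y_shift = make_data(2_000, spurious_corr=0.05)   # correlation flips

model = LogisticRegression().fit(X_train, y_train)
print("accuracy in distribution:", round(model.score(X_iid, y_iid), 3))
print("accuracy under shift:    ", round(model.score(X_shift, y_shift), 3))
```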

KRISTIAN LUM: Having a human in the loop, or a human as an ultimate backstop, often gets used as an excuse to absolve the makers of the model of responsibility for making sure that the combination of human and model behaves in a way that is fair. The idea there, at least implicitly, is that as long as there is a human who is ultimately making the decision, well, then, it's not the model. The model is just providing additional information, and if the human goes and makes some sort of bad, unfair, or biased decision, in whatever sense of the word you mean that, then that's ultimately on the human.

But there are cases where the combination of human and model ends up being worse. So having that human backstop actually doesn't fix the problem, and having that model in combination with the human seems to make things worse in some ways.

One example I can think of is a great study by Alex Albright. Most of my work is in the criminal justice system, and her work is in that domain as well. She looked at how judicial decision making changed after the introduction of a new risk assessment tool that predicted recidivism risk and a variety of other risks. I should have studied up; I was not anticipating talking about this, and I don't want to mischaracterize her study, so you should go look it up yourself if you are interested. But my takeaway was that prior to the introduction of the tool, judges were making similar decisions for people who were similarly risky by the metrics of the tool. There were not huge discrepancies by race.

Then, after the introduction of the tool, when that risk score was explicitly shown to the judges, you ended up seeing this split where judges were more likely to give the benefit of the doubt to white defendants than to non-white defendants. Before the model was there, there was some similarity and some parity in terms of how decisions were being made. The introduction of this additional information seems to have caused a split where they started making decisions differently by race. That is one example.

Is the maker of the model absolved of responsibility because there is ultimately a human there? I don't think so. It seems fairly clear to me that it is the combination that is causing this particular problem. I think it is more complicated than just assuming that because this is only a suggestion or a recommendation, having a human backstop there is going to solve all of our problems.

DEBORAH RAJI: Another quick comment about human in the loop: the human in the loop is optimizing performance on an objective that has been fixed by the engineers or researchers who built the model. To speak to the lack of responsibility, that is a huge part of it. They are not necessarily giving up any agency by introducing humans in the loop. They are introducing humans for the sake of improving or optimizing performance on an objective they have already defined and clarified, rather than inviting humans to shape the objective of the model or better characterize the task, as was mentioned earlier, around more domain-specific considerations. For that reason also I am quite skeptical of human in the loop.
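A minimal sketch, with hypothetical names, of the structural point being made: in a typical human-in-the-loop setup the reviewer can override an individual prediction, but the objective, the features, and the operating threshold were all fixed by the system's builders before any human enters the loop.

```python
# Minimal sketch (hypothetical names): the human can flip an individual
# outcome but never touches the objective the developers fixed up front.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class FixedObjectiveSystem:
    score: Callable[[Dict], float]   # developer-chosen model
    threshold: float = 0.5           # developer-chosen operating point

    def decide(self, case: Dict, reviewer: Callable[[Dict, bool], bool]) -> bool:
        machine_decision = self.score(case) >= self.threshold
        # The reviewer may override this one decision, but cannot change what
        # is predicted, which features are used, or how "success" is measured.
        return reviewer(case, machine_decision)


def deferential_reviewer(case: Dict, machine_decision: bool) -> bool:
    """A reviewer who simply defers to the machine (automation bias in miniature)."""
    return machine_decision


system = FixedObjectiveSystem(score=lambda case: case.get("risk", 0.0))
print(system.decide({"risk": 0.7}, deferential_reviewer))  # True
print(system.decide({"risk": 0.2}, deferential_reviewer))  # False
```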

ARTHUR HOLLAND MICHEL: I want to make sure that we at least have a chance, in the 10 or so minutes we have left, to ask the slew of fantastic questions that came in. I am going to read these questions together, and you can pick and choose which you would like to address in these last few minutes.

ZACHARY LIPTON: Lightning round.

ARTHUR HOLLAND MICHEL: Exactly.

A question that comes in from Renato Vaccaro is whether we think that multidisciplinary teams with different backgrounds could help in reducing some of the issues we have been talking about with regard to, for example, bias.

A question from Raja Chatila: "If you think that AI systems should be banned from certain applications, how can we convince legislators to do exactly that?"

Clarence Okoh asks about this notion of whether we should have strict product liability in the design and development of AI in tech, a strict liability regime in the same way that a person who has a dangerous animal is liable for anything that animal does.

Again, I am going to run through these just to make sure that we have everything.

Another question from Raja Chatila: "When we say that a human in the loop is in control of these systems, we forget how humans might be drawn to still trust the system, this notion of automation bias." He asks about the kind of governance mechanisms that should be in place to avoid that.

I will just ask each of you to pick one thread there, give us a couple of minutes on it, and any concluding thoughts that you might have.

DEBORAH RAJI: I think it's closer to product liability than strict liability in the sense that it is not an animal, it is not a person, and it's not your child. It's a product. Someone made decisions and created it. It is like a technological artifact. It is a built artifact.

So, yes, I think the framing of consumer protection and product liability is much more appropriate than other forms I have heard of strict liability. Also, people talk about it in terms of guardianship, and I think that is speaking about a technological artifact in the way you would speak about a human or an animal, and it doesn't feel appropriate.

KRISTIAN LUM: I am going to take one of the questions that I saw pop up earlier. I don't think it was one of the ones you mentioned.

ARTHUR HOLLAND MICHEL: By all means, yes.

KRISTIAN LUM: One of the questions I saw in here was about whether multidisciplinary teams are a good thing: "Do you think that multidisciplinary teams of different backgrounds could help reduce biases?" This came from Renato Vaccaro.

That was one of the ones I wanted to mention because I usually answer these things as stories, so I have another story for this. I think it is essentially necessary but not sufficient. By that I mean I think you do need a variety of backgrounds and, to overuse the word, "worldviews," in the room to spot potential bias and to spot potential impacts that others might overlook if they don't have that sort of experience.

But just having people there to point this out isn't enough. I think you need some sort of balance of power because the decisions that are ultimately made are going to reflect the power structures of the people in the room.

One example I have of this again comes from the criminal justice context, and it has to do with the redesign of a risk assessment tool. A variety of stakeholders were in the room and were consulted about different ways the tool could be built so that it would operate in a way that was more fair. Many different opinions went into this, and all of them were heard. I actually thought the process was really fantastic, but at the end of the day it was the judges who made the final call. So despite a variety of dissenting opinions about certain decisions, I can't say the particulars here, about how this thing should be deployed, the judges had the power in the sense that they were the ones saying it wouldn't be used unless it followed X, Y, and Z. So X, Y, and Z is what happened.

Yes, I think the process is improved by having multiple points of view and a diversity of backgrounds, opinions, and even demographics at the table, but ultimately, if all of those groups aren't equally empowered to make the decision, I don't think at the end of the day it is going to make a whole lot of difference.

ARTHUR HOLLAND MICHEL: Zach, did you want to take a stab at anything or offer a final concluding thought?

ZACHARY LIPTON: Very lightly. I am going to punt on a couple of the questions. I am going to take on one or two.

One is this question about multidisciplinary teams. On one hand I am militantly interdisciplinary, so I am all for it, but I would be careful. Maybe I am more anti-disciplinary than multidisciplinary, in that I don't think disciplines are first-class citizens in academic discourse. I think it is usually a weak fallback people have, like, "Well, you know, you're a theory guy, you're a computer science guy." Whenever we attribute almost anything to the discipline itself as an intrinsic factor, we are making some kind of horrible mistake.

It is one thing to say, "Okay, this is a group of people who have figured out some class of technical things; it's a likely place to look for the sort of paper you are looking for on some specific question." What I caution against is a kind of lazy multidisciplinarity where people almost act like, "Oh, well, you just need more human-computer interaction people," as if that is going to magically solve things. I think that is almost always naïve.

Nobody has the answers to the questions we have. They are hard questions. There is not a particular disciplinary bent or pile of knowledge that has some kind of sovereign right to tackle these questions, and I think there are multiple bodies of literature that bear on it, but in general I don't think there is any magic bullet. I think we all have to have the humility if we are interested in these problems to follow them where they go, and that means getting the relevant people with the right expertise to collaborate with and acquiring the right expertise yourself.

There are theorists who take on questions of fairness, but they are not necessarily so committed to justice. They are more concerned with proving theorems, and if it turns out that addressing the problem that ostensibly motivates their work does not involve doing lots of theory, then at some point they get bored. That is my position on it at a high level. You have to attack the problem, and that involves things we associate with different disciplines, but I don't think there is a magic bullet. I think the right people are already in the room, and the people who have a tendency to collaborate across disciplinary boundaries are already doing it.

A much lighter answer I would give is on this question about liability. Thinking about where the buck stops is an important and interesting question, and it is not always obvious who that is. Increasingly it is a landscape where some people provide data to people who develop models as services, who in turn provide them to people who commercialize them, and then there are individual agents who are accessing them. An example would be the ImageNet data set, some large data set that is given to a company like Google, Amazon, or Facebook, which develops a system that it makes available through some platform to a company that licenses the technology, and then there are individual people making decisions based on that technology.

I do think it is not a totally trivial question to ask who is responsible for what. Is the person who built the data set responsible for every technology that is based on it, or are they just responsible for providing an honest accounting of what the data set consists of and what it may or may not be appropriate for beyond that? I do think those questions are not always straightforward to resolve.

The law is concerned with holding people accountable for the decisions they make, and if we are going to make decisions in fundamentally different ways than we historically have in some of these regimes, we are going to have to tease out how that accountability works, which is what the law already does, for example, when people kill.

It is not immediately clear what the right protocol is or how to formalize these chains of accountability, but it is definitely one of the right questions and something we need to be thinking about.

ARTHUR HOLLAND MICHEL: I am sure there are many, many more questions, but unfortunately our time is up. That is all we have time for today. Ordinarily, at a traditional Carnegie Council event in the before times in New York City, this is when we would all retreat to the real event, the cocktail hour. Sadly, we will have to content ourselves with continuing this conversation in the online format, which I hope we do, until such a time as we can hold some of these dialogues in person, as they so desperately need to be.

I just want to thank all of our panelists today. Thank you so much for your fantastic interventions. I encourage everybody to closely track their work. They are doing some truly important projects in this space, and we have a lot to be grateful to them for in terms of moving the conversation forward.

I would like to also remind you once again that a full recording and transcript of this event will be available shortly on our website, carnegiecouncil.org, and a recording will be available on our YouTube channel.

With that I will close up. Thanks everybody for joining us. I hope you have enjoyed this conversation as much as I certainly have. Let's look forward to continuing the conversation. There are clearly no easy answers here, and that is why it takes the participation of everybody to try our best to move things forward.

On that note, I will wish you all a wonderful rest of the day. Until next time, signing off. Thanks so much.
