Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence

Jan 14, 2016

"Artificial intelligence" is a misnomer, says computer scientist Jerry Kaplan. Machines are not intelligent; their programmers are. What we're seeing is a huge acceleration of automation, which will eliminate all kinds of jobs and create all kinds of unimaginable new ones. This will create a great deal of wealth. But the question is who will get that wealth?

Introduction

JOANNE MYERS: Good afternoon, everyone. I'm Joanne Myers, director of Public Affairs programs, and on behalf of the Carnegie Council I'd like to thank you all for joining us.

Our guest today is Jerry Kaplan, author of Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. This book has been selected by The Economist as one of the Best Science and Technology Books of 2015.

I believe you all received a copy of Mr. Kaplan's bio, but for those watching or listening to us online, let me briefly note that Mr. Kaplan is widely known as a serial entrepreneur, technical innovator, best-selling author, and futurist. He co-founded four Silicon Valley startups, two of which became publicly traded companies. His earlier book, Startup: A Silicon Valley Adventure, was named one of the Top Ten Business Books by Business Week.

Mr. Kaplan is currently a fellow at the Stanford Center for Legal Informatics and teaches ethics and the impact of artificial intelligence (AI) in the computer science department at Stanford.

The future is fast approaching. While technology has been displacing human jobs since the Industrial Revolution, many economists and technologists believe the world is on the brink of a new industrial revolution in which advances in the field of artificial intelligence will make human labor obsolete.

The idea of robots working on assembly lines, which once seemed so futuristic, is regarded as primitive by those in business today. Thanks to vast increases in dexterity and the ability to see in three dimensions, modern robots can cook and serve fast food, pick fruit while carefully distinguishing the ripe from the unripe, keep track of huge inventories, stock shelves accordingly, and much more.

But that's not all. Another point on which most in the field now agree is that machines are coming not only for the low-paying jobs but for the high-wage, high-skilled jobs as well. Soon machines will think and analyze information in ways that can outwit humans. They will be able to make complex medical diagnoses or write legal briefs. It is a genie that has been let out of the bottle.

The challenge is what this will mean for jobs and income inequality. The big question is how we humans will adapt. This is an issue that our guest has been thinking about for some time, and he would like us to be thinking about it as well.

That being said, in the next 25 or 30 minutes Mr. Kaplan will take us on a tour of the breakthroughs fueling this transition and the challenges it poses for society. In the end, he will give us the intellectual tools, the ethical foundation, and the psychological framework required to successfully navigate these challenges.

First of all, thank you for coming.

JERRY KAPLAN: Thank you, Joanne.

Discussion

JOANNE MYERS: In the introduction to your book, you write that "to understand the policy debate surrounding the impending revolution it is necessary to understand what is meant by artificial intelligence." Perhaps you could spend some time telling us about what it is, how we use it, and how we should be thinking about it.

JERRY KAPLAN: Sure, Joanne. That's a very difficult question. Where did you come up with that? [Laughter]

JOANNE MYERS: I read your book.

JERRY KAPLAN: Actually, I'll start off with a story about it.

First of all, how many people here have heard of the term "artificial intelligence"? [Show of hands]

Great. And, presumably, you've seen many movies and TV shows.

Anybody here have an engineering degree? [Show of hands]

Okay, we have one person. So I can point to you and say "as you know."

In most fields, you can't point to somebody who invented the field. But artificial intelligence is different. There was a gentleman by the name of John McCarthy, who at the time was a junior professor at Dartmouth, and I believe the year was 1956. He wanted to bring a bunch of colleagues together to discuss the use of computers to solve certain classes of problems that at the time were really not considered things that computers could do. Bear in mind it was 1956.

So he put a proposal in—to the Rockefeller Foundation, I believe it was—to fund this for a month in the summer. He said he wanted to study the idea that we could make machines do things that normally require human intelligence: we're going to study this area, and we're going to call it "artificial intelligence." So there was actually a guy who named the field, and that was the first use of the term.

Now, later in life I knew John McCarthy. I didn't know him back then; I was only a few years old—I'm old, but not that old. Later in life, John explained why he did that. At the time, he was a real junior guy at Dartmouth. There was a very senior guy by the name of Norbert Wiener—I believe you've heard of him—who was a cyberneticist; he had invented the field of cybernetics. McCarthy was just trying to do something different, to get out from under the idea that he was just having a cybernetics conference. So he came up with the name "artificial intelligence" for what they were thinking about doing.

Now, as I said, I knew John McCarthy. I had the pleasure of working in a lab with him at Stanford many years later when he was quite a bit older. John is your classic academic—disheveled. Anybody here from England? He's a boffin. Do you know what a boffin is? That went great when I was at the Imperial College in London a couple of weeks ago—"Oh yeah, he's a boffin."

John is kind of this disheveled guy. As you might imagine, he was a little bit Asperger'y. But he did one thing in his life that was really amazing, and that was name the field. He did other things that were amazing, but this one thing was one of the great marketing coups of all time, which was completely inadvertent. He picked a term which has come into the public domain to mean something which is very different than what it actually is, which is what I want to talk about for about the next five or ten minutes.

Now, most of you get your ideas about what artificial intelligence is, of course, by seeing the movies.

How many have seen the movie Her? Was the other one Ava—was that the name of the movie?—the one about the female robot? Ex Machina? How about the movie Artificial Intelligence? Everybody has seen Terminator, of course—I, II, III, IV, V, and VI, or whatever it is.

Now, from that you would think that what artificial intelligence is about is building machines that are intelligent. But I've got news for you. It really isn't about that at all.

John actually believed that; that was his hope—but his work has been, to a large degree, discredited today. I'm actually here in New York at another conference at NYU (New York University) about the future of AI, where the people there don't really pay any heed to John's ideas. I won't go into any details about why his approach has been discredited and what the new approach is.

You've probably heard of machine learning. Anybody heard the term machine learning? No.

Neural networks?

Now, all of those terms are anthropomorphizing something that is basically just an engineering practice.

The problem is he was trying to build computer programs that would perform tasks that humans perform using their native intelligence. But the truth is that they aren't actually intelligent; they are just doing those same things in other ways.

So what you see in the movies is a fantasy. Now, maybe someday there will be things like that, but we have only taken the first couple of steps really toward that. Instead, we have some very clever programming techniques that are intended to perform tasks that people normally perform using their native intelligence.

Now, to give you an idea of what that means, let me give you a historical example where we are kind of on the other side. This is after something has happened.

You know what a calculator is, I assume. Many of you use calculators. You might not know that the term originally referred not to a mechanical device but to a job. When you said "go get me a calculator," they would go and bring the calculator in. It was a person—interestingly enough, usually a woman, because they were found to be more accurate and to pay better attention than men. I'm not making this stuff up. And the calculators would sit and do the calculations. It was considered a high-skilled, highly paid job, and it required considerable training, intelligence, and attention to detail. So that's what a calculator was.

Well, during World War II and slightly before, they began to build machines that could perform numeric calculations. Now, those machines could outperform the human calculators both in terms of accuracy and in terms of speed, as you might imagine. But the people then thought, "These things are amazing. They're intelligent machines because they can do calculations."

Now, today I think there's nobody in this room who thinks a computer is intelligent because it can add up a list of numbers very, very quickly. It seems obvious that that's a purely mechanical function. And even though a great deal of scientific work and innovation went into the creation of those machines, we're not threatened by them. We don't think they're going to rise up and take over. That's not our view of what those things are.

Well, now, today you just flip that around. Now we see machines that can process visual information, that can drive cars, that can perform a whole bunch of other activities, which before today required human intelligence. So the natural assumption that people make is that that means the machines are intelligent.

Well, I'm here to tell you that's bunk. The truth of the matter is these are simply programming techniques that are used to solve these problems in different ways than humans solve them.

Now, the great tradition that John McCarthy started by naming the field artificial intelligence continues today. When you say "machine learning," what that means to this crowd is probably very different than what it meant down at NYU a couple of hours ago at the conference today. Down there it's a set of techniques using Boltzmann machines and convolutional neural networks and a whole bunch of other technical stuff in order to solve certain classes of problems.

So those words are used in the field, but they have a very different meaning for the experts than they do for a lay audience. This was John McCarthy's great inadvertent innovation, this marketing term "artificial intelligence," which has captured the public imagination.

JOANNE MYERS: Can I just stop you for one moment?

JERRY KAPLAN: Sure.

JOANNE MYERS: You've talked about how down at NYU that means something different than to the people in this room. But would the moral obligations or the ethical responsibilities of AI be different to the people down there than to the people here?

JERRY KAPLAN: No. That's a very good point. There are a lot of issues that are raised by the technology that will have significant effects on the world that we are going to live in, on how our children will work. These are very, very serious issues.

The problem is that when you call the field artificial intelligence and when your basic idea about it comes from movies like Terminator, that means the robots are coming to take our jobs. Look at the introduction that you gave. Your mental image of this and the way in which you are inclined to respond to this is, "Well, let's lock our doors and let's go get tasers, because they're electronic, and we'll shock them and knock them out. Let's keep the robots out. Let's not build those things because those things are bad."

Instead, I'd like to suggest to you, unfortunately, a more pedestrian and more mundane way to look at the field, which is much more valuable in terms of looking at what the consequences are going to be over the next few decades. That is, artificial intelligence is really a natural extension of the history of automation. That's all we're doing; we're automating tasks and we're automating jobs.

When you think of it that way, you can go back and think about the economics and the sociology of this and talk about the kinds of consequences of that different framing of the problem, that we're dealing with an acceleration of automation, and what is that going to mean for society.

So I've written a book, right here, that explores a lot of these issues, and also goes into the whole history, some of which I have just given you, about the field.

I talk about two fundamental problems that artificial intelligence is going to create for society. Now, let me talk about the positive things first and then I'll talk about the problems.

These are very powerful technologies, just as those machines in the 1940s were very powerful in terms of their ability to do calculations. What this will do is create a great deal of wealth. The question is, who gets that wealth?

If you look back at the history of automation—the Industrial Revolution, the invention of the steam engine—what you find is that it scrambled the economic structure, and there were a lot of people who became disenfranchised. You guys might have heard of the Luddites, the Luddite revolution. Well, we are in danger of a new Luddite revolution because of these new technologies that are being developed.

When we have new forms of automation, they affect the labor markets in very particular ways. This has held true historically for hundreds of years. In particular, they change the nature of work; they change the skills that are necessary to get the work done. So they render existing skills obsolete. The weavers lost their jobs to the new machines during the first Industrial Revolution as a result of the invention of mills and steam power—I may have mixed up a couple of things—and they were out of work. But the mill owners, they got rich.

Now, what we are seeing today is that advances in the field of AI—and I can tell you specifically which ones and how they will play out; I'll be happy to do that; it's worthwhile—are going to make it possible to automate a great deal of the tasks that people perform today in both blue- and white-collar jobs.

First, we have the potential to put a great number of people out of work, in what economists call "structural unemployment" or "technological unemployment." It doesn't mean there won't be jobs; it means that the skills people have aren't suited to the jobs that are available. What are we going to do for and about those people? We don't have the social safety nets in place, or the re-training in place, to ensure that the effects of this technology do not make most people's lives worse and only a few people's lives better.

The second moral issue is that, as happened with the earlier industrial revolutions, the benefits accrue to the people who can afford to buy the automation. In the old days that was basically the factory owners; today it's the people who are down at this conference at NYU—the Googles of the world, the Facebooks of the world. They get very, very wealthy, and a lot of people get put out of work.

So we are going to see an acceleration of income inequality, which is already a very serious problem. Income inequality, in my view, in this country is a national disgrace. We need to do something about it.

The problem is this particular technology, for all of its benefits and for all the ways it is going to make life more pleasant and cheaper and generate a great deal of wealth, is going to generate most of that wealth for a very small group of people who are the ones who can afford to invest in that automation.

JOANNE MYERS: So should we then be manufacturing all these robots if they are taking away people's jobs?

JERRY KAPLAN: Absolutely we should, and the reason—

JOANNE MYERS: Why?

JERRY KAPLAN: The answer isn't, as I was trying to say, don't lock the door and keep the robots out. We're generating wealth. It's a normal part of the process. The question is, how do we more equitably distribute that wealth in the United States?

JOANNE MYERS: I know you have some ideas.

JERRY KAPLAN: Oh, of course I do.

JOANNE MYERS: Please tell us.

JERRY KAPLAN: I am nothing but ideas. I could pontificate for hours.

JOANNE MYERS: Okay. Please do.

JERRY KAPLAN: Just to give you more of a feel for this, 150 years ago the U.S. economy was primarily agrarian. Ninety percent of the U.S. population worked to grow food. That's all people did. The stuff you see on Downton Abbey, that was a tiny little sliver of the population. Everybody else was working in the fields, even right here in the United States, up through 1900. In 1800, more than 90 percent of the people worked in agriculture. So if you said, "I'm going to work," what you meant was you were going out there to work in agriculture.

Now, today we feed way more people with 2 percent of the population. Less than 2 percent of the population works in agriculture. And by the way, that 2 percent is under threat from the new wave of artificial intelligence. Now, that's a tremendous advantage, because we no longer have to all go out and scrape around to grow food in order to eat. It's the wealth that was created that allows us to live the way that we do.

You know, we tend to think statically. I grew up here in New York. We were talking about this before I came up here. Forty years ago, I lived in New York City. That's how old I am. I'm not 40. An odd fact is that the U.S. economy doubles every 40 years. This has gone on for several hundred years. Today we are twice as wealthy as a society as we were when I grew up here in New York. It doesn't seem like it because it happens relatively slowly over a period. Each year things get a little better—you get a raise, you get a better job, there are more people, certain things get cheaper.

But it's interesting if you trace that all the way back to 1800. The average household income in the United States was $1,000 a year. Now, that sounds ridiculous today. But the truth is that's the same as it is today in Mozambique and Malawi, to pick some examples. The reasons are they have the same kind of agrarian economy that we had 200 years ago. But every 40 years we have been consistently doubling, and that's because of the kind of work that's going on in artificial intelligence today, because of the automation and the improvements in technology and our ability to do more work with less.
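[Editor's note: As a purely illustrative aside, the doubling arithmetic Kaplan describes is easy to check. The sketch below simply compounds the $1,000 figure at one doubling every 40 years; the constant rate is an assumption taken from his remarks, not real economic data.]

```python
# Illustrative only: project household income forward assuming the economy
# doubles every 40 years, starting from the $1,000 figure cited for 1800.
# The constant doubling rate is an assumption from the talk, not measured data.

def projected_income(year: int, base_year: int = 1800, base_income: float = 1_000.0,
                     doubling_period: float = 40.0) -> float:
    """Compound growth: one doubling per `doubling_period` years."""
    return base_income * 2 ** ((year - base_year) / doubling_period)

if __name__ == "__main__":
    for year in range(1800, 2041, 40):
        print(f"{year}: ~${projected_income(year):,.0f}")
```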

So the answer isn't stop all of this. Of course we are going to want to continue this. The question is, who gets the benefits? If the Industrial Revolution had taken place not in 200 years, which we were able to adapt to, but had taken place in 20 years, we would have had mass starvation, terrible problems that would have occurred, because everybody would have been out of a job. There would be plenty of food and nobody would have any money to buy it.

Now, we are facing with artificial intelligence another one of these kinds of transitions. And I'll tell you, it's not going to take 200 years. It's going to happen in the next couple of decades. All kinds of blue-collar and white-collar work will be possible to automate. A lot of people are going to lose their jobs and a lot of smart engineers who keep building these applications are going to get extremely wealthy.

So the question we need to ask is, what kind of society do we want? Our current economic system is not properly aligned with what I think are reasonable social goals; the two have become decoupled. We need to find ways to change our economic policies and our economic system so that the benefits that all of this wonderful new technology will bring are much more widely distributed.

JOANNE MYERS: One of the ideas that you propose in your book is the job mortgage. Perhaps you could spend a few minutes talking about the job mortgage idea.

JERRY KAPLAN: First let me state the problem and then I'll explain. This is one approach; in my book I was simply trying to suggest the kinds of things we could do about this.

The problem is that when I grew up—and we still have the same system today—you went to school until a certain point, you learned a skill, you got a job, and you did that job for the rest of your life. We're not going to be able to do that anymore. The way in which we finance education, and the way in which we tolerate people being out of work and needing to gain new skills, is really designed for that old style of economic system, where you basically got a degree in law, stayed in law for the next 50 years, did the same thing you had always done, and had lunch with the same set of people.

Well, the problem really is that the government is the lender of first resort for vocational training. As a result, the skills that we're teaching people are disconnected from the actual needs of the marketplace. You can get a student loan to study just about anything that you want to study. We are grinding out reams and reams of people, tons of people, who can't get a job, and yet they're on the hook for their student loans.

So one of the concepts—and it wasn't new with me; this goes back to Milton Friedman and others—is we should think of this differently. We should think of it as an investment. So just as you have a mortgage on your house—living in New York, maybe many of you don't have that; you have a mortgage on your condominiums, if I've got that right; not co-ops, a different thing—these are non-recourse loans, which means basically that the security is the real estate itself. You can give up your deposit, your down payment, and get out of that. But you're not on the hook for the rest of your life for a bad purchase in terms of real estate.

Well, if you get a student loan, you are on the hook for it for the rest of your life. In fact, you can't even get out of it if you declare bankruptcy. A little-known fact: they changed the law to exempt student loans from bankruptcy.

So what we need to do is come up with new financial instruments where private parties provide capital to people if they invest in new kinds of job training, and then the loans are paid back, or are limited, by a proportion of their future income. So that becomes the security for the loan, your future labor, as opposed to everything that you own.

If we were to do that, nobody is going to loan you the money to go to school perhaps to get a law degree—it's very difficult to get a job right now if you get a law degree—but they might loan you money to go to school to become a nurse because there's a great demand for nurses. It simply ties the skills of your learning to the return on that investment.

We're not investing wisely in education as a society right now because we have this kind of myth that the government pays for it and therefore everybody has an opportunity. But it's economically broken and we need to fix it.

So the idea of a job mortgage is just what I said: a limited form of loan that caps your risk but permits you to re-train yourself if you need a new set of skills because your old ones have been made obsolete.
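[Editor's note: The repayment mechanics of a "job mortgage" can be sketched in a few lines. The figures below—loan size, income share, repayment cap, projected salaries—are hypothetical, invented only to show how securing a loan against a share of future income might work; they are not taken from the book.]

```python
# Hypothetical sketch of a "job mortgage": a training loan repaid as a fixed share of
# future income, with total repayment capped, rather than as ordinary recourse debt.
# All numbers below are invented for illustration.

def repayment_schedule(incomes, principal, income_share=0.10, cap_multiple=1.5):
    """Pay `income_share` of each year's income until `cap_multiple` * principal is repaid."""
    remaining = principal * cap_multiple   # maximum the borrower can ever owe
    payments = []
    for income in incomes:
        payment = min(income * income_share, remaining)
        payments.append(payment)
        remaining -= payment
        if remaining <= 0:
            break
    return payments

if __name__ == "__main__":
    projected_salaries = [40_000, 55_000, 60_000, 65_000, 70_000]  # e.g., a new nurse's pay
    schedule = repayment_schedule(projected_salaries, principal=20_000)
    print("Yearly payments:", [round(p) for p in schedule])
    print("Total repaid:", round(sum(schedule)))
```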

JOANNE MYERS: Before we open it up to questions from the audience, I just want to go back and take a totally different direction and talk about artificial intelligence itself. How do you program or encode it with some sort of moral system—I think people here would probably be interested in that—to make sure it is used for good, not for evil?

JERRY KAPLAN: When we talk about ethical issues in artificial intelligence, they come in two classes.

One class is basically the societal problems the technology will create. Those, to me, are moral or ethical issues. We don't want to disenfranchise large portions of our population. That's not what society is about. That's not what good people do. That's not the basic compact that we have with society. So that's the effect of the technology.

There's another side of this—very briefly, because I know we want to take questions from the audience—which is that a lot of these systems are increasingly autonomous; that is to say, they can make decisions. For example, how many of you have heard of the self-driving cars? [Show of hands] How many of you have taken a ride in one? [Show of hands]

A couple of you. If I came back in two years, half the audience would raise their hand. They're available. You can go to your local Tesla dealership and try their autopilot. It's fun. It's a cute function.

But when you delegate decisions about how you are going to drive to your car, you are also implicitly delegating a number of moral considerations. In particular, let's say you're going down a street, the street is suddenly blocked, maybe there's a group of kids on one corner and an elderly couple on another corner. You may have to make a choice, as unpleasant as that might be, as to whether to steer left and run into the kids or steer right and run into that elderly couple. You may make a split-second decision on the spot.

Well, it's one thing for you to make that kind of decision, which I regard as an ethical or moral decision. It's another thing when you delegate that kind of decision to your vehicle, because you're delegating it to the programmers in Mountain View, California—at Google, or at Tesla, or other places—and they have to actually wrestle with just these kinds of questions in figuring out how they're going to program those cars to behave. Those cars are going to find themselves in dangerous situations where there are ethically meaningful decisions that need to be made.

So there is computational ethics, an emerging field that asks: how do we take this 2,000-year history of the philosophy of ethics—principles of deontology and utilitarianism and all this technical stuff—and boil it down into a computer program so the car can do something that's reasonable?
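[Editor's note: To make the "boil it down into a computer program" point concrete, here is a deliberately crude, purely illustrative sketch of one utilitarian-style rule: score each available maneuver by expected harm and pick the least bad. The maneuvers, probabilities, and harm weights are all invented; whether harm can or should be scored this way at all is exactly the ethical question Kaplan is raising.]

```python
# A deliberately crude, illustrative utilitarian-style decision rule for the scenario
# described above: estimate the expected harm of each available maneuver and choose
# the minimum. Every number here is invented; deciding whether such scores should
# exist at all, and who sets them, is the real ethical problem.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_probability: float  # estimated chance this maneuver ends in a collision
    harm_if_collision: float      # hypothetical severity weight for that collision

def expected_harm(m: Maneuver) -> float:
    return m.collision_probability * m.harm_if_collision

def least_harmful(options: list) -> Maneuver:
    return min(options, key=expected_harm)

options = [
    Maneuver("brake hard in lane", collision_probability=0.7, harm_if_collision=2.0),
    Maneuver("swerve left",        collision_probability=0.3, harm_if_collision=9.0),
    Maneuver("swerve right",       collision_probability=0.3, harm_if_collision=6.0),
]
print("Chosen maneuver:", least_harmful(options).name)
```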

That's another one of the ethical issues which is raised by the technology of artificial intelligence.

JOANNE MYERS: But who is responsible for this program? You are in a car and another car comes up. Who is liable?

JERRY KAPLAN: I've been to a number of meetings and conferences on this. It's not going to be a problem. You guys don't have to worry about it. There's a whole other group of people who are worrying about who should be held responsible.

The basic answer really is more mundane than you might think. It comes out of product liability laws. If the car is not performing up to a standard which it is supposed to perform, that's a problem for the manufacturer. My advice to you is if you've got a self-driving car and you get into a dangerous situation, push the autopilot button and take your hands off the wheel, and I don't think you'll be considered to have any particular liability at all.

Questions

QUESTION: Hi. I'm James Cropcho. Thanks for this.

Should we have a guaranteed minimum income?

JERRY KAPLAN: My personal opinion is yes. Interestingly enough, if you go back approximately 150 years and look at the asset base in the United States, what constituted our assets? Because we were an agrarian society, the vast majority of our assets were land. The way in which the government very consciously—there was a lot of debate over this—made sure that young people had the same kinds of opportunities and inherited wealth didn't take over was they would give away land, and in giving away land they were giving away assets. As long as you were willing to work the land for, I think it was, seven years, you owned that land. So you could apply for a grant and improve the land and you owned it.

Now, the equivalent today, my calculation was that the value of all the real estate in the United States is 14 percent of the total economy, if I remember my figures correctly. Our wealth is in a different form. It's financial instruments. So when you think about that, what's the equivalent today of what we used to do in the Homestead Acts, which is the act where they would give away the land? It's basically, give people assets if they are going to use them productively.

Well, it's not very far from that to think about a basic income. A lot of people are talking about it. It's a very good idea. Contrary to the mythology around this, which suggests that people are just going to get lazy, it actually frees people up to take greater risks and to engage in more productive entrepreneurial activities. That has been demonstrated in study after study.

So I'm a big proponent of either a negative income tax or a guaranteed basic income.

And remember, 40 years from now there will be twice as much money in the United States as there is today. So we don't need to redistribute today's wealth, we don't need to steal from the rich and give to the poor. That's not what we need to do. The question is not what do we do with today's wealth, it's what do we do with the new wealth that is going to be created, assuming that the economy follows its regular pattern and doubles in the next 40 years.

QUESTION: Susan Gitelson. Thank you for being so insightful and provocative.

JERRY KAPLAN: Thank you.

QUESTIONER: Let's consider some of the political implications. For example, right now in the Republican primaries it is said that Donald Trump is getting a lot of support from blue-collar men who are apprehensive that they don't have many job opportunities, and therefore they don't like the system the way it is being operated. Even worse, in the third world—let's say the Arab Spring, or potential terrorists and so forth—there are many people who have gone to school and earned degrees and yet they can't find work, and they are the ones who are often most ready to become terrorists because they can't stand the system. What do we do about all of this?

JERRY KAPLAN: Well, the answer is we need a more inclusive society. When societies are organized so that people have equal opportunities, so that we invest in people—as we once did by giving away land to those willing to work it—today that means training them for jobs and making sure they all have an equal opportunity to get those jobs. When people know that they have opportunity, they don't turn to the kinds of activities that you're talking about.

QUESTIONER: Well, that's the ideal. But let's look at what's happening. The strong men who control society, how concerned are they about finding ways to keep their populations gainfully employed?

JERRY KAPLAN: I don't think they're concerned about that at all.

QUESTIONER: Exactly.

JERRY KAPLAN: That's the problem. The conference that I was just at, 10 years ago it would have been mostly a bunch of academics talking about this subject. I couldn't help but notice that there were probably today, out of 20 speakers, five or six of them who are known for being billionaires. The problem is the money is piling up in one place, it's not being spread around in effective ways, because we don't have systems that can ensure that a broad swathe of society benefits from these sorts of things.

If you want more terrorism, if you want more civil unrest, if you want demagogues speaking at Republican conventions or at Madison Square Garden, wherever they go these days, just continue what we are doing.

The funny thing is that those people are promoting the policies which make the problems you are talking about worse, while appealing to the very people who are going to get hurt by those same policies. That's the strange contradiction which I find very peculiar.

Thank you for your question.

QUESTION: Sondra Stein.

Assuming we have an education system that can train people in the way you said, or some method by which everyone gets the maximum training they need, it seems to me that with what is now being done with artificial intelligence and its trajectory, if you take society as a whole, as a bloc, we won't create enough jobs for the people. There will just be far fewer jobs than there are people, no matter how you shuffle it. Now, I could be wrong, but it's just a thought I have.

JERRY KAPLAN: You think, "Well, jeez, we're going to put all these people out of work." There's a recent study out of Oxford that says that nearly half of jobs will be subject to potential automation. Well, what are people going to do if there are only half as many jobs? That's thinking about the problem statically.

The truth is that every time we have been through one of these transitions, new kinds of jobs have been created that allow people to make perfectly good livings. Now, it's very hard for me to point to them and say that 20 years from now there'll be a whole new set of professions. Because they haven't been created yet, it's difficult to see them. As a futurist, it's quite hard to look into the future. But the truth is it has always been the case.

My own personal experience: I have four children, four girls. Hopefully, I'll get some sympathy from this audience. They were all teenagers at the same time. How's that? No sympathy? The youngest is now 17. The oldest is 23. Two of them just got their first jobs. I couldn't help but notice that the jobs that they got didn't exist when they entered high school.

The job market is rapidly changing. I'll tell you what they are. One of them is working at an online education company that sells courses online. The other one does social media promotion for restaurants. There was no such thing 10 years ago.

My personal point of view—and many of the economic studies back this up—is there are going to be plenty of jobs. Because there will be so much wealth, they will be willing to pay people to do new kinds of things.

There will be whole new fleets of competitive online gamers where you can make a living doing that. That's a major emerging area.

There will be something called Bitcoin mining. I don't think we have time to go into that. You don't put on a helmet and dig, but it's actually a real profession and you could make money as a Bitcoin miner. It's an electronic thing.

So we'll find all kinds of new things that need to be done. And we'll have more wealth to pay people to do things like grow prize orchids or to perform in symphony orchestras. We're just plain going to have more money, and that money will generate jobs. They may be different jobs than the ones we have today, but there will almost certainly be jobs.

I can make one other comment. We went from 90 percent to 2 percent employment in agriculture. If we had been here 200 years ago having this conversation, everybody would go, "Oh my god, what is everybody going to do?"

And they would think none of you were working. If I could transport people in time 200 years, and we went through the room and everybody here explained what they did for a living, people would go, "Well, that's not work. You're just doing that for fun. You could just knock off, work two hours a day, go buy a jug of wine, build a shack out in the woods, dig a hole for an outhouse, and you'd be living the high life just working two hours a day."

Our expectations rise and the job market inevitably evolves in this way. So I'm pretty confident we're not going to have a problem.

QUESTIONER: I don't know that we can always extrapolate from the past. But you did explain it well. Thank you.

JERRY KAPLAN: Thank you very much.

QUESTION: I'm Isaac Scheinfeld, a high school senior.

I'm planning on studying artificial intelligence and neuroscience in college. I was wondering if you had any recommendations on how one could pursue that while still keeping these ideas of ethics and social morality in mind.

JERRY KAPLAN: I think these are really different questions. This goes to Joanne's question; it's like saying, "There are bad side effects to the technology, so we shouldn't do it." That's a mistake. We should do these things because they're generating wealth.

But I would suggest you think about it differently. It's not your problem. Go out, do the best you can, learn these technologies, and then you can decide whether you want to work at a military installation—or you may decide that running military applications, which is a whole other ball of wax with these new technologies, is not something you want to do. Or you might decide you want to work in the health care segment. But you will have that choice. There are all kinds of applications. To the extent that you want to choose a career path that you find morally satisfying, go ahead. So don't go to Wall Street, which is one of the major applications of artificial intelligence; go into medical diagnosis.

QUESTION: Don Simmons.

Back in the 1940s, there was an idea, I think associated with Alan Turing, that if a machine could be so cleverly programmed that a questioner looking at the answers generated by the machine was unable to say whether that was a human in control or a machine, that machine would be deemed to be intelligent. My first question is, has such a successful deceiver yet been contrived?

The second question: As the universe is thought to have sparked into existence from nothing or as life sparked into existence from nonlife, do you think it's possible for a machine to undergo some sort of spark like that to become what you would regard as intelligent?

JERRY KAPLAN: I would love to talk about this for an hour, but I won't do it. I'm going to try to be really concise and brief.

The Turing test, which you're referring to, is completely misunderstood. His original paper in 1952, I think it was—"Can Machines Think?" is where he proposed this test. [Editor's note: The paper, published in 1950, is called "Computing Machinery and Intelligence."] If you actually read the paper, his conclusion was: "So I come back to the question, can machines think?" He said, "I regard the question as so meaningless as not to deserve serious discussion"—that's almost an exact quote—"However, I predict that within 50 years' time we will be able to talk about machines as thinking without fear of contradiction."

He was talking about language shift. He was not talking about some magical test of whether machines would be intelligent or not. So the whole concept of the Turing test is wildly misunderstood.

But to answer your question, they run them every year, and yes, a machine passed his test. It's really a question of "Can you fool somebody?" It's not a question of "Is the machine intelligent?" It's a fun test, but it has nothing to do with intelligence.

Your second point was—I can only remember one point at a time at my age.

QUESTIONER: Could any truly intelligent machine somehow spark into existence?

JERRY KAPLAN: If you guys go see movies, inevitably there are two kinds of intelligent machines. There are those that are conscious and those that are not conscious. Then there is this moment in time where they kind of go into some kind of crazy visual thing and, "Ah, the machine came alive, now it's conscious. Oh, what a fool I've been."

Well, there's no such thing. We have no idea what consciousness is. There's not any indication whatsoever that machines are or ever probably will be conscious in any human sense.

What it's really about is, do we regard them as deserving the courtesy of our own empathy? That's what we mean when we say "The machine came alive." You're really talking about "How should I treat it? Is it okay to turn it off, is it okay to kick it, is it okay to enslave it, if it's conscious?"

But nobody has any idea what this means, except in Hollywood. [Laughter]

Look, there are people working on a laundry-folding robot. It can sit there and fold laundry. Now, that can be incredibly intelligent. It may wander around your house and decide where you want your laundry put, how you like your socks folded, and knowing not to wake you up at night to put your laundry away because you're sleeping. It can be incredibly intelligent. But I guarantee you it will not wake up one day and go, "Oh my god, what a fool I've been. I really want to play in the fine concert halls of Europe." [Laughter]

This is not a real thing. It is, unfortunately, one of these societal memes, one of the things we worry about. And if it does happen, it's not going to be like that [finger snap]. We'll have plenty of warning. This is way beyond my pay grade. I'll be long dead before this question comes up.

QUESTION: Ann Lee.

This is sort of a follow-up to that question. I read in The New Yorker an article they had about robotics. The article basically said that people thought it could get to the point of intelligence. I was at a robotics conference not too long ago where someone said that Google was working on Google Brain and it had an IQ of 70, and that it's not too far out of the question where it could have an IQ of 6,000, which is what The New Yorker article was saying.

Regardless of consciousness or not, what the guy was saying was that machines can learn, and so, like people, they can learn to do good things or bad things, bad things meaning killing people. So since this is a possibility, and he said machines can learn to do that, and there was an open letter written by Stephen Hawking and Bill Gates and Elon Musk about this, obviously some people are very concerned about it. So I'm just curious why you dismiss it so quickly.

JERRY KAPLAN: If I can communicate one thing to you today, it is that whole idea is complete nonsense. Now, I can't state it any more plainly than that. It's the confluence of a whole series of just misunderstandings and people talking past each other.

What does it mean to say a machine has an IQ? It's a meaningless statement. I will try to summarize this very briefly. The whole notion of human intelligence as a measurable, objective quantity is fatally flawed. Many psychologists who study this say there are all different kinds of intelligence—academic intelligence, athletic intelligence, social intelligence, all these different kinds.

Somebody—I should look this up—came up with the idea of boiling it down to a number, an IQ, which is nonsense. "Little Johnny can do two more math problems in half an hour than Sally can; he's 7 percent more intelligent." Well, that's silly.

Now let's take that same principle and apply it to machines. The machine can do a million times more calculations than Johnny, so by that logic it has a 6,000-point IQ. But, as we just discussed, a machine doing calculations doesn't mean the same thing as a human being doing them. You assume that a human being doing calculations is using some kind of natural capability which we all agree is human intelligence. But that idea simply doesn't apply to machines.

Now, there's this incredible public discussion that seems to float around, and it will surface in places like The New Yorker. Don't believe everything you read in The New Yorker, please.

But the truth is this is not something we need to worry about. It's not real. If you think of humans as having a linear scale of intelligence, you can talk about machines, "Well, they're here and they're here, and now they've got an IQ of 70"—that would mean they're pretty dumb, by the way—"and what are we going to do when they get up here?"

Well, machines can outperform people on all kinds of tasks. The thing to bear in mind is these machines perform tasks. They're not intelligent. They're used to solving certain kinds of problems.

I don't worry about the fact that my car can go faster than I can run. I don't care about the fact that my calculator can do arithmetic faster than I can. What we're not seeing is a generalization of that.

Now, there are some people in the field of machine learning who are over-promising and misunderstand their own technology. Most of them, by the way, are quite young, sorry to say. I have the benefit of having been around for a while. I went through the expert systems craze—anybody here remember that?—and you could have heard this same argument. The Japanese fifth-generation threat—does anybody remember that? We've been through this cycle over and over and over again.

The new technologies, which I didn't get a chance to talk about—there are basically two big breakthroughs.

One is in the area of sensory perception, which allows us now to build robots and machines that can sense their environment and take certain actions in the environment. That's self-driving cars; you're going to find robotic gardeners and robotic ditch diggers and everything. We're going to have all of that stuff because of that.

The second, in the area of machine learning, is the business of extracting patterns out of very large volumes of data. It could have been called "big data," and in some circles it is. That's really all that it is. But when you're in a crowd like this and I come in and say "machines can learn"—they don't learn the way you learn; they don't learn to play the violin; they don't learn that vanilla tastes different than chocolate. The term has a different meaning in the broader society than it does to the technical community, which was one of the points I was trying to make.

QUESTIONER: That's actually not what this one speaker at this conference said.

JERRY KAPLAN: Oh, yes. You'll find—

QUESTIONER: He said that machines can learn to compose music, write stories, and the only thing that machines cannot do right now, he said, was to do disruptive innovation.

JERRY KAPLAN: I'm not sure which speaker it was. I take exception to what they said. The truth is you can program a machine. Let me give you a modern example. You can program machines now to do what's called automatic translation, and it's really getting good. You can take things in one language and translate them into another language.

Now, think about how humans do that: You go to school, you learn two languages, you understand the respective cultures, you listen to what somebody has to say in one language and you try to render that meaning as accurately as you can in the other language.

Now, machine learning techniques can perform this task to some degree on a par with humans—not quite on a par yet, but plausibly. But they do it by analyzing very, very large volumes of data. We can't do that; humans can't do that. It turns out that by looking for correlations between what are called concordant texts—here's a text, here's its translation—across millions of examples, you can extract very complex and subtle patterns using machine learning techniques, and you can apply those patterns so that when you feed in another text you can translate it into the other language. That doesn't mean that the machine understands anything. That's a great example of the points that I'm trying to make.
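[Editor's note: A toy illustration of the "patterns from concordant texts" point follows. It uses a four-sentence invented corpus to read word translations off co-occurrence counts—nothing like a production translation system, but it shows how correlation alone, with no understanding, can recover the right pairings.]

```python
# Toy illustration: extract word translations from paired ("concordant") texts purely by
# counting co-occurrences. The corpus is invented and tiny; real systems use millions of
# pairs and far richer models, but the principle is the same: correlation, not understanding.

from collections import Counter, defaultdict

pairs = [  # invented English-Spanish sentence pairs
    ("the cat sleeps", "el gato duerme"),
    ("the dog sleeps", "el perro duerme"),
    ("the cat eats",   "el gato come"),
    ("the dog eats",   "el perro come"),
]

cooccur = defaultdict(Counter)   # target words seen alongside each source word
target_freq = Counter()          # overall frequency of each target word
for src, tgt in pairs:
    tgt_words = tgt.split()
    target_freq.update(tgt_words)
    for s in src.split():
        cooccur[s].update(tgt_words)

def best_translation(source_word: str) -> str:
    # Prefer target words that co-occur with this word more often than they do in general.
    counts = cooccur[source_word]
    return max(counts, key=lambda t: counts[t] / target_freq[t])

for word in ["cat", "dog", "sleeps", "eats"]:
    print(word, "->", best_translation(word))   # cat -> gato, dog -> perro, ...
```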

So when somebody stands up and says, "Now machines can learn, they can learn this, they can learn that"—you can take large bodies of data, you can use a lot of processing power and some very powerful engineering techniques, to solve problems that people solve using intelligence. But it doesn't mean the machines are smart and it doesn't mean they're intelligent. That's my story and I'm sticking to it.

QUESTION: Hi. Maria D'Albert.

I wanted to point out a connection between the two ethical dilemmas that you pointed out: the first one, obviously, about inequality; but the second one where you talked about the internal logic of coding and how it creates rules and logic by which something is interpreted and an outcome takes place.

I wanted to bring up an example, a very tactical today's example, of how coding can create inequality because of the logical interpretation of a piece of information. For example, someone brought up the point to me that there was a known software code that was interpreting whether there were two spaces at the end of a sentence and whether that would make someone a likely candidate for a job opening.

Now, it had learned that there was a higher proportion of candidates who would be accepted into this organization, a very up-and-coming organization, if they only had one space. If there were two, they were seen to be less likely, so they were filtered out as candidates. What that's a proxy for is actually grammar education and age, because it used to be two spaces and now the current logic is one.

The reason I'm bringing the question is, how do you, if we're talking about the need for inclusivity and the logic of ethics is embedded in the coding and the determination of what one values as important—I do think it's important—the question is, how do you educate and make inclusive those ideas, just as the gentleman was talking about becoming a coder? That sensitivity to the need for inclusivity at a base level in the societal opportunities has to be there. How do you include that in the curriculums and in the mode, particularly when there's a lack of transparency in these very complex systems?

JERRY KAPLAN: Well, this is a great example of the kinds of ethical issues that occur when we delegate certain kinds of decisions to machines. They may not make them in ways that we would find ethical or socially appropriate. Now, these are real problems.

To give you another example, the granting of credit. You can use a machine learning system—and they do—to decide who is the best credit risk. You may find proxies, intermediate variables, for race. It's illegal to discriminate in credit decisions based upon race. One of the very big challenges that machine learning people have is to try to figure out whether that's what it learned, because, unfortunately, that may be predictive of the outcome.
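[Editor's note: One simple check a lender might run for the proxy problem Kaplan describes is sketched below: measure how well an innocuous-looking feature predicts the protected attribute. The applicants, the ZIP-code feature, and the warning threshold are all invented for illustration; real fairness auditing is considerably more involved.]

```python
# Invented-data sketch of a proxy-variable check: before using a feature in a credit model,
# ask how accurately it predicts a protected attribute. If a seemingly neutral feature
# (here, a made-up ZIP code) largely reveals group membership, a model trained on it can
# discriminate even though the protected attribute was never an explicit input.

from collections import Counter, defaultdict

applicants = [  # hypothetical (zip_code, protected_group) records
    ("10001", "A"), ("10001", "A"), ("10001", "B"),
    ("94105", "B"), ("94105", "B"), ("94105", "B"),
    ("60601", "A"), ("60601", "A"), ("60601", "A"),
]

by_zip = defaultdict(Counter)
for zip_code, group in applicants:
    by_zip[zip_code][group] += 1

# If we guessed each applicant's group from ZIP code alone, how often would we be right?
correct = sum(counts.most_common(1)[0][1] for counts in by_zip.values())
accuracy = correct / len(applicants)
print(f"Group predictable from ZIP code alone: {accuracy:.0%}")
if accuracy > 0.8:
    print("Warning: this feature may act as a proxy for the protected attribute.")
```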

So again, these are the kinds of things we need to be very careful before we turn them over to machines. I do not look forward to the day when I'm trying to argue with a machine to make a hiring or firing decision about me. I would think that's a decision that ought to be made by a human being.

QUESTIONER: Yes. But today, already, they are doing that.

JERRY KAPLAN: Absolutely. Google—the most amazing stuff—I'm not making this up—they actually identify people based upon your search activity on Google, what you're looking for, as to whether you might be interested in a job at Google, whether you have certain skills, whether you're searching for machine learning and JavaScript and certain things. They will actually say to you, "Would you like to take a quick test right now to determine whether or not we would consider hiring you?" It's an entirely automated process. It spits out people who can pass their test.

I don't think we want to interpose the machines in that way. A bigger issue is I don't think we want companies like Google able to use the data about us in this way to forward what are really their own interests.

QUESTION: Chris Acosta.

Really quickly, based on what you were just saying about the Google example—so you have an egalitarian, say, in the best example, a sort of baseline where you're setting up these rules—because at the end of the day it isn't Skynet, it's just automation, the acceleration of automation—

JERRY KAPLAN: That's correct.

QUESTIONER: Do you feel that that will cause—because it gets to the system of people programming those machines and those systems, whether it's in cartoonishly Skynet or a credit score or what have you—how do you police that acceleration of the institutional racism, classism, that exists in reality on the planet across every culture, except for probably a mayfly? How do you reconcile that? This is all great—I work in technology, and I'm sure a lot of people who are here do too—but doesn't that create a sort of unchecked system that just keeps propagating everything that's broken in society, but everyone's happy and making money and there are cool jobs and there are cool movies, but then sort of the outliers around that? How do you police that? Isn't that just an acceleration of the horrible things that happen to regular people?

JERRY KAPLAN: My general answer is kind of the point of my book and the talk, which is we need to be paying attention to these side effects of what is really a very valuable set of developments in technology.

So yes. As I said to the young gentleman who wants to go into the field, don't say that the engineers have to solve this problem. That's like saying the people who developed the atom bomb have to decide whether it's a good idea to drop it. We need to have a broader conversation. These are political and social issues. There are a lot of people at places like the Carnegie Council who worry about those issues. Let them worry about them.

QUESTIONER: But can the innovation innovate its way out of those things?

JERRY KAPLAN: Probably not. It's unlikely. I'm just offering an opinion. What's my opinion worth? It's the same as the guy's sitting over here.

But it's not natural to do that. Mostly what motivates the development is the ability to make money. They want to automate tasks that can generate additional wealth. People tend to worry much less about whether that's putting people out of work, whether taxi drivers are not going to have anything to do—whatever it is. That's a different problem. We need to address those as well. We're not paying enough attention to those issues because we are thinking of this as "the Terminator is coming" and we're not thinking about it as an—

QUESTIONER: We're the Terminator. The Terminator is not coming. We're the Terminator.

JERRY KAPLAN: That's right, exactly.

QUESTION: My name is Kyle Schmidt.

As the lady mentioned before, in the age of big data—I'm in the technology field, a software engineer by trade—in the field we say that a computer is nothing without its data, and in order to perform machine learning algorithms you need at least—I don't want to put a number on it—90 percent of your machine learning depends on its data. So in the age of big data—and Google comes to mind—how should we combat, I guess, the age of information and the information that Google and some of these other companies are gleaning from our private information?

JERRY KAPLAN: The problem is—and this comes as no surprise—that information is power. The transformation of information into electronic form, and the ability to communicate it very quickly through the Internet and other channels, has produced an explosion of available information. Now we are mining that information in order to do valuable and useful things.

So you are quite right, the value is in the data. That's like saying, "Look at the television. This is an amazing invention. But where are all the shows? How does it know how to do all those shows?" Well, of course it's the data that's flowing into it that makes it so interesting and valuable. That's what we're seeing.

And, of course, that can be abused and used in bad ways because we just haven't had enough experience yet in this new world to know what kinds of controls need to be placed—what kind of privacy controls, what kind of use controls. But I think that a lot of the policymakers are really waking up to this. I'm very much in favor of our paying a great deal more attention to looking forward to what these technologies are going to do in the next couple of decades, as opposed to saying, "Here's where it is today; let's try to figure out how to solve today's problems." There is going to be an acceleration of different side effects and problems for technology.

JOANNE MYERS: As you said, information is power. Thank you for making us so powerful. It was a wonderful discussion.

JERRY KAPLAN: Thank you, Joanne.
