
Will AI Make Humans Obsolete? Not in the Short Run (Part 1)

Roland Alston, Appian
January 18, 2019

This is the first installment of a two-part series on artificial intelligence (AI), featuring AI expert Vincent Conitzer (@conitzer), Professor of Computer Science at Duke University.

There's nothing new about the conflict between machines and humans. But what has changed is the astonishing rate at which machines are becoming intelligent.

All you have to do is look around to see that intelligent automation has gone mainstream. (Just ask Siri and Alexa.)

And with the Internet of Things growing faster than anyone ever imagined, AI will soon be embedded in over a billion devices.

Soon machines will learn from machines. And AI will learn from its experiences, adapt to its surroundings, foresee outcomes, and change its functionality to meet your expectations.

The pessimists warn that this will make humans obsolete.

The truth is, many jobs will end up on the disruption casualty list because of intelligent automation.

But the opposite is true of work that requires empathy and the ability to step back and see the big picture. Thus far, these kinds of jobs have largely been safe from disruption.

The question is, will the rise of AI upend all of that?

Which brings us to AI expert Vincent Conitzer, who has written extensively on the human impacts of AI and intelligent automation.

A professor of Computer Science, Economics, and Philosophy at Duke University, Conitzer breaks down why AI and intelligent automation will continue to shake up the workforce, but won't make humans obsolete, at least not in the short run.

We hope you enjoy the conversation.

Appian: Good afternoon professor. And welcome to Appian. You recently published an article in the Wall Street Journal where you argued that in the age of AI and intelligent automation, being human still has its advantages. What did you mean by that?

Conitzer: That article talks about what AI can and can't do. Some people have raised lots of concerns about AI, ranging from bias and fairness all the way up to AI taking over the world. I wanted to help people interpret these arguments. And that's basically what the article is about.

What AI Can and Can't Do

Appian: Yes, you talk about what AI is good at, what it's not good at, and how it fails in strange ways. But if AI gets better at what it's bad at, how would that change things?


Conitzer: So, I think today's AI still has major limitations. What AI tends to be very good at today is what's often called narrow AI. This is the kind of AI that solves a narrow problem, especially problems where you see the same kinds of inputs over and over again, where there's a clear measure of success and a clear objective to pursue.

Appian: So, what kinds of jobs are most impacted by narrow AI?

Conitzer: This could include radiologists, who have to diagnose images and look for patterns in them. You've already got companies that manufacture robots that can flip burgers. You've also got self-driving cars, which can be more challenging because unexpected things can happen when you're driving around a city. And this is where humans start to shine.

Appian: So, jobs that are repetitive and predictable are at the greatest risk of disruption…


Conitzer: When the job gets less repetitive, less predictable, when you need a broader understanding of the world and social interaction, and you have to be flexible and adapt to unforeseen circumstances, this is the kind of work that humans are much better at.

One of my favorite examples is kindergarten teachers. They have to be aware of many different things. They have to have a good amount of common sense. They need to manage social relationships with the kids and with parents. They face all kinds of situations that they've never seen before, and they have to adapt to them.

Beyond the Limitations of Narrow AI

Appian: Alright, so narrow AI isn't good at solving those kinds of qualitative problems? So what's the opposite of narrow AI? What's the alternative?

Conitzer: Artificial General Intelligence. This would be an AI system that could do more than outperform people at narrowly defined tasks. You've already got systems that can do a better job than humans at diagnosing images. But it's not like you can easily take those systems and instantly apply them to doing other things.

The kinds of things that we want people to do in their jobs usually require lots of flexibility and common sense. For people, it's pretty easy to see when something is a little off in the data. If you're classifying images, and one day you see that the images are really changing, you'll take a step back and ask: "What's going on here?"

People Are Better at Seeing the Big Picture

Appian: So, today's AI systems aren't able to do that?

Conitzer: Maybe they can detect that something has changed. But they may not be able to put the change into broader context.

This [inability of AI to put things into context] is what causes some of the amusing failures of AI systems. A group of researchers at Carnegie Mellon took a computer vision system that was designed to recognize people in images. And they built colored eyeglass frames that they would wear to try to mislead the AI system.

It's strange that this would make any difference, right? Because these were people who just put on eyeglasses to look different. And the system got completely tripped up. It consistently misclassified people in this way.

And that's because the system had no real concept of what the different aspects of the image were. It couldn't figure out that the researchers were just being funny and putting on glasses. The system just wasn't designed for that.

This gets at narrow AI's inability to grasp the broader context of a task and how that plays out in practice. These are things that are important as you deploy AI in the real world.
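The fragility Conitzer describes can be sketched with a toy model. The snippet below is not the Carnegie Mellon attack itself; it is a minimal illustration, assuming a simple linear "recognizer" over 100 synthetic pixel values, of the general adversarial-perturbation idea: a small, deliberate per-pixel change (analogous to tinted glasses) flips the model's decision even though the input barely changed.

```python
import numpy as np

# Toy stand-in for a face recognizer: a linear classifier over 100 "pixels".
# A positive score means "recognized as person A". This is purely illustrative,
# not the CMU system or any real recognizer.
rng = np.random.default_rng(0)
w = rng.normal(size=100)            # fixed model weights
x = w / np.linalg.norm(w)           # an input the model scores confidently

def score(v):
    return float(w @ v)

# FGSM-style adversarial nudge: shift each pixel slightly against the model's
# gradient. For a linear model, that gradient is simply w.
epsilon = 0.2                       # small per-pixel change, like tinted frames
x_adv = x - epsilon * np.sign(w)

print(score(x) > 0)                 # True: original input is recognized
print(score(x_adv) > 0)             # False: the tiny perturbation flips it
```

The point of the sketch is the one Conitzer makes: the model has no concept of "a person wearing glasses", only a score over pixels, so a change that is trivial to a human can push it across its decision boundary.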

And this is a good takeaway for C-level executives. The point is AI is not going to solve everything. You need to have a very clearly defined, predictable task that happens consistently over and over again.

When expectations and goals need to be interpreted more broadly, human workers remain indispensable.

Appian: Which reminds me of an article I came across in the New York Times about how an AI system turned down former Federal Reserve Chairman Ben Bernanke for a home loan because he had recently switched jobs, which made him a high risk to refinance his mortgage. I think a human being would've made a different decision.

Conitzer: That's right (laughter). I haven't seen Bernanke's credit report, so I don't know about the decision. That being said, there are all kinds of things that could, in principle, go wrong with a loan approval process.

The system certainly wouldn't have had the broader understanding that "oh, this is Ben Bernanke, we know quite a bit about him, and he's probably a good risk for the loan."

If you're using AI to classify people, to determine whether they should get a loan or be sent to prison, the system may not perform as well as you thought it would.

It's Not Just About Automating Jobs Away

Appian: There's also a lot of debate about the conflict between AI and human labor and how this conflict will impact workers. But you've made the case that we should also focus on how AI will make jobs safer and easier.

Conitzer: Right, there are many ways in which AI can make life easier. Earlier, I mentioned how AI could do the kind of work that radiologists do. These jobs aren't going anywhere anytime soon.

But AI could augment the capabilities of radiologists. For example, the software might flag things that a radiologist would otherwise miss, or prioritize different images for him or her to examine.

The same is true of other jobs as well.

So, it's not just about AI automating jobs away. It's about automating certain aspects of work that AI can do better than humans. And, so, people might prefer to have these tasks automated for them. Which will free them up to do more of the kinds of work they prefer to do.

Appian: So, should we not worry about automation displacing people? Is that just hype?

Conitzer: It's not that we shouldn't worry about people being displaced by automation. We may need fewer people to do the same amount of work.

But, with most jobs, I think it's not realistic to say that entire job categories will be automated away. Even where a robot takes over parts of a job, there may still be a human in the loop who's controlling the robot and analyzing the data it sends back.

Appian: Some experts worry about the weaponizing of intelligent automation to make life or death decisions on the battlefield. What do you make of that argument?


Conitzer: The question is, are we comfortable with the idea that algorithms are making these decisions without any human involvement? I think many people are not comfortable with that idea.

They want to make sure that a human is always in the loop when life and death decisions are being made by an AI system.

But what does it really mean to have meaningful human control? This is an ongoing debate. And not everyone agrees on it.

Tune in next week for the second half of this two-part series.