
Memo to CXOs: Joanna Bryson Calls BS on AI Regulation Critics (AI Ethics, Part 2)

Roland Alston, Appian
September 27, 2018

Joanna Bryson, AI expert and computer scientist at the University of Bath in the United Kingdom.

(This is the second episode of our series on artificial intelligence, featuring computer scientist Joanna Bryson (@j2bryson), ranked by Cognilytica as a Top 50 AI Influencer to Follow on Twitter.) Read Part 1 here.

How was that credit application declined? Why was that person denied parole? How were disability benefits cut for those constituents?

And why do computers learning from humans automatically see certain occupational words as masculine and others as feminine?

And on and on and on.

Here's the thing. C-suite execs are under growing pressure to win consumer trust in the automated decisions that are already affecting many aspects of consumers' lives.

Studies show that more than half of global consumers (53%) say we need more education about the role of AI in society.

In other words, many consumers are already wary of AI.

And a lack of trust in whether AI lives up to our ethical expectations could fuel even more fear and loathing.

The good news? Many of the variables around the ethical use of AI are directly related to the choices we make in developing the technology.

So says Joanna Bryson, AI expert and computer scientist at the University of Bath in the United Kingdom.

Bryson recently co-authored an AI study (featured in Science magazine), which revealed that computers can be as biased as humans if their AI is trained without due diligence.

Rooting Out AI Bias

The study confirmed that when algorithms glean the meaning of words from massive stacks of data, they adopt stereotypes very similar to our own.

This is one of many reasons that Bryson believes it's essential to prioritize due diligence in the development of AI systems.

Not by creating new regulations, but by using existing regulations to hold those who build, own, or operate AI accountable for what their systems do.
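
For the technically curious: the test behind the study, the Word Embedding Association Test (WEAT), boils down to comparing cosine similarities between word vectors. Here's a simplified sketch in Python. The tiny hand-made vectors below are illustrative stand-ins for real pre-trained embeddings such as GloVe, and the scores are illustrative only.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: how closely two word vectors point in the same direction.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def gender_association(word, female_words, male_words, emb):
    # Positive score: the word sits closer to the female attribute words;
    # negative: closer to the male ones. This mirrors the WEAT idea.
    fem = np.mean([cosine(emb[word], emb[w]) for w in female_words])
    mal = np.mean([cosine(emb[word], emb[w]) for w in male_words])
    return fem - mal

# Toy 3-dimensional "embeddings" (real ones have hundreds of dimensions).
emb = {
    "she":      np.array([0.9, 0.1, 0.0]),
    "woman":    np.array([0.8, 0.2, 0.1]),
    "he":       np.array([0.1, 0.9, 0.0]),
    "man":      np.array([0.2, 0.8, 0.1]),
    "nurse":    np.array([0.7, 0.3, 0.4]),
    "engineer": np.array([0.3, 0.7, 0.4]),
}

for occupation in ("nurse", "engineer"):
    score = gender_association(occupation, ["she", "woman"], ["he", "man"], emb)
    print(f"{occupation}: {score:+.3f}")
```

Run against embeddings actually trained on web text, the same arithmetic shows occupation words like "nurse" measurably closer to female terms and "engineer" closer to male terms, which is exactly the stereotype effect the study documented.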

All of which is to say: Bryson gives a damn about the ethical use of AI.

Hope you enjoy the conversation.

Appian: Are you an optimist about the impact of AI on society? There's a crowd that believes the explosion of AI will create a job-killing apocalypse. What do you make of that argument?

Bryson: Because I'm not afraid of the same thing that other people are afraid of, some people call me an optimist. But others think I'm a complete technophobic pessimist.

Why AI Governance Should Be Prioritized

Appian: What worries you about the AI future? What keeps you up at night?

Bryson: I'm seriously worried right now about a future where we cannot be ourselves, where our entire background is online, and where we can be penalized for that. So, I'm very aware of the downside of the "surveillance state".

And I don't really see a way to get around that. People are going to know who we are, what we do, what we've done, and what we're inclined to do.

Appian: I've read that AI reflects the values of our culture. That we can't get AI right if we're unwilling to get right with ourselves. How do we do that? How do we get AI right?

Bryson: I think that what we have to do, and what's becoming the most important topic, is figure out how to manage our governance of AI. How do we coordinate our actions through governance to protect individuals?

https://twitter.com/j2bryson/status/1036645253695057920

Should Algorithms Be Regulated?

Appian: So, what advice would you give organizations about AI development?

Bryson: The most important thing I'm pushing right now for business and regulators is that we need more accountability with AI, and this is doable.

Appian: But there's also a strong argument against regulating AI, because it might stifle innovation. What do you make of that argument?

Bryson: There has been a lot of smoke and mirrors around AI. As recently as two years ago, you had high-profile engineers going out there and saying that if you regulate us, you're going to lose deep learning, which is the magic juice of AI innovation.

But, now you have major tech companies like Microsoft saying that they do believe in regulation for AI.

https://twitter.com/ipfconline1/status/1029625414178807809

I think it's important to recognize that we can track and document whether or not you follow due diligence in AI development. The process is no different than in any other industry. It's just that AI has been flying under the radar.
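
What might that documentation look like in practice? As one hypothetical illustration (the field names and model name below are invented, not an industry standard), a team could log a provenance record for every model it ships:

```python
import datetime
import hashlib
import json

def provenance_record(model_name: str, training_data: bytes, notes: str) -> dict:
    # Hashing the training data pins down exactly what the model saw,
    # so "we trained on X" becomes a verifiable claim, not an assertion.
    return {
        "model": model_name,
        "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "training_data_sha256": hashlib.sha256(training_data).hexdigest(),
        "reviewer_notes": notes,
    }

record = provenance_record(
    "credit-scoring-v3",                  # hypothetical model name
    b"applicant_id,income,outcome\n...",  # stands in for the real dataset
    "Checked label balance across protected groups before training.",
)
print(json.dumps(record, indent=2))
```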

We Need AI Transparency More Than Ever

Appian: What's the danger of a lack of transparency in the use of AI? What's the risk?

Bryson: Right now, a lot of people are releasing code, and they don't even know which [code] libraries they're linked to, where they got those libraries from, or whether they've been compromised or have back doors.

So, we just need to be more careful. It's like the [Morandi] bridge that collapsed in Italy last August. When you don't know how good the materials were that went into a construction project, or whether shortcuts were taken, then you can't really say how strong your bridge is.

Today, we have laws about that, with respect to bridges and buildings. But if you go back a few centuries, anyone rich enough could construct a building anywhere they wanted to.

Now, if you want to put up a building, you've got to go before the planning commission. Your architects have to be licensed. And all that stuff happens because buildings really matter. If they aren't constructed well, they fall down and kill people.
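
Bryson's point about untracked libraries suggests a concrete first step: you can't vouch for code whose dependencies you can't even list. Here's a minimal sketch in Python (checking those packages against a vulnerability database would be a separate step, using a dedicated auditing tool):

```python
# Inventory every Python package installed in the current environment,
# so you at least know which libraries your code is linked against.
from importlib.metadata import distributions

inventory = sorted(
    (dist.metadata["Name"] or "<unknown>", dist.version)
    for dist in distributions()
)
for name, version in inventory:
    print(f"{name}=={version}")
```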

Can You Stand Behind Your Software?

Appian: So, you think software should be treated the same way?

Bryson: It didn't use to be [the case], but suddenly it is. So, we should go through procedures to make sure that the innovations we make are sustainable.

We should be able to prove that we can stand behind our software. And we should be held accountable for what our software does.

Why Due Diligence Matters

Appian: Holding companies accountable for the software they create, that's an interesting notion. But how would that work in the real world?

Bryson: So, I heard a really great story along these lines. Almost every car on the road has some level of AI in it. And one of the things that AI does is handle cruise control and help you stay in your lane.

Unfortunately, a man in Germany had a stroke while driving. And it was the kind of stroke that left his hands on the wheel and his eyes open.

And so, a lot of AI looks to see if you're falling asleep at the wheel. In this case, the car thought that the driver was okay. So, the AI maintained his lane and kept the car going straight. And it ended up in a horrible accident.

Prosecutors looked at the car company to find out what they had done wrong. But the company was able to show that they had followed best practices, and convinced the prosecutors that there was no case to bring.

Appian: So, what's the lesson in that for business leaders who have embraced AI, or are thinking about embracing it?

Bryson: I don't think a company that has "move fast and break things" as its mission is going to be able to prove they did due diligence when bad things happen with their software.

Read Part 3 of our conversation with Bryson.