Why humanity is essential to the future of artificial intelligence

JUDY WOODRUFF: Now another new addition to the NewsHour Bookshelf.

Tonight’s focus is the brave new world of artificial intelligence.

Jeffrey Brown has that.

JEFFREY BROWN: Is it man against or with machine? Do machines, robots help us, replace us, hurt us?

Robots are being built and seeping into more and more of our lives, but how much are their value and impact understood and accounted for?

Such questions are part of a new book that looks back at decades of advances in artificial intelligence and robotics. It’s titled “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots.”

Author John Markoff is a longtime science and technology reporter for The New York Times.

And welcome to you.

JOHN MARKOFF, Author, “Machines of Loving Grace”: Thank you.

JEFFREY BROWN: “Machines of Loving Grace,” it sounds great, wonderful, right? A little bit of religion, a little bit of human love, but it’s more complicated.

JOHN MARKOFF: Yes. Well, yes, and I might have put a question mark after it, because I think we can go down both paths. And we probably will go down both paths.

My point is, it’s a human choice at this point. These machines are not evolving by themselves. There are human designers.

JEFFREY BROWN: You’re coming at this as a reporter, and you’re making a distinction between machines that are replacing humans and those that are sort of helping us. Now, why is that? Give me an example. Why is that important?

JOHN MARKOFF: Well, go all the way back to the dawn of interactive computing, and the thing I noticed is there were two labs on either side of Stanford.

One was started by a guy by the name of John McCarthy. He thought in 1962 it would take a decade to build a thinking machine, to replace humans. On the other side of campus, there was a man by the name of Doug Engelbart. He invented the mouse. He invented hypertext that led to the World Wide Web.

And he wanted to use computers to augment humans. And so you have these two different philosophies. And I realized there are two different communities. And they basically don’t talk to each other. And the idea is to square the circle, to bring them together.

JEFFREY BROWN: To square the circle, bring them together and, you’re arguing, think more about the ethics. There’s a lot of thinking that goes into the making of the products or the machines, not enough into what they’re for.

JOHN MARKOFF: Yes, we’re at this interesting juncture where machines are starting to do autonomous things.

Cars are starting to drive. We’re replacing humans in certain places with systems that are robotic and artificially intelligent. And the designers need to make ethical decisions about what they imbue the software and the robots with. It’s becoming a big deal for society.

JEFFREY BROWN: Are there good examples that you see out there where there is interesting thinking going on about this, even at the level of the lab or the product-making?

JOHN MARKOFF: There is a lively debate, for example, around something seemingly as simple as an autonomous vehicle.

When you imbue the cars with the ability to make decisions about where to go, they’re going to run into situations where they have ethical decisions to make. And you’re going to have to build…

JEFFREY BROWN: You mean what to hit or not hit?

JOHN MARKOFF: It’s called the trolley problem. And philosophers have been debating it for years. You go this way, and you run over five people. You go this way, and you run over one person.

And I actually think it’s kind of a false dichotomy. As we become more distracted, the problem for the robots is an easier one, because they are always on; they never get distracted. And the robots will talk to each other, so you might not really have a trolley problem, because the robot will already know there’s another robot, or even a pedestrian or a bicyclist, there.

JEFFREY BROWN: Thinking about what robots do or don’t do, can or cannot do, means thinking about what it is to be human. Right? What is it that only humans can do?

JOHN MARKOFF: Well, I have been asked that question. What is it to be human?

And I think the nature of humanity is found in the interaction that you and I have. And it’s actually something that makes me slightly hopeful, because even though we’re being surrounded with all this automation technology, there is the possibility that that interaction between you and me might actually become more valuable. And, you know, it might work out that way. That would be great.

JEFFREY BROWN: Well, so what would be a better approach for designers today? What kind of questions should be asked as they’re designing new products?

JOHN MARKOFF: So, increasingly, it is possible to take the human out of the equation.

And if you’re in a purely capitalist system, and it’s just a question of cost, then why not? Here’s an example. There is a wonderful small startup in San Francisco called Momentum Machines that is going to make hamburgers. And a lot of people worry about this, because not only are they on the verge of automating the people who take your order, but they’re talking about taking the fry cook out in the back, too.

But the founder is not planning on doing it that way. He’s going to have a human concierge who will sort of be there to oversee the whole process, even though you will order your hamburger with a smartphone, and apparently get the perfect hamburger.

He realizes that that’s not a great job, being a concierge in that situation. So he has sort of made a deal in his planning where he will offer the people who work in his hamburger stands the ability to get an education to do something different after two years.

And that’s an example of sort of rethinking the equation, because, like, you and I, we have had the same job for our entire career. In the future, people are going to go through many jobs, and we have to retrain them.

JEFFREY BROWN: Are you arguing that this is a fundamentally new situation?

JOHN MARKOFF: Well, we are in a new situation, because A.I. technologies that didn’t work in the past, that overpromised and underdelivered, are working now. Machines are listening. Machines are seeing.

But, at the same time, what’s really interesting about the anxiety we feel right now is that you can’t just take a snapshot. You have to realize that the human population is changing very dramatically.

For example, in China, they have a one-child policy. And the Chinese population is aging. It might be the case in China that the robots come just in time, which is a very counterintuitive idea.

JEFFREY BROWN: Why would they be just in time?

JOHN MARKOFF: Because the work force will shrink, and you will actually need robots.

And at the same time, you will get this aging effect of the population. And maybe, just maybe, elder care robots will come in time to take care of us.

JEFFREY BROWN: All right, John Markoff is author of the new book “Machines of Loving Grace.”

Thanks so much.

JOHN MARKOFF: Thanks for having me.

JUDY WOODRUFF: And a postscript: You can spend some quality time with Jeff tomorrow, when he hosts a daylong live-stream report from the Library of Congress National Book Festival; 170 authors will be in Washington. Tune in from noon to 6:00 p.m. on Saturday. Find that on our home page, PBS.org/NewsHour.