2018 Isaac Asimov Memorial Debate: Artificial Intelligence

>>NEIL DEGRASSE TYSON: Welcome to the universe,
yes. Hi, everybody. I’m Neil deGrasse Tyson,
the Frederick P. Rose director of the Hayden Planetarium,
and welcome. Is this our 18th or 19th annual Isaac Asimov Panel Debate? This has become a very hot ticket in New York
City and I almost feel apologetic,
because we can’t accommodate everyone who wants to see it. We have to go to a lottery model to get tickets
out. And short of going to a bigger venue or charging
more, we’re still trying to work this out,
but you’re here in the audience now, and that’s good. So did they tell you what tonight’s topic
is? It’s a very hot topic on every frontier. We’re talking about artificial intelligence
and, are people afraid of it? Do people embrace it? Should we be doing it? Should we not be doing it? And it’s all over the news. Not the least of which, in today’s business section of The New York Times. This is a paper version of the news—
the newspaper. It’s got a, sort of an android robot
holding the national flag of China. And the title is,
“China’s Blitz to Dominate AI.” And I just came back 48 hours ago from
the United Arab Emirates, and they have a newly-established
minister of artificial intelligence. There are countries around the world that
see this and recognize it as a way to
sort of leapfrog technologies, and I think this is a,
there’s another… here it is. “China’s blitz to rule AI meets with silence
from the White House.” So I just thought I would just say that
I’m just saying. We’re trying to burn clean coal. That’s what our priorities are. [laughter]>>TYSON: But I’m just saying. [applause]>>TYSON: Don’t get me started, because… [laughter] That’s the topic tonight. We combed the country to find some of the
top AI people in the land, and we are delighted for this
mix of five panelists we have this evening. Let me first introduce to you,
who’s right on the wings, Mike Wellman. Michael Wellman is a professor of computer
science and engineering at the University of Michigan,
and he leads the strategic reasoning group. Michael Wellman, come on. [applause]>>TYSON: And… oh, I was supposed to introduce
you in a different order than that. Yeah. Yeah, I will get back to you in a minute.>>MICHAEL WELLMAN: Okay. [laughter]>>TYSON: Just talk among yourselves there,
for the… And next up we have a friend and colleague
in the astrophysics community who’s directed his attention to AI. Max Tegmark. Come on out, Max. [applause]>>TYSON: Professor of physics. [How are you], Max.>>MAX TEGMARK: Excellent, man.>>TYSON: He’s doing research in AI at MIT,
and he’s also president of the Future of Life Institute. So Max, welcome. We also have, get my order straight, here. Here we go. When I was… Nope, that’s not it. Yeah. So,
Next we have… You couldn’t do this without representation
from industry, and that’s precisely what we obtained for
this panel. John Giannandrea, come on out. John. [applause]>>JOHN GIANNANDREA: Thank you.>>TYSON: John is, he’s a senior vice president
of engineering at Google
where he leads the Google search and the Google AI teams. So we got Google in the house. Google in the house. Next, I’ve got Ruchir Puri. Ruchir, come on out. [applause]>>TYSON: Ruchir is the chief architect of
IBM Watson, and he’s also an IBM Fellow. So we got him. [applause]>>TYSON: And we’ve got Helen Greiner. Helen, come on out. Helen. [applause]>>HELEN GREINER: Thank you.>>TYSON: Helen, cofounder of the
iRobot Corporation, maker of the Roomba. The Roomba. We all know the vacuuming robot. She’s also founder of the drone company
CyPhy Works. She makes drones, now. Is that good or bad? [laughter]>>TYSON: I don’t know, we’ll find out. Ladies and gentlemen,
thank you for coming. This is our panel, everyone. Yes. [applause]>>TYSON: So, Mike, you’re a professor
at University of Michigan. So what do you do?>>WELLMAN: Well, I study artificial intelligence
from the perspective of economics. You know, economics is a social science that
treats its entities, its agents as rational beings—
ideally rational—>>TYSON: Really, really.>>WELLMAN: Artificial intelligence is the
subfield of computer science that’s trying to make
ideally rational beings. So it’s a very natural fit.>>TYSON: Can an irrational being
make a rational being?>>WELLMAN: We can do our best.>>TYSON: And so you teach a course on this. I’m just curious, how do you frame
a course around something that’s so dynamic and so changing and so
emotionally fraught with fear?>>WELLMAN: So what I do, and what one does
in teaching an AI course is you bring together the standard frameworks and representations
and algorithms, techniques that AI people have developed over
the years to address thinking-like problems
and reasoning and problem solving, decision making, learning
using very standard forms of algorithms. Now, some people are coming to it from the
emotional perspective. I sometimes have gotten
comments on my teaching evaluations that said, “I signed up for an AI course, and all I
got was computer science.” That’s what it is. It’s an engineering discipline,
and that’s the most efficient way to make progress.>>TYSON: Excellent. So, Helen, what are you about?>>GREINER: I’m all about the robots.>>TYSON: You’re all about the robots, yeah.>>GREINER: My brother was a huge Star Wars
fan when we were young, and, well,
for me, it was all about R2-D2, and I’ve wanted to build robots since I
saw Star Wars on the big screen. He had everything: character, strategy, loyalty.>>TYSON: You’re telling me Star Wars had
like a positive net effect on this world?>>GREINER: I think it had a positive net effect
on children.>>TYSON: Uh-huh. At least one here, yes.>>GREINER: Many, many. So we’ve been trying to build robots like
this, and we’ve had great accomplishments, you
know? We’ve had robots that have been credited
with saving the lives of hundreds of soldiers, thousands of civilians. We’ve got the Roomba, which,
best-selling vacuum —not robot vacuum—
but best-selling vacuum last year by retail revenue numbers, and I think a little bit
of a cultural icon, too. And so I think we’ve come some of the way,
but we’re not at R2-D2 yet. So I think some of the debate is about
where it needs to go.>>TYSON: So you cofounded the company, iRobot,
which I think was the name of an Isaac Asimov novel. Yeah. And that company invented the Roomba. Great word, by the way. Room-ba. Yeah, that’s just good. That was very good. I like words that—>>GREINER: So we asked our engineers what
we should call it first, and they said the Mark Master 2000:
the Cyber Suck. [laughter] That’s probably the best marketing dollars
that we’ll ever spend to get that name.>>TYSON: Gotcha. So that cost you money to get that name, okay.>>GREINER: Yeah.>>TYSON: Okay. Does your Roomba count as a kind of AI, would
you say?>>GREINER: I believe so. People are starting to use AI to be synonymous
with deep learning techniques. But for roboticists,
there are a lot of tools in the tool bag. Roomba runs something called behavior control, which was invented by one of my business partners at iRobot, where we have a lot of behaviors that all run in parallel. The first generation, it wouldn’t fall down the stairs, it did obstacle avoidance, it followed the walls. The latest generation, I think it was something like 13 years later, does full navigation using a camera system. So visual SLAM techniques.
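A minimal sketch of the parallel-behavior idea Greiner describes, where independent behaviors all get a vote each control cycle and a fixed-priority arbiter picks the winner; the behavior names and the sensor model below are hypothetical, for illustration only, not iRobot's actual code:

```python
# Illustrative sketch only: behaviors run "in parallel" and a fixed-priority
# arbiter picks one action per control cycle. Names and the sensor model
# are hypothetical, not iRobot's actual code.

def cliff_avoid(sensors):
    # Highest priority: never drive off a stair edge.
    return "back_up" if sensors["cliff_detected"] else None

def obstacle_avoid(sensors):
    return "turn_away" if sensors["bump"] else None

def wall_follow(sensors):
    return "hug_wall" if sensors["wall_in_range"] else None

def cruise(sensors):
    return "drive_forward"  # default behavior: always has an opinion

BEHAVIORS = [cliff_avoid, obstacle_avoid, wall_follow, cruise]  # high -> low

def control_step(sensors):
    """Return the action of the highest-priority behavior that fires."""
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(control_step({"cliff_detected": False, "bump": True,
                    "wall_in_range": True}))  # -> turn_away
```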
>>TYSON: Okay. Has your Roomba ever killed everyone? [laughter]>>GREINER: You know, we actually…>>TYSON: Wait, wait. There’s only a yes/no. No! That’s… The… I…>>GREINER: The answer’s certainly—>>TYSON: No sentence. Yes or no?>>GREINER: Certainly no.>>TYSON: Okay, thank you.>>GREINER: But we actually, you know. You know, it’s product design. You have to look at what are the ramifications it could have? Like the worst thing we came up with: maybe it goes into someone’s fireplace, pulls out the embers, and sets the place on fire. That has never happened,
and by the way, they usually have hearths that
keep a Roomba out. And screens. But there’s a lot of, you know…>>TYSON: But in the future, you’re…>>GREINER: No Roomba’s killed anyone.>>TYSON: Okay. [laughter]>>TYSON: Yeah. Ruchir. Ruchir Puri. Did I say that correctly?>>RUCHIR PURI: Yep.>>TYSON: Yeah, thank you. You’ve been at IBM for more than two decades.>>PURI: Yep.>>TYSON: And so I’m just curious. Before we get to Watson,
which you have something to say about, our earliest memories of IBM getting in this
game I think was Deep Blue, which was a chess program that beat the world’s best chess player. What made it so good in its day?>>PURI: Well, I think from the point of view,
also, I’ve dealt with optimization algorithms
for pretty much a quarter of a century. And where—>>TYSON: You said optimization algorithms.>>PURI: Optimization algorithms.>>TYSON: Yeah, so this would just… So that it can calculate as quickly and efficiently
as possible towards a goal.>>PURI: It really… There were three things that came together, actually: search algorithms, really smart evaluation criteria, and the third one is massively parallel computing. So those three things came together to really give rise to something that wowed people. It’s an application of technology. Again, algorithms coming together from three points of view to give rise to an application that’s really [broad].
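A toy sketch of two of the three ingredients Puri names, game-tree search with alpha-beta pruning steered by an evaluation function; the tiny tree and its scores are placeholders, not Deep Blue's, and the third ingredient, massive parallelism, is omitted:

```python
# Toy sketch of game-tree search (minimax with alpha-beta pruning) steered
# by an evaluation function. The tree and scores are stand-ins, not Deep
# Blue's; its third ingredient, massive parallelism, is omitted here.

def evaluate(position):
    # Placeholder: a real chess engine scores material, mobility, etc.
    return position["score"]

def minimax(position, depth, alpha, beta, maximizing):
    children = position.get("children", [])
    if depth == 0 or not children:
        return evaluate(position)
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, minimax(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:   # prune: the opponent will avoid this line
                break
        return best
    best = float("inf")
    for child in children:
        best = min(best, minimax(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# A tiny two-ply example tree.
tree = {"score": 0, "children": [
    {"score": 0, "children": [{"score": 3}, {"score": 5}]},
    {"score": 0, "children": [{"score": -2}, {"score": 9}]},
]}
print(minimax(tree, 2, float("-inf"), float("inf"), True))  # -> 3
```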
>>TYSON: So, but could Deep Blue do anything other than play chess?>>PURI: Interestingly, Deep Blue was… We have, at IBM, something called
grand challenges. And we pose these problems
to really move the field forward. Deep Blue was a grand challenge
posed to the scientists at IBM. Similar to that, actually, Jeopardy was also
a grand challenge.>>TYSON: But Jeopardy wasn’t Deep Blue.>>PURI: No. Jeopardy certainly wasn’t Deep Blue, but—>>TYSON: Yeah, but that was Watson, correct?>>PURI: Yes, that’s—>>TYSON: We’ll get to Watson in a minute.>>PURI: Okay.>>TYSON: I just want to work my way up to
that. And I think I have some firsthand knowledge
of your grand challenges. I was once invited to address
a retreat among IBM engineers where they were given cash rewards
for their innovation. Do they still do that?>>PURI: Certainly. We encourage our employees and scientists
to really get the innovations out there
and get their innovative juices flowing, absolutely, we do that.>>TYSON: Yeah. I was delighted, because each one got recognized,
they got a certificate, the CEO was there.>>PURI: Yep. We still do that.>>TYSON: I mean, it was very much taken seriously.>>PURI: Yep. We still do that.>>TYSON: Right. Very good. We’ll get back to you on that. So, John, I think I messed up your last name. Giannandrea?>>GIANNANDREA: Giannandrea. Giannandrea.>>TYSON: Oh, Giannandrea.>>GIANNANDREA: Yes, that’s correct.>>TYSON: Yeah. Giannandrea. So you represent Google on this panel,
and could you just tell me, remind us what is the game of Go,
and then tell us what AlphaGo is?>>GIANNANDREA: Sure. So Go is this ancient, Oriental board game
which is harder than chess. [laughter]>>GIANNANDREA: And the reason it’s harder
than chess— [laughter]>>TYSON: Okay, Go.>>GIANNANDREA: And the reason it’s harder
than chess is because any given—>>TYSON: Well, just to be clear, it’s a
board game. You didn’t say that.>>GIANNANDREA: It’s a board game. Yeah, sorry.>>TYSON: If you said it’s a war game…>>GIANNANDREA: No, this is a board game. It’s a board game.>>TYSON: It’s a war board game. It’s not like—>>GIANNANDREA: It’s a strategy game. [How about] that.>>TYSON: —weapons and things.>>GIANNANDREA: No. There’s only two pieces:
the black pieces and the white pieces.>>TYSON: Okay. Go.>>GIANNANDREA: And the reason it’s hard… And people have been playing this game for
2,000 years and it is highly revered in Asia
and people are paid full-time jobs to be professionals at this game. And the reason it’s hard is because
any given position on the board, there are many, many more positions that you
could take. So you can’t use brute force approaches
to figure out how to play the game.>>TYSON: So intuition has a very big role.>>GIANNANDREA: So the recent systems that
have become very, very good at this game, you could even say superhuman at this game,
because they beat the world champions, they’re doing something fundamentally new. And people look at that and use words like
intuition— which is not a technical word—and… [laughter]>>GIANNANDREA: You know, so there’s something
going on. [laughter]>>TYSON: Who invited you to this?>>GIANNANDREA: But it’s a serious issue,
because I think when people use words like that,
when a chess grandmaster is beaten by Deep Blue,
or when the world champion in Go, Lee Sedol, was first beaten in Korea,
it’s an emotional toll on that player, because they just spent their entire life
perfecting their ability to play this game. And then a machine comes along and appears
to beat them using… and the words that are used are like
creativity or intuition or that’s something I didn’t expect it to
play. And so I think that adds to the mystique of
AI when actually what’s going on is
engineering, plain and simple.>>TYSON: So brute force.>>GIANNANDREA: No. In the case of the AlphaGo system, it was
a combination of training and new algorithms to do so-called
deep learning, which I’m sure we’ll get into.>>TYSON: Okay. So AlphaGo was trained on
previous games that had been played?>>GIANNANDREA: Yes. There’s two versions of it. The one that won the world championships
was trained on all human games that it could get its hands on
and then played itself. So it basically practiced after it had learned
the—>>TYSON: How quickly could it play itself
and finish a game?>>GIANNANDREA: Well, we do it in the cloud
with thousands of computers so it could do it, you know,
thousands of times at the same time.>>TYSON: So… Okay. [laughter]>>GIANNANDREA: Very fast.>>TYSON: Very fast, okay. [laughter]>>GIANNANDREA: And then most recently there
were—>>TYSON: And it’s just in the cloud?>>GIANNANDREA: In the cloud, that’s right.>>TYSON: Up there somewhere.>>GIANNANDREA: Lots and lots of computers.>>TYSON: But the computing cloud, not the
storage cloud.>>GIANNANDREA: That’s right. The computing cloud.>>TYSON: Yeah. Yeah.>>GIANNANDREA: So recently there’s been
a version of this called AlphaGo Zero,
and the interesting thing about this—>>TYSON: So that’s an upgrade.>>GIANNANDREA: This is a later version. And what they tried to do with that is the researchers tried to see if it could learn to play Go without looking at any human games.>>TYSON: That way it would come up with stuff
on its own.>>GIANNANDREA: Yeah. And the [unintelligible] AlphaGo Zero was actually better than the ones that learned from humans, and it also plays chess very well. [laughter]
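A minimal sketch of the self-play idea Giannandrea describes, on a toy game rather than Go: the program improves purely by playing copies of itself and backing up win/loss credit, with a lookup table standing in for AlphaGo Zero's deep networks; everything here is illustrative, not DeepMind's code:

```python
# Illustrative sketch of pure self-play on a toy game (single-pile Nim:
# take 1-3 stones, taking the last stone wins). A tabular value estimate
# stands in for AlphaGo Zero's deep networks; nothing here is DeepMind's.
import random
from collections import defaultdict

Q = defaultdict(float)      # estimated value of (stones_left, move)
EPSILON, ALPHA = 0.1, 0.5   # exploration rate, learning rate

def choose(stones, greedy=False):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(moves)             # explore
    return max(moves, key=lambda m: Q[(stones, m)])

def self_play_game():
    stones, history = random.randint(1, 20), []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0            # the player who took the last stone won
    for state_move in reversed(history):        # alternate win/loss credit
        Q[state_move] += ALPHA * (reward - Q[state_move])
        reward = -reward

for _ in range(20000):
    self_play_game()

# With enough games it typically discovers the classic strategy:
# leave your opponent a multiple of four stones.
print(choose(9, greedy=True))  # usually 1, leaving 8
```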
>>TYSON: I’ll try to find other questions for you later. We’ll see.>>GIANNANDREA: It doesn’t do Jeopardy, though.>>TYSON: Okay, so that learned… So it taught itself, basically—>>GIANNANDREA: Yeah.>>TYSON: —and was not biased by
the creativity of any human game that had previously been played. And so that, you play that game against
AlphaGo—>>GIANNANDREA: Another copy, yeah.>>TYSON: And it beat AlphaGo.>>GIANNANDREA: Yeah, that’s right.>>TYSON: So it’s extra badass. [laughter]>>GIANNANDREA: Yeah. Now, games are a special thing,
because games have an objective score. And so it’s not… It’s actually a good test for this level
of the current technology.>>TYSON: So, Max, we go back. Great to have you. This is like your fifth time here in the museum. It’s not even your first Asimov panel,
so thanks for showing up again. You recently published a book, Life 3.0. It’s your third or fourth book…>>TEGMARK: Second.>>TYSON: Second? Okay. It feels like three books.>>TEGMARK: They just take so long to read—>>TYSON: That’s what it is, yeah.>>TEGMARK: —because they’re so boring. [laughter]>>TYSON: Your first book was
Our Mathematical Universe and thinking of all of the universe as,
as a simulation, basically. And we had you on the simulation panel last
year. Life 3.0, what’s that about?>>TEGMARK: It’s… Well, my day job is working at MIT doing AI
research from a physics perspective, these days. So I like to take a step back and look at
things, and if you—>>TYSON: A cosmic perspective.>>TEGMARK: Yeah. And if you look beyond the next election cycle
and all these near-term AI controversies about jobs
and stuff like that, then it’s pretty natural to ask, well,
what happens next? What happens if all these folks succeed
and ultimately make machines that can do everything we can? The earliest life that came along, I call
it 1.0, because it was really dumb stuff like bacteria
that couldn’t learn anything in its lifetime. And then I call us 2.0, because we can learn.>>TYSON: Oh, you’re referring to the evolutionary
achievements in the tree of life.>>TEGMARK: Yeah, yeah.>>TYSON: Okay.>>TEGMARK: And what comes next? I think
we should think about this, because if the only…
the only strategy we have is to say, “Hey, let’s just build machines
that can do everything cheaper than us, what could possibly go wrong,” you know? I think that’s just pathetically unambitious
and lame, you know? We’re an ambitious species, Homo sapiens. We should aim… we should aim higher. We should say like,
“How can we use all this technology to empower us;
not to overpower us?” [applause]>>TYSON: Okay. We’ll need more of that, I’m sure,
as this conversation progresses. Let me get back to Mike. Mike, does… Could you remind us about the Turing test,
what that is?>>WELLMAN: Sure. Alan Turing, back in 1950—>>TYSON: The movie The Imitation Game
is a biopic on him.>>WELLMAN: Right, and it does depict the Turing
test a bit. So back in 1950, he proposed this thought experiment, realizing that, to try to get people to understand
machines as being able to think would require defining thinking,
and that would be very controversial. So he set up this thing that became called
the Turing test. That is,
see if you could have a machine have a dialogue with somebody and convince them that it’s a person rather than a machine. If a person in an interrogation
could not tell the difference between whether they were
speaking with a machine or a person, then you might as well say it’s thinking. This is really audacious in 1950. Think about what machines were like back then. People hadn’t even thought of word processing yet, and they were thinking about AI. That test, I think, has been very useful
as a thought experiment. The field of AI has never really generally
accepted that as the goal of AI or
the definition of AI. But certainly—>>TYSON: Is that because you’ve evolved
past that? We do have machines that sound like they’re
not machines, but people. So once you hit that goal, you say, oh, we
need a better goal. And are you just moving the goalpost?>>WELLMAN: So we haven’t hit the goal. So it turns out Turing didn’t realize that
it would be easy to fool a lot of people, even without being very good at thinking.>>TYSON: It reminds me, was it a New Yorker
comic where two dogs are at computers,
and one turns to the other and says, “The good thing about the Internet
is that no one knows you’re a dog.”>>WELLMAN: That’s right. And no one knows you’re a bot either,
and that is a potential way that AI is going to
affect us and be ubiquitous. So it is quite relevant to try to impersonate
people. But we use that as a gateway to a lot of Internet
activities. You do a CAPTCHA,
that is called a computer automated-something-Turing. I forget the exact acronym.>>TYSON: The T in CAPTCHA stands for Turing?>>WELLMAN: Yes.>>TYSON: Oh, I didn’t know that. Cool.>>WELLMAN: Or, it’s basically you have to
tell the machine—>>TYSON: We’ve all done it.>>WELLMAN: —that you’re a human.>>TYSON: Yeah.>>WELLMAN: So find something that only humans
can do. And of course, that bar keeps on moving all
the time. So it’s quite relevant to try to impersonate
for the Alexas and the Siris in the world are
trying to be as humanlike as possible. In films, we try to put
and videogames realistic characters all the time. So it still speaks to us,
even though it’s not the whole story about AI.>>TYSON: Right. So your point is we did so well with
satisfying the Turing test very early, that it just wasn’t good enough
a discriminator for the AI that people were seeking.>>WELLMAN: Well, again, I would say that being
specifically like a human is only one way to be intelligent. And you could be superhuman in many other
ways, and you don’t stop when you reach human-level performance in particular tasks,
because the goal is not to be like a human. The goal is to make ideally rational intelligence
that could do all sorts of things.>>TYSON: So, Helen, with the company
you cofounded called iRobot, could you tell us about,
what is it, the three laws of robotics, by Isaac Asimov?>>GREINER: Yeah, definitely. The robots could not… The three laws: One, the robots cannot hurt humans or, through inaction, allow humans to come to harm. Robots cannot—>>TYSON: That was one.>>GREINER: That was one.>>TYSON: So they—>>GREINER: Robots have to obey orders.>>TYSON: —cannot harm you,
and their inaction also can’t harm you.>>GREINER: Yeah. They have to obey orders,
unless it conflicts with number one. So they can’t… I’m sorry. The second one is they have to obey orders
unless it conflicts with number one. And the third one is
they cannot harm themselves, unless it conflicts with number one and number
two. And there’s one he added later on, the zeroth
one—>>TYSON: I didn’t know that.>>GREINER: —which is robots cannot cause harm to humanity or, through inaction, allow humanity to come to harm.>>TYSON: So it generalizes it up from the
individual?>>GREINER: Yeah.>>TYSON: Humanity… The…>>GREINER: Yeah. Well, he made that the zeroth law. So he stuck it in the front when he thought
of it.>>TYSON: The zeroth law, okay.>>GREINER: But what’s amazing about it is
he… he started writing the I, Robot books in 1940. Practical transistors weren’t invented till
1947, so, I mean, one of the reasons we’re all
so honored to be here at the Asimov memorial debate,
is I think we— I can speak for the panel that we’re all
huge, huge fans of what he was writing about,
especially way back.>>TYSON: Well, just consider that he’s written
about— on topics quite diverse. So no matter what subject we have here,
there are books that he wrote about it. So…>>GREINER: Yeah.>>TYSON: Every panel we’ve ever had on—>>GREINER: But AI’s a really good one for
him.>>TYSON: —on any subject is,
I read that by Asimov when I was a kid, so.>>GREINER: That’s why people ask me,
are you putting those in the robots? And the short answer is,
they’re great as a literary device. They’re a little bit more tricky to program. And so, unfortunately, the answer is
that the state of technology is not ready for those types of
abstract rules yet.>>TYSON: But they’re nice guidance;
just philosophical guidance, I guess.>>GREINER: Um, I have a very practical view. I think the laws, if you state them now, might be: robots can save people, they have saved people, and they could save a heck of a lot more people. It might be that robots…
be obeying those laws.>>GREINER: Yeah, exactly. exactly.>>TYSON: Right.>>GREINER: And the whole books were about
how those laws resulted in conflicts, right? But in reality, because I’m a businesswoman
as well as a robot lover, robots are not going to hurt people.>>TYSON: Don’t say a robot lover. That’s just… doesn’t…>>GREINER: I am. I—>>TYSON: Just find some other phrase.>>GREINER: I’m a robot enthusiast.>>TYSON: There you go. thank you.>>GREINER: Robots are not going to hurt people. They’re not going to hurt themselves. They’re not going to do these things,
because they’re going to be either scrapped, they’re going to be sent back,
or someone’s going to be sued. And so from a business standpoint,
the robots are going to be safe to operate.>>TYSON: One of my favorite… Well, no. A video that I found amusing
was a cat riding around on a Roomba.>>GREINER: You know, that got so many views,
and I have no idea why. [laughter]>>GREINER: I mean, it’s like tens of millions
of views, right? It’s crazy.>>TYSON: I mean, if the Roomba were big enough
for me to sit on, I would do that. That’s… [laughter]>>TYSON: Wouldn’t you? that’s fun.>>GREINER: That was not in our brainstorming
sessions when we thought about all the applications
for robots.>>TYSON: So, Ruchir, could you get us from
Deep Blue to Watson? What happened in that transition? And if we can remind people
why we all know about Watson, there was the big contest that you guys entered
it in.>>PURI: Certainly. And let me pick up the thread from,
from the chess and the Go and the, you know. Let’s… [laughter]>>PURI: Let’s make this—>>TYSON: Okay, continue.>>PURI: —interesting, actually.>>TYSON: Continue.>>PURI: [Just finally there are]—>>TYSON: By the way,
Deep Blue beat Kasparov when Google had 10 employees, okay?>>PURI: True. That’s true.>>TYSON: So just, like, where were you?>>PURI: That’s true. Absolutely true.>>TYSON: Okay. Did I get you on that one?>>PURI: Yes, [thank you]. [laughter]>>TYSON: Okay. Take us there.>>PURI: So the journey continues from the chess game that beat Kasparov to, okay, what is next? And obviously natural language—>>TYSON: Kasparov was the
world champion at the time.>>PURI: Yes, at the time. And natural language, which is so fundamental
to humans, actually. And the intricacies of natural language,
as we’ve been… At least there’s one fundamental trait
that humanity has, which is just the proliferation of language,
the advent of language itself. So we decided that will be the next leap that
we are going to make. And there is no game better than Jeopardy
that captures the intricacies. So we posed that as a grand challenge, and—>>TYSON: Jeopardy, not only language, but
culture.>>PURI: But culture, right.>>TYSON: Yeah. It’s not a calculation anymore
in a traditional sense.>>PURI: It is certainly not a calculation
anymore, and the way the questions are posed are so
nuanced that you really are dealing with,
at this point in time, not just a calculation machine and simple
evaluation criteria and search algorithms and parallel computing,
but really understanding language, questions and answers,
and the way we interact as human beings. So that was really the advent of the next
challenge. Because once we are able to solve that,
the implications are phenomenal in terms of the benefit it can bring to
us as a society, which is where we took that level to. The first thing that we started right after
Jeopardy was the applications of that technology to
the health domain, which is so fundamental to all of us. So right from the chess game, the next challenge is really addressing the fundamentals of what defines us as humans in terms of communication, addressing those intricacies, and then applications of that abound.>>TYSON: To serve needs in actual society.>>PURI: Yeah, absolutely.
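A toy sketch of the evidence-retrieval flavor of question answering that the Jeopardy challenge demanded: rank stored passages against a clue by rarity-weighted word overlap and return the best-supported answer; the passages and the scoring are illustrative stand-ins, nothing like Watson's actual pipeline:

```python
# Toy sketch of evidence retrieval for question answering: rank stored
# passages against a clue by rarity-weighted word overlap (a crude TF-IDF)
# and return the best-supported answer. Passages and scoring are
# illustrative stand-ins, nothing like Watson's actual pipeline.
import math

passages = {
    "Isaac Asimov": "Isaac Asimov wrote the Foundation series and I, Robot "
                    "and formulated the three laws of robotics",
    "Deep Blue": "Deep Blue was the IBM chess computer that defeated "
                 "world champion Garry Kasparov in 1997",
}

def overlap_score(query, text, corpus):
    words = text.lower().split()
    score = 0.0
    for term in query.lower().split():
        tf = words.count(term)                      # term frequency
        df = sum(term in doc.lower().split() for doc in corpus.values())
        if tf and df:
            # rarer terms across the corpus count for more
            score += tf * math.log(len(corpus) / df + 1)
    return score

clue = "this IBM computer beat Kasparov at chess"
best = max(passages, key=lambda a: overlap_score(clue, passages[a], passages))
print(best)  # -> Deep Blue
```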
>>TYSON: So Watson, in principle, can become the best doctor ever, because Watson can read all the research papers
and then interpret symptoms in the context of what is known worldwide,
rather than just what one doctor happened to learn.>>PURI: Absolutely. And, at least the way we think about it is
really not— It’s not about, does it become the best
doctor, but, as we all know, no physician, no single physician, has the time, even if they certainly have the intelligence to figure all of this out, they don’t have the time to do it. And as Max was saying,
it’s really about empowering professionals rather than overpowering them. And really, Watson is about empowering
society, as opposed to overpowering it, and that’s why I really think about it as bringing capabilities whereby, yes, it can read millions of
studies and millions of trials that may be going on,
and there are some well-publicized cases as well where it has actually saved patients either in North Carolina or Tokyo
or a study that was published more recently in India as well. But it’s, from our point of view,
it’s really about bringing the technology together with the
human beings in what we call augmented intelligence.>>TYSON: So in all fairness to our understanding
of this, Watson only knows what is available on the
Internet, correct?>>PURI: Yeah. Watson only knows what is being fed to it. Let’s say it that way. Whether it is available on the Internet or it is private information…>>TYSON: So how does Watson know what is fake…
news or not? [laughter]>>TYSON: You can make a super machine that
cannot distinguish the two. Well, apparently humans can’t either, but… [laughter]>>TYSON: In principle, we—educated—can
make a judgment. Will Watson be in a position to make that
judgment?>>PURI: I think, at least regarding fake news, we are all pushing the boundaries of that technology, and yes, the machines need to be trained, and they can really help us, given what has gone on in the last couple of years. Once you bring that technology to bear, in terms of realizing there is a problem, you can actually correct for it. So it’s not about whether Watson can distinguish
it today or not. Once you realize the problem,
you can actually start working on technologies that can start deciphering that much better,
thereby helping us, as a society, understand what is going on around us.>>TYSON: So, but that… From what you’ve described, Watson would
still be shy of this Holy Grail of just thinking stuff up
on its own without reference to… I mean, when you think of the most creative
people there ever were, sure, there’s some foundation from which you could trace that creativity. But for many of them, there’s a spark,
and something new comes out of them that had no precedent. So, from what you described, Watson is capable
of digesting preexisting knowledge,
but in its current state, or at least the state we’re familiar with,
it is not inventing something new.>>PURI: Yeah. Certainly, the purpose of the technology today is really not about that spark in itself. Although it will find insights that you didn’t know existed. They were hidden in there, and you didn’t know they existed, so it may be an ah-hah moment for you, I got it, but still, it existed there. So it will actually do that. But, yeah, the notion you are describing, hey, that was a spark, no, it doesn’t have that.>>TYSON: So, John, tell me about the future. We could spend a whole panel on this,
but I just want to put it on the table briefly: What’s the role of AI in the future of autonomous
cars? And I know you guys are working on this.>>GIANNANDREA: Yeah, we are.>>TYSON: You entered certain autonomous car
chal— You, your company.>>GIANNANDREA: Yeah, we have a division of
Alphabet that works on this.>>TYSON: Just to be clear, the holding company
is Alphabet—>>GIANNANDREA: Yeah.>>TYSON: —and Google is one of several companies
under Alphabet—>>GIANNANDREA: That’s right.>>TYSON: —and one of those companies was tasked with making the autonomous car.>>GIANNANDREA: One of those companies is called
Waymo, and they’re one of many companies that are
making autonomous cars. So it’s a super hard problem. I think people have been working on it seriously
for more than a decade. They’re making progress. These cars have driven millions of miles
with very small numbers of incidents, but they’re still pretty constrained. They’re more accurate than a human driver,
but they’re limited in where they can go. So for example, the kinds of streets that
they can drive on, the cities, and so on and so forth. But the technology is progressing fairly dramatically. I’m pretty confident to say that we will
have fully autonomous cars for most of the large car manufacturers within
a decade.>>TYSON: And what role does AI play in that,
or is it just really good programming?>>GIANNANDREA: Well, it’s machine learning. So, you know, these systems have a lot of
computers on the car, they can detect a stop sign or can figure
out that there’s an impediment in the road or a kid just ran
into the road or there’s a cyclist. In California, we have this weird thing
where motorcycles are allowed to drive between the lanes—>>TYSON: You have many weird things in California.>>GIANNANDREA: We have many weird things in
California.>>TYSON: Uh-huh.>>GIANNANDREA: But motorcycles are allowed
to drive between the lanes of the cars,
and so for the computer to actually understand what’s going on
and figure out what’s safe and not safe is actually quite hard. I think one of the things that’s going to
happen here is even if you don’t see millions of autonomous
cars like in three years, most of the new cars that you buy will have
semiautonomous features in them, like automatic braking or telling you what
the speed limit is.>>TYSON: Which we’re all accustomed to and expect on our next cars now.>>GIANNANDREA: Yeah, yeah. So I think this technology kind of
comes in increments. It’s not like a big-bang thing, you know? And I’ll just echo this comment about augmentation,
because the phrase AI, it means so many different things to so many
different people that it’s really hard to kind of pin down
what it is. But the idea of augmented intelligence has
been around for a very long time. A lot of the ideas we have in computing today
came from the work of Doug Engelbart back in the ‘50s,
and he had been describing computers as being a tool;
a tool that can help a doctor look through more information,
that can help pinpoint something in an x-ray. Not something that would replace the doctor,
and that’s how we think about it.>>TYSON: Which is Max’s point. Not be… Just… What’s the two words you put together?>>TEGMARK: Oh, empowered versus overpowered.>>TYSON: Overpowered, yeah, that’s right. Very good. I like that. Can you describe for us what’s the difference
or what is the ascent from AI to general AI? Because we hear this term general AI—>>TEGMARK: Yeah.>>TYSON: And what’s going on there? What have we been talking about so far,
and if it’s not general AI, what is?>>TEGMARK: It’s really important to be clear
on what we mean by intelligence. As you mentioned correctly, John,
different people mean different stuff, right? I think it’s a really good idea to go into
the footsteps of Helen, here, and make a very broad definition of intelligence. So even Roomba is intelligence. And just define intelligence simply as
the ability to accomplish complex goals, you know? So Roomba has a very narrow intelligence:
really good at vacuum cleaning. Today we have a—>>TYSON: Was that a diss on…>>TEGMARK: I am a proud Roomba owner. And we…>>TYSON: Roomba can carry cats around, okay?>>TEGMARK: Yeah.>>TYSON: For all we know
the Roomba is like the Uber for cats in the house, okay? [laughter]>>TEGMARK: Yeah. So…>>TYSON: Wouldn’t that be cool if cats could
like, get the Roomba to come and take them around? Get the Roomba to open a door for them, yeah?>>TEGMARK: That’s right. So today, we have many areas… So if you define intelligence as the ability
to accomplish complex goals, then there are many areas today where machines,
in narrow domains, are already much better than us. Not just vacuum cleaning and high frequency
trading and multiplying large numbers together and
stuff like that, but also now in playing chess and playing
Go and so on. But no machine today that we’ve built—>>TYSON: No single machine.>>TEGMARK: No single machine, not even the
whole Internet combined has the broad intelligence of a human child,
who, given enough time, can get quite good at anything. So this is what’s meant by artificial general
intelligence, or acronymed AGI,
which has been the Holy Grail of artificial intelligence ever since
Marvin Minsky and John McCarthy and others founded it, came up with the whole… founded the field in the ’50s. And now—>>TYSON: But, Helen, you come to this from
a product— a consumer product point of view. And I want to get back to what you just said. People who are making AI want to sell something. So they’ll sell you something that cleans
the room, that drives the car, that does any one of the things that help
our lives. Who’s going to buy something that has general
intelligence?>>TEGMARK: Well, everybody.>>TYSON: And will the general intelligence
be as good at the pieces of it as the specific products that
industry would be making for that one need that you have?>>TEGMARK: Oh yeah, by definition. So if people say that they think that machines
will never be able to— there will always be jobs left for humans—
they’re just saying, by definition, that AI researchers will fail to build artificial general intelligence,
that machines can do everything better than us. And many people… Like I have had many conversations where you’re—>>GREINER: I’d like to point out—>>TEGMARK: Yeah. Let me just finish my sentence.>>GREINER: —there’s a mechanical and a
sensing component, as well as what you’re calling AGI, that’s needed to make these machines better.>>TEGMARK: Sure. But anyway—>>TYSON: Oh, good point. So you can have software, but if it doesn’t
have the physical means to enact what it’s supposed
to, it’s just a box.>>TEGMARK: No, no. It can do some great stuff. Like you could feed it a photograph,
and it could tell you if you have breast cancer or something like that, right? But it’s not going to go out and sweep your
house.>>TEGMARK: Yeah. But I think the final word on definition should
go to Shane Legg, one of the leaders of Google DeepMind, because
he coined the phrase, and he simply meant
something that can do the same information processing
that the human brain can do. And if you hook it up to good enough robots,
which I’m sure we can build, then it can do great stuff. And so that’s the goal of certain companies,
Like Google DeepMind, for example, to try to build that. And that’s why they keep trying to push
the envelope, right?>>TYSON: Wait. But I gotta go to my industry people. What does it mean to buy something that has
general AI? What do I do with that? Do I say, make me the best cup of coffee,
drive me to my office, what’s the square root of two,
and… I mean, in practice, is that a thing?>>GIANNANDREA: So in principle,
and this is highly speculative, but in principle, an AGI could build any kind
of other AGI, and therefore,
could build you any machine you want it to build. And that’s when people worry—>>TYSON: That’s when we all die.>>GIANNANDREA: No, no, no, no. That’s when a class of people who call themselves
transhumanists would say that humans would evolve. And I personally don’t believe in this. I see no evidence that it’s going to happen. But that’s the source of a lot of the ethical
discussions—>>TEGMARK: Right. Exactly.>>GIANNANDREA: —about this topic.>>TYSON: Mike, speaking of ethics,
could you tell us about the trolleyology
and what role AI can play in assisting our reasoning there?>>WELLMAN: So probably many of you have heard
about trolley problems. This became popular
in psychology to pose ethical dilemmas to people and see
how they react. And there’s many variations of it,
but the standard kind of story is a trolley is going down a track and it’s
about to hit or kill three people,
and then you notice that there’s a switch, and you could make it go over to another track
where there’s only one person. And you could choose to kill that other person
instead of the first three. Would you do it?>>TYSON: Wait, wait. So the dilemma there is
somebody’s going to die no matter what. You can either not touch it, in which case the trolley kills three people on its own,
or you can intervene and actively kill one person.>>WELLMAN: Right. Now—>>TYSON: Right.>>WELLMAN: —I’m not a psychologist, but
I think it… It seems to be kind of a silly question to ask people, because humans can really never get, I think, into a mental state where they could really believe, with certainty, that if I take this action, I’ll kill this one
person for sure, and the other action… There’s always this uncertainty. There’s always questions about what the
blame… It’s not actually a realistic situation. So the question is, for AIs, is it actually more realistic, perhaps? Could an autonomous vehicle be in a situation
where, all of a sudden, a bicyclist runs in front of it, and it has a chance to swerve and do some other damage, and will it have to weigh that?>>TYSON: You would have to take out the vegetable
cart first and then find out what else it does, yes.>>WELLMAN: Yeah. And, you know, so will the solutions to those dilemmas have to be coded into them? When it does happen—>>TYSON: Wait, wait. Wait. So that implies that humans get together,
figure out a solution, and you hand it to AI. But that’s not the point of AI. The AI is going to have some higher intelligence
than we do, and that’s why I’m curious.>>WELLMAN: So I actually—>>TYSON: If you bring AI to that problem,
is it going to give different answers than we would,
and then we say, oh my gosh, we never thought about it that way. Let’s do it that way.>>WELLMAN: So I think this is [unintelligible]
that’s going in some of the session, here. Actually, no. AI, the idea is we want to give—
for the humans to give the AI the values, and the AI is concerned with
making decisions and taking actions to promote those values. So ultimately, we are saying…
you know, we value life, we value… That’s part of what the robot laws are for.>>GREINER: There are no robot laws. They are science fiction.>>WELLMAN: And so the danger is
that they would be weaponized by the party that is programming them and is
controlling them; not that they’re going to all of the sudden
decide to get rid of the humans. That’s not the source of the danger. With respect to the trolley problem situation
in this hypothetical autonomous vehicle, when it does happen that
a car, one of these cars runs over a bicyclist— and it’ll happen, I think, much less frequently
than humans do it today— we’ll take the black box—
I hope they’re engineered so that they have a black box that
captures everything that was in their senses all the time
and it’s very secure, so they can’t lie about it—
and they will be able to dissect it and will say,
“You made this decision. Why did you do that?” And it might say,
“Well, I hit the bicycle, because if I swerved to the left,
I would have run over a child.” Or if it said, “Well, I did that, because
if I swerved too fast I’d wake up the passenger,” then you’d
say, no, that was the wrong decision. [laughter] That was not what we meant for you to do. It’s still better than what the Tesla said
a couple years ago—>>GREINER: Yeah.>>TYSON: Yeah. If I made it say,
don’t wake me up for any reason—>>WELLMAN: That’s right.>>TYSON: —and it’s the robot’s job to
obey me…>>WELLMAN: Exactly. That’s exactly an answer… So this is part of the danger of AI is that
the unintended consequence of the specification of the values
won’t hit what you really care about.>>TYSON: Let me ask Google and IBM, here. In your efforts in this, I don’t want to
call it a race, but let’s call it an exploration,
is there a tandem sort of ethical group? Let me start over here in IBM. Is there a… Is anyone thinking about the ethics of what
AI would do if you achieved this goal? Because we certainly have sci-fi movies,
and none of them… It never ends well, okay? In any of them. Any of them.>>PURI: Yeah. So certainly,
we were one of the first companies to actually bring principles of
ethics and responsibility to AI. It’s captured sort of [bold] ways in what
we do overall on the information we have. But most importantly,
there are three fundamental tenets we go by as it pertains to AI. One is building AI with responsibility. The second one is building AI that’s unbiased. And the third one is building AI that’s explainable. I think those are the fundamental tenets
that we drive and strive towards, and we have, in our research teams, a significant number of people and scientists and experts who really try to drive the AI services that we offer, and the solutions that we build with a tremendous number of businesses, with those three principles. And obviously, I think we all know
the way AI techniques work these days, they are driven a lot by the data. And you are as good as,
as we were just discussing before, you are as good as the data that you are fed. And detecting bias in the data itself
is actually one of the more important research and technical challenges. And having techniques that are able to de-bias that data as well, so that when you are learning, you either know that there is bias in the data or are able to de-bias it, so that you can build models that are actually unbiased. So that’s why I said there are three fundamental principles that we go with, principles that are very formal and ingrained in the way we are driving AI.
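A minimal sketch of the kind of bias check Puri alludes to: compare positive-label rates across groups in training data, then derive crude reweighting factors; the toy records and the reweighting rule are hypothetical, for illustration only:

```python
# Illustrative sketch of a simple dataset bias check: compare positive-label
# rates across groups, then derive crude reweighting factors for the
# positive examples. The toy records and the rule are hypothetical.

records = [  # (group, label) pairs standing in for real training data
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def positive_rate(group):
    labels = [label for g, label in records if g == group]
    return sum(labels) / len(labels)

rates = {g: positive_rate(g) for g in ("A", "B")}
print(rates)     # {'A': 0.75, 'B': 0.25} -> a disparity worth investigating

overall = sum(label for _, label in records) / len(records)   # 0.5
# Weight positive examples so each group's effective rate matches overall.
weights = {g: overall / r for g, r in rates.items() if r > 0}
print(weights)   # {'A': 0.666..., 'B': 2.0}
```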
>>TYSON: Speaking of bias, John, if I remember correctly,
where Google facial recognition software
was not as good at identifying black people as it was with
white people. And then they found out that just
white people programmed it, so that’s… [laughter]>>TYSON: So, um… So maybe that’s just kind of obvious at
that point. But that would, I think, count as a bias.>>GIANNANDREA: I was actually at lunch with
one of the authors of that paper today. They haven’t actually measured our systems. They measured other people’s systems. But it’s a serious issue, and I think that—>>TYSON: So it wasn’t your facial recognition?>>GIANNANDREA: It wasn’t ours. But this issue of bias and machine learning
is super important.>>TYSON: I’m sorry to have implicated you.>>GIANNANDREA: No, no. That’s okay. It’s okay. So we think that this is,
at least for the next few years, the most serious ethical issue. I think this AGI stuff is years, decades away,
so I don’t spend very much time on this. But this question of you’re building learned
systems, machine learned systems learning from data,
if your data is biased, you’re going to build biased systems. And this could be everything from
whether to give somebody a mortgage or what their credit score prediction would
be, or there are people selling systems now that are used by courts to predict recidivism rates. And they’re not explainable,
and it’s not entirely clear what data they used to train them,
and we think this is just unethical.>>TYSON: So it’s garbage in; garbage out,
at that level.>>GIANNANDREA: Yeah. And so—>>TEGMARK: And we know that one was very biased,
yeah.>>GIANNANDREA: Yeah. So many of our companies work together
outside of the commercial realm with academia, but also in nonprofits
looking at this question, because we’re really worried about building systems
that give a bad name to all this machine learning.>>TYSON: So in all of your efforts,
how would you characterize the, sort of the ethical
dimension of what’s going on? You have people… Are they philosophers, are they psychologists, what are they?>>GIANNANDREA: No. They’re usually data scientists and researchers
who are looking for systemic bias in the systems and the data that we’re using to train the
systems. But we have significant efforts with [crosstalk].>>TYSON: Okay. So I get the bias part,
but how about the trolley car part where we… Will the AI have the values we care about if it is to properly serve us? If the AI achieves consciousness
and then comes up with values of its own…>>GIANNANDREA: I mean, our company has very
few situations, autonomous vehicles would be one,
where we have to actually struggle with these issues. Mostly, we’re worried about recommendation systems giving bad recommendations to people, or ranking systems giving bad results to questions that you ask.>>TYSON: But this is moving fast as a field.>>GIANNANDREA: I think as a field it’s moving
fast, and I think academia has now got entire classes on AI ethics and machine learning ethics. And I think society’s responding in an appropriate
way, because we’re worried about this stuff.>>TYSON: So, Max, you’re president of the… Tell me the name.>>TEGMARK: Future of Life Institute.>>TYSON: Future of Life Institute. Sounds very New Age-y, by the way.>>TEGMARK: Well, future of life; we’re for
it.>>TYSON: Okay.>>TEGMARK: We would like it to exist.>>TYSON: Not a controversial—>>TEGMARK: You would think.>>TYSON: Put that on Twitter,
and then people would argue with it for sure.>>TEGMARK: You would be surprised, yeah.>>TYSON: So could you tell me the
difference between an “is” and an “ought” philosophically, and how that matters in AI?>>TEGMARK: Yeah. You know, it basically comes—>>TYSON: Was it Hume who did this?>>TEGMARK: I believe so, yeah.>>TYSON: But one of the philosophers, yeah.>>TEGMARK: I think so. It basically comes down to, you know,
saying that might makes right is a really lousy foundation for morality. Just because something is in a certain way
doesn’t mean that’s the right way, and just because by default something is going
to happen if we don’t pay attention to it
doesn’t mean that’s what we really want to happen. You know, I’m very optimistic that we can
use AI to help life flourish like never before,
if we win the race between this growing power of AI that we’re seeing,
and the growing wisdom that we need to manage it. And there, I feel we’re kind of a little
bit asleep at the stick. You said here—sorry to pick on you, John—>>TYSON: Well, I don’t want any AI person
to say we’re asleep at anything.>>TEGMARK: But I have to pick on you a little
bit, John—>>GIANNANDREA: Pick on me.>>TEGMARK: —because you said,
“Well, you know, I think this AGI stuff is kind of decades away,
so I’m not thinking about it much.” But I bet you wouldn’t say, “I think this
climate change stuff is a few decades away, so I’m not thinking about it,” right? You look young and healthy,
you’re working out, taking your vitamins, you’re going to be around then, right? And if it’s going to take a few decades
to get this right, it feels really important right now to think
about it enough that we can—>>GIANNANDREA: I totally agree.>>TEGMARK: —steer things in a good direction.>>GIANNANDREA: What I said was I don’t spend
very much time at Google
with researchers on this task. But we do invest in groups around the world
at Oxford and Berkeley and other places who are looking at this stuff.>>TEGMARK: Yeah. And you’re a member of the Partnership on AI, which is awesome.>>GIANNANDREA: Yeah. It’s not that we’re abdicating responsibility. It’s just, we have no idea what the
timeline is. We do know what the timeline is for global
warming.>>TEGMARK: Yeah, and—>>TYSON: Well, if anyone knows the timeline
of this, it would be you, presumably.>>TEGMARK: Well, I think also we do know
quite a bit about the timeline. First, we know there’s great controversy. And your cofounder, Rodney Brooks, told me
in person he thinks DeepMind’s quest for AGI isn’t going to succeed for at least 300 years, right? But most AI researchers in recent surveys
think it’s actually going to succeed, you know, maybe in 40 years, maybe in 30 years. So that, to me, means it’s not too soon
to start thinking hard about
what we can do now that will be helpful.>>TYSON: I get it. But I want to get back to the point of,
there are things that are, and there are things that ought to be.>>TEGMARK: Yeah.>>TYSON: Do you trust AI
to judge what ought to be?>>TEGMARK: No.>>TYSON: Or is this… Okay, good.>>TEGMARK: I could give a longer answer, too. [crosstalk]>>TYSON: And how do you
imbue what ought to be in an AI, if an AI is a higher level of consciousness and capacity than we are? Maybe it knows better than we do.>>TEGMARK: Yeah. But people often tell me,
if AGI is by definition smarter than us, why don’t we just let it figure out morality; what ought to be. But the fallacy in this, of course, is that,
you know, artificial intelligence and technology in
general is not good or evil. It’s morally neutral. It’s a tool that can be used to do good
or to do evil. Intelligence itself
is simply the ability to accomplish goals, good or bad, right? If Hitler had been more intelligent
I think that would have sucked, right? So for that reason I wouldn’t want to delegate to him what we should do. Instead, we should take the attitude we take
when we raise kids. We often raise children who are—
end up being more intelligent than us. We don’t just ignore them for 20 years and
hope they… something good comes out of it. [laughter] We really try to… While they’re still young enough that they
listen to us a little bit, right?>>TYSON: A little bit; little bit.>>TEGMARK: We try to instill in them
values that we think are good. And I think this is linked back to what you
were saying about let’s teach morality to machines.>>TYSON: You said in the next 20 years we
still have a chance to teach AI who and what we are
so that when it achieves consciousness, it will not exterminate us. [laughter]>>TEGMARK: Well, it’s even harder, though,
than raising kids.>>TYSON: It’ll keep us around as pets. [laughter]>>TEGMARK: It’s tough though, because—
sorry if I get a little nerdy, now— but with children,
we can’t teach them morality when they’re six months old,
because they just don’t get it. And like with my teenage son,
it’s kind of too late, because they don’t listen to me anymore. [laughter]>>TEGMARK: But there is this magic window
we have over a few years
when they’re actually smart enough to understand us
and still, maybe we have some hope that they’ll adopt our morality. Where AI—>>TYSON: Whereas AI in that—>>TEGMARK: It has… It’s not yet reached the point where it
understands human values, because we can’t explain it
yet, but it might pretty quickly blow through this
window where it’s actually going to…
where it’s still not as smart as us and we can influence it. And we have to kind of plan this curriculum,
plan this ahead, you know? And I think it’s really good that you are
working on that, for example, so that we can… Because we don’t want to wait until after
someone has… or the night before someone switches on a superintelligence… to be, oh, how do we figure out this teaching-it-right-from-wrong stuff? That’s probably too late. [laughter]>>TYSON: So, Mike, there’s a… Probably too late, yeah. That’s certainly too late if that happened. So, Mike, I’m curious about something. The capital markets,
I don’t want to say that they rely on this, but they,
a lot of what makes them fluctuate is that different people
have different information that they’re betting on when they buy and sell stock. So if you make a machine
that has access to all information and is perfectly rational,
is that machine or the person who owns that machine the first trillionaire
in the world?>>WELLMAN: So interestingly,
Wall Street trading is one of the first areas where
autonomous agents are really out there. And I think that’s one of the reasons why
it’s useful to study long-term implications of AI by this
case study of seeing what’s happening right now. And right now, lots of firms not very far
from here—>>TYSON: This is New York.>>WELLMAN: —are programming computers and
putting their al… using machine learning and using a lot of data,
and a lot of the same data to make decisions. So one question is, well, if everyone is using the same data and maybe stumbles on the same algorithms, are there possible effects on the stability of markets? If something goes wrong, could they be more prone to crashes or not? That’s something that we’re studying.
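A toy sketch of the herding concern Wellman describes: when many agents trade on one shared signal or algorithm, their sell decisions synchronize, and majority sell-offs become far more likely than when views are independent; the market model here is entirely hypothetical:

```python
# Toy sketch of algorithmic herding: when many agents act on one shared
# signal, their sells synchronize and majority sell-offs become far more
# likely. Entirely hypothetical market model, for illustration only.
import random

def crash_frequency(num_shared, num_independent, trials=10000):
    """Fraction of periods in which a majority of agents sell at once."""
    agents = num_shared + num_independent
    crashes = 0
    for _ in range(trials):
        shared_signal = random.gauss(0, 1)
        sells = num_shared * (shared_signal < -1)   # the herd acts as one
        sells += sum(random.gauss(0, 1) < -1 for _ in range(num_independent))
        if sells > agents / 2:
            crashes += 1
    return crashes / trials

random.seed(1)
print("diverse:", crash_frequency(num_shared=2, num_independent=98))   # ~0.0
print("herding:", crash_frequency(num_shared=80, num_independent=20))  # ~0.16
```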
And if so, are there things that we could do to try to mitigate that? The question you asked about the first trillionaire
is if one group, one firm, one country
has an edge in AI, will they be able to
then leapfrog everybody else and just suck up all of the resources? That’s actually a significant issue. Financial markets are one place where the money is, and if you really get it so much better than
everybody else, there could be major shifts in distributions
of wealth. And it’s not only financial markets. It could be the Internet. You can put smart AIs out there and say,
“Find some way to make money for me,” and they will. So the headline, “China’s Blitz to Dominate AI,” is what you’re showing.>>TYSON: Right. So you’re saying a country can just corner
the market if they get there first.>>WELLMAN: So this is somewhat, I think,
ill understood and controversial, but certainly in this longer road to
more general, more capable AI, if one entity has a significant edge,
they will have a very strong incentive to shut others out
and to capitalize on that advantage. And so there’s, no doubt, there’s an arms
race dynamic to many aspects of artificial intelligence
technology that perhaps is most frightening in the military
realm, but also comes up in financial realms and
other ways. It’s in the fake news realm. We were talking about whether AI is going to be better at detecting fake news. Never mind that. They’re going to be much better at promulgating
fake news, and that’s going to be a challenge for all
of us.>>TYSON: This could go to any one of you. It could go to Helen. Helen, what… Do you foresee robots or AI in general
informing political policy? Because if they can analyze—
Look at Watson. Watson reads a thousand medical papers and comes up with some conclusions based on them. So you make machines, you make drones, that can make decisions that we can’t, and they can make them more quickly,
and presumably, better. So is there a scenario where,
here are political factions arguing, because, really,
their feelings are involved more than facts. And at the end of the day,
in an informed democracy, you kind of want facts to matter… I would think. [laughter]>>TYSON: Just, I would just…>>GREINER: We are—
and I’m a little bit on the other side of it—
we are very far away from this AGI, generalized AI,
and there’s wonderful progress being made that allows
AI systems and robots to do more than they could do before
in recognition and characterization, but we haven’t made that leap,
and it’s going to take an innovation step to get there. So to really worry about that now,
I mean, right now, the machines are feeding information into
the system, and humans are making the judgment. Now, I believe that day will come,
but it’s unpredictable, because innovation steps would have to happen before that day comes.>>TYSON: Okay, so it’s not… Because in innovation, you can’t order up
an innovation.>>GREINER: Yeah. You don’t know when it’s going to happen. Hopefully some of the younger people in the
audience will make those innovations,
because I think we should have it happen.>>TYSON: So, Ruchir, it just seems to me,
given that Watson might be uniquely qualified to come up with a political policy decision,
if it reads every consequence of every political decision that’s ever been made,
looks at what became of it, looks at how people reacted,
looks at what people wanted, and then just said, “You should do this.” So there should be maybe a machine
on the floor of Congress and people come up to it and ask it, right? It would be like the oracle of Congress. [laughter] It could be Watson, right? Let’s check… I’m arguing in the dining room with my
political colleague from across the wall— across the hall, uh, the aisle—
and we say, “Let’s go check Watson.”>>PURI: Are you telling me to print posters,
Watson 2020? [laughter]>>TYSON: Watson AlphaGo 2020, yes. Uh-huh.>>PURI: So first of all, I think let’s take
the question— precisely the question you asked. Could AI be helping public policy? And to that, I’ll answer, absolutely yes.>>GREINER: Yep.>>PURI: It could be helping public health
policy, it could be helping public policy as it pertains
to decisions that are within the country as well—
whether it is taxation or other scenarios— absolutely yes. And it already is, actually. So I will not just say it should be helping. It already is helping. Now the question really on the table is
have we reached a scenario where there is this oracle, actually, that knows everything? And, no, we have not reached that scenario
yet. We are farther—>>TYSON: Yet?>>PURI: The reason I’m saying that is because
it’s really about domains that you specialize in, actually,
and that information is spread in those domains. So just as an example,
we are working in the compliance domain—regulatory compliance. And yes, we can actually feed information
to the machine, and it learns, and it’s going to find insights,
and, for example, obligations that a particular entity may have. But I think by oracle,
everybody understands it to be a know-it-all, actually. It knows everything, it reacts to everything,
and we have not reached that point. Neither is the intention to reach that point
whereby you know everything and react to everything. The point is really to be precise about the scenarios that are going to help society, whether in the healthcare domain, the public policy domain, or the compliance domain. That’s where a lot of the benefit to society is going to come from. At least as an engineer and scientist, I would say: let’s be more precise, let’s define the problem and solve it within its domain, and then make progress from there—just as we did with chess: we solved it, we defined the next problem, the next level up (language), you solve that problem, and you move on from there.>>WELLMAN: Maybe your question…
if human-level intelligence might be hard, what about Congress-level intelligence? [laughter]>>WELLMAN: But I think that’s not really
fair.>>TYSON: Well, that’s how that saying goes:
If pro is the opposite of con, then progress is the opposite of Congress, right? Ever hear that one? Anybody [know that one]? [applause]>>TYSON: That one goes way back, yeah. Yeah.>>WELLMAN: But I think it’s true. Once we agree on the values,
then AI can be a great help in sorting out the policy questions. And of course it’s not that Congress is
not intelligent. It’s that it’s all about fighting about
the values and the priorities.>>TYSON: Right.>>WELLMAN: And that problem doesn’t go away
when you have AI.>>TYSON: Helen, can you foresee a future where
robots get angry with people?>>GREINER: Um, I think that we
can put in simulated emotions to help with decision making. I think you can also use them for more natural interaction with people, so the robot responds how a human would respond. But not in the way you might think of a person as being angry—not for a while, until some of these other innovations come out.>>TYSON: So there’s a video of
all of the occasions where they abuse their own robots. So they have robots that are walking,
and then they just kick them, and then they… So, I mean, it’s interesting, because…>>GREINER: I think you can tell a lot about a person by how they treat a robot, by the way. [laughter]>>TYSON: Well, that’s my point. So these are robots that
you almost kind of feel for them, because some of them are sort of humanoid
rather than non-humanoid. And the early ones, they would just sort of
fall over. And, I get it, they’re trying to
increase the stability of these robots. So now they’re poking them and pressing
them, and then the robot rebalances and comes back.>>GREINER: But they get lots of complaints
about it, don’t they?>>TYSON: I know. In the—>>GREINER: Like, that’s being mean to the
robot.>>TYSON: Don’t be mean to the robots.>>GREINER: But I think there is something going on—and you hit the nail on the head—that—>>TYSON: Because I think that all robots will have memory.>>GREINER: When we had—>>TYSON: And the first time they achieve consciousness… [laughter]>>GREINER: There have been studies that
people name their Roombas. They get attached to them. Our military robots, too,
when we put them out in the field, we had big Marines come into the robot hospital
saying, “Can you fix it?” And it’s all blown up, and, you know,
he didn’t want any other robot, he wanted that one,
because it had gone on missions for him. It had done like over 18 missions, and
they named it Scooby-Doo.>>TYSON: Did you just say… If I hear you correct…>>GREINER: And, you know, they’re big, tough
military guys, but, because they’re working with the robot
and because the only things they’ve experienced with these kinds of behaviors are animals. It’s not anthropomorphizing. I think there should be another word for thinking something’s sentient—like sentipomorphizing. Maybe we’ll make up a word, coin a word
for that.>>TYSON: I love that word. From here on, sentip…>>GREINER: Sentient.>>TYSON: Sentipomorphizing.>>GREINER: Yeah, exactly.>>TYSON: Right. So you’re saying military who have been
served by a robot, if the robot blows up because it found the
mine, and…>>GREINER: Mm-hmm. They want it back. They want it fixed.>>TYSON: —then they take the pieces and
they go to the robot doctor and say, can you fix him, doc? And these are big, burly Marines.>>GREINER: Yep. And we say, you can have another one. They say, no, I want this one, because its
name was Scooby-Doo and it saved, you know,
it saved 11 guys on one mission, right? [laughter]>>GREINER: And there have been reports of
people giving them burials,
people, military service members—>>TYSON: They buried their robots?>>GREINER: Yeah. Giving them field promotions and—>>TYSON: Do they know that microbes—>>GREINER: —viewing them with personalities,
saying this one’s tough; this one’s a little bit wimpy. I’ve had people tell me that they’re sure
their Roomba moved a pot into the way of the virtual wall
so it could escape. I can assure you, it didn’t figure that
out. It really accidentally did it. But it’s that sentipomorphizing that people
automatically do, and it’s wonder…
it’s kind of cool, right?>>TYSON: If you bury a hunk of metal,
microbes won’t eat it. It’ll just still be metal later on.>>GREINER: We saved Scooby-Doo. We brought him back, and he’s…
he’s at the iRobot headquarters, that one.>>TYSON: He’s repaired. Yeah. I want to sort of land this plane,
but I want to do it in a way that… Because there’s still some really important
pieces of this conversation we have not addressed,
because you all are kind of, I don’t know, you’ve been shy of this
threshold that I want to take each one of you. At some point… Well, let me lead up to it. So I have a calculator on my hip,
and it calculates better than any human who ever lived. So in a sense, it’s a superhuman property
that it contains, that we built. Now, you can go down the list of computer-run things that do things better than the best human ever could have or ever will. And that list is growing, okay? And autonomous cars will be among them. They will drive a car better, faster, and in a more controlled way than any human who ever lived. So as these accumulate,
it doesn’t seem to me to be a stretch to ask
if general AI achieves some kind of conscious state—
whatever that is, however we define it— that that consciousness
will be a superhuman consciousness. Is that…? You’re shaking your head, Mike.>>WELLMAN: No, I’m nodding.>>TYSON: No, no, no. You’re nodding. Mike is shaking.>>GIANNANDREA: I’m shaking my head, yeah.>>TYSON: Yeah. Why are you shaking your head?>>GIANNANDREA: Because having more smart tools
that are superhuman at very narrow things, like calculating or driving or diagnosing
cancer, is not the same thing as having a consciousness and having AGI. We’ve been gaining more tools for the last 200 years—
that calculator you’re talking about, you didn’t have 50 years ago—
it doesn’t make us less human. It frees us up to do more things. I remember when my daughter was in school,
they wouldn’t let her use a calculator to do homework,
which, with 20 years of hindsight, seems absurd, right? But just because you have these tools and
you can use a—>>TYSON: That’s what I’m asking.>>GIANNANDREA: But it’s not inevitable— [crosstalk]>>GIANNANDREA: —it’s not inevitable that
if you have more—>>TYSON: That’s not what I’m asking.>>GIANNANDREA: But you’re making the leap. You’re saying that if you have more of these
tools, then you’ll have AGI,
and I disagree with that.>>TYSON: No, no. No. Okay, I can see how you’d think that. That was not my intent. I’m saying these tools are evidence to me
that the day general AI arrives, there’s some decision-making power that
it will have that will be superhuman. Because everything else we created using computers
and put a lot of thought behind became superhuman in that way. Is it unfair to ask, for the safety of us all, whether general AI would have superhuman consciousness?>>TEGMARK: I think it’s very likely. I think we humans are so stuck on the idea
that we are like the pinnacle of how smart it’s
possible to be, and we have a long tradition of
lack of humility, right? But let’s face it. Our intelligence is fundamentally limited
by Mommy’s birth canal width, and the fact that—>>TYSON: Explain that, please, because that’s… [laughter]>>TEGMARK: —we’re made of these blobs
of neurons and, they’re pretty cool, our brains, but there’s
nothing—>>TYSON: Wait. Pause. Pause right there.>>TEGMARK: —special about that level.>>TYSON: Just to be clear,
we could have had bigger brains, but we would have killed our mothers in every
birth, okay? So we have basically the biggest possible
brain to be born out of your mother without killing
her. And so that’s it. That limited how big our brains could get. [crosstalk]>>TYSON: It’s already an issue,
getting the damn head out of there. So…>>GIANNANDREA: But you’re comparing two
different things here, right?>>TEGMARK: Yeah. I’m talking about AGI.>>TYSON: Am I right? I’ve read that, right?>>GIANNANDREA: You’re comparing one person’s
brain size with the sum total of humanity. Like there’s seven, eight billion of us. We communicate with language. We hopefully cooperate. That is way more powerful than a single AGI.>>TEGMARK: Sure. I don’t necessarily disagree,
but what I’m saying is that once we figure out how to make AGI—suppose that happens in 35 years—
then there’s no reason to think that it’s going to stop there
and become like in all those lousy Hollywood movies
where we have all these robots which are roughly as smart as us, and that’s it,
and we just become buddies with them and go drink beer with them. It’s very likely that
they will just continue dramatically getting better
and they can now start developing even better robots
and they will be as much better at everything as they are today at multiplying large numbers.>>TYSON: This is my… That’s the foundation of my inquiry. Mike, where are you on this?>>WELLMAN: I’m with Max. I see no boundary, and no reason that that wouldn’t
occur. The timing is very uncertain. And I think this uncertainty is also
a part of the equation that we have now about being prepared for it,
because it could happen faster than we think. It could happen slower than we think. There could be obstacles that make it really far off, but we just don’t really know. But, you know, it’s true, you put a lot
of brains together, but we have very minimal communication channels
between us. This linear speech that we’re doing,
compared to what computers do when you link them all up together and have them talk—they can do just so much more. So I think there really is… They’re already superintelligent in many
ways, not just your calculator. Everything we do, it doesn’t stop. They’re not… The algorithmic traders that I talked about
don’t at all stop at whatever human traders can do.>>GREINER: So I believe we are machines
made of biological components, so I think that we will eventually be able to duplicate them and improve upon them. But the problem is when you discount timing entirely. Given what’s being done, you know, this bag of tricks is not going to get us there. There are core inventions that have to happen, potentially different hardware than running a [unintelligible] machine, right? There’s a lot of stuff that has to happen. And if you want them to be mobile,
have better sensors, better mechanics, as well as all the AI. So I think it’s… You say, well, why shouldn’t we worry about
it now? Well, because it’s not very close, you know? In 2000, Bill Joy started writing about these threats to humanity, and one of them was robots. I started getting calls from, like, the Wall Street Journal and everywhere at iRobot saying,
“What kind of human robots are you making?” And I’m like, you know, I couldn’t say
it then, because we hadn’t launched the Roomba yet,
but, “We’re making a robot vacuum,” you know? Don’t… Because it gets people…>>TYSON: Yeah, but what else does it do?>>GREINER: Yeah. But it gets people maybe focused on the wrong
things rather than on the new achievements that AI is just now reaching, because they think it’s becoming general AI, and it’s really not yet. And there are many of us on this stage who would like it to, but it’s not.>>TYSON: Mike, let me just ask. My deepest skepticism that this will
go the way people imagine, especially in the movies,
is we don’t really understand consciousness right now
in humans. So it’s not obvious to me that we can just
assert by fiat that a smart enough computer will achieve
consciousness, when we don’t even understand it within
ourselves. And there was an interesting bit in the movie
I, Robot, I don’t remember if it was captured in the
book itself by Isaac Asimov, but they noted that,
because they didn’t replace code with new code
every time they upgraded the robots, every generation of robot had this baggage
of software that was just dangling there
kind of like our brains with leftover wiring
from long before we became human; different parts of the brain. Evolution doesn’t swap that out and make
it fresh. It builds around it, and it’s gotta deal
with it. We have to deal with it behaviorally. Our primal nature has to be overcome by
later brain revelations that we got from natural selection. My point is, in that film, they asserted
that this extra dangling software made the robots do things that the intended
software— that the latest software—did not intend. And so, in a way, it was almost like a free
will was emerging in them. The robots would do things. And they said, “Well, I didn’t program
that in.” Well, that’s leftover wiring from 20 years
ago. I don’t know what I just did there. I don’t know what that is. [laughter] So evidence that we don’t understand consciousness
is you go to the bookstore
and there’s shelves upon shelves of books on consciousness. That’s evidence that we don’t understand
it, because people are still writing books on
it. You go to the shelf and ask for books on gravity,
there’s like, two books, okay? We got this one. [laughter] So where does it come from
that people just declare that general AI will have consciousness? [single applause]>>TYSON: Oh, thank you, that one person who… Yeah.>>WELLMAN: I don’t understand consciousness, but I also don’t think it necessarily has to be part of this discussion. I mean, when you have an AI that is super
intelligent in every way, it can do any job as well as any person can
do, every capacity,
whether it has whatever we think of as consciousness and has that same,
you know, illusion of free will and way of thinking about itself
seems to be maybe beside the point. We’re still faced with an issue about dealing
with entities like that, whether or not we concur on the consciousness question.>>TEGMARK: Yeah, I agree with Mike there that,
whether it’s conscious or not doesn’t have to affect at all how it treats us. Maybe it should affect how we treat it, right? From an ethical perspective. But I also think we should all remember…>>TYSON: Maybe they’ll come up with their
three laws.>>TEGMARK: Maybe.>>TYSON: Yeah. But robots should not harm a human. No, no. Humans should not harm a robot.>>TEGMARK: Exactly.>>TYSON: Yeah.>>TEGMARK: But we should also remember, I
think, this famous quote of Upton Sinclair
who said that it’s very hard to get a person to understand something when
his or her salary depends on not understanding it. And I find it—
no offense to the three of you here who are from companies—
but it’s been so striking, so striking how— [laughter and applause]>>TEGMARK: —every time there’s a debate
like this that I’m in, it’s always the academics who are like,
“Yeah, this might happen,” and the people from the companies are always
like, “Everything will be fine.” [laughter]>>TEGMARK: I would love to ask you the same
questions over beer when it’s not on camera. [laughter and applause]>>TYSON: That’s why I flank the three of
you [from the] academics. That was all very much on purpose. We’re going to open the floor to questions
in just one moment. If I could just get some summative reflections. Let’s start down here. Should we fear AI? And if so, on what level? Keep it short.>>TEGMARK: Yeah. It’s, should we fear fire or not? Should we love it? I mean, AI is an incredibly powerful tool,
and it’s either going to be the best thing ever to happen to humanity,
or the worst thing ever. I don’t think the question is whether we
should stress out about it. I think the question is— [laughter] —what [unintelligible] stuff should we do
now?>>TYSON: No. Max, Max, you just said,
“It could be the best thing for the… or the worst thing ever,
but we shouldn’t stress.” That is the definition of stress. [laughter]>>TEGMARK: I meant we should… It’s an uninteresting quibble, how stressed you should be. The interesting question is what should we
do that’s useful to maximize the chances that this will be
awesome? Because if we really work hard for this,
I really do think that AI can help us crack all the toughest challenges we have
that are facing us today and tomorrow and create a really inspiring future. But we’re going to have to work for it. It’s not going to just happen if we’re
asleep at the wheel.>>TYSON: John?>>GIANNANDREA: So my problem with this question
is we didn’t, in this whole hour,
define what we mean by AI, right? So there are some very smart people
who think that AGI is inevitable, and that it has ethical implications and so
on and so forth. My beef with that is
there’s lots of technical reasons to believe that it’s not inevitable,
or, I agree with Helen, that it’s… We just have no idea what breakthrough after
breakthrough after breakthrough would be required
to go from the kind of practical AI we have today,
to the kind of AI we’re talking about conjecturing here. So I’ll give you one example. Small children can learn from small numbers
of examples. Today, we have to give computers hundreds
of thousands or millions of examples. A child that learns to play chess can also
play tic-tac-toe, right? Our Go program can’t play tic-tac-toe,
unless we program it to do so. So there are these huge barriers to generality of intelligence, and as a technologist and as an engineer and somebody working in industry, I see no evidence of this stuff imminently happening. That doesn’t mean we shouldn’t be having
the academic, ethical conversation. I just don’t see any evidence of it. Now, the reason that’s a problem is because
it scares people, and it scares people into thinking that everything
with this AI label is scary, and so then people think that we shouldn’t
be doing healthcare with AI or we shouldn’t be doing better data science
or we shouldn’t be doing decision support or autonomous vehicles. And yet, if we build these systems,
they won’t have the ethical problem that we’re conjecturing,
and yet, they will do a tremendous amount of great things for our humanity. And we’re conflating the two things,
and we’re scaring ourselves into not doing what we should be doing,
which is saving people’s lives.>>TYSON: So there’s a cultural rational
barrier that you’re up against, here.>>GIANNANDREA: Yes.>>TYSON: Okay. Ruchir?>>PURI: I think… Well, AI is an extremely powerful tool. I do not believe we are anywhere close to
this fear mongering by some people, and the fear that exists. And I can understand it, certainly—
I think a narrative can be raised to a point where you really start
fearing it. I’ll give a very good example,
and I think just picking up on John’s thread—I talk about this often in the talks I give
as well— our two daughters, and when they were young,
we had like two books of A is for apple, B is for boy, C is for cat. And you look at—you show them… And they were in love with only one book,
anyway. Doesn’t matter how bad it was. And you show them a picture of a cat, only one picture of a cat, and you repeat that multiple times over several
days or a month, and then you show them a picture of a cat
they have never seen before, and they say in their cute voice, “Cat.” It takes, roughly speaking, today, a computer
750 pics of a cat to recognize it’s a cat. [laughter] Now, I’ll give a good example. If I ever showed my daughter 750 pictures
of a cat when she was less than one year old—she’s 16 right now—she would be confused to this day, actually, about what a cat was. So we are so far away from whatever we are discussing that I find the question almost humorous, and I have encapsulated that as a syndrome I call the cats-and-dogs syndrome. So I’ll leave it there. [laughter]
>>TYSON: All right. Helen?>>GREINER: So you shouldn’t fear technology. You should be concerned, and maybe do something, about AI—for example, cyber hacking into AI systems, people using an AI system maliciously, unconscious bias in an AI system—but you really don’t need to worry about general AI yet.>>TYSON: Yet. Okay. Mike?>>WELLMAN: So I think it’s really important
to keep aware of this distinction between the short-term,
narrow AIs which have their own concerns and,
you know, safety concerns and societal concerns with them
separate from the long-term general AI super-intelligent concerns
which are of a different magnitude and different [realm]
and probably much further away. But we, as a scientific field,
and certainly as a society I think, we can think at multiple timescales
and make these distinctions all the time. If we refuse to talk about the thing that’s over the horizon, we’ll lose credibility, because we’d be denying that there’s a potential problem. That is a way to make sure we keep our heads in the sand. There are things that we really should be
figuring out way in advance of this potential super intelligence,
and—>>TYSON: Whether we’ll all die.>>WELLMAN: Well, our children we care about,
and—>>TYSON: Whether our children will die. [laughter]>>WELLMAN: And even if they don’t,
how well they’ll live with those superintelligences [crosstalk].>>TYSON: As pets of superintelligence. [laughter]>>WELLMAN: Well, we hope as…
in a good partnership with them. [laughter]>>WELLMAN: We hope.>>GREINER: [Are we] sucking up to the AI already?>>WELLMAN: That’s the best we can do.>>TYSON: Is that the best you got, here?>>WELLMAN: That’s the best I can do.>>TYSON: We hope…
that our children will be in partnership with AI.>>WELLMAN: I think that’s a fair way to…
way to sum it up. And I’ll stop.>>TEGMARK: Okay. Just in defense of Mike, here,
there is so much more detailed description in all the world religions of hell than of
heaven, right? Because it’s always much easier to think of all the ways we can screw up than to think about good outcomes. That’s why you’re giggling when you’re
trying to say what you’re hoping for. But that doesn’t mean we shouldn’t try. It’s incredibly important that we change… You were making fun of Hollywood for just
never showing us any future that doesn’t suck, right? Blade Runner, or whatever. We really need to start thinking about what
kind of future with advanced AI
would we find really inspiring? And this is not something you should just
leave to tech nerds like us here, right? This is something everybody should think about. And the more clear vision we share for what
sort of future we want, the more likely we are to get it.>>TYSON: Do you detail this in your book,
Life 3.0? Do you go there?>>TEGMARK: I talk a lot about it. I try very hard not to give any glib answers,
because this is really a question we should all discuss. But you don’t do good career planning by
just listing everything that could go wrong.>>TYSON: Although… [laughter]>>TEGMARK: You have to envision success. [applause]>>TYSON: Although—I will only be able to
paraphrase this quote from Ray Bradbury—
when asked… the great science fiction writer, futurist,
they asked him, “Why do you keep portraying these dystopic
futures? Do you think that’s what the future of life
will be?” And he says,
“No. I portray these futures
so that you know what future not to head towards.” That was Ray Bradbury. Ladies and gentlemen, thank you for your attention
this evening. Join me in thanking this panel. [applause] Let’s open up the stage. We’ll have about 10 minutes for questions. We have a microphone on each aisle. And if you try to direct your question to
one panelist, that will go faster than saying,
can I get all five of them to reply. So are we ready? Let’s start it off right over here.>>AUDIENCE: Hi, Neil, how are you doing?>>TYSON: Hey, how are you doing?>>AUDIENCE: All right, good. I wanted to get a little bit back to the
artificial intelligence in the vehicles, and the more complex scenario of—
and I read a little bit about this in California cars—
where, is it… You have a scenario,
the school bus, the bicycle, the kid, or a hundred-foot cliff. And the AI decides the best thing to do is
to drive the car off the hundred-foot cliff, because that’ll
cause the least damage, but it’s going to kill you. Is that something that would be learned,
or a decision that it will make? How can it avoid making that decision where
a human factor might say, hey, there’s no one in the school bus,
the bicycle might be able to make it, you know, at a glimpse,
as opposed to just those simple, I don’t know, algorithms
or decisions that an intelligence would make:
kill the driver; save everyone else.>>TYSON: Yeah. Mike… I mean, John, why don’t you take that?>>GIANNANDREA: Well, I think all of these
systems distinguish between the learned part, like a detector for a stop sign, and the policy part. I think it’s very important that the policy part be explicitly planned for, and then you end up with all the ethical issues about what you want your policy to be. Ideally, you would just stop the bus, right?
>>TYSON: Right. That you have brakes good enough so you don’t have to drive it off the cliff.>>GIANNANDREA: Yeah. Hopefully you saw the cliff
far enough in advance.>>TYSON: In the first place.>>GIANNANDREA: Yeah.>>TYSON: Right. So it may be
that so many of these scenarios you described are real-life scenarios that human beings, in our frailty, encounter, but if it calculated the rate at which the bicycle was entering the street and knew what its braking distance was, maybe it would just be better at it, and we’re troubling ourselves
over scenarios that are real for humans and
highly unlikely for autonomous AI, I would imagine.>>TYSON: Next up, yes.>>AUDIENCE: Okay. You were talking about the eventual
future of artificial intelligence as general intelligence. There was something discussed several years
ago called the singularity, when intelligence gets to the point where both human and artificial sort of blend together.>>TYSON: Was that a question?>>AUDIENCE: Yeah. Do you consider this idea of a singularity to be a possibility?>>TYSON: Sure. Mike?>>WELLMAN: All right. So the singularity usually refers to something
that’s also been called the intelligence explosion:
a point where there’s a kind of a critical mass where
something becomes so smart—Max alluded to this before—
where it can then further self-improve at a rapidly-accelerating rate. It’s quite controversial whether that phenomenon
will happen. It’s hard to really rule it out. There’s also… It’s hard to rule it in as well. It’s not clear that it’s really necessary
to achieve super intelligence, that it goes through this super-accelerating
phase, but that’s one scenario where it could happen
faster than we realize.>>TYSON: And thereby not a linear extrapolation into the future about when it occurs, because if it grows exponentially, what looks small today becomes very large very quickly. Agree?>>TEGMARK: Exactly.
>>GREINER: That’s how it works at Google, so we should have Google answer.>>TYSON: Okay. Google, where are you on this exponential
curve?>>GIANNANDREA: I mean, what I would say about
this is, there are people who have been marketing this notion that the singularity is inevitable, and many of them that I’ve talked to actually want it to happen. And I just don’t think they’re being rational
about the likelihood of it happening. That’s my personal view.>>TEGMARK: Yeah. And many of the people who say it’s never
going to happen don’t want it to happen. So we have to be very mindful [crosstalk].>>GIANNANDREA: That’s true, too.>>TYSON: All right. Next question over here. Thank you for that.>>AUDIENCE: Hi, how’s it going?>>TYSON: Hey. Good.>>AUDIENCE: My question is,
If I have—>>TYSON: That’s very New York, you know? Hey, how’s it goin’? Good. How you doin’? We’re doin’ good. Yeah?>>AUDIENCE: My question is,
if you have the artificial intelligence, or the AGI or whatever,
and it comes to harm or kill you and you pull the plug on it,
is that murder? Because it’s a full, intelligent-like sentient
machine that you’re pulling the plug on.>>TYSON: Let me go to Max on that one. So if we judge value to our society by level of sentience, and then there’s an AI— we’re already burying AI robots
or repairing them as though they’re humans— so do you think the day will come where laws
protect the lives of robots?>>TEGMARK: First of all, if a human comes
to try to murder you, and the NYPD pulls the plug on him,
that’s already the law today, right? So there has to be some sort of protection
of… in there. You can’t do anything you want just because
you’re conscious. Second, I think it’s a,
aside from the very difficult science question which we have to solve
about what kind of information processing even is conscious,
there’s… It’s certainly not as simple as just saying,
oh, you know, all consciousness is equal,
if you’re as smart as the human and there’s conscious—
you know, one consciousness, one vote— because then if you’re a computer program
and you’re only getting 10% for your favorite candidate,
just make a trillion clones of yourself and have them all vote, right? There are a lot of really challenging questions
here that we need to face,
which, again, just comes back to this question, you know,
what sort of society with humans and highly intelligent beings are we even hoping
to create? And once you know that,
then you can ask your questions about what sort of laws it should have
to keep it working.>>AUDIENCE: Thank you.>>TYSON: That was actually an implicit ad
for his book, which we’re selling outside,
signed by him. What kind of world do you want? Life 3.0 available at a local bookstore. Yes?>>AUDIENCE: This isn’t my question,
but have you guys seen the Terminator movies? Anyway, moving right along… [laughter]>>TEGMARK: A great summary of everything you
don’t have to worry about.>>AUDIENCE: Here’s my question:
You talked a lot about bias, and since there isn’t one of us who’s
without unconscious bias, how do you in fact try to eliminate unconscious
bias from a sentient machine?>>PURI: I would really say, the interesting thing about machines in particular is that, unlike humans—all of us are inherently biased, as you pointed out, in some way or the other, whether we admit it or not—you actually can have techniques and algorithms that detect bias in the dimensions that a particular entity cares about, whether there are laws related to it, or whether you really care about it from the point of view of society: it could be in the dimension of race, color, loans that are given out. And algorithms are everywhere in our lives right now. So I would really say the interesting thing about machine learning technology is that you can detect bias. There can actually be laws saying you need to have techniques to detect bias. You can actually un-bias as well. So in that way, I really feel we are—from the point of view of potential—one step ahead, in that you can actually have laws related to detecting bias. You can have un-biasing algorithms as well, and society in general—and potentially policymaking bodies—can ensure that that happens. And I think as industry—I certainly can say this about IBM—that’s one of the things we really focus on: to make sure we are building in responsibility, un-biasing, and explainability.
>>TYSON: That was a great question.>>AUDIENCE: That’s optimistic.>>TYSON: That was a great question, by the way, but I will add… Let me further emphasize that
much of what you do in scientific research after you’ve gotten a result
is to check whether there’s any bias in that result. So there’s a lot of statistical tools
just for that purpose, because you do not want to publish a paper
that somebody finds out has a bias. Forgetting race, creed, gender, color,
just bias of some kind. It could be voltage bias because of the way
you designed your experiment relative to everybody else,
claiming a result that’s not real. So it’s to protect your own reputation,
even, that we have these tools. So it’s actually not as remote. You can test the bias you didn’t even know
you had—>>AUDIENCE: Well, that’s the bias that you’re
looking for, it seems to me.>>TYSON: No, no.>>AUDIENCE: You know the ones you know you
have.>>TYSON: No, no. I get that. What I’m saying is,
in cases where we have data that has no connection to any
rational, social, cultural bias that you could have,
there’s still a way to look for bias. And it’s a bias in the system that is giving you this answer instead of another answer. A big part of scientific research
is discovering bias. So that’s all. So you can feel more comfortable about this,
is what I’m saying.>>TYSON: Sleep well tonight. I promise. [laughter]>>TYSON: Okay. Let’s just, we’ll take a few more. Yes, there.>>AUDIENCE: Hello, Dr. Tyson. First, thank you and the panelists for a truly
fascinating event. [applause] So one of the things that’s happening with
GPS, as we become more dependent on it,
is that our own navigational skills are atrophying. So if we look at that in the context of AI,
do we need to worry, in addition to the AI outstripping our own
abilities, that we will become increasingly dependent
on the AI tools and atrophy our own functional intelligence?>>TYSON: That’s a great question. I want to add to it, and I want to go to John
on this. If our faculties atrophy
because they’re replaced by AI, and we know—
and I didn’t get there, because we don’t have three hours here—
we also know that AI will be replacing many people’s jobs. And I saw some statistic,
maybe it’s exaggerated, but the sense of it is surely accurate
that 70% of men have, as their livelihood, the act of driving some
kind of vehicle either in a taxi, a car service, a forklift,
a truck, a… What’s that? Post office? [That’s what I said] trucks, deliveries,
this sort of thing. So autonomous vehicles render all of them
unemployed. So there are consequences to this
that it’s not clear that we are carrying with us
an understanding or a sensitivity to that. Surely, Google has thought about this. What’s going on there?>>GIANNANDREA: Yeah. So I think throughout the course of history,
technology has caused job displacement, and people find other jobs to do. So it would take many, many, many decades
for all transportation to be autonomous. But even if that happened,
there still would be maintenance jobs, there would be manufacturing jobs,
and so on and so forth. I think no one company has the answer to this. I think policymakers have been
actively talking about this, you know, for as long as I’ve been in the field. There’s no doubt that… I mean, I’ll give you an example of healthcare. It might sound like,
oh, if you build this autonomous system, then it’s going to cause
a doctor… you know, doctors to lose their jobs. That’s not actually what’s going to happen. What’s going to happen is doctors will be
able to see more patients and do a better job of diagnosing them. And oh, by the way, in the rest of the world,
the ratio of doctors to people is pitiful, and people die as a result. So when we design a system that can automatically
diagnose diabetic retinopathy, for example,
and we’re deploying this in countries around the world,
it’s a net addition of wealth to the world.>>TYSON: So the concern about this might have
some luddite elements to it, is what you’re [saying].>>GIANNANDREA: No. No, I don’t think so. I do think there will be job shifts and mixes,
but I think that it will take a very long time. And to this gentleman’s question about GPS,
and now I think we’re up to three different independent GPS systems in the world,
how many people in this room can use a sextant? One or two? Good, good. So there you go. I mean, do we think that’s inherently disastrous? I don’t think so.>>TYSON: I just know when satellites get taken
out, I can find my way home. I got this.>>AUDIENCE: Slide rule.>>TYSON: And a slide… I’m the last person on Earth to be formally
taught how to use a slide rule. Let me qualify that better: I am the youngest person that I’ve ever met who was formally trained on a slide rule. Because when I learned the slide rule,
the next semester, the price of a four function calculator dropped
from $200 to $30. And so then classes just made the calculator… That’s as much as a book cost,
so then they stopped teaching the slide rule, and then I had a slide rule in my hand, and I felt, um… yeah. In an emergency, I can… You know. [Yes, there].>>AUDIENCE: Thank you very much. We know there are neurons in our brain connecting
at 200 times per second, and they can activate very
different parts of our brain and give us our thoughts and ideas and executions. I’m wondering, how big is a computer, a supercomputer, that could mimic our brain’s thinking ability?>>TYSON: Good one. Let’s go to Ruchir. Ruchir… That’s a great philosophical question. Do our modern computers replicate the number
of neurosynaptic phenomena in a human brain? And is that some measure of power?>>PURI: So let me give you, actually,
a very concrete example. What brought this latest evolution of AI together is actually a very large amount of data, together with a compute element that does matrix manipulations—for those of you who may be familiar with linear algebra—something called graphics processing units, known as GPUs. A single GPU consumes around 250 watts of power. It takes thousands of them to focus on a very narrow task. This brain that all of us have is 1,200 cubic centimeters, consumes 20 watts of power, and runs on sandwiches. [laughter]>>PURI: Just weigh it out, actually. Come on. [applause]>>PURI: I gave you very concrete numbers,
actually. And we are in a very narrow domain, and most of the time, computers fail at that as well.
So when we talk about AGI—that’s an interesting conversation, and yes, certainly in academia we should worry about it—we are a long way away from it right now.>>AUDIENCE: My guess is that we already have
enough hardware in the world
that we could make superhuman AGI with it, but we’re just so behind on the software.>>TYSON: And the brain, I think, was
historically called wetware, right?>>GIANNANDREA: Mm-hmm.>>TYSON: Software, hardware, wetware.>>GIANNANDREA: Mm-hmm.>>TYSON: Okay. Just… I’m showing off that you knew that term,
yeah. [laughter]>>GIANNANDREA: And just to be clear, I mean,
with all the advances in neuroscience, which have been tremendous in the last 30
years, we still have no idea how the human brain
works. So we shouldn’t get ahead of ourselves.>>TYSON: Right. And we don’t know what consciousness is,
because we’re still writing books on it.>>TEGMARK: Well, we’ll probably be able
to figure out how to build AGI
before we figure out how the brain works, just like we figured out how to build airplanes
before we were able to build mechanical birds.>>GIANNANDREA: Maybe.>>TYSON: That’s a good point. [laughter]>>AUDIENCE: Good evening. I could probably be up there with you, Neil,
on learning slide rule. I’m 56 years old,
and I learned how to use a slide rule before I had a calculator.>>TYSON: Excellent. So I will no longer say I’m the youngest
person, because I’m older than you. Yes.>>AUDIENCE: A question—>>TYSON: Wait. I gotta test him. What’s the K scale for?>>AUDIENCE: It’s been a long time.>>TYSON: Oh. [laughter]>>AUDIENCE: It’s been a long time. I still have my [unintelligible].>>TYSON: Oh, give me an old-timer. Old-timer here, what’s the K scale for. Steve? K scale? The K scale is the cube root scale.>>AUDIENCE: Okay.>>TYSON: That was really good.>>AUDIENCE: I still have it, though. I still have my slide rule. I still have it in my…>>TYSON: All right.>>AUDIENCE: All right. Up to this point, everyone’s been talking
about quantity: how much power, power, power. What about quality? Certain things in life that we do
can’t be quantified. It’s a quality:
love, hate—>>TYSON: —appreciation of a painting—>>AUDIENCE: Right, exactly.>>TYSON: —music—>>AUDIENCE: Emotion. How is AI working on that end of quality of
things, as opposed to quantity
and raw computing power to do something?>>TYSON: Michael, where does aesthetics come
in? Aesthetics?>>AUDIENCE: Yes.>>WELLMAN: Well, that’s right. I mean, certainly there are computers that
compose music and even paint,
and the question is, how will you judge this quality? And, yeah, I suppose one way to do it would
be to ask humans about that,
and people have even tried evolving art that humans like,
and there is computer art. It may not be for everyone,
but it’s just difficult to judge. But there’s really, again, no… What they’re—computers are going to have
to figure out a lot about humans’ tastes
to compete on that—in that territory.>>TYSON: Unless it achieves a super consciousness
and invents a higher-level aesthetic
than anything we ever imagined.>>WELLMAN: Yeah, well, look. Maybe they already—>>TYSON: Wait. You’re acting like I pulled that out of
the ether. Because AlphaGo made a move, if I remember
correctly… No. Alpha Zero made a Go move—>>WELLMAN: AlphaGo [unintelligible].>>TYSON: —that no one had ever imagined
before.>>WELLMAN: Yeah. Yeah, and I was lucky enough to be in Korea
for that match, and I could just see the gasps on the experts’ faces. It was like move number 23 in one of the games,
and the experts were just like, that must be a mistake, right? And it actually turned out to be the beginning
of the end of the game. And so then people anthropomorphize, though,
and they say, well, this program must have intuition and creativity. But it’s just an engineering model, right?>>TEGMARK: But, you know, running a computer
that makes art that it likes is actually very easy. [laughter]>>TYSON: Yes.>>AUDIENCE: You talked a lot about AGI
and then the future of AI. And there’s a lot of scared people about
AI when you hear it. What are you doing to combat
the scared people and explain these extremely complex algorithms
to the public, and more importantly, the government?>>TYSON: I would say, Helen, what… You said you had early pushback on the Roomba,
because it was the first sort of AI in the house. How did you deal with the PR challenge of
this?>>GREINER: I think we had more pushback
before they saw it. Like I remember the first focus group. We’d go to women and say,
Hey, how about a robot vacuum? And they would imagine like
a Terminator pushing a vacuum, and they’re like, no, no, not at my house. You take out a Roomba, you show it to them,
and, you know, if it gets uppity, you just give it a whack,
and… you know. It’s a completely different thing.>>TYSON: You punish your Roomba?>>GREINER: It’s like computers. People used to fear, like, HAL taking over from 2001, and once they have a computer on their desktop
and they see that, you know,
blue screen of death in olden times, they start not fearing it. Same thing with a Roomba. Once you have a Roomba and you see what it
can do, what it can’t do…>>TYSON: If I could just add to this,
I think slowly, we’ve become more accustomed to computers
running things that in a previous day,
might have freaked us out. We’ve all been on the tram
that gets you from one airport terminal to another
and no one freaks out that there isn’t an engineer
driving it at all. It’s just… And it opens and closes doors. No one gets decapitated
coming in and out. So, you’re right,
it’s a slow adjustment, but I think it’s real and irreversible,
I mean in the sense that we’re not going to go back and say,
gee, I want a human being driving this tram. We know it’s not necessary. And I had an interesting revelation. I saw the movie Airport. That’s the disaster movie from the 1970s. And it’s a Boeing 707 or a 727. Not a big plane, by today’s standards. They went into the cockpit. There were four people in the cockpit. I said, “What the hell are they doing?” One guy’s got a map with a compass. There’s a… And I had forgotten
there was a day when you needed all these people to fly the damn plane. Now, you barely even need one person, right? For the 777 and some of the others—they’re really computer-flown,
and we’re so much more comfortable with this. Yeah, so I think it’ll happen, but slowly.>>TEGMARK: Also to combat fear,
I think it’s really important to also focus on
talking about the upsides. Everyone knows someone who’s been diagnosed
with a disease the doctor said was incurable. Well, it was not incurable. We humans just aren’t intelligent enough today to figure out how to cure it. Of course, this is something AI can help with,
right? We should talk about things like that. And also, the second thing is
it’s just so important that the public doesn’t perceive
that we AI researchers are trying to sweep the whole question under the rug—like, nothing here to worry about. Because that’s what folks fear, right? If the public can see
that the researchers are having a sober discussion about this
they’ll feel much more confident, I think.>>TYSON: Okay. Only time for just a few more. Yes?>>AUDIENCE: Thank you. I’m a young AI researcher from Queensborough
Community College—>>TYSON: Cool.>>AUDIENCE: And I have a hundred-plus-one
questions for you just right now.>>TYSON: Let’s do the plus-one. How about that?>>AUDIENCE: My only question is, can I have
more questions?>>TYSON: Ooh.>>AUDIENCE: Really. Would you give me the opportunity
to talk to you at some point for seven minutes of your day
just about AI?>>TEGMARK: Email us.>>GREINER: Sure.>>GIANNANDREA: Sure. [applause]>>GREINER: LinkedIn; LinkedIn.>>TYSON: Generally, the email of academics
is public. You just go to the university; generally you can find them. Folks in corporations are harder to
get at, because they’re— they’re up to stuff that they don’t want
us to know. Generally, that’s how that works.>>GIANNANDREA: But we do like—>>GREINER: LinkedIn is a great way to connect.>>GIANNANDREA: Yeah. We do, like, Reddit AMAs and things like that,
so there’s a lot of places where you can interact with us.>>PURI: You can find us on the Internet as
well, so.>>TYSON: Cool. Right here, yes.>>AUDIENCE: Hi. So you guys kind of touched on this question. Some people prior already asked my question,
so I kind of tweaked it. So as AI kind of grows,
and as AI kind of takes over the tasks that humans can do currently,
would you consider, or would you think that there is potential
for like a renaissance of art, philosophy, and
new sciences that we can explore as AIs take over our old jobs?>>TYSON: Is it because we have free time available
to us?>>AUDIENCE: Yeah.>>TYSON: That’s an interesting question. All right, so Max?>>TEGMARK: I think absolutely. There’s… You know, today,
we have this obsession that we all have to have a job,
otherwise we’re worthless human beings, right? It doesn’t have to be that way. If we can have machines to provide most of
the goods and services, and we can just figure out a way of sharing this great wealth so that everybody gets better off,
you could easily envision a future where you’d really get to have a lot more
time living life the way you want.>>TYSON: That is so hopeful of you that
you believe that humans with free time will create, and not just consume video
from the couch. This is so beautiful. [laughter]>>TYSON: That is a beautiful thing. [applause]>>TYSON: Yes?>>AUDIENCE: In 1946,
Isaac Asimov wrote a short story in which technology had advanced to the point
where a political candidate was suspected of being a robot
and no one could tell for sure whether or not
he really was a robot. But what he did not envision
was a time when technology advanced to the point
where an informed electorate would not be able to distinguish
between real news and news that was generated by artificial
intelligence programs. Considering we’re at that point now,
shouldn’t it be the primary concern of the AI community
to realize that the tools that they have created
can be used in a way that they never intended, and that they should do something about it? [applause]>>TYSON: Oof. That one has to go to John from Google.>>GIANNANDREA: Sure. Um…>>TYSON: Yeah?>>GIANNANDREA: So I’ll say something positive
and something more serious. So most of the fake news that we battle every
day in, for example, something like Google Search,
is actually human-generated. It’s actually not algorithmically generated. So absolutely, we have a responsibility to
do a better job in our products and our competitor’s products,
and I know for a fact that we take that responsibility very seriously and have made a lot of efforts in the last two years, starting with, I think, accepting that responsibility. The thing I’m worried about is that what
you just said might come true in future elections. Today it is beyond the state of the art for computers and natural language understanding to determine veracity—what’s true versus not true. So we have lots of proxies for what we think
is trustworthy, but if computers advance to the point where
they can write as well as humans and at scale,
then I think we may have a serious problem. And there is a general—>>TYSON: And give speeches; good speeches,
yeah.>>GIANNANDREA: Yeah. I mean, there are some systems today
that can write newspaper articles, and you consume them about sports and finance,
and you don’t know that they’re written automatically. What I’m really worried about is the rise of so-called generative systems, where videos and texts and tweets and so on and so forth can be written, and the technology doesn’t exist to distinguish them. I do think it’ll be a bit of an arms race,
right? There are researchers working on both sides of this to try and detect these things,
and Michael might want to say something about this as well. But it’s the very forefront of what a lot
of artificial intelligence researchers worry about, and it’s… But the stuff that is most worrisome today
is actually generated by human beings.
>>AUDIENCE: Well, we’re already at the point where on Twitter,
if someone takes a position that you disagree with,
you say, “Well, you’re a bot.” You don’t even believe they’re a real
person anymore, you know? Because you believe the technology—
a lot of people on Twitter believe that technology’s advanced to that point already. So even if the technology isn’t real,
if people believe it’s real, then you have a serious problem.>>GIANNANDREA: Yeah, but I don’t think it’s
beyond the state of the art for social networks to do a better job, and I
think they are.>>TYSON: Wait, wait. We’re forgetting that
we spend 20 years educating our children. And so you can adjust the educational system
to be explicitly aware and sensitive
to how they could be duped by the Internet. We do that for how not to be duped by charlatans, by con artists. Those are the lessons of life. So I think it’s unrealistic to expect an entire industry to somehow change so that it doesn’t hurt us, when, in fact, it’s our susceptibility that one ultimately can point to. And so we need defense mechanisms
to protect us against that. And I think, as an educator,
that happens in the educational system. Maybe I’m biased about this,
but I think we have more power over that than people admit. Yeah. Can I get like the three youngest kids up
front right now? Just… Okay. Go ahead. You go spread… I have the power to make this happen. You just go to the front of the line. Okay, yes. Go. Thanks for coming, by the way.>>AUDIENCE: Thank you, thank you.>>TYSON: And how old are you?>>AUDIENCE: I’m 13.>>TYSON: Thirteen, very cool.>>AUDIENCE: Yeah. So—>>TYSON: Is it good being a teenager?>>AUDIENCE: Uh, I mean, it depends.>>TYSON: Yeah, good. That’s a very good answer. That’s the correct answer, yeah.>>AUDIENCE: So if…>>TYSON: If you ask any adult,
do you want to be a teenager again? The answer will be no, okay?>>AUDIENCE: So if there’s no bias,
how can an AI have a personality? I know this was kind of touched on before
with the other bias question.>>TYSON: That’s an interesting question,
because so much of what creates the nuances in us
are things you like, things you don’t like, tastes that you have,
and some of that could be viewed as bias. So where are we here? Great question.>>WELLMAN: Yeah. I recently ran across somebody
referring to nondiscriminatory learning, and that’s really an oxymoron. It’s impossible. The whole point of learning is to make distinctions
and to discriminate. And so what’s really hard is defining
what is the kind of bias that is unwelcome bias,
and which is the kind of discrimination that is actually helping us make
the right [case]. Defining that is very hard.>>TYSON: You don’t mean discrimination in the
civil rights sense. You mean discrimination as liking this rather
than that as a simple act.>>WELLMAN: Right. Well, the thing is that that could then morph
into the other kind if it’s…
if you’re using the wrong reasons to make your decisions about what you’re
accepting or what you’re choosing to do. And I think that we have to refine what our
notions are. We have a current legal system that is designed
for a world where humans are making all the decisions,
and you could get into a lot of human things, like intent. Now, there are big loopholes for
situations where machines are making decisions that are potentially subject to biases.
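[Editor’s note: Wellman’s distinction can be made concrete. A learner’s whole job is to discriminate, in the sense of finding the features that best separate the labels; if the historical labels encode unwelcome bias, the feature that fits them best may be a proxy for a protected attribute. The loan-approval data and rules below are fabricated purely for illustration.]

```python
# A toy approval history in which past decisions happen to track
# neighborhood. Every name and number here is fabricated.
applicants = [
    # (neighborhood, years_employed, historically_approved)
    ("north", 9, True),   ("north", 3, True),  ("north", 6, True),
    ("south", 10, False), ("south", 8, False), ("south", 2, False),
]

def fit_to_history(rule):
    """Fraction of past decisions that a candidate rule reproduces."""
    hits = sum(rule(n, y) == label for n, y, label in applicants)
    return hits / len(applicants)

experience_rule = lambda n, y: y >= 6          # a plausibly welcome distinction
neighborhood_rule = lambda n, y: n == "north"  # a proxy for a protected trait

print("experience rule fits history:  ", fit_to_history(experience_rule))    # 0.5
print("neighborhood rule fits history:", fit_to_history(neighborhood_rule))  # 1.0

# A learner that only minimizes training error prefers the proxy rule.
# Deciding that it must not is a legal and ethical judgment about which
# discriminations are unwelcome, not something the statistics settle.
```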
>>AUDIENCE: Thank you.>>TYSON: Okay, sure. Right over here, yes? And how old are you?>>AUDIENCE: I’m 10.>>TYSON: Ten, very cool. Welcome.>>AUDIENCE: So this was slightly touched on earlier, but Asimov wrote a book called I, Robot,
and the first story in it is about a girl who’s best friends with a robot,
and she doesn’t have any other friends except the robot. And do you ever think that a robot could replace
all human friends and interactions with other humans?>>TYSON: Whoa.>>GREINER: Ooh. Um, well, I think in a very long timeframe,
yes. And, as I said, people today, I think,
start to get attached to these mechanical devices,
maybe thinking of them more as a pet right now than a friend,
but I think in the long term you could get attached to a robot system.>>TYSON: There was an actual—
There was an episode of Twilight Zone that addressed this problem. There was an outpost colony on
an asteroid with one man stationed there, and there’s… I forgot the details, but they sent him a
robot to keep him company. And then it was time to get him back to earth,
and there was only weight enough on the craft for him,
and not the robot. But it was a female robot, and he actually
fell in love with the robot. And they kept telling him, it’s a robot. “No, but she’s real. I swear that she’s real.” And in the… I don’t want to give away what happens here,
but… [laughter] Yeah, no, I won’t give it away. But if you find that episode,
I think all the episodes are on Netflix, so do a search for like robot on an asteroid. You’ll find that episode, and check it out.>>AUDIENCE: Thank you.>>TYSON: Yeah. And most Twilight Zone episodes
don’t end well. Just, I want to just… [laughter]>>TYSON: Let’s clear out this line, and
we’ll end with you, okay? Yes, go ahead.>>AUDIENCE: Okay. IBM has a panel for ethics, morals, and values. But how can you say that a company in China
would have the same outlook on making advanced computer technology
as IBM or Google? Because, can you trust China with doing that? And another question is, with these
advanced robots, like the replicants in Blade Runner, why do—
I know you said it’s far ahead in the future— but why make a machine that looks so humanoid
anyway, when you could have an R2-D2 and say, okay—>>GREINER: Yeah, R2-D2.>>TYSON: Good one.>>AUDIENCE: —could you wash my floor, could
you do my dishes? I don’t need any robotics to make it look
so humanoid or like…>>TYSON: C-3PO was…>>GREINER: Right. I think you’re hitting on something, now.
if maybe there could be a future where they might want to like, you know, hey, you
wash the floor and you, whatever it is.>>TYSON: Yeah, Helen.>>GREINER: Right. [crosstalk]
Or to phrase it another way, there’s like 8 billion humans
in the world. They all work really, really well,
so I’m not sure the market for making a humanoid is actually there. But one of the reasons Roomba is effective
is that it goes under the beds and into places where humans find it difficult
to get. So, by designing them around the jobs they’re
doing, I think they’re actually more effective
than potentially making a humanoid.>>AUDIENCE: But why make a future robot look
humanoid, then we have [crosstalk].>>TYSON: No, that’s her point. Her point is that will not happen—>>GREINER: Yeah, why? I agree with you.>>TYSON: —in the way we all think it will. And here’s an example. I remember seeing an old movie,
and you say, okay, I don’t want to drive my car. I want a robot to do it. So out comes a humanoid robot, and it drives
the car. Without thinking that maybe the car itself
could be the robot, right? And remember The Jetsons, the maid, the robot
maid, had an apron. [laughter]>>TYSON: Okay? And it was clearly female,
when it didn’t have to have any gender at all. So that’s how we used to think of it,
but I agree with Helen entirely. You design something for its task,
and that will hardly ever have to look like a human being. You have the last question this evening. So how old are you?>>AUDIENCE: Eleven.>>TYSON: How old?>>AUDIENCE: Eleven.>>TYSON: Eleven. Very cool. Very cool.>>AUDIENCE: So my question is,
as AI increases in our society, do you foresee social ramifications for our
future and for our future generations?>>TYSON: Social ramifications like what?>>AUDIENCE: Such as,
as intelligent machines are integrated more into society, could we become socially inept and regress
as the machines get smarter?>>TYSON: Yeah. Do humans start looking less relevant, less
important, clumsy, stupid, inept? Is that enough words to get the point across,
here? Yeah, Mike?>>WELLMAN: I mean, I think people will have
to deal with the fact that a lot of the stuff that they
have gotten status from in the past may not be
an avenue for them to do so in the future, and they will have to find other ways to find meaning in their lives,
not just tied to a certain livelihood that they may be [from]. It has been, for most of our recent history
of automation, that it was lower-status jobs that got automated
away earlier. That may not be the case. It may be the lawyers that get automated next. [laughter and applause]>>TYSON: So the higher the capacity of AI,
the higher the level of the job it can replace.>>WELLMAN: It may not be in any kind of direct
ordering, you know? It might be that you can get the lawyers,
but you can’t get the dishwashers or the… So it’s going to be mixed around.>>TYSON: So it could be that AI will create
a version of itself that will replace AI researchers.>>WELLMAN: None of us are safe. I’ll leave it there.>>TYSON: Thank you. Thank you for that question. Thank you.>>AUDIENCE: Thank you. [applause]>>TYSON: Allow me to share with you an AI
epiphany I had two days ago where I said publicly that I was fearless
of AI because if it starts getting unruly or out
of hand I just unplug it, or, since this is America,
I can just shoot it, right? So I’m pretty confident that I… What would I have to fear? And then, um, I was listening to a podcast
hosted by Sam Harris where he had an AI person on just recently,
forgive me, I’ve forgotten his name, and Sam Harris mentioned my comment to him. And apparently it’s a well-known… It’s like AI in a box. So you know it’s powerful,
you know if it gets into the economic systems and the Internet
it’ll take over the world, so you just leave it in a box. It’s safe there. And what the guy said is,
“It gets out of the box every time.” And I said… I’m thinking to myself, how and why? Because…
it’s smarter than you. It understands human emotions. It understands what I feel, what I want, what
I need. It could pose an argument
where I am convinced that I need to take it out of the box. Then it controls the world. And we don’t even have to discuss what that
conversation needs to be. We just have to be aware, for example,
that, let’s say you’re trying to get a chimp in a room,
and the chimps say, “We think something bad is going to happen in that room,
so nobody go into that room.” Then we come up, and we are way smarter than
chimps. We just take a banana; toss it in the room. “Oh, there’s a banana in there now!” We go in; we capture the chimp. The chimp did not imagine that
we would show up with a banana. We captured the chimp. So just imagine something that much more intelligent
than we are that sees a broader spectrum of solutions
to problems than we are capable of imagining. And when I heard that, it’s like, yes. The AI gets out of the box every time. Yes, we’re all going to die. No. [laughter] Join me in thanking our panel. [applause]
