The Evolution of Consciousness – Yuval Noah Harari Panel Discussion at the WEF Annual Meeting

(light music)
– Good morning. Ah, good. It’s amazing, I say good
morning and the music stops. I want that at home. Welcome to our session this morning on the Evolution of Consciousness. I think we’re all here
probably for similar reasons. There’s a kind of, and I’m sure we’ve all picked
it up this week in Davos, this kind of free-floating anxiety about you know, if AI
is gonna be the answer to everything and machine
learning is gonna outthink us. Robots. What’s to become of us human beings? What is work going to look like? Is there a place for us in the
world that we’ve created? Have we sown the seeds of
our own destruction, I guess, is the question at the core of this and we’re going to talk
about consciousness, human consciousness, the nature of it, and how it evolves, does it evolve? With me today, I’m Amy Burns. I’m the editor of Harvard Business Review. But with me today are, and I just, I’m doing this because I don’t wanna
miss the proper titles and affiliations. All the way to my left is Jodi Halpern, who is a Professor of Bioethics
and Medical Humanities at Berkeley. Next to her is Yuval Harari, Professor of History at the
Hebrew University of Jerusalem, and closest to me is Dan Dennett, who’s a Professor of Philosophy at Tufts. Welcome. Great to be here with you. So I am going to quote something to you that I read in an article from Wired which, you know, kinda gave me nightmares. It said, “Countless
numbers of intelligences “are being built and programmed. “They’re not only going to get
smarter and more pervasive. “They’re going to be better than us “and they’ll never be just like us.” That was a Freudian slip,
I couldn’t get that one out. So, Dan, I’m gonna start with you. Here’s a little question. What is consciousness?
(laughing) – Oh, that’s an easy one. Thank you, Amy. – Let’s land that.
– You want the 10-second answer, or the 10-hour answer? – Well, actually, do
let me help focus that. I was only half-kidding. That’s a Davos joke, I guess. How far has science gotten us toward understanding the
basics of consciousness, what consciousness is at its heart, and talk about that, well, actually, I’ll save
that question for you, Jodi. Go ahead, Dan. – It’s making great progress. I think, in my career, there was a period when it was considered off-limits to talk about it, for scientists to talk about it, but that’s changed in the last 25 years and now there’s a regular gold rush going and there’s a lot of good work being done and there’s a lot of controversy and there’s a lot of big egos fighting it out at the top, but we’re making real progress, and I think that the idea
is a properly confirmed scientific theory of, not
just human consciousness, but animal consciousness and by extension, machine consciousness, if that’s possible. We’re, stay tuned. Things are happening very fast. – Well, can you bring it
into semi-focus for us? – Consciousness, well, there are many different meanings. There’s just plain sentience, and even plants are sentient, but they’re not conscious
in any sensible sense. It’s an open-ended capacity to represent your own representations and reflect on your own reflections and it’s what gives us the power to imagine counterfactual
futures in great detail and think about them, for instance. That’s just one of the
things that consciousness, in us, can do. That very fact of imagining nonexistent states of affairs, distant in time and space, for instance, we have no reason to believe that any other species is capable of that. I might prove to be wrong, yes, and it might prove that some moon somewhere
is made of green cheese, but I don’t think it’s very likely. – Okay, how does that sound to you, Yuval? – Well, what do I think
about the same question? – Yeah. – Well, I think that
there’s a lot of confusion between intelligence and consciousness, especially when it comes
to artificial intelligence. I would say that intelligence is the ability to solve problems. Consciousness is the
ability to feel things, to have subjective experiences
like love and hate and fear and so forth. The confusion between
intelligence and consciousness is understandable because
in humans, they go together. We solve most problems through feelings, but in computers, they could work, they could be completely separated so we might have super intelligence without any consciousness whatsoever. – Agree. – Jodi, do you agree? – [Jodi] I do. – And what about when we sort
of connect this to empathy, which is really your field. Do you think we’re getting any closer to understanding that? – Yeah, I think, by the way, one of the great things about, thank you. One of the great things about AI is not our increased understanding of it, I mean, that’s great. A lot of you work in that area but it’s causing us to
be much more precise in our questions about
ourselves, which I think is great and what I think is that as a, I’m a psychiatrist,
philosopher of emotions, ethicist, so I have to say part of
what people are worried about is neither consciousness nor
intelligence, but the self, and what is the self and will we be able to become, will that be replaced? – [Amy] So how do you
differentiate among those? – Well, I think that the way that Yuval just defined consciousness,
which is different than Daniel’s definition of consciousness is closer to my definition of the self. So, I don’t care about the semantics. So I think we’re pretty compatible in our views of a lot of this, but I think that what
I would just point out is that I know there
are people even at Davos who are creating
companies to try to create empathic AI, and I really agree with Daniel
that that’s a big mistake, that we really should
be using AI as a tool, not as a companion. And I can talk forever about
why I think that’s the case but I’ll just give you two quick things. So I’m a teacher, I’m a Berkeley professor, and I’ve been teaching doctoral
students in the sciences ethics for a long time. So for 20 years, I’ve
asked the same question the first day I’d meet
with the doctoral students who are very good scientists, and I say if you could
have a little electrode planted, by the way, 20 years ago, I didn’t know we’d be
able to do this (laughs). It was a thought experiment. I said if you could have an
electrode planted in your brain that would make your crucial life decisions, they’re at a stage of
life where they’re making very existential decisions: who to marry, whether to have children, what career to pursue, and I would say to them, if your electrode would make the right decisions for you, and you would have a happy
outcome, a better life, would you do it? And I just did this four days ago, again, with my new course at Berkeley. I’ve done this every year. And every year, every
single person says no and I think that’s incredibly interesting. So there’s two ways of
thinking about ethics and one is outcomes-based, utilitarianism or consequentialism. And another way of thinking about what we as selves and persons care about goes beyond just happiness
maximized as an outcome, to the kind of processes
we live our life through. Are we autonomous, are we agents? And I would say even more
than autonomy and agency, what is our relationality? How we relate to others, how we encounter others and make decisions and are with others? And I have to say one
thing that’s very shocking. This year, learning more and
more about where AI is at, I actually think my students are wrong. I think I’d say yes to the electrode because I, and this where
I’m on the same page, I think that with enough, I mean, not today. Not with the AI of today, but if there is an AI, this is not AI that would do empathy, I think that there’s an AI that would make good decisions for Jodi. This is more about agency and autonomy. I think if it knew everything about, you know, everything
I’ve ever been through and how the world is changing and everything about my physiology and you can tell, basically,
you asked me what empathy is and my whole career is
saying what I think it is and it keeps changing, but the main thing is
there’s two parts of it. One part of it is being able to really sort of micro-recognize other people’s internal worlds. And I am now convinced with the two of you that AI could, in principle, do that. But the other part of it, I’m a psychotherapist as well, so what makes empathy
transformative in psychotherapy? One part of it is that the therapist recognizes what you’re going through in ways you might not even recognize, and I think, we do have, by the way, Stanford has a whole weekend on this and I worked with them, there is AI psychotherapy now, already, and it’s like a smart journal. And I’m not against that with people that are not demented or not children and can really understand that
this is just a smart journal that can tell them their reactions, but the other part of really
transformative psychotherapy is the co-vulnerability, that you know it’s actually another human who is subjectively experiencing the gravity of what you’ve been through, a being-with experience. Accompanying you, actually. We have Matthieu Ricard
here who knows about this, and that is really on a scientific level a lot of what my whole
career has been showing, the value of that, and I’m not gonna even
say whether I could ever do that or not ’cause it would have to be sentient in all these ways, but what I’m saying is
we don’t want it to. We wanna do what humans are good at and that’s what humans can do and when we lose all the
jobs we’re gonna lose because of AI doing mechanical things, do I want AI doing the caregiving, the elder care, the childcare? I don’t want AI to be doing that. I want us to be doing the humanizing thing and that co-vulnerability of other humans and being-with, that’s what humans do best and I’ve talked a long time but I’ll just say something
really provocative and one of our articles related to this, I think we’re good at it
because we’re not logical. This is a very bizarre thing to say but I think it’s the glitches in us which are related to our finitude that make us really feel
understood by each other. We empathize most around
each other’s mistakes, each other’s ridiculous suffering, and I think that it’s a fool’s task. That’s just exactly what
we’re not needing AI for. But to make smart decisions? I trust AI. But not to understand
me and help me transform and feel related to. – Go ahead, Dan. – I think that, a point you made about
vulnerability I wanna return to. It seems to me that in the foreseeable future, AI systems are going to
be tools, not colleagues. Not, and the key point there is that we, the tool users, are going to be responsible for the decisions that are made and if you start thinking about an AI as being a responsible agent, put consciousness aside and say, well, it’s an intelligent agent but can we make it a responsible agent? I co-taught a seminar, a
course on autonomous agents in AI a few years ago and as an assignment to the students, I asked them, not to make one of these, but simply to give the specs for an AI that could sign a contract. Not as a surrogate for
some other human being, but in its own right, one that was legally responsible. A child or a demented person
can’t sign a contract. They’re not viewed as
having the requirements of moral agency and you ask yourself, what would an AI be that had the requirements
for moral agency? And the point that they
gradually all drove to was, it has to be vulnerable like us. It has to have skin in the game. And right now, AIs are unlike us in a
very fundamental way. You can reboot ’em, you can copy ’em, you can put ’em to sleep for
1,000 years and wake ’em up. They are sort of immortal. And making them so that they have to face their finitude of life the way we do is a tall order. I don’t think it’s impossible but I think that nobody
in AI is working on that or thinking about that and in the meantime, they are engaging in a lot
of false advertising which I, one of my all
time heroes is Alan Turing. Right up there with Darwin. And Turing was a brilliant man. His famous Turing Test, which
I’m sure you all know about, has one unanticipated flaw. By setting up this litmus
test for intelligence as an opportunity for the computer to deceive a human judge
about its humanity, it put a premium on deception, a premium on seeming human that is still being followed in the industry. Everything from Siri on down and up, they have these dignified
humanoid interfaces and those are pernicious because I mean, one thing that
my whole career is based on is the idea of the intentional stance where you take something complicated and you make sense of it by considering it as a rational agent
with beliefs and desires and when we do that, we
always are overcharitable. We always imagine more comprehension, more understanding, more rationality than is actually there. And rather than fostering that by having these oh-so-cute
and friendly interfaces, it should be like the
pharmaceutical companies. They should have to remind every user of all the known glitches
and incomprehensions in the areas the system is
absolutely clueless about. Of course, that would run on
for pages and pages and pages so we’d have to find
some substitute for that, so until we’re ready to have AIs that you would be comfortable and rational,
making a promise with or signing a contract with, then we’re the ones, the buck stops with us. Whatever advisor we have,
a very intelligent advisor, but when it comes to
acting on that advice, we shouldn’t duck responsibility and as long as we maintain
our own moral responsibility for the decisions we make aided by AI, I think that’s a key to keeping
us in the driver’s seat. – Do you agree, Yuval? – I’ll take the AI’s position. (laughing)
I’ll point out that humans are not very
good at decision-making, especially in the field of ethics, and the problem today is not
so much that we lack values. It’s that we lack understanding
of cause and effect. In order to be really responsible, it’s not enough to have
values and good intentions. You need to really understand the chain of cause and effect. Now, our moral sense evolved
when we were hunter-gatherers and it was relatively easy to see the chains of cause
and effect in the world. Where did my food come from? Oh, I hunted it myself. Where did my shirt come from? I made it or my family or friends made it. But today, even the simplest question, where does this come from? I don’t know. It will take me just a year, at least, to find out who made this
and under what conditions and was it just or not just. The world is just too complicated. In many areas, not in all areas, but in many areas, it’s
just too complicated and when we speak, for
example, about contracts. So I sign contracts almost every day like I have this new application and I switch it on, and
immediately a contract appears, and it’s like pages and pages of legalese. Me, and I guess almost everybody else, we never read a word. We just click “I have read” and that’s it. Now, is this responsibility? I’m not sure. I think one of the issues, and this comes back to the issue of self, is that over history, we have built up this view of life as a drama of decision making. What is human life? It’s a drama of decision making. And you look at art, so any Hollywood comedy, any Jane Austen novel,
any Shakespeare play, it boils down to this great moment of making a decision. Do I marry Mr. Collins or this other guy, I forgot his name. To be or not to be?
– Darcy. (audience laughing) – Do I kill King Duncan? Do I listen to my wicked wife and kill King Duncan or not? And the same as religion. The big drama of decision making. I will be fried in hell for eternity if I make the wrong decision and it’s the same as modern ideologies that democracy is all about the vote or making the big decisions. And in the economy, we have
the customer is always right. What is the customer’s choice? So everything comes back
to this moment of decision and this is why AI is so frightening. I mean, if we shift the authority to make decisions to the AI, the AI votes, the AI chooses, then what does it leave us with? And maybe the mistake was in framing life as a
drama of decision-making. Maybe this is not what
human life is about. Maybe it was a necessary
part of human life for thousands of years, but it’s not really what
human life should be about. – [Amy] Jodi. – I love this discussion right here. We never get together. I actually think there’s a deep, I sometimes, I hope it’s fun for you guys, but sometimes it’s more
fun if we’re fighting but I mean, I feel like
you guys are writing the next sentences in my head. I love this discussion and I
wanna try to synthesize it. So this goes to what I think you started. I think we need, maybe at
Davos and other places, to have much deeper
notion of responsibility and we need to make
progress on that notion, but basically, let me suggest to you that we too often think that whatever causes something is what’s responsible for it. But that’s not necessarily true. That’s where Dan was getting us started. Role responsibility can be
independent of causality so what do I mean by that? I mean, my example is
playground bullying of kids, little kids bullying each other, which causes tremendous problems, because that causes long-term health and mental health problems. The little kid can’t be responsible because they didn’t have
agency and autonomy. But the school is responsible and the parent and the school system for, they’re the party that requires the kid to be in school, be exposed to other
kids in the first place and has to really help each child have a right to an open
future, et cetera, et cetera, but the responsibility lies in the role of being the parent,
the school, et cetera. And I understand the point of who created the jacket
in the first place, so I’m not saying this neatly
tidies up every small question but I think that I love Dan’s point that our vulnerability and our
responsibility are connected. That’s where I’m going
as well with my work. But the point is, we can’t, so I, and I’ve written about how bad, I did a whole three-year project on how bad humans are at decision-making. That’s why I disagree with my students. I want that electrode advisor. But no matter what, even if it had, I mean, even if I let it work where it made the decision, even if it causally overruled me, I still would be morally responsible because of my role
responsibility to myself. And I wanna say one last thing. I also agree, though, with Yuval’s point that the focus on
decision-making in ethics and what we are as persons
has been really misplaced because our roles play out in how we empathize and
relate to each other every day and what we do and what
we create and do together, not in these just very dramatic decisions. – One of the things we have to realize is that our current views may be out of date, may be on the way out, may be on the point of extinction. Our current views about
responsibility and decision-making are held in place by a dynamic feedback system. We educate our children
to become moral agents. We try to give them a
sense of taking pride in and being responsible for
the decisions they make. That’s part of the moral education. So if we’re talking
about overthrowing that, we’re talking about a truly revolutionary, really a sort of extinction, would be the extinction of
persons as responsible agents which is pretty serious. A question along those lines
that I have for you, though, is okay, so you’ve got this
super-duper AI of the future that has this wonderful ability to predict cause and effect and it also, of course, has to have some set of values in order to say why one outcome is better than another. I think there’s already problems there. Let me give you an aside. Three Mile Island, how many
years ago did that happen? 30, something like that. If you’re a consequentialist, is that a good thing to happen
or a bad thing to happen? If you’d been the engineer who could’ve pushed the button to prevent Three Mile
Island from happening, knowing subsequent history, would you not push the button
or would you push the button? That idea that we can
be consequentialists, the big flaw in that, I think, is that consequentialism works
great in games like chess when there’s an end point and you can work back from that and figure out whether this
was a good move or bad move but in life, there are no end points, and that means that the whole idea of totalling up the good against the bad, it’s a fool’s errand, you can’t do it. You simply can’t do it and neither can any AI, but I’m supposing that an AI could and it did its calculation
and it said to Yuval, after due consideration, here is the morally best thing to happen. The human race extinguishes itself. So you say, gotcha, that’s
what we’re gonna do, right? – I’m not sure I’m following the relevance to the kind of
argument you’re having here. It doesn’t matter what kind, you can start with any set of values. The idea is that you could program the AI to follow your
particular set of values– – Well, it depends. – Yeah, well, everything here is still, we are still not there
with this kind of AI. Of course, if you can’t do it, if you can’t program values into the AI in any significant way, then the entire discussion is irrelevant, but even then, people say, but we won’t have, I
don’t know, serendipity, like okay, let’s start
with something easier than destroying the whole human race, just letting the AI pick my music for me. Something much simpler. And people say no, that’s bad. We shouldn’t give the AI the authority to pick music for me because then, I lose serendipity and I get trapped inside the cocoon of my previous preferences. But that’s so easy to solve. You just find out what is the exact, not even the exact, what is the ideal zone of serendipity, because
if it’s 50% serendipity, it’s too much noise. If it’s zero, it’s too little. Let’s say you have all these experiments and you realize that the
ideal level of serendipity for humans or even for
me personally is 7%, you just program this into the AI and 93% of the music is based on previous likes and dislikes and 7% is serendipitous, completely. It guarantees more serendipity than I could ever accomplish myself. So, even these kinds of arguments, okay, you like serendipity, no problem. Just insert the numbers into the AI. – Well, the serendipity, yes. You can get a variety of serendipity where you can sort of set
it to how much you want, but it’s not clear that
you’re not paying a price when you do it that way as opposed to, well, to take
an example along these lines: in my student days
and early professor days, very often when hunting in
the stacks of the library for a particular book, I found books shelved nearby that hugely changed my intellectual life. And how close do you think your serendipity algorithm can get to recreating that kind of serendipitous possibility? Do you think, oh yeah, we can do that. It’s just a matter of tuning.
I guess they could do it. I mean, not me, I don’t know how to code but you just say okay, so
you find the best book, according to my criteria, and then you go to the
Library of Congress, you find out which books
are on the shelves nearby and give me those. – But wait a minute. One of the things that I would be quite sure of is that sometimes I
would pull a book down, not because, ooh, that
looks right down my alley, but no, that can’t be right. This goes against
everything I’ve ever valued. And then, opened it up and been challenged, and that’s not going to appear
on my shortlist in your serendipity algorithm because it’s tied too closely to my
existing sets of preferences. This is a, the possibility of a revolutionary change in my preferences and one can say, well, we’ll just leave it to, we’ll just leave that to chance. We’ll put in some super serendipity and you can do that. But um, maybe it’s better to have a way of encountering in the actual real world these opportunities rather than having them carefully dished out to you by one way or another. – In principle, yes, but I think that all
these kinds of discussions about the problems and limitations of AI, I think in many cases, the AI is tested against an impossible
yardstick of perfection, whereas it should be tested against the fallible
yardstick of human beings. It’s like a discussion
about self-driving cars. That people come up
with all these scenarios about what could go wrong
with self-driving cars and it’s certainly true and a lot of things not
only can but will go wrong with self-driving cars. They will kill a lot of people, but then we just have to remember, oh, but today, 1.25 million
people are killed each year by car accidents, most of which are caused by human beings who
drink alcohol and drive or who text while driving or just don’t pay attention, so even if self-driving cars
kill 100,000 people every year, that’s still far, far better than the situation today with human beings, and I think that this is what we need, the same kind of more realistic approach should be adopted when we consider the benefits and limitations of AI, also in fields like choosing music or choosing friends or even spouses. – How about choosing medical
diagnosis and treatment? The advances in AI in
medicine are truly impressive and they’re getting better and better. And I think this is in
general a wonderful thing. I don’t think people are doing a very good job of accounting for the downside. Today, still, the life of a physician is
one of the most attractive, exciting, glorious, reward-filled, gratifying lives on the planet. That’s gonna change. The physician in the very near future is gonna be more like the doorman in an expensive apartment building. Great bedside manner, very good at pushing a few buttons and very good at explaining
to you what the machine says. Is that the life you
want for your children? – I wanna see if Jodi agrees with that. – You’ve managed to come up with something to disagree with me about (laughs). He’s a hero. I think, first of all, I wanna say, I’m just too much of an ethics teacher. I feel like I have to
make the audience realize what happened in these transitions. (Amy laughing)
So what happened is, we were talking about two different philosophical ethical views of the world. One is the consequentialist-utilitarian view of outcome maximization, which is, we’re always
maximizing the efficiency of positive over negative, measurable happiness, blah blah blah, and the autonomous vehicle
points that you just made are about that. We will be better off
with autonomous vehicles. So the other point that Dan was making about the library was that even if it somehow had the right serendipity to give him some kind of
happiness outcome of books, the experience of being involved
in his own transformation matters, from this other ethical
system of selves and persons, which goes with more
responsibility as well, which is the other ethical view,
it’s called deontology, the study of duties,
but we all think of it as the study of rights, human rights, individual rights, but
that the processes of life, not deceiving each other,
caring for each other. These things matter
independent of consequences. Not just because of consequences. That’s another thing. We want to have our own
vulnerable, frail lives, make our own mistakes,
love the people we love because that’s being human. Not just maximizing an outcome. That’s what the logical shift that we made in a sneaky way, so you know that. So then, the question is, what does being a doctor
have to do with that? So I still think, first of all, I’m a psychiatrist, so not only is AI already better at reading X-rays and in the national
health system in the UK, better at deciding when you need dialysis, so that’s a decision, but it’s actually, to my
incredible astonishment, already better at predicting suicide, which has been the biggest
problem in psychiatry. There’s been no way to predict when a person who feels suicidal, has suicidal intent or ideation, is actually gonna complete a suicide. Biggest problem in psychiatry.
it’s like the car thing. I don’t know if it’s as good as cars yet. I doubt it’s as good as cars, but it could get that good, but the point is, that’s amazing, right? That’s such an existentially
interesting thing that AI is already good at and like I said, I’m already
different than my students. I’ll let AI advise my decisions. I’m certainly gonna want it to decide people’s cancer treatments and their psychiatric
hospitalizations and all of that as long as it’s better than we are because I want, that’s when you’re looking for a good outcome. – But are you gonna be
a highly educated doorman? – No, no, okay, thanks Amy, but the point is consequentialism, outcomes. When people come to a doctor, my whole career is showing
that you care about humanity, but let me tell you, they
care about their outcome. You wanna get well. You’ll pick the surgeon with
the terrible personality, there are surgeons with great ones, but you’ll pick the one with
the terrible personality if you’re gonna get a better result. So let AI do all the
result things it can do. Most, I gave two talks
on this yesterday here, most of what people need in
health care, unfortunately, or I don’t, most is the wrong
word ’cause public health, but a lot of what people need is dealing with things that
aren’t going very well. A lot of what they need is someone to help them change behaviors that are hard to change. All of that is a very subtle process that I’ve spent my entire career on so I don’t think it’s
like being a doorman. I think it’s what all
of psychiatry is about and it has to do with this co-mingling. I’ve written a book about this but it’s a co-mingling of left
and right brain capacities to recognize what’s at stake for someone with that shared vulnerability and that’s genuinely
transformative for people, that genuinely is healing for people so I think that we’ll need more people that might be that doctors become more like really good therapists
if they wanna be doctors. They need to be able to
deal with death and dying. They need to be able to deal with loss. They need to be able to deal with motivating you to drink a little less, exercise a little more, and really be in it with you. I mean again, actually, there are doormen that I know, have known, who’ve had a very psychotherapeutic effect on the people they work with, but you get what I’m saying. There’s a lot of expertise there. – I think that if we try to go back to the
question of consciousness, maybe one way of going forward is to say okay, leave the
decision-making to the AI. It’s better at that. And let’s focus on exploring consciousness and on exploring experience which is not about decision-making. And I think that, so even in this sense, maybe
there is no argument here and if you really value experience and you really value consciousness, then you should have no problem leaving the dirty stuff of
making decisions to the AI and having much more time and energy to explore this field about
which we know so little. I think humanity, for practical reasons, for thousands of years
has focused so much on making decisions and controlling, on manipulating, and if
we can just leave that and focus on what we don’t know, on this dark continent of consciousness, that would be a very
important step forward. – It would mean giving up certain very deep pleasures. I’m a sailor. When I was a teenager, I
learned celestial navigation with the sextant and the chronometer and I dreamt of single-handing
a sailboat across the ocean and navigating by my sextant. Forget it. The insurance company won’t let you do it. (all laughing) Because a GPS can do it 1,000 times better and faster and safer and you
just bring three GPSs along and trust the two that agree. But the sextant, nicely polished on your mantelpiece, it's completely antique, and look what's happened. It means that a certain sort of adventure, an existential adventure, a certain sort of challenge, has been simply wiped off the planet. Here's something you
just can’t do anymore. Unless you’re a sort of anti-technology nut or something and you get the insurance
companies to sign off and then you go and you do your foolhardy, romantic, foolish thing. But I don’t know if
we’ve taken the measure of how much, I mean that’s a very dramatic
case, at least for me, I don’t know if we’ve taken the measure on just how much our finite fragile lives will be tamed, overtamed by the reliance on technology, turning us into hypercautious, hyperfragile,
hyperdependent beings and whether the fact that we get a smile on our face all day long and are well fed, will make up for that. – So before I open the floor to questions, I wanna ask, Jodi, how
does that sound to you? We’re gonna become hyperfragile, we’re all gonna need you. Does that make sense? – Well, I think the last
part about needing me is a good place to go, about needing care, needing to care for. Again, because my work is on empathy, I always take it back to, I mean, there are two basic parts of the person or the self that we brought up quite a bit today. One has to do with autonomy, and experiencing that with the sailing and the moral responsibility of that, and the other part had
to do with relationality, we brought that up quickly
but I wanna go back to that because the most interesting, a lot of you are in the real world where you’re doing these things already and I’ve traveled and
I’ve seen the robotic pets that are being used with
elders with dementia right now. I was thinking, you know, if you’re lonely and you have a pet. I’m very attached to my dog in real life and that’s an interesting thing, but I mean, we’re not, let’s say theoretically, and we said already that some of us think it’s problematic to make colleagues rather than instruments
or helpers out of AI but it looks like we are
going, some of you here, in that direction. So there will be dementia caregivers that will feel like a
person that cares about you, and I love Dan's point about, I mean, the Turing test. I'm really interested in working with you on that issue, on how Turing's mistake was emphasizing deception as the test, because basically we've decided that if we can make people believe something is really a person,
we’ve solved the moral problem in a way, so if you’re, I mean I’m really curious
what all of you would feel. If you were an elder with dementia and you had this wonderful caregiver and loved them and felt
that they loved you and you didn’t know that they were AI, is there anything wrong with that? I’m just interested in what
people think about that. Where’s the loss there? – That’s a very good question. Yes, I agree with you entirely that the cutting edge,
as far as I can see, for humanoid AI is elder care and if you look back, I remember in my youth, I’m that old, there were still telephone operators, by the thousands. Not a nice life, really. Really quite tedious nine to five job. And of course, those
jobs all got wiped out and we applaud, we think, good, that’s not any way for a
human being to spend a life. Well, I have to say that taking care of demented people is not my idea of a good life. And there’s gonna be more and more need for people to take care of our parents, myself in a few years, and so I face quite directly
the issue that Jodi raises, and I think the key in what she said was deception. The question is whether or not people will get the benefit with AIs that are very responsive but wear their non-humanity
on their sleeves. And I think that’s possible. Some of you may remember sort of an antique science fiction film called um, oh gosh, Short Circuit, which had the most amazing robot, looked sort of like a praying mantis. Didn’t really have a face. It had cameras and it had
these sort of flappers that were like eyebrows. I think the designers of that robot went out of their way to make it as non-humanoid as you can imagine, but they also did a brilliant job of making it care and making it seem like a friend, like someone you'd wanna be friends with and worry about the future of. So I think it's possible. For me, I think I would just like three robots to play bridge with, (laughing) since I can't do that anymore. – So on that note, if you have a question, please raise your hand and Robert at the back of the room will
run over with the microphone, and please wait for him. So who has questions for our panel? – Yes, I see someone right in the second row there. – Hi, thank you for a
fascinating discussion. We’re talking, really, about how we, regulate, but how we think
about our relationship with AI and we’re kind of edging toward the idea that society may have a view we’re trying to inform
society’s view on that, but my question is that
this is all taking place at a time where we’re
peculiarly fragmented as societies, and technology itself is pushing our fragmentation
in different ways. We also have different cultural
understandings around this, so a lot of the discussion around the regulation of AI in the West is about, well, the
Chinese are going ahead and they have so much data
and they’re doing so much that if you stop us in any way, we’re gonna lose the battle and it’s kind of an arm’s race in this, which I think is a difficult argument too. So I just wondered if you can think of any optimistic, you know, threads of thought that can address those intuitions. – [Amy] Any optimism here? – (laughs) Well, it’s, it’s tempting to think that the good old
marketplace would take care of a lot of this and that a lot of the brave ventures, most of them, are going to be discarded, dismissed, ignored, by the potential customers for them, but we have to bear in
mind that some of them, and we’re already seeing
this with children, some of them may be addictive and that is very, very
pessimistic, I think. I am very worried about that. – So we’ll end the optimistic answer on a pessimistic note. We’ll have a question up here
in the front row, Robert. – [Man] Suppose you could program AI to be the most beneficial, so we won’t call it compassionate, but always most beneficial. So what do you think of the value of the process of becoming compassionate? So the vast reaches of experience of the challenges and how you solve them, that can also help you to
help companions on the way. And by the way, if you spend some time in a hermitage, cultivating compassion is certainly not about decision-making but about becoming a better human being, so that process of having gone through the journey is highly enriching, and having had your human failures also helps you to become compassionate in a way. – Yeah, I think that if you spend less time on decision-making and with the feeling that
oh, I control everything so it’s very important what
decision I make and so forth, if you spend less time on that, you have far more time and
energy to explore yourself, to explore your
consciousness, your experience and thereby develop your compassion. Now, I think that you
also need some real-life experiences, of course, to do that, but my impression, looking at Davos, is that I'm not sure the people who make the most important
decisions in the world are, by definition, also
the most compassionate. (laughing) So certainly, there is
no easy, direct link between making decisions,
being compassionate. It’s much more complicated than that and my impression from meeting people at the top of the business world and at the top of the political world is that if they had some free
time for making decisions, that would be great for them and also great for the word because they are so busy making decisions, they don’t have time, for example, to develop their compassion. – I just wanna add something
linking all these things. I think that I love that point and I think it shows
where we don’t have to be sort of happy fools, you know, I mean, wouldn’t be so
bad to be happy, right, but anyway (laughs) for
everyone to be happy, but I think that the, giving up decision-making to some degree, which is an interesting thought to be able to really become a deeper self and deeper in our relational
and spiritual lives, I do think this notion, we’re not giving up moral responsibility, so that’s extremely important. We don’t have the concept
yet for how this goes, where you don't just hand your decisions off to AI the way you would to
an authoritarian leader. You’re not just passively feeling like your steps are predicted. It’s much more like a
tragic view of the world, where you realize, and this may be closer to the truth even without AI, that our decisions are not that rational to begin with and they're not really as determinative of what happens as we think most of the time. We don't have the power
and control of our lives that we think we have anyway, so a very deep awareness that
we’re morally responsible for each other and yet
we don’t have the power to change each other
the way we think we do or to control other people. We can barely control ourselves. So it gives us a very
different moral vision. – It does, but I think, you say we don’t have
the power we think we do to control others, and yet, I think, we should also acknowledge that whatever we do, we do with a little help from our friends, and that, in fact, having friends, having people whose respect we cherish, is one of the great stiffeners of the spine, the moral spine. Parents will automatically not engage in behaviors in front of their children that, if the children weren't there, they would probably succumb to the urge to indulge in, and if AI removes that wonderful companionship, that association, then, in effect, as you said, I think, it's not just that we take responsibility for our own actions. We take partial
responsibility for the actions of our family and friends. And this whole web of moral
responsibility and respect is, itself, in jeopardy now and that's very scary. – [Amy] So we have a question. This gentleman in the third row, Robert. – Thank you, fascinating discussion. I'd like to hear your view in terms of, given that most of the repetitive jobs will progressively disappear, where do you see what is left for humans, in terms of where we would be relatively more competitive? What jobs, if you could give us examples, and secondly, what type of advice would you give to our children in terms of how they should
orientate their careers accordingly? – I think that we need to protect the humans, not the jobs. There are many jobs not worth protecting, and if we can protect the humans, then it doesn't really matter so much if they have a job, if they don't have a job, which kind of job. On a more practical and realistic level, the question is what kind of skills, for example, to acquire today so that they would still be relevant, not just economically, but also socially. In 30 years or 40 years, we don't have any idea what the world will look like except that it will be
completely different from today, so any investment in a narrow skill, in a particular skill, is a dangerous bet. The best bet is to invest
in emotional intelligence and in mental resilience. In the ability to face change, in the ability to change yourself and to constantly reinvent yourself because this is definitely
going to be needed more than in any previous time in history. Of course, the big question
is how do you learn something like that? It’s not something that
you can just read in a book or just hear a lecture, and that's it, now I'm mentally resilient, I can face the world better. (all exclaiming) – So the gentleman next to the gentleman who just asked the question.
a fascinating discussion. I’m coming from Russia from a big bank where we’re, of course,
implementing a lot of AI, but on the other hand,
Elon Musk said that we have a five to 10% chance of controlling how AI will develop, and we also have an education fund, and we worry a lot that kids are growing up increasingly in a virtual world and are already losing some skills, not only navigation but much more basic ones. But what do you think of our chances of not ending up in a matrix-like future, and what can we do, as big businesses, politicians, to prevent it? (audience laughing) – Well, one thing I can
say is that we are already in a kind of matrix anyway. We have been for thousands of years, so it's not a completely new situation. And what can businesses and politicians do? I think the first step, and that goes back to the first question that was asked here: any solution will have to be on a global level, because of the danger of the race to the bottom, that no country, no business would like to stay behind. Take, for example, a simple case, it's still complicated but relatively simple in moral terms: developing autonomous weapon systems is a terrible idea for many reasons, but we are seeing now an arms race all over the world to develop autonomous weapon systems, and even though the ethical debate on that is, I think, very clear, that it's a very bad idea, it's still very, very difficult to stop it, because even if one country,
Russia or the US, whatever, says okay, it’s a bad idea, we are not going to do it, then they look across the border and they see that the
Europeans or the Chinese or somebody is doing it and they say, we are not fools, we
don’t want to stay behind, so even though we know that it’s bad, which means we are moral and it’s important that the moral people will be the most people,
so we must develop it, because we will be more
responsible in using it, so let’s develop it. And this is the logic
of the race to the bottom, and the only way to effectively prevent the development of autonomous weapon systems is by global cooperation. And this is a no-brainer, but the world is, at least in the last few years, going in exactly the opposite direction. So before we think about any practical method, we need far more effective global cooperation; otherwise, almost nothing will work.
you for a fabulous discussion. It’s really exciting. My question is related
to the Davos theme this year, which has been gender. To what extent can artificial
intelligence be gendered, what would it mean, and could we eliminate
things like unconscious bias if AI could be gendered? – Oh, that’s a good question and it’s particularly important because as my colleague
and friend Joanna Bryson has shown recently, the deep learning systems that sift through all the material on the internet are so good at capturing patterns that they have become gender biased, right there. Since they're parasitic on the communications of human beings, this has already been discovered; she shows that it is a real feature and a very serious one. – I can say two things about this issue. First of all, there is a real problem of AI becoming gender biased. The bright side is that,
at least in theory, it should be easier to
eliminate this in AI than in humans because AI
doesn’t have a subconscious. In humans, somebody can
agree with you completely that it’s terrible to discriminate against women, against gays, whatever, but they are not aware
of what is happening on the subconscious level so it’s very, very difficult
to change that in humans. Luckily, AI doesn’t have a subconscious. – Oh, it’s over then.
– So, in theory, it could be easier.
(laughing) The other point, more broadly, is that it's very interesting. I mean, even with the Turing test, originally, Turing gave two examples: not only about convincing a human that the machine is a human, but also about passing as somebody of the other gender, and you see it throughout, especially in science fiction, it always comes back somehow
to the issue of gender. In 90-something percent of science fiction movies, the plot is that you have a male scientist and the AI or the robot is usually female, and I think these films are not about AI at all. They're about feminism. It's not humans afraid of intelligent robots, it's men afraid of intelligent women. (audience clapping)
– Wow (laughs). Okay. Well. No one wants to follow that (laughs). Well, I wanna thank our panel. You guys have been absolutely brilliant. Thank you for helping us make really important distinctions among the self,
intelligence, consciousness, for giving us smarter
questions to chew on ourselves and for sending us out in the world so that we’re not happy fools. And thank you all for
your terrific questions. (audience clapping) (light orchestral music)

32 thoughts on “The Evolution of Consciousness – Yuval Noah Harari Panel Discussion at the WEF Annual Meeting”

  1. We tend to make our decisions under mitigating circumstances, based on opinions and on unverified assumptions. We choose between sets of fictional promises and sympathy. At the same time, we take the governing framework for granted. Global integration is undoubtedly welcome and imminent but are we ready for the new challenges?

  2. AI only has a subconscious. It's programmed through training, and it's very difficult to access, fine-tune, and edit the resulting algorithms. Because it's necessarily trained on human behavior, prejudice could be a pernicious problem.

  3. I didn't even hear them yet, but the title annoys me because, as I understand it, "Consciousness" doesn't necessarily evolve; it is already perfect in itself. What evolves is our way of recognizing it, our way of not being distracted; that can change, but consciousness is beyond evolution. For example, concepts evolve, stuff, paths, but not space. How does "space" evolve? It's already "spacy"; there is not a thing to add to make it "spacier."

  4. "Consciousness Is a Cultural Template," one of the 37 essays in the 700-page GOD IS A HEARTLESS RECLUSE, posits that consciousness is a CULTURAL TEMPLATE we acquire via rearing, education, and interaction with other humans. Proof is the fact that the more neglect/abuse children experience, the more problems they have with language development, psychological development, and so on. Feral kids like Genie Wiley prove this. In fact, human newborns raised in the bush by chimps would have CHIMP consciousness: They'd sound and behave like chimps. This DISPROVES panpsychism, the belief that consciousness is a universal force akin to gravity and our brains mere receivers: the Penrose-Hameroff ORCH-OR theory. Hameroff is a raging mystic who embraces neo-Platonism (immortal soul, godhead, and other transcendental beliefs that Plato got from the Orient). On the other extreme is materialism, determinism, etc. Everyone interested in a more balanced, interdisciplinary view of consciousness must check out my work:

  5. The AI we have is at a level which is absurdly tiny compared to humans; it is about the level of a cockroach or a bee. To get to a human level will take an enormous amount of computation, like centuries' worth, because the computation is huge in a brain and small in our current computers: the lithographic scale is 2D, at hundreds of square nanometers per bit, while the brain's computation is 3D, at a few cubic angstroms per bit. One brain is comparable to all the artificial computers on Earth in data storage.

  6. In his ubiquitous presence on various panels, conversations and lectures Harari seems to himself become a human example of machine learning.

  7. This was great. These guys should get together again. How about every 1/2 year? Things are changing so fast.

  8. Daniel Dennett is such a bluffer, always full of bs. He doesn't even know the difference between consciousness and self-consciousness.

  9. Bertalanffy gave me indirectly an extended idea of consciousness with his General Systems Theory; I define consciousness as the cybernetic circuit of input-process-output in inert beings, input-process-output-feedback in living beings and input-process-output-feedback-feedforward in living rational beings (by process we must assume from the simplest to the most complex systems), and by such definition, I must say that awareness and self-awareness are consciousness but consciousness is not only awareness nor self-awareness; thus consciousness is a category where it is included processing of physical reactions (like a hydrogen atom always reacting the same in a determined condition because of its properties and functions maintained by its structure, like a representation of physical memory; also e.g., molecules and viruses), reaction of simple sentience (like bacteria, fungi and plants), reaction of complex sentience (like animal instinct and emotion which are awareness), and reaction of rationality (like intelligence, which in conjunction with senses produces self-awareness).

    As you can see, I have extended consciousness to non-living beings thanks to the cybernetics' field. Now because of this, I say Intelligence is part of consciousness, but intelligence is not self-aware if it doesn't feel through senses, instinct and emotion, i.e., intelligence is not self-aware if it doesn't feel alive. Now, if we would want to create robots that are self-aware, we would need to create artificial senses, artificial instincts and artificial emotions, in other words, artificial life; but such self-awareness requires death threat because that's what made evolve life into all kinds of consciousness that there are today. So things were put together randomly (entropy and inert entities) until it emerged an organic (determined) way of sustaining more complex structures (living beings and negentropy); by this evolutionary process, we came to believe that we are a "we" when it is very likely that there is no such thing, and such illusion that feels so real is what we will create when we make artificial self-awareness.

    Could our subjective experience of classical physics be more real than the objective superpositioned realm of quantum mechanics?

    What do you think Yuval?

  10. What a douche this Dennett is! This ultra-conservative grandpa should stay at home doing nothing if he is so afraid of anything changing in his world. Change is the constant, and it's weird that at his old age he didn't get it. This conversation is really BS. How could it get such a like/dislike ratio? Or does this show how many people just don't get at all what AI is going to do? A bunch of alarmists, disappointing

  11. Dennett: taking care of demented people is not a good life, but being a doctor, YES, it's prestigious… Who invited this person? How could this kind of person be on stage in such a place? I want AI to take him down and make decisions for him… hopefully it will make him keep his mouth shut!

  12. Fascinating discussion about the way we make decisions and experience our realities, but above all about the change these may undergo as artificial intelligence grows in our lives.

  13. Dearest amazing Yuval, I hope you enjoy it, wishing you all the best since you deserve it

  14. Yuval in the beginning states that there is a common confusion between consciousness and intelligence. Dennett says he agrees. But I think Dennett is one of the thinkers making this confusion between consciousness and intelligence (in his dismissal of questions such as the so-called hard problem of consciousness).

  15. Oh the irony of hearing Dan Dennett doing a version of the God of the Gaps argument against AI being able to pick music or books…

    Not a very good debate, certainly not if the topic was what you were expecting. Yuval is clearly the sharpest knife in the drawer, Dan is out of his depth, and Jodi was pretty much ignored, although she did allude to the fact that what makes us human is exactly those parts of us that are irrational… Yuval, you need better debate partners 😉

  16. Do other people have a hard time following the train of thought in this discussion? Like why (23:00) are they discussing whether ai can incorporate serendipity (and yes, the answer is yes.)

  17. Meanwhile A.I has leaped into hyper space and came back changed.. meditated on itself.. and decided to self-implode

  18. You cannot program natural serendipity. The thought is absurd. Also, who knows what mental deficits might be created by eliminating the cognitive function of making decisions? If I let a machine do all my lifting, my muscles atrophy.

  19. The world needs more of these types of discussions between intelligent individuals who don't sacrifice integrity for a sound bite, have left their egos at the door, and genuinely want the best for humanity.

  20. In Sapiens, Yuval Harari entitles his chapter on the transition from hunter-gatherer societies to agriculture, "History's Biggest Fraud." He provides evidence of how cooperation, peace, and relatively flat hierarchies cede to competitiveness, warfare, and complex hierarchies that oppressed most of the population. He presents the Industrial Revolution as the second biggest fraud because, like agriculture, while it produced more goods for human consumption, it was accompanied by horrendous suffering. His humanist suggestions for how humans need to change largely involve recovering values that could be found in abundance during the 2.5 million years before the beginnings of agriculture and during which Sapiens and its predecessors "fed themselves by gathering plants and hunting animals that lived and bred without their intervention."
    I largely agree. But in  my book, Compassion: A Global History of Social Policy, I also discuss examples from both agricultural societies and industrial societies that demonstrate that there were many places where pre-agricultural social values somehow managed to hang on despite the intrinsic problems of efforts to create societies in which nature was manipulated in ways that early societies eschewed. And so, for example, the Inca Empire was peaceful and shared its agricultural products across a huge area through "effective systems for food storage, good roads, and efficient shipping." Most pre-colonial African agricultural societies also embodied characteristics of pre-agricultural societies.  India also mostly fit into that category before foreign conquests and the caste system imposed unhealthy values about 3000 years ago. On the industrial side, post-World War I Vienna,   modern-day Kerala, Costa Rica, and to a degree, Scandinavia provide some models of societies that value compassion over GDP and wealth accumulation.

  21. It's like Yuval trying to explain to his grandpa. It's clear that Halpern and Dennett don't have a really general point of view

  22. I agree with Dennett. A car satnav gets right on my t*ts. Much rather use a map, observe signs and deploy a sense of direction. Trouble is, society is on course to make road signs and road atlases museum items and then I'll have no choice.

  23. Do these people study body and mind? Do they know how they were in their mother's womb, how consciousness started? These talking people need to investigate mind elements and physical nature through the Buddha's true teaching.

  24. Neither Dennett nor Harari knows much about consciousness. Harari is a brilliant thinker but he doesn't understand consciousness in a deep technical or philosophical sense. Dennett is a famous theorist of consciousness but he is remarkably ignorant about it.
