Yuval Noah Harari on AI and Human Evolution | WSJ Leadership Institute
Yuval Noah Harari, historian and bestselling author of books like "Sapiens: A Brief History of Humankind" and "Homo Deus," joins the WSJ Leadership Institute to examine, through his work on human evolution, ethics, and power, how AI is reshaping global institutions, executive decision-making, and our very concept of intelligence.
- So you studied the military history of the Middle East.
Did you ever expect to now be the foremost expert on all things AI and whether we are doomed as humanity? - I'm not the foremost expert, but I know I didn't expect to be talking about AI with such an audience.
As you said, I was originally a specialist in medieval military history, but the Middle Ages are coming back in many ways.
- Okay.
Okay, we're gonna get into that, and feel free to get into that as I ask you this first question. You call artificial intelligence, or alien intelligence as you refer to it throughout your writing, "the rise of a new species that could replace, replace, Homo sapiens."
- Mm-hmm, yeah.
- "Sapiens," your prior book.
What does it mean to be human right now? - To be aware that, for the first time, we have real competition on the planet.
We have been the most intelligent species by far for tens of thousands of years.
And this is how we got from being an insignificant ape in the corner of Africa to being the absolute rulers of the planet and of the ecosystem.
And now, we are creating something that could compete with us in the very near future.
The most important thing to know about AI is that it is not a tool like all previous human inventions, it is an agent.
An agent in the sense that it can make decisions independently of us, it can invent new ideas, it can learn and change by itself.
All previous human inventions, you know, whether it's the printing press or the atom bomb, they are tools that empower us.
- They needed us.
- They need us, because a printing press cannot write books by itself, and it cannot decide which books to print.
An atom bomb cannot invent the next, more powerful bomb.
And an atom bomb cannot decide what to attack.
An AI weapon can decide by itself... - That's right.
- Which target to attack, and design the next generation of weapons by itself.
- So this is why you argue that AI is potentially, or it sounds like you're saying it is, by the way, now, but you've written in the past, potentially more momentous than the invention of the telegraph, the printing press, even writing.
But the way you talk about it in Nexus is that it is a baby.
Because it learns from us.
And therefore, your argument is that we, especially the powerful leaders in this room, have a lot of responsibility, because how we act is how AI will be.
You cannot expect to lie and cheat, and have benevolent AI.
- Yeah.
- Explain that.
- Yeah, there is a big discussion around the world about AI alignment.
Okay, we are creating these increasingly super intelligent, very powerful new agents.
How do we make sure that these agents remain aligned with human goals and with the benefit of humanity, that they do what is good for us? And so there is a lot of research and a lot of effort focused on the idea that if we can design these AIs in a certain way, if we can teach them certain principles, if we can code into them certain goals, then we'll be safe.
But there are two main problems with this approach.
First of all, the very definition of AI is that it can learn and change by itself.
If you have a machine that can act automatically, but only following pre-programmed orders, then you know it's a coffee machine.
It can do something automatically, produce coffee, but it cannot decide or invent anything by itself.
It's not an AI.
So when you design an AI, by definition, this thing is going to do all kinds of things which you cannot anticipate.
If you can anticipate everything it'll do, it is by definition not an AI.
So that's one problem.
The other, even bigger problem is that we can think about AI, like you said, like a baby or a child, and you can educate a child to the best of your ability and he or she will still surprise you.
For better or worse.
No matter how much you invest in their education, they are independent agents; they might eventually do something which will surprise you and even horrify you.
The other thing is, everybody who has any knowledge of education knows that in the education of children, it matters far less what you tell them- - [Poppy] Than what you do.
- It matters far more what you do.
If you tell your kids, "Don't lie," and your kids watch you lying to other people, they will copy your behavior, not your instructions.
Now, if we have these big projects to educate the AIs not to lie, but the AIs are given access to the world, and they watch how humans behave, and they see some of the most powerful humans on the planet, including their parents, lying, the AI will copy the behavior.
People who think, "I can run, say, this huge AI corporation, and while I'm lying, I will teach my AIs not to lie," it will not work.
It will copy your behavior.
- One of your central arguments is that we, as a society writ large, have focused way too much on power.
And you also make the argument, that some disagree with or call counterintuitive, that more information is not necessarily great for democracies, because, you say, all information is not true information, et cetera.
- Most information is not the truth.
- Right.
- There's a huge confusion between information and truth.
Yes, some information is true.
And you get information to get to know the truth.
But generally, the truth is a very, very small subset of all the information in the universe.
- So if we are focusing too much on power, and that's a very important distinction you make, this is why, you say, we have largely failed as people to answer the actual biggest questions of life.
We can be more productive, we can be richer, we can have stronger militaries.
But many of us can't answer the questions, as you write: who are we? What should we aspire to? And what is a good life? Essentially, we are accumulating power, not wisdom.
- [Noah] Yeah.
- How can we change it? - That's the big problem of human history, you know, for thousands of years.
We are extremely good at acquiring more power.
And this is how we transform ourselves from an insignificant ape in East Africa into the ruler of the world.
We can fly to the moon, we can split the atom, but we don't seem to be significantly happier than we were in the Stone Age.
We don't know how to translate power into happiness.
Again, you look at the most powerful people on the planet; they don't seem to be the happiest people on the planet.
(crowd laughing) So there is a very deep- - [Poppy] Do you wanna ask them? There are many of them- - I'm not necessarily referring to the people in this room.
I want to clarify, I don't think there is a contradiction between power and happiness.
- Okay.
- I don't think that as you acquire more power, you necessarily become miserable, no.
- Good.
- But there is no...
It can go together, but it doesn't necessarily go together.
And as a species, we have not been particularly good at translating power into happiness, or even into knowledge and wisdom.
Again, we tend to confuse intelligence with knowledge, and with truth.
But we are the most intelligent species on the planet.
We are also the most delusional species on the planet.
- [Poppy] And destructive, you argue.
- And self-destructive, yeah.
The kind of things that people believe; no other animal on the planet will believe such nonsense.
- Like?
- Except... if I look at my own country, like, you would not find any animal that believes that if you go and kill other members of your species, you will be rewarded after death by entering paradise.
No chimpanzee will believe that.
No horse would believe that.
No wolf will believe that.
Millions of people believe that.
And they believe it so strongly that they actually go and kill people, in the expectation that as a result, they will be rewarded in paradise with whatever.
- We took a really interesting poll this morning, asking the leaders in this room how consequential they think AI has been so far in their business.
And actually, only a few... in the businesses they lead, only a small portion said significantly; most said moderately or not at all.
Can you speak to them as if we were sitting here 36 months from now? Is there any world in which AI doesn't have a significant impact on their business? - It depends on their business, but in most fields...
Again, the question is one of timescale.
You know, I've been talking to a lot of the people who lead the AI revolution, and many of them say, "You know, we are already in the middle of the AI revolution. We still haven't seen anything really major."
And that's just the difference between how historians view time, and how CEOs and entrepreneurs view time.
For an entrepreneur, two years is a long time.
For historians, it's nothing.
It's like, imagine that we are now sitting in London and the year is 1835.
The first railway was opened between Manchester and Liverpool five years ago.
And we have now this conference in London in 1835, and people are saying, "You know, all this talk about railways changing the world, the industrial revolution, this is nonsense.
We have had railways for ages, five years." And look, okay, so there are some changes; people now travel with the trains.
They move coal around more easily, but nothing major happened, because there is a time lag between the invention of the technology and the moment when you see the actual social and political consequences.
So we now know that the industrial revolution and trains, they completely transformed everything.
Geopolitics, the way people fight wars, the economy, family structure; but it just took more than five years.
The same is likely to happen with AI.
In all fields, from the obvious to the less obvious.
Like, I think that one of the first fields that will see major changes is finance.
- Okay.
- That AI is going very quickly to take over the financial system.
- [Poppy] We have some bankers in the room, so tell us more.
- Yeah.
Because finance is the ideal playing ground for AI.
It's purely an informational realm.
If you want to have an AI self-driving vehicle on the road, which has been promised again and again, and we are still not there, the problem- - Waymo.
Waymo.
- Yeah, but you go around London, you don't see these tens of thousands of self-driving vehicles yet.
I just passed my first driving lesson.
- Congratulations.
- Here in the UK, and you still need to learn how to drive.
- Okay.
- So.
- But finance- - The problem is that for driving, you need to deal with the messy physical world of pedestrians, and holes in the road, and whatever.
But in finance, it's only information in, information out.
It's much easier for an AI to master that.
And what happens to finance once AIs, for instance, start inventing new financial devices that the human brain is simply incapable of dealing with, because it's mathematically too complex?
We are going to see AI changing even things like religion.
- How? - At least religions which are based on texts, like Judaism, Islam, Christianity; they give ultimate authority to the text.
Not to any human being.
Now, until today, humans were nevertheless the main authority in these religions, because the texts could not speak; the Bible could not interpret itself, the Bible could not answer your questions, so you needed a human being as an intermediary.
What happens when you have an AI text that can speak for itself? No Jewish rabbi can know all the texts of Judaism, because there are too many of them.
For the first time in history, there is something on the planet that is able to remember every single word in every writing of every rabbi in the last 2,000 years, and talk back to you, and explain and defend its views.
So, I have friends who are now working on building religious AIs that are meant to either augment or replace human religious leaders, especially in text-based religions.
If the religion is not based on texts, if it doesn't give authority to a text, it's a different story.
- Okay, but I go...
And we're going to questions next; I'm gonna come first to Mattia Mor, if you wanna raise your hand, and we'll get you a microphone.
But I go and talk to my pastor at our church when I am going through a difficult time.
I am never going to talk to ChatGPT like that.
- Well, it's an individual choice.
- But you think some will.
- I know that already millions of people do it.
I mean, I know people who now go to AIs to get psychological counseling.
That AI is their best friend.
Like teenagers: something happened in school, they consult; they tell the AI what happened and ask for advice about relationships.
- So let me get back to...
And then the questions are next.
Let me get back to what you've said, though, about replacing jobs.
This is really important.
And you write and talk a lot about... you're worried about what could become a useless class.
That's what you've talked about.
And it was five or six years ago, I interviewed Google CEO Sundar Pichai in Oklahoma, in one of their data centers.
We need many, many more of them now to power AI.
And I remember asking him about AI; it was something people were talking about a lot then.
And he essentially told me, and I'm paraphrasing here, that if AI proves to be, quote, in his words, "very disruptive" to too many American jobs, they would be open to slowing it down.
Okay.
I'm talking about this, not just Google, but these companies writ large right now: if we are headed for what you're talking about, a potential useless class, many, interestingly, white-collar jobs, so it's getting a little bit more attention perhaps than when it replaced blue-collar jobs, which is a whole issue in and of itself, what do we do to make sure society not only survives but thrives? - I want to emphasize that AI has enormous positive potential as well as dangerous potential.
And I don't believe in historical or in technological determinism.
You can use the same technology to create completely different kinds of societies.
We saw it in the 20th century, that we used exactly the same technology to build communist totalitarian regimes and liberal democracies.
- That's right.
- It's the same with AI.
We have a lot of choices about what to do with it, provided, again, we remember that for the first time we are dealing with agents and not tools.
So it makes it much more complicated.
But we still do have most of the agency in our hands.
And on the question of how we develop the technology, and even more importantly how we deploy it, we can make a lot of choices there.
- We have agency in how we move forward.
We don't have a choice that it has come.
We have power in how we use it and go through it.
- Absolutely.
The main problem is that now the companies and countries that lead the AI revolution have been locked into an arms-race situation.
So even if they know that it would be better to slow down, to invest more in safety, to be careful about this or that potential development, they're constantly afraid that "if we slow down and they don't slow down, they will take over the world."
- So let's get to some questions here.
Mattia Mor with Emotion Network.
Hi.
Yes, right here.
- Hi, Noah.
So congratulations, because, I mean, your books are really eye-opening, and you gave really a lot of answers through your books and also today.
We as a company have a conference called Tech Emotion, because we strongly believe in the power of mixing technology and innovation with emotion, creativity, and culture.
And what you're saying is very much a mix of this.
So I think it's really interesting, what you're saying about the effect on religion too, the effect on the soul of the people.
And this is also related to what she was saying about the purpose of people's lives, which is going to be destroyed or changed a lot by AI, by all these innovations.
So how do you think it's possible to get a future where people are going to be more satisfied, and more happy, and also find more purpose in what they do, given the difficulties of this changing world? So I know this is a big question, but I think it is the most important thing in our life, much more than business, more than anything else.
- Yeah, it's a very big subject.
The most important thing... I only have time to talk about one thing.
So the most important thing is that we need to solve our own human problems instead of relying on the AI to do it for us.
And the key problem is the problem of trust and cooperation.
At the present moment, trust is collapsing all over the world, both between countries and within societies.
And the hope is that, okay, humans can no longer trust each other, so the international system, and the trade system, and everything is collapsing, but the AI will save us.
No, it will not.
In a world in which humans compete with each other ferociously and cannot trust each other, the AI produced by such a world will be a ferocious, competitive, untrustworthy AI.
It's not possible for humans, as they engage in this ferocious competition, to create benevolent, trustworthy AI.
It will just not happen.
So if you think about it, it's just a question of priority.
We have now this big human trust problem, and we have the issue of how do we develop AI. Too many people think, "Okay, let's first solve the how-do-we-develop-AI problem, and then this will solve the human trust problem."
It'll not work.
We need to get our priorities the other way.
First, solve the human trust problem, then together we can create benevolent AI.
Of course, this is not what is happening right now in the world.
- [Poppy] Do we have one more? Yes, right there.
- [Guest] Thanks very much.
Well, quick question from me.
So, you know, in human history there have been organizing principles, and you write about that so much in your books.
And there have been, in some senses, at least geographically, monolithic organizing principles, like religion and the church.
- Yeah.
- Was one of those.
- But when we talk about AI, we're not talking about something that is monolithic, right? There is no "the AI"; this is really, effectively, going to be multiple plethoras of AIs manifesting themselves.
- Absolutely.
- And in that context, you know, when you describe AI replacing religions in some sense, I think the real question for me is: when you have no single organizing principle, there is no "the AI" that gets developed with any kind of intent, whether that intent is benevolent or otherwise, and there are all of these competing AIs that are effectively evolving fast.
What does that world look like? - Hmm.
Now, that's a very, very important point.
I mean, the AI will not be one big AI.
We are talking about potentially millions or billions of new AI agents with different characteristics, again, produced by different companies, different countries.
Everywhere.
In the military, in the financial system, in the religious system; so you'll have a lot of religious AIs competing with each other over which AI will be the authoritative AI rabbi for which currents of Judaism.
And the same in Islam, and the same in Hinduism, in Buddhism, and so forth.
So you'll have competition there.
And in the financial system.
And we just have no idea what the outcome will be.
We have thousands of years of experience with human societies, with what happens when millions of humans compete for economic power, for religious authority.
It's very complex, but we at least have some experience in how these things develop.
We have zero experience of what happens in AI societies, when millions of AIs compete with each other. We just don't know.
Now this is not something you can simulate in the AI labs.
If OpenAI, for instance, wants to check the safety or the potential outcome of its latest AI model, it cannot simulate history in a laboratory.
It can check for all kinds of failures in the system, but it cannot tell in advance what happens when you have millions of copies of these AIs in the big world outside, developing in unanticipated ways, interacting with each other, and with billions of human beings.
So in a way, it's the biggest social experiment in human history.
We are all part of it, and nobody has any idea how it will develop.
You know, one analogy to keep in mind: we now have this immigration crisis in the US, in Europe, elsewhere.
Lots of people worried about immigrants.
Why are people worried about immigrants? There are three main things that come to people's minds.
They will take our jobs; they come with different cultural ideas, they will change our culture.
They may have political agendas; they might try to take over the country politically.
These are the three main things that people keep coming back to.
Now, you can think about the AI revolution as simply a wave of immigration, of millions and billions of AI immigrants that will take people's jobs, that have very different cultural ideas, and that might try to gain some kind of political power.
And these AI immigrants, these digital immigrants, they don't need visas; they don't cross the sea in some rickety boat in the middle of the night.
They come at the speed of light.
And I look, for instance, at far-right parties in Europe.
And they talk so much about the human immigrants, sometimes with justification, sometimes without justification.
They hardly talk at all about the wave of digital immigrants that is coming into Europe.
And I think, if they care about the sovereignty of their country, if they care about the economic and cultural future of their country, they should be far more worried about the digital immigrants than about the human immigrants.
Video Summary
1. The speaker was originally a specialist in medieval military history, but has now become an expert voice on AI.
2. AI is not a tool but an agent capable of making decisions independently.
3. Because AI can learn and decide by itself, its risks are significant.
4. If AI is not designed carefully, it can behave in unanticipated ways.
5. AI can learn like a child, but, like a child, it can also act in unexpected ways.
6. In educating AIs, as with children, what you do matters more than what you say.
7. If AIs watch humans lying, they may copy that behavior.
8. Information and truth are different; most information is not the truth.
9. Humans are good at accumulating power, but fall short on happiness and wisdom.
10. Power and happiness do not necessarily go together; we are accumulating power, not wisdom.
11. Humans are the most intelligent species, yet also the most delusional.
12. Many people justify violence through religious belief.
13. There is concern that AI may replace human jobs and create a "useless class".
14. Technology cuts both ways; used rightly, AI can help build a better world.