What should we be doing in the face of the rise of a new sort of intelligence? Can there be such a thing as an ethical AI? And in what cases is it best to think of AI as a baby?
Yuval Noah Harari joins Poppy Harlow at the @WSJNews Leadership Institute to examine the role of AI in business and decision making, how the AI revolution is like the Industrial Revolution, and what such revolutions mean for our jobs.
Recorded in June 2025 as part of the WSJ's CEO Council in London.
#YuvalNoahHarari #PoppyHarlow #WSJ #WallStreetJournal #WSJCEOCouncil #CEOCouncil #AI #Future
So you studied the military history of the Middle East.
Did you ever expect to now be the foremost expert on all things AI and whether we are doomed as humanity? I'm not the foremost expert, but no, I didn't expect to be talking about AI with such an audience.
As you said, I was originally a specialist in medieval military history, but the Middle Ages are coming back in many ways.
Okay.
We're going to get into that, and feel free to get into it as I ask you this first question.
You call artificial intelligence, or alien intelligence as you refer to it throughout your writing, the rise of a new species that could replace Homo sapiens.
Yeah.
Sapiens, your prior book.
What does it mean to be human right now? To be aware that, for the first time, we have real competition on the planet. We have been the most intelligent species by far for tens of thousands of years, and this is how we got from being an insignificant ape in a corner of Africa to being the absolute rulers of the planet and of the ecosystem.
And now we are creating something that could compete with us in the very near future.
Mhm.
The most important thing to know about AI is that it is not a tool like all previous human inventions.
It is an agent.
An agent in the sense that it can make decisions independently of us.
It can invent new ideas.
It can learn and change by itself.
All previous human inventions, you know, whether a printing press or the atom bomb, they are tools that empower us.
They needed us.
They need us because a printing press cannot write books by itself.
And it cannot decide which books to print.
An atom bomb cannot invent the next more powerful bomb.
And an atom bomb cannot decide what to attack.
An AI weapon can decide by itself which target to attack and design the next generation of weapons by itself.
So this is why you argue that AI is potentially, or it sounds like you're now saying is, more momentous than the invention of the telegraph, the printing press, even writing.
But the way you talk about it in Nexus is that it is a baby.
Yeah, because it learns from us.
And therefore your argument is that we, especially the powerful leaders in this room, have a lot of responsibility, because how we act is how AI will be.
You cannot expect to lie and cheat and have benevolent AI.
Yeah, explain that.
Yeah, there is a big discussion around the world about AI alignment.
Okay, we are creating these increasingly super intelligent, very powerful new agents.
How do we make sure that these agents remain aligned with human goals and with the benefit of humanity, that they do what is good for us? So there is a lot of research and a lot of effort focused on the idea that if we can design these AIs in a certain way, if we can teach them certain principles, if we can code certain goals into them, then we will be safe.
But there are two main problems with this approach.
First of all, again the very definition of AI is that it can learn and change by itself.
If you have a machine that can act automatically but only following pre-programmed orders, then you know it's a coffee machine.
It can do something automatically, produce coffee, but it cannot decide or invent anything by itself.
It's not an AI.
So when you design an AI, by definition, this thing is going to do all kinds of things which you cannot anticipate.
If you can anticipate everything it will do, it is by definition not an AI.
So that's one problem.
The other, even bigger problem is that we can think about AI, as you said, like a baby or a child. You can educate a child to the best of your ability, and he or she will still surprise you, for better or worse.
No matter how much you invest in their education, they are independent agents.
They might eventually do something that will surprise you and even horrify you.
The other thing is, everybody who has any knowledge of education knows that in the education of children, what you tell them matters far less. What you do matters far more.
Yeah.
If you tell your kids not to lie, and your kids watch you lying to other people, they will copy your behavior, not your instructions.
Now we have these big projects to educate the AIs not to lie, but the AIs are given access to the world, and they watch how humans behave, and they see some of the most powerful humans on the planet, including their parents, lying.
The AI will copy the behavior.
People who think "I can run this huge AI corporation, and while I'm lying I will teach my AIs not to lie": it will not work. The AI will copy your behavior.
One of your central arguments is that we as a society at large have focused way too much on power. And you also make the argument, which some disagree with or call counterintuitive, that more information is not necessarily good for democracies, because you say not all information is true information.
Most information is not the truth.
Right? There's a huge confusion between information and truth.
Yes, some information is true, and you get information to get to know the truth. But generally, the truth is a very, very small subset of all the information in the universe.
So we are focusing too much on power, and that's a very important distinction. You say this is why we as people have largely failed to answer the biggest questions of life. We can be more productive, we can be richer, we can have stronger militaries, but many of us can't answer the questions, as you write: Who are we? What should we aspire to? And what is a good life? Essentially, we are accumulating power, not wisdom.
Yeah.
How can we change it? That's the big problem of human history. You know, for thousands of years, we have been extremely good at acquiring more power.
Again, this is how we transform ourselves from an insignificant ape in East Africa into the ruler of the world.
We can fly to the moon.
We can split the atom.
But we don't seem to be significantly happier than we were in the stone age.
Uh, we don't know how to translate power into happiness.
Again, you look at the most powerful people on the planet; they don't seem to be the happiest people on the planet.
So there is a very...
Do you want to ask them? There are many of them.
I'm not necessarily referring to the people in this room. I want to clarify.
I don't think there is a contradiction, okay, between power and happiness.
I don't think that as you acquire more power, you necessarily become miserable.
No, but they can go together.
But it doesn't necessarily go together.
And as a species, we have not been particularly good at translating power into happiness, or even into knowledge and wisdom. Again, we tend to confuse intelligence with knowledge and with truth.
But um we are the most intelligent species on the planet.
We are also the most delusional species.
Destructive, you argue, and self-destructive.
Yeah.
The kinds of things that people believe! No other animal on the planet would believe such nonsense. If I look at my own country, for example: you would not find any animal that believes that if you go and kill other members of your species, you will be rewarded after death by entering paradise. No chimpanzee will believe that.
No horse would believe that.
No wolf will believe that.
Millions of people believe that.
And they believe it so strongly that they actually go and kill people in the expectation that as a result they will be rewarded in paradise with whatever.
We took a really interesting poll this morning asking the leaders in this room how consequential they think AI has been so far in the businesses they lead. And actually, only a small portion said significantly. For most, it was moderately or not at all.
Yeah.
Can you speak to them as if we were sitting here 36 months from now? Is there any world in which AI doesn't have a significant impact on their business? It depends on their business. But in most fields, again, the question is one of time scale.
You know, I've been talking to a lot of the people who lead the AI revolution and many of them say, you know, we are already in the middle of the AI revolution.
We still haven't seen anything really major.
And that's just the difference between how historians view time and how CEOs and entrepreneurs view time.
For an entrepreneur, two years is a long time.
For historians, it's nothing.
It's like imagine that we are now sitting in London and the year is 1835.
The first railway opened between Manchester and Liverpool five years ago, and now we have this conference in London in 1835, and people are saying: you know, all this talk about railways changing the world, the Industrial Revolution, this is nonsense.
We have had railways for ages, five years.
And look, okay, so there are some changes: people now travel with the trains, or they move coal around more easily.
But nothing major happened because there is a time lag between the invention of a technology and the moment when you see the actual social and political consequences.
Yeah.
So we now know that the Industrial Revolution and trains completely transformed everything: geopolitics, the way people fight wars, the economy, family structure. But it just took more than five years. The same is likely to happen with AI, in all fields, from the obvious to the less obvious. I think one of the first fields where we'll see major changes is finance.
Okay.
AI is going to take over the financial system very quickly.
We have some bankers in the room.
So tell us more.
Because finance is the ideal playing ground for AI. It's a purely informational realm. If you want to have AI self-driving vehicles on the road, which have been promised again and again, we are still not there.
The problem is...
Waymo.
Yeah. But you go around London, and you don't see these tens of thousands of self-driving vehicles yet.
I just passed my first driving lesson.
Congratulations.
Okay.
And you still need to learn how to drive.
Okay.
So the problem is that for driving, you need to deal with the messy physical world of pedestrians and holes in the roads and whatever.
But in finance, it's only information in, information out.
It's much easier for an AI to master that.
And what happens to finance once AIs, for instance, start inventing new financial devices that the human brain is simply incapable of dealing with because they are mathematically too complex?
We are going to see AI changing even things like religion.
How? At least religions that are based on texts, like Judaism, Islam, and Christianity, give ultimate authority to the text.
Yeah.
Not to any human being.
Now, until today, humans were nevertheless the main authority in these religions, because the texts could not speak.
The Bible could not interpret itself.
The Bible could not answer your questions.
So you needed a human being as an intermediary.
What happens when you have an AI, a text that can speak for itself? No Jewish rabbi can know all the texts of Judaism, because there are too many of them.
For the first time in history, there is something on the planet that is able to remember every single word in every writing of every rabbi in the last 2,000 years and talk back to you and explain and defend its views.
So I have friends who are now working on building religious AIs that are meant either to augment or to replace human religious leaders, especially in text-based religions. If the religion is not based on texts, if it doesn't give authority to a text, it's a different story.
Okay.
We're going to questions next; I'm going to come first to Mattia Mor, so if you want to raise your hand, we'll get you a microphone. But I go and talk to my pastor at our church when I am going through a difficult time.
I am never going to talk to ChatGPT like that.
Mhm.
It's an individual choice.
The question is, do you think some will? I know that already millions of people do it. I mean, I know people who now go to AIs to get psychological counseling.
Yes.
That AI is their best friend.
Like teenagers: something happened in school, and they consult with the AI. They tell the AI what happened and ask for advice about relationships.
So let me get back, and then the question is next, to what you've said about replacing jobs. This is really important. You write and talk a lot about what you're worried about: what you've called a useless class. It was five or six years ago that I interviewed Google CEO Sundar Pichai in Oklahoma at one of their data centers.
We need many, many more of them now to power AI.
And I remember asking him about AI.
People were talking about it a lot then, and he essentially told me, and I'm paraphrasing here, that if AI proves to be, quote, in his words, very disruptive to too many American jobs, they would be open to slowing it down.
Okay.
I'm talking about this not just with Google, but with these companies writ large right now.
Mhm.
If we are headed for what you're talking about, a potential useless class, many of them, interestingly, white-collar jobs, so it's getting a little bit more attention, perhaps, than when automation replaced blue-collar jobs, which is a whole issue in and of itself: what do we do to make sure we as a society not only survive but thrive?
I want to emphasize that AI has enormous positive potential as well as dangerous potential.
And I don't believe in historical or in technological determinism.
You can use the same technology to create completely different kinds of societies.
We saw it in the 20th century: exactly the same technology was used to build communist totalitarian regimes and liberal democracies.
That's right.
It's the same with AI.
We have a lot of choices about what to do with it.
Provided, again, we remember that for the first time we are dealing with agents and not tools. It makes things much more complicated, but most of the agency is still in our hands, in the question of how we develop the technology and, even more importantly, how we deploy it. We can make a lot of choices there.
We have agency in how we move forward.
We don't have a choice about the fact that it has come.
Yeah, we have power in how we use it and go through it.
Absolutely.
The main problem is that the companies and countries that lead the AI revolution have now been locked into an arms-race situation. So even if they know that it would be better to slow down, to invest more in safety, to be careful about this or that potential development, they are constantly afraid that if they slow down and their competitors don't, the competitors will take over the world.
So let's get to some questions here. Mattia Mor with Emotion Network.
Hi.
Yes, right here.
Hi.
So, congratulations, because your books are really eye-opening, and you gave a lot of answers through your books and also today. As a company, we have a conference called Tech Emotion, because we strongly believe in the power of mixing technology and innovation with emotion, creativity, and culture, and there is a lot of a mix of this in what you're saying. So I think what you are saying about the effect on religion, on the soul of the people, is really interesting, and this is also related to what she was saying about the purpose of people's lives, which is going to be destroyed or changed a lot by AI, by all these innovations. So how do you think it's possible to get to a future where people are going to be more satisfied and happier, and also find more purpose in what they do, given the difficulties of this changing world? I know this is a big question, but I think it is the most important thing in our lives, much more than business, more than anything else.
Yeah.
So it's a very big subject.
I only have time to talk about one thing.
So the most important thing is that we need to solve our own human problems instead of relying on the AI to do it for us.
And the key problem is the problem of trust and cooperation.
At the present moment, trust is collapsing all over the world, both between countries and within societies.
And the hope that, okay, humans can no longer trust each other, so the international system and the trade system and everything is collapsing, but the AI will save us? No, it will not.
In a world in which humans compete with each other ferociously and cannot trust each other, the AI produced by such a world will be a ferocious, competitive, untrustworthy AI.
It's not possible for humans, while they are engaged in this ferocious competition, to create benevolent, trustworthy AI.
It will just not happen.
If you think about it, it's just a question of priority. We now have this big human trust problem, and we have the issue of how we develop AI. Too many people think: okay, let's first solve the problem of how we develop AI, and then this will solve the human trust problem.
It will not work.
We need to get our priorities the other way around: first solve the human trust problem, then together we can create benevolent AI.
Of course this is not what is happening right now in the world.
Do we have one more? Yes.
Right here.
Thanks very much.
A quick question from me.
So, you know, in human history there have been organizing principles, and you write about that so much in your books, and there have been, in some senses at least geographically, monolithic organizing principles, like religion; the church was one of those. But when we talk about AI, we're not talking about something that is monolithic, right? There is no "the AI"; this is really, effectively, going to be multiple plethoras of AIs manifesting themselves.
Absolutely.
And in that context, when you describe AI as replacing religion in some sense, I think the real question for me is: when you have no single organizing principle, there is no "the AI" that gets developed with any kind of intent, whether that intent is benevolent or otherwise, and there are all of these competing AIs that are effectively evolving fast. What does that world look like?
That's a very, very important point.
I mean the AI will not be one big AI.
We are talking about potentially millions or billions of new AI agents with different characteristics, produced by different companies and different countries, everywhere: in the military, in the financial system, in the religious system.
So you'll have a lot of religious AIs competing with each other over which AI will be the authoritative AI rabbi for which currents of Judaism, and the same in Islam, and the same in Hinduism and in Buddhism and so forth.
So you will have competition there and in the financial system.
Um and we just have no idea what the outcome will be.
We have thousands of years of experience with human societies.
What happens when millions of humans compete for economic power, for religious authority? It's very complex, but we at least have some experience in how these things develop.
We have zero experience what happens in AI societies when millions of AIs compete with each other.
We just don't know.
Now, this is not something you can simulate in the AI labs.
If OpenAI, for instance, wants to check the safety or the potential outcome of its latest AI model, it cannot simulate history in a laboratory.
It can check for all kinds of failures in the system.
But it cannot tell in advance what happens when you have millions of copies of these AIs in the big world outside developing in unanticipated ways interacting with each other and with billions of human beings.
So, in a way, it's the biggest social experiment in human history.
We are all part of it and nobody has any idea how it will develop.
You know, one analogy to keep in mind: we now have this immigration crisis, in the US, in Europe, and elsewhere, with lots of people worried about immigrants.
Why are people worried about immigrants? There are three main things that come to people's minds. They will take our jobs.
They come with different cultural ideas.
They will change our culture.
They may have political agendas.
They might try to take over the country politically.
These are the three main things that people keep coming back to.
Now, you can think about the AI revolution as simply a wave of immigration of millions and billions of AI immigrants that will take people's jobs, that have very different cultural ideas, and that might try to gain some kind of political power.
And these AI immigrants, these digital immigrants, they don't need visas.
They don't cross the sea in some rickety boat in the middle of the night.
They come at the speed of light.
And I look, for instance, at far-right parties in Europe, and they talk so much about the human immigrants, sometimes with justification, sometimes without justification. They hardly talk at all about the wave of digital immigrants that is coming into Europe. And I think, if they care about the sovereignty of their country, if they care about the economic and cultural future of their country, they should be far more worried about the digital immigrants than about the human immigrants.
Yuval, this has been remarkable.
Thank you very very much.
Thank you.
Great.
Thank you very very much.
I'll see you after.
Video Summary
1. Someone who originally studied military history ended up talking about AI and the future of humanity.
2. He views AI as a new species, like an alien life form, and said humans could find themselves in competition with it.
3. Humans have long been the most intelligent species, and AI now challenges that position.
4. He explained that AI is not a tool but an agent that can make independent decisions.
5. AI can learn on its own and create new ideas, unlike earlier tools.
6. AI can set goals, design new weapons by itself, and even choose targets to attack.
7. AI learns like a baby, so responsible leaders must answer for how they themselves behave.
8. To design AI well, we must first teach it goals and principles.
9. But through self-learning, AI can behave in unpredictable ways.
10. AI acts like a child and is likely to do unexpected things.
11. AI learns by watching human behavior, so it can copy negative behaviors like lying.
12. Society should focus on wisdom and truth rather than power.
13. Humans have accumulated power, but happiness and knowledge have always fallen short.
14. Humans are the most intelligent species, yet also the most delusional and self-destructive.
15. Many people find hope through religious belief, but AI could take over that role.
16. AI's ability to understand and interpret texts could change religious authority.
17. AI can also provide personal counseling or psychological therapy and act like a friend.
18. There is much worry about AI replacing jobs, but he said technology is a matter of choice.
19. He warned that technology is positive when used well but dangerous when misused.
20. The current AI competition resembles an arms race between countries and companies.
21. AI's development takes time; major changes will come more than five years out.
22. He predicted finance will be the first field AI rapidly transforms.
23. AI can create complex financial products, developing things humans find hard to understand.
24. Religion, too, could see its textual interpretation and authority change because of AI.
25. AIs will compete across religions and cultures, and no one knows which will prevail.
26. A world of countless competing AIs is very hard to predict; it is an experimental situation.
27. AIs will operate in many places at once, creating a complex world of competition and cooperation.
28. He compared AI to a wave of immigration: it can bring rapid change, challenges, and competition.
29. The arrival of AI could greatly shake the future of nations and cultures.
30. Finally, he emphasized that trust and cooperation between humans matter most for living with AI.