
[ENG] A Turing Award Winner on the AI Race Between Nations: An Interview with Dr. Yann LeCun

김지윤의 지식Play

Views 49.4K · Likes 2.0K

Description

#Meta #AIDevelopment #AICompetition #InternationalPolitics What does a Turing Award winner say about the AI race between nations? Is the fear of AI overblown? We share our conversation with Dr. Yann LeCun, Meta's Chief AI Scientist, whom we met in Singapore! Packed full of fun and insight, 김지윤의 지식Play! kimjyTV@gmail.com
Transcript
I'm not going to ask you whether or not you agree on that.

Not really.

China doesn't need us.

Oh, scientists are always very, you know, positive on the future.

I mean, in fact, it's the opposite.

The smartest people I've ever encountered are people who just want to be left alone, right? [Music]

Hello.

Hello.

Pleasure to be here.

Well, we don't have much time, so we have to get straight into the interview.

First of all, all the big tech companies are racing into the AI competition, and it seems like Meta is taking a little bit of a different path.

Of course, Microsoft and OpenAI, they are giving the service to the customers, but it seems like Meta is trying to establish the ecosystem of AI.

Could you describe the philosophy of Meta on AI and why it matters? The main philosophy and strategy is to think of AI as a platform and to provide this platform in open source.

Historically the software infrastructure of the internet has become open source.

It's true also of the software infrastructure of the mobile communication network, of servers on the internet, and everything else.

There is a push, generally from the market, for platform software to be open source, and so Meta has been embracing this strategy, and I think this is very important, because AI systems are built on top of foundation models.

Foundation models are very expensive to train and require a lot of expertise, computational resources and money and only a few entities in the world can do this.

But then if they are provided in open source, a lot of people across the world can contribute to making them better, but can also build products on top of it for all kinds of users, which tends to be very difficult to do with a proprietary platform.

And so I don't see the competition as being, you know, one company against another or one country or one region against another.

It's more the proprietary companies competing with each other against the open-source world, where ideas are shared, software platforms are shared, and research results are shared.

Well, so it's more like a democratization of AI that you are proposing. And also, you are very critical of large language models.

I think maybe I can rephrase that: we are not really sure about large language models, and we have to move beyond the LLM.

I'm not really sure what the differences are.

So could you please walk us through what that means, what the differences are, and why it is important? Okay.

So first of all large language models are very useful.

They should be developed and deployed as products, and over the next few years you're going to see a lot of improvements to LLMs through various new techniques and engineering progress.

The reason I've been critical of LLMs is that I do not think that, in themselves, they provide a path towards truly intelligent systems that have the same level of abilities to learn as humans or even animals.

You know, we have LLMs that can produce good text, they can pass the bar exam, they can solve mathematical equations, they can do all kinds of interesting things, write code and help a lot of people in their daily lives.

But we still don't have domestic robots.

We don't have self-driving cars, or at least not level-five self-driving cars.

There's a lot of things that AI systems cannot do.

And what AI systems cannot do is everything that has to do with the physical world.

So LLMs are able to manipulate language.

And we think of language as kind of the epitome of human intelligence.

But in fact, no, language is simple compared to the real world.

Dealing with the real world is very hard.

And we still don't know exactly how to do this with AI systems.

So future AI systems will be able to understand the physical world.

They'll have persistent memory.

They'll have the ability to reason and plan.

And those are four essential characteristics of intelligence that LLMs are currently not capable of.

And I think we need to move beyond LLM for that reason.

That actually reminds me of the Chinese room argument.

So there's a debate among the scholars.

Is it really that the AI or the computers are thinking, or is it simply just data processing? Is that right? Not really.

There was a philosophical debate.

The Chinese room argument was proposed by John Searle, who was a philosopher, and he said you can mechanize thought without really having understanding.

I always thought this was a terrible argument.

There's nothing particularly mysterious or mystical about thought or about human thought in particular.

I think there is no question; most people today, certainly in the AI field, believe that we're eventually going to be able to design machines and train them so that they approach the level of intelligence that we see in humans and animals.

And there's nothing particularly mystical about it that we're not going to be able to reproduce in a computational device.

So no question we're going to get to that level at some point in the future.

The real question is how and how long is it going to take? And the answer is it's probably much more difficult than we think.

Well, you've been a very strong advocate for the open source AI.

So what is the driving force of your commitment to openness, and why is it important? So it's important for a number of reasons.

There are reasons that drive Meta's policy about AI.

There's a wider philosophical position, which is that the more people are involved in contributing to scientific and technological progress, the faster the progress.

So the reason we've seen quick progress in AI over the last dozen years or so is precisely because the research was open, you know, in academia but also in contributions from companies until fairly recently, and Meta, through its research organization called FAIR, which I created 11 years ago, has been at the forefront of this openness.

In the last few years, though, a number of companies have kind of clammed up.

They've become more secretive because they think it's a competitive advantage that they need to keep.

At Meta though, we have a tradition in the DNA of the company to open source our platform software going back many years.

And the company, you know, makes money with products on top of those platforms, but not with the platforms themselves.

And so there is no cost in distributing those platforms.

And in fact, it makes the technology progress faster.

It makes it safer, more secure, higher performance, and everything.

So that's kind of the philosophy.

Now, the reasons for the company to open source are multiple.

The first one is we profit from what others contribute to open source platforms and so it makes us progress faster.

Second, it's a way of attracting the best talents.

If you want to attract the best scientists to work for your company, you can't tell them, you know, work on this product and you can't say a word about what you're doing, because you'd kill their career.

And so it's much easier to attract the top talents if you tell them they can publish. Your intellectual production, your intellectual impact, is the currency of a scientist, if you want.

So but then there is a more important reason for open source and the more important reason is the fact that in the near future every single one of our interactions with the digital world will be mediated by AI assistants.

They'll be living in our smart glasses, like the ones I'm wearing at the moment, or maybe our smartphones or other devices.

But basically, every question we have, we'll ask our AI assistants.

And what that means is that our entire digital diet will come from AI assistants.

Now, if those AI assistants come from a handful of companies on the west coast of the US or China, it's not good.

It's not good for cultural diversity.

It's not good because those systems will not speak all the world's languages.

They will not understand all the cultures, all the value systems, all the centers of interest.

So the proper way to approach that is to have open-source platforms on top of which anyone can build AI assistants with their biases, their language, their culture, their value system, all the biases that you can imagine, right? So that users will have a choice of which assistant they want to use for any particular purpose. That can only be enabled by open-source platforms.

The other advantage is that it provides some level of sovereignty for all the countries around the world that may not have the local resources to train foundation models.

Well, I mean there are a lot of benefits but there are still some concerns and risks as well.

People worry about what if it's misused, and also, if something happens, who is accountable for that? Well, those are two different questions.

Okay, let's talk about the liability first, right? Let me use an analogy.

To a first approximation, the entire world of computing today runs on Linux, except for a few desktop computers and a few iPhones.

Every server on the internet runs Linux.

All the computers in your car run Linux.

Your phone, if it's an Android phone, runs Linux.

Mobile communication network towers run on Linux.

So, if you buy, let's say, a Wi-Fi router and there's a bug in it for some reason, and someone breaks into the Wi-Fi in your home: it runs Linux, but you're not going to sue the people who write Linux.

You're going to sue the manufacturer of that box.

Right? And that's the power of open source.

Basically, if you have an open source system and you put it in a product, because you have access to the source, you are now responsible for whether your product works or not.

You chose to use this open source platform, you could fix it if it doesn't work for you.

And so, if the product breaks, it's your fault.

I think if we have open source AI platform, it's going to be similar.

If you build a product on top of an open source foundation model, you are responsible for ensuring that that product does the right thing and is not dangerous for your users and and everything.

And you're responsible for it. You could always try to blame the provider of the open-source system, but you didn't have to use it, and you could fix it, because it was open source, because you could fine-tune it, right? And so open source, to some extent, puts in a kind of insulation of liability, which I think is a huge advantage for businesses.

What if there's any misuse? Excuse me, because I'm very new to this world, so I'm not really sure how it works.

But the reason that people are a little bit hesitant to use open-source AI is: what if somebody steals my information and data, etc., because there's no one strong entity or leadership supervising it.

I mean, there are a number of different questions there, right? Again, when you put a product on the market, you should ensure that this product is compliant with all the regulations and does not endanger users, etc.

And you can choose to use open source platforms to build it or not.

So if the platform you use is not open source, then you can blame the provider of that technology if if your product fails because of it.

But if it's open source, it's really up to you.

That doesn't solve the problem of: what if you provide a product or an open-source platform and some people do bad things with it, like badly intentioned people?

And there, you know, it comes back to the question of who is responsible.

If you buy a kitchen knife, it's very useful, right? You can cut your vegetables with it when cooking, but someone could take your kitchen knife and, you know, kill your neighbor.

You're not going to blame the manufacturer of the kitchen knife.

The person who did this is responsible.

Those questions are not resolved, because the case has not occurred yet in lawsuits.

Okay.

But at some point the courts, I think, will settle that question.

Hopefully in ways that don't hinder the progress of technology, but in ways that also make it safe.

Well, when people hear open-source AI, they probably very easily think about China's DeepSeek.

So, is that your definition of open-source AI, or is there any difference between your definition and DeepSeek's? Okay, there are several definitions of open source, and there are some technical differences about what is really open source and not really open source, and there are details which I'm not going to go into.

DeepSeek is an open-source system in multiple ways.

The code is distributed.

The weights are available for free.

The techniques that are used are described in a technical paper.

So there are a lot of details about it and how it does, you know, reasoning and things like that.

Llama, which is the open-source foundation model from Meta, comes with similar things: technical papers, open-source code, and free weights, with a few more restrictions about how you can use it, for various reasons, because of the legal situation, which is not completely settled.

I think the effect of DeepSeek was to show the world that good ideas can come from anywhere, and that the more people are involved in contributing to technology, the faster it progresses. There was some idea before that proprietary models were ahead technologically of the open-source models, but DeepSeek has changed this. It basically told people: not true, there are very good ideas that can come out of the open-source world, and it's likely that in the future open-source platforms will actually progress faster than proprietary ones.

It's been the case for the software infrastructure of the internet going back 25 years.

They've completely taken over and it's very likely that a similar thing will happen in the context of AI.

We would see foundation models as infrastructure on top of which people will build products.

Well, recently the US House Committee on China released a report accusing DeepSeek of collecting American users' data and sending it to China.

I'm not going to ask you whether or not you agree on that.

It's a very political question, but there's a clear tension between technological nationalism and open scientific collaboration.

It seems like what we saw 80 years ago with nuclear power: do we have to share it with other countries, or something like that?

So as a scholar, and as an AI scholar, how would you navigate this tension? So I think there is a fundamental misunderstanding, which is that people have the instinct of interpreting progress as a rivalry between countries or regions or companies.

But in fact, right now, the real competition is between the proprietary world and the open-source world.

Whenever someone contributes an advance and publishes it then the advances are available to everyone.

Everyone can get inspiration from those new techniques, integrate them in their own model, and then, you know, come up with a new version of their model, which may be better than whatever people were doing before.

And again the more people are involved in contributing the faster the progress.

Now good ideas can come from everywhere in the world.

You know, some people in some regions of the world, in Silicon Valley in particular, have a bit of a superiority complex, but it's misplaced, because good ideas can come from anywhere.

For example, the first version of Llama, which is now the open-source vehicle for Meta, was actually produced in Paris, in the Meta AI research lab in Paris, by a small team of a dozen people.

The most cited paper in all of computer science, in fact in all of science, over the last 10 years is a paper about deep learning, about a particular technique called residual networks, and this was published about 10 years ago.

The lead author was a gentleman called Kaiming He, who at the time was working at the Microsoft research lab in Beijing.

And this paper had an enormous impact.

Its underlying technique which is conceptually very simple is used in all AI systems today.
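The residual-network idea he describes really is conceptually simple: each block adds a learned correction to its input instead of replacing it, so an identity path runs through the whole network. A minimal sketch in Python/NumPy (toy weights and sizes, not the paper's actual architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # The block computes a correction f(x) and adds it back to the
    # input: y = x + f(x). The identity path lets signals (and, in
    # training, gradients) pass through unchanged, which is what
    # makes very deep networks trainable.
    return x + w2 @ relu(w1 @ x)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w1 = 0.1 * rng.standard_normal((4, 4))
w2 = 0.1 * rng.standard_normal((4, 4))
y = residual_block(x, w1, w2)
```

With the weights set to zero the block is exactly the identity, which is why stacking many of them does not degrade the signal.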

So what that tells you is that, you know, ideas can come from anywhere: from Europe, from Asia, from China, from Singapore, from Korea, from Paris.

In fact, I have a pretty strong collaboration with a bunch of Korean laboratories; there is a joint partnership between the Korean government and Korean universities for a laboratory at New York University, where I'm a professor.

Right.

And that's why you're a strong believer in the openness.

But somehow, you know, because a lot of people think AI is going to be the defining technology of the 21st century, just like nuclear power and space technology were in the past, there's huge competition between the United States and China.

Of course, they are competing with each other everywhere, but also in the AI industry as well.

So that's probably why they're so sensitive about DeepSeek and about other, you know, AIs.

Yeah.

But what I've seen, I think, at least in the industry, is an evolution in the thinking, where some people in the US, before DeepSeek, were advocating that we should not open source our best models so that China would not have access to them.

Not just China but other geopolitical rivals.

And then, since DeepSeek, a lot of people have realized China doesn't need us.

It doesn't need the US or the West.

They can produce really good models, and in fact they're releasing their models in open source, so there's probably no reason to work in secret rather than, to some extent, collaborate. And so a lot of people have changed their minds about this.

People who are anti-open source now have become pro open source because they say there's no point in keeping things secret.

We're just shooting ourselves in the foot.

If the US tries to ban open source AI, progress will slow down in the US but it will not slow down in the rest of the world.

So this would actually be counterproductive.

Well, just out of curiosity: these two countries are the leaders of the AI industry, I guess.

Could you please assess the level of progress of each country, and the advantages and disadvantages each country holds? So, first of all, there are contributions from other parts of the world as well.

Of course, in France, for example, there's a company called Mistral, which is also open sourcing its main models.

Two of the co-founders came from Meta, actually; they were the initial contributors to Llama, the first Llama. So I think there are going to be contributions from everywhere, and I see the position of a particular region as due to the concentration of talent in the ecosystem. What you see now is a high concentration of talent on the west coast of the US: people move around and information is exchanged, you have the University of California at Berkeley and Stanford nearby, and so a lot of information circulates.

You see kind of a similar center in New York as well.

In Europe, it's Paris and London.

Both of them are sort of centers of innovation in AI and, you know, places where there's a lot of investment and startups and research labs and technical schools and and and things like that.

And then there are nascent, you know, countries that are trying to bring something to the table, like Vietnam, for example, which has a very young population.

It's a big advantage, you know, going forward.

That's not true of Korea or Japan, or China for that matter.

So China certainly is educating a very large number of engineers and scientists, and just because of statistics, you know, there's a lot of innovation that comes from China, just because of the sheer numbers. But there are structural and cultural elements in China that perhaps limit the impact and, to some extent, favor the US. Perhaps openness is one of them; the fearlessness, if you want, towards innovation, the ability to take risks, particularly on the part of investors, and things like this, which is very vibrant in the US.

So it seems like China has a huge population, very talented people but a little bit too authoritarian.

Maybe I can just edit it out if you guys do not like it.

I would say it's a little more top down.

So the ability to basically, you know, bring the top talents to the top and give them the resources, the ability to flourish and succeed, is something that the American system of research has been extremely good at since World War II.

It's not going in the right direction at the moment, in the sense that federal funding for research in the US is being cut.

There's more pressure put on universities and everything.

So it may sadly give an opportunity for other regions to actually become more prominent in research.

Well, still a lot of people have some fear of AI.

So, what if it controls the human world, you know, a dystopian future? What is your take on these doomsday narratives? I don't believe in any of those kinds of doom scenarios.

No, I disagree with some of my colleagues, some of my friends even, in that respect.

I must say, though, the majority of AI scientists are more on my side.

I mean they don't believe that, you know, AI is going to take over the world and kill everyone.

Well, scientists are always very, you know, positive on the future. Like political scientists.

Yes.

But they're willing to debate questions and have arguments and counterarguments, etc.

Right.

You need that if you want to be able to arrive at the truth.

You need to be able to debate the different sides.

But I think people are making a sort of fundamental mistake, which is that the only example of an intelligent entity that we are used to seeing is other humans.

And so we just assume that if an entity is intelligent, it will be intelligent in the same way as humans.

It will have all the characteristics of human nature.

But that's simply not true.

Human nature is what it is because of evolution.

The fact is that we are a social species; you know, we descend from other social species, like apes and others.

So because we are a social species, we have relationships between individuals that sometimes involve the ability to influence others and in some species the ability to dominate others.

That exists in humans too, but there are other ways to, you know, have relationships with others in humans.

So we had this idea somehow that intelligence is linked with this desire to dominate.

But it's not it's not true at all.

There are intelligent animals that are not social and have no desire to dominate anybody.

A good example is orangutans.

Orangutans are very smart, almost as smart as humans, but they're not social.

They have no desire to dominate anything.

And so this idea that, necessarily, when a system becomes intelligent it will want to dominate humanity is just false.

It's not even true of humans.

It's not the smartest among us who want to dominate others.

I mean, in fact, it's the opposite.

You know, my experience is that the smartest people I've ever encountered are people who just want to be left alone, right? Maybe this is going to be my last question.

Korea is a very advanced country in science and technology, but it's not a global AI superpower yet.

From your perspective, how can countries like South Korea contribute or play a role in AI? And is there any advice you can give our leaders and also young scholars?

I think that you put your finger on a very difficult question for which I don't know the answer.

But if you take some countries in Asia like South Korea, Japan and Taiwan, those countries are extremely good at technology, particularly hardware technology, but not software so much.

And it's not that the talents are not there, they are there.

The education systems are high quality.

The people are really smart.

If you put them in a different environment, they will flourish.

One of my most brilliant collaborators and colleagues at NYU is Professor Kyunghyun Cho, who is absolutely brilliant, and is Korean.

So I can't explain why there are certain areas in which Taiwan, Japan, and Korea in particular are at the top, really leading in hardware, but not so much in software.

I think it might be because for brilliance in software what's required is organized chaos.

You need chaos.

You need good ideas to bubble up from the bottom.

You need to take risks.

You need people with a long leash.

If you tell people what to work on, you're not going to get breakthroughs.

You're going to get, you know, incremental improvements.

This may be somewhat more compatible with some aspects of culture in North America rather than in Asia or Europe.

Now, those things evolve with time.

So, the counterpart is that America is not very good at hardware, for example.

So, I don't know.

It's a complicated question.

It's been puzzling me.

I think it puzzles a lot of governments as well, who would like to, you know, basically position themselves for AI as well as the hardware aspects and everything.

Well, I think it's a very insightful answer, as a matter of fact, because we all know we hate chaos.

We like and love order, but maybe it's time that we have to change, as you said, to develop our software.

Thank you so much, Dr. LeCun.

I know you have to run to the conference and a very hectic and busy schedule, and thank you again for sharing your time for this interview.

It was a pleasure.

Thank you very much.

[Music]
Video Summary

1. China doesn't need us.

2. Scientists don't just view the future positively.

3. The smartest people want to be left alone.

4. Meta sees AI as a platform and aims to provide it as open source.

5. Open source makes AI progress faster and safer.

6. The competition is not between companies, but between the open-source and proprietary worlds.

7. We want AI more advanced than large language models.

8. LLMs help with language, but fall short at understanding the physical world.

9. Future AI must be able to understand the physical world, remember, and plan.

10. The Chinese room argument, that human thought can be mechanized without understanding, is a poor one.

11. AI doesn't think like humans; it will develop in a different way.

12. The reasons to support open-source AI are faster progress and global collaboration.

13. Open source enables AI that reflects diverse cultures and languages.

14. Open-source AI can also make accountability clear.

15. The misuse problem will be resolved much like product liability.

16. DeepSeek showed the potential of open-source AI.

17. The AI competition is not the US versus China, but open source versus proprietary.

18. Good ideas can come from anywhere in the world.

19. Europe, Asia, and the US can all contribute to AI progress.

20. China's large population means many research talents, but it has institutional limits.

21. The US is vibrant in risk-taking and innovation, but its research funding is shrinking.

22. The fear that AI will dominate humanity has no scientific basis.

23. AI won't think like humans; it will develop in a different way.

24. Humans are social animals, but AI doesn't need to have those traits.

25. Korea is an outstanding hardware powerhouse, but its software can develop further.

26. Software innovation comes from creativity that embraces chaos and risk.

27. Korea, Japan, and Taiwan are strong in hardware, but software requires a new challenge.

28. Change takes time, but a new way of thinking is needed.

29. Thank you, Doctor. Thank you for your valuable words despite your busy schedule.
