Jacheong's YouTube Extractor


AI Summary Title

AI Safety Explained: The Risks, and the Secrets of Building Trust

Original Title

Is AI as safe as they want you to think it is? Feat. Demis Hassabis

Matt Wolfe

Views: 80 · Likes: 9

Description

In this video I sit down with Demis Hassabis and discuss the breakneck speeds at which AI is advancing. Demis shares his thoughts on the guardrails currently in place, and those that are needed, to ensure that AI doesn't go down the dark path we all know it is capable of. The conversation was exciting, inspiring, and overall very reassuring. Special thanks to Demis Hassabis and the entire DeepMind team for making this interview possible.

Discover More:
🛠️ Explore AI Tools & News: https://futuretools.io/
📰 Weekly Newsletter: https://futuretools.io/newsletter
🎙️ The Next Wave Podcast: https://youtube.com/@TheNextWavePod

Socials:
🖼️ Instagram: https://instagram.com/mr.eflow
❌ Personal Twitter/X: https://x.com/mreflow
❌ Future Tools Twitter/X: https://x.com/futuretoolsio
🧵 Threads: https://www.threads.net/@mr.eflow
🟦 LinkedIn: https://www.linkedin.com/in/matt-wolfe-30841712/

Resources From Today's Video:

Let's work together! Brand, sponsorship & business inquiries: mattwolfe@smoothmedia.co

#AINews #AITools #ArtificialIntelligence

Time Stamps:
0:00 - Introduction
2:06 - What's going on under the hood?
3:22 - The leap to Deep Think
5:12 - The world model
6:41 - What's on the horizon?
9:59 - Alpha Evolve and AI designing AI
11:15 - The era of the AI agent
12:51 - Maintaining public trust
15:50 - The near future
16:48 - Outro
Transcript
People are worried about things like privacy and and losing their jobs to AI.

How does a company like DeepMind build the trust of the general public? What I want us to get to is a place where the assistant feels like it's working for you.

It's your AI.

AI is scary.

It's moving insanely fast.

And from an outsider's perspective, it seems like there aren't nearly enough guardrails.

And some of these concerns are actually legit.

The ones that always stand out to me are those surrounding access.

Wealthy people getting privileged access to the right tools. Or privacy.

How do we trust these big companies with all the personal data? What does the world look like when everyone is recording everything and AI is taking people's jobs? One expert says a bloodbath.

Half of entry-level white-collar jobs disappearing and 10 to 20% unemployment, possibly within 1 to 5 years.

Some of these concerns are less extreme than others.

Take the classic Skynet example.

It's a bit extreme, but they're all reasonable given some of the things you've heard me talk about before.

First, the AI race.

These huge companies with vast resources are trying to be the biggest and best in the world of AI.

Are these companies prioritizing short-term profits over long-term safety? Second, even some of the scientists working on it don't totally understand what's going on under the hood.

For example, some of these models exhibit what's called emergent behaviors.

These are cases where models produce outputs that even the engineers who built them had no idea they were capable of.

In this video, I want to look into whether or not AI is all doom and gloom.

Because if you listen to some of the analysts, news outlets, influencers, myself included from time to time, it's the beginning of the end.

And it's impossible to put that genie back in the bottle.

But is it? Helping me answer these questions is Demis Hassabis, a Nobel laureate, a knight, the CEO of Google DeepMind, and one of the most influential figures in AI.

Companies like DeepMind are the parents of these AI children.

And we're still in the phase where the parent is responsible when their kids mess up.

So, what steps are these companies taking to ensure they raise responsible, well-behaved young algorithms? It all starts with trying to understand what's going on inside the tech.

Can you sort of describe what's happening under the hood with an LLM? Like demystify it for people a little bit.

Sure, I can try.

Um, at the basic level, what these LLM systems are trying to do is very simple in a way.

They're just trying to predict the next word.

And they do that by looking at a vast training set of language.

The trick is not just to regurgitate what it's already seen, but actually generalize to something novel that you are now asking it.

LLMs predict the next word.

For example, if you go to a standard large language model and give it the statement, the quick brown fox, it will likely complete the rest of that sentence with the quick brown fox jumps over the lazy dog.
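The completion above can be sketched with a miniature n-gram counter. This is only an illustration of "predict the next word by counting what followed it in training data"; a real LLM is a neural network trained on a vast corpus, not a lookup table like this:

```python
from collections import Counter, defaultdict

# Toy illustration (nothing like a real LLM's scale): predict the next
# word by counting which word most often follows each two-word context
# in a tiny "training set".
corpus = "the quick brown fox jumps over the lazy dog".split()

follows = defaultdict(Counter)
for w1, w2, nxt in zip(corpus, corpus[1:], corpus[2:]):
    follows[(w1, w2)][nxt] += 1

def predict_next(context):
    # Return the most frequent continuation of the last two words, if any.
    counts = follows.get(tuple(context[-2:]))
    return counts.most_common(1)[0][0] if counts else None

# Greedily complete the prompt one word at a time, as in the example.
sentence = ["the", "quick", "brown", "fox"]
while (nxt := predict_next(sentence)) is not None:
    sentence.append(nxt)

print(" ".join(sentence))  # the quick brown fox jumps over the lazy dog
```

The "generalize, don't regurgitate" point Demis makes is exactly what this sketch cannot do: it can only replay continuations it has literally seen.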

But the modern chat bots that we use today are more like question and response machines fine-tuned to be more like assistants.

It's still doing the same thing, but instead of trying to finish the sentence, it's trying to answer the question you put into the chat.

But the trick here is that they don't want that chatbot to just find a paragraph from the original source material and parrot it back to you.

They want it to come up with new information based on all of the information it already knows from within its training data.

And if it doesn't already know something, it will either search the internet to try to find it for you or in the case where it doesn't have internet access, it'll just make things up.

And that is what we call hallucinations.

At I/O, you announced the new Deep Think, right, which is so much more powerful and is topping all of the benchmarks for things like coding and math.

What happened under the hood that caused that new leap? New techniques have been brought into the foundational model space. There's a stage called pre-training, where you train the initial base model on the whole training corpus.

Then you try and fine-tune it with a bit of reinforcement learning feedback.

And now there's this third part of the training, which we sometimes call inference-time training, or thinking, where you've got the model and you give it many cycles to go over its answer before it outputs the answer to the user.

What Deep Think is about is actually taking that to the maximum, giving it loads more time to think, and even doing parallel thoughts and then choosing the best one.

And you know, we pioneered that kind of work nearly a decade ago now with AlphaGo and our game-playing programs, because in order to be good at games, you need to do that kind of planning and thinking.

And now we're trying to do it in a more general way here.

What's really cool here is how Demis highlights how much effort engineers and scientists are putting into making AI more and more accurate and reducing the chance of hallucinations.

AI started with the next word prediction, like the example of the quick brown fox we gave earlier.

Then it evolved to test-time compute, where the AI model actually spends time thinking through its responses, and you can watch this happen in real time.

And now the latest evolution is what Demis just talked about which is parallel thoughts.

Now the LLMs are thinking through a ton of different potential responses all at once instead of focusing on just one at a time.

It will then pick from all of those responses or even combine responses in order to give you the best possible output.

The ultimate goal here is to put the most accurate and helpful responses in front of you.
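The parallel-thoughts idea is, at its core, best-of-N selection: sample several candidates independently, score them, and keep the winner. Here is a minimal sketch with a stand-in sampler and scorer (in a real system, a model would generate the candidates and a model or verifier would score them; the names and the toy target value are assumptions for illustration):

```python
import random

# Minimal best-of-N sketch. sample_candidate and score are stand-ins:
# a real system would call a model for both.
def sample_candidate(rng):
    # Stand-in for one "thought": a noisy guess at a target value of 42.
    return 42 + rng.gauss(0, 10)

def score(candidate):
    # Stand-in verifier: higher is better (closer to the target).
    return -abs(candidate - 42)

def best_of_n(n, seed=0):
    rng = random.Random(seed)
    candidates = [sample_candidate(rng) for _ in range(n)]
    # Generate candidates "in parallel" conceptually, then keep the best.
    return max(candidates, key=score)

print(best_of_n(1), best_of_n(16))
```

With the same seed, the 16-candidate run includes the single-candidate run's sample, so its best pick can only be as good or better, which is the whole point of sampling in parallel.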

You've mentioned that the long-term goal is to sort of let these AIs have like a world model.

Can you sort of explain what you mean by a world model and what that opens up to us? I think what we mean by a world model is a model that can understand not just language but also audio, images, video, all sorts of input, and then potentially also output.

The reason that's important is that if you want a system to be a good assistant, it needs to understand the physical context around you; or if you want robotics to work in the real world, the robot needs to understand the physical environment.

What sort of new things do you think that'll open up to people once they have that ability? Um I think robotics is one of the major areas.

I think that's what's holding back robotics today.

It's not so much the hardware, it's actually the software intelligence.

You know, the robots need to understand the physical environment.

I think that's also what will make today's nascent assistant technology, and things like you saw with Project Astra and Gemini Live, work really robustly.

You want as accurate a world model as you can get.

So that's our glimpse under the hood.

LLMs are imperfect models that are constantly being refined to become more and more accurate, with the eventual goal of becoming complete world models that help AI understand what's going on around it in the real physical world.

But what's still unclear is how this will translate into practical applications that will significantly improve society without a lot of the downsides everyone is fearful of.

So you've mentioned things like AI will be able to most likely in the future solve things like room temperature superconductors and more energy efficiency and curing diseases.

Out of the sort of things that are out there that it could potentially solve, what do you think is closest on the horizon? Well, as you say, we're very interested in and actually work on many of those topics, whether that's mathematics or materials science like superconductors. You know, we work on fusion, renewable energy, climate modeling.

But I think the closest, and probably most near-term, is building on our AlphaFold work.

We spun out a company called Isomorphic Labs to do drug discovery, to rethink the whole drug discovery process from first principles with AI.

And normally, the rule of thumb is that it takes around a decade for a drug to go from identifying why a disease is being caused to actually coming up with a cure, and then finally being available to patients.

It's a very laborious, very hard, painstaking and expensive process.

I would love to be able to speed that up to a matter of months, maybe even weeks one day and uh cure hundreds of diseases like that.

And I think that's potentially in reach. It sounds maybe a bit science fiction-like today, but that's what protein structure prediction was like five or six years ago, before we came up with AlphaFold. It used to take years to painstakingly find the structure of one protein with experimental techniques, and now we can do it in a matter of seconds with these computational methods. So I think that sort of potential is there, and it's really exciting to try and make that happen.

Ten years down to a matter of weeks is a pretty wide gap.

But to truly understand this disparity, we need to look at why it currently takes up to 10 years to bring a drug to market.

It all starts with the research phase.

They first have to identify a target such as a protein or gene, which when altered can treat specific conditions.

The early goal is to develop a compound that makes that alteration.

Once promising compounds are found, we go through up to 7 years of testing in the lab and on animals.

And most compounds actually fail at this stage, for reasons like lack of efficacy or toxicity.

If the results are promising, the companies then need regulatory approval to start clinical trials on humans, a process with three phases of its own, each of which can take several years.

And again, most drugs fail during this phase.

In fact, 90% never get past the human trial phase.

Once a drug does pass all these phases, it then has to go through another round of regulatory approvals before finally being allowed to go to the public.

But here's where AI comes in.

That first seven-year discovery phase is going to be crushed, because AI can identify targets and compounds at an accelerated rate.

It can also detect toxicity and side effects earlier, which helps to weed out poor candidates before they go to trials.

The studies themselves are also faster, because the rate at which AI gathers and analyzes data is so much higher.

The bottom line is we'll get better drugs and treatments way faster.

But here's where it gets really wild.

In the beginning, AI was being used to complete human tasks faster.

Now, we're starting to see AI training AI, which when you boil it down is in a way AI completing AI tasks faster.

This is where things really pick up.

You guys just announced AlphaEvolve recently, which looks amazing, right? It's an AI that essentially can help you come up with new algorithms. How close are we to AIs that are designing new AIs to improve the AIs? And then we start entering this cycle.

Yes, I think it's a really cool breakthrough piece of work, where we're combining evolutionary methods with LLMs to try to get them to invent something new.

And I think there's going to be a lot of promising work combining different methods in computer science with foundation models like the Gemini models we have today.

So I think it's a very promising path to explore.

Just to reassure everyone, it still has humans in the loop, scientists in the loop. It's not directly improving Gemini.

It's using these techniques to improve the AI ecosystem around it.

Slightly better algorithms, better chips that the system is trained on, rather than the algorithm it's using itself.

This is really important because it seems like Demis is hinting at humans eventually being removed from the equation.

AI gets better at training AI and no longer needs humans to be involved in its development.

So where do we fit in? The answer to that lies in the end goal of all of these personal assistants and agents.

AI agents, they've been sort of a a big talk in the AI community recently.

And how far off do you think we are from being able to give an agent a week's worth of work and have it go and execute that for us? Yeah, I mean, I think that's the dream: to offload some of our mundane admin work, and also to make things much more enjoyable for us.

You know, maybe you have a trip to Europe or Italy or something, and you want the most amazing itinerary built up for you and then booked.

I'd love our assistants to be able to do that.

You know, I hope we're maybe a year away or something from that.

I think we still need a bit more reliability in the tool use, and again in the planning and the reasoning of these systems, but they're rapidly improving.

So, as you saw with the latest Project Mariner, what do you think the biggest bottleneck is right now to getting that long-term agent? I think it's just the reliability of the reasoning processes and the tool use. Each step has a slight chance of an error, and if you're doing a hundred steps, even a 1% error doesn't sound like very much, but it can compound to something pretty significant over 50 or 100 steps. And a lot of the really interesting tasks you might want these systems to help you with will probably need multi-step planning and action.
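The compounding Demis describes is simple arithmetic: if each step independently succeeds with probability p, an n-step plan succeeds with probability p^n, so a "small" 1% per-step error rate leaves a 100-step plan succeeding only about 37% of the time:

```python
# If each step of an agent's plan independently succeeds with probability
# p_step, the whole n-step plan succeeds with probability p_step ** n.
def plan_success_rate(p_step, n_steps):
    return p_step ** n_steps

# A 1% per-step error rate compounds quickly over long plans:
for n in (1, 10, 50, 100):
    print(n, round(plan_success_rate(0.99, n), 3))
# a 100-step plan succeeds only about 37% of the time
```

The independence assumption is a simplification, but it captures why reliability per step, not raw capability, is the bottleneck for long-horizon agents.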

Removing the mundane from our day-to-day sounds wonderful but it also comes with the inevitable questions about jobs being replaced by AI.

This is part of a broader series of public concerns surrounding things like privacy, data security, and job loss that all big tech companies are facing today.

DeepMind's association with Google comes with some of that baggage.

So that raises the question: how does a company like DeepMind build the trust of the general public, so that people can trust them with this kind of technology? Well, look, I think we've tried to be, and I think we are, very responsible role models with these frontier technologies.

Partly that's showing what AI can be used for for good, you know, like medicine and biology.

I mean, what better use could there be for AI than to cure terrible diseases?

Um, so that's always been my number one thought there.

But there's other things, you know, where it can help with climate, energy, and so on that we've discussed.

But I think it's incumbent on companies to behave thoughtfully and responsibly with this powerful technology.

We take privacy extremely seriously at Google, always have done. And most of the things we've been discussing with the assistants would be opt-in. They'll make the universal assistant much more useful for you, but you would be intentionally opting into that, very clearly, with all the transparency around that.

What I want us to get to is a place where the assistant feels like it's working for you.

It's your AI, right? Your personal AI.

And it's working on your behalf.

And I think that's the mode, at least the vision that we have and want to deliver, and that we think users and consumers will want.

One of the things that you guys also demoed at I/O, which I got a chance to actually test out a little earlier, was the Android XR glasses, and those were absolutely mind-blowing when I tried them for the first time.

And so I guess the flip side of the privacy thing is, if everybody's walking around wearing glasses that have microphones and cameras on them, how do we ensure that the privacy of the people around us is secure? I think that's a great question.

I mean, the first thing is to make it very obvious whether it's on or off, in terms of the user interfaces and the form factors.

I think that's number one.

But I also think this is the sort of thing where we'll need societal agreement and norms. If we all have these devices, and they're popular and useful, what are the guardrails around that? And I think that's why we're only in trusted tester at the moment: partly the technology is still developing, but also we need to think about societal impacts like that ahead of time.

So basically, they don't know yet, which is interesting and fair all at the same time because ultimately when Demis mentions the social agreements, he's talking about government regulations and legislation.

AI is moving so fast and we're all busy figuring out all the other stuff going on in the world.

We haven't as a society stopped and really thought about these implications.

And we need to because given the speed, we're kind of running out of time.

But it makes sense that it's moving so fast.

AI is exciting.

It's cool and the benefits that it promises will change everyone's life for the better.

Just listen to Demis talk about what he's excited for in the near future.

And remember, this is the man who is on the absolute forefront of this technology.

So, I've got one last question here.

It's kind of a two-parter.

What excites you most about what you can do with AI today? And what excites you most about what we'll be able to do in the very near future? Well, today I think it's the AI-for-science work. That's always been my passion, and I'm really proud of what AlphaFold and things like that have empowered.

They've become a standard tool now in biology and medical research.

You know over 2 million researchers around the world use it in their incredible work.

In the future, I'd love a system that basically enriches your life and actually works for you, on your behalf, to protect your mind space and your own thinking space from all of the digital world that's bombarding you the whole time.

And I think one of the answers to what we're all feeling in the modern world, with social media and all these things, is maybe a digital assistant working on your behalf that surfaces information only at the times you want, rather than interrupting you at all hours of the day.

The thing about this technology is that it's supposed to be the technology that gets us away from the bombardment of technology.

We're sitting at our computers and on our phones getting flooded by negativity and toxicity on social media every minute of the day.

It's refreshing to hear someone like Demis, who's in one of the best positions on Earth to build this future, talk about how critical AI could be for our mental and physical well-being.

That we should be able to cut out the mundane, remove the toxicity, and focus on the things we really want to do.

Travel the world, play guitar, pick up that hobby that we never found time for, or most importantly, spend time with friends and family.

In the end, after speaking to Demis, I really felt like it wasn't all doom and gloom, that super intelligent and talented people are actually behind the wheel and that they have a firmer grasp than most people think they do.

I want to thank my guest Demis Hassabis and the whole team at Google DeepMind for the incredible conversation.

As always, don't forget to like and subscribe, and thanks so much for nerding out with me.


Video Summary

1. People are worried about privacy and losing their jobs.

2. DeepMind thinks hard about how to build public trust.

3. The goal is an AI that feels like it's working for you.

4. AI is advancing fast, and guardrails seem to be lacking.

5. Unequal access for the wealthy and privacy protection are also major concerns.

6. Job losses and social upheaval from AI are feared.

7. Some experts even predict a major shock.

8. In the AI race, safety should matter more than short-term profit.

9. Some models can exhibit unexpected (emergent) behaviors.

10. AI won't solve every problem.

11. DeepMind's CEO says they develop AI responsibly.

12. The interview explains how language models work under the hood.

13. At their core, language models predict the next word.

14. Chatbots answer questions and generate new information.

15. AI will search the internet, or sometimes make things up.

16. The recent Deep Think showed much stronger performance.

17. Techniques are being developed for AI to think many thoughts in parallel and pick the best answer.

18. AI is becoming more accurate and more natural.

19. In the future, AI will understand language, video, and audio.

20. This technology will also be applied to robotics and autonomy.

21. An era is coming in which AI understands the world much better.

22. For example, drug development could become far faster.

23. Work that takes 10 years today could take months with AI.

24. AI could revolutionize drug discovery and disease treatment.

25. AI designing AI is also becoming possible.

26. A cycle of AI making itself smarter is beginning.

27. AI could keep advancing even with humans out of the loop.

28. Personal-assistant AIs will help with everyday tasks.

29. Within about a year, AI may handle complex itineraries.

30. Reliability and tool use are the key challenges ahead.

31. Job concerns persist, so development must be responsible.

32. DeepMind strives to use AI responsibly.

33. AI is being used for good in medicine, climate, and energy.

34. Protecting user privacy is also important.

35. People want AI to help daily life while being used safely.

36. Google aims for transparency and responsible behavior.

37. AI will evolve into a personalized assistant.

38. New technology like XR glasses also raises privacy issues.

39. Social norms and laws must develop alongside the technology.

40. AI moves fast, but regulation and discussion are needed.

41. AI will improve science, medicine, the environment, and more.

42. For example, drug development could take weeks instead of years.

43. Today it requires long research and clinical processes, but AI will help.

44. AI can deliver better drugs and treatments faster.

45. AI can even design new algorithms on its own.

46. We can look forward to a future where humans and AI advance together.

47. Today, AI is revolutionizing science and research.

48. In the future, AI will raise our quality of life.

49. AI can also support our mental health and life balance.

50. Ultimately, AI can be a tool for building a better world.
