Jacheong's YouTube Extractor

Extract subtitles and AI summaries from YouTube videos.

AI Summary Title

A candid pint of conversation with OpenAI cofounder Greg Brockman!

Original Title

A Cheeky Pint with OpenAI cofounder Greg Brockman

Stripe

Views: 21.4K · Likes: 556

Description

Greg Brockman, OpenAI cofounder and Stripe's first engineer, joins John Collison to talk about research-driven product development, an early moment he thought OpenAI was doomed, S-curves in AI advancement, and energy bottlenecks.

Subscribe to Cheeky Pint
YouTube: https://www.youtube.com/playlist?list=PLcoWp8pBTM3ATMYLP-hFIhJORSw-nFOiY
Spotify: https://open.spotify.com/show/2IHbGJJMpiFoz5YrvRfTFw
Apple Podcasts: https://podcasts.apple.com/us/podcast/cheeky-pint/id1821055332

Watch more Cheeky Pint: https://youtu.be/z6PHZJLo2Sk
Full episode transcript: https://cheekypint.transistor.fm/1/transcript

Key moments
00:00 Intro
02:51 Was OpenAI the first company to take the scaling hypothesis seriously?
04:53 Lessons from Dota about deep learning
08:08 What is a good new Turing test?
08:57 Personalization in AI
09:57 Research-driven product development
10:26 An early moment OpenAI felt doomed
15:01 OS limits on AI product development
17:59 When will AI make novel advancements in math or science?
20:03 Energy bottlenecks
22:30 S curves in AI advancement
24:00 AI coding
26:25 Refactoring as a killer AI use case
27:26 How OpenAI decides what products to build
28:53 Growing up in North Dakota
30:17 How far away is AGI?
Transcript
This is totally backwards from how you're supposed to do a startup, right? You're supposed to have a problem, and we had no idea what the problem was.

Is there a world where the AI becomes the manager and it, you know, gives you ideas and gives you some tasks to do? This was probably the hardest project that I've ever done, because it felt totally doomed, right? It's like, I know, like, every instinct, every builder instinct of mine...

Did it actually feel doomed? Oh, I felt totally doomed.

Greg in 2010 dropped out of MIT to become our first engineer and went on to become Stripe's CTO.

In 2015 after he left, he co-founded OpenAI.

OpenAI is really cooking at the moment and Greg is one of the most productive people I know.

Cheers.

Wow, you really have some old school photos.

Some deep cut GDB trivia.

So yeah.

We figure we've gotta put people at ease, make people feel at home.

Very nice.

Thank you.

Okay.

Well, I'll just dive straight into all the questions I have, which is a lot.

All right.

If you were not working in AI, how could one have known that something was about to start working? People were telling you that AI was the future in the 1970s, in the 1980s, the 1990s.

Then very quickly, in the late 2010s everything started happening.

Well, I was someone who was not in the field and so, I remember very much what it was like.

2013, 2014, it felt like every day on Hacker News there'd be a new "deep learning for X" article.

And I remember being like, "What is deep learning?" I knew, like, one person in the field, and I asked them to introduce me to more people in the field, and I just kept getting introduced to a bunch of my smartest friends from college.

Now, if you actually look at the work that was being done... 2012, basically, image recognition, for the first time, you could solve with the neural net much better than anything else.

It just blew all these traditional computer vision approaches out of the water.

It's like this learned system is able to outperform 40 years' worth of, "Let's write down all the rules and try to handcraft the algorithm for the task."

And it's very easy to then be like, "Okay. Well, this approach, sure it works for computer vision, but it's never gonna work for machine translation."

In 2014, suddenly, you're getting great results in machine translation.

I think that this pattern was applied in subfield after subfield.

One thing I've been wondering about is so many different things are finally working at the same time.

We have LLMs, which are obviously amazing, but then we also separately have image models really working.

We also have text-to-speech and speech-to-text working way better than they were before.

What's the common factor behind everything starting to work at the same time? Well, it's deep learning, right? I think deep learning is the core... We've had deep learning for a long time.

Why didn't deep learning work in the 1980s? If you look at the number of orders of magnitude of compute that we've gone through from 1940 to today, I mean, it's just astounding.

You think they're all explained by compute scale-ups applied to the right algorithms? Of course, the type of algorithm changes, and some of those results aren't even deep learning.

But I think that fundamentally, it is about compute, and you need an algorithm that is scalable that can actually absorb that compute.

Was OpenAI the first company to take the scaling hypothesis really seriously? I think that claiming the first is always difficult, but I think it is clear that we sort of succeeded much more wildly sooner than anyone else.

I think that we had real conviction behind what we needed to do.

Some people think that OpenAI set out to prove the scale hypothesis, whereas it was almost the other way around, that the scale hypothesis is what we observed as the thing that was working for us, and we really saw it for the first time, actually, during our Dota 2 project.

We started out with 16 cores to train a little agent. Jakub and Szymon, who were leading the project from an ML perspective, ran it on their desktop.

Then, they scaled to 32 cores.

It felt like every week I'd come back to the office and they'd scaled it by another 2x and we had 2x performance. It was just so clear, you just need to keep going.

Where does this thing peter out? It just never did.

Founders get too much credit, because you have an initial product that's a pretty reasonable idea. Then you listen to the customers, and you follow what's working.

So you were saying that was kind of OpenAI with the scaling hypothesis, where you started trying to make Dota AIs work and you noticed that adding more compute worked really well, and you said, "Where else will just throwing more compute at it yield benefits?"

I think that's to the first order correct.

I think one thing that distinguishes OpenAI from the typical startup is we did everything in reverse, right? It's like, you're supposed to have a problem to solve.

No one cares about the technology.

Form the entity upfront.

Exactly, yes.

And for us, we really chased the technology without any idea of how it would be applied.

A lot of pursuing the technology really is you have to let reality hit you cold, hard in the face.

There's just no other way to achieve results.

You can't will it into existence.

You can't convince people that this is the thing.

It's like, you have to actually make the system work.

We have to just sort of figure out what is the right frontier, what are the problems, what are the things that are on the edge of working, and to really double down on those.

What else do you take away from the Dota work? How else does it... Because you could have just started with LLMs and, you know, we could have skipped that period in the wilderness, but it sounds like it was somewhat formative for the OpenAI organization.

I think Dota had many lessons, one of which actually was a management lesson for me.

I remember when we started out the project, I tried to set a list of milestones, right? It's like, "Okay, this date we're gonna beat this player.

This date, we're gonna beat this player."

That didn't work? It did not work at all.

I remember our first milestone came and... Went? Exactly.

And so you realize that you cannot control the outcome, right? You cannot set outcome-based milestones.

What you can do is you can control the inputs of we're gonna try these experiments by this date.

We're going to implement this feature by this date.

And that is what actually worked.

And I remember it was one of those things that was just a story that I could not have written any better if we'd intended to.

We beat our in-house best player and then we were playing a semi-pro and he was just trouncing us, trouncing us.

Then, suddenly we're starting to get pretty good.

So we showed up at The International, this tournament, blind.

First day, we had three players that we played against.

We went 3-0, 3-0, and then 2-1.

We're like, "Oh no.

We lost.

What happened?" It turned out that this pro that we're playing against, he had used an item we'd never trained against.

And we were like, "Oh no, we're totally going to be hosed."

So, what do we do? Well, we just need to change the training.

And so people stayed up all night to get this done.

They added this extra item in there.

4am, they finally get the job running.

That Wednesday, we're supposed to play against the number two and the number one player in the world, and our semi-pro plays against it.

He's like, "This bot is totally broken."

And we're like, "Oh no, we clearly had a bug.

Something terrible has happened."

He was like, "Look, it's taking all this damage it doesn't need to.

I'm gonna go kill it."

He goes to kill it, he loses.

And he was like, "That was weird."

He'd realized that what had happened was it had learned a baiting strategy.

And then we realized, "Well, we have a super-bot, but it's so bad at the beginning, because it's trying to do the baiting.

So, what if we just stitch the two bots we have together?" And then that bot was just undefeatable, and we played against this number one player and won.

To me this is like the story of how deep learning works, right? It's like you kind of can't control where you're gonna go.

You can control everything that goes in.

You can put in these metrics and these measurements, and you can have sort of the evaluations.

And being able to gauge where you're at is almost as important as being able to make the forward progress.

But if you get all those elements right, then you can do true magic.

You're also describing something that worked really well for an organization where it was motivating to stay up all night.

If the prize was impossibly far away, it wouldn't have been as motivating.

But the fact that there was a near-term reward function and you were able to show concrete progress...

I think so.

I think some of my favorite engineering stories have the same character.

I remember you and I staying up all night to get our ISO 8583 integration done.

There's something about staying up all night for, like, critical projects that actually have important history in all startups.

I'm glad to hear that tradition is alive and well at OpenAI.

So, Dota has fallen, chess has fallen, Go has fallen.

We've passed the Turing test, I think by anyone's measure.

People comment on how there was little fanfare when we did, but we seem to have pretty clearly done so.

What's a good new Turing test? Well, I'll tell you two things.

One is that, if you look at the strict version of the Turing test, I would actually claim we haven't done it yet.

No one's really gone that extra mile to say, "Can we actually have an AI that is fully indistinguishable from a human?" It's not clear if it's even a good task, right? But I think that the right question, to your point, is like, well, what is the milestone that we should be chasing in terms of capability? I remember talking to one of our board members in 2018, and he said, you know, "Look, I get that you were all excited about near-term AGI, but it just doesn't feel like it's on track."

I asked, "Well, what do you mean?" He said, "In a world with near-term AGI, you would expect massive economic value to be delivered by AI already, and where is it?" In 2018, I think that was a very fair criticism, and clearly, that's starting to change now.

It feels like one thing that may really change the AI market is personalization.

Up to quite recently, when you asked ChatGPT a question, it was like walking into a shop off the street.

They've never met you before.

They know nothing about you, whatever.

That's obviously not ideal for this, you know, close part of your digital life.

I'm curious how you're thinking about personalization from a product point of view, because it feels to me like the most meaningful change since the chat interface.

Two and a half years ago.

Two and a half years ago.

I mean, I think it's absolutely critical, and I think it is very rightly considered to be kind of a next frontier.

I'm someone who always, when I just Google something, I go into Incognito Mode, because I don't even want my computer to remember that history.

And I always used to go for temporary chats on ChatGPT.

But now my usage has totally reversed.

I want ChatGPT to remember everything.

I want it to remember my interactions, because it's useful.

Okay.

So you guys figured out, from a product point of view, how to make the memory actually work better.

It's the product point of view, but also, really, the research point of view.

And I presume there's a flip-flop between the product and research where, when you find something that's useful from a product point of view, then the product people say, "I'm just a product person, you researchers go actually make this good."

Then that kind of kicks off more research.

Is that how it works? To some extent that's a failure mode in our mind.

I think that we really don't want to have that kind of silo.

We really want to blur the lines and have people cross-collaborate.

And so, it's a very different mindset from how you would traditionally build a product versus how you do research.

Part of what had happened, actually, was that we had GPT-3, we knew we needed to build a product in order to be able to continue to raise funding, and we were like, "Well, what product do we build?" And we wrote down a list of, like, 100 different products.

We could do a medical thing.

Then you're like, "Okay.

Well, now we have to sell to hospitals.

We're gonna have to hire doctors."

You realize you give up on the G in AGI, right? You're gonna, like, go for a specific thing.

Someone had the idea of saying, "Well, why don't we just make an API and let people figure it out?" And, again, this is totally backwards from how you're supposed to do a startup, right? You're supposed to have a problem, and we had no idea what the problem was.

Yeah, yeah, yeah, we're gonna back into the problem.

And so this actually felt like... this was probably the hardest project that I've ever done, because it felt totally doomed, right? It's like, I know, like, every instinct, every builder instinct of mine says... Did it actually feel doomed? Oh, it felt totally doomed.

It wasn't just like open-ended or something? No, it felt doomed.

But you were still doing it.

Yeah.

I mean, it's like, at some point, if you have... There was definitely no other path.

There was no other path.

It was the only shot we had.

I remember someone also saying, like, "I can't imagine anyone paying for samples from this model."

And I was like, "You might be right."

I'm still trying to imagine it myself.

Yes.

It was just not clear, were we above threshold or below threshold? We showed it to people, and people were interested.

But that's very different from people being like, "I will build my company on top of this."

So, what was the first use case to get any traction? AI Dungeon.

What was that again? There you go.

AI Dungeon was a text-based adventure game.

Oh sure, yeah, yeah, yeah, yeah.

Okay.

But that was real revenue or that was non-zero revenue? It was enough.

And, in fact, I believe they were our first paying user.

And that gets you confused, for you're like, "Ah, clearly the future of OpenAI is gaming," you know? I know, back to our roots.

Exactly.

It's interesting too, because we had dreamed of all of these applications, like medicine and all these things, and you start with the gaming application.

But we could see signs of life on so many other things.

I think in many ways GPT-3 was like the world's best demo machine, right? When we released the API, people were coming with all these cool things you could do, but making them reliable was so hard.

It really wasn't until the next generation of GPT-4, until we started to figure out how to do post-training well, that then you were actually able to build real businesses on top of these things.

Bill Gates was saying recently that GPT-4 was the best demo he'd ever seen since Xerox PARC.

You know this quote? Yes.

He said it to me the night that he saw it.

Yeah.

That's high praise.

I wanna touch the medicine thing, because you've mentioned it.

Like you said, your family has personal stories.

You've talked about getting very valuable diagnostic help.

We, ourselves, actually, it's much more minor in our family, but we managed to fix a cat thanks to debugging it with an LLM.

I think that's an interesting example, because so many people that I know have had some kind ofexperience like this.

Maybe it's because you actually don't get that much time from a doctor.

Are there other examples likethis medicine application where you're seeing a lot of success, that many people have similar stories, but we just hear less about? Yeah, I think it's a great question.

And by the way, I think like medicine is an example of one where I kind of thought it was gonna be one of the last domains that we would successfully be able to add value in, but it turns out that the bar is so low, you just need to exceed WebMD.

I think that we have seen other areas that are like a real common theme.

One that's very interesting right now is the life coach, life advice kind of application, where you just talk to your AI.

That's actually really taking off.

Yeah.

It really is.

Education is another area that just, like, clearly is really having an impact.

There are studies coming out now that actually show that people are able to learn better through the use of these tools.

That's to be expected, right? It is the Bloom 2 sigma effect in a product.

Yes.

And that, for example, is why Sal Khan started Khan Academy, to think about if you can give personalized tutoring to everyone.

We showed him GPT-4.

He's like, "This is the thing.

We need to become a GPT-4 app."

I think that there are these really amazing applications that are affecting everyone's daily lives.

Obviously, programming is another one, that people are seeing all across the board in a professional context.

We're heading to a world where, like, if you want to do productive work and you don't have access to a computer, you're going to be hampered.

Similarly, not having access to AI, it's heading in the same direction.

Speaking of not having access to AI, I will posit that these days, it feels like AI product development is mostly OS limited.

Is that how you feel? Are we stuck at the moment? I do feel a little stuckage, but not to worry, it's overcomeable.

But yeah, I think it is true.

Two years ago we released plug-ins in ChatGPT.

Do you remember those? Yeah.

That was trying to make it so anyone could write apps that then ChatGPT could access.

The models were just not that good, right? We limited it to, like, three plug-ins at a time.

You could have only so many functions and stuff.

It just wasn't that reliable.

Now we're in a world where MCP basically is really taking off and is a way to hook up your AI to different tools, and very much like kind of trying to take that same type of idea and really make it work.

Now, the world that we're in is very similar, where there's certain interfaces we don't have.

Being able to access your phone and all those APIs.

There's a question of, is the model above threshold to actually use them or not? My observation has been that basically, I think that there is maybe a lag of, like, six months for different interfaces that are hard to access.

But once we have a model that's good enough, we will find a way.

People will find a way.

I think that we're in a world where I have every expectation that we will get the future that has been promised.

It's just gonna take some work.

I feel like there are many moments where I'm using my phone, and I want a single button where it's just, like, "ChatGPT, what do you think of this? I need your comment.

I need your fact check.

I need your explanation."

Something like that.

You take a screenshot, and you, like, go into ChatGPT, you click "upload photo."

It feels very 1993 versus the button on my phone that just says, "Hey, ChatGPT, what do you think about this?" Obviously, you guys are not empowered to go build that.

That's what I mean by it feels somehow like we're a little operating system limited.

I definitely get it, but I'll say, I think that there are two dimensions.

This is how I've been thinking about things since we released the API back in 2020.

There's capability and convenience.

What you're referring to is the convenience, right? It's like pretty inconvenient to do the screenshot and paste it.

But the thing is, if the capability is good enough, you are willing to accept any sort of inconvenience, right? It's like, if by taking the screenshot and showing it to ChatGPT, it could give you amazing insight, it could tell you, like, how to, you know, build Stripe in some way, and it takes you, like, a month to do it, you have to crawl to the top of the mountain, you'll do it, right? The convenience will not stop you.

And so the point that I'm trying to make is that if the capability is high enough, people will start doing a specific flow.

They'll discover the use cases and the convenience will just catch up.

In the convenience, there's so much pressure.

There's pressure on the phone manufacturer.

There's pressure on us.

There's pressure on everyone in order to bring down the inconvenience.

So, I just need to be patient, and it'll be great in three years' time.

Yes.

And really lean in and use the AI.

A criticism people like to levy of AI is, "Yeah, it's great and handy and all, but it hasn't come up with a single novel advance in mathematics or science."

Have you? Well, you could have if you'd become a mathematician.

But, you know, humanity has, you know, just for keeping the scoreboard.

What do you make of that criticism? Just wait.

Okay, so you think, like, take one of the Millennium Prizes. Do you think we plausibly will see that? I think for sure.

I mean, there's no question.

Two years, five years, 10 years? I think that is the question.

It's just timing.

That is my question.

I mean, I would put two to five years as the right number.

I think ultimately this comes back to the question of benchmarks, right? Actually being able to solve a Millennium Problem is a pretty high bar.

Yeah.

And once you can do that, there's so many other things that will definitely be possible.

And I think that we're starting to see the leading edges of this.

And to me, if we look at our definition of AGI... We recently started talking about this framework of thinking about levels of AGI, starting from chatbots, to reasoners, to agents, to innovators, to organizations. Five levels.

We're basically somewhere in level three right now.

Level four, this innovator, like, that's gonna be different.

I recently posted some pictures of our visit to Abilene, Texas, where we're building these big data centers together with our partner, Oracle.

Imagine taking that whole data center and just thinking hard about one problem, right? Imagine it just thinking about how to solve a Millennium Problem or how to cure a specific kind of cancer.

Maybe it needs access to some apparatus.

Maybe it needs access to robotic wet labs.

Maybe it needs access to different tools in the world.

But that level of computational power coupled with the ability to experiment and learn from your ideas, that is going to be something the world has never seen.

So yet again, we just haven't put a respectable amount of compute on these problems compared to what we will be doing.

Yeah, we're still on these tiny little computers.

So, that actually gets to, in terms of these scaling laws, do they eventually run out because we just run out of compute? Or do we eventually get to the point where we're inventing new kinds of nuclear energy, and that is what unlocks the next level? A lot of energy that comes online now is for data centers, which was not true when you guys started training GPT-2.

Isn't that the upcoming bottleneck? I mean it's as it should be, right? It really should be that it's energy manufactured into intelligence and that's your only bottleneck.

But I'm saying that'll be like quite a plateau compared to the exponential growth we've seen over the past few years.

Unless things really change in terms of permitting and plans for building everything like that.

This is, I think, the core, right? If you look at every trend in this field, there are these exponentials, these S-curves that sum up to exponentials.

Sure.

But these exponentials were mostly existing in, like, tech, Silicon Valley space where it was pretty easy to have exponential growth.

It's pretty hard in permitting and real estate and damming rivers and building nuclear power plants.

It's harder to have exponential growth there.

Well, let's see how fusion pans out.

Yeah.

Okay, but even fusion, most industry observers would say, is still five years away.

And so, where does the next five years of power growth come from? I think that it is very possible that we end up bottlenecked on energy, and that's actually, onereason that we've been spending a lot of time really tryingto advocate for the fact that we just need far more power.

My observation of the market is that ultimately, the capitalist markets do provide.

I think there's this, like, absolute tsunami of demand that is coming our way, but I feel some confidence that, again, there's like, when there's enough pressure, when there's enough clarity of, this is the bottleneck... And it's not just for any company, right? It's really for national competitiveness.

You look at other countries that are just building huge amounts of power, far more than we are.

I think that actually for America to remain competitive, there's just no choice but to build it.

We've got to figure out power.

We do.

Speaking of bottlenecks, everyone was talking about the data wall in 2023.

I think this is an interesting thing, where no one is talking about the data wall anymore, and yet, it doesn't feel like AI progress has slowed down.

Is it just test-time compute? Is it, like, people were wrong about the data wall and whether it presented a bottleneck? Is there actually still a data wall, but it's two years away? It's basically all of these things, right? It truly is.

It's like, you keep changing the paradigm.

That is the real core of the Kurzweil view of the world, is that, fine, this one way of doing things taps out.

If you just look at that one way of doing things, you feel hopeless.

You feel like this is it, but somehow you will find a new S-curve.

And I think that's what's happened.

For example, synthetic data; for example, reinforcement learning, right? If you think about the RL paradigm, fundamentally that's a data production mechanism, right? And it's just that the AI happens to be training on its own data and then you learn it very rapidly, and then you learn on that.

Each of these has taken us much further.

I think there are lots of algorithmic ideas, lots of techniques, lots of ways of even using the existing data better.

I think that fundamentally the S-curves continue, and if you zoom out, it all looks smooth and uninterrupted.

So, it's kinda like chip miniaturization where each generation people are like, "Okay, well, that's the smallest you can possibly make a chip."

Do you know what I mean? "That's it, we're done with miniaturization," and somehow we figure out a way.

Yes.

Now one difference with chips is at the end of the day, there is some limit.

Right.

But we've never been that close to that.

Yes.

Where does AI coding go? In particular, vibe coding is all the rage right now.

It's kind of the term of 2025.

It's sort of working.

It's very impressive.

No one is really fully letting AI software engineers run end-to-end in production.

I'm just curious, what are your one to two-year predictions on what happens with AI coding? Well, my general observation is that once something kind ofworks in this field, the next gen is gonna be great.

I think that's where we are right now for AI coding.

I think what we're going to see is AIs taking more and more of the drudgery, more of this, like, pain, more of the parts that are not very fun for humans.

Now, one thing that's very interesting is that I think that so far, the vibe coding has actually taken a lot of code that is actually quite fun and left behind the review and the deployment, these things that are not fun at all.

I'm hopeful that we're actually gonna be able to make a lot of progress on these other areas as well, but fundamentally, we should really end up with a full AI coworker.

And I think it really will beanything you want to create, you can be the manager, and you can have this team of software engineering agents.

Now, the thing that I think will be very interesting to see is, is there a world where the AI becomes the manager and it gives you ideas, gives you some tasks to do? That's something that, again, it's just totally backwards in terms of how we think about it.

But are there ways in which you can actually have outcomes for companies and actually have people whose jobs become much more meaningful because they have an AI who really deeply understands them, in the same way that your AI doctor really deeply understands all of your needs.

But isn't part of the common thread that we're talking about here, often places where AI tools underperform, it's because they're trying to do something generally? Like, voice recognition is not that good because it's trying to recognize all voices as opposed to trying to recognize my voice in particular.

Similarly, with AI coding, they work well in places where you need no context at all and we're single-shotting an app based on publicly available libraries.

In places where you have to understand a million-line code base, they haven't fully figured out how to do a good job of that.

Is that a fair parallel to draw between all these challenges? Well, I think there aretwo things in there.

One is that I think this is already changing, right? So if you look at something like Codex, it's actually great at operating in a big codebase.

Like I ask it for where functionality is implemented and it's better than I am at finding it.

Which is kind of a wild fact.

It's super cool to see it, like, grepping around and just going and exploring.

Actually, this is one thing that we really shot for with Codex, was to build a tool for software engineers who are not necessarily vibe coding.

It's not about building a new app from scratch, which is a cool demo, but that's not actually how most software gets written.

Actually, I think maybe the killer enterprise feature is refactors, right? It's like rewriting your COBOL app or changing... When Facebook did HipHop to do static PHP.

If you think about it, the amount of deep, sophisticated thought that is required to accomplish a refactor is actually not that high.

There's a lot of mechanical work; it's just the sheer volume of it that's hard. That's an AI-shaped problem for sure.

I think we're going to see a lot more productivity on all sorts of AI tasks as a result.

Now, we are in a world where you said a second thing, which is maybe you need to narrow down more.

I think that the way these models work is you actually do want one model that knows more and more things, and you want it to have some personalization to you, but the fact that they have this one base model that kind of knows everything is actually a very useful starting point.

So I do think that you're going to see a world where we'll have more and more capable base models, and figuring out how you really connect it to all of your organization's code, and context, and history.

How does OpenAI decide what products to do? I'm just curious how you think about when to develop specific products versus when you think, "Oh, you can just do that with ChatGPT, and that's good enough."

Yeah.

It is a really tough question, right? It's something we really struggle with. When we first launched ChatGPT, we were left with this: "Well, we're an enterprise business, and we're a consumer business." And that seems terrifying, as a startup.

I remember talking to one of my board members who said, "It just feels like what you have is an unfocused strategy at first, because you're just doing all these different things."

But if you think about it, maybe an analogy is to a company like Disney, where you make one core asset like The Little Mermaid, right? Then you productize it in all these different ways.

You think about the Little Mermaid ride, the lunchbox, the T-shirt.

I think that we have some element of that.

We have a core model, and then we have a question of, "Well, what are the applications this can add a lot of value to quickly, right? With a small amount of additional work."

So I think the question of what areas to go into comes down to: how far does it take us off the general path; what's the return, how important is this domain, especially for achieving the bigger goal; and how much synergy is there with the other things we work on? So coding is one where there's very clear synergy, very clear ROI.

Because if we can speed ourselves up, that's something that accelerates everything.

How has being from North Dakota shaped you? Look, North Dakota was an amazing place to grow up.

Was it actually? Come on.

I've been there.

And it was great.

Look, it was incredibly safe.

Our doors didn't even have working locks.

It was that kind of place.

I had a lot of freedom academically.

Sixth grade, my dad taught me some algebra.

Seventh grade was the first time they split you into advanced math, so I was going to be taking pre-algebra.

My mom took me to go see the teacher, and we asked, "Can he skip?" The teacher looked at us very condescendingly and said, "Every parent believes that their child is special. I can guarantee your son will be plenty challenged in my class."

And so, after a month of me sitting in the back just playing games on my calculator, you know, she'd call on me randomly to try to trip me up, and I'd just look at the board and be like, "2x." She said, "Okay, fair enough. Your son has nothing to learn in this class."

So they moved me into eighth grade algebra.

But then eighth grade rolled around and I had no more math left in my middle school.

That's when you went to college, right? Well, so I did. In high school I started going to the University of North Dakota and took a bunch of classes there.

But also, I was connected to a lot of people who were the top math kids in the country through things like math camp and the math competitions.

So, you're saying the social scene was not too distracting in North Dakota? Not too distracting, but it was definitely fun.

Last question: do you remember we were going to Camp YC in 2017, and I asked you how far away AGI was, and you said two or three years?

Did I say that? You did.

I don't see the recording.

Well, I'm just trying to think: was I right, were you right? How should we grade that? Because we didn't get AGI, but we didn't not get AGI either.

I'm curious if you have any reflections from your own AGI prediction journey.

I think that we are... I will say, I think that AI is surprising.

I think that that is the single most consistent theme: the thing we were picturing, we got something different, but we got something better, more magical.

Something that is more helpful.

And so, I'm actually quite happy with that.

Now, predicting where you go, it's again, it's really hard to manage the outputs here.

One goal of OpenAI that we have successfully achieved is, every year, to have at least one result that just feels like a step function better than anything before.

You know it when you see it, kind of.

You just have one really awesome, AI-feeling thing each year.

That kind of thing.

I like that.

Yeah.

That's a good way to tie it back, which is, you know, the way we grade that prediction is that you've stopped setting metrics based on outputs.

Yeah, exactly, exactly.

Yeah.

Yeah.

Yes.

But it does feel like we're getting really close to something pretty magical.

I agree.

Thank you.

Thank you.

Video Summary

1. Startups usually find a problem first, but this project was the opposite.

2. He even imagined AI becoming a manager that hands out ideas and tasks.

3. This project was extremely hard and felt like it was doomed.

4. In 2010, Greg left MIT for Stripe and went on to become its CTO.

5. In 2015, he cofounded OpenAI.

6. Greg is known for being extremely productive.

7. They also showed old photos and a GDB quiz.

8. He tried to put people at ease and come across as approachable.

9. There were many questions, and he jumped straight into answering.

10. "AI is the future" has been said since the 1970s.

11. In 2013-14, there were many articles about deep learning.

12. Deep learning succeeded at image recognition in 2012.

13. Learning systems outperformed hand-written rules.

14. In 2014, machine translation also showed strong results.

15. Deep learning keeps succeeding across many fields.

16. Deep learning is why so many technologies are advancing at once.

17. Deep learning didn't work in the 1980s because compute was insufficient.

18. Today, compute capacity has grown enormously.

19. OpenAI took the scaling hypothesis very seriously.

20. The Dota 2 project showed the scaling hypothesis was right.

21. Performance kept improving as they scaled up the number of cores.

22. The scaling hypothesis kept delivering results.

23. It was a startup approach that pursued technology before a problem.

24. Advancing the technology by colliding with reality matters.

25. The Dota project also taught lessons about management and failure.

26. The lesson: inputs and experiments matter more than output goals.

27. There was also the story of AI competing against pro players.

28. Deep learning eventually learns the strategies you want.

29. The AI's results came from all-night effort.

30. AI is influencing games, medicine, education, and many other fields.

31. GPT-4 made a big impression on many people.

32. There are success stories in medicine, such as diagnostic help.

33. Personalization will be a key part of AI's future.

34. What AI remembers and draws on is becoming more important.

35. He believes product and research should work together.

36. OpenAI pursues technology before problems.

37. At first they didn't know the problem, but they advanced by following the technology.

38. AI stumbled at first but keeps improving.

39. AI's first applications were games like AI Dungeon.

40. GPT-3 was impressive even as a demo.

41. GPT-4 achieved real business success.

42. Success stories keep growing in medicine, education, and programming.

43. Personalization and better accessibility are key challenges ahead.

44. AI development still faces OS-level limits, but they should be resolved soon.

45. Interfaces and convenience will keep improving.

46. Compute and energy remain important issues.

47. Energy shortages could become a major obstacle to growth.

48. He believes much more power supply is needed.

49. The market is showing strong demand.

50. Advances in data and algorithms will continue.

51. The data wall keeps being broken through with new methods.

52. Synthetic data and reinforcement learning are examples.

53. Like chip miniaturization, AI will keep advancing.

54. AI coding will become even more powerful.

55. AI will take over repetitive and difficult tasks.

56. AI will grow into a collaborator for software engineers.

57. AI could even become a manager that hands out ideas and tasks.

58. Understanding complex codebases is already becoming possible.

59. AI can help with refactoring and large-scale work.

60. Product development is about connecting the core model to application areas.

61. OpenAI builds a variety of products on top of its core model.

62. Where you grow up shapes your creativity and ways of thinking.

63. North Dakota was a safe and free environment.

64. Learning math early built his sense of challenge.

65. In 2017, his AGI estimate was two to three years.

66. AI has now advanced further than expected.

67. AI is developing in ways more wonderful and useful than anticipated.

68. The goal is at least one step-function-better AI result every year.

69. He remains hopeful and optimistic about what's ahead.
