00:24 — Elad Gil (EG): The purpose of today's event is twofold. Number one is just getting people back together more regularly. So, you know, I'm going to be hosting an event every month or two on different topics. We had David here a couple of months ago to talk about the changing economic environment, and I'm very pleased to have Sam Altman here today to talk about what I think is one of the most interesting things happening right now: some of the shifts that are occurring in AI. So, really, the purpose is getting the community back together in San Francisco and, simultaneously, discussing relevant topics. And I think we have the lowest chairs in the history of these events. Speaking of that, they're quite low. I almost feel like I'm at a campsite by a campfire and nobody can see me. But thanks, everybody, for making it out.
01:05 — Sam Altman (SA): You have to make some economic prognostications too.
01:07 — EG: Yeah, I'm happy to talk about that at the end. So I wanted to introduce Sam really quickly, though Sam is probably known to everybody here. He's a serial founder. He was the president of YC. He's the CEO of OpenAI. He's an investor in everything from Airbnb and Stripe to other really exciting companies, and he helped set up Worldcoin, which is a crypto protocol. So he's done a bit of everything. Sam, welcome. We have a lot to cover today, and we're basically going to go through everything from the history of AI to, you know, are we eventually going to face an AI apocalypse? So maybe we can start on a positive note. I'd just love to hear your view of the history of AI: what has changed over time in the course of your being involved in and running OpenAI. And then we can talk a bit more about some of the concerns that have come up.
02:05 — SA: Yeah. In some sense, it has been this long, continuous arc that goes back to the beginning of computing. The thing that is working now, this idea of neural networks, is obviously not a new idea. It was left for dead for a long time. But it's remarkable. (We're going to switch the mics real quick to make it a little more audible for everyone. Is this better?) So I think it's just been this long-standing idea that people have been talking about since computers were invented, and we finally got enough compute for it to work. There's a little bit more to the story than that. I think a significant thing that happened was the shift to these large unsupervised models. That was not what most people predicted if we go back five or ten years. But fundamentally, the miracle is that we have an algorithm that can really, truly, genuinely learn, and we can throw more compute at it and it gets predictably better.
03:25 — EG: And I think we need to talk really closely into the mic so everyone can hear us. But how did we get to the current AI stack? It seems like there's been a really big paradigm shift toward these big unsupervised models, away from a lot of the things that came before. I'm a little bit curious what caused the shift from the sort of CNN and RNN world that we had even just three or four years ago. Transformers came out, the paper was in 2017 I think, and then it took a year or two for things to really gel. So I'm just sort of curious: what led us to where we are today from the model perspective?
02:58 — SA: The plan in the AI field, if we rewind back to like 2015 or something, was that we were going to train RL agents to play games. And OpenAI was one of these groups. We would put them in more and more sophisticated multi-agent environments, and they would have to learn all these social skills, how to interact with each other, and so on. And then eventually they would learn that they needed human knowledge, and they would go acquire it. That was the path a lot of people thought made sense. Very few people predicted that actually we'd be able to flip it around: that first we could have models learn all of human knowledge in this very non-agentic way, and then, once they had that represented, you could use them to do these more sophisticated tasks. This idea that you were just going to read all of the text ever written by humans on the internet, with no particular supervision signal, just trying to predict the next word one at a time, was a laughable idea when we started doing it. And it's gone further, certainly, than I thought it would, and I think further than most people in the field thought. Transformers were obviously a huge part of that. It was one of these rare times where we got something that was orders of magnitude more compute-efficient than what came before it. There are lots of good ideas in transformers, but they're really good at making use of the hardware that we have. But really it was that one fundamental idea: that we can scale up these large unsupervised models and get these quite surprising results, get zero-shot learning working, which I still think is somewhat miraculous.
It's one of those things that looks incredibly obvious in hindsight, but if you told someone that four or five years ago, they would look at you like you had no idea what you were talking about.
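(To make the objective Altman describes concrete: large language models are trained only to predict the next word, with the text itself as the supervision signal. Here is a minimal, hypothetical sketch of that idea using simple bigram counts instead of a neural network; the toy sentence and function names are illustrative, not from OpenAI's actual systems.)

```python
# Toy illustration of the unsupervised next-word objective: the only
# "label" for each position is the word that actually comes next.
import math
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ran".split()

# Count bigrams: for each word, how often each next word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Predicted distribution over the next word, from raw co-occurrence counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Average negative log-likelihood of the true next word: the same loss that
# large language models minimize, with a transformer in place of the counts.
pairs = list(zip(text, text[1:]))
loss = -sum(math.log(next_word_probs(p)[n]) for p, n in pairs) / len(pairs)
```

A real model replaces the count table with a neural network conditioned on the whole preceding context, but the training signal, predicting the next token from raw text, is the same.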
06:01 — EG: And that's been applied in really interesting ways by OpenAI. Obviously there's GPT and the large language models, there's DALL-E, and there are new APIs like Whisper, which is a really cool thing you folks are doing in speech-to-text. What are some of the big directions or areas that OpenAI is most focused on going forward?