Full transcript

From The TED Interview podcast: The race to build AI that benefits humanity, with Sam Altman


Hello there, this is Chris Anderson, and I am hugely, hugely, tremendously excited to welcome you to a new series of the TED Interview. Now, then, this season we're trying something new. We're organising the whole season around a single theme, albeit a theme that some of you may consider inappropriate. But hear me out. The theme is the case for optimism. And yes, I know the world has been hit with extraordinarily ugly things in the last few years: political division, a racial reckoning, technology run amok, not to mention a global pandemic and impending climate catastrophe. What on earth are we thinking in this context? Optimism just seems so naive and unwanted, almost annoying. So here's my position. Don't think of optimism as a feeling. It's not just this sort of shallow feeling of hope. Optimism is a search. It's a determination to look for a pathway forward somewhere out there. I believe, I truly believe, there are amazing people whose minds contain the ideas, the visions, the solutions that can actually create that pathway forward. If given the support and resources they need, they may very well light the path out of this dark place we're in. So these are the people who can present not optimism, but a case for optimism. They're the people I'm talking to this season. So let's see if they can persuade us.

Now, then, the place I want to start is with AI, artificial intelligence. This, of course, is the next innovative technology that is going to change everything as we know it, for better or for worse. Today it gets painted not with the usual dystopian brush, but by someone who truly believes in its potential. Sam Altman is the former president of Y Combinator, the legendary startup accelerator. And in 2015, he and a team launched a company called OpenAI, dedicated to one noble purpose: to develop AI so that it benefits humanity as a whole. You may have heard, by the way, a lot of buzz recently around an AI technology called GPT-3 that was developed by OpenAI, proof of the quality of the amazing team of researchers and developers they have working there. We'll be hearing a lot about GPT-3 in the conversation ahead. But sticking to this lofty mission of developing AI for humanity, and finding the resources to realize it, hasn't been simple. OpenAI is certainly not without its critics, but their goal couldn't be more important. And honestly, I found it really quite exciting to hear Sam's vision for where all this could lead. OK, let's do this.

So, Sam Altman, welcome.

Thank you for having me.

So, Sam, here we are in 2021. A lot of people are fearful of the future at this moment in world history. How would you describe your attitude to the future?

I think that the combination of scientific and technological progress and better societal decision making, better societal governance, is going to solve, in the next couple of decades, all of our current most pressing problems. There will be new ones. But I think we are going to get very safe, very inexpensive, carbon-free nuclear energy to work. And I think we're going to talk about the time when the climate disaster looked so bad, and how lucky we are that we got saved by science and technology. We've already now seen this with the rapidity with which we were able to get vaccines deployed. We are going to find that we are able to cure, or at least treat, a significant percentage of human disease, including, I think, actually making progress in helping people have much longer, decades-longer, health spans.
And I think in the next couple of decades, that will look pretty clear. I think we will build systems, with AI and otherwise, that make access to an incredibly high quality education more possible than ever before. I think if we look forward a hundred years, fifty years even, the quality of life available to anyone then will be much better than the quality of life available in the very best case to anyone today, to any single person today. So, yeah, I'm super optimistic. It's always easy to doomscroll and think about how bad the bad things are, but the good things are really good and getting much better.

Is it your sincere belief that artificial intelligence can actually make that future better?

Certainly. Look, as with any technology, I don't think it will all be better. There are always positive and negative use cases of anything new, and it's our job to maximize the positive ones and minimize the negative ones. But I truly, genuinely believe that the positive impacts will be orders of magnitude bigger than the negative ones. I think we're seeing a glimpse of that now. Now that we have the first general-purpose AI built out in the world and available via things like our API, I think we are seeing evidence of the breadth of services that we will be able to offer as this sort of technological revolution really takes hold. And we will have people interact with services that are smart, really smart, and it will feel as strange as the world before mobile phones feels now to us.

Hmm, yeah, you mentioned your API. I guess that stands for, what, application programming interface? It's the technology that allows complex technology to be accessible to others. So give me a sense of a couple of things that have got you most excited that are already out there, and then how that gives you visibility to a pathway forward that is even more exciting.

So I think that the things that we're seeing now are very much a glimpse of the future. We released GPT-3, which is a general-purpose natural language text model, in the summer of twenty twenty. You know, there are hundreds of applications that are now using it in production, and that's ramping up all of the time. There are things where people use GPT-3 to really understand the intent behind a search query and deliver results, to understand not only intent but all of the data, and deliver the thing you want. So you can describe a fuzzy thing and it'll understand documents. It can understand, you know, short documents, not full books yet, but bring you back the context of what you want. There's been a lot of excitement about using the generative capabilities to create games, or interactive stories, or letting people develop characters, or chat with a sort of virtual friend. There are applications that, for example, help a job seeker polish a tailored application for each individual company. There are the beginnings of tutors that can teach people about different concepts and take on different personas. And I could go on for a long time. But anything that you can imagine that you do today via computer, where you would like something that really understands and gets to know you, and not only that, but understands all of the data and knowledge in the world and helps you have the best experience possible: that will happen.
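To make the API idea concrete: a hosted model sits behind a web endpoint, you send it a prompt, and it sends back a completion. Here is a minimal sketch of that pattern in Python; the endpoint URL, parameter names, and response shape are illustrative placeholders, not OpenAI's actual interface.

```python
import requests

# Hypothetical text-completion endpoint, for illustration only.
API_URL = "https://api.example.com/v1/complete"
API_KEY = "your-key-here"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "Rewrite this sentence so a ten-year-old could understand it: ...",
        "max_tokens": 60,  # cap how much text the model generates
    },
)
print(response.json()["completion"])  # the model's generated text
```

The point of the pattern is that all of the complexity (the giant model, the hardware it runs on) stays on the provider's side; the caller only needs a prompt and a key, which is also what lets the provider monitor usage and pull access back if something goes wrong.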
So what gets opened up? What new adjacent possibilities exist as a result of these powers? Ask this question from the point of view of someone who's starting out on a career, for example, and trying to figure out what would be a really interesting thing to do in the future that has only recently become possible.

What are some new things that this opens up in a world where you can talk to a computer and get back immediately, for almost no money, the output that would normally require you to hire the world's experts? I would say: think about what's possible there. So that could be, as you said, what normally only the best programmer in the world, or a really great programmer, can do for me. Can I now instead just ask in English and have that program written? All these people that want to develop an app and have an idea, but don't know how to program: now they can have it. You know, what does the service look like where anyone on Earth who wants really great medical advice can get better medical advice than any single doctor could ever give, because this has the total medical knowledge and reasoning ability that the sum of humanity has ever produced? When you want to learn something, you have a tutor that understands your exact style, how you best learn, everything you know, and custom teaches you whatever concept you want to learn. Someday you can imagine that you have an AI that reads your email and your task list and your calendar and the documents you've been sent, and for any meeting maximally, perfectly prepares you, and has all of the information that you need, in all the context of your entire career, right there for you. I could go on for a long time, but I think these will just be powerful systems.

So it's really fun playing around with GPT-3. One compelling example for someone who's more text-based: try Googling the Guardian essay that was written entirely by different GPT-3 queries stitched together. It's an essay on why artificial intelligence isn't a threat to humanity, and it's impressive; it's very compelling. I actually tried one of the GPT-3 online uses myself. I asked the question: what is interesting about Sam Altman? Uh oh. Here's what it came back with. It was rather philosophical, actually. It came back with: "I don't understand what you mean by interesting. It seems to me that this word has no unambiguous meaning in the context of human society beyond its literal definition as somewhat pleasing or entertaining. I do not believe there to be any such thing as interestingness except in the mind of a human or other sentient being, and to my knowledge this is an entirely subjective state that varies from person to person. However, I will grant that this is an interesting question to ask. This does not mean it has been answered. There is no answer to be found."

Well, so would you agree that somewhere between profound and gibberish is about where the state of play is? I mean, that's where we are today.

I think somewhere between profound and gibberish is the right way to think about the current capabilities of GPT-3. There was definitely a bubble of hype about GPT-3 last summer. But the thing about bubbles is that the reason smart people fall for them is that there's a kernel of something really real and really interesting that people get overexcited about. And I think people definitely got, and still are, overexcited about GPT-3 today, but still probably underestimate the potential of where these models will go in the future.
And so maybe there's this short-term overhype and long-term underhype, for the entire field, for text models, for whatever you'd like. That's what's going on. And as you said, there's clearly some gibberish in there. But on the other hand, those were well-formed sentences, and there were a couple of ideas in there where I was like, oh, actually, maybe that's right. And I think if artificial intelligence, even in its current, very larval state, can make us confront new things and sort of inspire new ideas, that's already pretty impressive.

Give us a sense of what's actually happening in the background there. I think it's hard to understand, because you read these words and it seems like someone is trying to mean something. Obviously, I don't think you believe that whatever you've built there is a sort of thinking, sentient thing that's going, "oh, I must answer this question." So how would you describe what's going on? You've got something that has read the entire Internet, essentially, all of Wikipedia, etc.

We've trained something that's read a small fraction, a random sampling, of the Internet. We will eventually train something that has read as much of the Internet, or more of the Internet, than we've done right now; we have a very long way to go. I mean, we're still, I think, relative to what we will have, operating at quite small scale, with quite small AIs. But what is happening is there is a model that is ingesting lots of text and it is trying to predict the next word. We use transformers, which are a particular architecture of AI model. They take in a context of a lot of words, let's say a thousand or something like that, and they try to predict the word that comes next in the sequence. There's a lot of other things that happen, but fundamentally that's it. And I think this is interesting, because in the process of playing that little game of trying to predict the next word, these models have to develop a representation and understanding of what is likely to come next. I think it is maybe not perfectly accurate, but certainly worth considering, to say that intelligence is very near the ability to make accurate predictions.

What's confusing about this is that there are so many words on the Internet which are foolish, as well as the words that are wise. How do you build a model that can distinguish between those two? This is prompted, actually, by another example that I typed in. I asked, you know, "what is a powerful idea?" I'm very interested in ideas; that was my question. And it came back with several things, some of which seemed moderately profound, some of which seemed moderately gibberish. But then there was one that it came back with: the idea that the human race has, quote, "evolved," unquote, is false; evolution, or adaptation within a species, was abandoned by biology and genetics long ago. Wait a sec, that's news to me. What have you been reading? And I presume this has been pulled out of some recesses of the Internet. But how is it possible, even in theory, to imagine how a model can gravitate towards truth and wisdom, as opposed to just majority views? How do you avoid something taking us further into the sort of maze of errors and bad thinking that has already been a worrying feature of the last few years?

It's a fantastic question, and I think it is the most interesting area of research that we need to pursue.
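Stepping back to the mechanics Sam sketches here, the whole game is: given some context, predict the next word, append it, and repeat. A toy illustration of that loop, with a simple word-pair counter standing in for a billion-parameter transformer, might look like this:

```python
from collections import Counter, defaultdict

# Toy stand-in for a language model: count which word follows which
# in a tiny corpus, then repeatedly predict the most likely next word.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word."""
    return counts[word].most_common(1)[0][0]

context = "the"
generated = [context]
for _ in range(5):
    context = predict_next(context)
    generated.append(context)

print(" ".join(generated))  # e.g. "the cat sat on the cat"
```

A real model differs in scale and in that it predicts from a learned representation of a long context rather than a lookup table, but the generate-one-word-and-repeat loop is the same.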
Now, I think at this point, the questions of whether we can build a really powerful general-purpose AI system... I won't say they're in the rearview mirror, and we still have a lot of hard engineering work to do, but I'm pretty confident we're going to be able to. And now the questions are: what should we build, and how, and why, and what data should we train on? How do we build systems not just that can do these phenomenally impressive things, but that we can ensure do the things we want, and that understand the concepts of truth and falsehood and, you know, alignment with human values and misalignment with human values. One of the pieces of research that we put out last year that I was most proud of and most excited about is what we call reinforcement learning from human feedback. We showed that we can take these giant models that are trained on a bunch of stuff, some of it good, some of it bad, and then, with a really quite small amount of feedback from human judgment, saying, hey, this is good, this is bad, this is wrong, this is the behavior I want, I don't want this behavior, we can feed that information from the human judges back into the model, and we can teach the model: behave more like this and less like that. And it works better than I ever imagined it would. And that gives me a lot of hope that we can build an aligned system. We'll do other things, too. I think curating data sets, so there's just less sort of bad data to train on, will go a very long way. And as these models get smarter, I think they inherently develop the ability to sort out bad data from good data. And as they get really smart, they'll even start to do something we call active learning, which is where they ask us for exactly the data they need when they're missing something, when they're unsure, when they don't understand. But I think as a result of simply scaling these models up, building a better, I hate to use the word cognition because it sounds so anthropomorphic, but let's say building a better ability to reason into the models, to think, to challenge, to try to understand, and combining that with this idea of alignment to human values via this technique we developed, that's going to go a very long way. Now, there's another question, which we sort of just kicked down the field, which is: how do we as a society decide to which set of human values we align these powerful systems?

Yeah, indeed. So if I understand rightly what you're saying, it's possible to look at the output at any one time of GPT-3, and if we don't like what it's coming up with, in some way a human can say, no, that was off, don't do that; whatever algorithm or process led you to that, undo it.

Yeah.

And the system then gets incredibly powerful at avoiding that same kind of mistake in the future, because it sort of internalizes the instruction, correct?

Yeah. And eventually, and not much longer from now, I believe we'll be able to not only say "that was good" or "that was bad," but "that was bad for this reason," and "tell me how you got to that answer, so I can make sure I understand."

But at the end of the day, someone needs to decide who are the wise humans who are looking at the results. And it makes a big difference. Someone who grew up with an intelligent-design worldview could look at a result and go, that's a brilliant outcome, gold star, well done. And someone else would say, something has gone awfully wrong here.
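A minimal sketch of the feedback idea described above, under heavy simplification: real reinforcement learning from human feedback fine-tunes the model itself against a learned reward signal, but even a toy "reward model" fit to thumbs-up and thumbs-down labels, used here just to rerank candidate outputs, shows the shape of learning from human judgments.

```python
# Toy sketch: score candidates with a "reward model" built from human
# good/bad labels, then prefer the highest-scoring output. Real RLHF
# updates the generating model itself against a learned reward signal.

labeled = [
    ("helpful factual answer with sources", +1),  # human judge: good
    ("confident sounding gibberish", -1),         # human judge: bad
]

def reward(text):
    """Score text by word overlap with good vs. bad labeled examples."""
    score = 0
    for example, label in labeled:
        for word in example.split():
            if word in text.split():
                score += label
    return score

candidates = ["a factual answer", "entertaining gibberish"]
print(max(candidates, key=reward))  # prints "a factual answer"
```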
So how do you avoid, and this is a version of the problem that a lot of, I guess, Silicon Valley companies are facing right now, in terms of the pushback they're getting on the output of social media and so forth, how do you assemble that pool of experts who stand for the human values that we actually want?

I mean, we talk about this all the time. I don't think this is solely, or even close to majorly, up to OpenAI to decide. I think we need to begin a societal conversation now about how we're going to make those decisions, how we're going to make sure we have representational input in that, and how we build these very difficult global governance systems. My personal belief is that we should have pretty broad rules about what these systems will never do and will always do, but then the individual user should get a system that kind of behaves like they want. And there will be... people do have very different value systems. Some of them are just fundamentally incompatible. No one gets to use AI to, like, exploit other people, for example; hopefully we can all agree on that. But do you want the AI to, you know, support you in your belief in intelligent design? Do I think OpenAI should say it can't, even though I disagree with that as a scientific conclusion? No, I wouldn't take that stance. The thing to remember about all of this is that this technology is still quite extraordinarily weak. It still has such big problems, and it's still so unreliable, that for most use cases it's still unsuitable. But when we think about a system that is, like, a thousand times more powerful and, let's say, a million times more reliable, one that just doesn't say gibberish very often, that doesn't totally lose the plot and get distracted, a system like that is going to be one that a lot of the economic activity in the world comes to rely on. And I think it's very important that we don't have a small group of people saying you can never use it for this thing that, like, most of the world wants to use it for, just because it doesn't match our personal beliefs.

Talk a bit more about some of the other uses of it, because one of the things that's most surprising is that it's not just about text responses. It can take generalized human instructions and build things. For example, you can say to it: write a Python program that is designed to put a flashing cursor in one corner of the screen and the Google logo in the other corner. And it can go away and do something like that, shockingly, quite well, effectively.

Yeah, it can.

That's amazing. I mean, this is amazing to me. That opens the door to an entirely new way to think about programmers of the future: you could have people who can program just in human natural language, potentially, and gain rapid efficiency, with the AI doing the engineering.

We're not that far away from that world. We're not that far away from the world where you write a spec in English, and for a simple enough program the AI will just write the code for you. As you said, you can see glimpses of that even in this very weak GPT-3, which was not trained to code. I think this is important to remember: we trained it on language from the Internet, and, you know, language on the Internet also includes some code snippets. And that was enough.
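For a picture of the kind of program being asked for there, here is a small hand-written sketch (my illustration, not actual GPT-3 output) using Python's built-in tkinter: a blinking cursor in one corner of a window and a plain-text logo stand-in in the other.

```python
import tkinter as tk

root = tk.Tk()
root.title("Prompt-to-program sketch")
root.geometry("400x200")

# Blinking "cursor" in the top-left corner.
cursor = tk.Label(root, text="|", font=("Courier", 18))
cursor.place(x=10, y=10)

# Plain-text stand-in for a logo in the bottom-right corner.
logo = tk.Label(root, text="Google", font=("Helvetica", 14, "bold"))
logo.place(x=330, y=170)

def blink(visible=True):
    cursor.config(text="|" if visible else " ")
    root.after(500, blink, not visible)  # flip every half second

blink()
root.mainloop()
```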
So if we really tried to train a model on code itself, and that's where we decided to put the horsepower of the model, just imagine what would be possible. It would be quite impressive. But I think what you're pointing to there is that models like GPT-3, to some degree or other (and it's very hard to know exactly how much), understand the underlying concepts of what's going on. They're not just regurgitating things they found on a website; they can really apply them and say, oh yeah, I kind of know about this word and this idea and code, and this is probably what you're trying to do. And it won't always get it right, but sometimes it will just generate a brand-new program, for something that no one has ever asked before. And it will work. That's pretty cool. And data is data, so it can do that from English to code, and it can do that from English to French. Again, we never told it to learn about translation. We never told it about the concepts of English and French. But it learned them, even though we never said this is what English is, and this is what French is, and this is what it means to translate. It can still do it.

Wow. I mean, for creative people, is there a world coming where the palette of possibility that they can be exposed to just explodes? If you're a musician, is there a near future where you can say to your AI, OK, I'm going to bed now, but in the morning I'd love you to present me with a thousand tuba jingles, with words attached that have a sort of meme factor to them? And you come down in the morning, and the computer shows you the stuff, and one of them you go, wow, that is it, that is a top-10 hit, and you build a song from it. Or is that where the value-add will actually be?

We released something last year called Jukebox, which is very near what you described, where you can say, I want music generated for me in this style or that kind of thing, and it can come up with the words as well. And it's pretty cool. I really enjoy listening to music that it creates. And it can do full songs, two bars of a jingle, whatever you'd like. One of my very favorite artists reached out to OpenAI after we released this and said that he wanted to talk. And I was like, well, total fanboy here, I'd love to join that call. And I was so nervous that he was going to say, this is terrible, this is a really sad thing for human creativity, you know, why are you doing this? And he was so excited. He was like, this has been so inspiring, I want to do a new album with this. It's giving me all these new ideas, it's making me much better at my job, I'm going to make better music because of this tool. And that was awesome. And I hope that's how it all continues to go. We see a similar thing now with DALL-E, where graphic designers sometimes tell us that they see this new set of possibilities, because there's new creative inspiration, and their cycle time, the amount of time it takes to come up with an idea and be able to look at it and then decide whether to go down that path or head in a different direction, goes down so much. And so I think it's going to just be this incredible creative explosion for humans.

And how far away are we, Sam, before an AI comes up with a genuinely powerful new idea, an idea that solves a problem humans have been wrestling with?
It doesn't have to be quite on the scale of, OK, we've got a virus coming, please describe to us what a rational national response should look like. But some kind of genuinely innovative idea or solution. One internal question we've asked ourselves is: when will the first genuinely interesting, purely AI-written TED talk show up?

I think that's a great milestone. I will say, it's always hard to guess timelines, and I'm sure I'll be wrong on this, but I would guess the first genuinely interesting TED talk thought of, written, and delivered by an AI is within kind of the seven-ish-year time frame. Maybe a little bit less.

And it feels like, I mean, just reading that Guardian essay, which was a composite of several different GPT-3 responses to questions about, you know, the threats of robotics or whatever: if you throw a human editor into the mix, you could probably imagine something much sooner.

Indeed. Like, tomorrow.

Yeah. The hybrid version, where it's basically a tool-assisted TED talk, but one that is better than any TED talk a human could generate in a hundred hours or whatever, where you combine human discretion with AI horsepower: I suspect that's a next-year-or-two-years-from-now kind of thing, where it's just really quite good.

That's really interesting. How do you view the impact of AI on jobs? The familiar story is that every white-collar job is now up for destruction. What's your view there?

You know, I think it's always hard to make these predictions. That is definitely the familiar story now. Five years ago, it was that every blue-collar job was up for destruction; maybe last year, it was that every creative job was up for destruction, because of things like Jukebox. I think there will be an enormous impact on the job market, and I really hate it, I think it's kind of gross, when people working on AI pretend like there's not going to be, or sort of say, oh, don't worry about it, it'll all obviously just get better. It doesn't always obviously get better. What is true is that every technological revolution produces a change in jobs. We always find new ones, at least so far. It's difficult to predict, from where we're sitting now, what the new ones will be. And it's always tempting to say this time it's different. Maybe I'll be totally wrong. But from what I see now, this technological revolution is likely to be more dramatic, more of a staccato note, than most, and I think we as a society need to figure out how we're going to cushion everybody through that. I've got my own ideas about how to do that, and I wouldn't say that I have any reason to believe they're the right ones, but doing nothing, not really engaging with the magnitude of what's about to happen, is not an acceptable answer. So there's going to be huge impact. It's difficult to predict where it shows up the most, and I think previous predictions have mostly been wrong. But I'd like to see us all, as a society, certainly as a field, engage with what shifts we want to make to the social contract to get through that in a way that is maximally beneficial to everybody.

I mean, in every past revolution, there's always been a space for humans to move to. That is, if you like, moving up the food chain: we've retreated to the things that humans could uniquely do, think better, be more creative and so forth.
I guess the worry about AI is that, in principle, and I believe this, there is no human cognitive feat that won't ultimately be doable, probably better, by artificial general intelligence, simply because of the extra firepower it can ultimately have, the vast knowledge it brings to the table, and so forth. Is that basically right, that there is ultimately no safe space where we can say, oh, but an AI would never be able to do that?

On a very long time horizon, I agree with you. But that's such a long time horizon, I think. You know, maybe we've merged by that point; maybe we're all plugged in, and then we're this sort of symbiotic thing. I think there's an interesting example, as we were talking about a few minutes ago, where right now we have these systems that have sort of enormous horsepower but no steering wheel. It's like, you know, incredible capabilities but no judgment. And there are these obvious ways in which, today, even a human plus GPT-3 is far better than either on their own.

Many people speak about a world where AI is this external threat; you speak about a point where we actually merge with AIs in some way. What do you mean by that?

There are a lot of different versions of what I think is possible there. You know, in some sense, I'd argue the merge has already begun, the human-technology merge. We have this thing in our hands that sort of dictates a lot of what we think, but it gives us real superpowers, and that can go much, much further. Maybe it goes all the way to the Elon Musk vision of Neuralink, and having our brains plugged into computers, sort of literally a computer on the back of your head. Or it goes the other direction, and we get uploaded into one. Or maybe it's just that we all have a chatbot that kind of constantly steers us and helps us make better decisions than we could on our own. But in any case, I think the fundamental thing is that it's not the humans versus the AIs, competing to be the smartest sentient thing on Earth or beyond; it's this idea of being on the same team.

Hmm. I certainly get very excited by the sort of medium-term potential for creative people of all sorts, if they're willing to expand their palette of possibilities with the use of AI. I mean, the one thing that the history of technology has shown again and again is that something this powerful and with this much benefit is unstoppable, and you will get rewarded for embracing it the most and the earliest. So talk about what can go wrong with that. Let's move away from just the sort of economic displacement factor. You were a co-founder of OpenAI because you saw existential risks to humanity from AI. Today, what would you put as the most worrying of those risks, and how is OpenAI working to minimize them?

I still think all of the really horrifying risks exist. I am more confident, much more confident than I was five years ago when we started, that there are technical things we can do about
how we build these systems, and the research and the alignment work, that make us much more likely to end up in the really wonderful camp. But, you know, maybe OpenAI falls behind, and maybe somebody else builds an AGI and thinks about it in a very different way, or doesn't care as much as we'd like about safety and the risks, or strikes a different trade-off about how fast we should go with this, where they sort of just say, you know, let's push on for the economic benefits. But I think all of these risks that have traditionally been in the realm of sci-fi are real, and we should not ignore them. And I still lose sleep over them.

And just to update people: AGI is artificial general intelligence. Right now, we have incredible examples of powerful AI operating in specific areas; AGI is the ability of a computer mind to connect the dots and to make decisions with the same level of breadth that humans have. What's your sort of elevator pitch on AGI, on how to identify it and how to think of it?

Yeah. The way that I would say it is that for a while we were in this world of very narrow AI, you know, that could classify images of cats or whatever, more advanced stuff than that, but that kind of thing. We are now in the era of general-purpose AI, where you have these systems that are still very much imperfect tools, but that can generalize. One thing like GPT-3 can write essays and translate between languages and write computer code and do very complicated search. It's a single model that understands enough of what's really going on to do a broad array of tasks and learn new things quickly, sort of like people can. And then eventually we'll get to this other realm. Some people call it AGI, some people call it other things, but I think it implies that the systems are, to some degree, self-directed, and have some intentionality of their own.

Is it a simple summary to say that the fundamental risk is that there's the potential, with general artificial intelligence, for a sort of runaway effect of self-improvement that can happen far faster than humans can even keep up with? So that the day after you get to AGI, suddenly computers are thousands of times more advanced than us, and we have no way of controlling what they do with that power?

Yeah, and that is certainly in the risk space: that we build this thing, and at some point, somewhat suddenly, it's much more powerful than we are. We haven't really done the full merge yet. There's an event horizon there, and it's sort of hard to see to the other side of it. Again, there are lots of reasons to think it will go OK, and lots of reasons to think we won't even get to that scenario. But it is something that I don't think people should brush under the rug as much as they do. It's in the possibility space, for sure. And in the possibility subspace of that is one where we didn't actually do as good a job on the alignment work as we thought, and this sort of child of humanity acts in a very different way than we expect. A framework that I find useful is to think about a two-by-two matrix: short timelines to AGI versus long timelines to AGI on one axis, and a slow takeoff versus a fast takeoff on the other. In the short-timelines, fast-takeoff quadrant, which is not where I think we're going to be, but if we get there, I think there are a lot of scenarios in the direction you're describing that are worrisome, and that we would want to spend a lot of effort planning for.
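Spelling out that two-by-two for readers following along (the quadrant labels are my paraphrase of Sam's framing, not his words):

```python
# The four quadrants of the timelines-vs-takeoff framework.
for timeline in ("short timelines to AGI", "long timelines to AGI"):
    for takeoff in ("slow takeoff", "fast takeoff"):
        worst = timeline.startswith("short") and takeoff.startswith("fast")
        label = "most worrisome, plan hardest" if worst else "more room to adapt"
        print(f"{timeline} + {takeoff}: {label}")
```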
I mean, the fact that a computer could start editing its own code and improving itself while we're asleep, and you wake up in the morning and it's gotten smarter: that is the start of something super powerful and potentially scary.

I have tremendous misgivings about letting any system, not one we have today, but one that we might have in not too many more years, start editing its own code while we're not paying attention. I think that's the kind of thing that is worth a great deal of societal discussion. Just because we can do that, should we?

Yes, because one of the things that's been most shocking about the last few years has been just the power of unintended consequences. You don't have to believe that there's some sort of waking up of an alien intelligence that suddenly decides it wants to wreak havoc on humans. That may never happen. What you can have is just incredible power that goes amok. A lot of people would argue that what's happened in technology in the last few years is actually an example of that: social media companies created these intelligences that were programmed to maximally harvest attention, for example, and that turned out to be in some ways horrifying and extraordinarily damaging. Is that a meaningful sort of canary in the coal mine, saying, look out, humanity, this could be really dangerous? And how on earth do you protect against those kinds of unintended consequences?

I think you raise a great point in general, which is that these systems don't have to wish ill on humanity to cause ill, when you have very powerful systems. Unintended consequences, for sure. But another version of that, and I think this applies at the technical level, at the company level, and at the societal level, is that incentives are superpowers. Charlie Munger had this line, which is that incentives are so powerful that if you can spend any time whatsoever working on the incentive system, that's what you should do before you work on anything else. And I really believe that. I think it applies to the individual models we build and what their reward functions look like. I think it applies to society in a big way. And I think it applies to our corporate structure at OpenAI. You know, we sort of observed that if you have very well-meaning people, but they have this incentive to maximize attention harvesting, and profit, forever, through no one's ill intentions, that leads to a quite undesirable outcome. And so we set up OpenAI with this thing called a capped-profit model, specifically so that we don't have the systemic incentive to just generate maximum value forever with an AGI; that seems obviously quite broken. But even though we knew that was bad, and even though we all like to think of ourselves as good people, it took us a long time to figure out the right structure, to figure out a charter that's going to govern us, and a set of incentives that we believe will let us do our work. We have these three elements that we talk about a lot: research; engineering, development and deployment; and policy and safety. Put those all together under a system where you don't have to rely on anything but the natural incentives to push in a direction that we hope will minimize the sort of negative unintended consequences.

So help me understand this, because I think this is confusing to some people.
You started OpenAI, initially, I think, with Elon Musk as a co-founder, and there was a group of you, and the argument was: this technology is too powerful to be left to be developed in secret, and to be left to be developed purely by corporations, who have whatever incentives they may have. We need a nonprofit that will develop and share knowledge openly. Even at that early stage, some people were confused about this. They were saying: if this thing is so dangerous, why on earth would you want to make its secrets even more available, maybe giving the tools to some sort of AI terrorist in his bedroom somewhere?

I think we got misunderstood in the way we were talking about that. We certainly don't think that the right thing to do is to, like, build a super weapon and hand it to a terrorist. That's obviously awful. One of the reasons that we like our API model is that it lets us make the most powerful AI technology anyone in the world has, as far as we know, available to everyone who would like to use it, but to put some controls on its usage. And also, if we make a mistake, to be able to pull it back, or change it, or tweak it, or improve it, or whatever. But we do want to put, and this will continue to be true, with appropriate restrictions and guardrails, very powerful technology in the hands of people. I think that is fair. I think that will lead to the best results for society as a whole, and I think it will sort of maximize benefit. But that's very different from shipping the whole model and saying, here, do whatever you want with it. We're able to enforce rules on it. We also think, and this is part of the mission, that something the field was doing a lot of, that we didn't feel good about, was sort of saying, oh, we're going to keep the pace of progress and capabilities secret. That doesn't feel right, because I think we do need a societal conversation about what's going on here and what the impacts are going to be. And so, although we don't say, like, you know, "here's the super weapon," hopefully we do try to say: this is really serious, this is a big deal, this is going to affect all of us, and we need to have a big conversation about what to do with it.

Help me understand the structure a bit better, because you definitely surprised many people when you announced that Microsoft was putting a billion dollars into the organization, and in return, I guess, they get certain exclusive licensing rights. So, for example, they are the exclusive licensee of GPT-3. So talk about that structure. Microsoft presumably have invested not purely for altruistic purposes; they think that they will make money on that billion dollars.

I sure hope they do. I love capitalism. But the thing that I really loved even more about Microsoft as a partner, and I'll talk about the structure and the exclusive license in a minute, is this. We went around to people that might fund us, and we said, one of the things here is that we're going to try to make you some money, but AGI going well is more important, and we need you to sign this document that says, if things don't go the way we think and we can't make you money, you just cheerfully walk away from it, and we do the right thing for humanity. And they were like, yes, we are enthusiastic about that. We get that the mission comes first here. So, again, I hope it's a phenomenal investment for them.
But they really pleasantly surprised us on the upside with how aligned they were with us, about how strange the world may get here, and the need for us to have flexibility and put our mission first, even if that means they lose all their money. Which I hope they don't, and don't think they will.

So the way it's set up is that if, at some point in the coming year or two, Microsoft decides that there's some incredible commercial opportunity that they could realize out of the AI that you've built, and you feel, actually, no, that's damaging, you can block it? You can veto it?

Correct. So the most powerful versions of GPT-3 and its successors are available via the API, and we intend for that to continue. What Microsoft has is the ability to put that model directly into their own technology, if they want to do that. We don't plan to do that with other people, because we can't have all these controls that we talked about earlier, but they're a close, trusted partner, and they really care about safety too. But our goal is that anybody who wants to use the API can have the most powerful versions of what we've trained, and the structure of the API lets us continue to increase the safety and fix problems when we find them.

But the structure: so, we started out as a nonprofit, as you said. We realized pretty quickly that, although we went into this thinking that the way to get to AGI would be about smarter and smarter algorithms, we just needed bigger and bigger computers as well. And that was going to require a scale of capital that no one, at least certainly not me, could figure out how to raise as a nonprofit. We also needed to be able to compensate the very highly compensated, talented individuals that do this work. But a full for-profit company had the runaway-incentives problem, among other things, and also one about sort of fairness in society and wealth concentration that didn't feel right to us either. And so we came up with this hybrid, where we have a nonprofit that governs what we do, and it has a subsidiary LLC that we structure in a way that can make a fixed amount of profit, so that all of our investors and employees, hopefully, if things go how we'd like (and if not, no one gets any money), get to make this one-time great return on their investment, or on the time that they spent at OpenAI, their equity here. And then, beyond that, all the value flows back to the nonprofit, and we figure out how to share it as fairly as we can with the world. And I think that this structure, and this nonprofit with this very strong charter in place, and everybody who joins signing up for the mission coming first, and the fact that the world may get strange: that was at least the best idea we could come up with. And so far it feels like the incentive system is working, just as I sort of watch the way that we and our partners make decisions.

But if I read it right, the cap on the gain that investors can make is 100x. That's a massive cap.

That was for our very first-round investors. As we now take in a bit of capital, it's way, way lower.

So your deal with Microsoft isn't "you can only make the first hundred billion dollars, and after that we're giving it to the world"?

It's way lower than that.

Have you disclosed it?

I don't know if we have, so I won't accidentally do it now.

All right.
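To make the capped-return arithmetic concrete: under a 100x cap, a first-round investor's upside is bounded at 100 times what they put in, and anything generated beyond that flows to the nonprofit. The numbers below are illustrative only; the actual caps for later investors, as Sam says, haven't been disclosed here.

```python
def capped_payout(investment, value_created, cap_multiple=100):
    """Investor payout under a capped-profit structure (illustrative)."""
    return min(value_created, investment * cap_multiple)

# A hypothetical $10M first-round check against enormous value creation:
payout = capped_payout(10_000_000, 10**12)
print(payout)            # 1000000000 -- capped at 100x, i.e. $1B
print(10**12 - payout)   # everything beyond the cap flows to the nonprofit
```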
OK, so explain a bit more about the charter, and how you hope to avoid, or I guess help contribute to, an AI that is safe for humanity. What do you see as the keys to us avoiding the worst mistakes and really holding on to something that's beneficial for humanity?

My answer there is actually more about technical and societal issues than the charter, so if it's OK, let me answer it from that perspective. I think this question of alignment that we talked about a little earlier is paramount. And then I think it's useful to differentiate between accidental misuse of a system and intentional misuse of a system. Intentional would be a bad actor saying, I've got this powerful system, I'm going to use it to hack into all the computers in the world and wreak havoc on the power grids. And accidental would be kind of the Nick Bostrom scenario of making a lot of paper clips and viewing humans as collateral damage. In both cases, though to varying degrees, if we can really, truly, technically solve the alignment problem, and the societal problem of deciding to which set of human values we align, then the systems understand right and wrong, and they understand, probably better than we ever can, the unintended consequences of complex actions in very complex systems. And, you know, we can train a system where it's like: don't harm humanity. And the system can really understand what we mean when we say that. Again, "who is we?" and "what does that mean?" have some asterisks on them...

Sorry, go ahead.

Well, that's it: if they could understand what it means to not harm humanity. There's a lot wrapped up in that sentence. Because what's been so striking to me about efforts so far is that they seem to have been based on a very naive view of human nature. Go back to the Facebook and Twitter examples: the engineers building some of those systems would say, we've just designed them around what humans want to do. If someone wants to click on something, we will give them more of that thing. What could possibly be wrong with that? We're just supporting human choice. Ignoring the fact that humans are complicated, flawed animals who are constantly making choices that a more effective version of themselves would agree are not in their long-term interests. So that's one part of it. And then you've got, layered on top of that, the complications of systemic complexity, where multiple choices by thousands of people end up creating a reality that nobody would have designed. How do you cut through that? An AI has to make a decision based on a moment, on a specific data set. As those decisions get more powerful, how can we be confident that they don't lead to this sort of system crashing, basically, in some way?

I think I've heard a lot of behavioral psychologists, and other people that have studied this, say it in different ways. I hate to keep picking on Facebook, but we can do it one more time since we're on the topic. Maybe you can't, in any given moment at night when you're tired and you've had a stressful day, stop yourself from the dopamine hit of scrolling Instagram, even though you know that it's bad for you and it's not leading to your best life. But if you were asked in a reflective moment, when you were fully alert and thoughtful, "do you want to spend as much time as you do scrolling through Instagram? Does it make you happier or not?," you would actually be able to give the right long-term answer.
It's sort of a "the spirit is willing, but the flesh is weak" kind of moment. And one thing that I am hopeful about is that humans do know, on the whole, what we want, and presented with research, or sort of an objective view about what makes us happy and what doesn't, we're pretty good. But in any particular moment, we are subject to our animal instincts, and it is easy for the lower brain to take over. The AI, I think, can be an even higher brain, and as we teach it, you know, here is what we really do value, here is what we really do want, it will help us make better decisions than we are capable of, even in our best moments.

So is that being proposed and talked about as an actual rule? Because it strikes me that there is something potentially super profound here: to introduce some kind of rule for the development of AIs, that they have to tap into not what humans want, which is an ill-defined question, but what humans in reflective mode want.

Yeah, we talk about this a lot.

I mean, do you see a real chance that something like that could be incorporated as a sort of absolute golden rule, and, if you like, spread around the community so that it seeps into corporations and elsewhere? Because I've seen no evidence of it in corporations so far, and that would potentially be a game-changer.

Corporations have this weird incentive problem, right? What I was trying to speak about was something that I think should be technologically possible, and that's something that we as a society should demand. I think it is technically possible for this to be sort of a layer above the neocortex that makes even better decisions for us, and our welfare, and our long-term happiness and fulfillment, than we could make on our own. And I think it is possible for us as a society to demand that. And if we can do a pincer move between what the technology is capable of and what we as a society demand, maybe we can squeeze everybody in the middle that way.

I mean, there are instances where, even though companies have their incentives to make money and so forth, they also, in the knowledge age, can't make money if they have pissed off too many of their employees and customers and investors. By analogy, in the climate space right now, you can see more and more companies, even those that are emitting huge amounts of carbon dioxide, saying, wait a sec, we're struggling to recruit talented people, because they don't want to work for someone who's evil, and our customers are saying we don't want to buy something that is evil. And so you can picture processes where they do better. And I believe that most engineers, for example, who work in Silicon Valley companies are actually good people who want to design great products for humanity. I think that the people who run these companies want to be a net contribution to humanity. It's that we've rushed really quickly and designed stuff without thinking it through properly, and it's led to a mess-up. So it's like: OK, don't move fast and break things; slow down and build beautiful things that are built on a real version of human nature, and on a real version of system complexity and the risks associated with systemic complexity. Is that the agenda that, fundamentally, you think you can push somehow?

Yes, but I think the way we can push it is by getting the incentive system right. I think most people are fundamentally extremely good.
Very few people wake up in the morning thinking about how they can make the world a worse place. But the incentive systems that we're in are so powerful. Even those engineers who join with the absolute best of intentions get sucked into this world where they're trying to go up from, like, E4 to E5 or whatever Facebook calls those things, and, you know, it's pretty exciting. You get caught up playing the game; you're rewarded for doing things that move the company's key metrics; it's fun to get promoted; it feels good to make more money. And the incentive systems of the company, what it rewards in individual performance, are maybe not what we all want. And here I don't want to pick on Facebook at all, because I think there are versions of this at play at every big tech company, including, in some ways I'm sure, at OpenAI. But to the degree that we can better align the incentives of companies with the welfare of society, and then the incentives of individuals at those companies with those now-realigned incentives of the companies, the more likely we are to be able to have things like AGI that follow an incentive system of what we want in our most reflective, best moments, and are even better than what we could think of ourselves.

Is it still the vision for OpenAI that you will get to artificial general intelligence ahead of the corporations, so that you can somehow put a stake in the ground and build it the right way? Is that really a realistic thing to dream for? And if not, how do you live up to the mission and help ensure that this thing doesn't go off the rails?

I think it is. Look, I certainly don't think we will be the only group to build an AGI, but I think we could be the first. And I think if you are the first, you have a lot of norm-setting power. And I think you've already seen that. We have released some of the most powerful systems to date, and I think the way that we have done that, kind of a controlled release, where we've released a bigger model, then a bigger one, then a bigger one, and we try to talk about the potential misuse cases, and we try to talk about the importance of releasing this behind an API so that you can make changes: other groups have followed suit in some of those directions, and I think that's good. So, no, I don't think we can be the only one, but I do think we can be ahead. And if we are ahead, I think we can use that leverage to hopefully push people in a better direction. Or maybe we're wrong and somebody else has a better direction.

Do you have a structural advantage, in that your mission is to do this for everyone, as opposed to for some corporate objective? Why is it that this came out of OpenAI and not someone else? It's surprising, in some ways, when you're up against so much money and so much talent in these other companies, that you came up with this platform ahead of them.

You know, in some sense it's surprising, and in some sense, the startup wins most of the time. I'm a huge believer in startups as the best force for innovation we have in the world today. I talked a little bit about how we combine these three different clans, research, engineering, and sort of safety and policy, that don't normally combine well, and I think we have an unusual strength there. We're clearly well funded, and we have super talented people.
But what we really have is intense focus and self-belief that what we're doing is possible and good. And I appreciate the implied compliment. But, you know, we work really hard, and if we stopped doing that, I'm sure someone would run right by us.

Tell us a bit more about some of your prior life, Sam. For several years, you were running Y Combinator, which has had incredible impact on so many companies; there are so many startup stories that began at Y Combinator. What were the key drivers in your own life that took you on the path you're on, and how did that path end up at Y Combinator?

No exaggeration, I think I have, back to back, had the two jobs that are at least the most interesting to me in all of Silicon Valley. I went to college to study computer science; I was a major computer nerd growing up. I knew a little bit about startups, but not very much. I started working on a project, and that same year this thing called Y Combinator started and funded me and my co-founders. We dropped out of school and did this company, which I ran for about seven years, and then it got acquired. I had stayed close to Y Combinator the whole time. I thought it was just this incredible group of people, and spirit, and set of incentives, badly misunderstood by most of the world, but obvious to everyone within it that it was going to create huge amounts of value and do a lot of new things. After my company got acquired, PG, the founder of YC and truly one of the most incredible humans and business people, Paul Graham, asked me if I wanted to run it. And kind of the central learning of my career, at YC and with individual startups, has been that if you really scale them up, remarkable things can happen. And I did it. One of the things that made it exciting for me, personally motivating, was that I could sort of push it in the direction of funding these hard-tech companies, one of which became OpenAI.

Describe, actually, what Y Combinator is: you know, how many people come through it. Give us a couple of stories of its impact.

Yeah. So you basically apply as a handful of people and an idea, maybe a prototype, and say, I would like to start a company, and will you please fund me? And we review those applications, and, I shouldn't say "we" anymore, I guess: they fund four hundred companies a year. You get about one hundred and fifty thousand dollars, while YC takes about seven percent ownership, and then it gives you lots of advice, and networking, and this sort of fast-track program for starting a startup. I haven't looked at this in a while, but at one point a significant fraction of the billion-dollar-plus companies that got started in the US had come through the YC program. Some recently-in-the-news ones have been Airbnb, DoorDash, Coinbase, Instacart, Stripe. And I think it has become an incredible way to help people who understand technology get a three-month course in business, but instead of, like, burdening you with an MBA, it actually teaches you the things that matter, and people kind of go on to do incredible, incredible work.
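As a side note on the deal terms quoted there, the arithmetic implies a rough valuation. This is just the math of the two figures Sam mentions; YC's standard deal has changed over the years.

```python
investment = 150_000  # dollars invested by YC
ownership = 0.07      # stake taken in return (7 percent)

# If 7% costs $150k, the implied post-money valuation is:
print(round(investment / ownership))  # 2142857 -- roughly $2.1 million
```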
What is it about entrepreneurs? Why do they matter? Some people just find them kind of annoying. But I think you would argue, I think I would argue, that they have done as much as anyone to shape the future. Why? What is it about them? I think it is the ability to take an idea and, by force of will, make it happen in the world, in an incentive system that rewards you for making the most impact on the most people. In our system, that's how we get most of the things that we use. That's how we got the computer that I'm using, the software I'm using to talk to you on it, like, all of this. You know, everyone in life, everything, has a balance sheet. There's plenty of very annoying things about them, and there's plenty of very annoying things about the system that sort of idolizes them. But we do get something really important in return, and I think that as a force for making things that make all of our lives better happen, it's very cool. Otherwise, you know, like, if you have a great idea but you don't actually do anything useful with it for people, that's still cool, it's still intellectually interesting, but, like, there's got to be something about the reward function in society that asks, did you actually do something useful? Did you create value? And I think entrepreneurship and startups are a wonderful way to do that. You know, we get all these great software companies, but I also think it's how we're going to get AGI, how we're going to get nuclear fusion, how we're going to get life extension. And on any of those topics, and a long list of other things I could point to, there are a number of startups that I think are doing incredible work, some of which will actually deliver. It is a truly amazing thing, when you pull the camera back, to believe that a human being could be lying awake at night, and something pops inside their mind, as a patterning of the neurons in their brain, that is effectively them saying, aha, I can see a way where the future could be better, and they can actually picture it. And then they wake up, and then they talk to other people, and they persuade them, and they persuade investors and so forth. And the fact that this system can happen, and that they can then actually change the course of history, in some sense it is mind-boggling that it happens that way, and it happens again and again. So you've seen so many of these stories happen. What would you say, is there a key thing that differentiates good entrepreneurs from others? If you could double down on one trait, what would it be? If I could pick only one, I would pick determination. I think that is the biggest predictor of success, the biggest differentiator and predictor. And if you would allow a second, I would pick, like, communication skills or evangelism or something in that direction as well. There are all of the obvious ones that matter, like intelligence, but there's, like, a lot of smart people in the world. And when I look back at kind of the thousands of entrepreneurs I've worked with, many of whom were, like, quite capable, I would say those are, like, one and two of the surprisingly differentiated characteristics. When I look at the different things that you've built and are working on, I mean, it could not be more foundational for the future. I mean, entrepreneurship, I agree that this is really what has driven the future. But some people now look at Silicon Valley, and they look at this story, and they worry about the culture, right? That this is a bro culture. Do you see prospects of that changing anytime soon? And would you welcome it?
Can we get better companies by really working to expand the group of people who can be entrepreneurs and who can contribute to AI, for example? For sure. And in fact, I'm hopeful, since these are the two things I've thought the most about. I'm excited for the day when someone combines them and uses AI to better, and maybe more fairly, select who to fund and how to advise them, and really kind of make entrepreneurship super widely available. That will lead to, like, better outcomes and sort of more societal wealth for all of us. So, yeah, I think broadening the set of people able to start companies, and to get the resources that you need, that is, like, an unequivocally good thing, and it's something that I think Silicon Valley is making some progress on. But I hope we see a lot more. And I do really, truly think that the technology industry, entrepreneurship, is one of the greatest forces for self-betterment, if we can just figure out how to be a little bit more inclusive in how we do things. My last question today is about ideas worth spreading. If you could inject one idea into the mind of everyone listening, what would the idea be? We've touched on it a bunch, but the one idea would be that AGI really is going to happen. You have to engage with it seriously, and you shouldn't just listen to this and then brush it aside and go about life as if it's not going to happen, because it is going to affect everything. And we all, I think, have an obligation, but also an opportunity, to figure out what that means and how we want the world, and this sort of one-time shift, to go. I'm kind of awed by the breadth of things you're engaged with. Thank you so much for spending so much time sharing your vision. Thanks so much for having me. OK, that's it for today. You can read more about OpenAI's vision and progress at openai.com. If you want to try playing with GPT-3 yourself, it's a little tricky. You have to find a website that has licensed the API. The one I went to was philosopherai.com, where you pay a few dollars to get access to a very strange mind. That's actually quite a lot of fun. The TED Interview is part of the TED Audio Collective, a collection of podcasts dedicated to sparking curiosity and sharing ideas that matter. This show is produced by Kim Nederveen Pieterse and edited by Grace Rubenstein and Sheila Boffano. Sam Bair is our mixer. Fact-checking is by Paul Durbin, and special thanks to Michelle Quint, Colin Helms and Anna Phelan. If you like the show, please rate and review it. It helps other people find us. We read every review, so thanks so much for listening. See you next time.

Open Cloze


Hello there, this is _____ ________, and I am ______, hugely, ____________ excited to welcome you to a new series of the TED interview. Now, then, this season, we're trying something new. We're organising the whole season around a single theme, albeit a theme that some of you may consider inappropriate. But hear me out. The theme is the case for optimism. And yes, I know the world has been hit with _______________ ugly things in the last few years. Political division, a racial reckoning, technology run amuck, not to mention a ______ pandemic and impending _______ catastrophe. What on earth are we thinking in this context? Optimism just seems so _____ and unwanted, almost ________. So here's my position. Don't think of optimism as a feeling. It's not just this sort of shallow feeling of hope. Optimism is a search. It's a determination to look for a pathway forward somewhere out there. I believe I truly believe there are amazing people whose minds contain the ideas, the visions, the solutions that can actually create that pathway forward. If given the support and _________ they need, they may very well _____ the path out of this dark place we're in. So these are the people who can present not optimism, but a case for ________. They're the people I'm talking to this season. So let's see if they can persuade us now. Then the place I want to _____ is with A.I. artificial intelligence. This, of course, is the next innovative technology that is going to ______ everything as we know it, for better or for worse. Today was painted not with the usual _________ brush, but by someone who truly believes in its potential. Sam Altman is the former _________ of Y Combinator, the legendary startup accelerator. And in 2015, he and a team launched a company called Open Eye, dedicated to one noble purpose to develop A.I. so that it benefits humanity as a whole. You may have heard, by the way, recently a lot of buzz around in A.I. technology called T3 that was developed by open eye improve the quality of the amazing team of researchers and developers they have work in. There will be hearing a lot about three in the ____________ ahead. But sticking to this _____ mission of __________ A.I. for humanity and finding the resources to realize it haven't been simple. Open A.I. is certainly not without its critics, but their goal couldn't be more important. And honestly, I found it really quite exciting to hear Sam's vision for where all this could lead. OK, let's do this. So, Sam Altman, welcome. Thank you for having me. So, Sam, here we are in 2021. A lot of people are fearful of the future at this moment in world history. How would you describe your attitude to the future? I think that the combination of scientific and technological progress and better societal ________ making, better societal governance is going to solve in the next couple of decades all of our current most pressing ________, there will be new ones. But I think we are going to get very safe, very inexpensive, carbon free nuclear energy to work. And I think we're going to talk about that time that the climate disaster looks so bad and how lucky we are. We got _____ by science and technology, I think. And we've already now seen this with the ________ that we were able to get vaccines deployed. We are going to find that we are able to cure or at least _____ a significant percentage of _____ disease, including I think we'll just actually make progress in helping people have much longer decades, longer health spans. 
And I think in the next ______ of decades, that will look pretty clear. I think we will build systems with AI and otherwise that make access to an __________ high quality education more possible than ever before. I think the lives we look forward like one hundred _____, _____ years, even the quality of life available to anyone then will be much better than the _______ of life available in the very best case to anyone _____, to any single ______ today. So, yeah, I'm super optimistic. I think, like, it's always easy to do scroll and think about how bad are the bad things are, but the good things are really good and getting much better. Is it your sincere belief that artificial intelligence can actually make that future better? Certainly. How look, with any technology. I don't think it will all be better. I think there are always positive and negative use _____ of anything new, and it's our job to maximize the positive ones, minimize the negative ones. But I truly, genuinely believe that the positive impacts will be orders of magnitude bigger than the ________ ones. I think we're seeing a glimpse of that now. Now that we have the first general purpose built out in the world and available via things like RPI, I think we are seeing evidence of just the breadth of services that we will be able to offer as the sort of technological revolution really takes hold. And we will have people ________ with ________ that are smart, really smart, and it will feel like as strange as the world before mobile phones feels now to us. Hmm, yeah, you _________ your API, I _____ that stands for what, application programming _________? It's the technology that allows complex technology to be __________ to others. So give me a sense of a couple of things that have got you most excited that are already out there and then how that gives you visibility to a pathway forward that is even more exciting. So I think that the things that we're seeing now are very much glimpse of the future. We released three, which is a general-purpose natural language text model in the summer of ______ twenty. You know, there's ________ of applications that are now using it in production that's ramping up all of the time. But there are things where people use three to really understand the intent behind the search query and deliver results and sort of understand not only intent, but all of the data and deliver the thing of what you want. So you can sort of describe a fuzzy thing and it'll understand documents. It can understand, you know, short documents, not full books yet, but bring you back to the context of what you want. There's been a lot of excitement about using the generative capabilities to create sort of games or sort of interactive stories or letting people develop characters or chat with a sort of _______ friend. There are applications that, for example, help a job ______ polish a ________ ___________ for each individual company. There's the _________ of tutors that can sort of _____ people about different concepts and take on different personas. And we can go on for a long time. But I think anything that you can imagine that you do today via computer that you would like to really understand and get to know you. And not only that, but understand all of the data and knowledge in the world and help you have the best experience that is is possible that that will happen. So what gets opened up? 
What new adjacent possible state is that as a result of these powers from this question, from the point of view of someone who's starting out on a career, for example, they're trying to figure out what would be a really interesting thing to do in the future that has only recently become possible. What are some new things that this opens up in a world where you can talk to a computer? And get. The output that would normally require you hiring the world _______ back ___________ for almost no money, I would say think about what's possible there. So that could be like, as you said, what can normally only the best programmer in the world or a really great programmer do for me. And can I now instead just ask in English and have that _______ written? So all these people that, you know, want to develop an app and they have an idea, but they don't know how to program. Now they can have it. You know, what is the service look like when anyone on Earth who wants really great _______ advice? Can get better medical advice than any single doctor could ever get, because this has the total medical knowledge and reasoning ability that the some humanity has ever produced. When you want to _____ something, you have sort of a tutor that understands your exact style, how you best learn everything you know, and custom _______ you whatever concept you want to learn someday. You can _______ that like. You have an eye that reads your email and your task list and your calendar and the documents you've been sent and in any meeting maximally perfectly ________ you and has all of the information that you need in all the context of your entire career right there for you to go on for a long time. But I think this will just be powerful _______. So it's really fun _______ around with Chapatti three, one compelling example of someone who's more tax base is try Googling The Guardian essay that was written entirely by different GP2 three queries and stitched together. It's an essay on why artificial intelligence isn't a threat to humanity. And that's impressive. It's very compelling. I actually tried inputting one of the three online uses. I asked the question what is interesting about some ollman? Oh no. Here's what it came back with. It was it was rather _____________, actually. Came back with. I don't understand what you mean by interesting. It seems to me that this word has no unambiguous meaning in the context of human society beyond its literal definition as somewhat pleasing or entertaining. I do not believe there to be any such thing as INTERESTINGNESS except in the mind of a human or other sentient being that to my knowledge, this is an entirely __________ _____ that ______ from person to person. However, I will _____ that this is an interesting question to ask. This does not mean it has been answered. There is no answer to be found. Well, so you can agree that somewhere between profound and gibberish is that almost well, with the state of play is I mean, that's where we are today. I think somewhere between profound and _________ is the right way to think about the current capabilities of CGP three. I think they would definitely had a ______ of hype about three last summer. But the thing about bubbles is the reason that _____ people fall for them is there's a kernel of something really real and really interesting that people get overexcited about. And I think people definitely got and still are overexcited about 3:00 today, but still probably underestimated the potential of where these models will go in the future. 
And so maybe there's this like short term overhyped and long term under hype for the ______ field, for tax models, for whatever you'd like. It's going on. And as you said, there's clearly some gibberish in there. But on the other hand, those were like well-formed sentences. And there were a couple of ideas and there that I was like, oh, like they actually maybe that's right. And I think if artificial intelligence, even in its _______ very larval state, can make us confront new things and sort of inspire new _____, that's already pretty impressive. Give us a sense of what's actually happening in the background there. I think it's hard to understand because you read these words seem like someone is trying to mean something. Obviously, I think you believe that there's whatever you've built there, that there's a sort of thinking, sentient thing that's going, oh, I must answer this question. So so what how would you describe what's going on? You've got something that has read the entire Internet, essentially all of Wikipedia, etc. We've read something that's read like a small ________ of a random sampling of the Internet. We will eventually _____ something that has read as much of the Internet or more of the Internet than we've done right now. But we have a very long way to go. I mean, we're still, I think, relative to what we will have ________ at quite small scale with quite small eyes. But what is happening is there is a _____ that is ingesting lots of text and it is trying to predict the next word. So we use Transformer's they take in a _______, which is a particular architecture of an A.I. model, they take in a context of a lot of words, let's say like a thousand or something like that. And they try to predict the word that comes next in the sequence. And there's like a lot of other things that happen, but fundamentally that's it, and I think this is interesting because in the _______ of playing that little game of trying to predict the next word, these models have to develop a representation and understanding of what is likely to come next and. I think it is maybe not perfectly accurate, but certainly worth considering to say that intelligence is very near the ability to make accurate predictions. What's confusing about this is that there are so many _____ on the Internet which are foolish as well as the words that are wise. And and how do you build a model that can distinguish between those two? And this is prompted actually by another example that I typed in. Like I asked, you know, what is a powerful idea, very interested in ideas. That was my question as a powerful idea. And it came back with several things, some of which seemed __________ pronouncements, which seemed moderately gibberish. But then he was he was one that it came back with the idea that the human race has, _____, evolved, unquote, is false evolution or adaptation within a species was abandoned by biology and genetics long ago. Wait a sec. That's news to me. What have you been reading? And I presume this has been pulled out of some recesses of the Internet, but how is it possible, even in ______, to imagine how a model can _________ towards truth, wisdom, as opposed to just like majority views? Or how how how do you avoid something taking us further into the sort of the maze of errors and bad ________ and so forth that has already been a worrying feature for the last few years ? It's a fantastic question, and I think it is the most interesting area of research that we need to pursue. 
Now, I think at this _____, the questions of whether we can build really powerful general-purpose AI system, I won't say there in the rearview mirror. We still have a lot of hard engineering work to do, but I'm ______ confident we're going to be able to. And now the questions are like, what should we build? And how and why and what data should we train on and how do we build systems not just that can do these like phenomenally __________ things, but that we can ensure do the things that we want and that understand the concepts of truth and falsehood and, you know, alignment with human values and ____________ with human values. One of the pieces of ________ that we put out last year that I was most proud of and most excited about is what we call reinforcement learning from human feedback. And we ______ that we can take these giant models that are trained on a _____ of stuff, some of it good, some of the bad, and then with a really quite small ______ of feedback from human judgment about, hey, this is good, this is bad, this is wrong, this is the behavior I want I don't want this behavior. We can feed that information from the human judges back into the model and we can teach the model, behave more like this and less like that. And it works better than I ever imagined it would. And that gives me a lot of hope that we can build an aligned system. We'll do other things, too, like I think curating data sets where there's just less sort of bad data to train on. It will go a very long way. And as these models get smarter, I think they __________ _______ the ability to sort out bad data from good data. And as they get really smart, they'll even start to do something we call active ________, which is where they ask us for exactly the data they need when they're missing something, when they're ______, when they don't understand. But I think as a result of ______ scaling these models up, building better, I hate to use the word cognition because it sounds so anthropomorphic, but let's say ________ a better _______ to ______ into the ______, to think, to _________, to try to understand and combining that with this idea of online into human ______ via this technique we developed, that's going to go a very long way. Now, there's another question, which you sort of just kicked the ball down the field, too, which is how do we as a society decide to which set of human values do we align these powerful systems? Yeah, indeed. So if I if I understand rightly what you're saying, that you're saying that it's possible to look at the ______ at any one time of three. And if we don't like what it's coming up with, some ways human can say, no, that was off, don't do that. Whatever algorithm or process led you to that, undo it. Yeah. And that the system is that incredibly powerful at avoiding that same kind of mistake in future because it sort of replicates the instructions , correct? Yeah. And eventually and not much longer, I believe that we'll be able to not only say that was good, that was bad, but say that was bad for this reason. And also tell me how you got to that answer so I can make sure I understand. But at the end of the day, someone needs to decide who is the wise human or short humans who are looking at the results. So it's a big difference. Someone who who grew up with intelligent design _____ view could look at that and go, that's a _________ _______. Well, Goldstar done. And someone else would say something is done awfully wrong here. 
So how do you avoid and this is a version of the _______ that a lot of the, I guess, _______ Valley companies are facing right now in terms of the pushback they're getting on the output of social media and so forth. How do you ________ that pool of experts who _____ for human values that we actually want? I mean, we talk about this all the time, I don't think this is like solely or even not even _____ to majorly up to opening night to decide, I think we need to begin a societal conversation now about how we're going to make those decisions, how we're going to make sure we have representational input in that, and how we sort of make these very difficult global governance systems. My personal ______ is that we should have pretty _____ rules about what these systems will never do and will always do. But then the individual user should get a system that kind of _______ like they want. And there will be people do have very different value systems. Some of them are just fundamentally ____________. No one gets to use eye to, like, exploit other people, for example, and hopefully we can all agree on. But do you want the AI to like. You know, support you and your belief of intelligent design, like, do I think openly, I should say it can't, even though I disagree with that is like a scientific conclusion. No, I wouldn't take that stance. I think the thing to ________ about all of this is that history is still quite extraordinarily weak. It's still has such big problems and it's still so unreliable that for most use cases it's still unsuitable. But when we think about a system that is like a thousand times more powerful and let's say a million _____ more reliable, it just doesn't it doesn't say gibberish very often. It doesn't totally lose the plot and get distracted or system like that is going to be one that a lot of the economic activity in the world comes to rely on. And I think it's very important that we don't have a small group of people sort of saying you can never use it for this thing that, like most of the world wants to use it for because it doesn't match our personal beliefs. Talk a bit more about some of the other uses of it, because one of the things that's most __________ is it's not just about sort of text responses. It's it can take ___________ human instructions and build things up. For example, you can say to it, write a Python program that is designed to put a flashing cursor in one corner of the screen, in the ______ logo in the other corner. And and it can go your way and do something like that. Shockingly, quite well, effectively. Yeah, I it can. That's amazing. I mean, this is _______ to me. That opens the door to. An entirely way to think about programers for the future, that you could you could have people who can program just in human natural language potentially and gain rapid efficiency. I do the engineering. We're not that far away from that world. We're not that far away from the world where you will write a spec in English. And for a ______ enough program, I will just write the code for you. As you said, you can see glimpses of that even in this very week three which was not trained to code like. I think this is important to remember. We trained it on the language on the Internet very rarely, you know, Internet let language on the Internet also ________ some code snippets. 
And that was enough, so if we really try to go train a model on code itself and that's where we decide to put the horsepower of the model into, just imagine what will be possible will be quite impressive. But I think what you're pointing to there is that because models like three to some degree or other, and it's like very hard to know exactly how much understand the underlying concepts of what's going on. And they're not just regurgitating things they found in a _______, but they can really apply them and say, oh, yeah, I kind of like know about this word and this idea and code. And this is probably what you're trying to do. And I won't get it right always. But sometimes I will just ________ this like a brand new program for nothing that anyone has ever asked before. And it will work. That's pretty cool. And data is data. So it can do that from English to code. It can do that from _______ to French. Again, we never told it to learn about translation. We never told it about the concepts of English and ______, but it learned them, even though we never said this is what English is and this is what French is and this is what it means to _________, it can still do it. Wow, I mean, for ________ people, is there a world coming where the sort of the palette of possibility that they can be exposed to is just explodes? I mean, if you're a ________, is there a near future where you can say to your eye, OK, I'm going to bed now, but in the morning I'd love you to present me with a thousand tuba jingles with words attached that you have of a sort of mean factor to the and you come down in the _______ and the ________ shows you the stuff. And one of them, you go, wow, that is it. That is a top 10 hit and you build a song from it. Or is that going to be released? Actually be the value add. We released something last year ______ Jukebox, which is very near what you described, where you can say I want music generated for me in this style or this kind of stuff, and it can come up with the words as well. And it's like pretty cool. And I really enjoy listening to _____ that it _______. And I can sort of do four songs, two bars of a jingle, whatever you'd like. And one of my very ________ _______ reached out, called to open it after we _______ this and said that he wanted to talk. And I was like, well, I like total fanboy here. I'd love to join that call. And I was so nervous that he was going to say, this is terrible. This is like a really sad thing for human creativity. Like, you know, why are you doing this? This is like whatever. And he was so excited. And he's like, this has been so _________. I want to do a new _____ with this. You know, it's like, give me all these new ideas. It's making me much better at my job. I'm going to make better music because of this tool. And that was awesome. And I hope that's how it all continues to go. And I think it is going to lead to this. We see a similar thing now with Dolly, where graphic _________ sometimes tell us that they just they see this new set of possibilities because there's new creative inspiration and they're cycle time, like the amount of time it _____ to just come up with an idea and be able to look at it and then decide whether to go down that path or head in a different direction goes down so much. And so I think it's going to just be this like incredible creative explosion for humans. And how far away are we some before? And I it comes up with a genuinely powerful new idea, an idea that solves the problem that humans have been wrestling with. 
It doesn't have to be as quite on the _____ as of, OK, we've got a virus ______. Please ________ to us what a what a national rational response should look like, but some kind of genuinely innovative idea or solution like one one ________ question we've asked ourselves is, when will the first genuinely interesting, purely AI written TED talk show up? I think that's a great milestone. I will say it's always hard to guess timeline's I'm sure I'll be wrong on this, but I would guess the first genuinely ___________. Ted talk, thought of written delivered by an AIDS within the kind of the seven ish year time frame. Maybe a little bit less. And it _____ like I mean, just reading that Guardian essay that was kind of it was a composite of several different GPG three _________ to questions about, you know, the threats of robotics or whatever. If you throw in a human editor into the mix, you could probably imagine something much sooner. Indeed. Like ________. Yeah. So the hybrid the hybrid version where it's basically a tool assisted TED talk, but that it is better than any TED talk a human could generate in one hundred hours or whatever, if you can sort of combine human discretion with A.I. __________. I suspect that's like our next year or two years from now kind of thing where it's just really quite good. That's that's really interesting. How do you view the impact of A.I. on jobs? There's obviously been the familiar story is that every White-Collar job is now up for ___________. What's what's your view there? You know, it's I think it's always hard to make these ___________. That is definitely the familiar story now. Five years ago, it was every blue collar job is up for destruction, maybe like last year it was. Every creative job is up for destruction because of things like Jukebox I. I think there will be an enormous impact on. The job market, and I really hate it, I think it's kind of gross when people like working on I _______ like there's not going to be or sort of say, oh, don't worry about it. It'll just all obviously better. It doesn't always obviously get better. I think what is true is. Every technological revolution produces a change in jobs, we always find new ones, at least so far. It's difficult to _______ from where we're sitting now what the new ones will be and this technological revolution is likely to be. Again, it's always tempting to say this time it's different. Maybe I'll be totally wrong. But from what I see now, this technological revolution is likely to be more. ________. More of a ________ note than most, and I think we as a society need to figure out how we're going to cushion everybody through that. I've got my own ideas about how to do that. I, I wouldn't say that I have any reason to believe they're the right ones, but doing nothing and not really engaging with the magnitude of what's about to happen, I think it's like not an __________ ______. So there's going to be huge impact. It's _________ to predict where it shows up the most. I think previous predictions have mostly been wrong, but I I'd like to see us all as a society, certainly as a field, engage in what what the shifts we want to make to the ______ ________ are to kind of get through that in a way that is _________ __________ to everybody. I mean, in every past __________, there's always been a _____ for humans to move to. That is, if you like, ______ up the food chain, it's sort of we've retreated to the things that humans could ________ do, think better, be more creative and so forth. 
I guess the _____ about A.I. is that in principle, I believe this, that there is no human cognitive feat that won't ultimately be doable, probably better by artificial general touch, simply because of the extra _________ that ultimately they can have, the vast knowledge they bring to the table and so forth. Is that _________ right, that there is ultimately no safe sort of space where we can say, oh, but that would never be able to do that on a very long time horizon? I agree with you, but that's such a long time _______. I think that, you know, like maybe we've merged by that point, like maybe we're all plugged in and then, like, we're this sort of symbiotic thing. Like, I think there's an interesting example, as we were talking about a few _______ ago, where right now we have these systems that have sort of enormous horsepower but no ________ wheel. It's like, you know , incredible capabilities, but no judgment. And there's like these obvious ways in which today even a human plus three is far better than either on their own. Many people speak about a world where it's sort of A.I. as this external threat you speak about. At some point, we actually merge with eyes in some way. What do you mean by that? There's a lot of different ________ of what I think is possible there, you know, in some sense, I'd argue the merge has already like begun the human technology _____ like we have this thing in our hands that sort of dictates a lot of what we think, but it gives us real superpowers and that can go much, much further. Maybe it goes all the way to like the Elon Musk vision of neuro link and having our brains plugged into computers and sort of like literally we have a computer on the back of our head or goes the other direction and we get uploaded into one. Or maybe it's just that we all have a chat bot that kind of __________ steers us and _____ us make better decisions than we could. But in any case, I think the fundamental thing is it's not like the humans versus the eyes _________ to be the. Smartest sentient thing on earth or beyond. But it's that this idea of being on the same team. Hmm. I certainly get very excited by the sort of the medium term _________ for creative people of all sorts if they're willing to expand their palette of _____________. But with the use of A.I. to be willing to. I mean, the one thing that the history of technology has shown again and again is that something this powerful and with this much benefit is unstoppable and you will get rewarded for _________ it the most and the earliest. So talk about what can go _____ with that, so let's move away from just the sort of ________ displacement ______. You were a co-founder of Open Eye because you saw ___________ risks to humanity from high today. What would you put as the sort of the most ________ of those risks? And how is open eye working to minimize? I still think all of the really __________ risks exist. I am more _________, much more confident than I was five years ago when we _______ that there are technical things we can do about. 
How we build these systems and the research and the _________ that make us much more likely to end up in the kind of really wonderful camp, but, you know, like maybe open I fall behind and maybe somebody else feels ajai that thinks about it in a very different way or doesn't care as much as we'd like about safety and the risks or how to strike a different trade off of how fast we should go with this and where we should sort of just say, like, you know, like let's push on for the economic benefits. But I think all of this sort of like, you know, _____________ what's been in the realm of sci fi risks are real and we should not ______ them. And I still lose sleep over them. And just to update people is artificial general intelligence. Right now, we have incredible examples of powerful AI operating on specific areas. Ajai is the ability of a computer mind to connect the dots and to make decisions at the same level of breadth that that ______ have had. What's your sort of elevator pitch on Ajai about how to ________ and how to think of it? Yeah, I mean, the way that I would say it is that for a while we were in this world of like very narrow A.I. , you know, that could like ________ images of cats or whatever, more advanced stuff in that. But that kind of thing. We are now in the era of general purpose, AI, where you have these systems that are still very much _________ tools, but that can generalize. And one thing like GPP three can write essays and translate between languages and write computer code and do very ___________ search. It's like a single model that understands enough of what's really going on to do a broad array of tasks and learn new things quickly, sort of like people can. And then __________ we'll get to this other realm. Some people call it ajai, some people call ostler things. But I think it implies that the systems are like to some degree self directed, have some intentionality of their own is a simple summary to say that, like the fundamental risk is that there's the potential with general artificial intelligence of a sort of _______ effect of self-improvement that can happen far faster than any kind of humans can even keep up with, so that the day after you get to ajai, suddenly computers are _________ of times more advanced than us and we have no way of controlling what they do with that power. Yeah, and that is certainly in the risk space, which is that we build this thing and at some point somewhat suddenly, it's much more powerful than we are, we haven't really done the full merge yet. There's an event horizon there and it's sort of hard to see to the other side of it. Again, lots of reasons to think it will go OK. Lots of reasons to think we won't even get to that scenario. But that is something that. I don't think people should brush under the rug as much as they do, it's in the possibility space for sure, and in the possibility subspace of that is one where, like, we didn't actually do as good of a job on the alignment work as we thought. And this sort of child of humanity kind of acts in a very different way than we think. A framework that I find useful is to sort of think about like a two by two matrix, which is _____ timelines to ajai and long timelines to ajai and a slow take off and a fast take off on the other axis. And in the short timelines, fast take off quadrant, which is not where I think we're going to be. But if we get there, I think there's a lot of _________ in the direction that you are describing that are worrisome. 
And we would want to spend a lot of effort planning for. I mean, the fact that a computer could start editing its own code and improving itself while we're asleep and you wake up in the morning and it's got smarter, that is the start of something super powerful and potentially scary. I have tremendous misgivings about letting my system, not one we have today, but one that we might not have and too many more years start editing its own code while we're not paying attention. I think that's the kind of thing that is worth a _____ deal of societal __________ about, you know, just because we can do that. Should we? Yes, because one of the things that's that's been most ________ to you about the last few years has been just the power of unintended consequences. It's like you don't have to have a belief that there's some sort of ______ up of of an alien intelligence that suddenly decided it wants to _____ havoc on humans. That may never happen. What you can have is just incredible _____ that goes amok. So a lot of people would argue that what's happened in technology in the last few years is actually an example of that. You know, social media companies created these _____________ that were programmed to maximally _______ attention, for example, for sure. And they understand this from that turned out to be in some ways horrifying and extraordinarily damaging. Is that a meaningful sort of canary in the coal mine saying, look out, humanity, this could be really dangerous? And how how on earth do you protect against those kinds of unintended consequences? I think you raise a great point in general, which is these systems don't have to wish ill to humanity to cause ill just when you have, like, very powerful systems. I mean, __________ ____________ for sure. But another version of that is and I think this applies at the technical level, at the company level, at the societal _____, incentives are superpower's. Charlie Munger had this thing on, which is incentives are so powerful that if you can spend any time __________ working on the incentive system, that's what you should do before you work on anything else. And I really believe that. And I think that _______ to the __________ models we _____ and what their reward _________ look like. I think it applies to society in a big way, and I think it applies to our _________ structure at open. I you know, we sort of observe that if you have very well-meaning people, but they have this incentive to sort of maximize attention harvesting and profit forever through no one's ill intentions, that leads to a quite undesirable outcome. And so we set up opening is this thing called a capped profit model specifically so that we don't have the system incentive to just generate maximum value forever with an AGI that seems like obviously quite broken. But even though we knew that was bad and even though we all like to think of ourselves as good people, it took us a long time to figure out the right _________, to figure out a charter that's going to govern us and a set of incentives that we believe will let us do our work. And kind of these we have these like three elements that we talk about a lot research sort of engineering, development and deployment policy and safety. Put those all together under a system where you don't have to rely on. Anything but the natural __________ to push in a direction that we hope will minimize the sort of negative unintended consequences. So help me understand this, because this is I think this is confusing to some people. 
So you started _______. I initially I think Elon Musk, the co-founder, and there was a group of you and the argument was this technology is too powerful to be left, developed in ______ and to be left developed purely by corporations who have whatever incentive they may have. We need a _________ that will develop and _____ knowledge openly. First of all, just even at that early stage, some people were confused about this. It was saying if this thing is so dangerous, why on earth would you want to make it secrets even more available? Well, maybe ______ the tools to that sort of AI _________ in his bedroom somewhere, I think I think we got misunderstood in the way we were talking about that. We certainly don't think that the right thing to do is to, like, build this a super ______ and hand it to a terrorist. That's obviously _____. One of the reasons that we like our API model is it lets us make the most powerful AI __________ anyone in the world has, as far as we know, available to ever would like to use it, but to put some controls on its usage. And also, if we make a mistake, to be able to pull it back or change it or tweak it or improve it or whatever. But we do want to put and this is continued will continue to be true with appropriate restrictions and __________, very powerful technology in the hands of people. I think that is fair. I think that will lead to the best results for the society as a whole. And I think it will sort of maximize _______. But that's very different than sort of ________ the whole model and saying, here, do whatever you want with it. We're able to enforce rules on it. We also think and this is part of the _______ that like something the field was doing a lot of that we didn't feel good about was sort of saying like, oh, we're going to keep the pace of progress and capabilities secret. That doesn't feel right, because I think we do need a ________ conversation about what's what's going on here, what the _______ are going to be. And so we although we don't always say, like, you know, here's the _____ weapon, hopefully we do try to say, like, this is really serious. This is a big deal. This is going to affect all of us. We need to have a big conversation about what to do with it. Help me understand the structure a bit better, because you definitely surprised much people when you announced that _________ were putting a billion _______ into the organization and in return, I guess they get certain exclusive licensing rights. And so, for example, they are the exclusive licensee of CP3. So talk about that structure of how you win. Microsoft presumably have invested not purely for __________ purposes. They think that they will make money on that billion dollars. I sure hope they do. I love capitalism, but I think that I really _____ even more about Microsoft as a partner. And I'll talk about the structure and the exclusive license in a minute is that we like went around to people that might find us. And we said one of the things here is that we're going to try to make you some money. But like Adjei going well is more important. And we need you to sign this document that says if things don't go the way we think and we can't make you money like you just cheerfully walk away from it and we do the right thing for humanity. And they were like, yes, we are enthusiastic about that. We get that the mission comes first here. So again, I hope a phenomenal investment for them. 
But they were like they really __________ surprised us on the upside of how aligned they were with us, about how strange the world may get here and the need for us to have flexibility and put our mission first, even if that means they lose all their money, which I hope they don't and don't think they will. So the way it's set up is that if at some point in the coming year or two, two years, Microsoft decide that there's some incredible commercial opportunity that they could realize out of the eye that you've built and you feel actually, no, that's that's damaging. You can block it. You can veto it. Correct. So the four most powerful version of three and its successors are available via the API, and we intend for that to ________. What Microsoft has is the ability to sort of put that model directly into their own technology. If they want to do that. We don't plan to do that with other people because we can't have all these controls that we talked about earlier. But they're like a close trusted partner and they really care about safety, too. But our goal is that anybody who wants to use the API can have the most ________ versions of what we've trained. And the structure of the API lets us continue to increase the safety and fix problems when we find them. But but the structure. So we start out as a non-profit, as you said, we realized pretty quickly that although we went into this thinking that the way to get to ajai would be about smarter and _______ __________, that we just needed bigger and ______ _________ as well. And that was going to require a scale of capital that no one will, at least certainly not me, could figure out how to raise is a nonprofit. We also needed to sort of be able to compensate very highly compensated, talented individuals that do this, but are full for profit company had runaway incentives problem, among other things. Also just one about sort of fairness in society and wealth concentration that didn't feel right to us either. And so we came up with this kind of hybrid where we have a nonprofit that governs what we do, and it has a subsidiary, LLC, that we structure in a way to make a fixed amount of profit so that all of our investors and employees, hopefully if things go how we like, if not no one gets any money, but hopefully they get to make this one time great ______ on their investment or the time that they _____ it open their equity here. And then beyond that, all the value flows back to the nonprofit and we figure out how to share it as fairly as we can with the world. And I think that this structure and this nonprofit with this very ______ _______ in place and everybody who _____ _______ up for the mission come in first and the fact the world may get strange, I think that. That was at least the best idea we could come up with, and I think it feels so far like the incentive system is _______, just as I sort of _____ the way that we and our partners make decisions. But if I read it right, the cap on the gain that investors can make is 100 Axum. It's a massive call that was for our very first round investors. It's way, way lower. Like as we now take a bit of capital, it's way, way lower. So your deal with Microsoft isn't you can only make the first hundred billion dollars. I don't know. It's way lower than after that. We're giving it to the world. It's way lower than that. Have you disclosed what I don't know if we have, so I won't accidentally do it now. All right. OK, so explain a bit more about the charter and how it is that you. 
Hope to _____ or I guess help contribute to an eye that is safe for humanity. What do you see as the keys to us avoiding the _____ mistakes and really _______ on to something that's that's beneficial for humanity? My answer there is actually more about, like technical and societal issues than the charter. So if it's OK for me to answer it from that ___________, sure. OK, I'm happy to talk about the charter to. I think this question of alignment that we ______ about a little _______ is paramount, and then I think to __________ that it's useful to _____________ between accidental misuse of a system and intentional misuse of a system. So like intentional would be a bad actor saying, I've got this powerful system, I'm going to use it to like hack into all the computers in the world and wreak havoc on the power _____. And accidental would be kind of the Nick Bostrom make a lot of paper clips and view humans as collateral damage in both cases. But to _______ _______, if we can really, truly, technically solve the alignment problem and the societal problem of deciding to which set of human values do we align, then the systems understand right and wrong, and they understand probably better than we ever can, unintended consequences from _______ _______ and very complex systems. And, you know, if we can train a system which is like. Don't harm humanity and the system can really understand what we mean when we say that, again, who is we and what does that have some asterisks on them? Sorry, go ahead. Well, that's if they could understand what it means to not harm humanity, that there's a lot wrapped up in that sentence. Because what's been so striking to me about efforts so far is that they seem to have been based on a very naive view of human nature. Go back to the sort of Facebook and Twitter examples of, well, the engineers building some of the systems would say we've just designed them around what humans want to do. You said, well, if someone wants to _____ on something, we will give them more of that thing. And what could ________ be wrong with that? We're just __________ human choice, ignoring the fact that humans are complicated, farshid animals for sure, who are constantly making _______, that a more effective _______ of themselves would agree is not in their long term interests. So that's one part of it. And then you've got layered on top of that or the complications of systemic complexity where, you know, ________ choices by thousands of people end up creating a reality that possibly have designed for how how to cut through that. Like an AI has to make a decision based on a moment, on a ________ data set. As those decisions get more powerful, how can we be confident that they don't lead to this sort of system crashing basically in some way? I think that I've heard a lot of behavioral psychologists and other people that have studied this say in different ways, are that I hate to keep picking on Facebook, but we can do it one more time since we're on the topic. Maybe you can't in any given moment in _____ where you're tired and you have a stressful day, stop yourself from the dopamine hit of _________ and Instagram, even though you know that's bad for you and it's not leading to your best life. But if you were _____ in a reflective moment where you were sort of fully alert and __________, do you want to spend as much time as you do scrolling through Instagram? Does it make you happier or not? You would actually be able to give like the right long term answer? 
It's sort of the spirit is willing, but the flesh is weak kind of ______. And one thing that I am _______ is that humans do know what we want and what. On the whole, and _________ with research or sort of an objective view about what makes us happy and doesn't we're pretty, what's so great about it, they're pretty good. But in any particular moment, we are subjected to our animal instincts and it is easy for the lower _____ to take over the eye. Well , I think be an even higher brain. And as we can teach it, you know, here is what we really do value. Here's what we really do want. It will help us make better _________ than we are capable of, even in our best moments. So is that being proposed and talked about as an actual rule? Because it _______ me that there is something ___________ super profound here to introduce some kind of rule for development of AIDS that they have to tap into not. What humans one, which is an ill _______ question, but as to what humans in reflective mode want. Yeah, we talk about this a lot. I mean, do you see a real ______ where something like that could be incorporated as a sort of an absolute ______ rule and and if you like, ______ around the community so that it seeps into corporations and elsewhere? Because that I've seen no evidence that, well, a little corporation that was potentially a game changer. Corporations have this weird incentive problem. Right. What I was trying to speak about was something that I think should be technologically possible , and that's something that we as a society should demand. And I think it is technically possible for this to be sort of like a layer above the neocortex that makes even better decisions for us and our _______ and our long term happiness and fulfillment than we could make on our own. And I think it is possible for us as a _______ to demand that. And if we can do like a ______ move between what the technology is capable of and what we what we as society demand, maybe we can make everybody in the middle that way. I mean, there are instances of even though companies have their incentives to make money and so forth, they also in the knowledge age. Can't make _____ if they have pissed off too many of their employees and customers and investors by _______ of the climate space right now, you can see more and more companies, even those that are emitting huge amounts of carbon dioxide, saying, wait a sec, we're struggling to recruit talented people because they don't want to work for someone who's evil. And their customers are saying, we don't want to buy something that is evil. And so, you know, ultimately you can picture processes where they do better. And I I believe that most engineers, for example, work in Silicon Valley. Companies are actually good people who want to ______ great products for humanity. I think that the people who run these companies want to be a net contribution to ________. It's we've we've rushed really quickly and design stuff without thinking it through properly. And it's led to a mess up. So it's like, OK, don't move fast, break things, slow down and build beautiful things that are _____ on a real version of human nature and on a real version of system complexity and the risks associated with ________ complexity. Is that the ______ that fundamentally you think that you can push somehow? Yes, but I think the way we can push it is by getting the _________ system right. I think most people are _____________ _________ good. 
Very few people wake up in the morning thinking about how can I make the world a worse place? But the incentive systems that we're in are so powerful. And even those engineers who join with the absolute best of intentions get sucked into this world where they're, like, trying to go up from an E4 to an E5 or whatever Facebook calls those things, and, you know, it's pretty exciting. You get caught up playing the game; you're rewarded for kind of doing things that move the company's key metrics. It's, like, fun to get promoted. It feels good to make more money. And the incentive systems of the company, and what it rewards in individual performance, are maybe, like, not what we all want. And here I don't want to pick on Facebook at all, because I think there are versions of this at play at, like, every big tech company, including in some ways, I'm sure, at OpenAI. But to the degree that we can better align the incentives of companies with the welfare of society, and then the incentives of individuals at those companies with the now realigned incentives of those companies, the more likely we are to be able to have things like AGI that follow an incentive system of what we want in our most reflective best moments, and are even better than what we could think of ourselves. Is it still the vision for OpenAI that you will get to artificial general intelligence ahead of the corporations, so that you can somehow put a stake in the ground and build it the right way? Is that really a realistic thing to dream for? And if not, how do you live up to the mission and help ensure that this thing doesn't go off the rails? I think it is. Look, I certainly don't think we will be the only group to build an AGI, but I think we could be the first. And I think if you are the first, you have a lot of norm-setting power. And I think you've already seen that. You know, we have released some of the most powerful systems to date, and I think the way that we have done that, kind of in controlled release, where we've released a bigger model, then a bigger one, then a bigger one, and we sort of try and talk about the potential misuse cases, and we try to, like, talk about the importance of releasing this behind an API so that you can make changes: other groups have followed suit in some of those directions, and I think that's good. So, yes, I don't think we can be the only one. I do think we can be ahead. And if we are ahead, I think we can use that leverage to hopefully push people in a better direction. Or maybe we're wrong and somebody else has a better direction and we're doing something wrong. Do you have a structural advantage, in that your mission is to do this for everyone as opposed to for some corporate objective, and that that allows you that? Why is it that this came out of OpenAI and not someone else? It's, like, surprising in some ways, when you're up against so much money and so much talent in these other companies, that you came up with this platform ahead of them. You know, in some sense it's surprising, and in some sense, like, the startup wins most of the time. Like, I'm a huge believer in startups as the best force for innovation we have in the world today. I talked a little bit about how we combine these three different clans of research, engineering, and sort of safety and policy that don't normally combine well, and I think we have an unusual strength there. We're clearly, like, well funded. We have super talented people.
But what we really have is, like, intense focus and self belief that what we're doing is possible and good. And I appreciate the implied compliment. But, you know, we, like, work really hard. And if we stopped doing that, I'm sure someone would run by us fast. Tell us a bit more about some of your prior life, Sam. For several years, you were running Y Combinator, which had incredible impact on so many companies. There are so many startup stories that began at Y Combinator. What were key drivers in your own life that took you on the path you're on? And how did that path end up at Y Combinator? No exaggeration, I think I have, back to back, had the two jobs that are at least the most interesting to me in all of Silicon Valley. I went to college to study computer science. I was a super computer nerd growing up. I knew, like, a little bit about startups, but not very much. I started working on this project, and the same year, this thing called Y Combinator started and funded me and my co-founders. And we dropped out of school and did this company, which I ran for, like, seven years, and then after that it got acquired. I had stayed close to Y Combinator the whole time. I thought it was just this incredible group of people and spirit and set of incentives, and just badly misunderstood by most of the world, but obvious to everyone within it that it was going to create huge amounts of value and do a lot of new things. My company had been acquired, and PG, who is the founder of YC and, like, truly one of the most incredible humans and business people, Paul Graham, asked me if I wanted to run it. And kind of, like, the central learning of my career, YC, individual startups, has been that if you really scale them up, remarkable things can happen. And I did it, and I was like, one of the things that would make this exciting for me, personally motivating, would be if I could sort of push it in the direction of doing these hard tech companies, one of which became OpenAI. Describe actually what Y Combinator is, you know, how many people come through it; give us a couple of stories of its impact. Yeah. So you basically apply as a handful of people and an idea, maybe a prototype, and say, I would like to start a company, and will you please fund me? And we review those applications, and, I shouldn't say we anymore, I guess, they fund four hundred companies a year. You get about one hundred and fifty thousand dollars, while YC takes about seven percent ownership, and then it gives you lots of advice, and then networking, and sort of this, like, fast track program for starting a startup. I haven't looked at this in a while, but at one point a significant fraction of the billion dollar plus companies in the US that got started at all came through the YC program. Some recently-in-the-news ones have been, like, Airbnb, DoorDash, Coinbase, Instacart, Stripe. And I think it's just, it has become an incredible way to help people who understand technology get a three month course in business. But instead of, like, burdening you with an MBA, we actually teach you the things that matter, and they kind of go on to do incredible, incredible work. What is it about entrepreneurs? Why do they matter? Some people just find them kind of annoying. But I think you would argue, I think I would argue, that they have done as much as anyone to shape the future. Why? What is it about them? I think it is the ability to take
an idea and, by force of will, to make it happen in the world, in an incentive system that rewards you for making the most impact on the most people. Like, in our system, that's how we get most of the things that we use. That's how we got the computer that I'm using, the software I'm using to talk to you on it, like, all of this. You know, everyone in life, everything, has a balance sheet. There's plenty of very annoying things about them, and there's plenty of very annoying things about the system that sort of idolizes them. But we do get something really important in return. And I think that, as a force for making things that make all of our lives better happen, it's very cool. Otherwise, you know, like, if you have, like, a great idea, but you don't actually do anything useful with it for people, that's still cool, it's still intellectually interesting, but, like, there's got to be something about the reward function in society that is, like: did you actually do something useful? Did you create value? And I think entrepreneurship and startups are a wonderful way to do that. You know, we get all these great software companies. But I also think it's, like, how we're going to get AGI, how we're going to get nuclear fusion, how we're going to get life extension. And, like, on any of those topics, there's a long list of other things I could point to. There's, like, a number of startups that I think are doing incredible work, some of which will actually deliver. It is a truly amazing thing, when you pull the camera back, to believe that a human being could be lying awake at night, and something pops inside their mind, as a patterning of the neurons in their brain, that is effectively them saying, aha, I can see a way where the future could be better, and they can actually picture it. And then they wake up, and then they talk to other people, and they persuade them, and they persuade investors and so forth. And the fact that this system can happen, and that they can then actually change history in some sense; it is mind boggling that it happens that way, and it happens again and again. So you've seen so many of these stories happen. What would you say, is there a key thing that differentiates good entrepreneurs from others? If you could double down on one trait, what would it be? If I could pick only one, I would pick determination. I think that is the biggest predictor of success, the biggest differentiator predictor. And if you would allow a second, I would pick, like, communication skills or evangelism or something in that direction as well. There are all of the obvious ones that matter, like intelligence, but there's, like, a lot of smart people in the world. And when I look back at kind of the thousands of entrepreneurs I've worked with, many of whom were, like, quite capable, I would say those are, like, one and two of the surprisingly differentiated characteristics. When I look at the different things that you've built and you're working on, I mean, it could not be more foundational for the future. I mean, entrepreneurship, I agree that this is really what has driven the future. Some people now, they look at Silicon Valley and they look at this space, and they worry about the culture, right, that this is a bro culture. Do you see prospects of that changing anytime soon? And would you welcome it?
Can we get better companies by really working to expand the group of people who can be entrepreneurs and who can contribute to AI, for example? For sure. And in fact, I'm hopeful, since these are the two things I've thought the most about. I'm excited for the day when someone combines them and uses A.I. to better select who to fund, more fairly, maybe even predict who to fund and how to advise them, and really kind of make entrepreneurship super widely available. That will lead to, like, better outcomes and sort of more societal wealth for all of us. So, yeah, I think broadening the set of people able to start companies, and to sort of get the resources that you need, that is, like, an unequivocally good thing, and it's something that I think Silicon Valley is making some progress in. But I hope we see a lot more. And I do really, truly think that the technology industry, entrepreneurship, is one of the greatest forces for self betterment, if we can just figure out how to be a little bit more inclusive in how we do things. My last question today is about ideas worth spreading. If you could inject one idea into the mind of everyone listening, what would the idea be? We've touched on it a bunch, but the one idea would be that AGI really is going to happen. You have to engage with it seriously, and you shouldn't just listen to this and then brush it aside and go about life as if it's not going to happen, because it is going to affect everything. And we all, I think, have an obligation, but also an opportunity, to figure out what that means and how we want the world, and this sort of one time shift, to go. I'm kind of awed by the breadth of things you're engaged with. Thank you so much for spending so much time sharing your vision. Thanks so much for having me. OK, that's it for today. You can read more about OpenAI's vision and research at openai.com. If you want to try playing with GPT-3 yourself, it's a little tricky; you have to find a website that has licensed the API. The one I went to was philosopherai.com, where you pay a few dollars to get access to a very strange mind that's actually quite a lot of fun. The TED Interview is part of the TED Audio Collective, a collection of podcasts dedicated to sparking curiosity and sharing ideas that matter. This show is produced by Kim Nederveen Pieterse and edited by Grace Rubenstein and Sheila Boffano. Fact check is by Paul Durbin, and special thanks to Michelle Quint, Colin Helms and Anna Phelan. If you like the show, please rate and review it; it helps other people find us. We read every review, so thanks so much for listening. See you next time.


Original Text


Hello there, this is Chris Anderson, and I am hugely, hugely, tremendously excited to welcome you to a new series of the TED interview. Now, then, this season, we're trying something new. We're organising the whole season around a single theme, albeit a theme that some of you may consider inappropriate. But hear me out. The theme is the case for optimism. And yes, I know the world has been hit with extraordinarily ugly things in the last few years. Political division, a racial reckoning, technology run amuck, not to mention a global pandemic and impending climate catastrophe. What on earth are we thinking in this context? Optimism just seems so naive and unwanted, almost annoying. So here's my position. Don't think of optimism as a feeling. It's not just this sort of shallow feeling of hope. Optimism is a search. It's a determination to look for a pathway forward somewhere out there. I believe I truly believe there are amazing people whose minds contain the ideas, the visions, the solutions that can actually create that pathway forward. If given the support and resources they need, they may very well light the path out of this dark place we're in. So these are the people who can present not optimism, but a case for optimism. They're the people I'm talking to this season. So let's see if they can persuade us now. Then the place I want to start is with A.I. artificial intelligence. This, of course, is the next innovative technology that is going to change everything as we know it, for better or for worse. Today was painted not with the usual dystopian brush, but by someone who truly believes in its potential. Sam Altman is the former president of Y Combinator, the legendary startup accelerator. And in 2015, he and a team launched a company called Open Eye, dedicated to one noble purpose to develop A.I. so that it benefits humanity as a whole. You may have heard, by the way, recently a lot of buzz around in A.I. technology called T3 that was developed by open eye improve the quality of the amazing team of researchers and developers they have work in. There will be hearing a lot about three in the conversation ahead. But sticking to this lofty mission of developing A.I. for humanity and finding the resources to realize it haven't been simple. Open A.I. is certainly not without its critics, but their goal couldn't be more important. And honestly, I found it really quite exciting to hear Sam's vision for where all this could lead. OK, let's do this. So, Sam Altman, welcome. Thank you for having me. So, Sam, here we are in 2021. A lot of people are fearful of the future at this moment in world history. How would you describe your attitude to the future? I think that the combination of scientific and technological progress and better societal decision making, better societal governance is going to solve in the next couple of decades all of our current most pressing problems, there will be new ones. But I think we are going to get very safe, very inexpensive, carbon free nuclear energy to work. And I think we're going to talk about that time that the climate disaster looks so bad and how lucky we are. We got saved by science and technology, I think. And we've already now seen this with the rapidity that we were able to get vaccines deployed. We are going to find that we are able to cure or at least treat a significant percentage of human disease, including I think we'll just actually make progress in helping people have much longer decades, longer health spans. 
And I think in the next couple of decades, that will look pretty clear. I think we will build systems with AI and otherwise that make access to an incredibly high quality education more possible than ever before. I think if we look forward, like, one hundred years, fifty years even, the quality of life available to anyone then will be much better than the quality of life available in the very best case to anyone today, to any single person today. So, yeah, I'm super optimistic. I think, like, it's always easy to doomscroll and think about how bad the bad things are, but the good things are really good and getting much better. Is it your sincere belief that artificial intelligence can actually make that future better? Certainly. Look, with any technology, I don't think it will all be better. I think there are always positive and negative use cases of anything new, and it's our job to maximize the positive ones, minimize the negative ones. But I truly, genuinely believe that the positive impacts will be orders of magnitude bigger than the negative ones. I think we're seeing a glimpse of that now. Now that we have the first general purpose AI built out in the world and available via things like our API, I think we are seeing evidence of just the breadth of services that we will be able to offer as the sort of technological revolution really takes hold. And we will have people interact with services that are smart, really smart, and it will feel as strange as the world before mobile phones feels now to us. Hmm, yeah, you mentioned your API; I guess that stands for what, application programming interface? It's the technology that allows complex technology to be accessible to others. So give me a sense of a couple of things that have got you most excited that are already out there, and then how that gives you visibility to a pathway forward that is even more exciting. So I think that the things that we're seeing now are very much a glimpse of the future. We released GPT-3, which is a general-purpose natural language text model, in the summer of twenty twenty. You know, there's hundreds of applications that are now using it in production, and that's ramping up all of the time. There are things where people use GPT-3 to really understand the intent behind a search query and deliver results, and sort of understand not only intent, but all of the data, and deliver the thing that you want. So you can sort of describe a fuzzy thing and it'll understand documents. It can understand, you know, short documents, not full books yet, but bring you back the context of what you want. There's been a lot of excitement about using the generative capabilities to create sort of games or interactive stories, or letting people develop characters or chat with a sort of virtual friend. There are applications that, for example, help a job seeker polish a tailored application for each individual company. There's the beginning of tutors that can sort of teach people about different concepts and take on different personas. And we can go on for a long time. But I think anything that you can imagine that you do today via computer, that you would like to really understand and get to know you, and not only that, but understand all of the data and knowledge in the world and help you have the best experience: that is possible, that will happen. So what gets opened up?
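A brief aside on what "available via the API" meant in practice: GPT-3 was exposed as a hosted completions endpoint rather than as downloadable weights. Below is a minimal sketch of what a call looked like with the `openai` Python client of that era; the engine name, prompt, and parameter values are illustrative assumptions, not details taken from the conversation.

```python
# Minimal sketch of calling the GPT-3 completions API as it looked
# around the time of this conversation, via the `openai` Python client.
# Engine name, prompt, and parameters are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hardcode keys

response = openai.Completion.create(
    engine="davinci",          # the largest GPT-3 engine at the time
    prompt="Write a two-sentence case for optimism about AI:",
    max_tokens=64,             # length budget for the completion
    temperature=0.7,           # >0 allows some creative variation
)
print(response.choices[0].text.strip())
```

Because the model stays behind the hosted endpoint, usage can be monitored, restricted, or rolled back, which is the control-and-release point made above.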
What new adjacent possibles open up as a result of these powers? Take this question from the point of view of someone who's starting out on a career, for example: they're trying to figure out what would be a really interesting thing to do in the future that has only recently become possible. What are some new things that this opens up, in a world where you can talk to a computer and get the output that would normally require you hiring the world's experts, back immediately, for almost no money? I would say, think about what's possible there. So that could be, like, as you said, what can normally only the best programmer in the world, or a really great programmer, do for me? And can I now instead just ask in English and have that program written? So all these people that, you know, want to develop an app, and they have an idea, but they don't know how to program: now they can have it. You know, what does the service look like when anyone on Earth who wants really great medical advice can get better medical advice than any single doctor could ever give, because this has the total medical knowledge and reasoning ability that the sum of humanity has ever produced? When you want to learn something, you have sort of a tutor that understands your exact style, how you best learn, everything you know, and custom teaches you whatever concept you want to learn. Someday, you can imagine that, like, you have an AI that reads your email and your task list and your calendar and the documents you've been sent, and in any meeting maximally, perfectly prepares you and has all of the information that you need, in all the context of your entire career, right there for you. I could go on for a long time, but I think these will just be powerful systems. So it's really fun playing around with GPT-3. One compelling example, on the more text-based side, is to try Googling the Guardian essay that was written entirely from different GPT-3 queries and stitched together. It's an essay on why artificial intelligence isn't a threat to humanity. And that's impressive. It's very compelling. I actually tried one of the GPT-3 online uses. I asked the question: what is interesting about Sam Altman? And here's what it came back with. It was rather philosophical, actually. It came back with: I don't understand what you mean by interesting. It seems to me that this word has no unambiguous meaning in the context of human society beyond its literal definition as somewhat pleasing or entertaining. I do not believe there to be any such thing as INTERESTINGNESS except in the mind of a human or other sentient being; that, to my knowledge, is an entirely subjective state that varies from person to person. However, I will grant that this is an interesting question to ask. This does not mean it has been answered. There is no answer to be found. Well, so you could agree that's somewhere between profound and gibberish, and that's almost where the state of play is. I mean, that's where we are today. I think somewhere between profound and gibberish is the right way to think about the current capabilities of GPT-3. I think there was definitely a bubble of hype about GPT-3 last summer. But the thing about bubbles is, the reason that smart people fall for them is there's a kernel of something really real and really interesting that people get overexcited about. And I think people definitely got, and still are, overexcited about GPT-3 today, but still probably underestimate the potential of where these models will go in the future.
And so maybe there's this, like, short term overhype and long term underhype for the entire field, for text models, for whatever you'd like, that's going on. And as you said, there's clearly some gibberish in there. But on the other hand, those were, like, well-formed sentences. And there were a couple of ideas in there where I was like, oh, actually, maybe that's right. And I think if artificial intelligence, even in its current very larval state, can make us confront new things and sort of inspire new ideas, that's already pretty impressive. Give us a sense of what's actually happening in the background there. I think it's hard to understand, because you read these words that seem like someone is trying to mean something. Obviously, I don't think you believe that there's, in whatever you've built there, a sort of thinking, sentient thing that's going, oh, I must answer this question. So what, how would you describe what's going on? You've got something that has read the entire Internet, essentially, all of Wikipedia, etc.? We've trained something that's read, like, a small fraction of a random sampling of the Internet. We will eventually train something that has read as much of the Internet, or more of the Internet, than we've done right now. But we have a very long way to go. I mean, we're still, I think, relative to what we will have, operating at quite small scale with quite small AIs. But what is happening is, there is a model that is ingesting lots of text, and it is trying to predict the next word. So we use transformers, which are a particular architecture of an A.I. model. They take in a context of a lot of words, let's say, like, a thousand or something like that, and they try to predict the word that comes next in the sequence. And there's, like, a lot of other things that happen, but fundamentally, that's it. And I think this is interesting, because in the process of playing that little game of trying to predict the next word, these models have to develop a representation and understanding of what is likely to come next. And I think it is maybe not perfectly accurate, but certainly worth considering, to say that intelligence is very near the ability to make accurate predictions. What's confusing about this is that there are so many words on the Internet which are foolish, as well as the words that are wise. And how do you build a model that can distinguish between those two? And this is prompted actually by another example that I typed in. Like, I asked, you know, what is a powerful idea? I'm very interested in ideas; that was my question: what is a powerful idea? And it came back with several things, some of which seemed moderately profound, some of which seemed moderately gibberish. But then there was one that it came back with: the idea that the human race has, quote, evolved, unquote, is false; evolution or adaptation within a species was abandoned by biology and genetics long ago. Wait a sec, that's news to me. What have you been reading? And I presume this has been pulled out of some recesses of the Internet. But how is it possible, even in theory, to imagine how a model can gravitate towards truth, towards wisdom, as opposed to just, like, majority views? Or how do you avoid something taking us further into the sort of maze of errors and bad thinking and so forth that has already been a worrying feature of the last few years? It's a fantastic question, and I think it is the most interesting area of research that we need to pursue.
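The "little game" described above, take in a context and predict the next word, can be illustrated without any neural network at all. Here is a minimal toy sketch, using only the Python standard library: a bigram model that counts which word follows which in some training text and predicts the most frequent continuation. GPT-3 plays the same game with a transformer over contexts of roughly a thousand words rather than a single word; the training text below is made up purely for illustration.

```python
# Toy illustration of next-word prediction: a bigram model that counts,
# for each word, which word most often follows it. GPT-3 plays the same
# game with a transformer over long contexts instead of a single word.
from collections import Counter, defaultdict

text = (
    "optimism is a search it is a determination to look for a pathway "
    "forward and a determination to look for ideas that light the path"
)
words = text.split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("a"))    # 'determination' (seen twice, most common)
print(predict_next("the"))  # 'path'
```

Even this trivial counter has to "represent" something about its training text to play the game; the claim in the conversation is that, at vastly larger scale, the representations needed to predict well start to look like understanding.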
Now, I think at this point, the questions of whether we can build really powerful general-purpose AI systems, I won't say they're in the rearview mirror; we still have a lot of hard engineering work to do, but I'm pretty confident we're going to be able to. And now the questions are, like, what should we build, and how and why, and what data should we train on, and how do we build systems not just that can do these, like, phenomenally impressive things, but that we can ensure do the things that we want, and that understand the concepts of truth and falsehood and, you know, alignment with human values and misalignment with human values. One of the pieces of research that we put out last year that I was most proud of and most excited about is what we call reinforcement learning from human feedback. And we showed that we can take these giant models that are trained on a bunch of stuff, some of it good, some of it bad, and then, with a really quite small amount of feedback from human judgment about, hey, this is good, this is bad, this is wrong, this is the behavior I want, I don't want this behavior, we can feed that information from the human judges back into the model, and we can teach the model: behave more like this and less like that. And it works better than I ever imagined it would. And that gives me a lot of hope that we can build an aligned system. We'll do other things, too. Like, I think curating data sets, where there's just less sort of bad data to train on, will go a very long way. And as these models get smarter, I think they inherently develop the ability to sort out bad data from good data. And as they get really smart, they'll even start to do something we call active learning, which is where they ask us for exactly the data they need when they're missing something, when they're unsure, when they don't understand. But I think as a result of simply scaling these models up, building a better, I hate to use the word cognition because it sounds so anthropomorphic, but let's say building a better ability to reason into the models, to think, to challenge, to try to understand, and combining that with this idea of aligning to human values via this technique we developed, that's going to go a very long way. Now, there's another question, which you sort of just kicked the ball down the field to, which is: how do we as a society decide to which set of human values do we align these powerful systems? Yeah, indeed. So if I understand rightly what you're saying, you're saying that it's possible to look at the output at any one time of GPT-3, and if we don't like what it's coming up with, some wise human can say, no, that was off, don't do that; whatever algorithm or process led you to that, undo it. Yeah. And the system is then incredibly powerful at avoiding that same kind of mistake in future, because it sort of replicates the instructions, correct? Yeah. And eventually, and not much longer, I believe that we'll be able to not only say, that was good, that was bad, but say, that was bad for this reason, and also, tell me how you got to that answer so I can make sure I understand. But at the end of the day, someone needs to decide who is the wise human, or set of humans, who are looking at the results. So it makes a big difference: someone who grew up with an intelligent design world view could look at that and go, that's a brilliant outcome, well done, gold star. And someone else would say something has gone awfully wrong here.
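To make the feedback idea concrete, here is a deliberately tiny sketch of just the loop being described: a "model" reduced to scores over a few canned responses, nudged by good/bad human labels until the preferred behavior dominates. This illustrates only the loop, not the actual method, which trains a reward model on human comparisons and then fine-tunes the language model against it; all names and numbers below are made up.

```python
# Toy sketch of learning from human judgments: responses the "model"
# samples are re-weighted by good/bad labels. The real RLHF method
# trains a reward model on human comparisons and fine-tunes the
# language model against it; this shows only the feedback loop.
import math
import random

# A "policy" reduced to scores over canned responses to one prompt.
scores = {"helpful answer": 0.0, "rude answer": 0.0, "gibberish": 0.0}

def sample(scores):
    """Sample a response with probability proportional to exp(score)."""
    weights = [math.exp(s) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

# Human feedback: +1 for behavior we want, -1 for behavior we don't.
feedback = {"helpful answer": +1.0, "rude answer": -1.0, "gibberish": -1.0}

LEARNING_RATE = 0.5
for _ in range(200):                 # many rounds of judged samples
    response = sample(scores)
    scores[response] += LEARNING_RATE * feedback[response]

print(max(scores, key=scores.get))   # 'helpful answer'
```

The point made in the conversation is that surprisingly little of this judgment data, relative to the size of the pretraining corpus, noticeably shifts the model's behavior.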
So how do you avoid, and this is a version of the problem that a lot of the, I guess, Silicon Valley companies are facing right now in terms of the pushback they're getting on the output of social media and so forth, how do you assemble that pool of experts who stand for human values that we actually want? I mean, we talk about this all the time. I don't think this is, like, solely, or even close to majorly, up to OpenAI to decide. I think we need to begin a societal conversation now about how we're going to make those decisions, how we're going to make sure we have representational input in that, and how we sort of make these very difficult global governance systems. My personal belief is that we should have pretty broad rules about what these systems will never do and will always do, but then the individual user should get a system that kind of behaves like they want. And there will be, people do have very different value systems; some of them are just fundamentally incompatible. No one gets to use AI to, like, exploit other people, for example; hopefully we can all agree on that. But do you want the AI to, like, you know, support you in your belief of intelligent design? Like, do I think OpenAI should say it can't, even though I disagree with that as, like, a scientific conclusion? No, I wouldn't take that stance. I think the thing to remember about all of this is that this technology is still quite extraordinarily weak. It still has such big problems, and it's still so unreliable that for most use cases it's still unsuitable. But when we think about a system that is, like, a thousand times more powerful, and let's say a million times more reliable, it just doesn't say gibberish very often, it doesn't totally lose the plot and get distracted: a system like that is going to be one that a lot of the economic activity in the world comes to rely on. And I think it's very important that we don't have a small group of people sort of saying, you can never use it for this thing that, like, most of the world wants to use it for, because it doesn't match our personal beliefs. Talk a bit more about some of the other uses of it, because one of the things that's most surprising is it's not just about sort of text responses. It can take generalized human instructions and build things up. For example, you can say to it: write a Python program that is designed to put a flashing cursor in one corner of the screen and the Google logo in the other corner. And it can go away and do something like that, shockingly, quite well, effectively. Yeah, it can. That's amazing. I mean, this is amazing to me. That opens the door to an entirely new way to think about programming for the future: that you could have people who can program just in human natural language, potentially, and gain rapid efficiency, and the AI does the engineering. We're not that far away from that world. We're not that far away from the world where you will write a spec in English, and for a simple enough program, it will just write the code for you. As you said, you can see glimpses of that even in this very weak GPT-3, which was not trained to code. Like, I think this is important to remember: we trained it on the language of the Internet, and, you know, language on the Internet also includes some code snippets.
And that was enough. So if we really try to go train a model on code itself, and that's where we decide to put the horsepower of the model, just imagine what will be possible. It will be quite impressive. But I think what you're pointing to there is that models like GPT-3, to some degree or other, and it's, like, very hard to know exactly how much, understand the underlying concepts of what's going on. And they're not just regurgitating things they found on a website, but they can really apply them and say, oh, yeah, I kind of, like, know about this word and this idea and code, and this is probably what you're trying to do. And it won't get it right always, but sometimes it will just generate, like, a brand new program, for something that no one has ever asked before, and it will work. That's pretty cool. And data is data. So it can do that from English to code. It can do that from English to French. Again, we never told it to learn about translation. We never told it about the concepts of English and French, but it learned them, even though we never said, this is what English is, and this is what French is, and this is what it means to translate. It can still do it. Wow. I mean, for creative people, is there a world coming where the sort of palette of possibility that they can be exposed to just explodes? I mean, if you're a musician, is there a near future where you can say to your AI, OK, I'm going to bed now, but in the morning I'd love you to present me with a thousand two-bar jingles with words attached that have a sort of meme factor to them, and you come down in the morning and the computer shows you the stuff, and one of them you go, wow, that is it, that is a top 10 hit, and you build a song from it? Or is that actually going to be the value add? We released something last year called Jukebox, which is very near what you described, where you can say, I want music generated for me in this style, or this kind of stuff, and it can come up with the words as well. And it's, like, pretty cool, and I really enjoy listening to music that it creates. And it can sort of do full songs, two bars of a jingle, whatever you'd like. And one of my very favorite artists reached out, called up OpenAI after we released this, and said that he wanted to talk. And I was like, well, I'm, like, total fanboy here, I'd love to join that call. And I was so nervous that he was going to say, this is terrible, this is, like, a really sad thing for human creativity, like, you know, why are you doing this, this is, like, whatever. And he was so excited. And he's like, this has been so inspiring, I want to do a new album with this. You know, it's, like, giving me all these new ideas. It's making me much better at my job. I'm going to make better music because of this tool. And that was awesome. And I hope that's how it all continues to go. And I think it is going to lead to this. We see a similar thing now with DALL-E, where graphic designers sometimes tell us that they just, they see this new set of possibilities, because there's new creative inspiration, and their cycle time, like, the amount of time it takes to just come up with an idea and be able to look at it and then decide whether to go down that path or head in a different direction, goes down so much. And so I think it's going to just be this, like, incredible creative explosion for humans. And how far away are we from, before an AI comes up with a genuinely powerful new idea, an idea that solves a problem that humans have been wrestling with?
It doesn't have to be quite on the scale of, OK, we've got a virus coming, please describe to us what a rational national response should look like, but some kind of genuinely innovative idea or solution. Like, one internal question we've asked ourselves is, when will the first genuinely interesting, purely AI-written TED talk show up? I think that's a great milestone. I will say it's always hard to guess timelines; I'm sure I'll be wrong on this, but I would guess the first genuinely interesting TED talk thought of, written, and delivered by an AI is within kind of the seven-ish year time frame, maybe a little bit less. And it feels like, I mean, just reading that Guardian essay, which was kind of a composite of several different GPT-3 responses to questions about, you know, the threats of robotics or whatever: if you throw a human editor into the mix, you could probably imagine something much sooner. Indeed. Like, tomorrow. Yeah. So the hybrid version, where it's basically a tool-assisted TED talk, but it is better than any TED talk a human could generate in one hundred hours or whatever, if you can sort of combine human discretion with A.I. horsepower: I suspect that's, like, our next year or two years from now kind of thing, where it's just really quite good. That's really interesting. How do you view the impact of A.I. on jobs? There's obviously been, the familiar story is that every white-collar job is now up for destruction. What's your view there? You know, I think it's always hard to make these predictions. That is definitely the familiar story now. Five years ago, it was: every blue collar job is up for destruction. Maybe, like, last year it was: every creative job is up for destruction, because of things like Jukebox. I think there will be an enormous impact on the job market, and I really hate it, I think it's kind of gross, when people working on AI pretend like there's not going to be, or sort of say, oh, don't worry about it, it'll just all obviously get better. It doesn't always obviously get better. I think what is true is: every technological revolution produces a change in jobs. We always find new ones, at least so far. It's difficult to predict, from where we're sitting now, what the new ones will be. And this technological revolution is likely to be, again, it's always tempting to say this time it's different, and maybe I'll be totally wrong, but from what I see now, this technological revolution is likely to be more dramatic, more of a staccato note than most, and I think we as a society need to figure out how we're going to cushion everybody through that. I've got my own ideas about how to do that, and I wouldn't say that I have any reason to believe they're the right ones, but doing nothing, and not really engaging with the magnitude of what's about to happen, I think is, like, not an acceptable answer. So there's going to be huge impact. It's difficult to predict where it shows up the most. I think previous predictions have mostly been wrong. But I'd like to see us all, as a society, certainly as a field, engage in what the shifts we want to make to the social contract are, to kind of get through that in a way that is maximally beneficial to everybody. I mean, in every past revolution, there's always been a space for humans to move to, that is, if you like, moving up the food chain: we've retreated to the things that humans could uniquely do, think better, be more creative and so forth.
I guess the worry about A.I. is that, in principle, and I believe this, there is no human cognitive feat that won't ultimately be doable, probably better, by artificial general intelligences, simply because of the extra firepower that ultimately they can have, the vast knowledge they bring to the table and so forth. Is that basically right, that there is ultimately no safe sort of space where we can say, oh, but they would never be able to do that? On a very long time horizon, I agree with you. But that's such a long time horizon. I think that, you know, like, maybe we've merged by that point, like, maybe we're all plugged in, and then, like, we're this sort of symbiotic thing. Like, I think there's an interesting example, as we were talking about a few minutes ago, where right now we have these systems that have sort of enormous horsepower but no steering wheel. It's, like, you know, incredible capabilities but no judgment. And there's, like, these obvious ways in which today even a human plus GPT-3 is far better than either on their own. Many people speak about a world where it's sort of A.I. as this external threat; you speak about, at some point, us actually merging with AIs in some way. What do you mean by that? There's a lot of different versions of what I think is possible there. You know, in some sense, I'd argue the merge has already, like, begun, the human-technology merge. Like, we have this thing in our hands that sort of dictates a lot of what we think, but it gives us real superpowers, and that can go much, much further. Maybe it goes all the way to, like, the Elon Musk vision of Neuralink and having our brains plugged into computers, and sort of, like, literally we have a computer on the back of our head. Or it goes the other direction and we get uploaded into one. Or maybe it's just that we all have a chat bot that kind of constantly steers us and helps us make better decisions than we could. But in any case, I think the fundamental thing is, it's not, like, the humans versus the AIs competing to be the smartest sentient thing on earth or beyond, but this idea of being on the same team. Hmm. I certainly get very excited by the sort of medium term potential for creative people of all sorts, if they're willing to expand their palette of possibilities with the use of A.I., to be willing to... I mean, the one thing that the history of technology has shown again and again is that something this powerful and with this much benefit is unstoppable, and you will get rewarded for embracing it the most and the earliest. So talk about what can go wrong with it, and let's move away from just the sort of economic displacement factor. You were a co-founder of OpenAI because you saw existential risks to humanity from AI. What would you put as the sort of most worrying of those risks? And how is OpenAI working to minimize them? I still think all of the really horrifying risks exist. I am more confident, much more confident than I was five years ago when we started, that there are technical things we can do about them.
How we build these systems, and the research and the alignment work, that make us much more likely to end up in the kind of really wonderful camp. But, you know, like, maybe OpenAI falls behind, and maybe somebody else builds AGI that thinks about it in a very different way, or doesn't care as much as we'd like about safety and the risks, or strikes a different trade off of how fast we should go with this, and where we should sort of just say, like, you know, let's push on for the economic benefits. But I think all of these sort of, like, you know, traditionally-in-the-realm-of-sci-fi risks are real, and we should not ignore them, and I still lose sleep over them. And just to update people: AGI is artificial general intelligence. Right now, we have incredible examples of powerful AI operating on specific areas. AGI is the ability of a computer mind to connect the dots and to make decisions at the same level of breadth that humans have had. What's your sort of elevator pitch on AGI, about how to identify it and how to think of it? Yeah. I mean, the way that I would say it is that for a while we were in this world of very narrow A.I., you know, that could, like, classify images of cats or whatever, more advanced stuff than that, but that kind of thing. We are now in the era of general purpose AI, where you have these systems that are still very much imperfect tools, but that can generalize. One thing like GPT-3 can write essays and translate between languages and write computer code and do very complicated search. It's, like, a single model that understands enough of what's really going on to do a broad array of tasks and learn new things quickly, sort of like people can. And then eventually we'll get to this other realm; some people call it AGI, some people call it other things, but I think it implies that the systems are, like, to some degree self directed, have some intentionality of their own. Is it a simple summary to say that, like, the fundamental risk is that there's the potential with general artificial intelligence of a sort of runaway effect of self-improvement that can happen far faster than humans can even keep up with, so that the day after you get to AGI, suddenly computers are thousands of times more advanced than us, and we have no way of controlling what they do with that power? Yeah, and that is certainly in the risk space, which is that we build this thing, and at some point, somewhat suddenly, it's much more powerful than we are; we haven't really done the full merge yet. There's an event horizon there, and it's sort of hard to see to the other side of it. Again, lots of reasons to think it will go OK, lots of reasons to think we won't even get to that scenario, but that is something that I don't think people should brush under the rug as much as they do. It's in the possibility space for sure. And in the possibility subspace of that is one where, like, we didn't actually do as good of a job on the alignment work as we thought, and this sort of child of humanity kind of acts in a very different way than we think. A framework that I find useful is to sort of think about, like, a two by two matrix, which is short timelines to AGI and long timelines to AGI on one axis, and a slow take off and a fast take off on the other axis. And in the short timelines, fast take off quadrant, which is not where I think we're going to be, but if we get there, I think there's a lot of scenarios in the direction that you are describing that are worrisome.
And we would want to spend a lot of effort planning for that. I mean, the fact that a computer could start editing its own code and improving itself while we're asleep, and you wake up in the morning and it's got smarter: that is the start of something super powerful and potentially scary. I have tremendous misgivings about letting my system, not one we have today, but one that we might have in not too many more years, start editing its own code while we're not paying attention. I think that's the kind of thing that is worth a great deal of societal discussion about: just because we can do that, should we? Yes, because one of the things that's been most shocking about the last few years has been just the power of unintended consequences. It's like, you don't have to have a belief that there's some sort of waking up of an alien intelligence that suddenly decided it wants to wreak havoc on humans; that may never happen. What you can have is just incredible power that goes amok. So a lot of people would argue that what's happened in technology in the last few years is actually an example of that. You know, social media companies created these intelligences that were programmed to maximally harvest attention, for example. For sure. And that turned out to be in some ways horrifying and extraordinarily damaging. Is that a meaningful sort of canary in the coal mine saying, look out, humanity, this could be really dangerous? And how on earth do you protect against those kinds of unintended consequences? I think you raise a great point in general, which is: these systems don't have to wish ill to humanity to cause ill, just when you have, like, very powerful systems. I mean, unintended consequences for sure. But another version of that is, and I think this applies at the technical level, at the company level, at the societal level: incentives are superpowers. Charlie Munger had this quote, which is, incentives are so powerful that if you can spend any time whatsoever working on the incentive system, that's what you should do before you work on anything else. And I really believe that. And I think that applies to the individual models we build and what their reward functions look like. I think it applies to society in a big way, and I think it applies to our corporate structure at OpenAI. You know, we sort of observe that if you have very well-meaning people, but they have this incentive to sort of maximize attention harvesting and profit forever, through no one's ill intentions, that leads to a quite undesirable outcome. And so we set up OpenAI with this thing called a capped profit model, specifically so that we don't have the systemic incentive to just generate maximum value forever with an AGI; that seems, like, obviously quite broken. But even though we knew that was bad, and even though we all like to think of ourselves as good people, it took us a long time to figure out the right structure, to figure out a charter that's going to govern us, and a set of incentives that we believe will let us do our work. And we have these, like, three elements that we talk about a lot: research, sort of engineering development and deployment, and policy and safety. Put those all together under a system where you don't have to rely on anything but the natural incentives to push in a direction that we hope will minimize the sort of negative unintended consequences. So help me understand this, because I think this is confusing to some people.
So you started OpenAI initially, I think, with Elon Musk as a co-founder, and there was a group of you, and the argument was: this technology is too powerful to be left developed in secret, and to be left developed purely by corporations who have whatever incentive they may have. We need a nonprofit that will develop and share knowledge openly. First of all, just even at that early stage, some people were confused about this. They were saying: if this thing is so dangerous, why on earth would you want to make its secrets even more available? You'd maybe be giving the tools to that sort of AI terrorist in his bedroom somewhere. I think we got misunderstood in the way we were talking about that. We certainly don't think that the right thing to do is to, like, build this super weapon and hand it to a terrorist. That's obviously awful. One of the reasons that we like our API model is it lets us make the most powerful AI technology anyone in the world has, as far as we know, available to whoever would like to use it, but to put some controls on its usage, and also, if we make a mistake, to be able to pull it back, or change it, or tweak it, or improve it, or whatever. But we do want to put, and this has continued and will continue to be true, with appropriate restrictions and guardrails, very powerful technology in the hands of people. I think that is fair. I think that will lead to the best results for society as a whole, and I think it will sort of maximize benefit. But that's very different than sort of shipping the whole model and saying, here, do whatever you want with it. We're able to enforce rules on it. We also think, and this is part of the mission, that something the field was doing a lot of that we didn't feel good about was sort of saying, like, oh, we're going to keep the pace of progress and capabilities secret. That doesn't feel right, because I think we do need a societal conversation about what's going on here, what the impacts are going to be. And so, although we don't always say, like, you know, here's the super weapon, hopefully we do try to say, like, this is really serious, this is a big deal, this is going to affect all of us, and we need to have a big conversation about what to do with it. Help me understand the structure a bit better, because you definitely surprised many people when you announced that Microsoft were putting a billion dollars into the organization, and in return, I guess, they get certain exclusive licensing rights. And so, for example, they are the exclusive licensee of GPT-3. So talk about that structure and how it works. Microsoft presumably have invested not purely for altruistic purposes; they think that they will make money on that billion dollars. I sure hope they do. I love capitalism. But the thing that I really loved even more about Microsoft as a partner, and I'll talk about the structure and the exclusive license in a minute, is that we, like, went around to people that might fund us, and we said, one of the things here is that we're going to try to make you some money, but, like, AGI going well is more important, and we need you to sign this document that says, if things don't go the way we think and we can't make you money, you just cheerfully walk away from it and we do the right thing for humanity. And they were like, yes, we are enthusiastic about that. We get that the mission comes first here. So, again, I hope it's a phenomenal investment for them.
Help me understand the structure a bit better, because you definitely surprised many people when you announced that Microsoft were putting a billion dollars into the organization and, in return, I guess, getting certain exclusive licensing rights. So, for example, they are the exclusive licensee of GPT-3. Talk about that structure and how it works. Microsoft presumably have invested not purely for altruistic purposes. They think that they will make money on that billion dollars.

I sure hope they do. I love capitalism. But the thing I really loved even more about Microsoft as a partner, and I'll talk about the structure and the exclusive license in a minute, is that we went around to the people that might fund us and said: one of the things here is that we're going to try to make you some money, but AGI going well is more important, and we need you to sign this document that says if things don't go the way we think and we can't make you money, you just cheerfully walk away from it and we do the right thing for humanity. And they were like, yes, we are enthusiastic about that, we get that the mission comes first here. So again, I hope it's a phenomenal investment for them. But they really pleasantly surprised us on the upside with how aligned they were with us about how strange the world may get here, and the need for us to have flexibility and put our mission first, even if that means they lose all their money, which I hope they don't, and don't think they will.

So the way it's set up is that if at some point in the coming year or two Microsoft decide that there's some incredible commercial opportunity that they could realize out of the AI that you've built, and you feel, actually, no, that's damaging, you can block it? You can veto it?

Correct. So the most powerful versions of GPT-3 and its successors are available via the API, and we intend for that to continue. What Microsoft has is the ability to put that model directly into their own technology, if they want to do that. We don't plan to do that with other people, because then we can't have all those controls that we talked about earlier. But they're a close, trusted partner, and they really care about safety too. Our goal is that anybody who wants to use the API can have the most powerful versions of what we've trained, and the structure of the API lets us continue to increase the safety and fix problems when we find them.

But the structure: we started out as a nonprofit, as you said. We realized pretty quickly that, although we went into this thinking that the way to get to AGI would be smarter and smarter algorithms, we just needed bigger and bigger computers as well, and that was going to require a scale of capital that no one, certainly not me at least, could figure out how to raise as a nonprofit. We also needed to be able to compensate the very highly compensated, talented individuals who do this work. But a full for-profit company had the runaway-incentives problem, among other things, and also one about fairness in society and wealth concentration that didn't feel right to us either. And so we came up with this hybrid, where we have a nonprofit that governs what we do, and it has a subsidiary LLC that we structured in a way that can make a capped amount of profit, so that all of our investors and employees, if things go how we'd like, and if not, no one gets any money, get to make a one-time great return on their investment, or on the time they spent and the equity they hold at OpenAI. Beyond that, all the value flows back to the nonprofit, and we figure out how to share it as fairly as we can with the world. And I think that this structure, this nonprofit with this very strong charter in place, and everybody who joins signing up for the mission coming first and for the fact that the world may get strange, that was at least the best idea we could come up with. And so far it feels like the incentive system is working, just as I watch the way that we and our partners make decisions.

But if I read it right, the cap on the gain that investors can make is 100x. That's a massive cap.

That was for our very first-round investors. As we take in capital now, it's way, way lower.

So your deal with Microsoft isn't, you can only make the first hundred billion dollars, and after that we're giving it to the world?

It's way lower than that.

Have you disclosed what it is?

I don't know if we have, so I won't accidentally do it now.
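As an aside, the capped-return arithmetic Sam and Chris are gesturing at here is simple enough to write down. A minimal sketch in Python, with purely illustrative numbers; the actual caps beyond the first round are, as Sam says, undisclosed:

# Hypothetical illustration of a capped-profit split, not OpenAI's actual terms.
def split_returns(investment: float, value_generated: float, cap_multiple: float):
    """Return (investor_share, nonprofit_share) for a capped-profit investment."""
    capped_return = investment * cap_multiple
    investor_share = min(value_generated, capped_return)
    # Everything above the cap flows back to the governing nonprofit.
    nonprofit_share = max(0.0, value_generated - capped_return)
    return investor_share, nonprofit_share

# A first-round investor putting in $10M under the 100x cap:
inv, npo = split_returns(10e6, 5e9, 100)
print(f"investor: ${inv:,.0f}, nonprofit: ${npo:,.0f}")
# investor: $1,000,000,000, nonprofit: $4,000,000,000

This is also why Chris's "first hundred billion dollars" framing follows from the numbers: a $1 billion investment under a 100x cap would return at most $100 billion before the remainder flows to the nonprofit.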
All right. OK, so explain a bit more about the charter, and how it is that you hope to avoid the worst mistakes, or I guess help contribute to an AI that is safe for humanity. What do you see as the keys to avoiding the worst mistakes and really holding on to something that's beneficial for humanity?

My answer there is actually more about technical and societal issues than the charter, so is it OK for me to answer it from that perspective?

Sure.

OK, and I'm happy to talk about the charter too. I think this question of alignment that we talked about a little earlier is paramount. And to understand that, it's useful to differentiate between accidental misuse of a system and intentional misuse of a system. Intentional would be a bad actor saying, I've got this powerful system, I'm going to use it to hack into all the computers in the world and wreak havoc on the power grids. Accidental would be the Nick Bostrom scenario of making a lot of paper clips and viewing humans as collateral damage. In both cases, though to varying degrees, if we can really, truly, technically solve the alignment problem, and the societal problem of deciding to which set of human values we align, then the systems understand right and wrong, and they understand, probably better than we ever can, the unintended consequences of complex actions in very complex systems. And, you know, if we can train a system which is like, don't harm humanity, and the system can really understand what we mean when we say that...

Again, "who is we" and "what do we mean" have some asterisks on them. Sorry, go ahead.

Well, that's it: if they could understand what it means to not harm humanity. There's a lot wrapped up in that sentence, because what's been so striking to me about efforts so far is that they seem to have been based on a very naive view of human nature. Go back to the Facebook and Twitter examples. The engineers building some of those systems would say, we've just designed them around what humans want to do. They said, well, if someone wants to click on something, we will give them more of that thing, and what could possibly be wrong with that? We're just supporting human choice. That ignores the fact that humans are complicated, fallible animals who are constantly making choices that a more effective version of themselves would agree are not in their long-term interests. So that's one part of it. And then you've got, layered on top of that, the complications of systemic complexity, where multiple choices by thousands of people end up creating a reality that no one would have designed on purpose. How do you cut through that? An AI has to make a decision in a moment, based on a specific data set. As those decisions get more powerful, how can we be confident that they don't lead to the system basically crashing in some way?

I've heard a lot of behavioral psychologists and other people who have studied this say versions of the following, and I hate to keep picking on Facebook, but we can do it one more time since we're on the topic. Maybe you can't, in any given moment at night when you're tired and you've had a stressful day, stop yourself from the dopamine hit of scrolling Instagram, even though you know it's bad for you and it's not leading to your best life. But if you were asked in a reflective moment, when you were fully alert and thoughtful, do you want to spend as much time as you do scrolling through Instagram? Does it make you happier or not? You would actually be able to give the right long-term answer.
It's sort of a "the spirit is willing, but the flesh is weak" kind of moment.

And one thing that I am hopeful about is that humans do, on the whole, know what we want, and presented with research, or an objective view of what makes us happy and what doesn't, we're pretty good at giving that answer. But in any particular moment, we are subject to our animal instincts, and it is easy for the lower brain to take over. AI, I think, will be an even higher brain, and as we teach it, you know, here is what we really do value, here is what we really do want, it will help us make better decisions than we are capable of, even in our best moments.

So is that being proposed and talked about as an actual rule? Because it strikes me that there is something potentially super profound here: to introduce some kind of rule for the development of AIs that they have to tap into not what humans want, which is an ill-defined question, but what humans in reflective mode want.

Yeah, we talk about this a lot.

I mean, do you see a real chance that something like that could be incorporated as a sort of absolute golden rule and, if you like, spread around the community, so that it seeps into corporations and elsewhere? Because I've seen little evidence of it so far, and it would potentially be a game changer. Corporations have this weird incentive problem, right?

What I was trying to speak about was something that I think should be technologically possible, and that is something that we as a society should demand. I think it is technically possible for this to be sort of a layer above the neocortex that makes even better decisions for us, for our welfare and our long-term happiness and fulfillment, than we could make on our own. And I think it is possible for us as a society to demand that. And if we can do a pincer move between what the technology is capable of and what we as society demand, maybe we can get everybody in the middle to move that way.

I mean, there are instances where, even though companies have their incentives to make money and so forth, in the knowledge age they also can't make money if they have pissed off too many of their employees and customers and investors. By analogy, in the climate space right now, you can see more and more companies, even those that are emitting huge amounts of carbon dioxide, saying, wait a sec, we're struggling to recruit talented people, because they don't want to work for someone who's evil, and our customers are saying, we don't want to buy something that is evil. And so, ultimately, you can picture processes where they do better. And I believe that most engineers who work in Silicon Valley companies, for example, are actually good people who want to design great products for humanity. I think the people who run these companies want to be a net contribution to humanity. It's just that we've rushed really quickly and designed stuff without thinking it through properly, and it's led to a mess. So it's like, OK, don't move fast and break things; slow down and build beautiful things that are built on a real version of human nature, and on a real version of system complexity and the risks associated with systemic complexity. Is that the agenda that, fundamentally, you think you can push somehow?

Yes, but I think the way we can push it is by getting the incentive system right.
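The "reflective mode" rule Chris proposes can be read as a statement about which reward signal a system is trained to maximize. A toy sketch, with entirely made-up item names and scores, of how the same recommender behaves under the two signals:

# Hypothetical illustration: one recommender, two reward signals.
items = [
    {"name": "outrage thread",  "engagement": 0.9, "reflective": 0.2},
    {"name": "long-form essay", "engagement": 0.4, "reflective": 0.8},
    {"name": "cat video",       "engagement": 0.7, "reflective": 0.5},
]

def recommend(items, reward_key):
    # The entire difference is which signal we choose to maximize.
    return max(items, key=lambda item: item[reward_key])

print(recommend(items, "engagement")["name"])   # outrage thread
print(recommend(items, "reflective")["name"])   # long-form essay

The "engagement" column stands in for in-the-moment clicks; the "reflective" column stands in for the "was this time well spent?" answer Sam describes. Nothing about the system's architecture changes, only its reward function, which is the point both of them are making about incentives.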
I think most people are fundamentally extremely good. Very few people wake up in the morning thinking about how they can make the world a worse place. But the incentive systems that we're in are so powerful. Even those engineers who join with the absolute best of intentions get sucked into this world where they're trying to go from an E4 to an E5, or whatever Facebook calls those levels, and it's pretty exciting. You get caught up playing the game. You're rewarded for doing things that move the company's key metrics. It's fun to get promoted. It feels good to make more money. And the incentive systems of the company, and what it rewards in individual performance, are maybe not what we all want. And here I don't want to pick on Facebook at all, because I think there are versions of this at play at every big tech company, including, in some ways I'm sure, at OpenAI. But to the degree that we can better align the incentives of companies with the welfare of society, and then align the incentives of individuals at those companies with the now-realigned incentives of those companies, the more likely we are to be able to have things like AGI that follow an incentive system of what we want in our most reflective, best moments, and that are even better than what we could think of ourselves.

Is it still the vision for OpenAI that you will get to artificial general intelligence ahead of the corporations, so that you can somehow put a stake in the ground and build it the right way? Is that really a realistic thing to dream for? And if not, how do you live up to the mission and help ensure that this thing doesn't go off the rails?

I think it is. Look, I certainly don't think we will be the only group to build an AGI, but I think we could be the first. And if you are the first, you have a lot of power to set norms. I think you've already seen that. You know, we have released some of the most powerful systems to date, and the way that we have done that, a kind of controlled release, where we've released a bigger model, then a bigger one, then a bigger one, and we try to talk about the potential misuse cases, and we try to talk about the importance of releasing it behind an API so that you can make changes. Other groups have followed suit in some of those directions, and I think that's good. So no, I don't think we can be the only one, but I do think we can be ahead, and if we are ahead, I think we can use that leverage to hopefully push people in a better direction. Or maybe we're wrong and somebody else has a better direction, and we're doing something about that.

Do you have a structural advantage, in that your mission is to do this for everyone, as opposed to for some corporate objective, and that allows you... Why is it that GPT-3 came out of OpenAI and not someone else? It's surprising, in some ways, when you're up against so much money and so much talent in these other companies, that you came up with this platform ahead of them.

In some sense it's surprising, and in some sense the startup wins most of the time. I'm a huge believer in startups as the best force for innovation we have in the world today. I talked a little bit about how we combine these three different clans, research, engineering, and safety and policy, that don't normally combine well, and I think we have an unusual strength there. We're clearly well funded. We have super talented people.
But what we really have is intense focus and self-belief that what we're doing is possible and good. And I appreciate the implied compliment. But, you know, we work really hard, and if we stopped doing that, I'm sure someone would run right by us.

Tell us a bit more about some of your prior life, Sam. For several years you were running Y Combinator, which has had incredible impact on so many companies. There are so many startup stories that began at Y Combinator. What were the key drivers in your own life that took you on the path you're on, and how did that path end up at Y Combinator?

No exaggeration, I think I have back-to-back had the two jobs that are at least the most interesting to me in all of Silicon Valley. I went to college to study computer science. I was a major computer nerd growing up. I knew a little bit about startups, but not very much. I started working on a project, and the same year I started working on that, this thing called Y Combinator started and funded me and my co-founders. We dropped out of school and did this company, which I ran for about seven years, and then it got acquired. I had stayed close to Y Combinator the whole time. I thought it was just this incredible group of people and spirit and set of incentives, badly misunderstood by most of the world, but obvious to everyone within it that it was going to create huge amounts of value and do a lot of new things. Paul Graham, who is the founder of Y Combinator, and truly one of the most incredible humans and business people, asked me if I wanted to run it. And the central learning of my career, at YC and with individual startups, has been that if you really scale them up, remarkable things can happen. So I did it, and I thought, one of the things that would make this exciting for me personally, and motivating, would be if I could push it in the direction of doing these hard-tech companies, one of which became OpenAI.

Describe, actually, what Y Combinator is, how many people come through it. Give us a couple of stories of its impact.

Yeah. So you basically apply as a handful of people and an idea, maybe a prototype, and say, I would like to start a company, and will you please fund me? And we review those applications, and we, well, I shouldn't say we anymore, I guess they, fund four hundred companies a year. You get about one hundred and fifty thousand dollars, while YC takes about seven percent ownership, and then it gives you lots of advice and networking, sort of a fast-track program for starting a startup. I haven't looked at this in a while, but at one point a significant fraction of the billion-dollar-plus companies that got started in the US came through the YC program. Some recently-in-the-news ones have been Airbnb, DoorDash, Coinbase, Instacart, Stripe. And I think it has become an incredible way to help people who understand technology get a three-month course in business, where instead of burdening you with an MBA, we actually teach you the things that matter, and people go on to do incredible, incredible work.

What is it about entrepreneurs? Why do they matter? Some people just find them kind of annoying, but I think you would argue, I think I would argue, that they have done as much as anyone to shape the future. Why? What is it about them?

I think it is the ability to take an idea and, by force of will, make it happen in the world.
And in an incentive system that rewards you for making the most impact on the most people, that's how we get most of the things we use. That's how we got the computer I'm using, and the software I'm using to talk to you on it. Like everything in life, it has a balance sheet. There are plenty of very annoying things about entrepreneurs, and plenty of very annoying things about the system that idolizes them. But we do get something really important in return, and as a force for making the things that make all of our lives better actually happen, it's very cool. Otherwise, you know, if you have a great idea but you don't actually do anything useful with it for people, that's still cool, it's still intellectually interesting, but there's got to be something in the reward function of society that asks: did you actually do something useful? Did you create value? And I think entrepreneurship and startups are a wonderful way to do that. We get all these great software companies, but I also think it's how we're going to get AGI, how we're going to get nuclear fusion, how we're going to get life extension. And on any of those topics, and a long list of others I could point to, there are a number of startups that I think are doing incredible work, some of which will actually deliver.

It is a truly amazing thing, when you pull the camera back, to think that a human being could be lying awake at night, and something pops inside their mind, a patterning of the neurons in their brain, that is effectively them saying, aha, I can see a way the future could be better, and they can actually picture it. And then they wake up, and they talk to other people, and they persuade them, and they persuade investors and so forth. The fact that this system can happen, and that they can then actually change history in some sense, it is mind-boggling that it happens that way, and it happens again and again. So, you've seen so many of these stories happen. What would you say is the key thing that differentiates good entrepreneurs from others? If you could double down on one trait, what would it be?

If I could pick only one, I would pick determination. I think that is the biggest predictor of success, the biggest differentiator and predictor. And if you would allow a second, I would pick communication skills, or evangelism, or something in that direction, as well. There are all of the obvious ones that matter, like intelligence, but there are a lot of smart people in the world. And when I look back at the thousands of entrepreneurs I've worked with, many of whom were quite capable, I would say those are one and two of the surprisingly differentiated characteristics.

When I look at the different things that you've built and you're working on, it could not be more foundational for the future. I agree that entrepreneurship is really what has driven the future. But some people now look at Silicon Valley and look at this story, and they worry about the culture, right, that this is a bro culture. Do you see prospects of that changing anytime soon? And would you welcome it?
Can we get better companies by really working to expand the group of people who can be entrepreneurs and who can contribute to AI, for example?

For sure. And in fact, since these are the two things I've thought the most about, I'm excited for the day when someone combines them and uses A.I. to select, more fairly, maybe, who to fund and how to advise them, and really make entrepreneurship super widely available. That will lead to better outcomes and more societal wealth for all of us. So, yeah, I think broadening the set of people who are able to start companies and get the resources that you need is an unequivocally good thing, and it's something that I think Silicon Valley is making some progress on. But I hope we see a lot more, and I do really, truly think that the technology industry and entrepreneurship are one of the greatest forces for self-betterment, if we can just figure out how to be a little bit more inclusive in how we do things.

My last question today is about ideas worth spreading. If you could inject one idea into the mind of everyone listening, what would the idea be?

We've touched on it a bunch, but the one idea would be that AGI really is going to happen. You have to engage with it seriously, and you shouldn't just listen to this and then brush it aside and go about life as if it's not going to happen, because it is going to affect everything. And we all, I think, have an obligation, but also an opportunity, to figure out what that means and how we want the world, and this sort of one-time shift, to go.

I'm kind of awed by the breadth of things you're engaged with. Thank you so much for spending so much time sharing your vision.

Thanks so much for having me.

OK, that's it for today. You can read more about OpenAI's vision and progress at openai.com. If you want to try playing with GPT-3 yourself, it's a little tricky: you have to find a website that has licensed the API. The one I went to was philosopherai.com, where you pay a few dollars to get access to a very strange mind, which is actually quite a lot of fun. The TED Interview is part of the TED Audio Collective, a collection of podcasts dedicated to sparking curiosity and sharing ideas that matter. This show is produced by Kim Net2Phone Pittas and edited by Grace Rubenstein and Sheila Boffano, Sambor Islamic Sir. Fact-checking is by Paul Durbin, and special thanks to Michele Quent, Colin Helmes and Anna Felin. If you like the show, please rate and review it. It helps other people find us. We read every review, so thanks so much for listening. See you next time.
