So you are the CEO of OpenAI, 37 years old. Your company is the maker of ChatGPT, which has taken the world by storm. Why do you think it's captured people's imagination?
I think people really have fun with it, and they see the possibility and the ways this can help them. It can inspire them, help people create, help people learn, help people do all these different tasks. It's a technology that rewards experimentation and creative use. So I think people are just having a good time with it and finding real value.
So paint a picture for us one, five, ten years in the future. What changes because of artificial intelligence?
Part of the exciting thing here is that we get continually surprised by the creative power of all of society. It's going to be the collective power, creativity, and will of humanity that figures out what to do with these things.
I think that word "surprise," though, is both exhilarating and terrifying to people, because on the one hand there's all this potential for good; on the other hand there's a huge number of unknowns that could turn out very badly for society. What do you think about that?
We've got to be cautious here, and I also think it doesn't work to do all this in a lab. You've got to get these products out into the world and make contact with reality, make our mistakes while the stakes are low. But all of that said, I think people should be happy that we're a little bit scared of this.
A little bit? You personally?
I think if I said I were not, you should either not trust me or be very unhappy that I'm in this job.
So what is the worst possible outcome?
There's a set of very bad outcomes. One thing I'm particularly worried about is that these models could be used for large-scale disinformation. I'm also worried that these systems, now that they're getting better at writing computer code, could be used for offensive cyberattacks. We're trying to talk about this; I think society needs time to adapt.
And how confident are you that what you've built won't lead to those outcomes?
We'll adapt it too, I think.
You'll adapt it as negative things occur?
For sure, for sure. Putting these systems out now, while the stakes are fairly low, learning as much as we can, and feeding that into future systems creates the tight feedback loop that we run. I think that's how we avoid the more dangerous scenarios.
You're spending 24/7 with this technology. You're one of the people who built this technology. What is most concerning to you about safety?
This is a very general technology, and whenever you have something so general, it's hard to know up front all the capabilities and all the potential impact of it, as well as its downfalls and limitations.
Can someone guide the technology to negative outcomes?
The answer is yes. You could guide it to negative outcomes, and this is why we make it available initially in very constrained ways, so we can learn what these negative outcomes are and the ways in which the technology could be harmful. With GPT-4, if you ask it "can you help me make a bomb," it is much less likely than the previous systems to follow that guidance. We're able to intervene at the pre-training stage to make these models more likely to refuse direction or guidance that could be harmful.
What's easier to predict today, based on where we are: humans or machines?
I would probably say machines, because there's a scientific process to them that we understand, and with humans there's just so much more nuance.
Does the machine become more human-like over time?
We are getting to a point where machines will be capable of a lot of the cognitive work that humans do, at some point.
Is there a point of no return in that process?
There could be, there could be. But it's not obvious what that looks like today, and our goal is to make sure that we can predict as much as possible, in terms of capabilities as well as limitations, before we even develop these systems.
Its behavior is very contingent on what humans choose for its behavior to be, therefore the choices that humans are making and feeding into the technology will dictate what it does, at least for now. So there are incredibly important choices being made by you and your team.
Absolutely.
And how do you decide between right and wrong?
As we make a lot of progress, these decisions become harder and far more nuanced. There are a couple of things in terms of customization. There's the part of just making the model more capable, in a way where you can customize its behavior and give the user a lot of flexibility and choice in having an AI that is more aligned with their own values and their own beliefs. So that's very important, and we're working on that.
In other words, the future is potentially a place where each person has their own customized AI that is specific to what they care about and what they need?
Within certain bounds. There should be some broad bounds, and then the question is what those should look like. This is where we're working on gathering public input: what should these hard bounds look like? Within these hard bounds, you can have a lot of choice in having your own AI represent your own beliefs and your own values.
Are there negative consequences we need to be thinking about?
I think there are massive potential negative consequences. Whenever you build something so powerful, from which so much good can come, alongside it comes the possibility of big harms as well. That's why we exist, and that's why we're trying to figure out how to deploy these systems responsibly. But I think the potential for good is huge.
Why put this out for the world to start playing with, to start using, when we don't know where this is heading?
You mean, like, why develop AI at all?
Why develop AI in the first place, and then why put it out for the world to use before we know that we're safeguarded, that those guardrails are in place already?
This will be the greatest technology humanity has yet developed. We can all have an incredible educator in our pocket that's customized for us, that helps us learn, that helps us do what we want. We can have medical advice for everybody that is beyond what we can get today. We can have creative tools that help us figure out the new problems we want to solve, wonderful new things to co-create with this technology for humanity. We have this idea of a copilot, this tool that today helps people write computer code, and they love it. We can have that for every profession, and we can have a much higher quality of life, a higher standard of living. As you point out, there is a huge potential downside. People need time to update, to react, to get used to this technology, to understand where the downsides are and what the mitigations can be. If we just developed this in secret in our little lab here, never made contact with reality, made GPT-7, and then dropped that on the world all at once, that, I think, is a situation with a lot more downside.
Is there a kill switch, a way to shut the whole thing down?
Yes. What really happens is that any engineer can just say, "we're going to disable this for now," or "we're going to deploy this new version of the model."
A human?
Yeah.
The model itself, can it take the place of that human? Could it become more powerful than that human?
In the sci-fi movies, yes. In our world and the way we're doing things, this model is sitting on a server. It waits until someone gives it an input.
But you raise an important point, which is that the humans who are in control of the machine right now also have a huge amount of power.
We do worry a lot about authoritarian governments developing this.
Putin himself has said that whoever wins this artificial intelligence race is essentially the controller of humankind. Do you agree with that?
That was a chilling statement, for sure. What I hope instead is that we successfully develop more and more powerful systems that we can all use in different ways, that get integrated into our daily lives and into the economy, and that become an amplifier of human will, but not this autonomous system that is...
The single controller, essentially.
We really don't want that.
What should people not be using it for right now?
The thing I try to caution people about the most is what we call the hallucinations problem. The model will confidently state things as if they were facts that are entirely made up. And the more you use the model, because it's right so often, the more you come to just rely on it and not check, like, ah, this is just a language model.
Does ChatGPT, does artificial intelligence, create more truth in the world or more untruth in the world?
Oh, I think we're on a trajectory for it to create much more truth in the world.
If there's a bunch of misinformation fed into the model, isn't it going to spit out more misinformation?
Great question. I think the right way to think of the models that we create is as a reasoning engine, not a fact database. They can also act as a fact database, but that's not really what's special about them. What we're training these models to do is something closer to the ability to reason, not to memorize.
All of these capabilities could wipe out millions of jobs. If a machine can reason, then what do you need a human for?
A lot of stuff, it turns out. One of the things we're trying to do, both in the technology trajectory we push towards and in the way we build these products, is to be a tool for humans, an amplifier of humans. If you look at the way people use ChatGPT, there's a pretty common arc: people hear about it for the first time and they're a little bit dubious; then someone tells them about something and they're a little bit afraid; and then they use it and see how it can help them, how it's a tool that helps them do their job better. With every great technological revolution in human history, it has been true that the jobs change a lot, and some jobs even go away, and I'm sure we'll see a lot of that here. But human demand for new stuff, human creativity, is limitless, and we find new jobs, we find new things to do. They're hard to imagine from where we sit today; I certainly don't know what they'll be. But I think the future will have all sorts of wonderful new things we do that you and I can't even really imagine today. The speed of the change that may happen here is the part I worry about the most. If some of these shifts happen in a single-digit number of years, that is the part I worry about the most.
Could it tell me how to build a bomb?
It shouldn't tell you how to build a bomb. We put constraints on it, so if you go ask our version to tell you how to build a bomb, I don't think it will do that. A Google search already will, though, so it's not like this is information that technology hasn't already made available. But I think every incremental degree you make that easier is something to avoid. A thing that I do worry about is that we're not going to be the only creator of this technology. There will be other people who don't put on some of the safety limits that we put on it. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate it, how to handle it.
And how do you decide, here at OpenAI, what goes in and what shouldn't?
We have policy teams, we have safety teams, and we talk a lot to other groups in the rest of the world. We finished GPT-4 a very long time ago, or it feels like a very long time ago in this industry; I think it was seven months ago, something like that. Since then we have been, internally and externally, talking to people, trying to make these decisions, working with red teamers, talking to various policy and safety experts, and getting audits of the system, to try to address these issues and put something out that we think is safe and good.
And who should be defining those guardrails for society?
Society should.
Society as a whole? How are we going to do that?
So I can paint a vision that I find compelling; this would be one way of many that it could go. If you had representatives from major world governments and trusted international institutions come together and write a governing document, saying here is what the system should do, here is what it shouldn't do, and here are the very dangerous things the system should never touch, even in a mode where it's creatively exploring, then developers of language models like us could use that as the governing document.
You've said AI will likely eliminate millions of jobs; it could increase racial bias and misinformation, create machines that are smarter than all of humanity combined, and have other consequences so terrible we can't even imagine what they could be. Many people are going to ask, why on Earth did you create this technology? Why, Sam?
I think it can do the opposite of all of those things too. Properly done, it is going to eliminate a lot of current jobs, that's true, but we can make much better ones. Talking about the downsides, acknowledging the downsides, and trying to avoid those while we push in the direction of the upsides, I think that's important. And again, this is a very early preview. Like, would you push a button to stop this if it meant we were no longer able to cure all diseases? Would you push a button to stop this if it meant we couldn't educate every child in the world super well?
Would you push a button to stop this if it meant there was a five percent chance it would be the end of the world?
I would push a button to slow it down. And in fact, I think we will need to figure out ways to slow down this technology over time.
2024, the next major election in the United States, might not be on everyone's mind, but it certainly is on yours. Is this technology going to have the kind of impact that maybe social media has had on previous elections, and how can you guarantee there won't be those kinds of problems because of ChatGPT?
"We don't know" is the honest answer. We're monitoring very closely, and again, we can take it back, we can turn things off, we can change the rules.
Is this a Google killer? Will people say "I'm going to ChatGPT it" instead of "Google it" in the future?
I think if you're thinking about this as search, it's sort of the wrong framework. I have no doubt there will be some things people used to do on Google that they'll do in ChatGPT, but I think it's a fundamentally different kind of product.
Elon Musk, who was an early investor in your company and has since left, has called out some of ChatGPT's inaccuracies, and he tweeted recently that what we need is "TruthGPT." Is he right?
I think he is right in that we want these systems to tell the truth. But I don't know the full context of that tweet; I don't think I know what it's referring to.
Do you and he speak anymore?
We do.
And what does he say to you, off of Twitter?
I have tremendous respect for Elon. Obviously, we have some different opinions about how AI should go, but I think we fundamentally agree on more than we disagree on.
What do you think you agree most about?
That getting this technology right, and figuring out how to navigate the risks, is super important to the future of humanity.
How will you know if you got it right?
One simple way: if most people think they're much better off than they were before we put the technology out into the world, that would be an indication we got it right.
You know, a lot of people think science fiction when they think ChatGPT. Can you keep it so that these are truly closed systems that don't become more powerful than we are as human beings, communicate with each other, and plan our destruction?
It's so tempting to anthropomorphize ChatGPT, but I think it's important to talk about what it's not as much as what it is, because deep in our biology we are programmed to respond to someone talking to us. When you talk to ChatGPT, you're really talking to this Transformer somewhere in a cloud, and it's trying to predict the next token and give it back to you. But it's so tempting to anthropomorphize that, to think that this is an entity, a sentient being that I'm talking to, and that it's going to go do its own thing, have its own will, and plan with others.
But it can't.
It can't.
Could it?
I can imagine, in the far future, other versions of artificial intelligence, different setups that are not a large language model, that could do that.
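The "trying to predict the next token" description above can be sketched with a toy example. Everything here (the tiny vocabulary, the probabilities, the greedy pick-the-most-likely rule) is made up purely for illustration; real models learn distributions over tens of thousands of tokens and condition on the whole conversation, not just the previous word.

```python
# Toy next-token predictor: maps a word to a (made-up) probability
# distribution over possible next words, then generates greedily.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"barked": 0.6, "ran": 0.4},
}

def predict_next(word: str) -> str:
    """Return the most likely next word (greedy decoding)."""
    probs = NEXT_WORD_PROBS[word]
    return max(probs, key=probs.get)

def generate(start: str, steps: int) -> list[str]:
    """Generate a sequence one word at a time: predict, append, repeat."""
    words = [start]
    for _ in range(steps):
        if words[-1] not in NEXT_WORD_PROBS:
            break  # no distribution for this word; stop generating
        words.append(predict_next(words[-1]))
    return words
```

The loop is the whole point: the system produces one token, appends it to the context, and predicts again. There is no goal or plan beyond that repeated prediction, which is what the answer above is getting at.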
It really took a decade-plus of social media being out in the world for us to realize, and even characterize, some of its real downsides. How should we be measuring that here with AI?
There are a number of new organizations starting, and I expect relatively soon there will be new governmental departments or commissions or groups starting.
Is the government prepared for this?
They are beginning to really pay attention, which I think is great, and I think this is another reason it's important to put these technologies out into the world. We really need the government's attention. We really need thoughtful policy here, and that takes a while to do.
If the government could do one thing right now to protect people from the downsides of this technology, what should it do?
The main thing I would like to see the government do today is really come up to speed quickly on understanding what's happening, and get insight into the top efforts: where our capabilities are, what we're doing. I think that could start right now.
Are you speaking to the government? Are you in regular contact?
Regular contact.
And do you think they get it?
More and more every day.
When it comes to schools, you have this technology that can beat most humans at the SATs, the bar exam. How should schools be integrating this technology in a way that doesn't increase cheating, that doesn't increase laziness among students?
Education is going to have to change, but that's happened many other times with technology. When we got the calculator, the way we taught math and what we tested students on totally changed. The promise of this technology, one of the ones I'm most excited about, is the ability to provide individual learning, great individual learning, for each student. You're already seeing students using ChatGPT for this, in a very primitive way, to great success. As companies take our technology and create dedicated platforms for this kind of learning, I think it will revolutionize education, and I think kids starting the education process today, by the time they graduate from high school, are going to be smarter and more capable than we can imagine.
It's a little better than a TI-85.
It's a little better.
But it does put a lot of pressure on teachers. For example, if they've assigned an essay and three of their students used ChatGPT to write that essay, how are they going to figure that out?
I've talked to a lot of teachers about this, and it's true that it puts pressure on them in some ways. But an overworked teacher can now say, "hey, go use ChatGPT to learn this concept that you're struggling with, and just sort of talk back and forth." One of the new things we showed yesterday in the GPT-4 launch is using GPT-4 to be a Socratic-method educator. Teachers, not all, but many teachers, really, really love this. They say it's totally changing the way I teach my students.
It's basically the new office hours.
Yeah, it's a different thing, but it is a new way to supplement learning, for sure.