More than three years ago, this editor sat down with Sam Altman at a small event in San Francisco, shortly after he stepped down as president of Y Combinator to become CEO of the AI company he co-founded with Elon Musk and others in 2015: OpenAI.
At the time, Altman described OpenAI’s potential in terms that sounded outlandish. He said, for example, that the opportunity of artificial general intelligence (machine intelligence that can solve problems as well as a human can) is so great that if OpenAI managed to crack it, the company could “maybe capture the light cone of all future value in the universe.” He said the technology could be so powerful that the company might need to keep its research private. Asked whether OpenAI was guilty of fear-mongering (co-founder Elon Musk has repeatedly called for every organization developing AI to be regulated), Altman talked about the dangers of not thinking about the “societal impact” when “you’re building something on an exponential curve.”
The audience laughed at various points in the conversation, unsure how seriously to take Altman. But no one is laughing now. Machines are not yet as intelligent as people, but the tech that OpenAI has since released to the world has come close enough that some critics fear it could be our undoing (and more sophisticated technology is reportedly on the way).
Sure, heavy users insist it’s not so smart, but the ChatGPT model that OpenAI released to the public last week answers questions so much like a person that professionals across a range of industries are trying to process the implications. Educators, for example, wonder how they will distinguish original writing from the algorithmically generated essays they are bound to receive, and which can evade anti-plagiarism software.
Paul Kedrosky is not an educator. He is an economist, venture capitalist and MIT fellow who describes himself as “a frustrated normal who likes to think about the risks and unintended consequences of complex systems.” But he is among those suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this pocket nuclear bomb without limits into an unprepared society.” Kedrosky wrote that the service should be withdrawn, and that if it were ever reintroduced, it should only be with severe restrictions.
We talked yesterday about some of his concerns, and about why he believes OpenAI is driving what he calls “the most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.
Our chat has been edited for length and clarity.
TC: ChatGPT came out last Wednesday. What triggered your reaction on Twitter?
PK: I’ve played with these conversational user interfaces and AI services in the past, and this is obviously a huge leap beyond them. What particularly troubled me here is the casual brutality of it, with massive consequences for a host of different activities. It spans pretty much every domain where there is a grammar, [meaning] an organized way of expressing yourself: software engineering, high school essays, legal documents. All of them are easily eaten by this ravenous beast and spit back out again, without compensation for whatever was used to train it.
A colleague at UCLA told me he has no idea what to do with essays at the end of the current term. They get hundreds of essays per course and thousands per department, and they can no longer tell which are genuine. So to do this so casually, as someone said to me today, is the opposite of the so-called [ethical] white hat hacker who finds a bug in a widely used product and notifies the developer before the public learns of it, so the developer can patch the product and we avoid widespread devastation and power grids going down. This is the reverse: a virus has been released into the wild with no concern for the consequences.
It feels like it can swallow the world.
Some might say, “Did you feel the same way when automation arrived in auto plants and auto workers lost their jobs?” Because this is a kind of broader phenomenon. But this is very different. These particular learning technologies are autocatalytic; they learn from the requests. Robots in a manufacturing plant, disruptive as they were, with their huge economic impact on the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving across sector by sector. Yet that is not only what we can expect here but what we should expect.
Musk left OpenAI partly over disagreements about the company’s development, he said in 2019, and he has long described AI as an existential threat. But people said he didn’t know what he was talking about. Now we are faced with this powerful technology, and it’s not clear who will step in to address it.
I think it starts in a bunch of places at once, most of which will look really clumsy, and people will [then] sneer at it, because that’s what technologists do. But too bad, because we’ve walked into this by creating something with such consequences. In the same way that, years ago, the FTC told people running blogs that they had to [make clear they] were putting up affiliate links and making money from them, I think at a trivial level people are going to be forced to disclose, “We wrote none of this. It’s all machine generated.”
And I think we’ll see new energy around the ongoing litigation against Microsoft and OpenAI over copyright infringement in the context of training machine learning algorithms. I think there is a broader DMCA issue here with this service.
And I think there is the potential for a [massive] lawsuit and settlement eventually over the consequences of these services, which will probably take too long and not help enough people, but I don’t see how we don’t end up in [this place] with respect to these technologies.
What’s the thinking at MIT?
Andy McAfee and his group there are more sanguine and hold the more orthodox view that whenever you see disruption, other opportunities get created; people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t assume that this particular evolution of technology is one we can’t adapt to and migrate around. And I think that’s broadly true.
But the lesson, particularly of the last five years, is that these changes can take a long time. Free trade, for example, is one of those incredibly disruptive, economy-wide experiences. As economists, we told ourselves the economy would adapt and the public would benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. We like to think we can anticipate and manage these transitions, and [we can’t].
You talked about high school and college essay writing earlier. One of our kids has already asked (theoretically!) whether it would be plagiarism to use ChatGPT to write a paper.
Since the purpose of writing an essay is to prove that you can think, this short-circuits the process and defeats the purpose. If you can’t trust homework because you don’t know whether the student actually did it, then everything has to happen in the classroom and has to be supervised. We’ll have to do more verbally, and what does that mean? It means school just got more expensive and smaller, at the exact time we’re trying to do the opposite.
What are your thoughts on the idea of a universal basic income, or of letting everyone share in the gains from AI?
I am a much weaker proponent than I was pre-COVID. The reason is that COVID was, in some ways, an experiment in universal basic income. We paid people to stay home, and they came up with QAnon. So I get really nervous about what happens when people no longer need to work, because the devil finds work for idle hands, and there will be a lot of idle hands and a lot of deviltry.