This Tuesday, neuroscientist, founder and author Gary Marcus sat between OpenAI CEO Sam Altman and IBM’s chief privacy and trust officer Christina Montgomery as all three testified before the Senate Judiciary Committee for more than three hours. The senators largely focused on Altman, who runs one of the most powerful companies on the planet at the moment and who has repeatedly asked them to help regulate his own work. (Most CEOs beg Congress to leave their industries alone.)
Marcus has been known in academic circles for some time, but his star has risen lately thanks to his newsletter (“The Road to A.I. We Can Trust”), his podcast (“Humans vs. Machines”) and his relatable unease about the unchecked rise of AI, which has also landed him in the New York Times’ Sunday Magazine and Wired, among other places.
Because this week’s hearing seemed truly historic in some ways (Sen. Josh Hawley characterized AI as “one of the greatest technological innovations in human history,” while Sen. John Kennedy was so taken with Altman that he asked him to choose his own regulators), we wanted to talk with Marcus as well to discuss the experience and see what he knows about what happens next.
Are you still in Washington?
I am still in Washington. I’m meeting with members of Congress, their staff, and various other interesting people to see if the things I’ve talked about can become a reality.
You taught at New York University. You’ve co-founded several AI companies, including one with famed roboticist Rodney Brooks. When I interviewed Brooks onstage in 2017, he said he didn’t think Elon Musk really understood AI and that he thought Musk was wrong that AI is an existential threat.
I think Rod and I share a skepticism about whether current AI is anything like artificial general intelligence. There are a couple of questions to pull apart. One is whether we’re getting closer to AGI, and the other is how dangerous the AI we currently have actually is. I don’t think current AI is an existential threat, but I do see it as dangerous. In many ways, I think it’s a threat to democracy. That’s not a threat to humanity; it’s not going to annihilate all humans. But it’s a pretty serious risk.
Not long ago you were debating Yann LeCun, Meta’s chief AI scientist. I was never sure what that flap was about. Was it the true significance of deep learning neural networks?
So LeCun and I have actually debated many things over many years. We had a public debate in 2017, moderated by the philosopher David Chalmers. I’ve been trying to get [LeCun] to have another serious debate ever since, and he won’t do it. He prefers to subtweet me on Twitter and the like, which doesn’t seem like the most adult way of having a conversation, but because he’s an important figure, I do respond.
One thing I think we disagree about [currently] is that LeCun thinks it’s fine to use these [large language models] and that there’s no possible harm here. I think he’s extremely wrong about that. There are potential threats to democracy, ranging from misinformation deliberately produced by bad actors, to misinformation produced by accident, like the law professor who was accused of sexual harassment even though he didn’t commit it, [to the ability to] subtly shape people’s political beliefs based on training data that the public doesn’t even know anything about. It’s like social media, but even more insidious. You can also use these tools to manipulate other people and probably trick them into anything you want. You can scale them massively. There are definitely risks here.
You said something interesting about Sam Altman on Tuesday, pointing out to the senators that he hadn’t told them what his worst fear is, which you called “germane,” and redirecting them back to him. What he still hasn’t said is anything about autonomous weapons, which I talked with him about a few years ago as a top concern. I thought it was interesting that weapons didn’t come up.
We covered a lot of ground, but there is plenty we didn’t get to, including enforcement, which is really important, as well as national security, autonomous weapons and more. There will be several more of [these].
Did you talk about open source and closed systems?
It barely came up. It’s obviously a very complicated and interesting question, and it’s really not clear what the right answer is. You want people to be able to do independent science. Maybe you want some kind of licensing around things that will be deployed at very large scale, since they carry particular risks, including security risks. It’s not clear that you want every bad actor to have access to arbitrarily powerful tools. So there are arguments for and against, and probably the right answer is to allow a fair degree of open source while placing some limits on what can be done and how it can be deployed.
Any specific thoughts on Meta’s strategy of letting its language model out into the world for people to tinker with?
I don’t think it’s great that [Meta’s AI technology] LLaMA is out there, to be honest. I think that was a little careless. And, you know, that’s literally one of the genies that is out of the bottle. There was no legal infrastructure in place, and as far as I know they didn’t consult anybody about what they were doing. Maybe they did, but the decision process with that, or with, say, Bing, is basically just: a company decides it is going to do this.
But some of the things companies decide to do can carry harm, whether in the near term or the long term. So I think governments and scientists should increasingly have some role in deciding what goes out into the world, [through a kind of] FDA for AI where, if you want to do widespread deployment, first you do a trial. You talk about the cost-benefits. You do another trial. And eventually, if you’re confident that the benefits outweigh the risks, [you do the] release at large scale. But right now, any company can decide at any time to deploy something to 100 million customers without any kind of governmental or scientific supervision. You have to have some system where impartial authorities can go in.
Where would these impartial authorities come from? Isn’t everyone who knows anything about how these things work already working for a company?
I’m not. [Canadian computer scientist] Yoshua Bengio isn’t. There are lots of scientists who don’t work for these companies. It is a real worry how to get enough auditors and how to give them the incentive to do it. But there are 100,000 computer scientists with some facet of expertise here, and not all of them are working for Google or Microsoft on contract.
Would you like to play a role in this AI agency?
I’m interested. I feel that whatever we build should be global and neutral, presumably nonprofit, and I think I have a good, neutral voice here that I’d like to share and use to steer us to a good place.
What was it like sitting before the Senate Judiciary Committee? And do you think you’ll be invited back?
I wouldn’t be shocked if I were invited back, but I have no idea. I was profoundly moved by it, and profoundly moved to be in that room. It’s a little smaller than it looks on television, I suppose. But it felt like everyone was there to do the best they could for the U.S., for humanity. Everyone understood the weight of the moment, and by all accounts the senators brought their best game. We knew we were there for a reason, and we did our best.