A group of well-known AI ethicists has written a counterpoint to this week’s controversial letter calling for a six-month “pause” on AI development, criticizing its focus on hypothetical future threats when real harm is attributable to misuse of the technology today.
Thousands of people, including familiar names like Steve Wozniak and Elon Musk, signed the open letter from the Future of Life Institute earlier this week. It proposes that development of AI models like GPT-4 be put on hold in order to avoid “loss of control of our civilization,” among other threats.
Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell are all major figures in the AI and ethics field, known (in addition to their work) for being pushed out of Google over a paper criticizing the capabilities of AI. They are now working together at the DAIR Institute, a new research outfit aimed at studying, exposing, and preventing AI-associated harms.
They were not, however, on the list of signatories. Instead, they posted a rebuke pointing out that the letter fails to engage with the existing problems caused by the technology.
“These hypothetical risks are the focus of a dangerous ideology called long-termism that ignores the actual harms resulting from the deployment of AI systems today,” they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures, and the further concentration of those power structures in fewer hands.
The choice to worry about a Terminator- or Matrix-esque robot apocalypse is a red herring when, in the same moment, we have reports of companies like Clearview AI being used by police to essentially frame an innocent man. You don’t need a T-1000 when there’s a Ring cam on every front door, accessible via online rubber-stamp warrant factories.
The DAIR crew agrees with some of the letter’s aims, such as identifying synthetic media, but they emphasize that action must be taken now, on today’s problems, with the remedies available to us.
What we need is regulation that enforces transparency. Not only should it always be made clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose their training data and model architectures. Responsibility for creating tools that are safe to use should lie with the companies that build and deploy generative systems, which means the builders of these systems must be held accountable for the outputs produced by their products.
The current race toward ever larger “AI experiments” is not a preordained path where our only choice is how fast to run, but rather a series of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation that protects the rights and interests of people.
Now is indeed the time to act. But the focus of our concern should not be imaginary “powerful digital minds.” Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, which are rapidly centralizing power and increasing social inequities.
Incidentally, this letter echoes a sentiment I heard from Uncharted Power founder Jessica Matthews at yesterday’s AfroTech event in Seattle: we shouldn’t fear AI itself, but the people who build it. (Her solution: become the people who build it.)
It is vanishingly unlikely that any major company would agree to pause its research in response to an open letter, but judging from the engagement the letter has received, the risks of AI, real and hypothetical alike, are clearly a top concern across many segments of society. If companies won’t act on those risks themselves, perhaps someone will have to do it for them.