Some of the biggest names in AI are raising the alarm about their own creations. In an open letter published Tuesday, more than 1,100 signatories called for a moratorium on state-of-the-art AI development.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5),” reads the letter, released by the Future of Life Institute, a nonprofit that works to reduce catastrophic and existential risks. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

These are powerful words from powerful people. Signatories include Elon Musk, who helped co-found GPT-4 maker OpenAI before breaking with the company in 2018, along with Apple co-founder Steve Wozniak and Skype co-founder Jaan Tallinn. More to the point, the signatories include foundational figures in artificial intelligence, including Yoshua Bengio, who pioneered the AI approach known as deep learning; Stuart Russell, a leading researcher at UC Berkeley’s Center for Human-Compatible AI; and Victoria Krakovna, a research scientist at DeepMind. And they’re warning that society is not ready for the increasingly advanced systems that labs are racing to deploy.

There’s an understandable impulse here to eye-roll. After all, the signatories include some of the very people who are pushing out the generative AI models that the letter warns about. People like Emad Mostaque, the CEO of Stability AI, which released the text-to-image model Stable Diffusion last year.

But given the high stakes around rapid AI development, we have two options. Option one is to object, “These are the people who got us into this mess!” Option two is to object, “These are the people who got us into this mess!” - and then put pressure on them to do everything we can to stop the mess from spiraling out of control. The letter is right to argue that there’s still a lot we can do.

We can - and should - slow down AI progress

Some people assume that we can’t slow down technological progress. Or that even if we can, we shouldn’t, because AI can bring the world so many benefits. Both those assumptions start to fall apart when you think about them.

As I wrote in my piece laying out the case for slowing down AI, there is no technological inevitability, no law of nature, declaring that we must get GPT-5 next year and GPT-6 the year after. Which types of AI we choose to build or not build, how fast or how slow we choose to go - these are decisions that are up to us humans to make.

Although it might seem like an AI race is inevitable because of the profit and prestige incentives in the industry - and because of the geopolitical competition - all that really means is that the true challenge is to change the underlying incentive structure that drives all actors. We need a moratorium on powerful AI, the letter says, so we have a chance to ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

In other words: We don’t have to build robots that will steal our jobs and maybe kill us.

Slowing down a new technology is not some radical idea, destined for futility. Humanity has done this before - even with economically valuable technologies. Just think of human cloning or human germline modification. The recombinant DNA researchers behind the Asilomar Conference of 1975 famously organized a moratorium on certain experiments. Scientists definitely can modify the human germline, and they probably could engage in cloning. But with rare exceptions like the Chinese scientist He Jiankui - who was sentenced to three years in prison for his work on modifying human embryos - they don’t.

What about the other assumption - that we shouldn’t slow down AI because it can bring the world so many benefits? The key point here is that we’ve got to strike a wise balance between potential benefits and potential risks. It doesn’t make sense to barrel ahead with developing ever-more-powerful AI without at least some measure of confidence that the risks will be manageable.