The industry’s heavyweights wrote an open letter calling for an immediate pause on AI systems more powerful than GPT-4 (Generative Pre-trained Transformer 4) for the sake of humanity. “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs…”
THE OPEN LETTER
“Should we let machines flood our information channels with propaganda and untruth… automate away all the jobs, including the fulfilling ones… develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
In an open letter citing potential risks to society, Elon Musk, Steve Wozniak, and other tech leaders and artificial intelligence experts are urging AI labs to immediately pause the development of powerful new AI systems.
The signatories include Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio and Stuart Russell.
The letter was issued by the Future of Life Institute, which, according to the European Union’s transparency register, is primarily funded by the Musk Foundation, along with the London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation.
The signatories want a pause of six months or longer and ask that governments step in if needed until rigorous audits and oversight are in place. Other measures they call for include watermarking systems to help distinguish real content from synthetic, systems to track model leaks, certification requirements, and liability for harm caused by AI.
“In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.”
Sam Altman, CEO of OpenAI, did not sign. Perhaps he’d rather make money and be first than protect the world. That’s conjecture, but we’d like to know his reasons.
You thought Big Tech was a problem. Think AI!
— Wittgenstein (@backtolife_2023) March 29, 2023
OPINION
Soon, you won’t be able to distinguish reality from the imagined. I had a student like that once. He couldn’t tell the difference. He’d ask if the situation he was in was really happening. Sometimes he’d watch a lesson and think he was watching TV. The little boy kept asking, “Are you real?” Think of the uses corrupt governments could make of a tool like that, one where we’d constantly have to ask if something is real. We could be indoctrinated 24/7. TikTok already gives us a peek at what such a tool could do to children.
Source: AI Experts Warn of Devastation to Society: Pause AI “Immediately”