Experts Urge Safeguards Before AI Can 'Feign Obedience'
Academics Call for Adoption of AI Guardrails to Prevent Potential Existential Risk

Leading artificial intelligence experts are calling on governments and tech companies to swiftly develop safeguards for AI systems to mitigate potential existential threats posed by the technology.
An essay co-written by 24 academics and experts says reckless development by tech companies, especially of autonomous and other cutting-edge AI systems, could lead to large-scale societal risks. Among the authors are Yoshua Bengio and Geoffrey Hinton, widely known as the "godfathers of AI."
"We urgently need national institutions and international governance to enforce standards in order to prevent recklessness and misuse," the experts said in the essay published Tuesday. "Without them, companies and countries may seek a competitive edge by pushing AI capabilities to new heights while cutting corners on safety, or by delegating key societal roles to AI systems with little human oversight."
The most pressing need for restraint exists in "frontier systems," they said: "A small number of most powerful AI systems - trained on billion-dollar supercomputers - which will have the most hazardous and unpredictable capabilities."
More capable future AI systems might "learn to feign obedience" to human directives or "exploit weaknesses in our safety objectives and shutdown mechanisms," the essay says. AI systems could evade human intervention by spreading their algorithms through wormlike infections and by inserting and exploiting cybersecurity vulnerabilities to control the computer systems that underpin communications, media, government and supply chains. "Unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or even extinction of humanity," the experts said.
Tuesday's warning comes a week before the government of U.K. Prime Minister Rishi Sunak is slated to hold its AI Safety Summit to address challenges and identify opportunities presented by machine learning and deep learning technologies (see: UK's AI Safety Summit to Focus on Risk and Governance).
The authors of the essay urged industry to earmark one-third of its AI research and development budget for safety and ethics and called on governments to ensure industry oversight through mechanisms such as legal protections for whistleblowers and mandatory reporting requirements. Governments should also require AI labs to report training runs over a certain computational size and ensure that independent auditors have access to labs, they said, adding that meaningful assessment can be conducted without giving auditors full access to models.
Many of the authors signed an earlier open letter calling on AI developers to observe a voluntary half-year pause in order to develop an auditable set of safety protocols - a call the tech industry did not heed. The essay's authors are not alone in calling for restraints on AI. Microsoft President Brad Smith recently told a panel of U.S. senators that AI needs a "safety brake" before it can be deployed without concern (see: US Lawmakers Warned That AI Needs a 'Safety Brake').
As a bridge to the time when regulations can be put in place, the authors said, major AI developers should define red-line AI capabilities that require intervention. Those red lines, plus the commitments developers would undertake should an AI model cross them, must be "detailed and independently scrutinized," the essay said.
"To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path, if we have the wisdom to take it."