The United Nations’ attempt to regulate AI could complicate enterprise compliance
The United Nations has launched a call for countries to agree to what it refers to as AI red lines: “do-not-cross limits for artificial intelligence, [to be established] by the end of 2026 to prevent the most severe risks to humanity and global stability.”
The UN statement issued Monday said, “without binding international rules, humanity faces escalating risks [ranging] from engineered pandemics and large-scale disinformation to global security threats, systematic human rights abuses and loss of human control and oversight over advanced systems.”
In a UN Q&A document about the initiative, the global body offered a wide range of possible AI bans. These include barring AI’s use in nuclear command and control, lethal autonomous weapons, mass surveillance, human impersonation involving “AI systems that deceive users into believing they are interacting with a human without disclosing their AI nature,” and malicious cyber use, which it defined as “prohibiting the uncontrolled release of cyberoffensive agents capable of disrupting critical infrastructure.”
The UN also wants prohibitions against autonomous self-replication, which it said is the “deployment of AI systems capable of replicating or significantly improving themselves without explicit human authorization,” as well as blocking “the development of AI systems that cannot be immediately terminated if meaningful human control over them is lost.”
And, it emphasized, “any future treaty should be built on three pillars: a clear list of prohibitions; robust, auditable verification mechanisms; and the appointment of an independent body established by the Parties to oversee implementation.”
Many analysts and observers, however, question whether such global restrictions are practical, enforceable, or achievable in time to limit the damage.
Analyst concerns focused not on what the UN is attempting, but on whether enough countries would support it, whether its end-of-2026 implementation target is soon enough to make a difference, and whether it’s meaningfully enforceable.
They noted that, to the extent the rules prove applicable and enforceable, they would affect enterprises mostly through compliance obligations; the UN’s requirements, however, are really aimed at hyperscalers and other AI vendors rather than at their customers.
AI rules that could impact enterprises might include limits on using AI to screen job applicants or make loan decisions, or on training models on confidential customer data.
Enterprises would still have to comply if they are operating in any country that signed the UN agreement. Then again, those countries, such as Germany, Canada, Switzerland or Japan, would likely have their own AI compliance rules, making the UN mandate potentially irrelevant.
Valence Howden, an advisory fellow at Info-Tech Research Group, said that he understands and applauds the intent behind the UN’s effort, even if he questions how viable it would be.
“How do we protect organizations [given that] the risks are not tied to country boundaries?” Howden asked. “There is more general agreement that it is necessary than people think. America is an outlier; they don’t want to regulate or control where AI can go. Even China is saying the right things.”
“A lot of the players that I thought would oppose it didn’t,” he noted.
In fact, the only country other than the United States that expressed strong hesitation was France, Howden said, “because they have the same concern [as the US] that innovation will be stifled.” He added, though, that France is likely supportive and voiced those reservations because of other delegates in the room.
Howden also expressed concern about the UN target of implementing these restrictions by the end of next year.
“A lot of this has to happen, and it has to happen quickly,” he said, but with the UN, “the governance and protections are moving at the speed of bureaucracy.”
Howden said that the AI vendor space is approaching what he called “the point of ungovernability,” and that the industry is “very close to that state right now, being beyond the point of no return.”
He noted that even if the UN effort passed, it’s doubtful that the major hyperscalers, which offer most of the genAI models, would comply.
“Can we trust the large scale enterprise vendors to do this? No. They don’t do it now,” Howden said.
Brian Levine is a former federal prosecutor who today serves as the executive director of a directory of former government and military specialists called FormerGov. His US Justice Department role included involvement in many global standards efforts, including work with Interpol on international ransomware coordination and serving on the law enforcement Joint Liaison Group (JLG) with China.
Levine said that he expects the UN measure will likely happen because most members will agree on the fundamental principles. “But,” he said, “those principles will be so high level that they won’t really move the ball forward in any meaningful way.”
Levine added that agreeing to the UN proposal is fairly low risk, as countries will likely think, “Don’t worry. It isn’t enforceable anyway.”
The UN has engaged in similar efforts before, with little to show for it. About 11 years ago, it tried to ban autonomous killer robots.
Peter Salib, an assistant professor of law at the University of Houston Law Center, said that the real-world deployments of genAI systems today make the threat of AI harm much more concrete than the threat of autonomous killer robots was back in 2014.
But as for the UN effort announced this week, Salib said that he doubts much will come from it.
“Probably nothing happens that matters,” Salib said. “The countries don’t care very much and don’t care enough to give up their sovereignty.”