Some artificial intelligence experts have signed a statement of caution regarding unchecked technological development; the 22-word statement frames AI as a “societal-scale risk.” Meanwhile, regulatory bodies are working to finalize their stances on the use of generative AI. How might proposed regulation, or caution from inside the tech industry, affect which AI tools are available to enterprises?
Jump to:
- Experts warn of AI risks
- European Union policy could shape global AI risk management rules
- U.S. government exploring AI’s “risks and opportunities”
- What do AI regulations mean for enterprises?
Experts warn of AI risks
This week’s statement about AI risk from the Center for AI Safety was succinct: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The short statement is meant to “open up discussion” and encourage wide adoption, the Center for AI Safety said. Bill Gates, AI pioneer Geoffrey Hinton, and Google DeepMind CEO Demis Hassabis are among the signatories.
The Center for AI Safety is a nonprofit founded to “reduce societal-scale risks” from AI. The Center lists possible problems it anticipates AI could cause, including use in warfare, misinformation, the radicalization of people through generated content, “deception” about an AI’s own inner workings, and “the sudden emergence of capabilities or goals” not anticipated by the AI’s creators.
Its statement follows a March 2023 open letter from the Future of Life Institute cautioning against unchecked AI development and asking AI firms to pause development for six months, possibly under a government moratorium.
Some of the concerns around generative AI have been criticized as hypothetical. Other groups, including the EU-U.S. Trade and Technology Council, plan to address such considerations in upcoming policy. For example, a joint statement notes the Council is committed to “limiting the challenges [AI] pose to universal human rights and shared democratic values,” and the EU AI Act limits the use of AI for predictive policing and for emotion recognition in border patrols.
SEE: Does your company need an Artificial Intelligence Ethics Policy?
European Union policy could shape global AI risk management rules
One of the signatories of the warning statement, OpenAI CEO Sam Altman, was among those present at an EU-U.S. Trade and Technology Council meeting on Wednesday. Altman is wary of over-regulation in the EU but says he plans to comply with its rules, according to a report from Bloomberg.
The council plans to produce a draft of an AI code of conduct within the next few weeks, European Commission Vice President Margrethe Vestager said after the council meeting. She proposed external audits and watermarking as possible safeguards against the misuse of AI-generated content.
Vestager wants the code of conduct drafted well before the two to three years it may take for the proposed AI Act to move through the European Union’s legislative process. The AI Act will next be read in the European Parliament, possibly as soon as June.
The Group of Seven nations is also looking into regulating generative AI such as ChatGPT to ensure it is “accurate, reliable, safe and non-discriminatory,” European Commission President Ursula von der Leyen said in a comment to Reuters.
U.S. government exploring AI’s “risks and opportunities”
The U.S. government is working on a plan to “advance a cohesive and comprehensive approach to AI-related risks and opportunities,” National Security Council spokesman Adam Hodge said in a statement obtained by Bloomberg.
In the United States, sentiment within the Biden administration after the council meeting is reportedly divided between officials who want to use AI to stay competitive and those who support the EU’s plans to regulate it.
What do AI regulations mean for enterprises?
Organizations that make AI-driven products, or the hardware and software to run them, should keep an eye on the progress of regulations like those proposed by the EU. State regulations may also eventually come into play, such as the California proposal to limit how AI can be used in hiring and other decisions that affect a person’s quality of life.
Organizations should also consider how their own ethics policies might apply when and where AI is given human-facing, decision-making tasks.