European lawmakers moved closer to passing a pioneering law on artificial intelligence last week, advancing legislation that aims to set a benchmark for the rapidly evolving — yet minimally regulated — technology.
On Wednesday, June 14, the European Parliament approved draft legislation known as the AI Act. Billed as the world’s first comprehensive AI law, the legislation represents a rulebook for the adoption and use of AI technology in the European Union’s 27 member states.
The AI Act proposes a ban on high-risk AI practices in Europe, including the use of real-time facial recognition technology in public places and other AI systems deemed “intrusive and discriminatory” by the European Parliament, such as social scoring systems and models that employ “subliminal or purposefully manipulative techniques.”
The draft legislation also includes stricter requirements for generative AI models like ChatGPT, whose developers will be required to disclose when content has been machine-generated and to design their models with built-in safeguards against the generation of illegal content.
Jump to:
- A precedent in AI law
- A risk-based approach to AI rules
- Mixed reactions from the tech community
- When will the new law pass?
A precedent in AI law
Europe’s pioneering legislation intends to set a precedent for artificial intelligence regulation around the world, where an explosion in the use of AI and machine learning tools has left policymakers scrambling to keep up.
Deirdre Clune, a Member of the European Parliament, heralded the AI Act as “a ground-breaking piece of legislation” with the potential to become “the de facto global approach to regulating AI.”
Speaking to MEPs on June 13, Clune said: “It is among the first global attempts to regulate AI … AI has the capacity to solve the most pressing issues, including climate change or serious illness, and we want to lay the foundations for doing this here in the European Union.”
Clune added: “We cannot do this entirely on our own, but we should be leaders in ensuring that this technology is developed and used in a responsible ethical manner, while also supporting innovation and economic growth.”
A risk-based approach to AI rules
The EU’s draft legislation proposes a risk-based approach to AI regulation, categorizing artificial intelligence systems according to the potential threat they pose to users, an approach that has long been fiercely debated.
AI systems deemed to pose an “unacceptable” level of risk will be strictly prohibited under the EU’s law, with limited exceptions. AI systems and functions deemed unacceptable — and therefore banned — under the draft bill include:
- Cognitive behavioral manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behavior in children.
- Social scoring: classifying people based on behavior, socioeconomic status or personal characteristics.
- Real-time and remote biometric identification systems: facial recognition tools are a possible example.
An exception in the case of remote biometric identification systems would be for prosecuting serious crimes in instances where identification occurs after “a significant delay,” though such cases will require court approval.
High-risk AI systems include those used in EU-regulated products like toys and cars, as well as specific areas such as biometric identification, critical infrastructure management, employment and law enforcement. Under new EU rules, these systems must be registered in an EU database.
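The tiered scheme described above can be sketched in a few lines of code. This is purely illustrative: the tier names and example use cases follow the draft act's categories, but the lookup table and function are hypothetical, not an official compliance tool.

```python
# Illustrative only: a toy mapping of AI use cases to the draft AI Act's
# risk tiers. Tier names follow the draft; the mapping itself is hypothetical.
RISK_TIERS = {
    "social_scoring": "unacceptable",          # banned outright
    "realtime_biometric_id": "unacceptable",   # banned, narrow exceptions
    "cognitive_manipulation": "unacceptable",  # banned
    "biometric_identification": "high",        # allowed, must be registered
    "critical_infrastructure": "high",
    "employment_screening": "high",
    "law_enforcement": "high",
    "generative_chatbot": "limited",           # transparency obligations
}

def obligations(use_case: str) -> str:
    """Return a one-line summary of the draft obligations for a use case."""
    tier = RISK_TIERS.get(use_case, "minimal")
    summaries = {
        "unacceptable": "prohibited in the EU (limited exceptions)",
        "high": "must be registered in an EU database and assessed for conformity",
        "limited": "must meet transparency requirements",
        "minimal": "no new obligations under the draft act",
    }
    return f"{use_case}: {tier} risk -- {summaries[tier]}"

print(obligations("social_scoring"))
print(obligations("employment_screening"))
```

Anything not named in the draft's unacceptable, high or limited categories falls through to the minimal tier, which carries no new obligations.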
SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)
AI systems with the potential to influence voters in political campaigns, as well as those found in recommendation systems used by social media platforms, also feature on the AI Act’s high-risk list.
Meanwhile, generative AI tools such as ChatGPT and Google Bard, as well as other AI systems deemed a limited risk, will be required to adopt stronger safeguards under the new EU rules. These safeguards include stricter transparency requirements and measures enabling users to make informed decisions about whether and how they interact with AI models.
Users will have to be informed when they are interacting with an AI and must also be given the option to cease or continue using AI applications once they’ve interacted with them.
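As a concrete illustration of this transparency requirement, a provider might attach a machine-readable disclosure to any generated text. The sketch below is an assumption about how such labeling could look; the field names are invented for illustration and do not come from the act.

```python
# Illustrative sketch of the transparency requirement: label machine-generated
# content so users know it was not authored by a human.
# Field names are hypothetical, not taken from the AI Act.
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    """Wrap generated text in a machine-readable AI-disclosure record."""
    record = {
        "content": text,
        "ai_generated": True,  # the disclosure users must receive
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

labeled = label_generated_content("Hello from a model.", "example-llm-1")
print(labeled)
```

A client application could read the `ai_generated` flag before display and show users the notice the draft rules call for.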
“The absolute minimum that we need to offer here is transparency,” Clune said. “It must be clear that this content has not been made by humans. And we also go one step further and ask developers of these large models to be more transparent and share their information with providers and how these systems were trained and how they were developed. This should address and alter the environmental sustainability of these systems.”
Mixed reactions from the tech community
While Europe’s AI Act ultimately aims to govern the use of artificial intelligence in a way that balances safety and transparency with innovative potential, members of the tech community have voiced concerns that increased scrutiny and potential penalties for breaching the rules could limit innovation.
Kevin Bocek, vice president of ecosystem and community at cybersecurity company Venafi, argued that the European Parliament was “squarely taking aim at Silicon Valley’s AI innovations” and warned of “a potentially huge impact on U.S. business and their investors.”
“The EU will significantly crimp the current approach to AI of weekly product releases and daily model updates,” Bocek told TechRepublic. “The bloc’s requirements for transparency, certification and safety don’t align with how software and cloud providers innovate at present. This opens a path for European startups and open source to play a larger role in AI than we’re currently seeing today.”
The new EU rules could also make things trickier for companies that operate in Europe but are headquartered elsewhere.
Greg Hanson, group vice president of platform sales for EMEA and Latin America at enterprise data management company Informatica, told TechRepublic that Europe’s AI Act would require U.S. and other non-EU businesses to establish full visibility into the origin of the data on which their AI models are built in order to ensure compliance.
“For organizations with data crossing international borders — which is most — they will now need to have full visibility of how and where their data is processed to meet different geographical legislation,” Hanson told TechRepublic.
“For example, an organization headquartered in the USA but operating in Europe will need to fully understand the quality of its data and be able to trace it fully through their data supply chain … This means the need for data accuracy, clarity, lineage and governance will intensify.”
SEE: Experts laud GDPR at five year milestone (TechRepublic)
The AI Act includes exemptions to rules for research activities and AI components provided under open-source licenses.
To help ensure businesses can effectively harness AI while protecting citizens’ rights, public authorities will establish regulatory sandboxes in which new AI systems can be tested before deployment.
Meanwhile, citizens will have enhanced rights to file complaints about AI systems and receive explanations for decisions based on high-risk AI systems, with the reformed EU AI Office taking responsibility for monitoring how the rulebook is implemented.
Despite this, Kamales Lardi, digital transformation consultant and author of “The Human Side of Digital Business Transformation,” warned that regulators would “continue to play a catch-up game” unless limitations in the draft act were quickly addressed.
“The Act is taking a traditional regulatory and compliance approach to a dynamic and rapidly changing landscape that is generative AI,” Lardi told TechRepublic.
“The Act does not sufficiently address topics around copyright, even from the perspective that debate and definitions around what is considered copyright boundaries are still in discussion.”
Lardi also noted that implementation of the AI Act would “be a nightmare” given the number of companies currently using AI-based solutions or planning to do so in the near future. “The review of applications and conformity assessment will be a daunting task, and relying on self-assessment will not be sufficient in the long term,” she added.
“Companies may need to make substantial changes to their data collection and management practices to meet the new data privacy standards set by the legislation.”
When will the new law pass?
The EU hopes to finalize the AI Act by the end of 2023, though even if successful in doing so, the new legislation is not expected to come into force for a few years — potentially around 2026.
Regardless of the intricacies that will need to be worked out in the interim, Hanson said he welcomes the introduction of Europe’s landmark AI legislation.
“The EU’s decision to regulate the data feeding AI systems is a smart legislative move,” Hanson said. “Not only will it protect the potential of a technology that can fuel economic growth, but it will protect the very essence of a business.”
“AI, in particular, puts data accuracy on a knife edge. Incorrect data, which ultimately ends up fueling AI models, will have a negative brand impact. But accurate, trustworthy, timely data will give organizations that much-needed competitive edge and drive organizational growth.”