Artificial Intelligence Predicts the Future of AI
In 2021, the European Commission published its proposal for Artificial Intelligence Regulation, or, in their words, “a proposal for a Regulation laying down harmonized rules on artificial intelligence.”
This is expected to be the first coherent, overarching piece of AI regulation and legislation.
Not only will the regulation change the way future AI projects are developed in the EU and across much of the world, but it’ll also change the way current AI projects are deployed and distributed.
The regulation remains forthcoming as of the latter half of 2022; the draft is now progressing through a detailed legislative process and will likely be amended several times.
As a result, the law won’t be binding for another 12 to 24 months or so, and its requirements will likely only come into force around 24 months after that.
However, organizations are already taking notice of how the regulation will affect them, especially because governments in the UK, China, and the USA are also developing their own coherent, overarching AI laws and regulations.
The debate surrounding AI regulation is ongoing and is sure to evolve over the years ahead.
There is a growing consensus that current AI regulation is incomplete, forming a patchwork that doesn’t adequately cover the size and scope of the modern industry. AI is powerful and has the potential to do enormous good, but several key issues have become evident, and there are many examples of AI’s unintended effects and consequences.
Traditionally, technology regulation has lagged behind what’s required. For example, it wasn’t until 2016 that the EU adopted GDPR to properly protect people’s digital data.
Now, the EU, UK, and US governments are trying to be pragmatic by passing regulations that introduce a risk-based approach to AI, sorting different forms of AI into different risk categories.
The EU’s regulation will even prohibit certain forms of AI outright, making them illegal under the new laws.
It’s worth noting that the EU’s rules apply to AI systems placed on the EU market or whose outputs are used within the EU, regardless of where the provider is based. AI companies and businesses that use AI are keeping a close eye on this legislation, as it’ll likely affect businesses and organizations worldwide.
The EU has taken the first practical step toward establishing mutually agreed laws and regulations for AI. The regulation restricts some activity, but the EU also intends to invest in AI research and development to empower a new era of regulated AI.
While EU law is most relevant to member states, it will likely set the tone for future international AI legislation. The US, China, and other jurisdictions are creating risk-based AI legislation similar to what the EU proposes.
The EU draft regulation applies to the following:
- Providers placing AI systems on the market or putting them into service within the EU, regardless of whether the provider is established in the EU or a third country
- Users of AI systems located within the EU
- Providers and users located outside the EU, where the output produced by the AI system is used within the EU
The draft then establishes three tiers of risk: (i) unacceptable risk, (ii) high risk, (iii) low risk.
Some AI uses and behaviors will be banned outright by the legislation. These are AIs that pose an immediate or unacceptably high risk of physical or psychological harm, or of violating human rights.
The draft regulation lists several types of AI deemed an “unacceptable risk” and prohibits them outright. Examples include:
- AIs that deploy subliminal techniques to materially distort a person’s behavior in a way that causes harm
- AIs that exploit the vulnerabilities of specific groups, such as children or disabled people
- Social scoring of individuals by public authorities
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions
The next tier is high-risk AIs, which are permitted subject to various rules and stipulations.
Providers must adhere to rules and monitor their AIs for ongoing risk. Many companies already actively manage their AIs using a combination of human-in-the-loop development cycles, monitoring and reporting.
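As a rough illustration of that pattern, here’s a minimal sketch of prediction logging with human-in-the-loop escalation. Everything here (the threshold, the queue, and the function names) is a hypothetical assumption for illustration, not something prescribed by the draft regulation:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-monitor")

# Hypothetical policy: predictions below this confidence are routed
# to a human reviewer instead of being acted on automatically.
REVIEW_THRESHOLD = 0.80

human_review_queue = []  # in production, a database or ticketing system


def handle_prediction(input_id: str, label: str, confidence: float) -> None:
    """Log every prediction (audit trail) and escalate low-confidence ones."""
    record = {
        "input_id": input_id,
        "label": label,
        "confidence": confidence,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    logger.info(json.dumps(record))  # record-keeping / monitoring
    if confidence < REVIEW_THRESHOLD:
        human_review_queue.append(record)  # human-in-the-loop escalation


handle_prediction("doc-001", "approve", 0.95)  # logged only
handle_prediction("doc-002", "reject", 0.55)   # logged and queued for review
```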
There are two categories of high-risk AIs: (1) AIs that are (part of) a product subject to certain EU safety regulations, and (2) AI systems designated as high-risk by the European Commission.
Category 1 involves AIs associated with products already covered by EU safety legislation, including the following:
- Machinery
- Toys
- Lifts
- Medical devices and in vitro diagnostic devices
- Radio equipment
- Motor vehicles, civil aviation, and marine equipment
Category 2 high-risk AIs operate in or around sensitive areas such as health, law, critical infrastructure, transport, employment, and other topics/areas that intersect human rights. Some examples include:
- Biometric identification and categorization of people
- Management and operation of critical infrastructure, e.g., energy and transport networks
- Education and vocational training, e.g., automated exam scoring
- Employment and worker management, e.g., CV-screening and hiring tools
- Access to essential services, e.g., credit scoring
- Law enforcement, migration, asylum, and border control
- Administration of justice and democratic processes
When an AI falls into a high-risk category, its providers must follow specific compliance rules and obligations. Through these, the EU intends to create a cycle of trust and accountability for higher-risk AIs with serious or large-scale impacts.
The following rules apply to the providers of such AI systems:
- Establish and maintain a risk management system across the AI’s lifecycle
- Train, validate, and test models on relevant, representative, high-quality datasets (data governance)
- Produce and maintain up-to-date technical documentation
- Enable record-keeping through automatic logging of the system’s operation
- Provide users with clear information and instructions for use (transparency)
- Design the system for effective human oversight
- Ensure appropriate levels of accuracy, robustness, and cybersecurity
- Undergo a conformity assessment before placing the system on the market
When an AI doesn’t fall into one of the above categories, it’s deemed ‘low risk’ and remains largely unregulated, though there are provisions to make AI systems more transparent to users.
For example, providers must be transparent when introducing people to: (i) AI systems that interact with humans, such as chatbots and conversational AIs, where users must be told they’re interacting with an AI unless it’s obvious from the circumstances; (ii) emotion recognition or biometric categorization systems; and (iii) deep fakes.
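For conversational AIs, that disclosure can be as simple as an up-front message. A minimal sketch follows; the wording and function name are illustrative assumptions, not language prescribed by the draft:

```python
def start_chat_session(user_name: str) -> str:
    """Open a chat session with an explicit AI disclosure up front."""
    disclosure = "Hi, I'm an automated assistant (an AI system), not a human agent."
    return f"{disclosure} How can I help you today, {user_name}?"


print(start_chat_session("Ama"))
```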
Failure to comply with regulations carries harsh penalties, which are allocated across three tiers.
Fines are similar to those of GDPR. They’re currently set as:
- Up to €30 million or 6% of global annual turnover (whichever is higher) for deploying prohibited AI or violating the data governance requirements
- Up to €20 million or 4% for non-compliance with the regulation’s other requirements
- Up to €10 million or 2% for supplying incorrect, incomplete, or misleading information to the authorities
Member states are expected to enforce the regulation and notify EU bodies when necessary.
GDPR shows that the EU doesn’t make empty threats, with some €1.7 billion in fines handed out to date, including a €746 million fine for Amazon. Most fines were issued when the regulation was first introduced, as businesses and organizations were ill-prepared or slow to bring their systems up to date.
While the EU’s AI draft regulation has limited enforceability outside of the EU, other regulations will likely follow similar patterns, especially regarding the risk-based approach to AI.
To counteract disruption, the EU is unlocking billions in AI research and development funding.
The UK is planning its own AI policy reforms, including investment in innovation, research, and development. So while regulation will likely slow development and create teething issues in the short term, it will also drive innovation while setting clear guidelines that set the scene for the future of AI and ML.
Most individuals, researchers and organizations agree that AI requires regulation. For the most part, attempts to create mutually agreed AI law have been received positively across the sector.
The AI industry is already going through a period of introspection. As a result, businesses and organizations are taking a closer look at how their AIs intersect with sensitive areas.
In addition, large-scale AIs that touch on sensitive subjects demand more attention, requiring businesses to closely examine how their products measure up against the draft regulation.
It’s worth mentioning that many small-scale, novel, or creative uses of AI will remain largely unaffected by regulation. For example, geospatial AIs, AIs intended for agritech, and AIs deployed for local uses, e.g., Aya Data’s maize classification project, should all fall safely under the low-risk category and will face minimal disruption.
Where providers suspect their AIs fall into high-risk categories, law firms are advising them to create risk assessments and conduct internal reviews and audits. Many of the obligations the EU is proposing are already performed internally, and most won’t cause major disruption.
By investing in AI and data governance, businesses can ready themselves for a new era of innovative, regulated AI.
The EU draft regulation emphasizes the importance of datasets, and for good reason: many of AI’s harmful impacts are inadvertently caused by flawed training data.
Datasets have repeatedly been shown to be biased and non-representative, marginalizing groups ranging from women and older people to Black people and disabled individuals.
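One common safeguard is a simple representation audit before training. The sketch below uses pandas with made-up column names and data purely for illustration; a real audit would use the project’s actual demographic attributes and far larger samples:

```python
import pandas as pd

# Hypothetical labeled dataset with a demographic attribute.
df = pd.DataFrame({
    "age_group": ["18-30", "18-30", "31-50", "31-50", "51+", "18-30"],
    "label":     ["approve", "reject", "approve", "approve", "reject", "approve"],
})

# Share of each demographic group in the dataset: a heavily skewed
# distribution is a first warning sign of non-representative data.
print(df["age_group"].value_counts(normalize=True))

# Positive-outcome rate per group: large gaps between groups can
# indicate bias that a model trained on this data will reproduce.
positive_rate = (
    df.assign(positive=df["label"].eq("approve"))
      .groupby("age_group")["positive"]
      .mean()
)
print(positive_rate)
```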
AI regulations emphasize data quality. That means collecting high-quality data for AI and ML projects, data that is selected, labeled, and annotated with the assistance of professional teams like Aya Data.
Aya Data is focused on diversity and inclusion and emphasizes the need for high-quality, diverse, and inclusive datasets.
Contact us today to find out how we can support your organization’s present and future AI and ML projects.