Regulation for the AI Industry

In 2021, the European Commission published its proposal for an Artificial Intelligence Regulation, or, in its words, “a proposal for a Regulation laying down harmonized rules on artificial intelligence.”

It is expected to be the first coherent, comprehensive piece of AI legislation.

Not only will the regulation change the way future AI projects are developed in the EU and across much of the world, but it’ll also change the way current AI projects are deployed and distributed.

As of the latter half of 2022, the regulation remains a draft, progressing through a detailed legislative process, and will likely be amended several times.

As a result, the law won’t be binding for some 12 to 24 months, and it’ll likely be another 24 months after that before its requirements come into force.

However, organizations are already taking notice of how the regulation will affect them, especially because governments in the UK, China, and the USA are also developing their own coherent, overarching AI laws and regulations.

Why Does AI Need Regulation?

The debate surrounding AI regulation is ongoing and is sure to evolve over the years ahead.

There is a growing consensus that current AI regulation is incomplete, or that it forms a patchwork that doesn’t adequately cover the size and scope of the modern industry. AI is powerful and has the potential to do enormous good, but several key issues have become evident and there are many examples of AI’s unintended effects and consequences.

Traditionally, technology regulation has lagged behind what’s required. For example, it wasn’t until 2016 that the EU adopted GDPR to properly protect people’s digital data.

Now, the EU, UK, and US governments are trying to be pragmatic by introducing a risk-based approach to AI regulation, which sorts different forms of AI into different risk categories.

The EU’s AI regulation goes further still, completely prohibiting certain forms of AI under the new laws.

It’s worth noting that EU laws apply to AIs operating in European countries for European audiences or customers. AI companies and businesses that use AI are keeping a close eye on this legislation, as it’ll likely affect businesses and organizations worldwide.

The EU Draft AI Regulation

The EU has taken the first practical step toward establishing mutually agreed laws and regulations for AI. While the regulation restricts some activity, the EU also intends to invest in AI research and development to empower a new era of regulated AI.

[Image: EU AI legislation is among the first of its kind]

While EU law is most relevant to member states, it will likely set the tone for future international AI legislation. The US, China and other jurisdictions are creating risk-based AI legislation similar to what the EU proposes.

The EU draft regulation applies to the following:

  • Providers: Providers supply AI systems, either commercially or free of charge, to the EU. If they make an AI available to people in the EU, the regulation applies regardless of where the provider is situated, so Facebook, Instagram, Uber, and most other multinational Big Tech products will fall under the regulation. In addition, there are rules determining who counts as the provider when another organization or entity modifies the AI, at which point the original provider may no longer be treated as the provider.
  • Users: Under the regulation, “users” are not the end consumers of AI systems. Instead, they are the people or entities under whose authority an AI system is operated. So, if company A uses a chatbot developed by company B, company A is the user and company B is the provider.
  • The regulation also applies to providers and users located outside the EU where the output produced by the AI system is used within the EU.

The draft then establishes three tiers of risk: (i) unacceptable risk, (ii) high risk, and (iii) low risk.
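To make these tiers concrete, here is a minimal Python sketch of how an organization might catalogue its AI systems against them. The system names and tier assignments are illustrative assumptions, not legal determinations.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright under the draft
    HIGH = "high"                  # permitted, subject to strict obligations
    LOW = "low"                    # largely unregulated, transparency duties only

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

# Hypothetical inventory; classifications are illustrative, not legal advice.
inventory = [
    AISystem("support-chatbot", "customer service conversations", RiskTier.LOW),
    AISystem("cv-screener", "ranking job applicants", RiskTier.HIGH),
]

prohibited = [s for s in inventory if s.tier is RiskTier.UNACCEPTABLE]
needs_compliance = [s for s in inventory if s.tier is RiskTier.HIGH]
print(f"{len(needs_compliance)} system(s) require high-risk compliance work")
```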

Unacceptable Risk AIs

Some AI uses and behaviors will be prohibited outright by the legislation. These AIs pose an immediate or unacceptably high risk of causing physical or mental harm, or of violating human rights.

[Image: Some AIs will be banned completely under the regulation]

The draft regulation lists several AI practices deemed “unacceptable risk” and prohibits them outright. Examples include:

  • Distorting human behavior: If an AI system distorts or manipulates a person in a manner likely to cause physical or mental harm, it’ll be banned under the regulation. This includes deploying subliminal techniques to exploit vulnerabilities.
  • Social scoring: Social scoring involves AIs that score or rank individuals in ways that lead to detrimental treatment of individuals or groups.
  • Real-time biometric identification: The regulation proposes to ban real-time remote biometric identification and facial recognition in publicly accessible spaces for law enforcement purposes, with narrow exemptions for serious crimes.

High-Risk AIs

The next tier is high-risk AIs, which are permitted subject to various rules and stipulations.

Providers must adhere to rules and monitor their AIs for ongoing risk. Many companies already actively manage their AIs using a combination of human-in-the-loop development cycles, monitoring and reporting.

[Image: High-risk AIs are subject to rigorous compliance requirements]

There are two categories of high-risk AIs: (1) AIs that form part (or all) of a product subject to certain EU safety regulations, and (2) AI systems designated by the European Commission as high-risk.

Category 1 involves AIs embedded in products already covered by EU safety legislation, such as machinery, toys, and medical devices.

Category 2 high-risk AIs operate in or around sensitive areas such as health, law, critical infrastructure, transport, and employment, and other areas that intersect with human rights. Some examples include:

  • Biometric verification
  • Critical infrastructure
  • Employment, work, recruitment, and labor
  • Access to private and public services
  • Migration and asylum
  • Educational and vocational training

When an AI falls under a high-risk category, providers have specific compliance obligations to follow.

Key Obligations for High-Risk AIs

High-risk AIs are permitted under certain rules and obligations. The EU intends to create a cycle of trust and accountability for higher-risk AIs with serious or large-scale impacts.

The following rules apply to the providers of such AI systems:

  • Risk management: Risk management and evaluation form the backbone of the obligations. Risks to individuals must be eliminated or controlled throughout the AI’s lifecycle.
  • High-quality data: The data used to train AIs must be “high-quality,” meaning fair, diverse, accurate, transparent, and inclusive datasets that are free from errors. This is critically important, as many AIs have already been exposed as biased. Choosing high-quality data labeling and annotation providers is a must for creating accurate, representative datasets. Data and data governance are a particular focus of the draft regulation, which describes heavy penalties for providers that fail to audit their data for accuracy, diversity, and integrity (see the dataset audit sketch after this list).
  • Technical documentation: High-risk AIs must be technically documented so they don’t become “black boxes” when problems arise.
  • User information: The users of AI systems, as defined above, must understand the high-risk AI system, with training provided by the provider if necessary. Providers should mitigate risk throughout the AI supply chain.
  • Logging and reporting: High-risk systems must maintain a high-quality audit trail with robust automatic logging (see the oversight sketch after this list).
  • Human operation and oversight: AI systems should be designed so that competent persons can oversee and operate them where necessary. In other words, the system should have failsafes that enable humans to override it.
  • Cybersecurity: AI systems must be secured to prevent infiltration and abuse by third parties.
  • Conformity: Providers must carry out a conformity assessment demonstrating how the AI conforms to the regulation, and the system must be registered in a publicly accessible EU database.
  • Monitoring and alerts: Providers must audit the AI system and monitor for changes, errors, or unexpected actions. The EU requires serious incidents to be reported as soon as possible.
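
As a rough illustration of the data governance obligations, the sketch below checks a toy dataset for duplicates, missing values, and label balance across a sensitive attribute. The column names and the tiny example dataset are assumptions for illustration only.

```python
import pandas as pd

def audit_dataset(df: pd.DataFrame, label_col: str, sensitive_col: str) -> dict:
    """Surface basic data-governance signals: duplicates, missing values,
    and label balance across a sensitive attribute."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values": int(df.isna().sum().sum()),
        # Label distribution per sensitive group, as proportions.
        "label_balance": df.groupby(sensitive_col)[label_col]
                           .value_counts(normalize=True)
                           .to_dict(),
    }

# Toy hiring dataset; a skewed label_balance here would flag potential bias.
df = pd.DataFrame({
    "label": ["hire", "reject", "hire", "reject", "reject"],
    "gender": ["f", "f", "m", "m", "f"],
})
print(audit_dataset(df, label_col="label", sensitive_col="gender"))
```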
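The logging and human oversight obligations can also be sketched together: every automated decision is written to an audit log, and low-confidence decisions are deferred to a human reviewer. The model interface and the confidence threshold here are illustrative assumptions.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit-trail")

CONFIDENCE_FLOOR = 0.7  # below this, defer to a human reviewer (assumed threshold)

def predict_with_oversight(model, features):
    """Log every prediction and escalate low-confidence cases to a human."""
    label, confidence = model(features)  # model: any callable returning (label, score)
    log.info("ts=%s features=%s label=%s confidence=%.2f",
             datetime.now(timezone.utc).isoformat(), features, label, confidence)
    if confidence < CONFIDENCE_FLOOR:
        log.warning("low confidence, escalating to human review")
        return None  # signals that a person must make the final call
    return label

# Toy stand-in model for demonstration.
decision = predict_with_oversight(lambda f: ("approve", 0.55), {"age": 41})
print("automated decision:", decision)  # None -> human review required
```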

When an AI doesn’t fall into one of the above categories, it’s deemed “low risk” and remains relatively unregulated.

Low-Risk AIs

Low-risk AIs remain largely unregulated, but there are provisions to make AI systems more transparent to users.

[Image: Low-risk AIs will remain largely unregulated]

For example, providers must be transparent when introducing people to (i) AI systems that interact with humans, like chatbots and conversational AIs, where the user must be alerted that they’re interacting with an AI unless this is obvious from the circumstances, (ii) emotion recognition or biometric systems, and (iii) deep fakes.
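
For the chatbot case, the transparency duty can be as simple as a disclosure surfaced before the conversation starts. A minimal sketch, with illustrative wording:

```python
AI_DISCLOSURE = (
    "You are chatting with an automated assistant, not a human. "
    "Ask to be transferred if you would like to speak to a person."
)

def start_chat_session(send):
    # Surface the disclosure before any conversational exchange begins.
    send(AI_DISCLOSURE)

start_chat_session(print)
```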

Failure to comply with regulations carries harsh penalties, which are allocated across three tiers.

Fines and Penalties

Fines are similar to those under GDPR. They’re currently set at:

  • Up to EUR 30 million or 6% of total worldwide annual turnover, whichever is higher, for breaching the rules on unacceptable-risk AI systems or infringing the dataset and data governance rules for high-risk AI systems.
  • Up to EUR 20 million or 4% of total worldwide annual turnover, whichever is higher, for non-compliance with the regulation’s other requirements.
  • Up to EUR 10 million or 2% of total worldwide annual turnover, whichever is higher, for supplying false, incomplete, or incorrect information to regulatory bodies and national authorities.
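
Because the fine for a company is the higher of the flat cap and the turnover percentage, exposure scales with company size. A quick worked example, assuming the “whichever is higher” reading of the draft:

```python
def max_fine(turnover_eur: float, flat_cap_eur: float, pct: float) -> float:
    # Fine is the higher of a flat cap or a share of worldwide annual turnover.
    return max(flat_cap_eur, pct * turnover_eur)

# A company with EUR 1 billion turnover breaching unacceptable-risk rules:
print(f"EUR {max_fine(1_000_000_000, 30_000_000, 0.06):,.0f}")  # EUR 60,000,000
```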

Member states are expected to enforce the regulation and notify EU bodies when necessary.

GDPR shows that the EU doesn’t make empty threats, with some EUR 1.7 billion in fines handed out to date, including a EUR 746 million fine for Amazon. Most fines were issued when the regulation was first introduced, as businesses and organizations were ill-prepared or slow to bring their systems up to date.

While the EU’s AI draft regulation has limited enforceability outside of the EU, other regulations will likely follow similar patterns, especially regarding the risk-based approach to AI.

To counteract disruption, the EU is unlocking billions in AI research and development funding.

The UK is planning its own AI policy reforms, including investment in innovation, research, and development. So while regulation will likely slow development and create teething issues in the short term, it should also drive innovation by establishing clear guidelines that set the scene for the future of AI and ML.

Most individuals, researchers and organizations agree that AI requires regulation. For the most part, attempts to create mutually agreed AI law have been received positively across the sector.

What Should Businesses Do About AI Regulation?

The AI industry is already going through a period of introspection. As a result, businesses and organizations are taking a closer look at how their AIs intersect with sensitive areas.

In addition, large-scale AIs that interact with sensitive subjects demand more attention, requiring businesses to closely examine how their products intersect with the draft regulation.

It’s worth mentioning that many small-scale, novel, or creative uses of AI will remain completely unaffected by regulation. For example, geospatial AIs, AIs intended for agritech, and AIs deployed for local uses, e.g., Aya Data’s maize classification project, should all fall safely under the low-risk category and will face minimal disruption.

Where providers suspect their AIs of falling into high-risk categories, law firms are advising them to create risk assessments and conduct internal reviews and audits. Many of the obligations the EU is proposing are already handled internally, and most won’t cause major disruption.

By investing in AI and data governance, businesses can ready themselves for a new era of innovative regulated AI.

Datasets To Play a Leading Role in Regulation

The EU draft regulation emphasizes the importance of datasets. Many of the harmful impacts of AI are inadvertently caused by flawed datasets.

Datasets have repeatedly been shown to be biased and non-representative, marginalizing groups ranging from women and older people to black people and disabled individuals.

AI regulations emphasize data quality. That means collecting high-quality data for AI and ML projects and having it selected, labeled, and annotated with the assistance of professional teams like Aya Data.

Aya Data is focused on diversity and inclusion and emphasizes the need for high-quality, diverse, and inclusive datasets.  

Contact us today to find out how we can support your organization’s present and future AI and ML projects.