

EUROPEAN COMMISSION PROPOSES HARMONIZED RULES FOR ARTIFICIAL INTELLIGENCE


Tuhin Batra & Pranav Prakash

Published on: 22 July 2021

On 19 February 2020, the European Commission issued the “White Paper on Artificial Intelligence”, which set out Europe’s ambitions for regulating Artificial Intelligence (“AI”). In line with those ambitions, a little over a year later, on 21 April 2021, the European Commission unveiled a proposal laying down harmonized rules for artificial intelligence.

 

The Commission, in its proposal, has rightly identified the vast potential of AI to bring societal and economic benefits across the entire spectrum of industries and social activities. The proposal seeks to achieve the following objectives:

  1. Ensure that AI systems placed on the European Union market and used in the Union are safe and respect existing law on fundamental rights and EU values;

  2. Ensure legal certainty to facilitate investment and innovation in AI;

  3. Improve governance and enforcement of law on fundamental rights and safety requirements applicable to AI systems;

  4. Facilitate the development of a single market for lawful, safe and trustworthy AI applications, and prevent fragmentation of the market.

The proposal is divided into 12 titles, each dealing with a particular issue/subject. Here are some of the essential aspects of the Commission’s proposal:

Extra-Territorial Applicability of the Proposed Rules

The proposal seeks to cover the following entities/persons:

  1. Providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established within the EU or in a third country;

  2. Users of AI systems located within the EU; and

  3. Providers and users of AI systems located outside the EU, where the output produced by the system is used in the EU.

 

The Commission, however, chose to keep the following AI systems and entities outside the scope of the proposal:

  1. AI systems developed or used exclusively for military purposes;

  2. Public authorities in a third country, and international organizations, that use AI systems falling within the scope of the proposed rules, where such use takes place within the framework of international agreements for law enforcement and judicial cooperation with the EU or with one or more of its member states.

 

Contemporary and Future-proof Definition of Artificial Intelligence

The proposal has defined “AI System” as “software that is developed with one or more of the techniques and approaches listed (under the proposed rules) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.

Presently, software developed with one or more contemporary techniques and approaches, such as machine learning approaches, logic and knowledge-based approaches, statistical approaches, Bayesian approaches, and search and optimization approaches, is covered by the definition provided in the proposal. However, taking into consideration the dynamic and constantly evolving nature of AI, the proposal specifically empowers the European Commission to include more techniques and approaches, so as to future-proof the rules against software developed with new techniques and approaches.

 

Categorization of AI Systems

The proposed rules sort AI systems into four categories based on the risk such systems pose, and govern each category differently:

RISK CATEGORY: TREATMENT UNDER PROPOSAL

  Unacceptable risk: Prohibited

  High risk: Allowed, with certain restrictions

  Limited risk: Allowed, with transparency obligations

  Minimal risk: Allowed, without restrictions
GLOBAL ETHICAL AI PRINCIPLES

  1. Transparency

  2. Justice & Fairness

  3. Non-Maleficence

  4. Responsibility

  5. Privacy

  6. Freedom & Autonomy

  7. Solidarity



Prohibition of Certain Manipulative, Exploitative, Social Scoring and Remote Biometric Identification Practices by AI

 

The proposal prohibits certain AI practices that are deemed to pose an unacceptable level of risk and contravene EU values.

 

These practices include:

  1. Practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness, in order to materially distort their behaviour in a manner that causes harm to such persons or to others;

  2. Practices that exploit the vulnerabilities of specific vulnerable groups, in order to materially distort their behaviour in a manner that causes harm to such persons or to others;

  3. AI-based social scoring for general purposes done by public authorities; and

  4. Use of ‘real time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement unless it is for specific purposes allowed under the rules.

 

Classification of High-Risk AI Systems and their Additional Obligations

The proposal classifies certain AI systems as intrinsically “high-risk”. These systems are listed in Annexes II and III of the proposal, and include specific AI systems used in the following areas:

  1. Biometric identification and categorisation of natural persons;

  2. Management and operation of critical infrastructure;

  3. Education and vocational training;

  4. Employment, workers management and access to self-employment;

  5. Access to essential private services and public services and benefits;

  6. Law enforcement;

  7. Migration, asylum and border control management; and

  8. Administration of justice and democratic processes.

 

The AI systems that are classified as “high-risk” under the proposed rules will be subject to additional obligations with respect to data and data governance, technical documentation, record keeping, communication and co-operation with authorities, transparency, accuracy, human oversight, robustness, and security.

 

Additional Transparency Obligations for AI Systems with Risks of Manipulation

The following AI systems will be subject to heightened transparency obligations, owing to the risk that such systems may manipulate their users. This will ensure that users have full disclosure of the technology and can make informed decisions:

  1. Systems that interact with humans;

  2. Systems that are used to detect emotions or determine association with (social) categories based on biometric data; and

  3. Systems used to generate or manipulate content.

 

Regulatory Sandbox and Innovation Assistance to Small-Scale AI Providers

The proposal strives to create a legal framework that is innovation-friendly, future-proof and resilient to disruption. It enables national competent authorities to set up regulatory sandboxes for AI systems that support the development, testing and approval of innovative AI technologies in a controlled regulatory environment. It also provides for measures to reduce the compliance burden on small-scale AI providers such as start-ups and Small and Medium-sized Enterprises (SMEs).

 

Constitution of European Artificial Intelligence Board and National Authorities, and Penalties for Violation of the Proposed Rules

The proposal establishes comprehensive governance, monitoring and enforcement mechanisms. It constitutes a European Artificial Intelligence Board (“EAIB”) which will serve as an advisory board to the European Commission and national authorities of the member states on various matters relating to the proposed rules. It also provides for the mandatory constitution of a National Supervisory Authority by each member state, which will serve as the notifying and market surveillance authority within each state’s jurisdiction. The National Supervisory Authority of each member state will represent the state in the EAIB. The proposal also provides member states with the authority to establish more national competent authorities to ensure better compliance with the rules proposed thereunder.

The proposal imposes hefty penalties in the form of fines for certain violations of the proposed rules. The fines range from 2% to 6% of worldwide annual turnover, depending on the violation. The extent of the penalty will be determined by taking various factors into consideration, such as:

  1. The nature, gravity and duration of the infringement and its consequences;

  2. Whether a fine has already been imposed by another authority; and

  3. The size and market share of the infringing entity.

 

The proposal also empowers member states to establish penalty regimes of their own for infringements of the provisions of the proposed rules (excluding the violations already provided in the proposal).

Impact on Indian AI Ecosystem

Similar to the General Data Protection Regulation (GDPR), the EU takes an extra-territorial approach to regulating AI. Irrespective of where an entity is based, if its AI systems are placed on the market or put into service in the EU, or if the providers and users of its AI systems are outside the EU but the output produced by such systems is used in the EU, the entity will be regulated by the proposed rules. In this light, here are some of the ways the proposed rules will affect the Indian AI ecosystem:

  1. Indian AI developers with ambitions to enter the EU market will have to follow the developments closely and align themselves and their technologies with such developments.

  2. Indian AI developers can make use of the regulatory sandbox provisions in the proposed rules and test their technologies in the European market with reduced regulatory burden. This will help them draw useful insights into the performance of their AI systems and accordingly develop the system further and make it market ready.

  3. Indian AI developers will have to ensure compliance with GDPR due to usage of personal data in AI systems development.

  4. Indian AI developers seeking to be empanelled even for short-term projects, such as projects based on RfPs from an EU member state, will have to ensure compliance with the AI rules and the GDPR.

 

Criticisms of the Proposed Rules

The proposed rules are not final and are yet to be reviewed by the Council of the European Union and the European Parliament. Here are a few issues in the proposal that the two bodies may have to address during their review:

  1. The proposal imposes substantive regulation only on high-risk AI systems, leaving the remaining AI systems (which constitute the vast majority of all AI systems developed) largely outside the scope of the proposed rules.

  2. There is no definition or explanation provided for the expression “adverse impact on fundamental rights” under Article 7. The expression is an indispensable determinant in the European Commission’s decision to bring more high-risk AI systems within the scope of the proposed rules.

  3. The use of the expression “intended to be used” throughout Annex III (List of high-risk AI systems) is vague. This will allow developers of predominantly high-risk AI systems to state that the system’s intended purpose is not one that falls within the list given in Annex III and therefore escape regulation by the proposed rules.

  4. To demonstrate conformity with their obligations, providers of high-risk AI systems are expected to subject such systems to conformity assessments. These assessments can be conducted either by a notified body (notified by the member state) or by the provider itself (self-assessment). The proposal mandates assessment by a notified body only for AI systems intended to be used for ‘real-time’ and ‘post’ remote biometric identification of natural persons. Assessment of all other high-risk AI systems, including systems intended to be used in credit-worthiness evaluation, predictive law enforcement, migration control and recruitment, is subject only to self-assessment. Subjecting such sensitive AI systems to mere self-assessment could have detrimental effects on humans, as it takes only a minor manipulation or disruption in such systems for their developers to further their own interests at the cost of others’.

  5. The proposal excludes many potentially (if not already) harmful AI technologies from the high-risk category, such as systems interacting with natural persons, emotion recognition and biometric categorisation systems, and systems deployed in content manipulation. These are all sensitive and risky deployments of AI and can potentially threaten individual and societal peace.

 

Treatment of Global Guidelines for the Development and Use of AI under the Proposed Rules

Internationally, various sets of guidelines and principles for the use and development of AI have been proposed by various stakeholders and bodies. Here is a tabulation of such guidelines and a checklist of whether the proposed rules address them:

[Table: checklist mapping internationally recognized ethical AI principles against the proposed rules. The principles tabulated are: transparency (including disclosure, reporting, explainability, explicability, understandability and communication); justice & fairness (including consistency, inclusion, equality, equity, non-bias, non-discrimination, accessibility, reversibility, remedy and redress); non-maleficence (including security, protection, prevention and non-subversion); responsibility; privacy (including privacy by design); freedom & autonomy; solidarity (including social security); and human oversight. The provisions referenced against these principles include Articles 9, 12, 13, 14 and 15, Article 18 read with Annex IV, Titles II, III and VIII, Chapter 3 of Title III, and the GDPR. The table indicates that most principles are addressed by the proposal, while some are addressed only partially — for instance, certain protections apply only to data processing for specific purposes, or are granted only against certain activities.]

The Road Ahead for the Proposal

The proposal, considering its wide scope, risk-based characterization of AI systems, extra-territorial application, and comprehensive governance and oversight provisions, will have a global impact. The proposal will now be reviewed by the Council of the European Union and the European Parliament, where amendments may be made. It is advisable that the loopholes in the proposed rules be fixed in the rounds of discussion to follow. Efforts must also be taken to address and include provisions of internationally recognized guidelines for the use and development of AI that have been left unaddressed by the proposal. Once adopted, the proposed rules will take the shape of a final and binding regulation across all EU member states. AI system providers will have a period of two years (from the date of the regulation coming into force) to align themselves with the final regulation.
