The EU AI Act: Adoption Through a Risk Management Framework

Author: Adeline Chan, CISM
Date Published: 10 July 2023

Artificial intelligence (AI) failures have made headlines in recent years. These incidents include Tesla’s car crash due to an issue with the autopilot feature,1 Amazon’s AI recruiting tool showing bias against women2 and Microsoft’s AI chatbot, Tay, being manipulated by users to make sexist and racist remarks.3 These growing ethical concerns related to bias and malicious use led to the development of the EU Artificial Intelligence Act (AI Act) to establish governance and enforcement that protect human rights and safety in the use of AI. The AI Act is the first AI law established by a major regulator. This law seeks to ensure that AI is used safely and responsibly, with the interests of both people and enterprises in mind.

The AI Act is an important step in the development of an effective and responsible regulatory framework for AI in Europe. It is hoped that this law will create a level playing field for all enterprises while also protecting people’s rights and interests.4

Risk of Generative AI

Generative AI content poses significant risk, perhaps most notably, the spread of misinformation. Generative AI can be used to create fake news and other forms of misinformation that can be spread quickly and widely. This can have serious consequences, including damage to individuals’ and organizations’ reputations, political instability and the undermining of public trust in media.

AI tools such as ChatGPT write with a confidence and persuasiveness that can be interpreted as authority. Casual users may take the text at face value, which can spread incorrect data and ideas throughout the Internet. One example of ChatGPT’s inaccuracy involves Stack Overflow, a question-and-answer website for programmers. Coders have been filling Stack Overflow’s query boards with AI-generated posts. Due to a high volume of errors, Stack Overflow has taken action to prevent anyone from posting answers generated by ChatGPT.5

Another risk of generative AI content is malicious use. In the wrong hands, generative AI can be a powerful tool for causing harm. For example, generative AI can be used to create fake reviews, scams and other forms of online fraud. It can also automate spam messages and other unwanted communications. In addition, there have been proof-of-concept attacks where AI created mutating malware.6 ChatGPT may also be used to write malware—researchers found a thread named “ChatGPT—Benefits of Malware” on a hacking forum.7

Because AI can only generate content based on what it has learned from data, it may be limited in its ability to provide in-depth investigations of complex subjects or offer new insights and perspectives. This lack of substance and depth in generative AI content can have serious consequences. For example, it can lead to a superficial understanding of key topics and issues and make it difficult for people to make informed decisions.8

Because of the complexity of algorithms used in AI systems, AI presents a challenge to the privacy of individuals and organizations. This means that individuals may not even be aware that their data are being used to make decisions that affect them.9 For example, Clearview AI allows law enforcement officers to upload a photo of a face and find matches in a database of billions of images it has collected. The Australian Information Commissioner and Privacy Commissioner found that Clearview AI breached Australians’ privacy by scraping their biometric information from the web and disclosing it through a facial recognition tool.10

AI Act Risk Categories

The AI Act classifies AI applications into three categories according to the potential danger they pose: unacceptable-risk applications, high-risk applications and limited- or low-risk applications.

The first category bans applications and systems that create an unacceptable risk. For example, unacceptable uses include real-time biometric identification in public spaces, where AI scans faces and then automatically identifies people.

The second category covers high-risk applications, such as a resume-scanning tool that ranks job applicants based on automated algorithms. These applications are subject to strict regulations and additional safeguards to ensure that people are not discriminated against based on gender, ethnicity or other protected characteristics. Higher-risk AI systems are those that may have more serious implications, such as automated decision-making systems that can affect people's lives. In these cases, it is important to make users aware of the implications of using these systems and to give them the option to opt out if they are uncomfortable.

The third category is limited-risk AI systems, which are those that have specific transparency obligations of which users must be made aware. This allows users to make informed decisions about whether they wish to continue with the interaction. Examples of low-risk AI systems include AI-enabled video games or spam filters, which can be used freely without adverse effects.
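To make the tiering concrete, the following minimal sketch (in Python) models the three categories as a simple lookup. The tier names, example use-case strings and the classify_use_case helper are this article's illustrative assumptions, not terminology or an official taxonomy from the AI Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's three risk tiers, as described above."""
    UNACCEPTABLE = "prohibited"                   # e.g., real-time public biometric ID
    HIGH = "strict obligations apply"             # e.g., resume-scanning tools
    LIMITED_OR_LOW = "transparency duties only"   # e.g., spam filters, video games

# Illustrative mapping of the article's example use cases to tiers
# (assumed labels, not an official classification).
EXAMPLE_USE_CASES = {
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "resume-scanning tool ranking job applicants": RiskTier.HIGH,
    "ai-enabled video game": RiskTier.LIMITED_OR_LOW,
    "spam filter": RiskTier.LIMITED_OR_LOW,
}

def classify_use_case(description: str) -> RiskTier:
    """Return the risk tier recorded for a known example use case."""
    try:
        return EXAMPLE_USE_CASES[description.lower()]
    except KeyError:
        raise ValueError(f"No tier on record for: {description!r}")

print(classify_use_case("Spam filter"))  # RiskTier.LIMITED_OR_LOW
```

In practice, of course, tier assignment requires legal analysis of the system's purpose and context rather than a string lookup; the sketch only captures the three-tier structure.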

Will a Risk-Based Approach Work?

To address AI risk, the European Commission undertook an impact assessment focusing on the case for action, the objectives and the impact of different policy options for a European framework for AI, which would address the risk of AI and position Europe to play a leading role globally. The impact assessment is being used to shape the European legal framework for AI, which will be part of the proposed AI Act.

Several policy options considered in the impact assessment undertaken by the European Commission were:

  • Option 1: One definition of AI (applicable only voluntarily)—Under this option, an EU legislative instrument would establish an EU voluntary labeling scheme whereby providers of AI applications could certify that their AI systems comply with certain requirements for trustworthy AI and receive an EU-wide label.
  • Option 2: Each sector adopts a definition of AI and determines the riskiness of the AI systems covered—By drafting ad hoc legislation or by reviewing existing legislation on a case-by-case basis, this option would address specific risk related to certain AI applications. There would be no coordinated approach to regulating AI across sectors, nor would there be horizontal requirements or obligations.
  • Option 3a: One horizontally applicable AI definition and methodology of determination of high-risk (risk-based approach)—This option would envisage a horizontal EU legislative instrument applicable to all AI systems placed on the market or used in the EU. This would follow a proportionate risk-based approach. A single definition of AI would be established by the horizontal instrument.
  • Option 3b: One horizontally applicable AI definition and methodology of determination of high-risk (risk-based approach), plus industry-led codes of conduct for non-high-risk AI—This option would combine the mandatory requirements and obligations for high-risk AI applications under option 3a with voluntary codes of conduct for non-high-risk AI.
  • Option 4: One horizontal AI definition but no gradation—Under this option, the same requirements and obligations as those for option 3 would be imposed on providers and users of AI systems, but this would be applicable for all AI systems regardless of the risk they pose (high or low).

The following criteria were used to assess how the options would potentially perform:

  • Effectiveness in achieving the specific objectives of the AI Act
  • Assurance that AI systems placed on the market and used are safe and respect human rights and EU values
  • Legal certainty to facilitate investment and innovation
  • Enhancement of governance and effective enforcement of fundamental rights and safety requirements applicable to AI
  • Development of a single market for lawful, safe and trustworthy AI applications that helps prevent market fragmentation
  • Efficiency in the cost-benefit ratio of each policy option in achieving the specific objectives
  • Alignment with other policy objectives and initiatives
  • Proportionality (i.e., whether the options go beyond what is a necessary intervention at the EU level in achieving the objectives)

Based on these criteria, option 3b yielded the highest scores.11 Using a risk-based approach means that most of the effort is focused on assessing and mitigating high-risk AI applications rather than low-risk ones. A risk management framework is a useful road map that can provide the structure and guidance needed to balance the risk of AI applications without hampering AI innovation and efficiency. It also ensures that the AI Act can be implemented and governed and that the interests and privacy of people are protected.
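As an illustration of how such a multi-criteria comparison can be tallied, the short sketch below computes a weighted score per option. The criterion weights and per-option ratings are invented placeholders; they do not reproduce the figures in the European Commission's impact assessment.

```python
# Hypothetical weights for a subset of the assessment criteria (assumed values).
CRITERIA_WEIGHTS = {
    "effectiveness": 0.3,
    "efficiency": 0.2,
    "coherence": 0.2,
    "proportionality": 0.3,
}

# Hypothetical 1-5 ratings per option and criterion (illustrative only).
OPTION_SCORES = {
    "option 1": {"effectiveness": 2, "efficiency": 3, "coherence": 3, "proportionality": 4},
    "option 2": {"effectiveness": 2, "efficiency": 2, "coherence": 2, "proportionality": 3},
    "option 3a": {"effectiveness": 4, "efficiency": 4, "coherence": 4, "proportionality": 4},
    "option 3b": {"effectiveness": 5, "efficiency": 4, "coherence": 4, "proportionality": 5},
    "option 4": {"effectiveness": 5, "efficiency": 2, "coherence": 3, "proportionality": 2},
}

def weighted_total(scores):
    """Weighted sum of one option's criterion ratings."""
    return sum(CRITERIA_WEIGHTS[c] * rating for c, rating in scores.items())

for option, scores in OPTION_SCORES.items():
    print(f"{option}: {weighted_total(scores):.2f}")

best = max(OPTION_SCORES, key=lambda o: weighted_total(OPTION_SCORES[o]))
print("highest-scoring:", best)  # "option 3b" under these assumed numbers
```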

Governance Through a Risk Management Framework

To address how the AI Act can be successfully applied, it is necessary to have a risk management framework to support the regulation.

A standard risk management framework encompasses key elements, including risk identification, mitigation and monitoring, which set the foundation for governance. The US National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF) is recommended as a complement to the AI Act and is a viable approach for implementing option 3b because it proposes dialogue, understanding and activities to manage AI risk responsibly.12

Many leading technology organizations, such as Amazon, Google and IBM, have applauded the NIST AI RMF as a step toward the responsible development and deployment of AI products, stating that it is:

…an important path forward for the responsible development and deployment of AI products and services. The AI RMF, like the BSA Framework, creates a lifecycle approach for addressing AI risks, identifies characteristics of Trustworthy AI, recognizes the importance of context-based solutions, and acknowledges the importance of impact assessments to identify, document, and mitigate risks. This approach is well-aligned with BSA’s Framework to Build Trust in AI, which emphasizes the need to focus on high-risk uses of AI, highlights the value of impact assessments, and distinguishes between the obligations of those companies that develop AI, and those entities that deploy AI.13

As shown in figure 1, the AI RMF Core is composed of four functions: govern, map, measure and manage.

The govern function provides organizations with the opportunity to clarify and define the roles and responsibilities of the people who oversee AI system performance. It also creates mechanisms for organizations to make their decision-making processes more explicit to counter systemic biases.

The map function provides an opportunity to define and document processes so that operators and practitioners can become proficient in concepts of AI system performance and trustworthiness. It also suggests opportunities to define relevant technical standards and certifications.

The govern and map functions describe the importance of interdisciplinary and demographically diverse teams and of leveraging feedback from potentially impacted individuals and communities. AI actors who apply their expertise and activities in the RMF can assist technical teams by anchoring design and development practices in user intentions and in the values of the broader AI community and society. These AI actors serve as gatekeepers or control points who help incorporate context-specific norms and values and evaluate end-user experiences and AI systems.

The measure function analyzes, assesses, benchmarks and monitors AI risk and related impacts using quantitative, qualitative or mixed-method tools, techniques and methodologies. It uses knowledge relevant to AI risk identified in the map function and informs the manage function. AI systems should be tested before deployment and regularly thereafter. AI risk measurements include documenting systems' functionality and trustworthiness.

Measurement results are used in the manage function to assist risk monitoring and response efforts. Framework users must continue applying the measure function to AI systems as knowledge, methodologies, risk and impacts evolve.14
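As a rough sketch of how the four functions can feed one another in practice, the following toy example wires them around a shared risk register: govern names an accountable owner, map identifies and documents risk, measure assesses it and manage records the response. The class and method names are invented for illustration and are not identifiers from the NIST framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    """One entry in a simple AI risk register (illustrative structure)."""
    description: str
    severity: int = 0                    # filled in by the measure step
    mitigations: list = field(default_factory=list)

class RiskRegister:
    """Toy walk-through of the AI RMF Core loop."""

    def __init__(self, owner):
        self.owner = owner               # govern: name an accountable role
        self.risks = []

    def map_risk(self, description):
        risk = AIRisk(description)       # map: identify and document the risk
        self.risks.append(risk)
        return risk

    def measure(self, risk, severity):
        risk.severity = severity         # measure: assess and benchmark

    def manage(self, risk, mitigation):
        risk.mitigations.append(mitigation)  # manage: respond and monitor

register = RiskRegister(owner="model risk committee")
risk = register.map_risk("resume screener may encode gender bias")
register.measure(risk, severity=4)
register.manage(risk, "bias testing before each release")
print(risk)
```

The point of the loop structure is that measure outputs feed manage, and both are revisited as the system and its risk profile evolve, mirroring the continuous application of the framework described above.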

Both the European Union and the United States are committed to adopting a risk-based approach to AI to advance trustworthy and responsible AI technologies. Experts from both governing bodies are working on “cooperation on AI standards and tools for trustworthy AI and risk management.” They are expected to draft a voluntary code of conduct for AI that can be adopted by like-minded countries.15

Conclusion

By understanding the current limitations of human-AI interactions, organizations can improve their AI risk management. It is important to recognize that the many data-driven approaches AI systems use to attempt to convert or represent individual and societal observational and decision-making practices require continuous understanding and management.

The AI Act proposes a risk-based approach to managing AI risk. It requires organizations that provide AI tools or adopt AI in their processes to conduct impact assessments to determine the risk of their initiatives and apply appropriate risk management methods. High-risk AI initiatives should be mitigated with effective risk controls, which can be discussed and reviewed with similar industry groups that have common products or risk areas. This yields a positive outcome: the development of voluntary, industry-led codes of conduct that can support AI risk governance. This approach can also help spread the cost of regulation and oversight responsibility. The synergies achieved will benefit and protect users of AI.

With this strategic adoption of AI, efficiencies can be achieved that are not possible with human effort only.

Endnotes

1 McFarland, M.; “Tesla-Induced Pileup Involved Driver-Assist Tech, Government Data Reveals,” CNN, 17 January 2023
2 Dastin, J.; “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, 10 October 2018
3 Tennery, A.; G. Cherelus; “Microsoft's AI Twitter Bot Goes Dark After Racist, Sexist Tweets,” Reuters, 24 March 2016
4 The AI Act, “The Artificial Intelligence Act”
5 Vigliarolo, B.; “Stack Overflow Bans ChatGPT as 'Substantially Harmful' for Coding Issues,” The Register, 5 December 2022
6 Sharma, S.; “ChatGPT Creates Mutating Malware That Evades Detection by EDR,” CSO, 6 June 2023
7 Rees, K.; “ChatGPT Used By Cybercriminals to Write Malware,” Make Use Of, 9 January 2023
8 O’Neill, S.; “What Are the Dangers of Poor Quality Generative AI Content?” LXA, 12 December 2022
9 Van Rijmenam, M.; “Privacy In the Age of AI: Risks, Challenges and Solutions,” The Digital Speaker, 17 February 2023
10 Office of the Australian Information Commissioner, “Clearview AI Breached Australians’ Privacy,” 3 November 2021
11 European Commission, “Impact Assessment of the Regulation on Artificial Intelligence,” 21 April 2021
12 National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), USA, January 2023
13 National Institute of Standards and Technology (NIST), “Perspectives About the NIST Artificial Intelligence Risk Management Framework,” USA, 6 February 2023
14 Op cit NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0)
15 Staff, “EU, US to Draft Voluntary AI Code of Conduct,” The Straits Times, 1 June 2023

Adeline Chan

Leads risk management teams in assessing and mitigating risk and enhancing bank risk culture. She has implemented various risk frameworks for the cloud, SC Ventures, operations and technology, and cybersecurity. Her focus is on creating business value and aligning risk management with business objectives. Previously, she led teams in business transformation and banking mergers. While managing project and change risk, she coached subject matter experts on organization redesign and achieving cost efficiencies. Her experience spans global and corporate banking, wealth management, insurance and energy. She is a member of the Singapore FinTech Association and the Blockchain Association Singapore, playing an active role in the digital asset exchange and token subcommittees. Her social responsibility involvement includes volunteering for ISACA® SheLeadsTech (SLT) as a mentor to women in the technology sector and candidates looking to change careers to the GRC sector. She shares her professional insights through writing (http://medium.com/@adelineml.chan) and has contributed articles to ISACA Industry News and the ISACA® Journal.