Artificial intelligence (AI) failures have made headlines in recent years. These incidents include a Tesla crash linked to an issue with the Autopilot driver-assist feature,1 Amazon’s AI recruiting tool showing bias against women2 and Microsoft’s AI chatbot, Tay, being manipulated by users into making sexist and racist remarks.3 These mounting ethical issues related to bias and malicious use have led to the development of the EU Artificial Intelligence Act (AI Act), which establishes governance and enforcement to protect human rights and safety with regard to the use of AI. The AI Act is the first AI law established by a major regulator. It seeks to ensure that AI is used safely and responsibly, with the interests of both people and enterprises in mind.
The AI Act is an important step in the development of an effective and responsible regulatory framework for AI in Europe. It is hoped that this law will create a level playing field for all enterprises while also protecting the rights and interests of people.4
Risk of Generative AI
Generative AI content poses significant risk; perhaps the most notable is the spread of misinformation. Generative AI can be used to create fake news and other forms of misinformation that spread quickly and widely. This can have serious consequences, including damage to the reputations of individuals and organizations, political instability and the erosion of public trust in the media.
AI tools such as ChatGPT write with a confidence and persuasiveness that can be mistaken for authority. Casual users may take the text at face value, propagating incorrect data and ideas across the Internet. One example of ChatGPT inaccuracy involves Stack Overflow, a question-and-answer website for programmers. Coders began filling Stack Overflow's boards with AI-generated answers and, because of the high volume of errors, Stack Overflow took action to prevent anyone from posting answers generated by ChatGPT.5
Another risk of generative AI content is malicious use. In the wrong hands, generative AI can be a powerful tool for causing harm. For example, it can be used to create fake reviews, scams and other forms of online fraud, and to automate spam messages and other unwanted communications. There have also been proof-of-concept attacks in which AI created mutating malware.6 ChatGPT may likewise be used to write malware; researchers found a thread titled "ChatGPT—Benefits of Malware" on a hacking forum.7
Because AI can only generate content based on what it has learned from data, it may be limited in its ability to provide in-depth investigations of complex subjects or offer new insights and perspectives. This lack of substance and depth in generative AI content can have serious consequences. For example, it can lead to a superficial understanding of key topics and issues and make it difficult for people to make informed decisions.8
Because of the complexity of the algorithms used in AI systems, AI presents a challenge to the privacy of individuals and organizations: Individuals may not even be aware that their data are being used to make decisions that affect them.9 For example, Clearview AI allows a law enforcement officer to upload a photo of a face and find matches in a database of billions of images it has collected. The Australian Information Commissioner and Privacy Commissioner found that Clearview AI breached Australians’ privacy by scraping their biometric information from the web and disclosing it through a facial recognition tool.10
AI Act Risk Categories
The AI Act assigns applications of AI to 3 risk categories based on the potential danger they pose: unacceptable-risk applications, high-risk applications and limited- or low-risk applications.
The first category bans applications and systems that create an unacceptable risk. For example, unacceptable uses include real-time biometric identification in public spaces, where AI scans faces and then automatically identifies people.
The second category covers high-risk applications, such as a resume-scanning tool that ranks job applicants using automated algorithms. These applications are subject to strict regulations and additional protective measures to ensure that people are not discriminated against based on their gender, ethnicity or other protected characteristics. High-risk AI systems are those with more serious implications, such as automated decision-making systems that can affect people's lives. In these cases, it is important that users are made aware of the implications of using such systems and are given the option to opt out if they feel uncomfortable.
The third category is limited-risk AI systems, which carry specific transparency obligations: Users must be made aware that they are interacting with AI so they can make informed decisions about whether to continue the interaction. Examples of low-risk AI systems include AI-enabled video games and spam filters, which can be used freely without adverse effects.
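To make the tiered structure concrete, the following minimal Python sketch maps a few example use cases to the three categories described above. It is illustrative only; the RiskTier names, the example use cases and their assignments are assumptions made for this article, not definitions taken from the regulation's annexes.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative labels for the AI Act's three risk categories."""
    UNACCEPTABLE = "prohibited"            # e.g., real-time biometric ID in public spaces
    HIGH = "strict obligations"            # e.g., automated decisions affecting people's lives
    LIMITED_OR_LOW = "transparency only"   # e.g., chatbots, video games, spam filters


# Hypothetical mapping of example use cases to tiers; a real assessment
# must consult the Act's annexes and the system's intended purpose.
EXAMPLE_USE_CASES = {
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "resume-scanning tool ranking job applicants": RiskTier.HIGH,
    "AI-enabled video game": RiskTier.LIMITED_OR_LOW,
    "spam filter": RiskTier.LIMITED_OR_LOW,
}


def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]


if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.name} ({tier.value})")
```

In practice, categorization is a legal determination driven by the Act's annexes and the context of use, not a simple lookup; the point of the sketch is only to show how the obligations scale with the tier.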
Will a Risk-Based Approach Work?
To address the risk posed by AI, the European Commission undertook an impact assessment focusing on the case for action, the objectives and the impact of different policy options for a European framework for AI that would address the risk of AI and position Europe to play a leading role globally. The impact assessment is being used to shape the European legal framework for AI, which will be part of the proposed AI Act.
The policy options considered in the European Commission's impact assessment were:
- Option 1: One definition of AI (applicable only voluntarily)—Under this option, an EU legislative instrument would establish an EU voluntary labeling scheme to enable providers of AI applications to certify their AI systems’ compliance with certain requirements for trustworthy AI and obtain an EU-wide label.
- Option 2: Each sector adopts a definition of AI and determines the riskiness of the AI systems covered—By drafting ad hoc legislation or by reviewing existing legislation on a case-by-case basis, this option would address specific risk related to certain AI applications. There would be no coordinated approach to regulating AI across sectors, nor would there be horizontal requirements or obligations.
- Option 3a: One horizontally applicable AI definition and methodology of determination of high-risk (risk-based approach)—This option would envisage a horizontal EU legislative instrument applicable to all AI systems placed on the market or used in the EU. This would follow a proportionate risk-based approach. A single definition of AI would be established by the horizontal instrument.
- Option 3b: One horizontally applicable AI definition and methodology of determination of high-risk (risk-based approach) and industry-led codes of conduct for nonhigh-risk AI—This option would combine mandatory requirements and obligations for high-risk AI applications as under option 3a with voluntary codes of conduct for nonhigh-risk AI.
- Option 4: One horizontal AI definition but no gradation—Under this option, the same requirements and obligations as those for option 3 would be imposed on providers and users of AI systems, but this would be applicable for all AI systems regardless of the risk they pose (high or low).
The following criteria were used to assess how the options would potentially perform:
- Effectiveness in achieving the specific objectives of the AI Act
- Assurance that AI systems placed on the market and used are safe and respect human rights and EU values
- Legal certainty to facilitate investment and innovation
- Enhancement of governance and effective enforcement of fundamental rights and safety requirements applicable to AI
- Development of a single market for lawful, safe and trustworthy AI applications that helps prevent market fragmentation
- Efficiency in the cost-benefit ratio of each policy option in achieving the specific objectives
- Alignment with other policy objectives and initiatives
- Proportionality (i.e., whether the options go beyond what is a necessary intervention at the EU level in achieving the objectives)
Based on these criteria, option 3b yielded the highest scores.11 A risk-based approach means that most effort is focused on assessing and mitigating high-risk AI applications rather than low-risk ones. A risk management framework is a useful road map that provides the structure and guidance needed to balance the risk of AI applications without dampening the innovation and efficiencies AI brings. It also helps ensure that the AI Act can be implemented and governed and that the interests and privacy of people are protected.
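As a rough illustration of how such a multi-criteria comparison can be tabulated, the sketch below scores the policy options against a simplified subset of the criteria. The weights and scores are entirely hypothetical and are not the European Commission's figures; they are chosen only so that option 3b ranks highest, consistent with the assessment's published conclusion.

```python
# Hypothetical weights for a simplified subset of the assessment criteria.
CRITERIA_WEIGHTS = {
    "effectiveness": 0.3,
    "efficiency": 0.3,
    "coherence": 0.2,
    "proportionality": 0.2,
}

# Hypothetical scores per option on a 1-5 scale (illustration only).
OPTION_SCORES = {
    "Option 1 (voluntary labeling)": {
        "effectiveness": 2, "efficiency": 4, "coherence": 3, "proportionality": 4},
    "Option 2 (sector-by-sector)": {
        "effectiveness": 3, "efficiency": 2, "coherence": 2, "proportionality": 3},
    "Option 3a (horizontal, risk-based)": {
        "effectiveness": 4, "efficiency": 4, "coherence": 4, "proportionality": 4},
    "Option 3b (3a + codes of conduct)": {
        "effectiveness": 5, "efficiency": 4, "coherence": 4, "proportionality": 4},
    "Option 4 (same rules for all AI)": {
        "effectiveness": 4, "efficiency": 2, "coherence": 3, "proportionality": 2},
}


def weighted_score(scores: dict) -> float:
    """Weighted sum of an option's scores across the criteria."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())


for option in sorted(OPTION_SCORES, key=lambda o: -weighted_score(OPTION_SCORES[o])):
    print(f"{weighted_score(OPTION_SCORES[option]):.1f}  {option}")
```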
Governance Through a Risk Management Framework
To address how the AI Act can be successfully applied, it is necessary to have a risk management framework to support the regulation.
A standard risk management framework encompasses key elements, including risk identification, mitigation and monitoring, which set the foundation for governance. The US National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) is suggested as a complement to the AI Act and a feasible approach to implementing option 3b, as it sets forth dialog, understanding and activities to manage AI risk responsibly.12
Many leading technology organizations, such as Amazon, Google and IBM, have applauded the NIST AI RMF's approach to the responsible development and deployment of AI products, stating that it is:
…an important path forward for the responsible development and deployment of AI products and services. The AI RMF, like the BSA Framework, creates a lifecycle approach for addressing AI risks, identifies characteristics of Trustworthy AI, recognizes the importance of context-based solutions, and acknowledges the importance of impact assessments to identify, document, and mitigate risks. This approach is well-aligned with BSA’s Framework to Build Trust in AI, which emphasizes the need to focus on high-risk uses of AI, highlights the value of impact assessments, and distinguishes between the obligations of those companies that develop AI, and those entities that deploy AI.13
As shown in figure 1, the AI RMF Core is composed of 4 functions: govern, map, measure and manage.
The govern function provides organizations with the opportunity to clarify and define the roles and responsibilities of those who oversee AI system performance. It also creates mechanisms for organizations to make their decision-making processes more explicit to counter systemic biases.
The map function suggests opportunities to define and document processes for operator and practitioner proficiency with AI system performance and trustworthiness concepts. It also suggests opportunities to define relevant technical standards and certifications.
The govern and map functions describe the importance of interdisciplinary and demographically diverse teams that use feedback from potentially impacted individuals and communities. AI actors who apply their professional expertise and activities within the RMF can assist technical teams by anchoring design and development practices to user intentions, the broader AI community and societal values. These AI actors act as gatekeepers or control points who help incorporate context-specific norms and values and evaluate end-user experiences and AI systems.
The measure function analyzes, assesses, benchmarks and monitors AI risk and related impacts using quantitative, qualitative, or mixed-method tools, techniques and methodologies. It uses knowledge relevant to AI risk identified in the map function and informs the manage function. AI systems should be tested before deployment and regularly thereafter. AI risk measurements include documenting systems' functionality and trustworthiness.
Measurement results are used in the manage function to assist risk monitoring and response efforts. Framework users must continue applying the measure function to AI systems as knowledge, methodologies, risk and impacts evolve.14
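The following minimal Python sketch shows how an organization might wire the four functions into its own tooling as a simple risk-register loop. The AIRisk fields, function names and placeholder metric are assumptions made for illustration; NIST's framework defines outcomes and activities, not code.

```python
from dataclasses import dataclass, field


@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register (illustrative fields)."""
    system: str
    description: str
    measurements: list = field(default_factory=list)
    status: str = "open"


def govern(policies: dict) -> dict:
    """Record roles, responsibilities and decision-making mechanisms."""
    return {"policies": policies, "owners_assigned": True}


def map_risks(system: str, contexts: list) -> list:
    """Identify risk in the system's context of use."""
    return [AIRisk(system=system, description=c) for c in contexts]


def measure(risks: list) -> None:
    """Attach quantitative or qualitative measurements to each risk."""
    for risk in risks:
        # Placeholder metric; real measurements would draw on test results,
        # benchmarks, audits or impact assessments.
        risk.measurements.append({"metric": "pre-deployment test", "result": "pending"})


def manage(risks: list) -> None:
    """Prioritize and respond to measured risk; repeat as knowledge evolves."""
    for risk in risks:
        risk.status = "mitigation planned" if risk.measurements else "open"


if __name__ == "__main__":
    govern({"bias review": "quarterly"})
    register = map_risks("resume screener", ["gender bias", "data drift"])
    measure(register)
    manage(register)
    for entry in register:
        print(entry)
```

The design choice to keep the four functions as separate steps mirrors the framework's intent that measurement feeds management and that the cycle is repeated throughout the system's life, rather than being run once before deployment.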
Both the European Union and the United States are committed to a risk-based approach to AI to advance trustworthy and responsible AI technologies. Experts from the governing bodies of both are working on “cooperation on AI standards and tools for trustworthy AI and risk management.” They are expected to draft a voluntary code of conduct for AI that can be adopted by like-minded countries.15
Conclusion
By understanding the current limitations of human-AI interactions, organizations can improve their AI risk management. It is important to recognize that the data-driven approaches AI systems use to convert or represent individual and social observational and decision-making practices need to be continuously understood and managed.
The AI Act proposes a risk-based approach to managing AI risk. It requires organizations that provide AI tools or adopt AI in their processes to undertake an impact assessment to identify the risk of their initiatives and apply an appropriate risk management approach. High-risk AI initiatives should be mitigated with effective risk controls, which can be discussed and reviewed with industry groups that have similar products or risk areas. This has a positive outcome: the development of voluntary, industry-led codes of conduct that can support the risk governance of AI. This approach can also help spread the cost of regulation and oversight responsibility. The synergies achieved will benefit and protect users of AI.
With this strategic adoption of AI, efficiencies can be achieved that are not possible with human effort only.
Endnotes
1 McFarland, M.; “Tesla-Induced Pileup Involved Driver-Assist Tech, Government Data Reveals,” CNN, 17 January 2023
2 Dastin, J.; “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, 10 October 2018
3 Tennery, A.; G. Cherelus; “Microsoft's AI Twitter Bot Goes Dark After Racist, Sexist Tweets,” Reuters, 24 March 2016
4 The AI Act, “The Artificial Intelligence Act”
5 Vigliarolo, B.; “Stack Overflow Bans ChatGPT as 'Substantially Harmful' for Coding Issues,” The Register, 5 December 2022
6 Sharma, S.; “ChatGPT Creates Mutating Malware That Evades Detection by EDR,” CSO, 6 June 2023
7 Rees, K.; “ChatGPT Used By Cybercriminals to Write Malware,” MakeUseOf, 9 January 2023
8 O’Neill, S.; “What Are the Dangers of Poor Quality Generative AI Content?” LXA, 12 December 2022
9 Van Rijmenam, M.; “Privacy In the Age of AI: Risks, Challenges and Solutions,” The Digital Speaker, 17 February 2023
10 Office of the Australian Information Commissioner, “Clearview AI Breached Australians’ Privacy,” 3 November 2021
11 European Commission, “Impact Assessment of the Regulation on Artificial Intelligence,” 21 April 2021
12 National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), USA, January 2023
13 National Institute of Standards and Technology (NIST), “Perspectives About the NIST Artificial Intelligence Risk Management Framework,” USA, 6 February 2023
14 Op cit NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0)
15 Staff, “EU, US to Draft Voluntary AI Code of Conduct,” The Straits Times, 1 June 2023
Adeline Chan
Leads risk management teams in assessing and mitigating risk and enhancing bank risk culture. She has implemented various risk frameworks for the cloud, SC Ventures, operations and technology, and cybersecurity. Her focus is on creating business value and aligning risk management with business objectives. Previously, she led teams in business transformation and banking mergers. While managing project and change risk, she coached subject matter experts on organization redesign and achieving cost efficiencies. Her experience spans global and corporate banking, wealth management, insurance and energy. She is a member of the Singapore Fintech Association and the Blockchain Association Singapore, where she plays an active role in the digital assets exchange and token subcommittee. Her social responsibility involvement includes volunteering for ISACA® SheLeadsTech (SLT) as a mentor to women in the technology sector and candidates looking to change careers to the GRC sector. She shares her professional insights through writing (http://medium.com/@adelineml.chan) and has contributed articles to ISACA Industry News and the ISACA® Journal.