Artificial Intelligence

How to build responsible AI systems: Introduction

Sebastian Fetz
9 Nov 2023
10 min read

Who is this article for?

This is the first in a series of articles intended to help with compliance and quality management in the development of AI-based products. This opening article gives a brief overview of the field, explains why it matters to us, and argues why it deserves more attention from everyone.

The target audience for this series is not limited to developers. Anyone involved in the development, management, or procurement of AI may find these individual deep dives valuable. We will cover a diverse range of topics, including regulatory and standardization issues, as well as practical advice for testing and documenting AI products.

The first part of this article explains the importance of the topic, while the second part provides practical examples.


Part 1: The importance of quality management in AI systems

Enabling Meaningful Innovation

For an innovation to be meaningful, it should solve real-world problems and create significant positive impact or value for individuals, communities, or society in general. To produce meaningful innovation in the AI space, you need to create a product that solves a problem and can be deployed and used in the real world. It must be maintainable and adhere to the legal and societal standards of the society in which it is used. If the system does not meet the legal requirements, or if its usage creates costs that outweigh the benefits, it will not be used to address these problems.

This can be seen in the three waves of AI development. Whereas the first wave focused on demonstrating the capabilities of machine learning and improving algorithms and training mechanisms, the second wave focused on inference and deployment problems. Many companies and solutions around MLOps were founded, and the industry shifted towards operations to solve the problems of cost and maintainability (which are still far from solved: Big Tech Struggles to Turn AI Hype Into Profits). The third wave is compliance. With more and more applications entering and influencing everyday life, we need discussions about the rules and regulations of AI. Examples are the EU AI Act, the AI Bill of Rights, and many more proposed legislative initiatives; the newest are the G7 AI code of conduct and a new executive order from the White House.

Ensuring the implementation and monitoring of all these rules is essential. This requires a comprehensive risk management system that effectively covers all aspects of AI, and a solid plan for testing and documentation. How do I test and prove that my credit scoring algorithm does not discriminate against anyone based on protected attributes? How do I set up an AI risk management system appropriately?

Keeping and maintaining a balance between innovation and compliance is difficult because both aspects can threaten the overall success. Too much compliance prevents any kind of fast experimentation and testing, whereas zero compliance produces technological showcases that might never be deployed in a real-world scenario and that may threaten the company, or even society, depending on the risk. Our mindset at Perelyn is to maintain this balance from the beginning and to involve the relevant stakeholders at the appropriate development stage. This ensures rapid prototyping in the innovation phase while also delivering a reliable, secure, and maintainable application that can solve real-world problems in production.

To ensure timely and cost-effective implementation, it is crucial to have a clear understanding of all the requirements from the start. This includes accurately representing important groups of people and testing for unintended bias, as well as documenting the results properly. Let's explore the different phases of the AI life cycle in more detail.


Part 2: Compliance in the AI system lifecycle

Ensuring compliance for AI systems is crucial at every stage of the life cycle. The following examples showcase some of the steps required to develop a compliant and trustworthy AI system. The list is not exhaustive, and certain steps may be unnecessary depending on the risk level of your AI system.

System Design Phase: Establishing a Solid Foundation

Before starting the coding and testing phase, it is essential to define the requirements and select appropriate metrics to measure them. This step is similar to the traditional software development process; the content, however, carries some added complexity.

It is particularly important to determine the limits within which the system should operate and how to evaluate its normal functioning. The Operational Design Domain (ODD) is a concept from automated and autonomous systems, in particular self-driving vehicles. It specifies the particular conditions under which the autonomous system is intended to operate. Among other things, the ODD covers factors such as environmental, geographical, time-of-day, traffic, and weather conditions.

Based on the specified ODD, the system requirements may vary. A recommendation system may have different reliability requirements compared to a fully autonomous one. Additionally, depending on the field of application, considerations of fairness and explainability may carry greater weight. Systems built to automate job application processes should prioritize fairness, whereas fairness matters less for quality control in manufacturing.
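To make this concrete, an ODD can be captured as an explicit, machine-checkable specification. The following Python sketch is purely illustrative; the `OperationalDesignDomain` class, its fields, and the highway-pilot example are our own assumptions, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class OperationalDesignDomain:
    """Hypothetical ODD for a driving-assistance feature."""
    max_speed_kmh: float
    allowed_road_types: set[str]
    allowed_weather: set[str]
    daylight_only: bool

    def is_within_odd(self, speed_kmh: float, road_type: str,
                      weather: str, is_daylight: bool) -> bool:
        """Return True only if the current situation lies inside the ODD."""
        return (
            speed_kmh <= self.max_speed_kmh
            and road_type in self.allowed_road_types
            and weather in self.allowed_weather
            and (is_daylight or not self.daylight_only)
        )

# Example: a highway pilot that must disengage outside its ODD.
odd = OperationalDesignDomain(
    max_speed_kmh=130,
    allowed_road_types={"highway"},
    allowed_weather={"clear", "cloudy"},
    daylight_only=False,
)
assert not odd.is_within_odd(90, "urban", "clear", True)  # urban road -> outside ODD
```

Writing the ODD down this explicitly has a side benefit: the same specification can drive both runtime safeguards and the test cases that probe behavior at the domain's boundaries.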


Documentation: The Foundation of Accountability

Documenting each stage of the development process is crucial for accountability and auditing. It is instrumental in meeting legal and regulatory mandates as well as industry standards and recognized best practices, and it allows developers, auditors, or other stakeholders to trace the system's evolution and understand the rationale behind design decisions.

There are currently no established standards in this field, but several useful references can provide guidance, for example model cards and data cards. Many models available on Hugging Face already come with pre-filled model cards, which you can build on for your specific needs. However, these only cover a limited portion of the system documentation. Additional factors to consider include potential misuse, acknowledged limitations, training needs for system operation, and other relevant aspects. The extent of these considerations will depend on the specific type of risk and how the system is used.
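Even a lightweight, structured skeleton beats free-form notes. The sketch below outlines a minimal model card as a Python structure, loosely inspired by the model card literature; all field names and values are hypothetical suggestions, not a prescribed schema:

```python
# Minimal model card skeleton. Field names follow the spirit of
# "Model Cards for Model Reporting" but are our own suggestion.
model_card = {
    "model_details": {
        "name": "credit-scoring-v1",        # hypothetical model
        "version": "1.0.0",
        "owners": ["ml-team@example.com"],
        "license": "proprietary",
    },
    "intended_use": "Pre-screening of consumer credit applications.",
    "out_of_scope_use": "Fully automated rejection without human review.",
    "training_data": "Internal applications 2018-2022; see data card DC-42.",
    "evaluation_data": "Held-out 2023 applications, stratified by region.",
    "metrics": ["AUROC", "demographic parity difference"],
    "ethical_considerations": "Protected attributes excluded from features; "
                              "proxy variables reviewed.",
    "known_limitations": "Not validated for applicants under 21.",
}
```

Keeping such a record in version control next to the model code means the documentation evolves with the system instead of being reconstructed at audit time.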

Risk Management System

A risk management system is an essential part of compliance. It is a systematic approach to identifying and classifying risks and deciding which mitigation measures should be prioritized. A generally well-accepted guideline is the ISO 31000 series. One tool for an effective risk identification process is Failure Modes and Effects Analysis (FMEA). An adapted version for an AI system could look like this:

1. Risk Identification/Impact Assessment:
Objective: Identify potential risks for the company and society.

2. Risk Analysis:
Objective: Assess the nature, probability, and severity of identified risks.

3. Risk Prioritization:
Objective: Rank and prioritize risks based on the results of risk analysis. Helps in allocating resources efficiently to address the most significant risks first.
Risk scores are often calculated as the product of probability and impact (see the sketch after this list).

4. Risk Mitigation:
Objective: Implement mitigation strategy as well as risk response plans to address and reduce risks.

5. Risk Monitoring and Review:
Objective: Continuously track and review identified risks and the effectiveness of response strategies.
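The following minimal Python sketch illustrates the prioritization step (3): it scores each risk as the product of probability and impact, as described above, and sorts the register accordingly. The risk entries and the 1-5 scales are invented for illustration:

```python
# Sketch of FMEA-style risk prioritization. Scales and entries are
# illustrative assumptions, not a real risk register.
risks = [
    # (description, probability 1-5, impact 1-5)
    ("Biased training data disadvantages a protected group", 3, 5),
    ("Data drift degrades model accuracy in production",     4, 3),
    ("Prompt injection leaks confidential context",          2, 4),
]

# Risk score = probability x impact; higher scores are handled first.
prioritized = sorted(
    ((desc, p * i) for desc, p, i in risks),
    key=lambda item: item[1],
    reverse=True,
)

for description, score in prioritized:
    print(f"score {score:2d}: {description}")
```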


Development/Training Phase: Knowing What to Focus On

To ensure compliance and accountability in AI systems, it is important to track the training data and licenses of the data used. This helps establish transparency and traceability in the development process.

Additionally, documenting the characteristics of the test set is crucial. This provides insight into the data used for testing and helps evaluate the performance of the AI system accurately. Tools like DVC can help you automate the tracking. Combined with MLOps tools such as MLflow, you can keep track of the training data as well as model artifacts, performance metrics, and other important metadata.
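A minimal sketch of what such tracking might look like with MLflow follows. The experiment name, parameters, metric values, and file paths are assumptions for illustration; only the MLflow calls themselves (`set_experiment`, `start_run`, `set_tag`, `log_param`, `log_metric`, `log_artifact`) are real API:

```python
import mlflow

# Hypothetical training run; the dataset is assumed to be versioned
# with `dvc add data/train.csv`, so a DVC revision pins the exact data.
mlflow.set_experiment("credit-scoring")

with mlflow.start_run():
    mlflow.set_tag("data_version", "train.csv.dvc@a1b2c3")  # assumed DVC rev
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("train_data_license", "internal-use-only")

    # ... train the model and evaluate on the documented test set ...
    mlflow.log_metric("test_auroc", 0.87)               # placeholder value
    mlflow.log_metric("demographic_parity_diff", 0.03)  # placeholder value

    # Attach the test-set documentation to the run (path is an assumption).
    mlflow.log_artifact("reports/test_set_characteristics.md")
```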

Performance metrics are not the only things worth measuring and tracking. To ensure fairness and avoid discrimination, you need a dedicated strategy that outlines the required steps and documentation. This includes conducting fairness analyses that focus specifically on protected groups, biases, discrimination, and fairness tests. By measuring and evaluating these factors, you can develop an AI system that avoids discriminatory biases as far as possible. Keep in mind, however, that measuring fairness can be a complex task; a minimal example is shown below.
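As one simple illustration, the sketch below computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups, using plain NumPy on synthetic data. It is one of many possible fairness metrics, not a complete fairness audit:

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    A value near 0 suggests similar approval rates; a large value
    warrants investigation. This is just one of many fairness metrics.
    """
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return float(abs(rate_a - rate_b))

# Toy example: 1 = credit approved, 0 = rejected (synthetic data).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, group))  # 0.5 -> worth a closer look
```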


Monitoring: The Key to Keeping Tabs on Your Innovation

Monitoring is a critical aspect of ensuring the success of your innovation. Effective monitoring practices let you detect data or concept drift early and address it promptly. This helps maintain the accuracy and reliability of your AI system over time.
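One common, simple technique for spotting data drift is to compare the distribution of a feature at training time against a recent production window, for example with a two-sample Kolmogorov-Smirnov test. The sketch below uses synthetic data and an arbitrary alert threshold; both are assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

# Sketch of a simple data-drift check on one numeric feature.
rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # reference data
live_feature  = rng.normal(loc=0.4, scale=1.0, size=1_000)  # shifted in production

# Two-sample Kolmogorov-Smirnov test: a small p-value indicates the
# live distribution differs from the training-time distribution.
statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # alert threshold is an assumption; tune per feature
    print(f"Possible data drift (KS={statistic:.3f}, p={p_value:.2e})")
```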

Additionally, having a robust complaints management system in place enables you to address any issues or concerns raised by users or stakeholders, ensuring their satisfaction and trust in your innovation. Prioritizing and investing in monitoring and complaint management can significantly contribute to the long-term success and sustainability of your AI solutions.


Conclusion

To successfully create meaningful AI innovations, it is important not only to focus on groundbreaking algorithms or features, but also to ensure that these advancements align with legal and ethical requirements. By considering these factors throughout the entire development process, from the initial stages to post-launch monitoring, developers and compliance experts can contribute to AI solutions that are not only innovative but also responsible and long-lasting.

For us at Perelyn, responsible AI isn't just an ideal; it's the core of our consulting practice. We steer companies through the complexities of AI development, ensuring ethical compliance from foundational system design to thorough risk management. Our mission is to equip organizations with transparent, fair, and accountable AI solutions, aligning cutting-edge innovation with societal and regulatory expectations. At Perelyn, we're not just creating AI—we're shaping a future where technology advances with integrity and responsibility.
