In brief
In September 2023, the Saudi Authority for Data and Artificial Intelligence (SDAIA) published the “AI Ethics Principles”, intending to support the Kingdom’s effort towards achieving its vision and national strategies related to adopting AI technology. SDAIA has analyzed global practices and developed the AI Ethics Principles to limit the negative implications of AI systems, help companies ensure the responsible use of those systems and protect the privacy of individuals and their rights concerning the collection and processing of their data.
The AI Ethics Principles shall apply to all AI stakeholders designing, developing, deploying, implementing, using, or being affected by AI systems within the Kingdom, including but not limited to public entities, private entities, non-profit entities, researchers, public services, institutions, civil society organizations, individuals, workers and consumers.
The AI Ethics Principles are a significant step forward in the Kingdom’s efforts to ensure the responsible and ethical development and use of AI technology. Comprehensive and well aligned with global best practices, the principles will be essential in ensuring that AI is used to benefit society and the environment and that the potential harms AI can pose are avoided.
Key takeaways
The AI Ethics Principles to be taken into account when designing and developing AI systems are:
- Fairness. The fairness principle requires actions to eliminate bias, discrimination or stigmatization of individuals, communities, or groups in the design, data, development, deployment and use of AI systems. When designing AI systems, it is essential to ensure fair, objective standards that are inclusive, diverse and representative of all or targeted segments of society.
- Privacy & Security. The privacy and security principle represents overarching values that AI systems are required to uphold: AI systems have to be built in a safe way that respects the privacy of the data collected and maintains the highest levels of data security processes and procedures to keep the data confidential and prevent breaches.
- Humanity. The humanity principle highlights that AI systems should be built using an ethical methodology to be just and ethically permissible, based on intrinsic and fundamental human rights and cultural values, to generate a beneficial impact on individual stakeholders and communities through the adoption of a more human-centric design approach.
- Social & Environmental Benefits. This principle embraces the beneficial and positive impact that AI can have on social and environmental priorities, benefiting individuals and the broader community in line with sustainability goals and objectives. AI systems should contribute to empowering and complementing social and ecological progress while addressing associated social and environmental ills.
- Reliability & Safety. The reliability and safety principle ensures that an AI system adheres to its set specifications and behaves exactly as its designers intended and anticipated. Indeed, reliability is a measure of consistency and provides confidence in how robust a system is, while safety ensures that the AI system does not pose a risk of harm or danger to society and individuals. As an illustration, AI systems such as autonomous vehicles can threaten people’s lives if living organisms are not adequately recognized, specific scenarios are not trained for, or the system malfunctions.
- Transparency & Explainability. This principle is crucial for building and maintaining trust in AI systems and technologies. According to this principle, AI systems must be built with a high level of clarity and explainability, as well as features to track the stages of automated decision-making, particularly those that may lead to detrimental effects on individuals. For example, an AI system should be designed to include an information section in the platform that gives an overview of the AI model’s decisions as part of the overall transparent application of the technology.
- Accountability & Responsibility. This principle is closely related to the fairness principle, and it holds designers, vendors, procurers, developers, owners and assessors of AI systems and the technology itself ethically responsible and liable for decisions and actions that may result in potential risk and adverse effects on individuals and communities. Proper control mechanisms have to be in place to avoid harm and misuse of the technology.
Entities shall be responsible for ensuring that their AI systems comply with the AI Ethics Principles, and the authority may measure their level of commitment, supporting them in evaluating their AI systems and making recommendations on how to improve their compliance.