Saudi Arabia has opened a public consultation on its draft Responsible AI Policy, with submissions due by 3 May 2026. For businesses active in the Kingdom, the significance is broader than the consultation itself: the draft signals that Saudi Arabia is moving from high-level principles toward a more operational governance model for the development and use of AI systems.
The draft applies broadly across government entities, private sector organisations, non-profit actors, and individuals who develop, use, or publish AI-enabled applications or solutions in Saudi Arabia. The policy is designed to balance AI adoption and innovation with responsible use, while introducing a more preventive approach to identifying and managing higher-risk AI uses.
The policy marks a shift from broad principle-setting to more structured governance expectations. At its core, the draft establishes a risk-tiering framework, categorising AI systems into four levels: critical, high, limited, and low risk. It addresses privacy, transparency, and safety by design, alongside requirements relating to testing, performance monitoring, data protection, cybersecurity, content moderation, non-discrimination, governance, and registration-related obligations.
For companies developing or deploying AI in Saudi Arabia, the draft points to a more formal expectation that responsible AI practices be built into product design, governance, and compliance processes from the outset.
The Saudi Data and AI Authority (SDAIA) also introduces several operational mechanisms that move beyond principle-based governance.
The draft should also be seen in context. Saudi Arabia has already laid important foundations through earlier AI ethics principles and guidance on issues such as deepfakes. This consultation suggests the next phase is beginning: one in which responsible AI is translated into more operational expectations for market participants, not just broad directional guidance.
For businesses, the immediate implication is not that an AI-specific law has arrived overnight. It is that AI governance in Saudi Arabia is becoming more structured, more implementation-focused, and more relevant to day-to-day business decisions. Organisations operating in the Kingdom should expect closer attention to how AI systems are designed, documented, monitored, and governed in practice.
SDAIA’s draft policy signals a clear move toward a layered AI governance regime in Saudi Arabia, complementing existing frameworks such as the Personal Data Protection Law (PDPL) and National Cybersecurity Authority (NCA) controls. Together, these regimes point to a more converged compliance model, requiring organisations to align AI governance with data protection and cybersecurity requirements.
This matters especially for multinational technology companies and enterprise deployers. Responsible AI can no longer sit only with policy or legal teams. Privacy, product and security, compliance, and policy functions will need to work more closely together to ensure that internal governance approaches can be explained, evidenced, and adapted to Saudi requirements as they evolve.
Saudi Arabia is positioning itself as a serious actor in shaping the next phase of AI governance. For companies with exposure to the Kingdom, this consultation is an early opportunity to understand where expectations are heading and to engage before the framework is finalised.