AI Policy Tracker
A regularly updated overview of AI regulation and governance frameworks worldwide. Each entry includes the regulation status, scope, and implications for builders and procurement teams.
Last reviewed: April 2026
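Each tracker entry carries the same fields (name, risk level, status, scope). As a minimal sketch, the record below shows one hypothetical data model for an entry; the field names, the `status` value, and the example are illustrative assumptions, not the tracker's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class PolicyEntry:
    """One tracker entry (hypothetical schema)."""
    name: str        # framework name, e.g. "EU AI Act"
    risk_level: str  # regulatory burden on deployers: "High", "Medium", or "Low"
    status: str      # regulation status; illustrative value below
    scope: str       # one-line summary of covered systems and uses

# Example built from the EU AI Act summary in this tracker.
eu_ai_act = PolicyEntry(
    name="EU AI Act",
    risk_level="High",
    status="In force",  # assumption for illustration; check the official source
    scope="Risk-based classification of AI systems with bans on certain uses",
)
```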
EU AI Act
Risk level: High
Risk-based classification of AI systems. Bans certain uses (social scoring; real-time biometric surveillance, with exceptions). Requires conformity assessments for high-risk systems. Imposes foundation-model obligations on general-purpose AI.
Official source

US Executive Order 14110
Risk level: Medium
Requires safety testing and reporting for powerful AI models. Directs agencies to develop AI governance frameworks. Establishes the AI Safety Institute at NIST. Addresses AI in critical infrastructure, healthcare, and hiring.
Official source

UK AI Safety Institute
Risk level: Low
Government body focused on evaluating frontier AI risks. Conducts pre-release safety testing of advanced models. Has published a framework for evaluating societal impacts. Relies on voluntary cooperation rather than binding regulation.
Official source

Canada AIDA (Bill C-27)
Risk level: Medium
Artificial Intelligence and Data Act, part of a broader digital charter. Creates obligations for high-impact AI systems. Requires impact assessments and mitigation measures. Provides criminal penalties for reckless deployment that causes harm.
Official source

China AI Regulations
Risk level: High
A series of regulations covering generative AI, deepfakes, recommendation algorithms, and synthetic content. Requires algorithmic filing and security assessments. Content must align with socialist core values. Mandates labeling of AI-generated content.
Official source

About this tracker
This tracker covers major AI governance frameworks with global impact. It is reviewed monthly and updated when significant changes occur. The risk levels indicate the regulatory burden a framework places on AI system deployers, not a judgment on the quality of the regulation itself. For methodology details, see the source standards page.