AI Compliance in Canada: 7 Obligations Most Businesses Don't Know They Have
Canada has no federal AI law, but businesses face binding obligations from 7+ regulatory instruments. Quebec's Law 25 carries penalties up to C$25 million. The EU AI Act reaches Canadian exporters August 2026. Here is what applies to your business.
A jurisdiction-by-jurisdiction map of every binding AI obligation that applies to Canadian businesses in 2026 — federal, provincial, and international — with the specific penalty amounts, effective dates, and a practical compliance framework that works regardless of your industry.
AI compliance in Canada refers to the set of legal obligations governing how businesses develop, deploy, and use artificial intelligence systems. Canada does not have a single federal AI law. Instead, AI compliance requirements come from a patchwork of federal privacy law (PIPEDA), provincial legislation (Quebec's Law 25, Ontario's Bills 149 and 194), sector-specific guidelines (OSFI E-23 for financial institutions), and international regulations (the EU AI Act for businesses serving European markets).
72% of Canadian organizations say responsible AI is a top priority, yet 36% have no dedicated governance function (PwC Canada, 2026 Trust in AI Report). That gap between stated intention and operational reality is where compliance risk lives.
The assumption most Canadian business owners operate under is straightforward: Canada has not passed an AI law, so AI compliance is a future problem. That assumption is wrong. The obligations are already binding, the penalties are already real, and the enforcement has already started.
What Died, and What Survived
The Artificial Intelligence and Data Act was supposed to be Canada's answer to the EU AI Act. Introduced in June 2022 as Part 3 of Bill C-27, AIDA proposed a risk-based framework for regulating high-impact AI systems (Parliament of Canada). It never reached a vote. When Parliament was prorogued in January 2025, Bill C-27 died on the Order Paper. The federal election that followed buried any prospect of revival.
Instead of one framework with clear rules, what survived is more complicated: Canadian businesses face obligations from at least seven different regulatory instruments, each with its own scope, definitions, and penalties. The fragmentation is the compliance problem. A business operating in Ontario with clients in Quebec and customers in Europe is simultaneously subject to PIPEDA, Law 25, two Ontario statutes, the EU AI Act, and potentially OSFI guidelines if it touches financial services.
What Is Binding Right Now
Federal: PIPEDA
PIPEDA contains no mention of artificial intelligence. It was written in 2000. But the Office of the Privacy Commissioner interprets its principles to apply to AI systems that make decisions about individuals. Organizations must provide meaningful explanations of how significant AI-assisted decisions are made and ensure individuals can access information about automated processing that affects them (OPC, Regulatory Framework for AI).
The practical implication: if your AI system processes personal information and influences decisions about customers, employees, or applicants, PIPEDA's accountability and transparency principles already apply.
Federal: Treasury Board Directive on Automated Decision-Making
The federal government regulates its own AI use through the Treasury Board Directive on Automated Decision-Making, which applies to most federal institutions (Treasury Board of Canada Secretariat). Institutions must complete an Algorithmic Impact Assessment containing 65 risk questions and 41 mitigation questions that produce an impact level from I to IV. Existing automated decision systems developed or procured before June 24, 2025 have until June 24, 2026 to comply with updated requirements.
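The Directive's scoring mechanics can be sketched in a few lines. This is an illustrative simplification, not the official AIA tool: the real assessment derives its levels from the full 65 risk and 41 mitigation questions, and the quartile thresholds below are an assumption for demonstration purposes.

```python
def impact_level(risk_score: float, max_score: float) -> str:
    """Map a raw risk score to an impact level I-IV.

    Illustrative only: the official Algorithmic Impact Assessment
    defines its own question weights and thresholds; the quartile
    cutoffs here are an assumption, not the Directive's rules.
    """
    pct = risk_score / max_score
    if pct <= 0.25:
        return "I"
    elif pct <= 0.50:
        return "II"
    elif pct <= 0.75:
        return "III"
    return "IV"

print(impact_level(30, 100))  # a mid-range score lands at Level II
```

Higher levels trigger stricter requirements under the Directive (peer review, human-in-the-loop approval, public notice), which is why the level a system lands at matters more than the raw score.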
This Directive does not apply directly to private sector businesses. But it matters for two reasons: federal procurement contracts increasingly flow these requirements down to vendors, and the Directive signals the governance standard that future federal legislation is likely to adopt for the private sector.
Quebec: Law 25
Quebec's privacy modernization carries the most direct AI compliance teeth in Canada. Organizations using automated decision-making must inform individuals when a decision is made exclusively by automated processing, disclose the personal information used and the principal factors behind the decision, and provide an opportunity for review (Fasken). AI deployments that involve profiling or elevated privacy risk require a mandatory privacy impact assessment under Section 3.3 (Raymond Chabot Grant Thornton).
Non-compliance carries two penalty tiers. Administrative monetary penalties reach C$10 million or 2% of worldwide turnover. Penal fines — for serious violations — reach C$25 million or 4% of worldwide turnover, whichever is greater (BigID; Osler). Enforcement is active: 444 confidentiality-incident declarations were filed in 2023-2024 alone. Law 25 applies to any organization that collects, uses, or discloses personal information of Quebec residents, regardless of where the organization is headquartered.
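The "whichever is greater" structure means exposure scales with revenue rather than stopping at the headline figure. A minimal calculator, using only the caps stated above (the tier a given violation falls into is a legal determination, not something this sketch decides):

```python
def law25_penalty_cap(worldwide_turnover_cad: float, penal: bool = False) -> float:
    """Maximum exposure under Quebec's Law 25.

    Administrative monetary penalties: C$10M or 2% of worldwide
    turnover, whichever is greater. Penal fines for serious
    violations: C$25M or 4%, whichever is greater.
    """
    if penal:
        return max(25_000_000, 0.04 * worldwide_turnover_cad)
    return max(10_000_000, 0.02 * worldwide_turnover_cad)

# A firm with C$2B worldwide turnover:
print(law25_penalty_cap(2_000_000_000))              # 40000000.0 — 2% exceeds the C$10M floor
print(law25_penalty_cap(2_000_000_000, penal=True))  # 80000000.0 — 4% exceeds the C$25M floor
```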
Ontario: Three Governance Layers
Ontario built a three-layer AI governance system over 13 months. Bill 149 requires employers with 25 or more employees to disclose when AI is used in hiring, effective January 2026. Bill 194 (the Strengthening Cyber Security and Building Trust in the Public Sector Act) received Royal Assent in November 2024 and creates AI accountability requirements for public sector entities and their contractors (Dentons). The IPC-OHRC Joint Principles for Responsible AI Use, published January 2026, establish evaluation standards that apply to all organizations.
DeployLabs published a detailed breakdown of how Ontario's three frameworks interlock: Ontario Has Three AI Governance Frameworks. Most Organizations Know About One.
For a focused analysis of Bill 149's hiring disclosure requirements, see: Ontario AI Hiring Disclosure: What Employers Must Know
| Jurisdiction | Regulation | Status | Key AI Obligation | Penalty |
|---|---|---|---|---|
| Federal | PIPEDA | In force (2000) | Transparency for automated decisions | Commissioner orders, Federal Court |
| Quebec | Law 25 | In force (2023-2024) | Disclose automated decisions, impact assessments | Up to C$25M or 4% turnover |
| Ontario | Bill 149 | In force (Jan 2026) | AI hiring disclosure (25+ employees) | ESA enforcement |
| Ontario | Bill 194 | In force (Nov 2024) | Public sector AI accountability | Regulations pending |
| Federal | TB Directive on ADM | In force (updated 2024) | Algorithmic Impact Assessment for federal AI systems | Compliance mandate (June 24, 2026 deadline for existing systems) |
| Federal | Voluntary Code | Active (2025) | Responsible GenAI development | Voluntary (reputational) |
Not sure where AI fits in your operations? Take the Free AI Readiness Assessment →

What Is Coming
EU AI Act: August 2, 2026
The regulation that will affect the most Canadian businesses is not Canadian. The EU AI Act applies to any company offering AI systems or services within the EU, regardless of where the company is physically located (Canada Trade Commissioner Service). High-risk AI system obligations become applicable on August 2, 2026, with remaining provisions following by August 2027.
The penalty structure exceeds GDPR. Prohibited AI practices carry fines up to EUR 35 million or 7% of global annual turnover. High-risk system violations carry fines up to EUR 15 million or 3% of turnover. Providing false information about AI systems carries fines up to EUR 7.5 million or 1% of turnover (EU AI Act, Article 99). SME provisions exist but the thresholds are defined by EU standards: fewer than 250 employees and either annual turnover under EUR 50 million or a balance sheet under EUR 43 million.
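The three tiers follow the same greater-of pattern as Law 25, which makes exposure easy to tabulate. A sketch using only the Article 99 caps listed above (note that for SMEs the Act applies the *lower* of the two figures, a nuance this sketch omits):

```python
# (fixed cap in EUR, share of global annual turnover) per EU AI Act Article 99
EU_AI_ACT_FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "false_information":   (7_500_000, 0.01),
}

def eu_fine_cap(violation: str, global_turnover_eur: float) -> float:
    """Maximum fine: the fixed amount or the turnover share, whichever is higher."""
    fixed, share = EU_AI_ACT_FINE_TIERS[violation]
    return max(fixed, share * global_turnover_eur)

print(eu_fine_cap("high_risk_violation", 600_000_000))  # 18000000.0 — 3% exceeds EUR 15M
```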
Canadian SaaS companies with EU customers, AI product exporters, and service providers processing EU residents' data through AI systems all fall within scope.
OSFI E-23: May 1, 2027
Federally regulated financial institutions face the most specific AI governance requirements through OSFI's updated Guideline E-23 on Model Risk Management. Published September 2025 with an 18-month transition period, E-23 requires enterprise-wide model risk management frameworks that explicitly cover AI and machine learning models (OSFI). The guideline's scope expanded from banks and trust companies to all FRFIs: foreign bank branches, life insurers, and property and casualty insurers.
For businesses that serve financial institutions as technology vendors or AI consultants, E-23 will flow through procurement requirements. Banks will need their AI vendors to demonstrate compliance with model governance standards.
The Compliance Framework That Works Across Jurisdictions
The patchwork creates a practical challenge. A Toronto business with Quebec clients and European customers faces obligations from five jurisdictions simultaneously. Building separate compliance programs for each is expensive and fragile.
The common thread across every regulation listed above is three requirements: know what AI you use, document how it makes decisions, and be able to explain those decisions to the people they affect.
A 30-person professional services firm in Toronto uses AI for client intake screening, document review, and scheduling. Under PIPEDA, it must be transparent about automated processing. Under Ontario's Bill 149, it must disclose AI in hiring. If it serves Quebec clients, Law 25 requires disclosure and impact assessments for automated decisions. European clients bring the EU AI Act into scope, which may classify the document review system as high-risk. The firm needs one governance framework that satisfies the highest applicable standard, not four separate compliance programs.
Building compliance against the most stringent applicable regulation creates automatic compliance with less demanding ones. For most Canadian businesses, that means Law 25 domestically and the EU AI Act internationally: one governance framework covering every jurisdiction in scope.
The practical first step is an inventory. Map every AI system your business uses, what personal information it processes, what decisions it influences, and which jurisdictions apply based on where your customers and employees are located. That mapping is the foundation of every compliance requirement across every jurisdiction. Most businesses have never done it.
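The inventory itself can be as simple as a structured record per system. A minimal sketch, assuming hypothetical field names and region codes (none of the regulations prescribe this schema, and the applicability rules below are deliberately simplified triggers, not legal determinations):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an AI inventory: what the system does, what data it
    touches, and which regimes are plausibly triggered. Illustrative
    schema only — not prescribed by any regulation above."""
    name: str
    purpose: str                      # e.g. "client intake screening"
    personal_info: list[str]          # categories of personal data processed
    decisions_influenced: list[str]   # e.g. "hiring", "credit", "pricing"
    customer_regions: list[str]       # where affected individuals are located

    def applicable_regimes(self) -> set[str]:
        regimes = set()
        if self.personal_info:
            regimes.add("PIPEDA")            # federal privacy baseline
        if "QC" in self.customer_regions:
            regimes.add("Law 25")
        if "hiring" in self.decisions_influenced:
            regimes.add("Ontario Bill 149")  # if 25+ employees in Ontario
        if "EU" in self.customer_regions:
            regimes.add("EU AI Act")
        return regimes

intake = AISystemRecord(
    name="Intake screener",
    purpose="client intake screening",
    personal_info=["contact details", "employment history"],
    decisions_influenced=["hiring"],
    customer_regions=["ON", "QC", "EU"],
)
print(sorted(intake.applicable_regimes()))
```

Even a spreadsheet with these five columns gets a business most of the way there; the point is that every downstream obligation — disclosure, impact assessment, explainability — attaches to a specific system in this list.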
Understanding what your AI systems actually do, what data they touch, and what governance they need is the starting point for compliance. A structured AI readiness assessment identifies these gaps before they become enforcement actions.
For Ontario-specific compliance, start with our breakdown of the three governance frameworks and Bill 149 hiring disclosure requirements. For businesses evaluating their security posture alongside compliance, see what most businesses miss about AI agent security.
- Canada has no federal AI law — AIDA died in Parliament in January 2025 — but Canadian businesses face binding AI obligations from at least 7 regulatory instruments across federal, provincial, and international jurisdictions
- Quebec's Law 25 carries the strongest domestic enforcement: penal fines up to C$25 million or 4% of worldwide turnover for serious violations, with mandatory impact assessments for AI that profiles or makes decisions about individuals
- The EU AI Act applies to Canadian businesses selling AI into Europe regardless of where they are located, with high-risk obligations effective August 2, 2026 and fines up to EUR 35 million or 7% of global turnover
- The practical compliance approach: build governance against the most stringent applicable regulation (Law 25 domestically, EU AI Act internationally) and satisfy less demanding requirements automatically