
EU AI Act Compliance Checklist for Canadian Companies

The EU AI Act applies to Canadian companies that deploy AI systems affecting EU residents, regardless of where the company is headquartered. With prohibited practices enforcement starting August 2025 and high-risk requirements taking effect in August 2026, Canadian companies face a compressed compliance timeline. This guide breaks down the risk classification tiers, specific requirements for each, the documentation burden, penalties up to 7% of global annual turnover, and a step-by-step compliance checklist.

Digiteria Labs · 21 min read

Key Signals

  • The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024, with a staggered enforcement timeline. Prohibited AI practices became enforceable on February 2, 2025. General-purpose AI model obligations took effect on August 2, 2025. High-risk AI system requirements become enforceable on August 2, 2026 — giving Canadian companies roughly five months from the date of this article to reach full compliance.
  • The Act has extraterritorial scope modeled on GDPR Article 3: it applies to any organization that places an AI system on the EU market or whose AI system's output is "used in the Union," regardless of where the organization is established. A Canadian SaaS company whose product is used by EU customers is in scope. Full stop.
  • Penalties are severe and designed to be felt: up to 35 million EUR or 7% of global annual turnover (whichever is higher) for prohibited practice violations, up to 15 million EUR or 3% for high-risk non-compliance, and up to 7.5 million EUR or 1% for providing incorrect information to authorities.
  • Canada's own Artificial Intelligence and Data Act (AIDA), originally introduced as Part 3 of Bill C-27, remains stalled in the legislative process as of March 2026. This means Canadian companies face the paradox of needing to comply with EU AI regulation while having no equivalent domestic framework to benchmark against — and the two regimes, when AIDA eventually passes, are likely to differ significantly in approach.
  • The EU AI Office, established to oversee enforcement, published its first set of implementation guidelines in January 2026, including the Code of Practice for general-purpose AI model providers that will serve as the baseline compliance standard.

What Happened

I've been tracking the EU AI Act since the political agreement in December 2023, and I want to be direct: most Canadian companies I've spoken with are not ready. Some are not even aware they're in scope. The assumption I hear most often is "we're a Canadian company, EU regulation doesn't apply to us." This is wrong, and it's dangerously wrong, because the penalties are calculated as a percentage of global turnover — not EU revenue. A Canadian company with $50 million in annual revenue and a single EU customer using its AI-powered product faces a theoretical maximum penalty of $3.5 million for a prohibited practice violation. That gets your attention.

The EU AI Act is the world's first comprehensive AI-specific regulation, and its extraterritorial reach means it functions as a de facto global standard — much like GDPR did for data privacy. Canadian companies that sell software, provide SaaS services, or deploy AI features used by anyone in the EU need to understand their obligations now, not when AIDA eventually passes and creates a domestic compliance baseline. The compliance timeline is not hypothetical. Prohibited practices are already enforceable. High-risk requirements take effect in August 2026. The window for preparation is closing.

Let me break down exactly what this means for Canadian companies, what you need to do, and how much it's going to cost.

Note: I am not a lawyer, and this analysis does not constitute legal advice. The EU AI Act is a complex regulation with significant interpretive questions that are still being resolved through EU AI Office guidance and will ultimately be clarified by enforcement actions and case law. If your company is in scope (and if you have EU customers using AI features, it probably is), engage qualified EU regulatory counsel. This article is a strategic orientation to help you ask the right questions and prioritize the right workstreams — not a substitute for legal advice.

Does the EU AI Act Apply to Canadian Companies?

Yes. Article 2 of the Act defines three categories of entities in scope:

  1. Providers — any entity that develops an AI system or general-purpose AI model and places it on the market or puts it into service in the Union, "irrespective of whether that provider is established within the Union or in a third country." If you build AI software and it's available to EU users, you are a provider.
  2. Deployers — any entity that uses an AI system under its authority, if the deployer is established in the Union or if "the output produced by the AI system is used in the Union." If a Canadian company deploys an AI system whose results affect EU residents — even if the system runs on servers in Montreal — the company is a deployer subject to EU obligations.
  3. Importers and Distributors — entities in the EU supply chain for AI systems originating outside the Union. These are typically EU-based entities, but the obligations cascade back to the non-EU provider.

The critical phrase is "output used in the Union." This is interpreted broadly. If your AI-powered recommendation engine suggests products to EU customers, your AI-powered fraud detection system flags transactions by EU cardholders, or your AI-powered HR tool screens applications from EU candidates — the output is used in the Union, and you are in scope. The threshold is not whether you actively market to the EU. It is whether your AI system's output reaches EU residents.

The GDPR Precedent

If this sounds familiar, it should. The extraterritorial scope mirrors GDPR Article 3, and the enforcement precedent is instructive. When GDPR took effect in 2018, many non-EU companies assumed enforcement against foreign entities would be toothless. They were wrong. European data protection authorities have issued fines against companies headquartered in the US, Canada, and elsewhere, and have mechanisms for enforcement cooperation with non-EU jurisdictions. Expect the same for the AI Act. The EU AI Office has explicitly stated it intends to enforce against non-EU providers.

What Are the Key Compliance Dates?

The EU AI Act uses a staggered enforcement timeline. Here are the dates Canadian companies need to track:

  • February 2, 2025 (already in effect): Prohibitions on unacceptable-risk AI practices. If your AI system falls into a prohibited category, you are already non-compliant.
  • August 2, 2025 (already in effect): Obligations for providers of general-purpose AI (GPAI) models. If you offer a foundation model or general-purpose AI model used in the EU, these obligations apply now.
  • August 2, 2026: Full enforcement of high-risk AI system requirements, transparency obligations for limited-risk systems, and the complete conformity assessment framework. This is the big one.
  • August 2, 2027: High-risk AI system requirements that relate to AI systems used as safety components of products already regulated under existing EU product-safety legislation (e.g., medical devices, automotive, aviation).

The practical implication for most Canadian companies: you have until August 2, 2026 to achieve compliance for high-risk systems. If you haven't started, you're behind.

How Does the AI Act Classify Risk?

The Act organizes AI systems into four risk tiers, each with different obligations. Understanding where your systems fall is the first step in any compliance program.

Unacceptable Risk (Prohibited)

These AI practices are banned outright. There is no conformity-assessment path — outside a handful of narrow law-enforcement carve-outs, they simply cannot be deployed in the EU market. Already enforceable since February 2025.

Prohibited practices include:

  • Social scoring by public authorities or on their behalf — AI systems that evaluate or classify individuals based on social behavior or personality characteristics, leading to detrimental or unfavorable treatment.
  • Subliminal manipulation — AI systems that deploy techniques beyond a person's consciousness to materially distort behavior in a way that causes or is likely to cause harm.
  • Exploitation of vulnerabilities — AI systems that exploit vulnerabilities of specific groups (age, disability, social or economic situation) to materially distort behavior.
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions for specific serious crimes).
  • Emotion recognition in the workplace and educational institutions.
  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
  • Biometric categorization to infer sensitive attributes (race, political opinions, trade union membership, religious beliefs, sexual orientation).
  • Predictive policing based solely on profiling or personality traits.

What this means for Canadian companies: Audit your AI systems now. If any system deployed in or affecting the EU falls into these categories, it must be decommissioned immediately. The penalty for prohibited practice violations is up to 35 million EUR or 7% of global annual turnover.

High Risk

High-risk AI systems are permitted but subject to extensive requirements including conformity assessments, risk management systems, data governance, technical documentation, human oversight, and post-market monitoring. Enforcement begins August 2026.

AI systems classified as high-risk include those used in:

  • Biometric identification and categorization (beyond the prohibited categories)
  • Critical infrastructure management and operation (energy, water, transport, digital infrastructure)
  • Education and vocational training — AI that determines access to education, evaluates learning outcomes, or monitors student behavior during assessments
  • Employment — AI for recruitment, screening, hiring decisions, task allocation, performance monitoring, or termination decisions
  • Essential services access — AI used for credit scoring, insurance pricing, emergency services dispatch, or public benefit eligibility
  • Law enforcement — risk assessment, evidence reliability, crime prediction (beyond prohibited predictive policing)
  • Migration and border control — risk assessment, document verification
  • Justice and democratic processes — AI assisting judicial authorities in researching and interpreting facts and law

What this means for Canadian companies: If you provide AI-powered HR tech, credit scoring, educational assessment, insurance underwriting, or infrastructure management tools used by EU customers, your systems are almost certainly high-risk. The compliance requirements are substantial — see the detailed checklist below.

Limited Risk

Systems that interact with people, generate content, or perform emotion detection (outside prohibited contexts) face transparency obligations. Users must be informed they are interacting with an AI system, and AI-generated content must be labeled as such.

Examples:

  • Chatbots and virtual assistants (must disclose AI nature to users)
  • AI-generated text, images, audio, and video (must carry a machine-readable label)
  • Emotion recognition systems (outside prohibited workplace/education contexts — must inform subjects)

What this means for Canadian companies: If your product includes a chatbot, AI content generation, or synthetic media features used by EU residents, you need transparency disclosures. This is the lowest compliance burden but is still legally required.
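To make the labeling obligation concrete, here is a minimal sketch of a machine-readable provenance label for AI-generated content. The JSON structure and field names are my own illustration — the Act requires machine readability, but the accepted formats (e.g. C2PA-style provenance metadata) will come from EU AI Office guidance and harmonized standards, not from this sketch.

```python
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model: str) -> str:
    """Wrap AI-generated text in a machine-readable provenance label.
    The field names here are illustrative, not a mandated format."""
    return json.dumps({
        "content": text,
        "ai_generated": True,       # the disclosure itself
        "generator": model,         # which system produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

# Example: label a generated summary before it leaves your API.
labeled = label_ai_content("Quarterly summary...", "example-model-v1")
```

The point is architectural: the label travels with the content, so downstream consumers (and regulators) can verify provenance without asking you.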

Minimal Risk

AI systems that don't fall into the above categories — spam filters, AI-powered search ranking, inventory optimization, most recommendation systems — face no mandatory requirements under the Act, though the EU encourages voluntary adoption of codes of conduct.
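If it helps to see the four-tier structure as code, here is a rough first-pass triage helper. The keyword sets are illustrative shorthand for the categories described above — real classification requires mapping each system against Article 5 and Annex III with counsel, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative shorthand only -- not a substitute for legal review
# against Article 5 and Annex III of the Act.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation",
                   "workplace_emotion_recognition", "face_scraping"}
HIGH_RISK_USES = {"recruitment", "credit_scoring", "exam_proctoring",
                  "insurance_pricing", "border_control"}
LIMITED_RISK_USES = {"chatbot", "content_generation", "deepfake"}

def triage(use_case: str) -> RiskTier:
    """First-pass triage of an AI use case into an AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("recruitment").value)   # high
print(triage("spam_filter").value)   # minimal
```

Even a crude helper like this is useful in Phase 1 (below): it forces you to enumerate use cases per system and flags the ones that need counsel's attention first.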

What Are the Specific Requirements for High-Risk AI Systems?

This is where the compliance burden is heaviest. If your AI system is classified as high-risk, you must implement all of the following before August 2, 2026:

Risk Management System (Article 9)

You must establish, implement, document, and maintain a continuous risk management system that:

  • Identifies and analyzes known and reasonably foreseeable risks
  • Estimates and evaluates risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse
  • Evaluates risks based on post-market monitoring data
  • Adopts suitable risk management measures

This is not a one-time assessment. It is a continuous process that must be documented and updated throughout the system's lifecycle.

Data Governance (Article 10)

Training, validation, and testing datasets must meet quality criteria including:

  • Appropriate data governance and management practices
  • Data that is relevant, representative, and, to the best extent possible, free of errors and complete
  • Consideration of the specific geographical, contextual, behavioral, or functional setting within which the system is intended to be used
  • Examination for possible biases that are likely to affect health and safety or lead to discrimination

For Canadian companies: if your AI system was trained on data that doesn't adequately represent EU populations, this is a compliance gap. The Act specifically requires that training data be representative of the deployment context.

Technical Documentation (Article 11)

You must prepare and maintain technical documentation that demonstrates compliance with all high-risk requirements. The documentation must include:

  • A general description of the AI system, its intended purpose, and the provider
  • A detailed description of the system's elements, development process, and architecture
  • Detailed information about monitoring, functioning, and control of the system
  • A description of the risk management system
  • A description of data governance measures
  • A description of the human oversight measures
  • Information about the system's performance, including accuracy, robustness, and cybersecurity metrics
  • A detailed description of the conformity assessment procedure followed

This documentation must be kept up to date and made available to market surveillance authorities upon request. For Canadian companies operating remotely from the EU, this means your documentation must be accessible to EU authorities — consider appointing an EU-based authorized representative.

Record-Keeping / Logging (Article 12)

High-risk systems must include automatic logging capabilities that record:

  • The period of each use
  • The reference database against which input data has been checked
  • Input data for which the search has led to a match
  • The identification of natural persons involved in the verification of results

Logs must be retained for a period appropriate to the intended purpose and applicable legal obligations (minimum six months).
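Here is a minimal sketch of what Article 12-style logging can look like in practice. The field names and append-only JSONL format are my own choices — the Act specifies what must be recorded, not how you store it.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class UsageRecord:
    """One log entry covering the Article 12 data points.
    Field names are illustrative, not mandated by the Act."""
    session_start: float      # start of the period of use (epoch seconds)
    session_end: float        # end of the period of use
    reference_database: str   # database the input data was checked against
    matched_inputs: list      # input data that led to a match
    verified_by: str          # natural person who verified the results

def append_log(record: UsageRecord, path: str = "ai_act_log.jsonl") -> None:
    """Append-only JSONL log; retain entries for at least six months."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

An append-only log is the right default here: market surveillance authorities will want evidence that records were not edited after the fact.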

Transparency and Information (Article 13)

High-risk systems must be designed and developed to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. Users must receive information about:

  • The provider's identity and contact details
  • The system's capabilities and limitations, including accuracy levels, foreseeable misuse scenarios, and intended purpose
  • Technical measures for human oversight
  • The expected lifetime of the system and maintenance measures

Human Oversight (Article 14)

High-risk systems must be designed and developed to be effectively overseen by humans during the period in which they are in use. Human oversight measures must:

  • Enable the human overseer to fully understand the AI system's capabilities and limitations
  • Enable the human overseer to correctly interpret the AI system's output
  • Enable the human overseer to decide not to use the AI system or to disregard, override, or reverse the output
  • Enable the human overseer to intervene on the operation of the system or interrupt the system through a "stop" button

This aligns with the human-in-the-loop architecture patterns that we've identified as critical for production AI systems. The EU AI Act effectively mandates what good engineering practice already recommends.
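As a sketch of what those four capabilities look like in code: a thin wrapper that keeps a human between the model's suggestion and the final decision, with a stop switch. The class and method names are illustrative, not drawn from the Act.

```python
class OversightGate:
    """Wraps a model call so a human overseer can interpret, override,
    or stop it -- a sketch of the Article 14 capabilities. The names
    here are my own illustration, not terminology from the Act."""

    def __init__(self, model):
        self.model = model
        self.stopped = False

    def stop(self) -> None:
        # The mandated "stop button": halts all further automated decisions.
        self.stopped = True

    def decide(self, inputs, human_review):
        if self.stopped:
            raise RuntimeError("system stopped by human overseer")
        suggestion = self.model(inputs)
        # The overseer sees both raw inputs and the model's suggestion,
        # and may accept, override, or reverse it -- the final call is theirs.
        return human_review(inputs, suggestion)

# Example: a toy scoring model; the reviewer accepts its suggestion.
gate = OversightGate(lambda x: "approve" if x["score"] > 0.8 else "reject")
decision = gate.decide({"score": 0.9}, human_review=lambda inp, s: s)
```

The design choice that matters: the human review step is structurally unavoidable, not an optional flag. Retrofitting this into a fully autonomous pipeline is the architectural change that takes months, which is why it appears in Phase 2 of the checklist below.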

Accuracy, Robustness, and Cybersecurity (Article 15)

High-risk systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, and perform consistently in these respects throughout their lifecycle. This includes:

  • Declared accuracy metrics in the instructions of use
  • Resilience against errors, faults, and inconsistencies
  • Resilience against adversarial manipulation (adversarial robustness)
  • Redundancy solutions including backup or fail-safe plans
  • Technical cybersecurity measures appropriate to the risk

Conformity Assessment (Article 43)

Before placing a high-risk system on the EU market, the provider must undergo a conformity assessment. For most high-risk AI systems, this is a self-assessment using internal controls. However, for certain high-risk systems (notably biometric identification), a third-party conformity assessment by a notified body is required.

What Penalties Do Canadian Companies Face?

The penalty structure is tiered by violation severity:

Violation Type | Maximum Penalty
Prohibited practices | 35 million EUR or 7% of global annual turnover
High-risk requirements | 15 million EUR or 3% of global annual turnover
Incorrect information to authorities | 7.5 million EUR or 1% of global annual turnover

For SMEs and startups, the Act specifies that the lower of the two amounts (fixed amount vs. turnover percentage) applies. But even the lower thresholds are significant.

Important for Canadian companies: The penalty is calculated on global annual turnover, not EU revenue — and the maximum is the higher of the fixed cap and the turnover percentage. A Canadian company with $100M CAD in global revenue (roughly 68 million EUR) and $2M in EU-attributed revenue faces a maximum penalty for high-risk non-compliance of 15 million EUR, roughly $22M CAD, because 3% of its turnover falls below the fixed cap — an exposure that dwarfs its EU revenue many times over. This is the same penalty calculus that made GDPR fines against non-EU companies so attention-getting.
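The "whichever is higher" calculus is easy to get wrong, so here it is as a few lines of Python. The tier names are my own shorthand for the Article 99 categories; note that for SMEs the Act applies the lower of the two amounts instead.

```python
def max_penalty_eur(global_turnover_eur: float, tier: str) -> float:
    """Maximum administrative fine under Article 99: the higher of the
    fixed cap and the turnover percentage. (For SMEs and startups the
    lower of the two applies -- not modeled here.)"""
    caps = {
        "prohibited":     (35_000_000, 0.07),
        "high_risk":      (15_000_000, 0.03),
        "incorrect_info": (7_500_000, 0.01),
    }
    fixed_cap, pct = caps[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# For a 68M EUR turnover (~$100M CAD): 3% is ~2M EUR, so the
# 15M EUR fixed cap is the binding maximum for high-risk violations.
exposure = max_penalty_eur(68_000_000, "high_risk")
```

The takeaway: for mid-market companies, the fixed caps usually bind, so exposure is effectively flat regardless of how small your EU footprint is.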

"The EU AI Act penalty structure is designed to make non-compliance more expensive than compliance for any company at any scale. The 7% of global turnover cap for prohibited practices exceeds GDPR's 4% maximum. The message is clear: if you want access to the EU market, compliance is not optional."

How Does the EU AI Act Compare to Canada's AIDA?

Canada's Artificial Intelligence and Data Act (AIDA), introduced as Part 3 of Bill C-27 (the Digital Charter Implementation Act), has been in legislative limbo since its introduction in June 2022. As of March 2026, AIDA has not received Royal Assent and its final form remains uncertain following the change in government leadership.

Here's how the two frameworks compare on key dimensions:

Dimension | EU AI Act | AIDA (as proposed)
Status | In force (staggered enforcement) | Stalled in legislative process
Approach | Prescriptive, risk-based classification | Principles-based, delegated to regulations
Risk tiers | Four tiers (unacceptable, high, limited, minimal) | Two tiers (high-impact, general)
Scope | All AI systems with EU market/output reach | "High-impact" systems (to be defined by regulation)
Documentation | Detailed technical documentation requirements in the Act | Requirements to be specified in future regulations
Penalties | Up to 35M EUR / 7% global turnover | Up to $25M CAD / 5% global revenue
Extraterritorial | Yes, broad | Yes, but narrower
Conformity assessment | Self-assessment or third-party (depending on category) | Not specified in the Act (delegated)
Enforcement body | EU AI Office + national authorities | AI and Data Commissioner (proposed)

The key difference: the EU AI Act is prescriptive. It tells you exactly what you need to do, with specific technical requirements spelled out in the regulation and its annexes. AIDA, as proposed, is a framework act that delegates most of the specific requirements to future regulations that have not yet been drafted. Canadian companies cannot wait for AIDA to provide domestic compliance guidance — the EU requirements are concrete and enforceable now.

Note: There is growing discussion among Canadian trade policy analysts that AIDA, if and when it passes, will be intentionally aligned with the EU AI Act to facilitate an adequacy or mutual recognition arrangement — similar to the GDPR adequacy decisions for international data transfers. If this happens, investing in EU AI Act compliance now may serve as a head start on Canadian compliance as well. But this is speculative. Make compliance decisions based on current law, not anticipated harmonization.

What Does Compliance Cost vs. What Does Non-Compliance Cost?

Let me put concrete numbers on this, because I've seen too many companies defer compliance based on a vague sense that "it'll be expensive" without comparing it to the alternative.

Cost of Compliance

Based on conversations with compliance consultants, legal firms, and companies that have completed early EU AI Act compliance programs:

  • Initial compliance assessment (gap analysis, system inventory, risk classification): $50K-$150K for a mid-market company with 3-10 AI systems
  • Technical documentation preparation per high-risk system: $30K-$80K (depending on system complexity)
  • Risk management system implementation: $40K-$100K
  • Data governance audit and remediation: $25K-$75K per system
  • Conformity assessment (self-assessment with legal review): $20K-$50K per system
  • Ongoing compliance (monitoring, documentation updates, re-assessments): $50K-$150K annually
  • EU authorized representative (required for non-EU providers of high-risk systems): $15K-$40K annually
  • Legal counsel (EU AI Act specialist): $30K-$80K for initial setup, $10K-$30K ongoing

Total estimated first-year compliance cost: $200K-$600K for a mid-market Canadian company with 3-5 high-risk AI systems. Ongoing annual costs of $75K-$200K.

Cost of Non-Compliance

  • Regulatory penalties: Up to 3% of global annual turnover for high-risk non-compliance. For a $50M revenue company, that's $1.5M.
  • Market access loss: If your AI system is found non-compliant, EU market surveillance authorities can order its withdrawal from the EU market. For companies with significant EU revenue, this is potentially catastrophic.
  • Contractual exposure: EU enterprise customers are increasingly including AI Act compliance warranties in procurement contracts. Non-compliance creates breach-of-contract liability in addition to regulatory risk.
  • Reputational damage: An EU enforcement action becomes public record. In B2B markets, a compliance failure can trigger customer audits and contract reviews across your entire portfolio.
  • Retroactive remediation: Achieving compliance under enforcement pressure is significantly more expensive than proactive compliance. Emergency legal counsel, accelerated technical remediation, and regulatory engagement under adversarial conditions can easily cost 3-5x the proactive compliance estimate.

The math is straightforward: for most Canadian companies with meaningful EU exposure, the expected cost of non-compliance substantially exceeds the cost of compliance. This is by design. The Act's penalty structure is calibrated to make compliance the economically rational choice.

Step-by-Step Compliance Checklist for Canadian Companies

Here is the compliance program I'd recommend, organized by priority and timeline.

Phase 1: Assessment (Start Immediately — Target Completion: April 2026)

1. Inventory your AI systems. Document every AI system your organization develops, deploys, or distributes. Include internal systems and customer-facing products. For each system, record: what it does, what data it uses, who it affects, and whether its output reaches EU residents.

2. Determine EU market exposure. For each AI system, assess whether it is "placed on the EU market" or its "output is used in the Union." If you have EU customers, EU users, or EU-resident data subjects affected by your AI systems, assume you are in scope.

3. Classify risk tier for each system. Map each AI system to the Act's risk categories: prohibited, high-risk, limited risk, or minimal risk. Use Annex III of the Act as your reference for high-risk classification. When in doubt, classify conservatively (higher risk).

4. Screen for prohibited practices. Review all AI systems against the prohibited practices list. If any system falls into a prohibited category, it must be decommissioned for EU-facing use immediately. This is not a future obligation — it is already enforceable.

5. Engage EU regulatory counsel. Retain a law firm with specific EU AI Act expertise. Generic data privacy counsel is not sufficient — the AI Act has its own compliance framework with distinct requirements. Canadian law firms with EU AI Act practices include major firms in Toronto, Montreal, and Vancouver that have established EU regulatory partnerships.
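For step 1, a simple structured inventory goes a long way. Here is a sketch of the record I'd keep per system — the field names are illustrative, mirroring the questions listed in step 1.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One row of the Phase 1 inventory (illustrative field names)."""
    name: str
    purpose: str              # what it does
    data_sources: list        # what data it uses
    affected_groups: list     # who it affects
    eu_output: bool           # does its output reach EU residents?
    risk_tier: str = "unclassified"

inventory = [
    AISystem("resume-screener", "ranks job applicants",
             ["applicant CVs"], ["job candidates"],
             eu_output=True, risk_tier="high"),
    AISystem("spam-filter", "filters inbound email",
             ["email metadata"], ["employees"],
             eu_output=False, risk_tier="minimal"),
]

# Step 2 in one line: anything whose output reaches the EU is in scope.
in_scope = [s for s in inventory if s.eu_output]
```

A spreadsheet works just as well; what matters is that every system answers the same five questions, so scope and risk decisions are auditable later.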

Phase 2: Gap Analysis and Remediation Planning (April - May 2026)

6. Conduct gap analysis for each high-risk system. For each high-risk AI system, assess current compliance against each Article 9-15 requirement (risk management, data governance, documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity). Document gaps and estimate remediation effort.

7. Establish a risk management system. Implement the continuous risk management process required by Article 9. This should integrate with your existing product development lifecycle. Document the methodology, assign responsibility, and establish review cadence.

8. Audit training data governance. For each high-risk system, evaluate training, validation, and testing data against Article 10 requirements. Key questions: Is the data representative of EU populations? Have you examined it for biases? Is the data provenance documented? Are there gaps in representativeness for specific EU demographic or geographic contexts?

9. Assess human oversight mechanisms. For each high-risk system, verify that the Article 14 human oversight requirements are met. Can a human overseer understand the system's output, override it, and stop the system? If your system operates autonomously without human oversight capabilities, this is a compliance gap that requires architectural changes — and architectural changes take time. Start now.

Phase 3: Implementation (May - July 2026)

10. Prepare technical documentation. For each high-risk system, prepare the comprehensive technical documentation required by Article 11 and Annex IV. This is the most time-consuming compliance requirement. It requires detailed description of system architecture, training methodology, performance metrics, risk management measures, and testing results. Allocate at minimum four to six weeks per system.

11. Implement logging capabilities. Ensure each high-risk system meets Article 12 automatic logging requirements. If your system doesn't currently log the required data points, implement logging infrastructure. This may require application-level changes.

12. Implement transparency requirements. For high-risk systems: prepare deployer-facing documentation meeting Article 13 requirements. For limited-risk systems: implement user-facing AI disclosures (chatbot disclosure, AI-generated content labeling).

13. Appoint an EU authorized representative. Article 22 requires non-EU providers of high-risk AI systems to appoint an authorized representative established in the EU before placing the system on the EU market. This representative acts as your point of contact for EU authorities. Engage an authorized representative service or establish the role through an EU-based legal entity.

14. Conduct conformity assessment. For most high-risk AI systems, conduct a self-assessment using the internal control procedure (Annex VI of the Act). For biometric identification systems, engage a notified body for third-party assessment. Document the assessment and prepare the EU Declaration of Conformity.

Phase 4: Ongoing Compliance (August 2026 Onward)

15. Establish post-market monitoring. Article 72 requires providers of high-risk systems to establish and document a post-market monitoring system to collect, analyze, and evaluate data on the system's performance throughout its lifetime. Integrate this with your existing product analytics and incident management processes.

16. Implement serious incident reporting. Article 73 requires providers to report serious incidents (defined as incidents leading to death, serious damage to health, property, or the environment, or serious and irreversible disruption of critical infrastructure management) to the market surveillance authority within 15 days. Establish an incident classification and reporting workflow.

17. Maintain documentation currency. Technical documentation, risk assessments, and conformity assessments must be kept up to date. Establish a review cycle (at minimum annually, and upon any significant system change) and assign ownership.

18. Monitor regulatory guidance. The EU AI Office continues to publish implementation guidance, codes of practice, and harmonized standards. Monitor these publications and adjust your compliance program accordingly. The AI governance landscape is evolving rapidly — regulatory guidance published in 2026 may significantly clarify ambiguous requirements.
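For step 16, the triage logic is simple enough to encode directly in your incident workflow. A sketch with paraphrased trigger categories — and a caveat: the Act sets shorter deadlines than 15 days for certain incident types (e.g. critical infrastructure disruption), so treat 15 days as the outer bound, not the target.

```python
from datetime import date, timedelta

# Paraphrased Article 73 "serious incident" triggers -- check the
# Act's exact definitions before relying on this in production.
SERIOUS_OUTCOMES = {
    "death", "serious_health_damage", "serious_property_damage",
    "serious_environment_damage", "critical_infrastructure_disruption",
}

def reporting_deadline(incident_date: date, outcome: str):
    """Return the outer Article 73 reporting deadline (15 days from
    awareness), or None if the incident is not 'serious' under the Act.
    Some incident types carry shorter deadlines -- not modeled here."""
    if outcome not in SERIOUS_OUTCOMES:
        return None
    return incident_date + timedelta(days=15)
```

Wiring this into your existing incident management tooling means the clock starts automatically the moment an incident is classified, rather than when someone remembers the obligation.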

Recommendation

What I'd Do

If you're a CEO: Make EU AI Act compliance a board-level agenda item this quarter. The August 2026 deadline for high-risk requirements is five months away, and compliance programs for complex AI systems take four to six months to implement. If you haven't started, you're cutting it close. Assign executive ownership, allocate budget ($200K-$600K for a mid-market company, more for enterprise), and engage EU regulatory counsel within the next 30 days. Don't treat this as a legal-team-only problem — it requires engineering, product, and legal coordination. The companies that approach AI Act compliance as a cross-functional program will spend less and move faster than those that treat it as a legal checkbox exercise.

If you're a CTO: Start with the AI system inventory and risk classification. You probably have more AI systems in scope than you think — automated decision-making features, recommendation engines, content moderation tools, and AI-powered analytics can all fall into high-risk or limited-risk categories depending on their application context. The technical documentation requirement (Article 11) is the highest-effort engineering task. Assign a senior engineer to each high-risk system and start documentation now. And prioritize human oversight mechanisms — if your AI systems don't currently support human override and intervention, adding these capabilities requires architectural changes that cannot be rushed. The incremental autonomy architecture that works for production AI agents also happens to satisfy the EU AI Act's human oversight requirements. This is not a coincidence.

If you're a founder (pre-Series B): Don't panic, but don't ignore this. If you have EU customers or plan to, build compliance into your product architecture from the start. Implement logging, document your training data governance, and build human oversight capabilities as product features rather than compliance afterthoughts. The marginal cost of building compliance-aware architecture from the start is a fraction of retrofitting it later. For limited-risk systems (chatbots, content generation), implement AI disclosure early — it's low effort and avoids the most common enforcement trigger. For high-risk systems, consider whether the EU market justifies the compliance investment at your current stage. If EU revenue is less than 5% of total revenue, you might rationally defer full compliance until you've raised enough capital to fund the program — but get legal advice on the risk of operating non-compliantly in the interim.

If you're evaluating AI automation for your business: Factor EU AI Act compliance into your build-vs-buy decisions. If you're building AI systems in-house, you bear the full compliance burden as a provider. If you're deploying commercial AI systems, you bear deployer obligations (which are lighter, but still real — particularly around human oversight and use monitoring). Ask your AI vendors for their EU AI Act compliance documentation. If they can't provide it, that's a red flag — and a contractual risk you don't want to inherit.

Sources

  1. "Regulation (EU) 2024/1689 — Artificial Intelligence Act," Official Journal of the European Union, eur-lex.europa.eu/eli/reg/2024/1689/oj (August 2024)
  2. "AI Act Implementation Timeline," EU AI Office, digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed March 2026)
  3. "General-Purpose AI Code of Practice — First Draft," EU AI Office, digital-strategy.ec.europa.eu/en/library/code-practice-general-purpose-ai (January 2026)
  4. "Bill C-27 — Digital Charter Implementation Act," Parliament of Canada, parl.ca/legisinfo/en/bill/44-1/c-27 (accessed March 2026)
  5. "The EU AI Act: A Guide for Non-EU Providers," Bird & Bird LLP, twobirds.com/en/insights/2025/eu-ai-act-non-eu-providers (2025)
  6. "AI Act Compliance Costs: Early Estimates and Industry Impact," Centre for European Policy Studies, ceps.eu/publications/ai-act-compliance-costs (2025)
  7. "Extraterritorial Application of the EU AI Act," Stanford HAI Policy Brief, hai.stanford.edu/policy-brief/eu-ai-act-extraterritorial (2025)
  8. "EU AI Act Penalties and Enforcement: What Companies Need to Know," DLA Piper, dlapiper.com/eu-ai-act-penalties (2025)

Need help implementing AI infrastructure for your organization? We help enterprises build, deploy, and optimize production AI systems. Learn about our AI consulting services.
