AI + Data Privacy: Navigating the Regulatory Landscape

AI and data privacy laws are evolving fast, especially in Canada, where the Artificial Intelligence and Data Act was shelved after the April 2025 election. Across Quebec, Ontario, the U.S., and the EU, new rules around automated decision-making and AI use are emerging. Risks tied to training data, real-time processing, non-compliance, and malicious use highlight the need for strong safeguards. Strategies like anonymization, governance frameworks, and adherence to laws like PIPEDA and Alberta's PIPA are critical. With measures like the EU AI Act and California's AI Transparency Act taking effect, staying compliant and ethical is more important than ever.

As artificial intelligence (AI) evolves, the intersection of AI technologies and data privacy continues to be a crucial area of focus for legislators, industry leaders, and privacy advocates. This article explores the current state of AI regulation in Canada and beyond, providing insights into the legislative developments, risks, and strategies for effective compliance.

Current State of AI Regulation in Canada

The proposed Artificial Intelligence and Data Act (AIDA) was introduced in Parliament as part of Bill C-27 in June 2022 and died on the Order Paper when Parliament was dissolved ahead of the April 2025 federal election. It aimed to establish a regulatory framework for AI systems, enforced by a new government official known as the "AI and Data Commissioner." AIDA was one component of a legislative package that also included the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act, collectively forming the Digital Charter Implementation Act. All of these legislative initiatives have been terminated; with the new session of Parliament convening on May 27, 2025, it will be up to federal lawmakers to decide on the approach to AI regulation at the federal level.

Automated Decision-Making (ADM) Regulations

Automated decision-making is a significant area of concern, and several jurisdictions have implemented regulations. At the federal level, the Directive on Automated Decision-Making, which came into effect in April 2019 and was updated in 2021, applies to federal government institutions, with specific exceptions. In Quebec, recent amendments to the Act respecting the protection of personal information in the private sector require organizations to inform individuals when decisions are based exclusively on automated processing. Ontario's Bill 149 and Bill 194 further address the use of AI in employment and public sector contexts, respectively.

In the U.S., jurisdictions including New York City (Local Law 144) and Illinois (AI Video Interview Act) have enacted specific regulations governing the use of ADM technologies in employment, recognizing the privacy implications of such systems.

Main Areas of Risk + Strategies for Mitigation

AI systems present several privacy-related risks, including:

  • Training Risks: Personal data used to train AI models poses significant privacy concerns.
  • Use Risks: AI systems that handle sensitive information in real time, as both input and output, can lead to breaches.
  • Compliance Risks: Non-adherence to regulatory requirements can result in legal repercussions.
  • Intentional Harm Risks: AI can be misused by threat actors to amplify any number of harms, including privacy breaches.

Strategies to mitigate these risks include implementing robust governance and education frameworks, employing techniques like anonymization and pseudonymization of personal data, and adhering to foundational privacy principles from laws such as Alberta's Personal Information Protection Act (PIPA) and, until something different arrives at the federal level, the Personal Information Protection and Electronic Documents Act (PIPEDA).
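
To illustrate what pseudonymization can look like in practice, below is a minimal Python sketch of a pre-ingestion step that tokenizes direct identifiers and drops highly sensitive fields before records enter an AI training set. The field names, drop list, and key handling are hypothetical assumptions for illustration, not a compliance recipe; note that keyed pseudonymization is reversible by anyone holding the key, so the output generally remains personal information under Canadian privacy law.

    import hmac
    import hashlib

    # ASSUMPTION: the key value, field names, and drop list below are
    # illustrative only. The key must be stored apart from the dataset
    # (e.g., in a key vault); whoever holds it can re-link tokens to
    # individuals, which is why this is pseudonymization, not anonymization.
    SECRET_KEY = b"example-key-rotate-me"

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a stable keyed token (HMAC-SHA256)."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

    def prepare_training_record(record: dict) -> dict:
        """Strip or tokenize personal data before a record enters a training set."""
        direct_identifiers = {"name", "email", "phone"}  # hypothetical: tokenize these
        drop_fields = {"sin", "date_of_birth"}           # hypothetical: remove outright
        cleaned = {}
        for field, value in record.items():
            if field in drop_fields:
                continue                                   # too sensitive to retain
            if field in direct_identifiers:
                cleaned[field] = pseudonymize(str(value))  # same input -> same token
            else:
                cleaned[field] = value                     # non-identifying data passes through
        return cleaned

    if __name__ == "__main__":
        raw = {"name": "Jane Doe", "email": "jane@example.com",
               "sin": "123-456-789", "purchase_category": "books"}
        print(prepare_training_record(raw))

In practice, a de-identification step like this would sit behind documented key-management controls and be assessed against the de-identification standard of the applicable statute.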

Guidance from Regulatory Bodies + Industry Standards

The Office of the Privacy Commissioner of Canada (OPC) has been proactive in its oversight role, including its investigation into OpenAI announced in April 2023 regarding compliance with Canadian privacy laws. The OPC also provides recommendations for PIPEDA reform and principles for responsible AI technologies.

Globally, initiatives like the European Union's AI Act (which entered into force on August 1, 2024 and becomes fully applicable on August 2, 2026) and California's AI Transparency Act (which comes into effect January 1, 2026) highlight the growing emphasis on transparency and accountability in AI development.

Conclusion

As AI technologies integrate deeper into various sectors, understanding and navigating the regulatory landscape becomes essential. Stakeholders must stay informed about legislative changes and adopt best practices to ensure that AI systems are developed and used responsibly and ethically.

If you need guidance on aligning your AI practices with evolving privacy laws or developing a compliant governance strategy, contact Richard Stobbe in Calgary, Vita Wensel in Edmonton, or any member of Field Law's Privacy + Data Management Team.

To gain a deeper understanding of the evolving regulatory landscape and how to build effective, forward-looking AI governance strategies, watch our complimentary webinar: AI Governance: Why Your Business Needs a Plan Now.

Authors
Richard Stobbe, Partner, Trademark Agent, CLP
rstobbe@fieldlaw.com