How an AI Transcription Tool Triggered a Privacy Breach

A hospital experienced a privacy breach when an unapproved AI transcription tool automatically joined a virtual medical meeting through a former physician’s personal calendar and shared sensitive patient information. The incident highlights how easily AI tools can create privacy risks and underscores the importance of clear AI policies, staff training, technical safeguards, careful meeting controls, and strong offboarding processes to prevent similar breaches.

Can AI breach privacy? Once it is set loose, even simple forms of agentic AI can quickly blunder into an inadvertent privacy breach. A recent decision of the Office of the Information and Privacy Commissioner of Ontario (IPC) serves as a case study for how easily this happens, along with some instructive commentary on how to prevent it.

In September 2024, a group of physicians scheduled a virtual hepatology rounds meeting. The meeting was attended by hospital physicians, but one of the physicians on the meeting invite no longer worked at the hospital. This physician also held an account for an AI-powered transcription tool, Otter.ai (the “Tool”). The Tool uses artificial intelligence to access meeting invitations stored in digital calendars, attend the meeting, transcribe spoken words into text, generate detailed meeting notes and summaries, and then email those notes to meeting attendees.

The Tool was not approved for use at the hospital.

Although the physician in question had not worked at the hospital for over a year at the time of the meeting, he retained access to the meeting invite in his personal, non-work-related digital calendar after leaving. The transcription Tool was able to access the rounds meeting invite through that personal calendar.

The Tool joined the rounds meeting, listened in, generated detailed meeting notes, and then automatically disseminated those notes, which contained the personal health information of seven patients, including patient names, genders, the treating physician’s name, diagnoses, medical notes, and treatment information.

Once the breach was discovered, the hospital triggered its breach notification protocol and filed a mandatory report with the IPC, which prompted an investigation.

Key Takeaways

This single incident provides a useful case study on the privacy risks created by AI tools. Based on the actions of the hospital and the recommendations of the IPC, we can summarize several important steps to prevent and mitigate these risks:

  • Updated training: After reviewing the implications, the hospital updated its privacy training materials to explicitly address AI scribe and transcription tools.
  • Updated policies: The hospital’s Appropriate Use of Information and Information Technology policy was revised following this incident to address the use of AI tools.
  • Firewalls: The hospital decided to block access to AI scribe tools, such as Otter.ai, from its on-site network through firewall configuration.
  • Review meeting participants: The hospital recommended that physicians routinely review meeting participant lists for any unapproved AI tools or automated agents and remove them from meetings before proceeding or before discussing any personal health information.
  • Mandatory meeting lobbies: Another option is to technically enforce the use of a lobby for all virtual meetings in which health information is discussed, requiring the host to manually approve each participant.
  • Containment: The IPC recommended making a formal request to Otter.ai to delete any personal health information of hospital patients retained from the meeting in question.
  • Offboarding audit: The Commissioner’s recommendations also included an audit of the hospital’s employee and physician offboarding process to verify that proper procedures are in place to ensure all access to hospital information systems, including access to calendar invites, is immediately revoked upon departure.
  • AI governance and accountability framework: The IPC also recommended a review of the hospital’s framework for the procurement, implementation, and use of AI scribes to ensure compliance with best practices.
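The offboarding-audit and participant-review steps above amount to a simple cross-check: does anyone on a meeting invite no longer hold an active appointment? The sketch below illustrates that check in Python. It is purely illustrative — the data structures and the function name are hypothetical, not part of any hospital system or the IPC decision; in practice the records would come from HR and calendar-platform sources.

```python
from datetime import date

# Hypothetical HR record: email -> departure date.
# In a real audit this would be pulled from the HR system of record.
departed = {"dr.smith@hospital.example": date(2023, 8, 1)}

def flag_stale_invitees(invitees, departed, meeting_date):
    """Return invitees whose departure predates the meeting date,
    i.e. people (and any tools tied to their calendars) whose access
    should already have been revoked."""
    return [
        addr for addr in invitees
        if addr in departed and departed[addr] < meeting_date
    ]

# Hypothetical recurring-invite participant list.
invitees = ["dr.jones@hospital.example", "dr.smith@hospital.example"]
stale = flag_stale_invitees(invitees, departed, date(2024, 9, 15))
# 'stale' lists the former physician still on the recurring invite
```

A recurring invite is the key detail: in this incident the meeting series outlived the physician's employment, so an automated audit would need to sweep standing invites, not just new ones.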

Contact Richard Stobbe or any member of our Artificial Intelligence Group for a review of AI governance, privacy practices, privacy issues, and related AI legal issues.

Link to decision: https://www.ipc.on.ca/en/decisions/informal-resolution-high-profile-breaches/hospital-privacy-breach-involving-ai-scribes-tool