[Image: Neon sign with question mark]

Experts Examine Key Questions Around AI and Patient Safety

Summary

  • A focus group of patient safety experts believes that AI can improve patient safety through automation and optimized workflows if it is implemented with a quality- and safety-first mindset and not substituted for human clinical judgment.

In late 2024, members of IHI’s Innovation and Patient Safety leadership teams facilitated a focus group with health care safety leaders. They discussed a timely question: What are the challenges, lessons learned, and areas for continued research related to artificial intelligence (AI) for safety and quality measurement? The group arrived at the following themes and recommendations.

Safety is paramount 

Above all, the safety and well-being of patients must remain core to decisions about the use of AI. Generative AI for safety could save lives when deployed in service of clinical effectiveness and personalized care, such as identifying when a particular intervention might be a good fit for an individual patient. Real-time AI-supported automation (e.g., deterioration prediction tools) has the potential to improve patient safety by calling attention to precursor signals of changes in patient condition. Such automation would also support efficiency by reducing the need to delve into medical records to identify adverse events and associated patterns. Critically, psychological safety and the meaningful engagement of safety and quality colleagues and care team members are essential to stewarding the safe use of AI amid the realities of both benefits and risks.

Organizational decision-makers often prioritize making the business case for investment in AI. However, it is challenging to prove return on investment based on safety outcomes alone, as the cost savings from better outcomes take time to materialize.

Employ AI for unstructured data

The real lessons from safety events are often found in the unstructured data in medical records — provider and staff notes. Staff members, often nurses, spend substantial time combing through medical records to identify salient information. AI could support synthesizing — and then acting on — qualitative patient feedback in near real-time. 

In addition, employing AI for patient-facing communications might alleviate clinicians’ administrative burden, which could contribute to more empathetic exchanges with patients and subsequently improve patient experience. Recent research suggests that patients may be more satisfied with AI-generated messages.

That said, AI is not a replacement for human clinical judgment. When using AI for unstructured data, users should make sure that the model is well trained on what to look for in notes (e.g., medical keywords). Technology decision-makers should partner with direct care staff to learn where the pain points are in their daily work to deliver, document, and support care, and therefore where AI solutions would be most welcome.

Introduce new technology responsibly

AI champions and IT teams should collaborate with safety and quality teams to ensure that multiple dimensions of safety (e.g., information security and patient safety implications) are considered during the technology evaluation, adoption, and implementation process. Ideally, patient safety would be a forethought and central component of AI deployment. AI tools are only as effective as the supporting processes and workflows implemented around them. Methods like Failure Modes and Effects Analysis (FMEA) — essentially, asking what could go wrong, why it might happen, and what the consequences would be — can help teams proactively analyze a process for potential harm. For example, a team implementing a deterioration prediction tool might ask what happens if the model misses early warning signs, or if frequent false alarms lead clinicians to tune out its alerts.

Given the speed at which new technologies are being developed, implementation may outpace an organization’s ability to assess safety considerations and to adequately train and prepare users. It is important to consider the way that humans and technology interact — a concept known as the AI-human dyad — to understand the true safety implications of new technology.

Several points raised by focus group participants align with recommendations from an expert panel convened by the Lucian Leape Institute (LLI) and its ensuing report. For instance, the authors of the LLI report explore the concept of the AI-human dyad in depth, noting that the dyad is an imperfect means of avoiding errors that could produce harm. They also propose strategies to minimize risks introduced by new AI technologies, such as implementing practices to ensure that humans reviewing AI-generated outputs remain alert to potential errors and systemic gaps.

As AI-powered tools for patient safety proliferate, health system leaders and AI governance committee members will need to continue refining their strategies to maximize benefits and to identify and monitor risks.

Marina Renton, MPhil, is an IHI Senior Research Associate. Jeff Rakover, MPP, is an IHI Director. Patricia McGaffigan, RN, MS, CPPS, is an IHI Senior Advisor.

Photo by Emily Morter on Unsplash

You might also be interested in: 

A mainstage session at the 2025 IHI Patient Safety Congress, "Transformative Ideas: AI in Health Care Safety"
