The use of augmented intelligence (AI)—often called artificial intelligence—in health care is rapidly evolving, but experts say its growth and adoption could stall if physicians aren’t told what the technology is doing and how it’s doing it, or if they are unable to explain these functions to their patients.
“We can build all the slick technologies, solving the right problems, giving them the right validations, taking care of the risk, but if there is still no education on the physician side or on the patient side, there will never be trust,” said Sunil Dadlani, chief information and digital officer, and chief information security officer for the Atlantic Health System in Morristown, New Jersey.
Atlantic Health is a member of the AMA Health System Program, which provides enterprise solutions to equip leadership, physicians and care teams with resources to help drive the future of medicine. Find out how Atlantic Health uses AI to ease prior authorization burdens and expedite radiology care.
“What will be the outcome if there is no trust?” Dadlani asked. “The technology, the solution will not be adopted.”
Dadlani spoke at the Healthcare Information and Management Systems Society’s Global Health Conference in Orlando, Florida, where he was joined by AMA Trustee Alexander Ding, MD, MS, MBA.
Patient autonomy is non-negotiable
Dadlani warned against two bad outcomes if AI is not applied correctly: A decline in patient-centered care and an increase in burdens for physicians and staff.
Patient autonomy must be ensured, which can be done by maintaining informed consent or offering patients the choice of whether to receive AI-enabled care.
Goals such as patient outcomes, patient safety, access to care, and equitable care are “non-negotiable,” he said.
Adding to staff work burdens is also not acceptable, and preventing it requires “a clear understanding of end-to-end workflow,” Dadlani said.
“You might be solving a problem on the clinical side, but then you might create a problem on the administrative side where you have extensive support burden,” he explained.
“Or you might solve a problem on the operation side, but it creates extensive in-basket messages on the physician side, and you create more frustration and more burnout,” said Dadlani, who noted that his two children are physicians.
“If one physician leaves the organization because of high burnout, there is extensive cost,” he added. “That’s the last thing you want to do if you really want AI to be adopted.”
Regulation will follow failures
Both Dadlani and Dr. Ding predicted that more significant regulation of AI in health care will be coming as the Food and Drug Administration (FDA) and other federal agencies figure out whether to adjust existing regulatory frameworks or create new ones.
“There’s a significant amount of uncertainty when it comes to the direction of the regulatory system for AI,” Dr. Ding said. “The FDA’s current regulatory structure, which thinks about software as a medical device, just doesn’t fit the bill for AI.”
Because the technology is advancing so fast, Dadlani said early attempts to regulate health care AI may create rules that quickly become obsolete or stifle innovation, while stepping in too late could lead to adverse events as the technology goes unchecked.
Dadlani also predicted that early attempts to regulate AI in health care will be a response to failures of some organizations to implement the technology—but he added that these failures are how people learn and shouldn’t cause discouragement.
“We will see a multitude of use cases, a multitude of failures, because that’s how this technology is—there’s no straight path, you have to learn from iteration,” he explained. “You have to continuously refine, and you will fail. That’s the nature of the beast.”
The AMA has developed new advocacy principles that build on current AI policy. These new principles (PDF) address the development, deployment and use of health care AI, with particular emphasis on:
- Health care AI oversight.
- When and what to disclose to advance AI transparency.
- Generative AI policies and governance.
- Physician liability for use of AI-enabled technologies.
- AI data privacy and cybersecurity.
- Payer use of AI and automated decision-making systems.
Change moves at the speed of trust
“That patient-physician relationship is an incredibly important one to keep in mind and I think the foundation of that relationship is trust,” Dr. Ding said.
“Things that get in between that relationship can risk creating mistrust and distrust, which unfortunately, leads to less patient engagement and lower adherence with care plans—lessening the therapeutic relationship and unfortunately, worsened health outcomes,” he explained.
If not done right, AI can drive a “wedge of mistrust” in the patient-physician relationship, said Dr. Ding, a diagnostic and interventional radiologist with University of Louisville Health.
“I’ve seen AI algorithms that can ingest a piece of medical imaging and then spit out a diagnosis,” he said. “The diagnosis could be correct, but we found that in those sorts of systems, there’s not a lot of trust and as a result, there is not very much uptake. There’s not very much utilization.”
Other systems, however, provide radiologists with a “heat map” on the image that illustrates how the diagnosis was determined. The “explainability” of the algorithm builds trust, which, in turn, increases use, said Dr. Ding, who added that he uses AI every day in his practice.
“I consider it a tool that helps me be a better physician,” he said.
Learn more with the AMA about the emerging landscape of augmented intelligence in health care (PDF).