Artificial intelligence in the clinical laboratory has great promise, but still relies on the human touch to keep diagnoses on track.

By Chris Wolski

Consider: Diagnostic errors constitute up to 60% of all healthcare-related errors annually, resulting in 80,000 deaths.1 With 70% of healthcare decisions made on the basis of clinical tests, that is a sobering statistic.

But for Asher Lohman, VP of Data and Analytics at technology consultancy Trace3, that's where artificial intelligence can play a critical role.

“We’re seeing how AI is helping to provide clinical decision support,” he says. “It’s collecting data, aiding evaluation, and having the analysis completed in a more timely way.”

For patients, this means that care can be provided earlier, and often more successfully. For labs, the benefit is completing more tests even as volume increases dramatically. Certainly, a win-win for everyone. 

Lohman also sees significant benefits in using AI for telehealth, where it can extend access to testing and other health services to rural and other underserved communities.

“It’s eliminating some barriers and creating a safe space for patients at the same time,” he observes.

But while artificial intelligence has clear benefits in handling large amounts of data and supporting the decision-making process, that doesn't mean the human element is taken out of the equation. Quite the contrary, according to Lohman.

Artificial Intelligence, Data Flattening, and Data Drift

According to Lohman, artificial intelligence solutions are susceptible to data flattening and data drift. Neither is a new issue, but both need to be considered and carefully evaluated when interacting with an AI solution.

Studies have shown that, far from being a godlike panacea, AI has been responsible for diagnostic errors.2

There are generally two causes for these errors, according to Lohman.

“They’re either caused by anomalies in the data the solution is given or the way it’s learning from humans and having that learning reinforced,” he explains.

Anomalies can be caused by something as simple as bad data coming from a degraded biosensor, which leads to a phenomenon known as data poisoning. Less obviously, changes in regulations can also cause errors: if the solution hasn't been given the newest version of a regulatory document, it doesn't know what to do, leading to a diagnostic or analytical error.
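Labs don't have to catch this kind of drift by eye. As a minimal sketch, and not any particular vendor's method, the idea is to continuously compare recent sensor readings against a validated baseline distribution and route anomalies to a human for review. The Python example below uses a two-sample Kolmogorov-Smirnov test (scipy.stats.ks_2samp); the significance threshold, window sizes, and glucose scenario are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative threshold: flag drift when the two distributions differ
# with p < 0.01. A real lab would tune this against validated QC data.
DRIFT_P_THRESHOLD = 0.01

def has_drifted(baseline: np.ndarray, recent: np.ndarray) -> bool:
    """Return True when a two-sample Kolmogorov-Smirnov test suggests
    the recent readings no longer come from the baseline distribution,
    e.g., because the biosensor has degraded."""
    statistic, p_value = ks_2samp(baseline, recent)
    return p_value < DRIFT_P_THRESHOLD

# Hypothetical example: in-control glucose readings vs. a recent
# window where the sensor has started reading high.
rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=100.0, scale=5.0, size=1000)  # mg/dL
recent = rng.normal(loc=112.0, scale=5.0, size=200)     # drifted sensor

if has_drifted(baseline, recent):
    print("Possible sensor degradation: route results for human review.")
```

The point of the sketch is the workflow, not the statistics: the check runs automatically, but a flagged result goes to a person rather than silently feeding a model.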

At this point, what should clinical labs do? One answer is to have the right perspective on artificial intelligence and its capability.

“The way AI is in use today is the toddler phase,” says Lohman. “It’s more designed to augment and not replace.”

Lohman adds that in the near term he sees AI following a use-case model, with artificial intelligence solutions designed and trained for specific tasks, such as counting cells on a morphology slide.
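As a toy illustration of that kind of narrow, task-specific tool, and not Lohman's or any vendor's actual pipeline, a few lines of Python with scikit-image can count bright, roughly circular cells in a microscopy image using Laplacian-of-Gaussian blob detection. The bundled human_mitosis sample image (scikit-image 0.18 or later) and the detection parameters are assumptions of the sketch.

```python
from skimage import data, exposure
from skimage.feature import blob_log

# A bundled fluorescence microscopy image of human cells (bright
# nuclei on a dark background), standing in for a stained slide.
image = data.human_mitosis()

# Normalize intensities to [0, 1] so the detection threshold is stable.
image = exposure.rescale_intensity(image, out_range=(0.0, 1.0))

# Laplacian-of-Gaussian blob detection finds roughly circular bright
# regions; the sigma bounds and threshold here are illustrative and
# would be tuned per stain, magnification, and cell type.
blobs = blob_log(image, min_sigma=3, max_sigma=10, threshold=0.1)

print(f"Detected approximately {len(blobs)} cells")
```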

Does this mean that artificial intelligence is fundamentally limited? For Lohman, the answer is clearly "no," but the technology has some way to go before the human element can be removed.

Chris Wolski is chief editor of CLP.

References

  1. Dilmegani C. Top 6 Challenges of AI in Healthcare & Overcoming Them in 2024. AIMultiple. June 12, 2023. https://research.aimultiple.com/challenges-of-ai-in-healthcare/
  2. Hall KK, Fitall E. Artificial Intelligence and Diagnostic Errors. PSNet. January 31, 2020. https://psnet.ahrq.gov/perspective/artificial-intelligence-and-diagnostic-errors