Dear Sir/Madam:
The Association for Diagnostics & Laboratory Medicine (ADLM) welcomes the opportunity to provide input to the Department of Health and Human Services (HHS) regarding its December 23, 2025, request for information on how to adopt and accelerate the use of artificial intelligence (AI) in clinical care. As the leading association in diagnostic testing and technology, we offer the following suggestions:
Interoperability refers to the capacity of disparate laboratory and clinical data to be exchanged, understood, and used accurately across different care settings. One of the leading barriers to the use of AI in these settings is the lack of harmonization among laboratory test results. AI systems that use laboratory data often assume that test results are comparable across testing sites and methods; in practice, this is frequently not the case. Different testing platforms can produce different numeric results for the same analyte. While each value may be accurate within its own measurement system, numeric differences between measurement systems can:
The lack of harmonized clinical laboratory test results can lead to misdiagnosis, improper treatment decisions, and increased healthcare costs. While Congress has provided the Centers for Disease Control and Prevention (CDC) with $2 million annually since fiscal year (FY) 2018 to work on this issue, this amount is insufficient. ADLM urges HHS to provide additional funding in the FY 2027 budget to harmonize laboratory test results.
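To illustrate the harmonization problem described above, consider the following minimal sketch. All analyte values, the decision threshold, and the recalibration slope and intercept are hypothetical, chosen only to show how the same specimen can trigger different conclusions from an automated decision rule until results are mapped to a common reference scale.

```python
# Illustrative only: all values, the threshold, and the recalibration
# factors below are hypothetical.

# Results (ng/mL) reported for the same specimen by two different platforms.
result_method_a = 4.2
result_method_b = 5.1

DECISION_THRESHOLD = 4.5  # hypothetical cutoff used by an automated decision rule


def flag(value: float) -> str:
    """Apply a fixed decision threshold to a laboratory result."""
    return "elevated" if value >= DECISION_THRESHOLD else "within range"


# Same specimen, different conclusions:
print(flag(result_method_a))  # -> within range
print(flag(result_method_b))  # -> elevated


def harmonize(value: float, slope: float, intercept: float) -> float:
    """Map a method-specific result onto a common reference scale."""
    return slope * value + intercept


# After mapping method A onto the hypothetical reference scale, both
# results lead to the same conclusion.
print(flag(harmonize(result_method_a, 1.10, 0.15)))  # -> elevated
```

In a harmonized system, the mapping to a common reference scale is established and maintained by the measurement community rather than improvised by each AI developer.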
One of the biggest challenges facing AI in healthcare is the need to ensure that the data used to make clinical decisions are representative of the patient population. In laboratory medicine, two categories of bias-related risk are especially important:
A variety of strategies should be considered to mitigate these issues, such as:
Implementing such measures would help ensure that laboratory-based AI promotes equity, fairness, and patient trust.
Under the Clinical Laboratory Improvement Amendments (CLIA) regulations, clinical laboratories must validate new test systems before use and conduct ongoing quality monitoring—including daily quality control (QC), proficiency testing, and trend analysis—to ensure that performance remains within acceptable limits over time. AI systems that rely on or influence laboratory data introduce analogous risks of performance degradation. Notably, models that continuously update or “learn” from new inputs can drift in accuracy, calibration, or bias over time.
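One minimal sketch of what ongoing AI monitoring could look like, modeled loosely on laboratory QC trending, is shown below. The statistic, the scores, and the alert limit are hypothetical and are not drawn from any specific regulatory requirement.

```python
"""Minimal sketch of drift monitoring for a deployed model, loosely analogous
to laboratory QC trending. All scores and limits are hypothetical."""
from statistics import mean, stdev


def z_shift(baseline: list[float], recent: list[float]) -> float:
    """Standardized shift of the recent mean relative to the baseline
    distribution (a simple, QC-style drift statistic)."""
    return (mean(recent) - mean(baseline)) / stdev(baseline)


# Hypothetical mean risk scores from the model's validation period (baseline)
# and from recent production use.
baseline_scores = [0.31, 0.29, 0.33, 0.30, 0.32, 0.28, 0.31, 0.30]
recent_scores = [0.38, 0.41, 0.39, 0.40]

ALERT_LIMIT = 2.0  # hypothetical limit, akin to a 2-SD QC rule

shift = z_shift(baseline_scores, recent_scores)
if abs(shift) > ALERT_LIMIT:
    print(f"Drift alert: output shifted {shift:.1f} SD from baseline; revalidate.")
```

More sophisticated approaches exist, but even a simple trend check of model outputs against the validation baseline can surface drift before it affects patient care.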
Additionally, even if AI models are kept constant, changes in analytical instrument performance or patient characteristics over time may degrade their ability to produce accurate diagnostic results. To verify and monitor AI performance effectively, laboratory professionals need sufficient access to the relevant data and model information from developers. Without such transparency, independent evaluation of a system’s performance is difficult, undermining ongoing quality assurance efforts. To address these concerns, several policy needs should be considered:
Oversight should correspond to the level of risk and potential impact an AI tool may have on patient outcomes. Diagnostic applications that rely heavily on laboratory data and drive decisions with high potential for patient harm should undergo robust validation, provide transparency into their data and design, and incorporate continuous monitoring. Lower-risk tools may warrant a more streamlined oversight process consistent with their reduced potential for patient harm.
Inappropriate test ordering protocols cost the US healthcare system up to $200 billion annually. The downstream effects of misutilization can be serious for patients, ranging from missed diagnoses due to missed testing opportunities on one end of the spectrum to follow-up visits for additional testing and unnecessary treatments that cause serious complications and injuries on the other. Many providers struggle to keep pace with the best evidence in laboratory medicine as test menus grow and become more specialized. Analytics can help address this growing problem by monitoring test ordering patterns to improve adherence to guidelines, providing real-time best practice alerts, and automatically canceling duplicative, obsolete, and “look-alike” test orders. Analyzing both laboratory and administrative or financial data may uncover hidden utilization patterns and further reduce costs.
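As one illustration of the kind of analytics described above, the sketch below flags a duplicative order placed within a minimum retest interval. The test names and intervals are hypothetical examples, not clinical recommendations.

```python
"""Sketch of a duplicate-order check of the kind described above.
Test names and minimum retest intervals are hypothetical examples."""
from datetime import datetime, timedelta

# Hypothetical minimum retest intervals, not clinical recommendations.
MIN_RETEST_INTERVAL = {
    "HbA1c": timedelta(days=90),
    "Lipid panel": timedelta(days=28),
}


def is_duplicate(test: str, ordered_at: datetime, last_resulted: datetime | None) -> bool:
    """Flag an order as duplicative when the same test was last resulted
    within its minimum retest interval."""
    if last_resulted is None or test not in MIN_RETEST_INTERVAL:
        return False
    return ordered_at - last_resulted < MIN_RETEST_INTERVAL[test]


# An HbA1c ordered 30 days after the previous result would be flagged.
print(is_duplicate("HbA1c", datetime(2025, 6, 1), datetime(2025, 5, 2)))  # -> True
```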
ADLM is a global scientific and medical professional organization dedicated to clinical laboratory science and its application to healthcare. ADLM brings together clinical laboratory professionals, physicians, research scientists, and business leaders from around the world focused on clinical chemistry, molecular diagnostics, mass spectrometry, translational medicine, lab management, and other areas of clinical laboratory science to advance healthcare collaboration, knowledge, expertise, and innovation.
We look forward to working with you on this important issue. If you have any questions, please email Vince Stine, PhD, ADLM’s Senior Director of Government and Global Affairs, at [email protected], or Evan Fortman, MPA, ADLM’s Manager of Government Affairs, at [email protected].
Paul J. Jannetto, Ph.D., DABCC, FAACC
President, ADLM