In the realm of clinical laboratories, where billions of tests are conducted annually, the specter of error looms large. Errors can not only raise costs but also impact patient care—which is why laboratories invest significant resources in quality control (QC) activities to mitigate them.
But traditional QC approaches often adopt a one-size-fits-all strategy that can itself drive unnecessary costs. Tuesday’s session, "Risk-Based Quality Control: Changing the Way We Do Quality Control," moderated by Joseph Rudolf, MD, proposed a flexible, risk-based alternative.
The session introduced a model known as Precision QC, which balances false-positive and false-negative risks to inform decisions for setting QC limits. The first part of the session highlighted the limitations of current QC approaches and introduced the concept of risk-based analysis. The second part delved into the application of the Precision QC model.
Rudolf opened the discussion with a thought experiment asking the audience to assess the risk of walking, riding a bicycle, or driving a motorcycle. The audience overwhelmingly chose the motorcycle as the highest risk scenario. He then updated the scenario with some context, positing that the motorcycle is operated at low speeds on a closed course and the walking is along the shoulder of a busy freeway at night. The audience quickly adjusted their response to select walking.
“Risk is an intuitive concept, but it needs to be informed by data,” explained Rudolf. Achieving quality involves making a tradeoff between the costs associated with two types of error: false positives and false negatives. Balancing these competing costs may allow laboratories to better manage overall patient risk and laboratory expense.
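To make that tradeoff concrete, the sketch below shows one way such a cost balance could be computed. It is a simplified illustration, not the presenters' Precision QC model, and every input (the shift probability, shift size, and both cost figures) is a hypothetical assumption. For each candidate control limit, it weighs the cost of falsely rejecting an in-control run against the cost of missing a systematic error, then picks the limit with the lowest expected cost.

```python
"""Illustrative sketch only: choosing a symmetric QC limit by balancing
false-positive and false-negative costs. All probabilities and dollar
figures are made-up assumptions, not values from the session."""
import numpy as np
from scipy.stats import norm

# Hypothetical inputs
P_SHIFT = 0.02      # prior probability that a run carries a systematic error
SHIFT_SD = 2.0      # size of that error, in SD units of the QC material
COST_FP = 100.0     # cost of rejecting and repeating an in-control run
COST_FN = 2000.0    # cost of reporting results from an undetected error

def expected_cost(limit):
    """Expected cost per QC event for a +/- `limit` SD control rule."""
    p_false_reject = 2 * norm.sf(limit)                        # in-control run flagged
    p_detect = norm.sf(limit - SHIFT_SD) + norm.cdf(-limit - SHIFT_SD)
    p_miss = 1 - p_detect                                       # shifted run not flagged
    return ((1 - P_SHIFT) * p_false_reject * COST_FP
            + P_SHIFT * p_miss * COST_FN)

limits = np.arange(1.5, 4.01, 0.05)
costs = [expected_cost(L) for L in limits]
best = limits[int(np.argmin(costs))]
print(f"Cost-minimizing limit under these assumptions: +/-{best:.2f} SD")
```

In a real laboratory, those inputs would come from assay performance data and an assessment of patient harm, which is the kind of data-informed judgment about risk that Rudolf described.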
Robert Schmidt, MD, PhD, MBA, kicked off the session with a primer on risk-based QC theory. Addressing the established dogma of QC rules based on standard deviation (SD), including the “Westgard” rules of QC, he explained how risk-based QC could be used to compare the performance of various policies such as 2-SD, exponentially weighted moving average, and cumulative sum.
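As a rough illustration of the kind of policy comparison Schmidt described, the sketch below applies a 2-SD rule, an exponentially weighted moving average (EWMA), and a cumulative sum (CUSUM) to simulated control data and tallies false alarms and runs-to-detection for an assumed shift. The rule parameters (the EWMA weight, the CUSUM reference value and decision limit, and the control limits) are arbitrary illustrative choices, not recommendations from the session.

```python
"""Sketch comparing three QC policies on simulated control values:
a 2-SD rule, an EWMA, and a CUSUM. Parameters are illustrative only."""
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, shift = 100, 50, 1.5          # runs before/after an assumed 1.5-SD shift
z = np.concatenate([rng.normal(0, 1, n_in), rng.normal(shift, 1, n_out)])

def runs_to_detect(flags):
    """Number of runs after the shift until the first flag, or None if never flagged."""
    hits = np.flatnonzero(flags[n_in:])
    return int(hits[0]) + 1 if hits.size else None

# 1) Simple 2-SD rule: flag any run outside +/-2 SD.
flags_2sd = np.abs(z) > 2

# 2) EWMA with weight 0.2 and +/-3-sigma limits on the smoothed statistic.
lam = 0.2
ewma = np.zeros_like(z)
for i, x in enumerate(z):
    ewma[i] = lam * x + (1 - lam) * (ewma[i - 1] if i else 0.0)
sigma_ewma = np.sqrt(lam / (2 - lam))       # asymptotic SD of the EWMA statistic
flags_ewma = np.abs(ewma) > 3 * sigma_ewma

# 3) One-sided CUSUM with reference value k=0.5 and decision limit h=5.
k, h = 0.5, 5.0
cusum = np.zeros_like(z)
for i, x in enumerate(z):
    cusum[i] = max(0.0, (cusum[i - 1] if i else 0.0) + x - k)
flags_cusum = cusum > h

for name, flags in [("2-SD", flags_2sd), ("EWMA", flags_ewma), ("CUSUM", flags_cusum)]:
    false_alarms = int(flags[:n_in].sum())
    print(f"{name:6s} false alarms before shift: {false_alarms:2d}, "
          f"runs to detect shift: {runs_to_detect(flags)}")
```

A risk-based framework would then weight each policy's false alarms and detection delay by their respective costs to decide which rule is preferable for a given assay.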
Schmidt also shared that some laboratories might already be using a partial implementation of risk-based QC, such as Bio-Rad’s Mission Control, which is based on Parvin’s method for estimating false-negative costs. However, he noted that this method relies on several assumptions that could lead to anomalous results.
“We hope that our work may help to overcome some of these limitations—or at least make some of the assumptions more explicit and further the adoption of risk-based QC approaches within the laboratory medicine community,” he added.
Highlighting the importance of this area in laboratory medicine, Rudolf pointed out the potential for substantial cost savings. “Clinical laboratories perform approximately 13 billion tests per year,” he said. “Even with low costs per test, suboptimal QC management can result in substantial costs to the overall health system. There is potential to develop risk-based QC applications that are easy to understand and improve QC management.”
The session provided a fresh perspective on QC in clinical laboratories, emphasizing the need to shift from traditional approaches to more nuanced, risk-based models. With the potential to significantly reduce costs and improve patient care, risk-based QC is poised to become a game-changer in laboratory medicine.