Labs reap the benefits of automated QC and interlaboratory peer-reporting systems 

By Steve Halasey

The application of interlaboratory peer-review systems for improving quality control (QC) in clinical laboratories is not a new phenomenon. Notable efforts to correlate and reduce variations in laboratory findings took place in the years following World War II, and the development of such peer-reporting systems has continued to the present day.

But such systems have evolved considerably, and the differences between the systems in use 20 years ago and those now being offered are dramatic. For this article, CLP spoke with experts in the field to get a better sense of where the evolution of interlaboratory peer-review systems stands today, and how advancing technologies will reshape them in the future.

The Age of Automated Data Collection

“The most notable difference between older and current peer-review systems is that most of the reporting was formerly done on paper, typically via bubble charts or Levey-Jennings charts,” says John C. Yundt-Pacheco, a scientific fellow in the quality systems division of Bio-Rad Laboratories, Hercules, Calif. “Also, the data collection period was typically one month at a time, and there was generally about a 60-day turnaround time for a lab to get feedback about its data.”
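
For readers who have not built one, the arithmetic behind a Levey-Jennings chart is worth seeing: each new QC result is judged against limits set at the mean plus or minus a multiple of the standard deviation of earlier results. Here is a minimal Python sketch; the glucose values are invented for illustration:

```python
import statistics

def levey_jennings_limits(qc_results, k=2):
    """Center line and +/- k*SD control limits for a Levey-Jennings chart."""
    center = statistics.mean(qc_results)
    sd = statistics.stdev(qc_results)
    return center, center - k * sd, center + k * sd

# Invented glucose QC values (mg/dL) from a month of daily runs
history = [98.2, 99.1, 97.8, 100.4, 98.9, 99.6, 98.1, 100.0, 99.3, 98.7]
center, lower, upper = levey_jennings_limits(history)
new_result = 103.5
flagged = not (lower <= new_result <= upper)
print(f"limits {lower:.1f}-{upper:.1f}; flagged: {flagged}")
```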

Collecting data on paper is now largely a thing of the past, and that single change has had a dramatic effect on QC processes and results. Now labs are able to make use of automated data collection features, meaning that system data are gathered and sorted via the electronic interfaces built into many modern instruments. When the quality control specimen is run and the results are transmitted, they are also automatically collected and archived in the lab’s QC system, and automatically submitted for peer evaluation.
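
In outline, that automated flow is a short pipeline: capture the QC result from the instrument interface, archive it locally, and forward it for peer evaluation. The sketch below illustrates the idea only; the endpoint URL, payload fields, and helper names are hypothetical stand-ins, not any vendor's actual interface:

```python
import json
import urllib.request
from datetime import datetime, timezone

PEER_ENDPOINT = "https://peer-program.example.com/api/qc"  # hypothetical URL

def handle_qc_result(analyte, control_lot, value, archive):
    """Archive a QC result locally, then submit it for peer evaluation."""
    record = {
        "analyte": analyte,
        "control_lot": control_lot,
        "value": value,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    archive.append(record)          # local QC archive
    submit_to_peer_program(record)  # automatic peer submission

def submit_to_peer_program(record):
    """Fire-and-forget POST to the (hypothetical) peer-program endpoint."""
    req = urllib.request.Request(
        PEER_ENDPOINT,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```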

“Today’s interlaboratory comparisons are much more sophisticated and offer improved statistics and peer group separation, not only by manufacturer but also by the makes and models of analyzers,” says James H. Nichols, PhD, DABCC, FACB, professor of pathology, microbiology, and immunology, and medical director for clinical chemistry and point-of-care testing at the Vanderbilt University School of Medicine.

Indeed, thanks to advances in modern technology, data collection has become faster and more accurate, and shared data are significantly easier to access.

John C. Yundt-Pacheco, Bio-Rad Laboratories.

“Shared data can be accessed in a number of efficient ways that suit the lab in question,” says Yundt-Pacheco. “It can be accessed via an electronic interface, or uploaded through a website or through client software. Moreover, today’s systems operate in real time or near real time, so data submitted by a lab go directly into the peer group system, where the lab’s statistics are computed alongside those of its peers.”

Clearly, through modern interlaboratory peer-review systems, it is now much easier to perform in-depth analysis in real time. When a laboratorian sees a QC response that is not as expected, it’s very simple to enter the information into the peer reporting system and receive relevant, comparative peer data. The data can be analyzed extensively by software or via a Web-based system for a wide variety of comparative analyses. This capability can be particularly helpful for labs that are part of a system of laboratories, where it is important to make sure that a diagnostic result holds true across the entire system.

Immediate and electronic access to peer group data provides lab professionals with the tools they need to perform ongoing monitoring and to react quickly to issues. Historically, labs have not had any interface or electronic view of the performance of their peers, so they would typically find out about problems after they had already manifested themselves and begun to affect the quality of the lab’s performance and accuracy.

“When you’ve got access to real-time analysis, with just a couple of clicks you can see exactly how you compare to everybody else,” says Yundt-Pacheco. “That has huge benefits when it comes to producing quality results.”

“Interlaboratory peer review is an absolutely integral part of our laboratory automation systems,” says Nichols. “Test results are evaluated and submitted to the interlaboratory comparison program automatically through our instrument manager software. With the first QC rules failure, that same middleware can shut down a test on a specific analyzer, preventing further patient testing until the QC exception has been evaluated and undergone troubleshooting, and the analyzer has been cleared for further testing.”
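
The gating behavior Nichols describes can be pictured as a rule check standing between each QC result and further patient testing. A simplified sketch follows, using the common 1-3s Westgard rule as the trigger; the Analyzer class and its hold flag are illustrative, not the actual instrument-manager software:

```python
class Analyzer:
    """Illustrative stand-in for one analyzer managed by middleware."""
    def __init__(self, name):
        self.name = name
        self.on_hold = False

def check_qc(analyzer, qc_value, mean, sd):
    """Apply a 1-3s rule: a QC value beyond +/- 3 SD halts this test
    on this analyzer until troubleshooting clears it."""
    if abs(qc_value - mean) > 3 * sd:
        analyzer.on_hold = True  # no further patient testing
        print(f"{analyzer.name}: QC exception, testing halted")

def release_after_troubleshooting(analyzer):
    """Clear the hold once the QC exception has been resolved."""
    analyzer.on_hold = False

chem1 = Analyzer("chemistry-1")
check_qc(chem1, qc_value=112.0, mean=100.0, sd=3.0)  # 4 SD high -> hold
```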

Nichols and his lab aren’t alone. As technological advances have made it easier to participate in peer data sharing systems, the number of participating labs has increased. “Somewhere in the range of 60% to 70% of the laboratories we work with participate in the peer-review system,” says Yundt-Pacheco, “and that number is definitely climbing.”

Proficiency Testing

One of the most important factors leading laboratories to join an interlaboratory peer-review program is the requirement that they also participate in proficiency testing. While various countries apply such requirements differently, in the United States the requirement applies to any lab that reports test results derived from human specimens.

In proficiency testing, each laboratory receives uncharacterized specimens that it must evaluate. The results are then submitted to the proficiency testing provider. If the lab’s results don’t match other organizations’ results, the lab will fail the proficiency test. Proficiency test results are reported to regulatory agencies; in the United States, they are ultimately transmitted to the Centers for Medicare and Medicaid Services (CMS). If a lab fails two out of three proficiency tests, it will likely be prohibited from continuing to offer that assay.
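
The grading logic amounts to a peer comparison plus a running tally. A hedged sketch: the z-score cutoff used here is a simplified acceptance criterion (real programs apply analyte-specific limits), paired with the two-of-three-events rule described above:

```python
from statistics import mean, stdev

def grade_pt_sample(lab_value, peer_values, z_cutoff=3.0):
    """Pass a PT sample if the lab result falls within z_cutoff peer SDs
    of the peer-group mean (a simplified acceptance criterion)."""
    z = (lab_value - mean(peer_values)) / stdev(peer_values)
    return abs(z) <= z_cutoff

def at_risk_of_sanction(event_outcomes):
    """True if the lab failed two of its last three PT events."""
    return event_outcomes[-3:].count(False) >= 2

peers = [5.1, 5.3, 5.0, 5.2, 5.4, 5.1]           # peer results, one analyte
print(grade_pt_sample(5.9, peers))                # False: far from peer group
print(at_risk_of_sanction([True, False, False]))  # True: sanction risk
```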

Because laboratories are obligated to participate in proficiency testing that is reported to regulatory agencies and can have consequences for their accreditation, most labs have an increasing desire to make sure that they have a reasonably high level of commutability between their results and those of other labs. Participating in an interlaboratory peer-review program enables labs to review their status in this regard on an ongoing basis.

“Inspections are more straightforward for labs that utilize shared data from peers,” says Nichols. “Inspectors recognize that patients are unlikely to be tested once the middleware system senses a QC exception and the review rules initiate a hold on further testing pending technologist troubleshooting. This system prevents the further analysis that can occur under a manual QC review process, during which patient results continue to be released. With larger, high-volume automated systems, manual review may be undertaken only periodically.”

Accreditation and Regulatory Data

Today, some peer review systems have built-in functionality to help meet accreditation requirements. If a lab’s accrediting agency requires a determination of the lab’s uncertainty of measurement, for instance, QC system providers can do that computation for the lab.
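
One common top-down approach estimates measurement uncertainty from the same QC data the peer program already holds: long-term imprecision is combined in quadrature with the uncertainty of any bias, then expanded by a coverage factor. A sketch of that calculation, with illustrative numbers (accrediting bodies may prescribe different components):

```python
import math

def expanded_uncertainty(long_term_sd, bias_uncertainty, coverage_factor=2):
    """Combine long-term imprecision and bias uncertainty in quadrature,
    then expand (k=2 corresponds to roughly 95% coverage)."""
    combined = math.sqrt(long_term_sd**2 + bias_uncertainty**2)
    return coverage_factor * combined

# Illustrative inputs: 1.8 mg/dL long-term SD, 0.7 mg/dL bias uncertainty
print(expanded_uncertainty(1.8, 0.7))  # ~3.9 mg/dL at ~95% coverage
```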

The same is true for data that may be needed to meet regulatory requirements. Risk management systems are becoming more commonplace, for instance, and many can interface with collected shared data. Such interfaces matter because risk assessment is now required in some regulatory scenarios. With labs needing to assess their risk of producing erroneous results, historically not an easy thing to do, automated QC systems are now in place to help perform the related computations.

Through automated QC systems and interlaboratory peer-review programs, labs can have access to a system that will instantly retrieve the data required by accreditation and regulatory agencies. Such ready access to complex data makes compliance and accreditation much easier for the lab. Being able to provide results and scientific evaluations that meet the criteria requested by various credentialing bodies is core to the mission of providing quality assessment systems for the laboratory.

Bias and Precision

Labs that participate in an interlaboratory peer-review program gain an enhanced view of how all of their test methods are performing in the broader marketplace. Shared data empower a lab to be aware of its own performance as well as the performance of all the other reporting labs that are using the same products and processes.

Such comparative performance data provide a basis for the laboratory to determine whether its own results reflect bias, which is difficult to detect using only the lab’s own data. But when a lab engaged in an interlaboratory peer-review program reports values consistently higher or lower than every other lab’s, bias becomes much simpler to assess.

Another factor to consider is precision. Without access to data from other laboratories, labs must compute their own levels of imprecision in isolation. In such situations, the best a lab can do on its own is to calculate a coefficient of variation at different concentration levels. But such results merely lead the lab to wonder how its variation compares to that of other facilities, a question that is, again, easily answered through participation in an interlaboratory peer-review program.
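
Peer programs conventionally express these two comparisons as a standard deviation index (SDI) for bias and a ratio of the lab’s coefficient of variation to the peer group’s for precision. A brief sketch of both statistics, with invented numbers:

```python
def sdi(lab_mean, peer_mean, peer_sd):
    """Standard deviation index: how many peer-group SDs the lab's
    mean sits from the peer-group mean (a bias indicator)."""
    return (lab_mean - peer_mean) / peer_sd

def cv_ratio(lab_cv, peer_cv):
    """Lab CV relative to peer CV; > 1 means noisier than the peers."""
    return lab_cv / peer_cv

print(sdi(lab_mean=102.0, peer_mean=100.0, peer_sd=1.5))  # +1.33: biased high
print(cv_ratio(lab_cv=2.4, peer_cv=1.6))                  # 1.5: 50% noisier
```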

“Every lab is different,” says Yundt-Pacheco. “A lab that is performing in the 10th percentile—better than 90% of its peers—probably can’t improve its performance very much with this test method. But if 90% of a lab’s peers have superior performance levels, the lab probably needs to have a conversation with its instrument manufacturer to find out why the lab is doing so much worse than everybody else using the same test method.”

Significant Benefits

Participating in an interlaboratory peer-review program offers laboratories and patients a number of clear-cut benefits (see the companion article, “Saving Money while Improving Quality Control”). Nichols points to the practice’s contribution to the QC review process as chief among them. “For our laboratory, participating in interlaboratory peer reporting has genuinely automated the QC and QC review processes,” says Nichols. “It has also reduced the number of reworks due to failed QC that was not caught until later manual QC review. Automated QC systems catch failures immediately and shut down the test from further analysis until manual troubleshooting can be performed.”

As suggested above, sharing peer group data has also had widespread effects on reducing bias and heightening precision over time. “There’s no question that 20 years ago there was a much larger amount of bias and imprecision in lab results than there is now,” says Yundt-Pacheco. “When a lab is continually monitoring its performance and has the ability to see how it compares to its peers, it can more effectively respond to situations as they occur. The biggest difference that interlaboratory peer reporting has made is a positive effect on the accuracy of test results, which has dramatically improved over the years.”

With reduced bias and improved precision, labs have been able to implement increasingly rigorous performance requirements for certain tests. “The medical and scientific communities have made a really strong push to harmonize test responses for some important analytes,” says Yundt-Pacheco. “Because it is a major marker for diabetes, for instance, there is a whole system in place to try to standardize testing for hemoglobin A1c (HbA1c). One important aspect of this effort has been a gradual reduction in the allowable error in HbA1c testing, a limit that is significantly tighter today than it was 5 years ago.

“Interlaboratory peer-review programs support ongoing efforts to improve the quality of test results, reduce the amount of error, and improve the commutability of results,” says Yundt-Pacheco. “When these efforts are successful, even a vacationing patient being tested in an unfamiliar lab will get test results that are very comparable to those of their usual lab.”

Thanks in large part to the increased use of peer reporting, problems in the field now tend to be identified and resolved much more quickly than in the past. Because peer-review systems are continually monitoring a large number of laboratories, sudden changes or new trends related to the same types of instruments and tests can be detected immediately. A new lot of reagents might be the cause of unexpected test values detectable locally, for instance, but only with data from other labs would an operator be able to detect a more widespread supply problem. Now, continual monitoring enables labs to surface such issues so that they can be resolved much more quickly.
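
In its simplest form, such a lot-related shift can be flagged by comparing the mean of results pooled from many labs on the new lot against the established mean. The sketch below illustrates that idea; the threshold and data are invented, and real programs use more formal statistics:

```python
from statistics import mean

def lot_shift_flag(established_mean, new_lot_values, allowable_shift):
    """Flag a reagent lot when results pooled across labs shift beyond
    the allowable difference from the established mean."""
    shift = mean(new_lot_values) - established_mean
    return abs(shift) > allowable_shift, shift

# Results for one analyte pooled from several labs on a new lot
pooled = [104.1, 103.8, 104.6, 103.9, 104.3]
flagged, shift = lot_shift_flag(100.0, pooled, allowable_shift=3.0)
print(flagged, round(shift, 2))  # True, +4.14 -> investigate the lot
```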

Troubleshooting and Error Detection

For labs participating in an interlaboratory peer-review program, the large volume of data available for comparison allows laboratorians to troubleshoot instrument and test performance, including shifts in test values, instrument flags, reagent errors, and drift over time. Many types of errors can be detected more swiftly as a result of interlaboratory practices.

Reagent issues are a significant area of concern. Such issues typically involve a problem related to a specific lot of reagent, such as contamination that causes performance to degrade over time. More frequently, issues arise when new reagent lots are put into use and begin causing unexpected results in a number of labs. If not caught early, issues related to a specific lot number can lead to erroneous test results and potentially devastating patient outcomes, depending on the analytes in play. Serious problems can arise when a lab reports incorrect results for a tumor marker or infectious disease marker, such as false positives on which clinical decisions are based.

“Quality control systems are moving in the direction of managing the risk of producing erroneous results that cause patient harm,” says Yundt-Pacheco. “Such a goal can require a lab to implement different quality practices for different classifications of testing, because problems with some analytes can represent a much greater risk to patients than might occur with other analytes. The lab’s quality control systems should reflect that.”

Instrumentation software represents another category of potential QC problems to be managed. Yundt-Pacheco reports that there have been numerous cases in which a new software release altered test results in an unanticipated way. But it often takes labs some time to recognize that an erroneous shift in test results is taking place.

Software-related errors underscore the need for labs to assess the potential effects of any changes that are made to a test method, including effects on both the test method itself and on patients. The need to carry out such an assessment emphasizes again the value of sharing data with a large peer group, so that comparative data can be drawn from a larger pool of participants.

The Holdouts

While 60% to 70% of labs may be engaged in some form of interlaboratory peer-review program, that still leaves a significant proportion of labs that have chosen not to participate in such a system. The reasons for not participating are varied.

For some laboratories, especially in other parts of the world, logistical or technological challenges impose a legitimate limit on the use of peer-review systems, typically as a result of difficulties getting connected and transmitting data in real time. While such difficulties remain a factor in many regions, the number and scope of such regionally based technological limitations are clearly shrinking.

James H. Nichols, PhD, DABCC, FACB, Vanderbilt University School of Medicine.

By contrast, around the world there are a small number of laboratories that are so technically proficient and so confident in their own private systems that they choose not to participate in peer-reporting programs. With some of these laboratories, there may be a lack of curiosity or desire to assess their performance against that of other labs. Also, in some cases laboratorians don’t want their labs to be compared to other laboratories, because they may be purposefully conducting a test in a way that is different from everybody else, perhaps using their own validated science and methods. Labs in this group may feel that the results they produce serve a specific and unique purpose, and that their test methods are more useful than those of the majority of other labs.

For some laboratory organizations, cost may also be a consideration. “Cost is really the only challenge for some labs, unless data must be manually typed onto forms or into the company website for submission,” says Nichols. “With automated interfaces, data submission is transparent to staff and doesn’t require manual intervention except when IT systems are down.”

A more common reason for labs not to participate in peer group reporting is that their organization is simply set in its ways and does not want to make a change. Labs may also perceive adding data sharing to their already long list of things to do as an unwelcome tax on time, effort, and resources. Such concerns tend to be alleviated once an interlaboratory peer-reporting system is in place, but as with any new system, there is often a start-up barrier and a steep learning curve.

A few laboratories that choose not to participate in an interlaboratory peer-reporting system nevertheless seek out and request peer group data. So there appears to be a small category of laboratories that appreciate the value of peer group data but are unable or unwilling to contribute their own. Many of these labs ultimately do participate, understanding that each new lab reporting its performance data adds value for every participant.

Despite the holdouts, interlaboratory peer-review systems appear to be here to stay and are likely to become standard practice industrywide. “It’s my assessment that interlaboratory peer reporting has become a standard best practice for automated chemistry lines,” says Nichols.

A Look to the Future

To satisfy their quality control and accreditation requirements, laboratories are clearly headed in the direction of interlaboratory peer-review systems, and it will probably not be long before such systems are universally adopted. Advances in technological capabilities are helping to spur the shift. For many years, the primary interface used in laboratory instrumentation was the RS-232 serial port. Today, just about all instrumentation uses a network port, making it easier for systems to be monitored continually.

“At Bio-Rad we offer a platform called BRiCare that enables us to remotely monitor instrumentation and proactively schedule service when we’ve detected a problem before the laboratory has,” explains Yundt-Pacheco. “As labs move to higher and higher levels of connectivity, peer group analysis will come right along with everything else. There’s sure to be more analysis done regarding how each lab’s performance compares to that of its peers.”

Nichols believes that forthcoming technical advances will help make peer reporting even more attractive. “Current operations still require technologists to aliquot and program the analysis of quality control,” he says. “However, there is some instrumentation—like some blood gas analyzers—that can automatically perform and assess QC periodically throughout the day, without technologist intervention.

“The next step in the evolution of automated QC systems would seem to be the implementation of continuous QC processes for larger automated clinical chemistry systems,” adds Nichols. “Such automated periodic QC analysis can be performed with minimal technologist intervention, and will facilitate real-time comparisons of analyzer QC against interlaboratory peer databases, so that trends can be detected before the analyzer goes out of range. This is what I would call ‘preventive QC,’ made possible through the use of artificial intelligence algorithms.”

“To detect errors and trends before they become significant, another future focus will be to integrate liquid QC more fully with manufacturer-engineered processes and analyzer error flags, as well as with real-time moving averages and other patient test result algorithms.”
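
Moving averages of patient results, often called patient-based real-time QC, watch the stream of patient values itself for drift between control events. A minimal sketch using an exponentially weighted moving average; the weight and limits are illustrative choices, not any lab’s actual configuration:

```python
def ewma_monitor(patient_results, target, limit, weight=0.1):
    """Track an exponentially weighted moving average of patient results
    and yield the average whenever it drifts beyond target +/- limit."""
    ewma = target
    for value in patient_results:
        ewma = weight * value + (1 - weight) * ewma
        if abs(ewma - target) > limit:
            yield ewma  # drift detected at this point in the stream

stream = [99, 101, 100, 103, 104, 105, 106, 107, 108, 109]
alarms = list(ewma_monitor(stream, target=100.0, limit=3.0))
print(alarms[:1])  # first alarm value, once the average has drifted
```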

QC design is another area that seems destined to capture the attention of laboratories over the next few years. “QC design has been around for a long time,” says Yundt-Pacheco. “It takes into account the capabilities of a lab’s instrumentation and seeks to determine what kind of QC rules the lab should be following, and how often it should perform QC.”

There was a time when almost all laboratory testing was done in batches, and controls were tested at exactly the same time and in the same environment. In that setting, labs could have a high degree of confidence that if the controls worked, so did the patient samples. But the industry has since moved into a world of discrete testing. If a lab runs controls at 8:00 am and determines that they indicate an acceptable level of quality, what does that say about a patient result produced at 10:00 am? The answer may be simply ‘nothing,’ because such a procedure measures quality at one point in time and performance at another. When failures happen, they may occur suddenly, from one specimen to the next, making it impossible to ensure that a lab’s processes are within range at all moments.
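
One way to quantify the gap in that example is to estimate how many patient results would be exposed if an error arose just after the last passing QC event. The sketch below, in the spirit of the patient-risk metrics discussed next, assumes a simple normal-error model with invented parameters:

```python
from statistics import NormalDist

def expected_unreliable_results(shift_sd, tea_sd, patients_between_qc):
    """Under a systematic shift of shift_sd (in SDs), estimate how many
    of the patient results run before the next QC event would exceed
    the total allowable error tea_sd (also in SDs)."""
    nd = NormalDist()
    p_bad = nd.cdf(-tea_sd - shift_sd) + (1 - nd.cdf(tea_sd - shift_sd))
    return p_bad * patients_between_qc

# A 2-SD shift, 3-SD allowable error, 100 patients between QC events
print(round(expected_unreliable_results(2.0, 3.0, 100), 1))  # ~15.9 results
```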

Such concern about the viability of QC testing has led to an overhaul of the metrics used to measure the quality of a testing process. “In the past, we have focused on instrument-based quality control metrics in order to determine the probability of an error happening on a particular system,” says Yundt-Pacheco. “Labs have directed their attention toward the probability of detecting an error of a certain magnitude, or the probability of a false rejection, while using their instruments.

“But now, labs are moving toward a much more patient-centric viewpoint. To determine what kind of quality control is needed, the trend is to focus on patient-centric metrics that can indicate the probability of producing an erroneous patient result. Now, labs want to compare their performance on these metrics against an accepted and acceptable probability of producing an erroneous patient result.

“In the near future, labs’ performance data are likely to generate a much larger and richer set of guidance information,” adds Yundt-Pacheco. “Systems will detect procedural problems and automatically take steps to correct them, and will then go through a recovery process to determine which patients need to be retested, and which patients need to have their reported results corrected. Some algorithms of this type have already been created, and they’re gradually making their way onto test platforms. More of this technology will start to come out within the next 5 years.”
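
The instrument-centric metrics Yundt-Pacheco names, the probability of detecting an error and the probability of a false rejection, can be computed directly for simple control rules. The following sketch shows the textbook arithmetic for a single 1-ks rule under a normal model; it is not Bio-Rad’s specific algorithm:

```python
from statistics import NormalDist

def p_rule_violation(shift_sd, k, n_controls):
    """Probability that at least one of n control results falls outside
    +/- k SD when the method is shifted by shift_sd SDs."""
    nd = NormalDist()
    p_single = nd.cdf(-k - shift_sd) + (1 - nd.cdf(k - shift_sd))
    return 1 - (1 - p_single) ** n_controls

p_fr = p_rule_violation(0.0, k=3, n_controls=2)  # false rejection, ~0.005
p_ed = p_rule_violation(2.0, k=3, n_controls=2)  # detect a 2-SD shift, ~0.29
print(round(p_fr, 4), round(p_ed, 3))
```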

Steve Halasey is chief editor of CLP.