Optimizing the Effectiveness of Quality Control With Bio-Rad’s QC OnCall Software System and Biological Variation

By Greg Cooper, CLS, MHA

 For some time, laboratories have been frustrated with having to repeat tests because of out-of-control error flags. In many cases, these error flags are false alarms and may reflect poor or no planning for analytical quality before the test is put into operation; an incorrect mean or range; or a process control scheme that is inappropriate for the test in question.

Bio-Rad Laboratories has developed a new software tool to help laboratories assess the effectiveness and appropriateness of the process controls they use to monitor analytical quality.

Most freestanding quality control (QC) software applications, as well as QC applications found on some analytical instruments and laboratory information systems (LIS), monitor the analytical process using one or more statistical process control (SPC) rules and a set of graphs or charts that display data and data patterns. As the stringency and number of rules applied to an analytical process increase, the power to detect statistical errors and the potential for false error flags and run rejections increase as well.
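The article does not enumerate QC OnCall’s specific rule set, but the trade-off it describes can be illustrated with two common Westgard-style rules. The sketch below (hypothetical helper names, assuming control results are expressed as z-scores against the established mean and SD) shows how each added rule is another opportunity for a flag, true or false:

```python
def one_3s(z_scores):
    """1-3s rule: flag if the latest control result exceeds +/-3 SD."""
    return abs(z_scores[-1]) > 3.0

def two_2s(z_scores):
    """2-2s rule: flag if the last two results exceed +/-2 SD on the same side."""
    if len(z_scores) < 2:
        return False
    a, b = z_scores[-2], z_scores[-1]
    return abs(a) > 2.0 and abs(b) > 2.0 and a * b > 0

def evaluate(value, mean, sd, history):
    """Convert a control result to a z-score and apply both rules.

    Returns the updated z-score history and any rule flags raised.
    """
    z = (value - mean) / sd
    zs = history + [z]
    flags = []
    if one_3s(zs):
        flags.append("1-3s")
    if two_2s(zs):
        flags.append("2-2s")
    return zs, flags
```

Applying only 1-3s catches fewer systematic shifts but raises fewer false alarms; adding 2-2s improves detection of systematic error at the cost of more run rejections, which is exactly the balance the laboratory’s SPC protocol must strike.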

In order to apply SPC rules effectively, laboratorians must understand what the rules signify; the types of errors (random or systematic) detected by the available rule or combination of rules; and which rule or combination works best under a given set of circumstances.

When laboratorians lack this understanding or fail to set targets for analytical quality prior to testing, a patchwork of SPC applications characterized by inconsistent and often ineffective application of the rules can result.

When false-positive error flags are commonplace, test operators can quickly become desensitized to them and merely repeat controls and patient samples until the control materials fall within expectations. Corrective action, when taken, is often a shot-in-the-dark approach that frequently includes at least one of the following costly actions:

  • Recalibrating the instrument using the current lot of calibrator, followed by testing of control materials
  • Recalibrating the instrument using a new lot of calibrator, followed by testing of control materials
  • Performing unscheduled instrument maintenance or repair, frequently unneeded, followed by recalibration and testing of control materials
  • Changing reagents or reagent lots, followed by recalibration and testing of control materials.

Laboratories should be concerned about the number of times a test is recalibrated because each calibration or recalibration potentially introduces new or additional systematic errors. Too frequent calibration may indicate a defective SPC protocol (rules applied, mean, and range in use) set by the laboratory, instrument malfunction, sub-optimal reagent quality, or failure to follow the manufacturer’s instructions and schedule for maintenance.


QC OnCall Feedback and Quality Appraisal Tools
Bio-Rad Laboratories offers QC OnCall™, a QC data management system for clinical laboratory application. The system includes the QC OnCall user application, QCNet™ (www.qcnet.com), and InstantQC™. The addition of InstantQC allows the system to provide online interlaboratory reports and comparisons within minutes of sending data to Bio-Rad’s UNITY Central™ database. This real-time access to interlaboratory reports distinguishes QC OnCall from Bio-Rad’s earlier QC data management software.

In addition to the SPC rules, QC OnCall contains several tools to provide a process control feedback loop including:

  • State of the Art, which targets test imprecision within specified limits based on currently attainable test imprecision;
  • Medical Relevance, which allows the lab director to set total allowable error limits based on clinical decision criteria;
  • Imprecision-BV, which targets test imprecision based on biological variation data and laboratory-selected performance goals; and
  • Total Error-BV, which, used as a quality appraisal tool, sets upper and lower limits of performance for each test based on total allowable error derived from biological variation data and laboratory-selected performance goals.

Total Error-BV is the most robust feedback tool offered by QC OnCall. It focuses on process improvements for imprecision, bias, and total allowable error. Used to its fullest extent, it can identify process control protocols and specifications that are not appropriate for the test and that simply increase laboratory costs through unnecessary repeat testing, troubleshooting, and recalibration.
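The article does not publish the formulas QC OnCall applies at each performance level. A widely used convention for deriving imprecision, bias, and total allowable error goals from biological variation (Fraser’s factors, assumed here for illustration) looks like this:

```python
import math

# Multipliers for the three performance levels (Fraser's convention,
# assumed here; QC OnCall's internal factors are not stated in the article).
LEVELS = {
    "minimum":   (0.75, 0.375),
    "desirable": (0.50, 0.250),
    "optimum":   (0.25, 0.125),
}

def bv_goals(cv_i, cv_g, level="desirable", z=1.65):
    """Return (imprecision goal, bias goal, total allowable error), all in %.

    cv_i: within-subject biological variation (CV%)
    cv_g: between-subject biological variation (CV%)
    z:    one-sided 95% coverage factor for total error
    """
    f_imp, f_bias = LEVELS[level]
    imprecision = f_imp * cv_i
    bias = f_bias * math.sqrt(cv_i ** 2 + cv_g ** 2)
    total_error = bias + z * imprecision
    return imprecision, bias, total_error
```

For a hypothetical analyte with CVi = 6% and CVg = 8%, the desirable goals work out to 3% imprecision, 2.5% bias, and a total allowable error of about 7.45%.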

The QC OnCall System uses these statistical feedback and appraisal tools along with traditional SPC protocols to identify process “noise” resulting from misapplication of statistical rules. This creates a feedback loop that can be used to adjust the SPC protocol, making control of the analytical process effective and cost-efficient.

Assessing Overall Bias and Imprecision
Laboratories can access information about imprecision using interlaboratory comparison reports supplied by the vendor of their control materials. While these reports may also be used to evaluate bias, some laboratories prefer proficiency report data for this purpose. Participants in Bio-Rad’s UNITY Interlaboratory Program can use the Laboratory Comparison Report, the Laboratory Performance Overview, or the Statistical Profile Report. These reports contain two relevant statistics that laboratories can use as a qualitative assessment of their laboratory bias and imprecision:

• The CVR (coefficient of variation ratio) represents a peer-based evaluation of imprecision. This ratio is calculated as the laboratory CV for the test divided by the average CV reported for the consensus group.

• The SDI (standard deviation index) can be used as a relative peer-based estimate of bias. SDI describes or quantifies bias (difference of the laboratory’s observed mean and a consensus group mean) in terms of standard deviation.
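Both statistics reduce to one-line calculations; the sketch below (hypothetical function names) follows the definitions above:

```python
def cvr(lab_cv, group_cv):
    """Coefficient of variation ratio: laboratory CV divided by the
    consensus-group CV. Values near 1.0 indicate imprecision comparable
    to the group; values well above 1.0 suggest excess imprecision."""
    return lab_cv / group_cv

def sdi(lab_mean, group_mean, group_sd):
    """Standard deviation index: the laboratory's bias (observed mean
    minus consensus-group mean) expressed in group standard deviations."""
    return (lab_mean - group_mean) / group_sd
```

A laboratory with a CV of 3.0% against a group CV of 2.0% has a CVR of 1.5; a laboratory mean of 105 against a group mean of 100 with a group SD of 4 gives an SDI of 1.25.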

Choosing a Performance Goal
Based on the outcome of the qualitative assessment, the laboratory selects appropriate performance goals (minimum, desirable, or optimum) for imprecision and bias. The laboratory may choose to set performance goals based on overall consensus group performance (that is, capability of methodology or technology) or the capability of the laboratory since internal process and operational variables unique to the laboratory can affect overall test imprecision or bias.

Laboratories should choose bias and imprecision goals that reflect the qualitative assessment, but they can also choose the next level of performance as a target for quality improvement. As the user makes selections, QC OnCall automatically sets performance targets using within-subject and between-subject biological variation data and formulas for acceptable imprecision and bias at the performance level chosen.

 Choosing a Comparison Group for Calculating Analytical Bias
Finally, the laboratory must choose a consensus group for calculating bias, a key component of total error. The Bio-Rad UNITY Interlaboratory Program provides group data for three consensus groups, Peer, Method, and All Laboratories, each of which maintains a unique statistical integrity and can be useful depending on the laboratory’s quality goals and needs.

• In the Peer group, all laboratories use the same instrument, analytical method, reagents, and temperature of assay. Peer is the ideal group for comparison.

• Laboratories in the Method group use the same analytical method. The Method group should be chosen when there is an insufficient number of laboratories in the Peer group.

• The All Laboratories group includes every laboratory reporting the analyte, regardless of the instrument or method used. It should be chosen only when there is an insufficient number of laboratories in both the Peer and Method groups, and it is the least statistically relevant because between-instrument as well as between-method variables affect it.

Laboratories should use care when choosing which group to use for comparison, because statistical outcomes can be quite different based on the nature of the grouping. After the laboratory chooses the appropriate comparison group, the software creates a total error chart using the comparison group mean as the target mean and the previously calculated total allowable error to set and plot performance limits.
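The chart construction described above can be sketched as follows (hypothetical function names; assumes total allowable error is expressed as a percentage of the comparison-group mean):

```python
def total_error_limits(group_mean, tea_percent):
    """Upper and lower performance limits plotted around the
    comparison-group mean, derived from total allowable error (%)."""
    half_width = group_mean * tea_percent / 100.0
    return group_mean - half_width, group_mean + half_width

def is_outlier(value, group_mean, tea_percent):
    """True if a QC value falls outside the total allowable error limits."""
    low, high = total_error_limits(group_mean, tea_percent)
    return not (low <= value <= high)
```

With a group mean of 100 and a total allowable error of 10%, the limits are 90 and 110; any QC value outside that band would count as an outlier in the weekly review described below.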

Keys to a Productive Review of the Laboratory Quality System
Laboratory quality systems should include a review of QC performance by a qualified person at regular intervals. CAP-accredited laboratories must review and document QC performance at least weekly. These reviews, which are often done retrospectively, offer an excellent opportunity to critically assess the statistical process control that the individual laboratory has in effect for each test. The scope and complexity of the weekly review should be dictated by laboratory policy. Some weekly reviews may be cursory while others may be more detailed.

Issues that reviewers should consider include statistical out-of-control events, the frequency of outliers (QC values outside the established total allowable error limits) detected during the period or across periods, and the amount of bias present, if any. Assessing these issues is key to a productive review and can be facilitated by asking the following questions:

  • Are the SPC rules in effect for the test too restrictive when the capability of the methodology or technology and total allowable error are jointly considered?
  • Should another more stringent single rule or a more complex multirule SPC be applied to improve error detection?
  • Should the mean for the test be adjusted?
  • How much imprecision is present and is it a significant contributor to total error? Should the laboratory focus its efforts on improving precision?
  • How much comparative bias is present and is it a significant contributor to total error? Should the laboratory focus its efforts on removing or reducing analytical bias?
  • Is the appropriate consensus group (Peer, Method, All Laboratories) being used to estimate the laboratory’s comparative bias for the test?
  • Are the performance goals for imprecision and bias for the test, which also affect total allowable error, appropriately set?
  • How frequently do SPC error flags occur for the test during this review period? Across review periods? Are these frequent error flags due to inappropriate selection of SPC rules, larger than expected imprecision, the presence of bias, or do the mean and range need adjustment?
  • How frequently is the test being recalibrated? Does calibration exceed the frequency recommended by the manufacturer?

There is probably not enough time to ask all of these questions during each review cycle. However, each of these questions represents an opportunity to measure and appraise the effectiveness of the process control in effect for a specific test.

For additional information contact Jeff Larson, UNITY and QC OnCall Product Manager, Bio-Rad Laboratories Inc, Irvine, Calif (949) 598-1240.

Greg Cooper, CLS, MHA, can be reached at Bio-Rad Laboratories Inc, Irvine, Calif (949) 598-1240.