By Jon Foskett, MT (ASCP)
Our lab at Silver Cross Hospital in Joliet, Ill., recently weighed the benefits of an automation platform for running our analyzers and processing our specimens against those of a system that would autovalidate test results meeting requirements determined by the lab. We wanted to measure the impact of automation and autovalidation separately because the costs of the two processes differ significantly.
In our situation, an automation platform could cost anywhere from $200,000 to $3,000,000. On the other hand, our laboratory information system (LIS) is capable of moderately complex autovalidation rules, which cost us nothing more than time. During two site visits to hospitals that already had an automation line in place, we found that the impact of autovalidation alone had never been measured. Both facilities had configured the computer that monitors the instruments to autovalidate test results at the same time the automation line went in, so they had measured only the combined impact on productivity. Without knowing the impact of autovalidation by itself, I could not immediately justify the expense of the automation line, so our lab began to set up autovalidation protocols in our LIS.
The first step in this process was to choose which analyzers and test values were good candidates for autovalidation. We felt that the chemistry, immunochemistry, hematology, coagulation, and urine macroscopic analyzers were good candidates. Our blood gas and urine microscopic analyzers, we felt, were not. The blood gas analyzers' rules were too complex for our LIS to handle, and the urine microscopic analyzer's sensitivity and accuracy were still being monitored.
Once we chose the analyzers to be affected, we compiled a list of the flags that each one generated. One of the most important components of the autovalidation process is to have the LIS recognize analyzer flags and hold any result that carries a flag, even if the value itself is normal. After we identified all of the flags, we sent sample results that generated each significant flag to the LIS to see whether the results would be autovalidated.
The next step after testing all of the significant analyzer flags was to determine, for each test, the levels at which the LIS would autovalidate the result. We determined these values by asking ourselves, "At what value(s) would we take investigative action with this sample?" Some tests, such as digoxin, were kept at a very low threshold, while other tests, including TSH and PTT, were set outside of the normal range. While determining the values at which the LIS would autovalidate results, we also re-evaluated our delta check values, since in our LIS the delta check is another criterion evaluated before a result will autovalidate. Once we agreed upon all of these values, we programmed the LIS and turned on our autovalidation protocol.
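The decision logic described above can be sketched as a simple rule: hold anything flagged, anything outside the lab-defined range, or anything that fails a delta check. The function name, field layout, and thresholds below are illustrative examples, not our production settings.

```python
# Illustrative sketch of an autovalidation decision rule. A result is
# released without technologist review only if it passes every check.
# All names and thresholds here are hypothetical, not production values.

def can_autovalidate(value, flags, previous_value,
                     low=0.5, high=2.0, delta_limit=0.8):
    """Return True only if the result is safe to release without review."""
    # 1. Any analyzer flag holds the result, even if the value is normal.
    if flags:
        return False
    # 2. The value must fall inside the lab-defined autovalidation range
    #    (often wider than the normal range, closer to decision levels).
    if not (low <= value <= high):
        return False
    # 3. A failed delta check (large change from the patient's prior
    #    result) also holds the result for technologist review.
    if previous_value is not None and abs(value - previous_value) > delta_limit:
        return False
    return True

# A flagged result is held even though the value is in range:
print(can_autovalidate(1.2, flags=["H#"], previous_value=1.1))  # False
# An unflagged, in-range result with a small delta is released:
print(can_autovalidate(1.2, flags=[], previous_value=1.1))      # True
```

Note that the flag check comes first: this mirrors the point above that flagged results must be held even when the value itself looks acceptable.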
Initially, we set our thresholds for autovalidation criteria very conservatively so that we could get used to operating within this new environment. Also, we discussed whether or not to phase out the paper that the analyzers generated on samples that met our autovalidation criteria. Unfortunately, our current instruments are not sophisticated enough to differentiate acceptable samples from the others, so we still print hard copies.
By the end of the first day, it was apparent that the staff was still reviewing many LIS results with which they felt comfortable, so we widened some of the autovalidation ranges. This tweaking continued for the first month of operation. Afterward, we compared some turnaround times (TATs) and autovalidation statistics. The first statistics required for comparison were the baseline, preautovalidation volumes. These numbers (Table 1) were also used to project what the impact of autovalidation might be, in terms of percentage autovalidated.
The numbers in Table 1 represent the annual volume for the selected tests and how many of those tests were predicted to have normal values. Tests with multiple analytes will autovalidate only if all of the results are normal. Allowing only samples whose values are all normal to autovalidate gave a projected overall impact of 61.43% of the selected tests.
Although the predicted impact was encouraging, we agreed to move the autovalidation range for a number of the analytes out to a range that more resembled our decision-making levels. Once we changed the ranges, we measured the true impact of our autovalidation protocol on the number of tests reviewed by the LIS instead of by a technologist (Table 2).
After reviewing the data, it appeared that the LIS was autovalidating approximately 20% fewer results than projected, even though the ranges in use were wider than the normal ranges on which the projections were based. Upon examining the LIS decision-making criteria, we found that the LIS evaluates results only every 60 seconds. We wondered, "How many of our results that should have autovalidated were being accepted by technologists at the bench before the 60 seconds had expired?" Table 3 shows the results that were entered into the LIS in less than 60 seconds, regardless of whether they were entered by a technologist or by the autovalidation protocol.
The numbers that represent the percentage of samples entered in less than 1 minute more accurately reflect the projected autovalidation statistics. The positive variances are due to the increases in the autovalidation thresholds. The negative variances from the projected autovalidation numbers are due to the LIS's protocol and to analyzer problems, which caused the autovalidation to be deactivated. To further understand the impact of autovalidation, we compared the percentage of samples that were entered into the LIS in less than 1 minute before and after instituting the protocol. Table 4 shows the percentage of values entered in less than 1 minute.
The Impact of Autovalidation
These data are compared to show the impact of autovalidation on the daily operation of the lab and the amount of time it takes to release test results to the patient's chart. We also measured the TAT data from sample receipt to sample verification. We chose this measurement because it is the time frame most specific to the impact of autovalidation. The receipt time is when the clerk receives the sample in the main lab. The verification time is when the sample is entered into the LIS and the sample status is complete. Table 5 shows the impact that autovalidation had on the TATs from receipt of sample to submission of results to the LIS.
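The receipt-to-verification measurement can be sketched as a timestamp difference per sample; the sample records and timestamps below are hypothetical, assuming the LIS can export both times for each accession.

```python
# Hypothetical sketch of the receipt-to-verification TAT measurement,
# assuming each sample record carries a receipt timestamp (clerk receives
# the sample) and a verification timestamp (result complete in the LIS).
from datetime import datetime

# (received, verified) pairs; these values are invented for illustration.
samples = [
    ("2024-01-05 08:00:00", "2024-01-05 08:22:00"),
    ("2024-01-05 08:05:00", "2024-01-05 08:31:00"),
    ("2024-01-05 08:10:00", "2024-01-05 08:46:00"),
]

fmt = "%Y-%m-%d %H:%M:%S"
# TAT in minutes for each sample: verification time minus receipt time.
tats = [
    (datetime.strptime(v, fmt) - datetime.strptime(r, fmt)).total_seconds() / 60
    for r, v in samples
]

print(f"mean TAT: {sum(tats) / len(tats):.1f} min")  # mean TAT: 28.0 min
```

Comparing this statistic for matched periods before and after the protocol went live is what Table 5 summarizes.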
The data from Table 5 also show that autovalidation affected about half of the tests measured. The decrease in TATs is helpful in terms of getting results to the patient's chart faster, but the data reveal that TATs are not entirely dependent upon the results that the analyzer enters into the LIS. Processing the sample and placing it onto the analyzer are the only two factors left that can be manipulated in the time frame of receipt to verification.
To arrive at a conclusion regarding the amount of technologist time saved by having the LIS review and verify test results, we analyzed a selection of tests. In Table 6, the number of autovalidated results is counted and multiplied by 10 seconds (the average result review time). Counting only the tests in Table 6, we can show a time savings of 19.2 hours per month and 230.8 hours per year. With one full-time equivalent (FTE) at 2,080 hours per year, the amount of autovalidation measured in Table 6 represents a savings of 0.11 FTE per year.
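The arithmetic above can be reproduced as a worked example. The monthly result count below is back-calculated from the 19.2 hours/month figure and is illustrative only; note that a straight 19.2 x 12 gives 230.4 hours/year, so the article's 230.8 presumably reflects unrounded monthly data.

```python
# Worked example of the technologist-time calculation: autovalidated
# results x 10 seconds of review time each, converted to hours and FTEs.
REVIEW_SECONDS = 10          # average time to review one result
FTE_HOURS_PER_YEAR = 2080    # one full-time equivalent

# Hypothetical monthly count, implied by 19.2 h/month at 10 s per result.
results_per_month = 6912

hours_per_month = results_per_month * REVIEW_SECONDS / 3600
hours_per_year = hours_per_month * 12
fte_saved = hours_per_year / FTE_HOURS_PER_YEAR

print(f"{hours_per_month:.1f} h/month, {hours_per_year:.1f} h/year, "
      f"{fte_saved:.2f} FTE")
# 19.2 h/month, 230.4 h/year, 0.11 FTE
```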
The tests in the tables represent approximately 42% of the monthly test volume affected by autovalidation. Extrapolating from the current measurements, the total estimated savings should be approximately 0.25 FTE. The departments in which autovalidation was implemented are staffed by approximately 21 FTEs, so the projected savings amount to roughly 1% of staffing. Although the statistics may show a potential reduction in staffing, the impact is spread across multiple departments and shifts. An individual analysis could demonstrate the impact on each shift and department, but with the continuous increase in test volume, and the potential to bring more testing in-house, it seems largely unnecessary. Not all of the tests were measured and presented, because the volume of data made it difficult to extract certain information from the LIS.
Autovalidation alone can save significant technologist time, reduce potential review errors, and decrease some testing TATs. It can be performed with some LISs already in use in many labs and can be activated at little to no cost to the laboratory. More sophisticated software programs allow much more flexible decision-making rules, but they are costly; for the extra impact, they may not be worth the expenditure.
Although autovalidation can be helpful, Table 5 demonstrates that an automated system could have a major impact on TATs. An automated system would reduce the sample processing and handling time, thereby further reducing the TAT in the received-to-verified stage. However, during these times of increasing workload and shrinking budgets, using one's LIS to perform autovalidation is an inexpensive way to alleviate some of the workload while decreasing some TATs.
Jon Foskett is operational manager at Silver Cross Hospital in Joliet, Ill.