In this special edition of Clinical Lab Chat, sponsored by Sysmex America, Inc., CLP’s Director of Business Intelligence, Chris Wolski, does a deep dive into lab QA & QC best practices with Daniel Johnson, assistant director of marketing for Informatics & Service at Sysmex America, Inc. Among the topics they discuss are how to turn QA and QC into preventative and predictive programs instead of reactive tasks, the importance of data in QA and QC workflows, and the one thing Dan would change about the way labs handle QA and QC right now if he could just snap his fingers.

PODCAST TRANSCRIPT

Chris Wolski:
Welcome to a special edition of Clinical Lab Chat, part of the Med Core Podcast network. Today’s episode is sponsored by Sysmex America, Inc. I’m Chris Wolski, director of business intelligence for CLP. And today I’ll be speaking with Daniel Johnson about best practices for clinical laboratory quality control and quality assurance programs. Dan is assistant director of marketing for informatics and service at Sysmex America, Inc., leading the team responsible for introducing new digital innovations that shape and advance healthcare. Among these innovations are the Caresphere workflow solution, Caresphere Analyzer Management, Sysmex’s validation solution, Beyond Care Quality Monitor, automated inventory solutions, and Sysmex’s WAM, or W-A-M.

Chris Wolski:
Since taking over CLP more than a year ago, I’ve really gained an appreciation of laboratory operations and best practices. And one thing that I think we can’t emphasize enough is QC and QA workflow processes. Dan, I’m looking forward to learning a lot, and I think our audience is as well, about all the things labs could do for better QC and QA processes. I always like to start with the tough stuff, Dan, so I’m going to put you on the spot here. What are the top challenges labs face with QC and QA programs today?

Daniel Johnson:
Yeah, well that is a tough one. I’ve identified three areas where I feel the top challenges are with quality assurance and quality control programs. First is that people don’t view QC or quality assurance as a program. They view it more as a task. And the more they view it as a task, the more I feel it becomes a habit. And then the question is, is it a good habit or a bad habit that people fall into? That’s the first challenge. The second challenge is that people are concerned about sharing their data, and this goes for patient data but also QC data. And how can we, as a healthcare organization, my organization, improve ourselves and everyone else if we don’t have access to that data?

Daniel Johnson:
And then following that second one, the third challenge is that now that we have the data, or if we ever get it, what do we do with it? As an organization, we have good ideas about what we would do with it, but what about the laboratories? What about the healthcare systems? Are they even using quality control data to improve processes, or feeding the data from other quality aspects of the organization back into the lab and back to the patient? Those are the top challenges that I see with quality assurance programs and QC today.

Chris Wolski:
Right. And one issue that I think we talked about when we were preparing for this program is the difference between looking at QC and QA as a task versus a program. Can you talk a little bit about that? Because I know that’s an issue too. If you look at this as just a task-oriented issue, as opposed to an overarching program, what does that mean for labs and their workflow?

Daniel Johnson:
Exactly. From a workflow perspective, I’ve seen people have a failed QC point and then they just rerun QC again. And they rerun it again and again and again until it’s in, and then they can move on with their day. Something Albert Einstein said is that “Insanity is doing the same thing over and over again and expecting different results.” And that’s what I feel happens when people just run QC and view it as a task. The other component of that is that when QC fails, it’s actually an art to fix it. And people don’t like spending time crafting that art, or spending time trying to understand whether it’s a reagent issue or something with the analyzer. It could be that the gauges are off, that they’re not clicking the right number of clicks to get the right amount of fluid through the systems.

Daniel Johnson:
And they’re just rerunning the QC expecting it to go in and be good to go. That’s where I feel it becomes more of a task versus a program. Adding to that, there are other components to QC or QA programs: when was the maintenance last performed? How did this analyzer compare to the other analyzers, and how did it compare to a bigger peer mean group? Once we have this information, that’s great, but a lot of the information we have is reactive in nature. People are waiting for the failed QC points instead of also being proactive. And we’ve seen that laboratories have limited staff; every vendor out there has been saying that labs have limited staff. They really struggle to be proactive with QC and QA programs. So what I’m trying to figure out is, how do we make this a proactive program?

Daniel Johnson:
And I think that’s one of the key components of moving this from a task into a program. If we stay on the other side, being reactive, people are just going to react. It’s going to create that bad habit we were talking about before; it just gets deeply ingrained. So we need to move toward being proactive. And the components of being proactive are having real-time data at your fingertips and making it actionable. I’ll talk about this a little later because a huge component of it is also becoming automated: how do we automate actual insights from that data? We’ve done this before in my organization, where we actually track a lot about our devices through the internet of things. It’s really crazy how much we know. We call it SNCS, and we’re able to dial into our systems and monitor how much electricity and power is going to a laser on our systems.

Daniel Johnson:
If that laser starts increasing a little bit, and then a little bit more, we actually get a flag in our systems and we automatically send a service engineer out there. That’s a huge component of a proactive quality assurance program, and it’s not just an application or software, even though I’m the software guy. We need to think beyond that simple aspect and grow, because if we can fix a problem before it starts, then you won’t have that bad QC point and that tech who reruns QC over and over again, which, just like Albert Einstein said, is insanity because they’re expecting a different result.
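To make the proactive-monitoring idea concrete, here is a minimal sketch in Python of flagging a slowly drifting laser-power reading before QC ever fails. The telemetry values, baseline, and drift threshold are hypothetical and are not Sysmex’s actual SNCS logic:

```python
from statistics import mean


def needs_service(readings, baseline, allowed_increase_pct=3.0, window=5):
    """Return True if the recent average laser power has drifted more than
    allowed_increase_pct above the baseline established at install."""
    if len(readings) < window:
        return False
    recent = mean(readings[-window:])            # average of the most recent readings
    drift_pct = (recent - baseline) / baseline * 100
    return drift_pct > allowed_increase_pct


# Hypothetical daily laser-power telemetry (arbitrary units) creeping upward.
telemetry = [100.2, 100.4, 100.9, 101.8, 103.0, 104.6, 106.1]
if needs_service(telemetry, baseline=100.0):
    print("Laser power drifting - open a proactive service ticket")
```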

Chris Wolski:
And it’s also more efficient. Not only are you getting that machine back and recalibrated, you’re also saving that technician’s time. You’re able to run the tests that clinicians and patients want answers to. And you’re saving materials too. You’re not having downtime on the machine, and you’re not wasting materials.

Daniel Johnson:
Absolutely. And that all aligns with improving the workflows. When we consider a proactive, real-time quality program, it ends up saving everyone’s time, and eventually the patient’s, because they get the results out right the first time. And then the key component is that they get them out consistently, because then it becomes a true quality control program.

Chris Wolski:
For sure. And you’ve talked a little bit about data a couple of times, and about the challenge of: you have the data, now what do you do with it? Talk a little bit about that. What does it mean to use your data properly for a QA or QC program? How should labs be looking at their data, at least in the context of a QA or QC program?

Daniel Johnson:
Another great question. Even before going into the laboratory with data, I think there’s a really neat analogy we can start with around consumer data. I actually recently got a new dog and I’m very happy with it.

Chris Wolski:
Congrats.

Daniel Johnson:
And I’m on my phone, my phone’s near me, and I’m talking about dog food and crates, and all of a sudden on my social media, guess what pops up? All the dog food and crates and stuff like that that Amazon wants to sell me, and all this fun stuff. We’re getting used to this consumerization of data and how it’s being used. But when we take that and move it into the laboratory or the broader healthcare space, health system, reference lab, it doesn’t matter, data becomes a very hoarded thing. There’s that saying that data is a gold mine, and we try to get as much of it underneath us as possible, but that doesn’t make sense to me, because if we hoard the data, we don’t know what to do with it, and I feel it ends up in the wrong spot.

Daniel Johnson:
It should be everyone’s data to share, especially if we’re trying to improve overall healthcare outcomes. And when we have QC and QA operational data, I think there’s even more benefit to sharing it. It’s not patient data, so I think people can adjust to sharing that QC and QA data first. With that information, I feel they can start seeing trends. They can put it into a bigger database. They can say, hey, a lab in California is seeing something a lab in Germany is seeing. And since we’re talking about workflows, by seeing these trends they’re not wasting their time trying to fix the same problem, because someone already found issues with the reagents, or with a particular lot number, or with the laser. Having this data available and shared is much more powerful than sitting on a huge mine of it with no real thought or ability to mine that data.

Chris Wolski:
Let’s talk about solutions then. You’re a data guy, and I think one of the big negatives, particularly in what you’re talking about if I understand it, is siloing data. That’s always a recipe for failure, no matter what industry you’re working in. Now, I know you talked a little bit about the internet of things. What about big data? Is that something that’s emerging because of things like the internet of things and all this data we’re getting? How do labs take that big data concept, and maybe you could talk a little bit about what that is, and make it mean something for their labs and for QA and QC programs?

Daniel Johnson:
Yeah. That’s another great question. For the internet of things, a lot of organizations are moving toward it. They probably have it, but they don’t call it that yet. But to level set so everyone understands, it’s basically an application where we go and safely pull data from a system so we can use that data for the preventative maintenance I was talking about, like checking on the lasers. There’s even new software I’ve been working on that auto-predicts the amount of reagents used on the system, so you get a really cool shopping list telling you how much to order for next month, and it predicts the seasonality and all that fun stuff. This is what we’re using the internet of things for, and at Sysmex, my organization, we process half a billion quality control points a year.
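As a rough illustration of the reagent “shopping list” idea, here is a minimal sketch that forecasts next month’s order from historical monthly usage with a simple seasonal adjustment. The numbers and the forecasting rule are hypothetical, not the actual Sysmex inventory software:

```python
def forecast_next_month(monthly_usage):
    """Forecast next month's reagent order from monthly consumption counts
    (oldest first), blending recent demand with a same-month-last-year
    seasonal factor. Purely illustrative logic."""
    recent_avg = sum(monthly_usage[-3:]) / 3                           # short-term demand level
    yearly_avg = sum(monthly_usage[-12:]) / min(len(monthly_usage), 12)
    # Seasonal factor: how the same calendar month ran last year vs. the yearly average.
    same_month_last_year = monthly_usage[-12] if len(monthly_usage) >= 12 else recent_avg
    seasonal_factor = same_month_last_year / yearly_avg
    return round(recent_avg * seasonal_factor)


# Hypothetical usage: twelve months of last year, then January-March of this year.
usage = [120, 115, 130, 140, 150, 160, 155, 150, 140, 135, 125, 130,
         125, 118, 133]
print("Suggested order for next month:", forecast_next_month(usage))
```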

Daniel Johnson:
So we’re sitting on a huge mountain of information, and we’re able to use that data. One of the challenges we were trying to overcome is how to smartly use all those half a billion points. What we’ve done is identify trends in QC. We’re able to look at drift. And one interesting point about the QC material we use on our hematology systems is that it’s an organic material. It’s made of a bunch of cells. And over time, those cells start degrading organically. They break down. The package insert someone gives you is perfect on day one, but when you look at day 30 or day 60, the ranges should organically drift because the cells are breaking down. But does the package insert state that?

Daniel Johnson:
No. What we’ve done with the big data is process all of this and, in real time, keep processing it as more people share this information via the internet of things into our cloud, and adjust those ranges as we get closer to the end of the lot. What that means for workflows in the lab is that when they run the QC sample on day one and on day 60, the ranges are dynamic, so they’ll have fewer QC issues because they actually have the right range at the right time in the life cycle of that QC product. And we’ve been doing this for years, actually. That’s why people really love our systems: their QC always seems to be in range, because it actually is in range, because we accommodate these organic drifts within our products. That’s one example of having big data in the laboratory.
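A minimal sketch of the dynamic-range idea: shift and widen the acceptance limits as the control material ages, instead of holding the day-1 package-insert range for the whole lot. The drift and spread parameters below are invented for illustration; in practice they would be derived from pooled peer data in the cloud:

```python
def adjusted_range(day1_mean, day1_sd, days_open,
                   drift_per_day=-0.005, sd_growth_per_day=0.004):
    """Return (low, high) acceptance limits for a control at a given age.
    The mean drifts and the spread widens as the cells degrade."""
    mean = day1_mean + drift_per_day * days_open        # organic drift of the cells
    sd = day1_sd * (1 + sd_growth_per_day * days_open)  # slightly wider spread over time
    return mean - 2 * sd, mean + 2 * sd


# Hypothetical WBC control: day-1 target 7.5 x10^9/L, SD 0.20.
for day in (1, 30, 60):
    low, high = adjusted_range(7.5, 0.20, day)
    print(f"Day {day:2d}: acceptable range {low:.2f} - {high:.2f}")
```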

Chris Wolski:
Yeah, that’s great. Now, QC and QA programs are really, and I’m not a laboratorian, so this is as I understand it, about standardizing processes. I think you’ve touched on this a little bit. How are you addressing that at Sysmex? And how should labs, at least in general, address standardizing their processes and workflows?

Daniel Johnson:
Standardizing workflows, there are two major components to it in my mind. First is automation. As soon as we start automating our rules, our processes, our procedures, then we can have consistency, which leads to my second component, which is standardization. With automating and standardizing, we’re able to predict when QC is going to go out. We’re able to make sure that only good QC points are sent to our big data, because we don’t want to contaminate it with a bad QC point. And what I mean by automation is this: our package insert says to invert our QC vial a couple dozen times. It’s been a long time since I read a package insert, but it says to invert it so many times. If we have a lab in Hawaii that’s doing it 10 times and a lab all the way across the globe in Taiwan that’s doing it 15 times or 30 times, then that QC point may not fall in the same spot.

Daniel Johnson:
We need to start considering automation, and we can’t just look at software for that. We’ve actually very thoughtfully designed new automation components that auto-mix the sample. It does a consistent [inaudible 00:15:21] 50 times, so we’re getting the right points. Then it comes to my desk: okay, now that we have this really cool system that automates and standardizes these things, I’m thinking, oh great, now I have a peer mean group with two different populations, one that uses this system that highly automates and standardizes it, and another that mixes it manually by hand. How do you then start differentiating those two so you can have the right peer mean group? Those are some examples of what happens when you get into the weeds of standardizing and automating: not everyone can do both, because they might not have the right components, and so on and so forth.
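To illustrate the peer-group problem Dan describes, here is a minimal sketch that splits submitted QC results by mixing method before computing each peer mean, so the automated and manual populations are compared separately. The labs, field names, and values are hypothetical:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical QC submissions tagged with how the control vial was mixed.
results = [
    {"lab": "A", "mixing": "automated", "wbc": 7.42},
    {"lab": "B", "mixing": "automated", "wbc": 7.45},
    {"lab": "C", "mixing": "manual",    "wbc": 7.61},
    {"lab": "D", "mixing": "manual",    "wbc": 7.30},
    {"lab": "E", "mixing": "automated", "wbc": 7.44},
]

# Group results by mixing method so each peer mean reflects a single population.
groups = defaultdict(list)
for r in results:
    groups[r["mixing"]].append(r["wbc"])

for method, values in groups.items():
    spread = stdev(values) if len(values) > 1 else 0.0
    print(f"{method:9s} peer group: n={len(values)}, mean={mean(values):.2f}, SD={spread:.2f}")
```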

Chris Wolski:
That brings up the automation part of this that you were just talking about, and the standardization. Also, I just want to go back and revisit something we were talking about a couple of minutes ago, the idea of the art of QC and QA. Even with these standardized processes and such, as a technician or laboratory manager, you have to build flexibility into the system, because there is problem solving that you’re still going to have to do if there is an issue, right?

Daniel Johnson:
Correct.

Chris Wolski:
Or am I misunderstanding what you’re saying here?

Daniel Johnson:
Yeah. That’s a really great point: how do we add the art and make everyone an artist for QA and QC? What we’ve actually done is start designing concepts of automated troubleshooting, so we take the complexity out of it. With our newest generation of QC software tools, when we start seeing QC problems on the analyzer, we identify them. We show it as, hey, we have an issue here. And we actually give them recommendations on how to troubleshoot it. We mine our big data to say, hey, once error 1, 2, 3, 4, 5 comes up, the best solution is to do a flow cell cleanse. And once they do that cleanse, then rerun QC, instead of immediately rerunning QC, because if you start doing that, you’re doing what Albert Einstein said again was-
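A minimal sketch of the “paint by numbers” troubleshooting idea: map an analyzer error code to a recommended corrective action, and only rerun QC after that action is taken. The error codes and actions below are illustrative, not an actual Sysmex error table:

```python
# Illustrative mapping of error codes to guided next steps.
RECOMMENDED_ACTIONS = {
    "12345": "Run a flow cell cleanse, then rerun QC.",
    "20871": "Check reagent lot and expiration, replace if expired, then rerun QC.",
    "30440": "Verify background counts after a blank cycle before rerunning QC.",
}


def troubleshoot(error_code):
    """Return guided next steps instead of letting the operator blindly rerun QC."""
    return RECOMMENDED_ACTIONS.get(
        error_code,
        "No automated recommendation - escalate to service before rerunning QC.",
    )


print(troubleshoot("12345"))
```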

Chris Wolski:
Yeah. It’s that insanity process all over again.

Daniel Johnson:
We’re trying to do the whole Bob Ross paint-by-numbers. We’re building that into our software because we know these are busy laboratories. And then, adding on to the software design aspect of standardization and consistency, we’ve also designed it to have one dashboard where you can see everything at once, so it’s standard. You can say, hey, I see this problem here, and identify it and see these types of trends automatically. There’s a lot of consideration where we’re trying to, I guess, artfully design our software so it takes a lot of the mystery out and makes everyone an expert.

Chris Wolski:
Okay, great. And certainly that’s where your big data comes into play again, with the problem-solving aspects. All right. You’ve mentioned international work a few times, and I know Sysmex has an international presence as well. Beyond the US, what are some of the challenges that you’re seeing across the globe?

Daniel Johnson:
There are a lot, good challenges and not, there are a lot of them. I’ve had the opportunity to observe people run patients and run QC, and sometimes QC is optional in some countries. They don’t run it. They just assume the analyzer’s good to go. They only see issues when they’re running patients, and they’re not even doing patient moving averages. They’re just watching it: wow, everyone’s really low for white blood cells, there might be a problem. That’s how [inaudible 00:19:08] QC. We’re talking about best practices, but I’ve seen bad practices there. It’s about trying to get everyone to at least a basic level of what QC is and to run it consistently across the globe. And there are things we can do, mechanisms, engineering controls, like using software applications that warn you and recommend the steps you should be taking. A lot of the world is also standardizing to ISO standards, but there are a lot of odd interpretations of those types of standards.
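For readers unfamiliar with patient moving averages, here is a minimal sketch that tracks a rolling mean of patient WBC results and flags the analyzer when it drifts out of an expected band. The window size, limits, and data are hypothetical:

```python
from collections import deque


def monitor_wbc(results, window=20, low=6.0, high=9.0):
    """Yield (index, rolling_mean, flagged) for each result once the window fills."""
    recent = deque(maxlen=window)
    for i, value in enumerate(results):
        recent.append(value)
        if len(recent) == window:
            rolling = sum(recent) / window
            yield i, rolling, not (low <= rolling <= high)


# Hypothetical patient WBC stream that slowly shifts low, as in Dan's anecdote.
stream = [7.5] * 30 + [5.2] * 30
for i, rolling, flagged in monitor_wbc(stream):
    if flagged:
        print(f"Result #{i}: rolling WBC mean {rolling:.2f} is out of band - investigate the analyzer")
        break
```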

Daniel Johnson:
Even in the United States we have CAP and CLIA, and globally [inaudible 00:19:46] in certain areas. But those standards are interpreted a little differently everywhere. For example, when do you run QC for a reagent change? When you have concentrated reagents, how often do you have to run QC, whether it’s the concentrate or the diluted form? There are a lot of nuances, and a lot of innovative vendors are coming out with these types of solutions, but they’re not really addressing QC or quality programs as part of launching new reagents or new operational improvements, like concentrated reagents. And on the other hand, beyond those countries or customers who only run QC once a week or not at all, we have countries like Germany that have really [inaudible 00:20:25] rules, which are really strict. They’re complex. They don’t make sense in all disciplines. These are my opinions, by the way.

Chris Wolski:
Sure. Of course.

Daniel Johnson:
It’s hard to design software dedicated to really [inaudible 00:20:37] rules, but we had to accommodate that. They might be at a level that’s a super high peak of quality control before patient results go out. And then we have some people who are just happenstance: yeah, I’ll run it when I have time. So it’s about trying to get everyone onto what a baseline is, what excellence looks like, and where the future will lead. And that could be some really [inaudible 00:20:59] hybrid or something along those lines. It’s hard to get everyone on the same page.

Chris Wolski:
It certainly is. And as you alluded to, there are some cultural issues that you come up against as well, in ways that you don’t really expect. It certainly can be a challenge. Dan, I’m going to give you a superpower right now. If there was one thing you could snap your fingers today and change in terms of QC, not anything else, just QC or QA, what would that be and why?

Daniel Johnson:
Wow. I think the superpower would be that you would never have to run QC again. And I think future systems and designs will have a load of sensors. They’ll have some great technology in them, abilities to self-heal and self-calibrate, so you won’t even miss a beat. Why even have to run QC? It could be understanding the deep algorithms between the sensors and the patients to say, yeah, we noticed there’s a little deviation, and it would auto-QC itself by running patients or through the reagents themselves. I think there has to be a better way than stopping your entire workflow for the day, shutting everything down, running QC, and then waiting for the results so you can process again. It needs to be deeply integrated. And if we can do it without QC material... I know there are probably people who are QC [inaudible 00:22:31]

Chris Wolski:
Yeah. They’re rolling their eyes now. This could be controversial, I think, Dan.

Daniel Johnson:
Yeah. They’re just giving me the shifty eye, or the shifty ear, at the moment. But it’s just a concept, and it doesn’t mean QC material is going away tomorrow. I think there are really cool ways that we can maybe think about it as a reagent, or a pack, or something incorporated into the reagents somehow. There’s a better way to approach it. And then, to add to that, I can also see the future of QC being recorded in a way that no one has ever imagined before, because I think that’s one of the big issues in the laboratory today: they see this QC material, they run it, they put the results in a binder somewhere in the back, and they forget about it. And only when an inspector walks in do they pull it off the shelf and hopefully remember to sign it off.

Daniel Johnson:
Another thing we can talk about for the future is recording this information into a blockchain structure. If people don’t know what a blockchain is, it’s what cryptocurrencies are built on; in other words, it’s an immutable record. If I went into the hospital today and got a CBC, and they said okay, and I got my results and everything looked normal, I would love to pinch and zoom into my result so deeply that I would know the raw materials of that QC, whether it passed or didn’t pass, and how many times it passed. It becomes part of the patient record. And if we start having all of this information in this immutable record, this blockchain, then we can share it with other vendors, with bigger data sets. All of this starts becoming open source and available, so people can mine the data, generate actionable insights, and have these types of records in the right format so they’re easily digested and able to help the bigger picture.
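As a conceptual sketch of the blockchain idea, the following hash-chains QC records so that altering an earlier entry breaks the chain and is detectable. This is only an illustration of the immutable-record concept, not a production ledger:

```python
import hashlib
import json


def add_record(chain, payload):
    """Append a QC result whose hash covers the payload plus the prior record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)


def verify(chain):
    """Recompute every hash; any tampering with an earlier record is detected."""
    prev_hash = "genesis"
    for record in chain:
        expected = {"payload": record["payload"], "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if digest != record["hash"] or record["prev_hash"] != prev_hash:
            return False
        prev_hash = record["hash"]
    return True


ledger = []
add_record(ledger, {"control": "WBC level 2", "result": 7.44, "status": "pass"})
add_record(ledger, {"control": "WBC level 2", "result": 7.47, "status": "pass"})
print("Chain valid:", verify(ledger))
ledger[0]["payload"]["status"] = "fail"   # tamper with an earlier record
print("Chain valid after tampering:", verify(ledger))
```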

Chris Wolski:
Well, I like how you think, Dan. I think that sounds like a good idea to me. I’m not giving you the sideways eye for sure. All right. And unfortunately with that, we’ve reached the end of our time today. Dan, thanks so much for joining me and giving me some really great insights on the QC and QA process. I know you’ve really opened my eyes and I expect you did the same for our clinical lab audience as well. I also want to, again, thank your organization, Sysmex America, for sponsoring this special edition of Clinical Lab Chat and as always, I want to thank you, the laboratorians, for listening and for all you do to contribute to the health of the public. You really are unsung heroes and we’re hoping that we’re singing your song a little bit with Clinical Lab Chat. Again, look for more episodes of Clinical Lab Chat in the future and visit us online at clpmag.com and on all of the major social media platforms. And until next time.