Guidelines bolster at-risk testing for chronic kidney disease

Interview by Steve Halasey

Robert H. Christenson, PhD

For laboratorians as for clinicians, keeping up with changing standards of practice can be challenging—especially in the face of advancing technologies. Fortunately, many medical specialty societies play an active role in compiling and updating guidelines for healthcare professionals, often including important recommendations for clinical lab personnel who perform tests to diagnose or monitor patients with serious and life-threatening diseases.

A case in point is the evolution of current guidelines for the evaluation, classification, and risk stratification of chronic kidney disease (CKD), the earlier version of which was published by the National Kidney Foundation’s Kidney Disease Outcomes Quality Initiative (NKF-KDOQI) in 2002. A decade later, making use of evidence developed since those guidelines were published, the global organization Kidney Disease: Improving Global Outcomes (KDIGO) published an updated version of the guidelines. Among several novel recommendations—not all of which have met with universal approval—the 2012 KDIGO guidelines proposed that CKD status be classified by considering the patient’s estimated glomerular filtration rate (eGFR), level of albuminuria, and the cause of the kidney disease.

To interpret the KDIGO guidelines for use in the United States, a working group convened by NKF-KDOQI compiled a commentary that appeared in the May 2014 issue of the American Journal of Kidney Diseases.1 Now, it’s up to healthcare professionals—including laboratorians—to pick up, disseminate, and implement the latest guidelines in their own practices.

To find out more about how the current CKD recommendations related to laboratory and point-of-care testing are being circulated and adopted, CLP spoke with Robert H. Christenson, PhD, DABCC, FACB, professor of pathology and medical and research technology at the University of Maryland School of Medicine, and past president of the American Association for Clinical Chemistry.

CLP: Urine screening for the detection of kidney disease has been around for many years, but organizations such as the National Kidney Foundation (NKF) are now emphasizing the importance of early detection. Has something changed?

Robert H. Christenson, PhD, DABCC, FACB: What has changed is that the number of people now believed to be at risk for developing kidney disease has increased. Certainly, the nationwide epidemic of obesity, and the fact that our population is aging, both trend in the direction of having a greater number of people at increased risk for developing kidney disease.

Trends such as these contribute to the need for more screening to detect kidney disease, because if we detect it early, we can mitigate its impact on the person’s overall health. So early detection is very, very important for reducing adverse outcomes.

CLP: What groups are now considered to be at high risk for kidney disease?

Christenson: In 2012, KDIGO announced new recommendations for testing that specifically highlight high-risk groups, including members of certain US ethnic minorities, those with a family history of kidney disease, and folks who have diabetes or hypertension—that is, high blood pressure. In addition, the recommendations also included anyone over the age of 60, which of course encompasses the general age range of the Baby Boomer generation.

CLP: Why have people over 60 years of age recently been added to the list of high-risk groups?

Christenson: Regardless of the proposed mechanism, the data show that individuals older than 60 years of age are at high risk, evidently as part of the aging process. Looking at studies such as the Atherosclerosis Risk in Communities (ARIC) study, or the Cardiovascular Health Study (CHS)—epidemiological studies involving large populations—one can identify natural cut points for increased risk.2,3 And the truth of it is, that’s what the evidence shows.

Although we need to consider the development of kidney disease risk as a continuum—and we know that there can be significant differences between an individual’s chronological age and biological age, both for better and worse—the bottom line is that age greater than 60 is associated with greater risk, according to the evidence.

CLP: With updated guidelines, an awful lot of work has gone into standardizing test methodologies. Has achieving such standardization been important in making possible the current emphasis on early detection?

Christenson: Standardizing and harmonizing measurements have certainly been an important part of early detection. From the point of view of nephrologists, internal medicine physicians, and other clinicians, probably the most important achievements have come through the evidence available from large epidemiological studies that have demonstrated the importance of test measurements. Those professionals have played an important role in articulating the need to improve the quality of test measurements.

But from the point of view of laboratorians, there’s no doubt that it is a significant achievement to standardize test practices and harmonize test results, so that a number representing a patient’s risk of CKD means the same thing across regions and around the world.

The way that these come together is apparent in the KDOQI commentary that appeared last May in the American Journal of Kidney Diseases, where the authors described a standardized two-dimensional way of evaluating a patient’s risk for CKD.1

On one dimension, the method considers the level of albumin in urine, termed albuminuria. If you think of the glomerulus of the kidney as a filter, increased pressure will force albumin through that filter and into the urine. So albuminuria helps to measure the patient’s physiological stress at the level of the glomerulus, indicating how much albumin has been passed or spilled into the urine. As a biomarker, the level of albumin in urine is important for examining that organ-specific piece of kidney disease. From the point of view of the laboratorian, it is important that such measurements are standardized and harmonized so they are comparable from one region to another and from one manufacturer’s instruments to those of another company.

In the other dimension, the method considers the patient’s eGFR, a measure typically calculated from the individual patient’s creatinine level, age, sex, and ethnicity, and expressed in mL/min/1.73 m². GFR levels were previously stratified into five categories, but the 2012 KDIGO guidelines added a sixth category in order to better distinguish patients with a “mild to moderate” decrease in GFR from those with a “moderate to severe” decrease. Taken together, the six eGFR categories reflect the stages of a patient’s disease status on the continuum from normal kidney function to end-stage renal disease.
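
For laboratorians who want a concrete sense of how those numbers are produced, the sketch below shows, in Python, one way to estimate GFR with the 2009 CKD-EPI creatinine equation and map the result onto the six KDIGO categories. The coefficients and cut points are quoted here for illustration only and are not drawn from the commentary itself; any clinical implementation should be checked against the published equation and guideline tables.

```python
# Illustrative sketch: estimating GFR with the 2009 CKD-EPI creatinine equation
# and mapping the result to the six KDIGO G categories. Coefficients and cut
# points are included for illustration; verify against the published sources
# before any clinical use.

def ckd_epi_egfr(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Return eGFR in mL/min/1.73 m^2 from serum creatinine (mg/dL), age, sex, and race."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def gfr_category(egfr: float) -> str:
    """Map eGFR to the KDIGO categories (G3 was split into G3a and G3b in 2012)."""
    if egfr >= 90:
        return "G1 (normal or high)"
    if egfr >= 60:
        return "G2 (mildly decreased)"
    if egfr >= 45:
        return "G3a (mild to moderate decrease)"
    if egfr >= 30:
        return "G3b (moderate to severe decrease)"
    if egfr >= 15:
        return "G4 (severely decreased)"
    return "G5 (kidney failure)"

# Example: a 65-year-old woman with a serum creatinine of 1.1 mg/dL
egfr = ckd_epi_egfr(1.1, age=65, female=True, black=False)
print(f"eGFR = {egfr:.0f} mL/min/1.73 m^2, category {gfr_category(egfr)}")
```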

Over the years, we’ve learned a lot about how to stratify CKD patients according to their level of risk, and this is why our guidelines have continued to evolve since the beginning of this century. The current categories are based on epidemiological studies showing that patients at various eGFR levels, when there is also evidence of dysfunction at the level of their glomerulus, should be considered to be at higher risk.

CLP: Kidney disease is progressive and, left unchecked, can lead to renal failure. Nowadays, with earlier diagnosis, is there a point at which clinicians feel that interventions have a better chance of making a difference? Is that why fine-tuning the disease stratifications is so important?

Christenson: Absolutely. Although experts have divided the stages of kidney disease into categories roughly corresponding to the increasing severity of the disease—G1, G2, G3a, G3b, G4, and G5—we should also think of this as a continuum of risk. Whatever category may be assigned, the lower a patient’s eGFR, the higher the risk. And as we age, we naturally begin to progress downward along that continuum.

Other categories of risk have their own progressions that contribute to the overall risk. For instance, years ago it was thought that a normal systolic blood pressure was 100 plus the patient’s age. But no one would now accept 155 mmHg as a normal blood pressure for a 55-year-old; the systolic pressure needs to be down closer to 130 or 140 mmHg. And as more has been learned about the contribution of hypertension to kidney disease, the need to treat hypertension aggressively as one component of slowing the disease has become increasingly apparent.

Certainly, it has been very helpful to be able to stratify risk through the use of urine albumin values, and to gain insight into what a patient’s blood pressure is at the level of the glomerulus. This information has enabled physicians to initiate therapies that can effectively manage the risks they detect.

CLP: Physicians typically diagnose and treat patients based on national guidelines. In the case of kidney disease, what is the NKF recommending so far as diagnostic testing is concerned?

Christenson: In 2012, KDIGO issued updated international guidelines to clarify the definition and classification of CKD, and to provide recommendations for the management of patients in at-risk groups. Compared to the previous guidelines of 2002, the 2012 KDIGO guidelines expanded the CKD classifications to incorporate the cause of a patient’s kidney disease, and established the role of albuminuria (categorized from normal to severely increased as A1, A2, and A3) as a factor to be considered with GFR when staging CKD. The importance of a patient’s urine albumin level has become better recognized in just the past 2 years.

The NKF-KDOQI paper in the American Journal of Kidney Diseases provided a further commentary on the KDIGO guidelines, interpreting them for US-based conditions, and identifying diagnostic and treatment strategies with US patients in mind.1 The ultimate goal is to identify kidney disease at its earliest stages, so that it can be effectively treated and in some cases reversed. In line with the need to screen millions of newly defined at-risk individuals, the commentary recommended testing for a patient’s albumin:creatinine ratio (ACR), which normalizes the measured albumin for urine concentration, as the best method for evaluating albuminuria.
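
As a rough illustration of what that calculation involves, the Python sketch below computes an ACR from a spot urine sample and assigns one of the KDIGO albuminuria categories (A1, A2, A3). The unit conversion and the cut points of 30 and 300 mg/g reflect the categories as commonly described; they are included as an illustration rather than as a clinical tool.

```python
# Illustrative sketch: computing a spot-urine albumin:creatinine ratio (ACR)
# and assigning a KDIGO albuminuria category. Cut points are in mg of albumin
# per g of creatinine; verify against the guideline before clinical use.

def acr_mg_per_g(urine_albumin_mg_l: float, urine_creatinine_mg_dl: float) -> float:
    """ACR in mg/g: albumin (mg/L) divided by creatinine converted from mg/dL to g/L."""
    creatinine_g_l = urine_creatinine_mg_dl * 10 / 1000  # mg/dL -> mg/L -> g/L
    return urine_albumin_mg_l / creatinine_g_l

def albuminuria_category(acr: float) -> str:
    if acr < 30:
        return "A1 (normal to mildly increased)"
    if acr <= 300:
        return "A2 (moderately increased)"
    return "A3 (severely increased)"

# Example: urine albumin of 45 mg/L with urine creatinine of 100 mg/dL
acr = acr_mg_per_g(45, 100)
print(f"ACR = {acr:.0f} mg/g, category {albuminuria_category(acr)}")
```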

Although a lab-based test is needed to obtain a patient’s eGFR, the test to detect albuminuria is a pretty simple urine test that can be performed at the point of care. Using such a test, large groups of people can be screened in a fairly cost-effective manner, with convenience and efficiency for both the patient and the practitioner.

CLP: The NKF-KDOQI commentary argued that it was premature for the KDIGO guidelines to have included the cause of a patient’s kidney disease as a factor in determining therapy. Why did the commenters adopt that view? Does it have any effect on the recommendations for performing diagnostic testing?

Christenson: I believe that the etiology of the disease will inevitably have a decisive influence on recommendations for treatment. If diabetes mellitus is the root cause or a contributing factor in a patient’s renal insufficiency, for example, then that’s going to receive targeted treatment right away. Other sorts of interventions might be required for a patient whose kidney disease is a result of hypertension, or of the genetic makeup of an ethnic group.

But renal injury is something of a catch-all. Many different diseases—including heart failure—may end up causing some level of renal insufficiency. Refining our understanding of how those causative factors operate—so that the fundamental cause can be treated with effective therapy—is really the first step of what we need to do.

CLP: It has been more than 6 months since KDOQI published its commentary on the 2012 KDIGO guidelines. How would you gauge the success of efforts to inform specialists and laboratorians about these practice recommendations?

Christenson: NKF has done a great job of publicizing its guidelines, but it can take years for guidelines to become fully implemented. Nephrologists are believers and agree with the guidance, I think, and overall, the guidelines appear to be having an impact.

There is greater recognition now for the diagnostic value of the albumin:creatinine ratio. And although we’ve known for years that renal function deteriorates as people age, establishing a risk group defined by age has helped to make screening for this population better accepted.

I believe we’re going in the right direction, but we still have a way to go.

CLP: Why did KDIGO (and the KDOQI working group) recommend ACR as the preferred test for screening of kidney disease?

Christenson: Once again, this was a decision based on clinical evidence. ACR is a relatively simple test—just the amount of albumin normalized to the amount of creatinine that’s spilled into urine. But when one looks back through the literature, it’s very clear that this test is highly predictive of patient outcomes.

ACR has become a focal point because it’s a very robust test that is independent of other risk factors and is inexpensive. The virtues of this test have become more and more recognized as we’ve been able to look at studies involving larger populations, such as the ARIC study and other large-cohort studies.

CLP: From the payor perspective, screening a large population for any health condition is a very expensive proposition. Adding people over 60 years of age increases the at-risk population to be screened for kidney disease by 1 billion people globally. What is the most effective way to screen this extremely large population?

Christenson: We are currently witnessing primary care undergoing a long-overdue transition to an emphasis on detecting kidney disease early. Finding and treating disease early enables physicians to help mitigate many of the bad outcomes that might be discovered later, and not only to improve individuals’ health and healthcare, but also to make it more cost-effective.

It’s a failure of the system when a patient progresses all the way to dialysis without meaningful intervention. It’s huge in terms of cost, certainly, but also in terms of the patient’s quality of life and mortality.

So taking a look at that evidence, it’s no wonder that healthcare is moving toward earlier detection of disease. And there are certain tests—like ACR—that can help us recognize and treat disease earlier, so that we can get folks into the right venues of care, and then reduce or mitigate many of the bad outcomes.

It’s certainly a fair question for payors to ask whether there is sufficient evidence to support the use of a particular test. But so long as the test is proven to be effective for screening, it should be strongly considered for coverage and use. In this regard, by the way, the evidence supporting the use of ACR is really very good and consistent.

Guidelines must also take into account their broad impact on US healthcare economics. This is why some recommendations may be set to initiate intervention at a certain level of risk—say, 10%—instead of a lower, more-aggressive level of 5%. The simple fact may be that implementing therapies at the more-aggressive level would add millions of people to the pool, and the sheer size of that patient population would make it impossible for payors to cover the costs. In the case of the KDIGO recommendation to use ACR and to screen people over 60 years of age, I think economic impact was weighed as part of the decision.

CLP: Over the past decade or so, does it seem that the clinical community, specialty societies, and payors are increasingly using the results of health economics and outcomes research to guide what practices are adopted, and how they are implemented?

Christenson: Absolutely. Big trials that have been funded by the National Institutes of Health and other government agencies have contributed to this purpose. We have already learned a great deal from the ARIC study, the Framingham Heart Study, and others. And researchers are continuing to mine the information from those sets of individuals, to amplify our understanding of both adverse outcomes and the evidence for improving outcomes.

In short, these are the building blocks of evidence-based medicine. But you can’t implement evidence-based medicine without first having the right population to query in the right way. These explorations are essential, for example, in determining what tests might be useful for screening, and so forth. Such efforts should be applauded and supported as part of our efforts to advance healthcare during this century.

CLP: When performing an ACR test, what are the differences in the sensitivity and specificity of POC urine strips and automated point-of-care analyzers? Why would a lab choose to use one format over another?

Christenson: This is an area where the perfect can readily become the enemy of the good. While we are searching for the perfect test with 100% sensitivity and 100% specificity, a balance must be struck so that the most good can be done for populations as a whole.

Painting in broad strokes, there are essentially two test formats for measuring a patient’s albumin:creatinine ratio. The first format is a semiquantitative method that makes use of strips and a relatively simple instrument. The second is a quantitative method, typically using a test-specific cartridge for sample processing and an instrument reader for detection, signal processing, and reporting the value. A major difference is that there can be more variability in sampling with the semiquantitative method.

Some studies have compared the performance of these tests in the hands of clinical operators (for instance, nurses or other healthcare professionals) to performance by professional laboratorians who have been trained to be attentive to laboratory details.4

For the semiquantitative method, the outcome of the comparative study was mixed with regard to sensitivity—the proportion of positive test results among people confirmed to have kidney disease. Ideally, one would like to see sensitivity above 90%, but neither lab professionals nor clinicians achieved that level when using a strip-based test. Lab professionals did well, achieving a sensitivity of 83% relative to central lab results—meaning that 17% of the individuals who actually had albuminuria produced falsely negative results that would hopefully be caught on subsequent testing. Clinical professionals didn’t do quite so well, achieving a sensitivity of just 67%, roughly doubling the proportion of affected patients producing false negatives.

With regard to specificity—the proportion of negative test results generated for people confirmed not to have kidney disease—the outcome of the comparative study was very good. For all operators—clinical professionals and laboratorians together—the results showed agreement in the range of about 93% to 96% with the central lab.
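
For readers who want the arithmetic behind those percentages, the short Python sketch below computes sensitivity and specificity from confusion-matrix counts. The counts are invented purely for illustration and are not taken from the cited study.

```python
# Back-of-the-envelope sketch of how sensitivity and specificity are derived,
# using hypothetical counts (not data from the cited meta-analysis).

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of people with disease who test positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of people without disease who test negative."""
    return true_neg / (true_neg + false_pos)

# Hypothetical example: of 100 people with confirmed albuminuria, 83 are flagged
# by the strip test; of 200 people without albuminuria, 188 test negative.
print(f"Sensitivity: {sensitivity(83, 17):.0%}")   # 83%
print(f"Specificity: {specificity(188, 12):.0%}")  # 94%
```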

In my opinion, the performance differences relating to sensitivity are most likely a result of education and training. Thus, if we could train clinical operators on the important information and caveats that apply to the use of strip-based devices, we could probably improve their performance substantially.

When using a quantitative method, which would include instruments such as the DCA Vantage, the performance of all operators improves significantly. With regard to sensitivity, clinical operators achieved a sensitivity of about 91%, while the average for all operators was 96%. So, when using instrumented quantitative tests, there is much less of a disparity between the performance of clinical operators and professional laboratorians, and all operators achieved very good sensitivity.

These comparisons demonstrate that the sensitivity of ACR tests for detecting renal disease is much better with instrumented systems such as the DCA Vantage. But it’s important to remember that such choices are always subject to trade-offs. Instrumented systems are typically more expensive, for example, and they may be less available for public health screening.

CLP: Are there equivalent differences between automated quantitative point-of-care analyzers and central lab methods? Do POC devices still compare favorably?

Christenson: That depends. Although POC analyzers based on strip testing are simple, cost-effective, and accessible to millions who could benefit, a recent meta-analysis in the Annals of Internal Medicine showed that the sensitivity of strip testing is lower than that of central lab methods. On the other hand, the meta-analysis also showed that the POC DCA Vantage instrument has the same sensitivity (96%) and specificity (98%) as lab tests.4

A benefit of using a POC device is that the doctor can provide test results to the patient almost immediately—while they are directly engaged in discussing the patient’s disease—so that there can be a face-to-face discussion about the patient’s risk and management plan. There’s likely a real advantage to being able to accomplish all of this promptly, rather than having to call the patient back, or sending a letter that might not convey the same message and impact.

The immediacy of POC results is particularly important for monitoring, enabling the physician to intervene when, say, a patient hasn’t been compliant with treatment regimens, and to remind the patient how important that is. Or alternatively, if the patient’s numbers reflect their compliance, enabling the physician to offer encouragement, reassurance, and a pat on the back. This is a way to say, “Great job, your numbers are showing that what we are doing is really working.”

There have been a few studies looking at the benefits of POC testing for hemoglobin A1c, and the results suggest that patients whose tests are conducted at the point of care have better outcomes than those whose samples are sent to a lab, with a later follow-up letter or phone call from their doctor.

CLP: In the case of ACR testing, the difference in turnaround time for POC devices versus central lab tests is quite striking.

Christenson: Exactly. The DCA Vantage produces ACR results in about 7 minutes, allowing the clinician to discuss and intervene as appropriate in real time. By contrast, a lab result may take hours or even days to be returned, and then somebody has to find that result, and write the letter or make phone or e-mail contact with the patient. The patient and physician might not have a discussion about the results until the next office visit, which could be months later. In the context of ACR, we would hypothesize that using a POC device would be beneficial for patient health for all of these reasons.

CLP: Last year, the US Department of Health and Human Services adopted new rules permitting patients to receive their own test results directly—including those generated through the use of POC devices. How do current technologies facilitate this goal?

Christenson: This is a really important change that enables patients to take more responsibility for their own health. The idea is that, with information, patients will become more invested and more engaged in their own healthcare, instead of just relying on the person in the white coat that they see only periodically. It encourages patients to follow their own health more consistently.

Now, many health systems are making data available for patients to monitor themselves, so that both patients and their physicians are examining the data. A lot of labs now offer secure online sites where test results can be accessed by both patients and their physicians. They are intended to help patients interact with their caregivers, including appropriate specialists who can help patients interpret what their lab tests might mean.

The important first step for such a system is to get the patient’s data into an electronic health record. POC devices such as the DCA Vantage are interfaceable with laboratory and other health information management systems—large data information systems that serve as repositories and can pull together all of the information about an individual patient. And from there, the patient and healthcare professional can see the data, evaluate options, and make decisions based on that information.

CLP: How do you see practice guidelines for CKD developing in the future?

Christenson: In the past, changes have been necessary, in part, because new information has become available. Studies among large populations have helped us to get to the evidence, which has enabled us to develop better evidence-based guidelines. Having said that, as we learn more, guidelines will evolve along with the rest of healthcare. It’s fairly standard practice to update guidelines every 5 years or so, so that new information can be put into the hands of clinicians and other health professionals.

A related challenge for the healthcare community is how to get out the word about new guidelines, whether they relate to a test, or a treatment, or a test that guides management. The Internet has been hugely important for doing that, but we need to continue thinking about innovative ways to spread the message, and to identify which channels work most effectively for healthcare.

Ensuring that new guidelines are circulated broadly has always been challenging, and an even greater challenge is to ensure that clinicians are accepting, adopting, and adhering to them. Professional uptake can vary greatly, but it can really be assisted by taking a multimedia approach to getting the word out.

Steve Halasey is chief editor of CLP.

REFERENCES

1. Inker LA, Astor BC, Fox CH, et al. KDOQI US Commentary on the 2012 KDIGO clinical practice guideline for the evaluation and management of CKD. Am J Kidney Dis. 2014;63(5):713–735; doi: http://dx.doi.org/10.1053/j.ajkd.2014.01.416.

2. Atherosclerosis Risk in Communities (ARIC) study description. Available at: https://www2.cscc.unc.edu/aric. Accessed December 23, 2014.

3. The Cardiovascular Health Study. Available at: https://chs-nhlbi.org. Accessed December 23, 2014.

4. McTaggart MP, Newall RG, Hirst JA, et al. Diagnostic accuracy of point-of-care tests for detecting albuminuria: a systematic review and meta-analysis. Ann Intern Med. 2014;160(8):550–557; doi: http://dx.doi.org/10.7326/M13-2331.