Some DLCO errors the 2017 standards will probably fix

Last week I ran across a couple of errors in some DLCO tests that I don’t remember seeing before, or at least not as distinctly as they appeared this time. If I hadn’t been looking carefully I could have missed them, but both sets of errors will be a lot more evident when the 2017 ERS/ATS DLCO standards are implemented.

The first error has to do with gas analyzer offsets. What alerted me was a set of irreproducible DLCO results.

                       Test 1:   Test 2:   Test 3:   Test 4:
DLCO (ml/min/mmHg):     24.53     17.21     12.91      6.74
Inspired Volume (L):     1.99      2.06      2.32      2.26
VA (L):                  3.83      3.52      3.63      2.60
Exhaled CH4:            43.27     49.19     54.80     74.14
Exhaled CO:             16.09     23.15     31.39     49.46

When I first looked at the graphs for each test, there wasn’t anything particularly evident until I pulled up the graph for the fourth DLCO test:

This graph showed that the baseline CH4 and CO readings were significantly elevated, but this hadn’t been evident in the previous tests.

Our lab software does not report the baseline CO and CH4 readings except as part of the DLCO graphs. Fortunately, there is an option to download the raw DLCO test data and when I did this it was evident that the zero offset for the CH4 and CO gas analyzer was changing dramatically from one test to the next.

Prior to each DLCO test our lab software checks the CH4 and CO zero offsets and gain. This is not a calibration, despite the fact that it puts the test system through exactly the same steps as a “real” calibration, and it evidently does not compare the values it measures against those from the “real” calibration to check for discrepancies. What it does do is use them as scaling factors, and this is what throws the calculated DLCO off.

Specifically, the system assumes that the difference between the baseline (room air) and inspired CH4 and CO concentrations is 100%, but also that the zero determined during a “real” calibration still applies. Using the inspired concentrations as an anchor, it scales the exhaled CH4 and CO accordingly. This means that if the baseline CH4 and CO read below zero (a negative zero offset), the exhaled CH4 and CO will be reduced relative to what they “really” are, and if the baseline CH4 and CO read above zero (a positive zero offset), the exhaled CH4 and CO will be elevated relative to what they “really” are.
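To make this concrete, here is a minimal sketch (in Python) of my guess at the scaling behavior described above. The function names, the raw analyzer readings and the size of the zero drift are all hypothetical; the only point is the direction of the error.

    def reported_exhaled_pct(raw_exhaled, raw_inspired, raw_baseline, cal_zero):
        """What I suspect the software does: the span from the pre-test room-air
        baseline to the inspired gas defines 100%, but the exhaled reading is
        still referenced to the zero from the last 'real' calibration."""
        span = raw_inspired - raw_baseline          # assumed to be 100%
        return 100.0 * (raw_exhaled - cal_zero) / span

    def corrected_exhaled_pct(raw_exhaled, raw_inspired, raw_baseline):
        """Re-zero everything to the pre-test room-air baseline instead."""
        span = raw_inspired - raw_baseline
        return 100.0 * (raw_exhaled - raw_baseline) / span

    cal_zero = 0.0                      # zero from the last full calibration
    raw_inspired = 1000.0               # inspired test gas reading (= 100%)

    for offset in (-50.0, 0.0, +50.0):  # analyzer zero drift since calibration
        raw_baseline = cal_zero + offset
        # a 'true' exhaled concentration of 50% of inspired, in raw analyzer units
        raw_exhaled = raw_baseline + 0.5 * (raw_inspired - raw_baseline)
        print(f"offset {offset:+6.1f}: reported "
              f"{reported_exhaled_pct(raw_exhaled, raw_inspired, raw_baseline, cal_zero):6.2f}%, "
              f"corrected {corrected_exhaled_pct(raw_exhaled, raw_inspired, raw_baseline):6.2f}%")

With a negative offset the reported exhaled concentration comes out below its true value, and with a positive offset it comes out above it, which is the behavior described above.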

Since the calculation of DLCO is based on the difference between inhaled and exhaled CO, a falsely reduced exhaled CO elevates the calculated DLCO and a falsely elevated exhaled CO reduces it. This is exactly what the test results show.
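A short sketch of the textbook single-breath DLCO calculation shows why this happens. The numbers are made up and the ATPS/BTPS/STPD corrections a real system applies are left out; the point is that shifting the exhaled gas readings down raises the calculated DLCO and shifting them up lowers it.

    from math import log

    def single_breath_dlco(vi_l, fi_ch4, fi_co, fa_ch4, fa_co,
                           bht_s, pb_mmhg=760.0, vd_l=0.15):
        """Textbook single-breath DLCO (mL/min/mmHg), ignoring the gas-condition
        corrections a real system applies.  Gas fractions are expressed relative
        to inspired (inspired = 1.0)."""
        va_l = (vi_l - vd_l) * fi_ch4 / fa_ch4   # alveolar volume by tracer dilution
        fa_co_0 = fi_co * fa_ch4 / fi_ch4        # estimated initial alveolar CO
        return va_l * 1000.0 * 60.0 / (bht_s * (pb_mmhg - 47.0)) * log(fa_co_0 / fa_co)

    # 'True' exhaled fractions, then the same fractions shifted by a hypothetical
    # analyzer zero offset of +/- 5% of the inspired concentration.
    true_ch4, true_co = 0.50, 0.25
    for shift in (-0.05, 0.0, +0.05):
        dlco = single_breath_dlco(vi_l=2.0, fi_ch4=1.0, fi_co=1.0,
                                  fa_ch4=true_ch4 + shift, fa_co=true_co + shift,
                                  bht_s=10.0)
        print(f"exhaled gases shifted by {shift:+.2f}: DLCO = {dlco:.1f} mL/min/mmHg")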

Fortunately, even though I don’t review tests until the following day, that patient’s tests were the last ones performed on that system, so nobody else was affected. We have, of course, stopped using the system (at least for DLCO testing) until the DLCO gas analyzer can be serviced.

In a sense, the real problem was our equipment manufacturer’s decision not to re-calibrate the DLCO gas analyzer before each test, or at least to check for significant discrepancies in the baseline and inspired gas concentrations. This is not the first time I’ve run across this error (although never in quite as dramatic a form as this) and I brought it to the attention of our equipment manufacturer over three years ago. We’ve gone through a major software update since then but the problem is still there, so apparently it wasn’t considered to be particularly significant.
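For what it’s worth, the kind of pre-test check I have in mind could be as simple as the sketch below. The tolerance is an arbitrary placeholder of my own, not a value from any standard, and the function and variable names are hypothetical.

    def check_pre_test_offsets(pre_test_zero, cal_zero, span_units,
                               tolerance_fraction=0.01):
        """Flag the analyzer if its pre-test zero has drifted from the
        calibration zero by more than a set fraction of the full span."""
        drift = abs(pre_test_zero - cal_zero)
        if drift > tolerance_fraction * span_units:
            raise RuntimeError(
                f"Analyzer zero drifted {drift:.1f} units since calibration; "
                "re-calibrate before testing.")

    check_pre_test_offsets(pre_test_zero=2.0, cal_zero=0.0, span_units=1000.0)     # passes
    # check_pre_test_offsets(pre_test_zero=50.0, cal_zero=0.0, span_units=1000.0)  # would raise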

It was more difficult than it should have been to determine that there was a problem, however, since our lab software does not report the baseline CH4 and CO values and I had to do some digging in order to determine what they were. The 2017 DLCO standards require that the baseline CH4 and CO be reported. The purpose of this is primarily to make sure that the patient has washed out the DLCO test gases before being re-tested, but also to correct for the patient’s alveolar CO backpressure. Once this has been implemented, any problems with gas analyzer zero offsets and gains will hopefully be more evident than they are now.

The second error has to do with the alveolar sampling volume. While reviewing DLCO results I came across a test where the reported alveolar sampling volume met the 2005 ATS/ERS criteria, but one look at the graphs from the test told me that it was an error. First, though, this is what the alveolar sample window looked like:

Most test systems display the alveolar sample as exhaled CH4 and CO concentrations versus time, and there is nothing particularly unusual about this graph. When the exhaled volume is added, however, the alveolar sample error is immediately evident:

After exhaling for a while the patient inhaled, and for some reason the alveolar sample volume bridged across this. I’m speculating, but my guess is that the software first measures the washout volume by tracking forward through the volume signal after the final exhalation begins. Instead of continuing to track forward through the volume signal to find the end of the alveolar sample volume, however, it appears to track backwards from the end of the volume signal until it finds a point whose exhaled volume, minus the exhaled volume at the end of the washout, equals the alveolar sample volume, because this is the only way I can explain the error. Why the alveolar sample volume would be determined this way is completely unclear, and my guess about how the algorithm works could of course be completely wrong, but it’s hard to see how the alveolar sample volume could have been mis-measured otherwise.
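Here is a small sketch of that guess. The volume trace, washout volume and sample volume are all invented; vol is cumulative net exhaled volume in litres, so the brief inhalation makes it dip. Tracking forward over exhaled volume only, the sample window ends before the inhalation; tracking backwards from the end of the signal, it ends after it, which is exactly the sort of bridging seen in the graph.

    import numpy as np

    # Synthetic maneuver: exhale, a brief inhalation, then exhale to the end.
    t = np.arange(0.0, 8.0, 0.1)                  # 80 samples, 0.1 s apart
    vol = np.concatenate([
        np.linspace(0.0, 1.5, 30),                # exhalation: 0 -> 1.5 L
        np.linspace(1.5, 1.2, 10),                # brief inhalation: back to 1.2 L
        np.linspace(1.2, 2.2, 40),                # exhalation resumes: up to 2.2 L
    ])

    washout_vol = 0.3     # L discarded after the final exhalation begins
    sample_vol = 0.9      # L collected for the alveolar sample

    washout_end = int(np.argmax(vol >= washout_vol))

    # Forward tracking: accumulate only exhaled (increasing-volume) steps
    # until the alveolar sample volume has been collected.
    exhaled_steps = np.clip(np.diff(vol[washout_end:]), 0.0, None)
    fwd_end = washout_end + int(np.argmax(np.cumsum(exhaled_steps) >= sample_vol)) + 1

    # Backward tracking (my guess at the buggy algorithm): walk back from the
    # end of the signal until the net volume above the washout endpoint no
    # longer exceeds the sample volume.
    bwd_end = len(vol) - 1
    while vol[bwd_end] - vol[washout_end] > sample_vol:
        bwd_end -= 1

    print(f"inhalation starts at t = {t[30]:.1f} s")
    print(f"forward-tracked sample window ends at t = {t[fwd_end]:.1f} s (before it)")
    print(f"backward-tracked sample window ends at t = {t[bwd_end]:.1f} s (after it)")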

Needless to say I corrected the sample volume.

Interestingly, the DLCO hardly changed at all. When I compared the unadjusted and adjusted values, there was little change in CH4, CO and breath-holding time (BHT), but that brought to light another problem.

                      Unadjusted:   Adjusted:
DLCO (ml/min/mmHg):        20.55       21.77
CH4:                       50.78       51.08
CO:                        23.92       24.09
BHT (sec):                 10.91       10.22

The BHT, however, should have decreased by about 1.5 seconds, not the 0.7 seconds shown above.

We use the Jones-Meade algorithm (recommended by both the 2005 and 2017 DLCO standards) for determining the BHT. Specifically:

“breath-hold time equals the time starting from 30% of the inspiratory time to the middle of the sample collection time.”

The smaller-than-expected change in BHT between the adjusted and unadjusted alveolar sample is wrong and points to another possible error in our test system software. My guess is that this statement was read by a programmer as “to the middle of the sample”, not “to the middle of the sample collection time”, but since the software algorithms are proprietary and since our equipment manufacturer never responds to questions about things like this, I probably won’t ever know for sure.
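For reference, here is a minimal sketch of the BHT calculation as the quoted definition reads to me. All of the times are invented; the point is simply that moving the sample collection window 1.5 seconds earlier should move the middle of the collection time, and therefore the BHT, by the same 1.5 seconds.

    def jones_meade_bht(insp_start, insp_end, sample_start, sample_end):
        """Breath-hold time from 30% of the inspiratory time to the middle of
        the sample collection time (the definition quoted above)."""
        t30 = insp_start + 0.30 * (insp_end - insp_start)
        return (sample_start + sample_end) / 2.0 - t30

    # Hypothetical timings (seconds from the start of inspiration), with the
    # sample collection window moved 1.5 s earlier by the correction.
    before = jones_meade_bht(insp_start=0.0, insp_end=2.0, sample_start=11.0, sample_end=13.0)
    after = jones_meade_bht(insp_start=0.0, insp_end=2.0, sample_start=9.5, sample_end=11.5)
    print(f"BHT before correction: {before:.2f} s, after: {after:.2f} s")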

The 2017 DLCO standards now require that the alveolar sample period be displayed as gas concentrations versus volume, not gas concentrations versus time. Once our test systems are updated to meet the new standards, problems in determining the alveolar sample volume will be a lot more self-evident. And since the 2017 standards require that graphs of the full DLCO maneuver (and the alveolar sample) be included on reports, verifying the accuracy of the BHT shouldn’t be that difficult.

Once again though, these errors raise the issue of problems with proprietary software algorithms. How does your test system handle zero offsets for CH4 and CO? How does it scale the exhaled CH4 and CO? How does it determine the washout and alveolar sample volumes? How does it measure BHT? We don’t know the answers to these questions because our equipment manufacturers have never revealed their algorithms. I realize that manufacturers have a lot of resources invested in their software and no reason to make it public, but this also means we have to take their word that their test systems meet the ATS/ERS standards, and it leaves us with little ability to verify that they actually do.

I’m not suggesting that our test systems don’t work reasonably well most of the time, but I know of over a dozen problems and idiosyncrasies (some major, some minor) with our test systems. We’ve known about some of these for so long that a number of work-arounds have been part of our training program for new technicians for at least the last six years, and sometimes for more than ten. This points out another problem: most (though not all) PFT equipment manufacturers have no mechanism whatsoever for users to report problems and to verify that they’ve been fixed.

Part of the reason for this is that FDA regulations make medical equipment manufacturers jump through innumerable hoops: the certification process is very slow and the documentation process very time- and labor-intensive. The FDA also treats pulmonary function equipment under the same rules as pacemakers, insulin pumps and heart-lung machines, so changes that are only cosmetic (or at least clearly non-critical) are often treated as if they were critical.

But there is also little or no financial incentive for equipment manufacturers to update their software. Manufacturers make their money by selling new test systems and servicing the systems they’ve already sold. Unless they are required to, they have little reason to write new software or to fix the old.

The 2017 DLCO standards are a welcome improvement over the 2005 standards. Not only do they bring the standards up to date from a technological point of view but they also close a lot of loopholes (or at least areas where the standards were vague enough to leave a lot of wiggle room for interpretation). The 2017 standards are also welcome because they will require manufacturers to update their software and (hopefully) at the same time fix a lot of long-standing problems.

References:

MacIntyre N, et al. Series ATS/ERS task force: Standardisation of lung function testing. Standardisation of the single-breath determination of carbon monoxide uptake in the lung. Eur Respir J 2005; 26: 720-735.

Graham BL, et al. 2017 ERS/ATS standards for single-breath carbon monoxide uptake in the lung. Eur Respir J 2017; 49: 1600016.
