When it’s FVC 1, EOT 2, volume comes out short

I was reviewing a pre- and post-bronchodilator spirometry report that showed a relatively large increase in FVC, but the change in FEV1 was not significant. It’s not impossible for a patient to show this kind of pattern following a bronchodilator, but it is somewhat unusual. Usually when I see it, it means the patient exhaled much longer post-BD than they did pre-BD. When I looked, however, just the opposite was true: the expiratory time was actually shorter for the post-BD effort than for the pre-BD effort.

[Figure: FVC_Error_Table — reported pre- and post-BD spirometry results]

The reported expiratory time isn’t always accurate, though. When a patient stops exhaling during an FVC effort but doesn’t inhale, our test system will sometimes continue to time the effort. When this happens the volume-time curve becomes flat and the expiratory time is reported with a falsely high value.
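
To make that concrete, here is a minimal sketch (Python, with made-up sample data, and not the manufacturer’s actual algorithm) showing how the reported expiratory time can end up well past the point where exhalation actually stopped when the timer keeps running over a flat volume-time curve:

```python
# Minimal sketch, not the manufacturer's algorithm: shows how the reported
# expiratory time can be inflated when the timer keeps running over a flat
# volume-time curve after the patient has actually stopped exhaling.

# Hypothetical (time in s, cumulative exhaled volume in L), sampled every 0.5 s.
samples = [(0.0, 0.00), (0.5, 1.20), (1.0, 1.80), (1.5, 2.10),
           (2.0, 2.20), (2.5, 2.25),                # exhalation really ends here
           (3.0, 2.25), (3.5, 2.25), (4.0, 2.25)]   # flat tail, no inhalation

reported_time = samples[-1][0]   # timer runs to the last recorded sample

# Time of the last real volume change (where the curve stops rising).
times_rising = [t for (t, v), (_, v_prev) in zip(samples[1:], samples[:-1]) if v > v_prev]
actual_end = times_rising[-1]

print(f"reported expiratory time: {reported_time:.1f} s")    # 4.0 s (falsely high)
print(f"exhalation actually ended at: {actual_end:.1f} s")   # 2.5 s
```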

[Figure: FVC Early Termination — volume-time curve with a flat tail after exhalation stops]

This is what I expected to see when I looked at the volume-time graphs for this report. What I saw instead was this:

[Figure: FVC_Vol_Error_2_redacted — reported pre- and post-BD volume-time curves]

Since it showed pre- and post-BD volume-time curves that were fairly similar, why were the volumes so different? Our test system software allows the FVC, FEV1 and the graphs to be selected from different efforts, so my suspicion at this point was that the technician performing the tests had accidentally selected these values from the wrong efforts. When I pulled up the raw test data, though, I saw that the pre-BD spirometry results were all fairly similar to each other, but each was more than half a liter less than the best post-BD FVC.
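
To illustrate how that kind of selection works, here is a minimal sketch (Python, with made-up effort values): reporting software conventionally takes the largest FVC and the largest FEV1 even when they come from different maneuvers, which is why a mis-selected effort can skew one value without affecting the other.

```python
# A minimal sketch of best-value selection across efforts, with hypothetical
# numbers. The largest FVC and the largest FEV1 are each taken even when they
# come from different maneuvers.
efforts = [
    {"id": 1, "FVC": 2.31, "FEV1": 1.42},   # hypothetical efforts from one session
    {"id": 2, "FVC": 2.37, "FEV1": 1.45},
    {"id": 3, "FVC": 2.28, "FEV1": 1.47},
]

best_fvc = max(efforts, key=lambda e: e["FVC"])
best_fev1 = max(efforts, key=lambda e: e["FEV1"])

print(f"report FVC {best_fvc['FVC']:.2f} L (effort {best_fvc['id']}), "
      f"FEV1 {best_fev1['FEV1']:.2f} L (effort {best_fev1['id']})")
# -> report FVC 2.37 L (effort 2), FEV1 1.47 L (effort 3)
```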

[Figure: FVC_Error_Table_2 — raw pre- and post-BD test results]

If that was the case, why were the pre- and post-BD volume-time curves so similar? When I looked at the raw graphs I saw what had happened.

[Figure: FVC_Vol_Error_1_redacted_2 — raw volume-time curve showing the interrupted exhalation]

The patient had stopped exhaling around 2-1/2 seconds into the spirometry effort, inhaled a small amount of air and then continued exhaling for another 8 seconds. The test system software had used the volume from the 2-1/2 second point of the exhalation when reporting the FVC, but for some reason had continued measuring time until the patient “really” stopped exhaling. This means there were two end-of-tests for this effort: one for volume and one for time.

To some extent I can understand why our test system software measured the FVC where it did, since an inhalation is usually a good signal that a patient has stopped exhaling. What I don’t understand is why the software continued to measure time until the “real” end of the test, and if it was able to measure time accurately, why it didn’t measure the “real” FVC volume accurately as well.
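
As a thought experiment, here is a rough sketch (in Python, and only a guess at the behaviour described above, not the vendor’s actual code) of how an inhalation-triggered volume cut-off combined with a timer that runs to the end of the recording can produce two different end-of-test points from one maneuver:

```python
# A sketch, under assumptions, of how two different end-of-test points can
# arise from one maneuver: the volume is cut off at the first detected
# inhalation while the timer runs on until the recording ends.

INHALATION_THRESHOLD_L = 0.05   # assumed: inhaled volume that triggers the volume EOT

def split_end_of_test(samples):
    """samples: list of (time_s, cumulative_exhaled_volume_L)."""
    fvc = None
    fvc_time = None
    for (t_prev, v_prev), (t, v) in zip(samples, samples[1:]):
        if fvc is None and v_prev - v > INHALATION_THRESHOLD_L:
            fvc, fvc_time = v_prev, t_prev      # volume EOT: first inhalation
    reported_time = samples[-1][0]              # time EOT: end of the recording
    if fvc is None:
        fvc, fvc_time = samples[-1][1], reported_time
    return fvc, fvc_time, reported_time

# Hypothetical effort: exhale ~2.5 s, inhale briefly, then exhale ~8 s more.
effort = [(0.0, 0.0), (1.0, 1.4), (2.0, 1.9), (2.5, 2.05),
          (3.0, 1.95),                          # brief inhalation
          (5.0, 2.2), (8.0, 2.4), (10.5, 2.49)]

fvc, fvc_t, exp_t = split_end_of_test(effort)
print(f"reported FVC {fvc:.2f} L at {fvc_t:.1f} s, expiratory time {exp_t:.1f} s")
# -> reported FVC 2.05 L at 2.5 s, expiratory time 10.5 s
```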

Surprisingly enough, the ATS/ERS statement on spirometry says nothing about stopping the test when a patient inhales; it only says that the end-of-test criterion is satisfied when “the volume-time curve shows no change in volume (<0.025 L) for >= 1 second”. I have seen this type of expiratory pattern before (i.e. exhalation, short inhalation, continued exhalation), but in the past the software had measured the FVC volume correctly, or at least the expiratory time matched the point at which the FVC volume was measured. This is the first time I’ve noticed a distinct discrepancy and it is not clear why it occurred. The software may use a certain inspiratory threshold to decide that exhalation has ended, and in this case perhaps it was exceeded where it hadn’t been before, but again, if that’s the case, why didn’t it apply to both volume and time?
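
For reference, the quoted plateau criterion is straightforward to express in code; here is a minimal sketch (my own variable names and sampling assumptions, not taken from the standard):

```python
# A minimal sketch of the ATS/ERS end-of-test plateau criterion quoted above:
# no change in volume (< 0.025 L) for >= 1 second.

PLATEAU_VOLUME_L = 0.025
PLATEAU_DURATION_S = 1.0

def plateau_end_of_test(samples):
    """Return the time the plateau criterion is first satisfied, else None.

    samples: list of (time_s, cumulative_exhaled_volume_L) in time order.
    """
    for i, (t_i, v_i) in enumerate(samples):
        # Look ahead for a window of >= 1 s in which volume changes < 0.025 L.
        for t_j, v_j in samples[i + 1:]:
            if abs(v_j - v_i) >= PLATEAU_VOLUME_L:
                break                      # volume still changing; slide the window start
            if t_j - t_i >= PLATEAU_DURATION_S:
                return t_j                 # plateau held long enough
    return None                            # the curve never went flat
```

Note that nothing in this criterion says anything about an inhalation, which is exactly the gap in the standard described above.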

The ATS/ERS statement on interpretation says that the largest vital capacity, regardless of where it comes from, should be used to calculate the FEV1/VC ratio. This says to me that the spirometry software should measure the largest FVC volume even if the patient has stopped exhaling or even inhaled somewhat, as long as they re-start their exhalation and reach a higher FVC. The ATS/ERS standard does not touch on this, and I would hope that the next time it is updated the end-of-test criteria are made more comprehensive.
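
What I’m suggesting amounts to something very simple: scan the whole effort and keep the largest exhaled volume reached, even if the patient paused or inhaled briefly along the way. A sketch, again under my own assumptions rather than anything taken from the ATS/ERS documents:

```python
def largest_exhaled_volume(samples):
    """Largest cumulative exhaled volume reached anywhere in the effort.

    samples: list of (time_s, cumulative_exhaled_volume_L).
    """
    return max(v for _, v in samples)

# On the hypothetical `effort` from the earlier sketch this returns 2.49 L
# (the volume finally reached) rather than the 2.05 L reached just before
# the brief inhalation.
```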

I used a graphics program to measure the “real” FVC and found it was about 2.49 L. This meant that the real pre- to post-BD change in FVC was only about 5% and that there had actually been no significant response to the bronchodilator. I feel fortunate that I noticed the discrepancy, because otherwise this patient’s report would have indicated a significant bronchodilator response. I will also pay more attention when I see this pattern in the future and not assume that either the FVC volume or the expiratory time is correct.
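
For anyone who wants to check the arithmetic, here is the calculation using the ~2.49 L corrected FVC from above and a hypothetical companion value (the report’s actual numbers aren’t reproduced here); the 12% / 200 mL cut-offs are the commonly cited ATS/ERS thresholds for a significant bronchodilator response:

```python
# Hypothetical worked example: 2.49 L is the corrected FVC measured off the
# graph; 2.61 L is an assumed value for the other FVC, chosen only so the
# change works out to roughly the 5% mentioned above.
fvc_baseline_l = 2.49
fvc_other_l = 2.61

abs_change_l = fvc_other_l - fvc_baseline_l           # absolute change in litres
pct_change = 100.0 * abs_change_l / fvc_baseline_l    # % change relative to baseline

# Commonly cited ATS/ERS criteria for a significant bronchodilator response.
significant = pct_change >= 12.0 and abs_change_l >= 0.200
print(f"change: {abs_change_l * 1000:.0f} mL ({pct_change:.1f}%), significant: {significant}")
# -> change: 120 mL (4.8%), significant: False
```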

I checked the manual for our test system and could find nothing about the conditions the software uses to determine the end of exhalation other than the ATS/ERS end-of-test criterion already mentioned. Test software routinely needs to make sophisticated decisions about test quality and about how results should be measured and selected. When the ATS/ERS standards do not speak to a specific situation (even one that occurs fairly commonly), programmers have to take their best guess about how to handle it. My concern about these types of software decisions is that they are usually not documented and probably not evident even to relatively sophisticated users.

Equipment manufacturers put significant resources into their software and they have a right to consider their software algorithms proprietary information. These algorithms have a direct bearing on test accuracy, however, and over the years I’ve had numerous problems caused by undocumented software issues. I’ve contacted our equipment manufacturer about these problems many times, but I’ve rarely gotten any kind of answer and only infrequently even an acknowledgment that I’ve submitted the request. There needs to be a middle ground of some kind where both the rights of the manufacturers and those of the end-users are respected.

Because computers have become an essential and irreplaceable part of testing, I’d like to suggest something like an open-source software model. An ATS/ERS (or ACCP or AARC or whoever) committee could publish recommended algorithms for spirometry and other pulmonary function tests. Researchers, users and manufacturers could submit and comment on suggested changes, and at regular intervals (annually?) a revised standard could be released. It would then be up to manufacturers to update their software and show that it meets the new standard. This would help ensure that our pulmonary function equipment uses testing algorithms that are as up-to-date and accurate as possible, while still leaving plenty of room for manufacturers to differentiate themselves in other ways. Just a thought.
