Well, not necessarily anything, although as usual that depends on the circumstances. Recently I was contacted by an individual who was concerned that their DLCO had decreased from 120% of predicted to 99% of predicted. They also mentioned that their DLCO results have normally ranged from 117% to 140% of predicted over the last 9 months.
More interestingly however, they said that
“the technician told me before I even took the test that anything over 100% for DLCO is essentially a testing error.”
Wow. That statement is wrong on so many levels it’s hard to know where to start but I’ll give it a shot anyway.
First, there are a variety of DLCO reference equations. The ATS/ERS guidelines recommend that PFT labs pick the reference values that most closely match their patient population, but how this is done is left to individual labs. There are at least a couple dozen DLCO reference equations to choose from, and probably about a half dozen of these are in common use in PFT labs around the world.
Because no patient population will ever precisely match that of a reference study, DLCO results will tend to fall above or below 100% of predicted depending on which reference equation the lab is actually using. This also means that if results from otherwise normal subjects are mostly above or mostly below 100% of predicted, then the wrong reference equation is being used.
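To make the dependence on reference equations concrete, here is a minimal sketch. The two linear equations below are entirely made up (they are not Crapo, Miller, GLI or any other published set); they only show how the same measured DLCO yields a different percent of predicted under different equations:

```python
# Illustration only: the coefficients below are made up and are NOT from
# any published DLCO reference equation (real equations each have their
# own coefficients for age, height and sex).

def percent_predicted(measured, predicted):
    """Express a measured DLCO as a percent of the predicted value."""
    return 100.0 * measured / predicted

# Two hypothetical linear reference equations for the same 40-year-old,
# 170 cm subject (DLCO in ml/min/mmHg):
predicted_a = 0.160 * 170 - 0.110 * 40 - 2.0    # "equation A" -> 20.8
predicted_b = 0.150 * 170 - 0.100 * 40 + 0.5    # "equation B" -> 22.0

measured = 26.0
pct_a = percent_predicted(measured, predicted_a)   # 125.0% of predicted
pct_b = percent_predicted(measured, predicted_b)   # ~118.2% of predicted
```

The same measurement is well above 100% of predicted under both equations, but by noticeably different margins, which is why a result over 100% says nothing by itself about testing error.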
I’ve been thinking about quality control and quality improvement lately. Mostly this has been about how to go about determining whether the lab has a quality problem with testing and what statistics should be used for this purpose but I was reminded recently about an issue concerning biological quality control that came up a couple months ago on the AARC diagnostics forum. Specifically, one of the participants noted that some of their technicians had refused to perform biological QC on the basis that it violated their HIPAA rights to the privacy of their medical information. Further discussion noted that this was actually a correct interpretation of the HIPAA regulations and that no PFT lab can “force” its technicians to perform biological QC.
I will be the first to admit that I’d never thought about it this way, and I’ve been mulling it over ever since. I’ve performed PFT testing on myself both for formal biological QC and as a quick way to check the operation of a test system for decades but I never thought of my PFT results as being part of my medical information. That’s probably an indication of my own short-sightedness however, and I also realize that over the years I’ve run across a number of testing issues I’d taken for granted up until somebody pointed out a problem with them.
My attitude towards my PFT results may also be due to the fact that I don’t have any notable lung disease. My lab has had technicians who have been asthmatic however, and this has never been a factor in whether they were hired or not (other than not letting them perform methacholine challenges). They’ve usually performed bio-QC on themselves and at the time they seemed to regard it as a way to check on the status of their asthma. In retrospect however, I have to wonder if they were ever concerned that I would use their health status and test information against them in their annual evaluation, or even that the hospital would re-consider their employment because the costs of their health insurance might be higher. Although I don’t think the hospitals I’ve worked for ever thought along these lines, like it or not there are many businesses where this is a factor.
Yesterday I asked myself what would happen if all PFT labs were required to completely end biological quality control because of HIPAA requirements? It didn’t take a lot of thought to realize that there are a number of mechanical test simulators in the marketplace that could do quite well at replacing the biological part of quality control. As importantly, the more I’ve thought about it the more I’ve come to think that biological QC probably isn’t the right way to go about QC in the first place.
The Lung Clearance Index (LCI) is a relatively simple test that provides a measure of ventilation inhomogeneity within the lung. This can be clinically useful information since several studies have shown that increases in LCI often precede decreases in FEV1 in cystic fibrosis and post-lung transplant. LCI results are only a general index of ventilation inhomogeneity, however, and other than showing its presence they do not give any further information about its cause or location.
There is additional information that can be derived from an LCI test that can indicate the general anatomic location where ventilation inhomogeneity (or alternatively, ventilation heterogeneity) is occurring; specifically, the conducting or the acinar airways. This can be done because changes in the slope of the tidal N2 washout waveform during an LCI test are sensitive to the convection-diffusion front in the terminal bronchioles. Careful analysis of these slopes permits the derivation of two indexes: Scond, an index of ventilation heterogeneity in the conducting airways; and Sacin, an index of ventilation heterogeneity in the acinar airways.
To review, an LCI test is a multi-breath nitrogen washout test. An individual is switched into a breathing circuit with 100% O2.
Once this happens, tidal volume is measured continuously and used to determine the cumulative exhaled volume. Exhaled nitrogen is also measured continuously and used to determine the cumulative exhaled nitrogen volume. The LCI test continues until the end-tidal N2 concentration is 1/40th of what it was initially (nominally 2%). At that point the FRC is calculated by dividing the cumulative exhaled nitrogen volume by the change in N2 concentration:
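As a sketch of both calculations (the washout numbers below are hypothetical, and real systems apply corrections, such as for tissue nitrogen, that are omitted here):

```python
def frc_from_washout(cumulative_n2_volume, n2_initial, n2_final):
    """FRC from a multi-breath N2 washout: all of the exhaled N2 came
    out of the FRC, whose N2 fraction fell from n2_initial to n2_final."""
    return cumulative_n2_volume / (n2_initial - n2_final)

def lung_clearance_index(cumulative_exhaled_volume, frc):
    """LCI: the number of FRC turnovers of ventilation needed to wash
    the end-tidal N2 down to 1/40th of its starting value."""
    return cumulative_exhaled_volume / frc

# Hypothetical washout: 2.2 L of N2 exhaled, end-tidal N2 falling
# from 78% to 2%, with a cumulative exhaled volume of 21 L:
frc = frc_from_washout(2.2, 0.78, 0.02)   # ~2.89 L
lci = lung_clearance_index(21.0, frc)     # ~7.3 turnovers
```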
The LCI itself is the cumulative exhaled volume divided by the FRC, and is essentially a measure of how much ventilation is required to clear the FRC. When an individual tidal breath from the LCI test is graphed, it looks similar to a standard single-breath N2 washout:
and can be similarly subdivided into phase I (dead space washout), phase II (transition) and phase III (alveolar gas).
A month or two ago in the AARC Diagnostics forum several members noted that their labs had acquired Impulse Oscillometry systems a number of years ago but that their physicians had since stopped ordering oscillometry tests, mostly because nobody understood what it was measuring and didn’t know how to interpret the results. There are a number of reasons why this is probably not an uncommon scenario and why, despite being first described in 1956, oscillometry is not used more widely.
But first, what is oscillometry, and what’s the best way to understand it?
Oscillometry refers to a closely related group of techniques for measuring respiratory impedance by superimposing small pressure waves on top of normal tidal breathing.
There are three main approaches: the Forced Oscillation Technique (FOT), which is sometimes used as a blanket term for all oscillometry techniques but more often refers to a single-frequency technique; Impulse Oscillometry (IOS); and Pseudo-Random Noise (PRN). Most commercial oscillometry systems use either PRN or IOS because each approach applies multiple oscillation frequencies more or less simultaneously, which allows testing to be performed relatively quickly. The mono-frequency technique is used mostly in research because, although it is slow to scan all frequencies, it is able to resolve rapid changes occurring at a single frequency.
All techniques share a similar equipment configuration:
The oscillatory pressure is usually generated by a loudspeaker, although the actual waveform and the frequencies it produces differ for each technique. The peak pressures are usually on the order of ±1 to 5 cm H2O (±0.1 to 0.5 kPa). Because patients have to breathe during testing, the system provides a steady flow of fresh air in one manner or another, but this has to include a low-pass filter of some kind so that the pressure waveform is not significantly diverted or blunted. The key measurements are flow and the pressure at the mouth.
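From those two measurements the quantity all of these techniques report, respiratory impedance, is the ratio of pressure to flow at each oscillation frequency. The sketch below uses entirely synthetic signals and made-up numbers (it is not any vendor's algorithm): it recovers an assumed impedance of 3 + 2j cm H2O/L/s at 5 Hz by taking a single-frequency DFT of pressure and of flow, with the slow tidal breathing dropping out of the 5 Hz bin:

```python
import cmath
import math

def dft_at(signal, f, fs):
    """Single-frequency DFT of a real sampled signal at f Hz."""
    w = -2j * math.pi * f / fs
    return sum(x * cmath.exp(w * k) for k, x in enumerate(signal))

fs = 200.0            # sampling rate, Hz
n = 800               # 4 seconds of data
f_osc = 5.0           # oscillation frequency, Hz
z_true = 3.0 + 2.0j   # assumed impedance, cm H2O/L/s (made up)

flow, pressure = [], []
for k in range(n):
    t = k / fs
    osc = 0.05 * cmath.exp(2j * math.pi * f_osc * t)  # 0.05 L/s oscillation
    # flow = oscillation plus slow (0.25 Hz) tidal breathing:
    flow.append(osc.real + 0.5 * math.sin(2 * math.pi * 0.25 * t))
    # pressure the assumed impedance would produce for the oscillation:
    pressure.append((z_true * osc).real)

z_est = dft_at(pressure, f_osc, fs) / dft_at(flow, f_osc, fs)
resistance, reactance = z_est.real, z_est.imag   # recover ~3 and ~2
```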
A couple of days ago I was reviewing (triaging, actually) the spirometry portion of a full panel of PFTs performed with pretty terrible test quality, and was trying to decide whether the technician responsible had made the right selections from the patient's test results. I noticed that the FEV1 that had been selected was actually the lowest FEV1 from all the spirometry efforts the patient made, and I was trying to decide whether this was really the correct choice. We use peak flow to help determine which FEV1 to select, and that particular spirometry effort appeared to have the highest and sharpest peak flow by a large margin:
particularly when compared to the other spirometry efforts:
But this was hard to reconcile given how low the FEV1 was relative to the others:
A friend recently sent me the links to several YouTube videos on pulmonary function testing. I’ve spent some time off and on over the last year looking at YouTube videos and in particular I’ve been looking for ones that can be used as part of technician education. Maybe I’ve set the bar too high but all too often I’ve been disappointed and frustrated with what I’ve found. One reason for this is that many videos are aimed at other audiences than technicians (i.e. medical students, physicians, patients). Another reason is that too often only simple concepts are presented, often in rote fashion and often without good visual explanations (c’mon, these are videos after all, not podcasts). A final reason is that sometimes they’re outdated, misleading or just plain wrong.
Still, even the flawed videos can be useful. Sometimes this is because they occasionally explain some concepts well; sometimes despite being simplistic they present a good overview; and sometimes because their mistakes can serve as points for discussion. I’ve tried to select videos that have at least some potential for use in technician education.
John B. West Respiratory Physiology Lectures
Based primarily on his classic textbook, ‘Respiratory Physiology’ (which should be on everybody’s bookshelf). Not 100% perfect but this is what many of the other videos should aspire to be. Many complex concepts explained using simple examples. Lots of interesting pictures and illustrations. Should be part of every technician’s education.
The use of Z scores to report PFT results, both clinically and in research, is becoming more and more frequent. Both the Z score and the Lower Limit of Normal (LLN) come from the same roots and in that sense say much the same thing. The difference between the two, however, is in the emphasis each places on how results are analyzed. The LLN primarily says only whether a result is normal or abnormal. The Z score instead describes how far a result is from the mean value and therefore emphasizes the probability that a result is normal or abnormal.
Reference equations are developed from population studies and the measurements that come from these studies almost always fall into what’s called a normal distribution (also known as a bell-shaped curve).
A normal distribution has two important properties: the mean value and the standard deviation. The mean value is essentially the average of the results while the standard deviation describes whether the distribution of results around the mean is narrow or broad.
The simple definition of the Z score for a particular result is that it is the number of standard deviations that a result is away from the mean. It is calculated as:
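In code the calculation is a one-liner, and converting the Z score to a percentile then gives the probability it emphasizes. The FEV1, mean and SD values below are made up for illustration:

```python
from statistics import NormalDist

def z_score(observed, mean, sd):
    """Number of standard deviations a result lies from the predicted mean."""
    return (observed - mean) / sd

# Hypothetical example: FEV1 of 2.45 L against a predicted mean of 3.00 L
# with a between-subject SD of 0.33 L (illustration values only):
z = z_score(2.45, 3.00, 0.33)            # ~ -1.67
percentile = NormalDist().cdf(z) * 100   # just under the 5th percentile,
                                         # i.e. just below the usual LLN cutoff
```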
A couple weeks ago I was asked whether it was safe for a patient with an abdominal aortic aneurysm (AAA) to have pulmonary function testing. My first thought was that it was probably unsafe but after a moment or two of thought I realized that I hadn’t reviewed the subject for a long time. When I checked the 2005 ATS/ERS general testing guidelines (there are no contraindications in the 2005 spirometry guidelines) I found that AAA wasn’t mentioned at all. In fact, the only absolute contraindication mentioned was that patients with a recent myocardial infarction (<1 month) should not be tested. Some relative contraindications were mentioned:
chest or abdominal pain
oral or facial pain
dementia or confusional state
and activities that should be avoided prior to testing include:
smoking within 1 hour of testing
consuming alcohol within 4 hours of testing
performing vigorous exercise within 30 minutes of testing
wearing clothing that restricts the chest or abdomen
eating a large meal within 2 hours of testing
but these were factors where test results were likely to be suboptimal and not actually contraindications.
This got me curious since I thought that pulmonary function testing was contraindicated for more conditions than just an MI. I reviewed the 1994 and then the 1987 ATS statements on spirometry but again found no mention of contraindications. Ditto the 1993 ERS statement on spirometry and lung volumes. Finally, in the 1996 AARC clinical guidelines for spirometry I found a much longer list of contraindications:
hemoptysis of unknown origin
recent myocardial infarction
recent pulmonary embolus
thoracic, abdominal or cerebral aneurysms
recent eye surgery
presence of an acute disease process that might interfere with test performance (e.g. nausea, vomiting)
recent surgery of thorax or abdomen
So where did the AARC’s list of contraindications come from? And why is there such a discrepancy between the ATS/ERS and the AARC guidelines?
The 2005 ATS/ERS standards for assessing post-bronchodilator changes in FVC and FEV1 have been criticized numerous times. A recent article in the May issue of Chest (Quanjer et al) has taken it to task on two specific points:
the change in FVC and FEV1 has to be at least 200 ml
the change is assessed based on the percent change (≥12%) from the baseline value
The article points out that the 200 ml minimum change requires a proportionally larger change for a positive bronchodilator response in the short and the elderly. Additionally, by basing the post-BD change on the baseline value, the threshold (in terms of an absolute change) for a positive bronchodilator response is lowered as airway obstruction becomes more severe. As a way of mitigating these problems the article recommends expressing the post-bronchodilator change as a percent of predicted rather than as a percent of baseline.
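The difference between the two denominators is easy to see numerically. The patient values below are hypothetical: a severely obstructed patient whose 150 ml improvement is nearly 19% of a low baseline (easily clearing the 12% criterion, though not the 200 ml one) yet under 5% of predicted:

```python
def bd_response_pct_baseline(pre, post):
    """Post-bronchodilator change as a percent of the pre-BD (baseline) value."""
    return 100.0 * (post - pre) / pre

def bd_response_pct_predicted(pre, post, predicted):
    """Post-bronchodilator change as a percent of the predicted value."""
    return 100.0 * (post - pre) / predicted

# Hypothetical severely obstructed patient: FEV1 0.80 L pre-BD,
# 0.95 L post-BD, predicted 3.20 L (values made up for illustration):
pct_base = bd_response_pct_baseline(0.80, 0.95)          # 18.75% of baseline
pct_pred = bd_response_pct_predicted(0.80, 0.95, 3.20)   # ~4.7% of predicted
```

The same 150 ml change looks large relative to a severely reduced baseline but small relative to predicted, which is the asymmetry the article is criticizing.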
The article is notable (and its authors are to be commended) because it studied 31,528 pre- and post-spirometry records from both clinical and epidemiological sources from around the world. For the post-bronchodilator FEV1 and FVC:
A very strange spirometry report came across my desk a couple of days ago.
My first thought was that some of the demographic information had been entered incorrectly, but when I checked, the patient's age, height, gender and race were all present, all reasonably within the normal range for human beings in general and, more importantly, all agreed with what was in the hospital's database for the patient. I tried changing the patient's height, age, race and gender to see if it would make a difference, and although this made small changes in the percent predicted, the predicteds were still zero.
Or were they? They actually couldn’t have been zero, regardless of what was showing up on the report, since the observed test values are divided by the predicted values and if the predicted were really zero, then we’d have gotten a “divide by zero” error, and that wasn’t happening. Instead the predicted values had to be very close to zero, but not actually zero, and the software was rounding the value down to zero for the report. Simple math showed me the predicted value for FVC was (very) approximately 0.0103 liters, but why was this happening?
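That back-calculation takes only a couple of lines. The observed FVC and percent predicted below are hypothetical stand-ins (the exact report values aren't given above), chosen only to reproduce the roughly 0.0103 L figure:

```python
def implied_predicted(observed, percent_predicted):
    """Recover the predicted value the software must have used from the
    reported observed value and percent of predicted."""
    return observed / (percent_predicted / 100.0)

# e.g. a hypothetical observed FVC of 3.50 L reported as 34,000% of
# predicted implies a predicted of roughly 0.0103 L, not a true zero:
predicted = implied_predicted(3.50, 34000.0)   # ~0.0103 L
```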