Thinking about the past

This is the time of the year when it’s traditional to review the past. That’s what “Auld lang syne”, the song most associated with New Year’s celebrations, is all about. I too have been thinking about the past, but not about absent friends; it’s been about trend reports and assessing trends.

In the May 2017 issue of Chest, Quanjer et al reported their study on the post-bronchodilator response in FEV1. I’ve discussed this previously; they noted that the current ATS/ERS standard for a significant post-bronchodilator change (≥12% and ≥200 ml) penalized the short and the elderly. Their finding was that a significant change was better assessed by the absolute change in percent predicted (i.e. 8%) rather than by a relative change.
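As a quick illustration of why the two approaches can disagree, here’s a minimal sketch in Python. The patient values and the 1.60 L predicted FEV1 are hypothetical, and the 8% threshold is simply the figure quoted above:

```python
def bd_response(pre_fev1, post_fev1, predicted_fev1):
    """Judge a post-bronchodilator FEV1 change two ways (volumes in liters).

    Relative criterion (current ATS/ERS): change >= 12% of the pre-BD value
    AND >= 200 ml.  Absolute criterion: change expressed as a percent of the
    predicted value, using the 8% figure quoted above.
    """
    change = post_fev1 - pre_fev1
    relative_pct = 100 * change / pre_fev1
    pct_of_predicted = 100 * change / predicted_fev1
    meets_relative = relative_pct >= 12 and change >= 0.200
    meets_absolute = pct_of_predicted >= 8
    return relative_pct, pct_of_predicted, meets_relative, meets_absolute

# Hypothetical short, elderly patient with small lungs: a 160 ml improvement
# is ~14.5% of baseline but fails the 200 ml rule, yet it is 10% of predicted.
print(bd_response(pre_fev1=1.10, post_fev1=1.26, predicted_fev1=1.60))
# -> (~14.5, ~10.0, False, True)
```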

I’ve thought about how this could apply to assessing changes in trends ever since then. The current standard for a significant change in FEV1 over time (also discussed previously) is anything greater than:

which is good in that it provides a way to reference changes over any arbitrary time period, but it also treats the change as a relative one (i.e. ±15%). The 15% figure, however, comes from occupational spirometry, not clinical spirometry, and the presumption, to me at least, is that it’s geared towards individuals who have more-or-less normal spirometry to begin with.
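To see how quickly a ±15% relative threshold shrinks as the baseline FEV1 falls, here’s a minimal sketch; the baseline values are illustrative, and 150 ml is the ATS intrasession repeatability criterion that comes up in the example below:

```python
# How big is a +/-15% relative change in absolute terms at different baseline
# FEV1 values, compared with the 150 ml intrasession repeatability criterion?
REPEATABILITY_L = 0.150

for baseline_fev1 in (2.93, 2.00, 1.00, 0.50):
    threshold_l = 0.15 * baseline_fev1
    note = "  (smaller than test-to-test noise)" if threshold_l < REPEATABILITY_L else ""
    print(f"FEV1 {baseline_fev1:.2f} L -> 15% = {threshold_l * 1000:.0f} ml{note}")
```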

A ±15% change may make sense if your FEV1 is already near 100% of predicted, but there are some problems with this for individuals who aren’t. For example, a 75 year-old, 175 cm Caucasian male would have a predicted FEV1 of 2.93 L from the NHANESIII reference equations. If this individual had severe COPD and an FEV1 of 0.50 L (17% of predicted), then a ±15% relative change in FEV1 would be ±0.075 L (75 ml). That amount of change is half the acceptable intrasession repeatability (150 ml) for spirometry testing and it’s hard to consider a change this small as anything but chance or noise. It’s also hard to consider it a clinically significant change.

Continue reading

2017 ATS PFT Reporting Standardization

The ATS has released its first standard for reporting pulmonary function results. This report is in the December 1, 2017 issue of the American Journal of Respiratory and Critical Care Medicine. At the present time, however, despite its importance it is not an open-access article and you must either be a member of the ATS or pay a fee ($25) in order to access it. Hopefully, it will soon be included with the other open-access ATS/ERS standards.

There are a number of interesting recommendations made in the standard that supersede or refine recommendations made in prior ATS/ERS standards, or are otherwise presented for the first time. Specific recommendations include (although not necessarily in the order they were discussed within the standard):

  • The lower limit of normal, where available, should be reported for all test results.
  • The Z-score, where available, should be reported for all test results. A linear graphical display for this is recommended for spirometry and DLCO results.
  • Results should be reported in tables, with individual results in rows. The result’s numerical value, LLN, Z-score and percent predicted are reported in columns, in that recommended order. Reporting the predicted value is discouraged.
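As a rough sketch of what one such row involves, assuming a simple normally distributed reference model (modern reference equations such as GLI-2012 use the more involved LMS method, so treat this only as an outline of the arithmetic) and made-up numbers:

```python
def report_row(name, observed, predicted, rsd):
    """Build one result row in the recommended column order:
    observed value, LLN, Z-score and percent predicted.

    Assumes a normally distributed reference with residual standard
    deviation `rsd`; the LLN is then the 5th percentile of the
    reference range (predicted - 1.645 * rsd).
    """
    lln = predicted - 1.645 * rsd
    z_score = (observed - predicted) / rsd
    pct_pred = 100 * observed / predicted
    return f"{name:<10s}{observed:>8.2f}{lln:>8.2f}{z_score:>8.2f}{pct_pred:>8.0f}"

print(f"{'Test':<10s}{'Result':>8s}{'LLN':>8s}{'Z':>8s}{'%Pred':>8s}")
print(report_row("FEV1", observed=2.15, predicted=3.10, rsd=0.45))
```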

[Image: Part of Figure 1 from page 1466 of the ATS Recommendations for a Standardized Pulmonary Function Report.]

Continue reading

What do you do when the predicted is zero?

A very strange spirometry report came across my desk a couple of days ago.

            Observed:   Predicted:   %Predicted:
FVC:           3.07         0           29767
FEV1:          2.15         0           37586
FEV1/FVC:        70        71            101%

My first thought was that some of the demographic information had been entered incorrectly, but when I checked, the patient’s age, height, gender and race were all present, all were reasonably within the normal range for human beings in general and, more importantly, all agreed with what was in the hospital’s database for the patient. I tried changing the patient’s height, age, race and gender to see if it would make a difference, and although this made small changes in the percent predicted, the predicteds were still zero.

Or were they? They couldn’t actually have been zero, regardless of what was showing up on the report, since the observed test values are divided by the predicted values, and if the predicteds were really zero we’d have gotten a “divide by zero” error, which wasn’t happening. Instead, the predicted values had to be very close to zero, but not actually zero, and the software was rounding them down to zero for the report. Simple math showed me the predicted value for FVC was (very) approximately 0.0103 liters, but why was this happening?
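The back-calculation itself is one line of arithmetic; here’s a sketch of it using the FVC row from the report above:

```python
# Recover the hidden predicted value from the observed value and the reported
# percent predicted: %pred = 100 * observed / predicted, so
# predicted = 100 * observed / %pred.
observed_fvc = 3.07        # liters, from the report
reported_pct_pred = 29767  # percent, as printed

predicted_fvc = 100 * observed_fvc / reported_pct_pred
print(f"predicted FVC ~= {predicted_fvc:.4f} L")  # ~0.0103 L
```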

Continue reading

Why DIY CPET reports?

When I first started performing CPETs in the 1970s a patient’s exhaled gas was collected at intervals during the test in Douglas bags, and I had a worksheet that I’d use to record the patient’s respiratory rate, heart rate and SaO2. After the test was over I’d analyze the gas concentrations with a mass spectrometer and the gas volumes with a 300 liter Tissot spirometer, and then use these results to hand-calculate VO2, VCO2, Rq, tidal volume and minute volume. These results were then passed on to the lab’s medical director, who’d use them when dictating a report.
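For anyone who never had to do this by hand, here’s a minimal sketch of the arithmetic involved, leaving out the STPD/BTPS gas corrections and using made-up gas fractions and volumes:

```python
# Simplified Douglas bag arithmetic: minute volume from the collected gas,
# VCO2 from the expired CO2 fraction, and VO2 via the Haldane transformation
# (inspired and expired N2 volumes are assumed equal). All measured values
# below are hypothetical.
FIO2, FICO2 = 0.2093, 0.0004     # room air
collected_volume = 45.0          # liters collected in the bag
collection_time = 1.0            # minutes
respiratory_rate = 30            # breaths/min, from the worksheet
feo2, feco2 = 0.165, 0.040       # expired fractions from the mass spectrometer

ve = collected_volume / collection_time      # minute volume, L/min
vt = ve / respiratory_rate                   # tidal volume, L
fen2 = 1 - feo2 - feco2
fin2 = 1 - FIO2 - FICO2
vi = ve * fen2 / fin2                        # inspired volume via Haldane, L/min
vo2 = vi * FIO2 - ve * feo2                  # L/min
vco2 = ve * (feco2 - FICO2)                  # L/min
rq = vco2 / vo2

print(f"VE={ve:.1f} L/min  VT={vt:.2f} L  VO2={vo2:.2f}  VCO2={vco2:.2f}  Rq={rq:.2f}")
```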

Around 1990 the PFT lab I was in at the time acquired a metabolic cart for CPET testing. This both decreased the amount of work I had to do to perform a CPET and significantly increased the amount of information we got from a test. The reporting software that came with the metabolic cart, however, was very simplistic, and neither the lab’s medical director nor I felt it met our needs, so I created a word processing template, manually transcribed the results from the CPET system printouts and used it to report results.

Twenty-five years and three metabolic carts later I’m still using a word processing template to report CPET results.

Why?

Well, first, the reporting software is still simplistic and we still can’t get a report from it that we think meets our needs (it’s also not easy to create and modify reports, which is a chronic complaint I have about all the PFT lab software I’ve ever worked with). Second, there are some values that we think are important that the CPET system’s reporting software does not calculate, and there is no easy way to get them onto a report as part of the tabular results. Finally, and most importantly, I need to check the results for accuracy.

You’d think that after all these years you wouldn’t need to check PFT and CPET reports for mathematical errors, but unfortunately that’s not true. For example, these results are taken from a recent CPET:

Time:      VO2 (LPM):   VCO2 (LPM):   Reported Rq:   “Real” Rq:
Baseline     0.296        0.220          0.74           0.74
00:30        0.302        0.214          0.77           0.71
01:00        0.363        0.277          0.77           0.76
01:30        0.395        0.306          0.78           0.77
02:00        0.424        0.353          0.99           0.83
02:30        0.459        0.403          0.92           0.88
03:00        0.675        0.594          0.89           0.88
03:30        0.618        0.584          0.94           0.94
04:00        0.836        0.822          1.00           0.98
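Recomputing Rq from the reported VO2 and VCO2 is a one-line check; a minimal sketch using a few of the rows above:

```python
# Rq is simply VCO2 / VO2, so recalculating it from the reported VO2 and VCO2
# columns exposes the rows where the printed value doesn't add up.
rows = [
    ("Baseline", 0.296, 0.220, 0.74),
    ("02:00",    0.424, 0.353, 0.99),
    ("04:00",    0.836, 0.822, 1.00),
]

for time, vo2, vco2, reported_rq in rows:
    real_rq = vco2 / vo2
    flag = "  <-- doesn't match" if abs(real_rq - reported_rq) > 0.01 else ""
    print(f"{time:8s} reported {reported_rq:.2f}  recalculated {real_rq:.2f}{flag}")
```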

Continue reading

It doesn’t make any sense

For a variety of reasons my wife recently had a full panel of PFTs (spiro+BD, lung volumes, DLCO) at a different hospital than the one I work at. I went with her and was pleased to see the technician perform the tests pleasantly, competently and thoroughly. I was able to glance at the results as the testing proceeded, so I had a fairly good idea of what the overall picture looked like by the time she was done.

The difficulty came later, when my wife asked me to print out her results so we could go over them together. Many hospitals and medical centers have websites that let patients email their doctor, review their appointments and access their medical test results. They go by a variety of names such as MyChart, MyHealth, Patient Gateway, PatientSite, PatientConnect, etc., etc. My hospital first implemented something like this over a dozen years ago, so I had thought that by now they were fairly universal, but conversations with a couple of friends from around the country have let me know that this isn’t really the case.

Regardless of this, the hospital where my wife had her PFTs does have a website for patients and her PFT results showed up about a week later. When I went to look at them, however, I was completely taken aback. Not because the results were wrong but because they were presented in a way that made them incredibly difficult to read and understand.

Here’s the report (and yes, this is exactly what it looked like on the patient website):

Continue reading

A real fixer-upper

I was reviewing reports today when I ran across one with some glaring errors. There were several things that immediately told me that the reported plethysmographic lung volumes were way off; the VA from the DLCO was almost a liter and a half larger than the TLC and the SVC was only about half the volume of the FVC.

[Image: Table1]

When I took a look at the raw test data I saw at least part of the reason why the technician had selected these results to be reported: the SVC quality from most of the efforts was poor. They mostly looked like this:

[Image: Fixer_Upper_01]

It is apparent that the patient leaked while panting against the closed shutter, and this caused the FRC baseline to shift upwards. I’ve discussed this problem previously; when this happens the RV is larger than the FRC, the ERV is negative and the TLC is overestimated. There is no way to fix this problem from within the software, since the FRC is determined by the tidal breathing before the shutter closes and cannot be re-measured afterward.
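These are exactly the kinds of internal consistency checks that could flag a report like this automatically. A minimal sketch, with arbitrary illustrative tolerances and hypothetical values:

```python
def lung_volume_checks(tlc, frc, rv, svc, fvc, va):
    """Flag internally inconsistent lung volume results (all values in liters).

    The relationships used (ERV = FRC - RV, VA should not exceed TLC, SVC and
    FVC should be similar) are standard; the tolerances are arbitrary choices
    made for illustration.
    """
    problems = []
    erv = frc - rv
    if erv < 0:
        problems.append(f"negative ERV ({erv:.2f} L): RV is larger than FRC")
    if va > tlc + 0.3:
        problems.append(f"VA ({va:.2f} L) is well above TLC ({tlc:.2f} L)")
    if svc < 0.85 * fvc:
        problems.append(f"SVC ({svc:.2f} L) is much smaller than FVC ({fvc:.2f} L)")
    return problems

# Hypothetical values resembling the kind of report described above
for msg in lung_volume_checks(tlc=5.10, frc=3.90, rv=4.20, svc=1.60, fvc=3.10, va=6.50):
    print("WARNING:", msg)
```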

Continue reading

Proposal to improve the readability of flow-volume loops

I’ve been planning to put together a tutorial on characterizing and interpreting the contours of flow-volume loops, so I’ve been accumulating flow-volume loops that are examples of different conditions. Lately I was reviewing some of them and noticed that when I tried to compare loops from different individuals with similar baseline conditions, the different sizes of the flow-volume loops made this difficult. For example, these two loops are both from individuals with normal spirometry.

[Image: FVL_Scaling_05]

[Image: FVL_Scaling_08]

One is from a short, elderly female and one is from a tall, young male. If all you had to look at were the flow-volume loops, you might think that the smaller loop was abnormal, but the larger loop actually comes from a spirometry effort with an FVC that was 92% of predicted while the smaller loop’s FVC was 113% of predicted. The difference in the sizes of these loops is of course due to the differences in age, gender and height between these individuals, but it is also due to settings we’ve made in our lab software and to the ATS/ERS spirometry standards.
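One way to remove the size differences when comparing loops is to plot them in percent-of-predicted space rather than in absolute liters and liters/second. This is only a sketch of that general idea, with made-up loop data and predicted values; it is not necessarily the scaling the full post proposes:

```python
import matplotlib.pyplot as plt

def normalized_loop(volume, flow, predicted_fvc, predicted_pef):
    """Rescale a flow-volume loop so both axes are in percent of predicted."""
    return ([100 * v / predicted_fvc for v in volume],
            [100 * f / predicted_pef for f in flow])

# Hypothetical digitized expiratory curves: volume in liters, flow in L/s
small_loop = ([0.0, 0.4, 1.0, 1.8, 2.2], [0.0, 4.5, 3.0, 1.2, 0.0])
large_loop = ([0.0, 1.0, 2.5, 4.0, 5.0], [0.0, 9.5, 6.0, 2.5, 0.0])

plt.plot(*normalized_loop(*small_loop, predicted_fvc=1.95, predicted_pef=4.8),
         label="short, elderly female")
plt.plot(*normalized_loop(*large_loop, predicted_fvc=5.40, predicted_pef=10.2),
         label="tall, young male")
plt.xlabel("Volume (% of predicted FVC)")
plt.ylabel("Flow (% of predicted PEF)")
plt.legend()
plt.show()
```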

Continue reading

What’s in a name?

My lab is in the final stages of a software update that will allow for electronic signing of our reports. This has been a long and slow process partly because the release date of the software got pushed back several times but mostly because a wide variety of different hospital departments and sub-departments have had to be involved.

In all the years that I’ve had computers in the pulmonary function lab I’ve never gone through a software update that was either as easy as expected or occurred within the original schedule. This includes the time when all we had was a single IBM PC/AT with a 40 megabyte hard drive, no network, and the only people who cared that we were going through an update were ourselves. Since we now have a dozen networked PCs located in two different buildings on campus as well as three off-site locations, using an IS-managed SQL server and an HL7 interface, I didn’t have any expectations for a speedy update and so far I have not been disappointed.

This time, because the update revolves around electronic signing, the hospital’s Health Information Management (HIM, i.e. Medical Records) department has been significantly involved. Among other things this has led to HIM reviewing all of our reports and requiring changes to bring them up to hospital standards. To some extent this makes sense since, for example, they require that patient identification be exactly the same on all reports from all departments (same fields, same locations).

However, they also questioned some of the terminology used on our test reports. We’ve used the default test names that were in our report format editor (yes, we’re that lazy) and until they were brought to our attention I never really thought about how odd some of them were. In particular, some of the terms used for the diffusing capacity didn’t make a lot of sense. For example, DLCO corrected for hemoglobin was DsbHb and DLCO/VA was reported as D/Vasbhb. To some extent I understand where these names came from, but the reality is that they are partly holdovers from the past, partly a consequence of the need to keep names short enough to fit in the space usually available on reports, and in some cases they were probably created by programmers who hadn’t the slightest idea what the correct nomenclature should have been.

Note: Dsb likely comes from a time when you needed to differentiate between the results of different types of DLCO tests (steady-state and single-breath). Since there hasn’t been a test system built for at least 40 years that could perform a steady-state DLCO, the need to make this distinction has long since passed.

Continue reading

Static reports, dynamic world

Reports are how patient test results are distributed. Paper versions have become less common because reports are now stored electronically in hospital information systems. But even though the way a report’s image is stored, retrieved and distributed has changed, reports are still generated by our labs’ software systems, and the way this is done has not changed in any significant way for quite a while.

Reports are the public face of any pulmonary function lab and they should be designed to be readable and pertinent. It is critically important for any lab to create and manage reports correctly. So why does our lab software make it so hard to do this?

Over the last several months I’ve had the opportunity to compare the reporting systems of the three largest manufacturers of pulmonary function equipment in the US. There are differences, of course, between the reporting systems, since each has its own approach towards formatting, editing and printing reports. What they all share, however, is a similar underlying model for reports, one that I call static report pages.

What I mean by static is that the report elements and their positions on a report page are determined and fixed in place when the report is formatted. When the report is printed, the page does not change, regardless of whether the results are present or not. This means that if you format a report to contain spirometry, lung volumes and DLCO, and the only test you perform is spirometry, then when you print the report the sections for lung volumes and DLCO will contain no results but they will still appear.
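To make the contrast concrete, here’s a minimal sketch of the alternative, a report built only from the sections that actually contain results. The section names and layout are placeholders, not any manufacturer’s format:

```python
# A static report prints every formatted section whether or not it contains
# results; a dynamic report would assemble the page from only the sections
# that actually have data. Section names here are placeholders.
def build_report(results: dict) -> str:
    section_order = ["Spirometry", "Lung Volumes", "DLCO", "MIP/MEP", "6MWT"]
    lines = []
    for section in section_order:
        rows = results.get(section)
        if not rows:          # skip empty sections instead of printing blanks
            continue
        lines.append(section)
        lines.extend(f"  {name}: {value}" for name, value in rows.items())
    return "\n".join(lines)

# Only spirometry was performed, so only spirometry appears on the page.
print(build_report({"Spirometry": {"FVC": "3.07 L", "FEV1": "2.15 L"}}))
```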

The number of tests that need to be placed on a report will vary from lab to lab depending on the equipment they have. For example, these tests are available on one manufacturer’s test systems or another’s:

  • Spirometry
  • Lung Volumes – Plethysmography
  • Lung Volumes – N2 Washout
  • Lung Volumes – Helium Dilution
  • Diffusing Capacity
  • RAW/SGaw
  • MIP/MEP
  • MVV
  • SBN2
  • 6MWT
  • FOT/IOS

There are probably other tests as well, but even if there aren’t, there are other report elements such as demographics, text notes, flow-volume loops, trends, etc. that also need to be managed.

Continue reading

Seeing shouldn’t always be believing

Although the numerical results are of course important, visual inspection of the volume-time and flow-volume loop graphs from a spirometry test is a critical part of interpretation. Spirometry quality and performance issues that don’t show up in the numbers are often highly evident in the graphs. The choices we make in creating and configuring reports, however, can hide important visual details and have the potential to decrease interpretation quality.

Recently I was inspecting the results from a spirometry test. There wasn’t anything particularly unusual about the numbers or the graphics on the report; I just like to make spot-checks on spirometry quality and wanted to make sure the best results had been selected. When I pulled up the raw test data on my computer screen, I noticed an unusual wavering pattern in the volume-time curve. I don’t remember seeing a volume-time curve like this before, and when I checked, all of the patient’s efforts were similar and all showed the same oscillations.

[Image: VT_Curve_waver_redacted]

Continue reading