A very strange spirometry report came across my desk a couple of days ago.
My first thought was that some of the demographic information had been entered incorrectly, but when I checked, the patient's age, height, gender and race were all present, all reasonably within the normal range for human beings in general and, more importantly, all agreed with what was in the hospital's database for the patient. I tried changing the patient's height, age, race and gender to see if it would make a difference, and although doing so made small changes in the percent predicted, the predicteds were still zero.
Or were they? They actually couldn't have been zero, regardless of what was showing up on the report, since the observed test values are divided by the predicted values, and if the predicteds were really zero we'd have gotten a "divide by zero" error, and that wasn't happening. Instead, the predicted values had to be very close to zero, but not actually zero, and the software was rounding them down to zero for the report. Simple math showed me the predicted value for FVC was (very) approximately 0.0103 liters, but why was this happening?
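The "simple math" is just inverting the percent predicted formula: since percent predicted is 100 times the observed value divided by the predicted value, the hidden predicted can be recovered from the two numbers that do appear on the report. A minimal sketch (the observed FVC and percent predicted below are made-up values, not the ones from the actual report):

```python
# Back-calculating a near-zero predicted value from the observed value
# and the reported percent predicted. Numbers are illustrative only.
def predicted_from_percent(observed: float, percent_predicted: float) -> float:
    """Invert: percent predicted = 100 * observed / predicted."""
    return 100.0 * observed / percent_predicted

# e.g. an observed FVC of 3.50 L reported as 34,000% predicted implies
# a predicted FVC of roughly 0.0103 L
print(round(predicted_from_percent(3.50, 34000.0), 4))
```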
When I first started performing CPETs in the 1970s, a patient's exhaled gas was collected at intervals during the test in Douglas bags and I had a worksheet that I'd use to record the patient's respiratory rate, heart rate and SaO2. After the test was over I'd analyze the gas concentrations with a mass spectrometer and the gas volumes with a 300 liter Tissot spirometer and then use the results from these to hand-calculate VO2, VCO2, RQ, tidal volume and minute volume. These results were then passed on to the lab's medical director, who'd use them when dictating a report.
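For readers who have never done these calculations by hand, the worksheet arithmetic can be sketched roughly as follows. This is a simplified illustration, not my original worksheet: the gas fractions and volumes are hypothetical, and the STPD/BTPS corrections a real calculation would apply are omitted to keep the arithmetic visible.

```python
# Simplified Douglas-bag calculations. Gas fractions and volumes below
# are hypothetical; STPD/BTPS corrections are deliberately omitted.
def douglas_bag_results(bag_volume_l, collection_time_min, breaths,
                        feo2, feco2, fio2=0.2093, fico2=0.0004):
    ve = bag_volume_l / collection_time_min      # minute ventilation, L/min
    vt = bag_volume_l / breaths                  # tidal volume, L
    rr = breaths / collection_time_min           # respiratory rate, /min
    # Haldane transformation: inspired volume inferred from the N2 balance
    fen2 = 1.0 - feo2 - feco2
    fin2 = 1.0 - fio2 - fico2
    vo2 = ve * ((fen2 / fin2) * fio2 - feo2)     # O2 uptake, L/min
    vco2 = ve * (feco2 - fico2)                  # CO2 output, L/min
    return {"VE": ve, "VT": vt, "RR": rr,
            "VO2": vo2, "VCO2": vco2, "RQ": vco2 / vo2}

# e.g. a 100 L collection over 1 minute, 20 breaths, FeO2 16%, FeCO2 4.5%
results = douglas_bag_results(100.0, 1.0, 20, 0.16, 0.045)
```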
Around 1990 the PFT lab I was in at the time acquired a metabolic cart for CPET testing. This both decreased the amount of work I had to do to perform a CPET and significantly increased the amount of information we got from a test. The reporting software that came with the metabolic cart, however, was very simplistic, and neither the lab's medical director nor I felt it met our needs, so I created a word processing template, manually transcribed the results from the CPET system printouts and used it to report results.
Twenty-five years and three metabolic carts later, I'm still using a word processing template to report CPET results.
Well, first, the reporting software is still simplistic and using it we still can't get a report that we think meets our needs (it's also not easy to create and modify reports, which is a chronic complaint I have about all the PFT lab software I've ever worked with). Second, there are some values that we think are important that the CPET system's reporting software does not calculate, and there is no easy way to get them on a report as part of the tabular results. Finally, and most importantly, I need to check the results for accuracy.
You'd think that after all these years you wouldn't need to check PFT and CPET reports for mathematical errors, but unfortunately that's not true. For example, these results are taken from a recent CPET:
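The checks themselves are simple arithmetic: the reported RER should equal the reported VCO2 divided by the reported VO2, the reported VE/VO2 should equal VE divided by VO2, and so on. A sketch of that kind of cross-check (the function name, arguments and tolerance are mine, not anything from a vendor's software):

```python
# Cross-checking reported CPET values against their defining arithmetic.
# Function name, arguments and tolerance are illustrative choices.
def check_cpet_row(vo2, vco2, ve, reported_rer, reported_ve_vo2, tol=0.02):
    errors = []
    rer = vco2 / vo2                       # RER is VCO2 divided by VO2
    if abs(rer - reported_rer) > tol:
        errors.append(f"RER: calculated {rer:.2f}, reported {reported_rer:.2f}")
    ve_vo2 = ve / vo2                      # ventilatory equivalent for O2
    if abs(ve_vo2 - reported_ve_vo2) > tol * ve_vo2:
        errors.append(f"VE/VO2: calculated {ve_vo2:.1f}, "
                      f"reported {reported_ve_vo2:.1f}")
    return errors

# a self-consistent row comes back clean; a mis-reported RER does not
print(check_cpet_row(2.0, 1.8, 60.0, 0.90, 30.0))   # → []
print(check_cpet_row(2.0, 1.8, 60.0, 1.10, 30.0))   # flags the RER mismatch
```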
For a variety of reasons my wife recently had a full panel of PFTs (spiro+BD, lung volumes, DLCO) at a different hospital than the one I work at. I went with her and was pleased to see the technician perform the tests pleasantly, competently and thoroughly. I was able to glance at the results as the testing proceeded so I had a fairly good idea what the overall picture looked like by the time she was done.
The difficulty came later when my wife asked me to print out her results so we could go over them together. Many hospitals and medical centers have websites that let patients email their doctor, review their appointments and access their medical test results. They go by a variety of names such as MyChart, MyHealth, Patient Gateway, PatientSite, PatientConnect etc., etc. My hospital first implemented something like this over a dozen years ago so I had thought that by now they were fairly universal but conversations with a couple of friends from around the country have let me know that this isn’t really the case.
Regardless of this, the hospital where my wife had her PFTs does have a website for patients and her PFT results showed up about a week later. When I went to look at them however, I was completely taken aback. Not because the results were wrong but because they were presented in a way that made them incredibly difficult to read and understand.
Here’s the report (and yes, this is exactly what it looked like on the patient website):
I was reviewing reports today when I ran across one with some glaring errors. There were several things that immediately told me that the reported plethysmographic lung volumes were way off; the VA from the DLCO was almost a liter and a half larger than the TLC and the SVC was only about half the volume of the FVC.
When I took a look at the raw test data I saw at least part of the reason why the technician had selected these results to be reported and that was because the SVC quality from most of the efforts was poor. They mostly looked like this:
It is apparent that the patient leaked while panting against the closed shutter and this caused the FRC baseline to shift upwards. I’ve discussed this problem previously, but when this happens the RV is larger than the FRC, there is a negative ERV and the TLC is overestimated. There is no way to fix this problem from within the software. The FRC is determined by the tidal breathing before the shutter closes and cannot be re-measured afterward.
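The arithmetic makes the problem easy to see: RV = FRC − ERV and TLC = RV + SVC, so an upward baseline shift that under-measures the ERV inflates both the RV and the TLC. A sketch with hypothetical volumes:

```python
# How a leak-induced baseline shift propagates through the lung-volume
# arithmetic. All volumes are hypothetical, in liters.
def derived_volumes(frc, erv, svc):
    rv = frc - erv               # residual volume
    tlc = rv + svc               # total lung capacity
    return round(rv, 2), round(tlc, 2)

frc, erv, svc = 3.0, 1.2, 4.0
print(derived_volumes(frc, erv, svc))          # (1.8, 5.8) -- plausible

shift = 1.5                     # upward baseline drift from the leak
print(derived_volumes(frc, erv - shift, svc))  # (3.3, 7.3) -- ERV is now -0.3,
                                               # RV exceeds FRC, TLC overestimated
```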
I've been planning on putting together a tutorial on characterizing and interpreting the contours of flow-volume loops, so I've been accumulating flow-volume loops that are examples of different conditions. Lately I was reviewing some of them and noticed that when I tried to compare loops from different individuals with similar baseline conditions, the different sizes of the flow-volume loops made this difficult. For example, these two loops are both from individuals with normal spirometry.
One is from a short, elderly female and one is from a tall, young male. If all you had to look at were the flow-volume loops, you might think that the smaller loop was abnormal, but the larger loop actually comes from a spirometry effort with an FVC that was 92% of predicted while the smaller loop's FVC was 113% of predicted. The difference in the sizes of these loops is of course due to the differences in age, gender and height between these individuals, but also to settings we've made in our lab software and to the ATS/ERS spirometry standards.
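One workaround when comparing loop contours is to rescale each loop before overlaying them, for example expressing volume as a percentage of that effort's FVC and flow as a percentage of its peak flow. This is just one possible normalization, sketched here with made-up data points:

```python
# Rescale a flow-volume loop so loops from different-sized individuals
# can be compared shape-to-shape: volume as % of the effort's FVC,
# flow as % of its peak expiratory flow. Data points are made up.
def normalize_loop(volumes, flows):
    vol_min = min(volumes)
    fvc = max(volumes) - vol_min            # exhaled volume span
    pef = max(flows)                        # peak expiratory flow
    return ([100.0 * (v - vol_min) / fvc for v in volumes],
            [100.0 * f / pef for f in flows])

volumes = [0.0, 0.5, 1.5, 2.8, 3.5]         # liters
flows   = [0.0, 6.2, 4.0, 1.5, 0.0]         # L/s
norm_volumes, norm_flows = normalize_loop(volumes, flows)
```

Two loops normalized this way can be plotted on the same 0-100% axes, so contour differences stand out instead of size differences.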
My lab is in the final stages of a software update that will allow for electronic signing of our reports. This has been a long and slow process partly because the release date of the software got pushed back several times but mostly because a wide variety of different hospital departments and sub-departments have had to be involved.
In all the years that I've had computers in the pulmonary function lab I've never gone through a software update that was either as easy as expected or occurred within the original schedule. This includes the time when all we had was a single IBM PC/AT with a 40 megabyte hard drive, no network, and the only people that cared we were going through an update were ourselves. Since we now have a dozen networked PCs located in two different buildings on-campus as well as three off-site locations using an IS-managed SQL server and HL7 interface, I didn't have any expectations for a speedy update, and so far I have not been disappointed.
This time, because the update revolves around electronic signing, the hospital's Health Information Management (HIM, i.e. Medical Records) department has been significantly involved. Among other things this has led to HIM reviewing all of our reports and requiring changes to bring them up to hospital standards. To some extent this makes sense since, for example, they require that patient identification be exactly the same on all reports from all departments (same fields, same locations).
However, they also questioned some of the terminology used on our test reports. We’ve used the default test names that were in our report format editor (yes, we’re that lazy) and until they were brought to our attention I never really thought how odd some of them were. In particular, some of the terms used for the diffusing capacity didn’t make a lot of sense. For example, DLCO corrected for hemoglobin was DsbHb and DLCO/VA was reported as D/Vasbhb. To some extent I understand where these names came from but the reality is that they are in part holdovers from the past, in part they come from a need to keep names short so they fit in what space is usually available on reports, and in some cases they were probably created by programmers who hadn’t the slightest idea what the correct nomenclature should have been.
Note: Dsb likely comes from a time when you needed to differentiate between the results of different types of DLCO tests (steady-state and single-breath). Since there hasn't been a test system built in at least 40 years that could perform a steady-state DLCO, the need to make this distinction is long past.
Reports are how patient test results are distributed. Paper versions have become less common because reports are now stored electronically in hospital information systems. Even if the way in which a report’s image is now stored, retrieved and distributed has changed, reports are still generated by our lab’s software systems and the ways in which this is done have not changed in any significant way for quite a while.
Reports are the public face of any pulmonary function lab and they should be designed to be readable and pertinent. It is critically important for any lab to create and manage reports correctly. So why does our lab software make it so hard to do this?
Over the last several months I’ve had the opportunity to compare the reporting systems of the three largest manufacturers of pulmonary function equipment in the US. There are differences of course between each reporting system since each has its own approach towards formatting, editing and printing reports. What they all share however, is a similar underlying model for reports that I call static report pages.
What I mean by static is that the report elements and their position on a report page are determined and fixed in place when the report is formatted. When the report is printed, regardless of whether the results are present or not, the report page does not change. This means that if you format a report to contain spirometry, lung volumes and DLCO, and the only test you perform is spirometry, when you print the report the sections for lung volumes and DLCO will contain no results but they will still appear.
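The alternative would be a dynamic model that assembles the page from only the sections that actually contain results. A minimal sketch of the idea (the section names, result labels and layout are purely illustrative, not taken from any vendor's software):

```python
# Dynamic report assembly: only sections that contain results appear
# on the page. Section names and layout are illustrative.
def build_report(results):
    section_order = ["Spirometry", "Lung Volumes", "DLCO"]
    lines = []
    for section in section_order:
        values = results.get(section)
        if not values:                 # skip sections with no results
            continue
        lines.append(section)
        lines.extend(f"  {name}: {value}" for name, value in values.items())
    return "\n".join(lines)

# only spirometry was performed, so only spirometry appears on the page
print(build_report({"Spirometry": {"FVC (L)": 3.50, "FEV1 (L)": 2.80}}))
```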
The number of tests that need to be placed on a report will vary from lab to lab depending on what equipment they have. For example, these tests are available on one manufacturer or another's test systems:
Lung Volumes – Plethysmography
Lung Volumes – N2 Washout
Lung Volumes – Helium Dilution
There are probably other tests as well but even if there aren’t, there are other report elements such as demographics, text notes, flow-volume loops, trends etc. that also need to be managed.
Although the numerical results are of course important, visual inspection of the volume-time and flow-volume loop graphs from a spirometry test is a critical part of interpretation. Spirometry quality and performance issues that don't show up in the numbers are often highly evident in the graphs. Choices we make in creating and configuring reports, however, can hide important visual details and have the potential to decrease interpretation quality.
Recently I was inspecting the results for a spirometry test. There wasn't anything particularly unusual about the numbers or the graphics on the report; I just like to make spot-checks on spirometry quality and wanted to make sure the best results had been selected. When I pulled up the raw test data on my computer screen I noticed an unusual wavering pattern in the volume-time curve. I don't remember seeing a volume-time curve like this before, and when I checked, all of the patient's efforts were similar and all showed similar oscillations.
The Hospital Information Systems (HIS) at different medical centers have grown up mostly in isolation from each other. Even when an HIS is installed by a national vendor, each individual hospital has tended to make its own customizations and to follow past conventions. This is changing, and it is changing because a number of issues are driving rapid improvements in inter-hospital communication. The Meaningful Use (MU) program is a major factor and one that has been helping to set the pace, but because improved communication lowers costs and improves the quality of care, insurers and medical institutions have been moving in this direction for their own reasons as well.
The regulations and standards for Health Information Exchange (HIE) are evolving rapidly. The overall framework for HIE resides in the Consolidated Clinical Document Architecture (C-CDA) and HL7 messaging protocols. This has given hospitals a unified approach towards managing their communication channels between physicians, clinics, other hospitals and insurers, but one problem limiting its usefulness has been the different nomenclature used by different institutions for the same pieces of information.
When databases are grown in isolation they tend to end up with labels for data elements that are idiosyncratic and unique to each medical center. There needs to be a way to resolve this Tower of Babel and that is what the Logical Observation Identifiers Names and Codes (LOINC) organization is doing.
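In practice this means mapping each lab's local labels to a shared code before results cross an interface. A rough sketch of the idea follows; the two LOINC codes shown are examples I believe to be correct, but any real mapping should be verified against the official LOINC database, and the local labels are just the kind of idiosyncratic names discussed above:

```python
# Resolving idiosyncratic local result labels to LOINC codes so a
# receiving system can recognize them. Example mapping only -- verify
# every code against the LOINC database before using it in an interface.
LOCAL_TO_LOINC = {
    "Fvc":  "19870-5",   # Forced vital capacity by spirometry
    "Fev1": "20150-9",   # FEV1
}

def to_loinc(local_label: str) -> str:
    code = LOCAL_TO_LOINC.get(local_label)
    if code is None:
        raise KeyError(f"no LOINC mapping for local label {local_label!r}")
    return code
```

An unmapped label raising an error, rather than passing through silently, is the point: it forces each idiosyncratic name to be reconciled once instead of confusing every downstream system.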
The last several decades have seen a complete transition to the use of computers in pulmonary function testing. This has improved lab efficiency, but it is also the new baseline. Further improvements in technology may improve the reliability and accuracy of test equipment and test results, but they are unlikely to improve PFT lab efficiency any more than they already have.
Report management, which is really information management, has started but hasn't yet completed the same technological transition, and it is here that significant improvements can still be made. These improvements will not only increase the efficiency of the pulmonary function lab, but also its clinical effectiveness for the physicians and patients that are the lab's customers.
To one degree or another most pulmonary function labs are still dominated by traditional reporting systems which are labor intensive and slow. Managing paper reports for a patient visit usually consists of:
- Keeping patient reports in folders: either creating a new folder or pulling the patient's existing folder from the file cabinets.
- Printing the test results and collating them with the patient's lab folder.
- Delivering the stack of reports and lab folders to a reviewer, who makes penciled notes on the reports.
- Transferring the stack of reports and lab folders to a typist, who types the interpretation into the lab database.
- Printing the final reports, collating them with the patient lab folders, and delivering the stack to the physician, who then physically signs each report.
- Photocopying the reports and snail-mailing them to the ordering physician and medical records.
- Re-filing the lab folders.
Not every pulmonary function lab still uses all of these steps to manage reports of course, but large parts of this overall process are often still major components in report management. So why are we still moving paper around when what we really want to do is to move the information that’s on the paper around?