The paper's authors, Elizabeth Drye, MD, and Harlan Krumholz, MD, of the Yale-New Haven Hospital Center for Outcomes Research and Evaluation, and their colleagues, are the experts on this sort of thing. And they find that this huge variability in hospital length of stay produces wide divergence between in-hospital and 30-day mortality rates across the country.
And that, they say, poses a troublesome problem for using death rates to measure quality.
Instead, they argue, hospitals and payers should be using the same metric, which should be 30-day mortality, wherever the patient dies. That period uniformly captures most of the patients with these three illnesses who will die after—and perhaps as a function of—hospital care.
"Our results argue against using in-hospital measures," the authors write, adding that "in-hospital measures favor hospitals with shorter mean LOS (length of stay) and transfer rates."
In a phone interview, I asked lead author Drye whether hospitals might prefer scoring in-hospital death for non-Medicare patients. These patients tend to be younger, and hospitals can deftly move them to home, hospice, or skilled nursing facilities—perhaps hiding poor quality in cases where death might have been prevented with better care.
"When you measure quality, you have to worry that hospitals might, as we call it, 'game the system,' and some might behave that way," she replied. "But that's not so much our concern."
"Our main point is that you should always be counting mortality within the same number of days for every patient. You should not be looking at in-hospital mortality because lengths of stay (at hospitals around the country) are so variable."
Drye tells me the authors hope their paper and research will inform the National Quality Forum, which has endorsed this and other quality measures based on in-hospital mortality, and urge it to "reassess."