Observation Status

Debate over the seemingly arcane subject of “observation status” has blossomed in recent years because billions of Medicare payment dollars are at stake. What is “observation status”?

According to MedPAC (page 57):

If a Medicare patient does not initially meet the criteria for inpatient admission but the attending physician concludes the patient should be observed in the hospital for a period of time before being sent home, the patient can remain in the hospital in observation status. Observation stays are billed as outpatient services rather than inpatient admissions.

In most cases, observation patients receive care in a regular inpatient unit, and get treated just like other inpatients. And in many cases, observation stays stretch out to several days: in 2012, 26 percent lasted two nights and 11 percent at least three. But from Medicare’s point of view, this is outpatient care, which leaves patients responsible for more of the bill, and ineligible for Medicare-paid rehab or skilled nursing care.

Hospitals started designating more stays as “observation” after Medicare’s auditors began disallowing the entire payment for some brief hospital “admissions.” Even though “observation stays” pay less than inpatient admissions, hospitals took a better-safe-than-sorry approach, classifying many brief stays as “observation.” Between 2006 and 2013, observation stays increased by 96 percent, accounting for more than half of the apparent decline in total Medicare admissions during that seven-year period (see page 55).

Observation Classification

Medicare’s recent adoption of penalties for readmissions offered hospitals a new incentive to shift some patients returning within 30 days of their discharge to observation status. A patient stay labeled “observation” doesn’t count as a readmission, allowing hospitals that might otherwise be fined for having too many readmissions to skirt the penalty.

Recent data indicates that such gaming isn’t just a theoretical possibility.

About 10 percent of all hospital stays occurring within 30 days of discharge are now classified as “observation”; at a quarter of hospitals, 14.3 percent or more of all repeat stays are so classified. Moreover, analysis of time trends in observation stays makes it clear that they account for a significant chunk of the reduction in readmissions. Between 2010 and 2013, 36 percent of the claimed decrease in readmissions was actually just a shift to observation stays.

Emergency Department Use

And it’s not just observation stays that are on the rise. A growing share of recently discharged patients are also being treated in emergency departments (EDs) without being admitted.

Factoring in the 0.4 percentage point increase in ED visits within 30 days of discharge, the fall in the share of discharged patients returning to hospitals for urgent problems is only 0.3 percentage points over the past three years, less than one-third of the improvement that CMS claims. And even this 0.3 point overall fall may be partly an artifact of hospitals’ “upcoding” (exaggerating the severity of patients’ illnesses), which boosts diagnosis-related group (DRG) payments and could also corrupt the formula used to risk-adjust expected readmission rates.

For patients discharged after heart attacks, the urgent return rate has actually risen slightly; the reported 1.8 percentage point fall in readmissions is more than offset by a 0.7 point increase in observation stays and a 1.2 point increase in ED visits.
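The arithmetic behind that claim is easy to check. Here is a minimal sketch using the heart-attack figures above, assuming (as the comparison implies) that all three numbers are percentage-point changes in the share of discharged patients who return within 30 days; the variable names are illustrative only:

```python
# Back-of-the-envelope check of the heart-attack figures cited above.
# Assumption: all values are percentage-point changes in the share of
# discharged patients returning to the hospital within 30 days.
reported_readmission_fall = 1.8   # decline in the official readmission rate
observation_stay_increase = 0.7   # rise in "observation" stays
ed_visit_increase = 1.2           # rise in ED visits without admission

# Net change in urgent returns of any kind (readmission, observation, or ED visit):
net_change = -reported_readmission_fall + observation_stay_increase + ed_visit_increase

print(round(net_change, 1))  # +0.1: urgent returns edged up despite the reported improvement
```

The same decomposition, applied to the all-cause figures above, is what shrinks the claimed improvement to roughly 0.3 percentage points.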

These aggregate figures surely hide vast differences among hospitals. Some hospitals have undoubtedly reduced readmissions by doing the hard work of fully stabilizing fragile patients prior to discharge, improving communications with outpatient providers, assuring diligent follow-up, etc.

But others appear to be hitting their readmission targets mostly by gaming the system — re-labeling rather than re-designing care. Medicare rewards both approaches equally, but for hospitals, re-labeling is probably far cheaper (and more profitable) than re-designing.

Medicare’s readmission penalties are among the growing number of pay-for-performance (P4P) and value-based purchasing initiatives that offer bonuses to high performers and/or penalize the laggards. We previously pointed out that the evidence for this carrot-and-stick approach is unconvincing. More recently, a long-term follow-up of the English hospital P4P program found that P4P generated no improvement in patient outcomes, dampening the enthusiasm generated by the rosy short-term findings and reinforcing the need for skepticism.

Adopting P4P strategies everywhere when they have been proven nowhere risks quality failure on a monumental scale. It pressures hospitals to cheat, saps doctors’ and nurses’ intrinsic motivation to do good work even when no one is looking, and corrupts the data vital for quality improvement.

As the graffiti artist Banksy once said: “Become good at cheating and you never need to become good at anything else.”