Hospital infection monitoring methods vary, skew data

In about a quarter of U.S. states, hospitals must publicly report patient safety data.

One measure of safety is the number of bloodstream infections acquired by patients during their stay at each hospital.

These infections are often associated with the use of catheters, tubes inserted into a vein in the neck, chest or groin to aid in treatment.

Obviously, you can’t compare infection rates unless everyone uses the same assessment method.

But there are two widely used ways of detecting hospital-acquired infections.

One involves trained human observers, who employ objective criteria such as blood tests, combined with subjective judgments about how the infections occurred.

The other method involves only objective criteria, analyzed by computer algorithms.
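
For illustration, here is a minimal sketch of what such a rule-based detection algorithm might look like. The record fields and the two-day rule are hypothetical assumptions for this sketch; the study's actual criteria are not specified here.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PatientRecord:
    admitted: date                  # hospital admission date
    line_inserted: date             # catheter (central line) placement
    line_removed: date | None       # None if the line is still in place
    positive_cultures: list[date]   # dates of positive blood cultures

def flag_infection(rec: PatientRecord) -> bool:
    """Flag a possible catheter-associated bloodstream infection.

    Hypothetical rule: a positive blood culture counts if it occurs
    at least 2 days after admission (so it is plausibly acquired in
    the hospital) and while the catheter is in place.
    """
    for culture in rec.positive_cultures:
        hospital_acquired = (culture - rec.admitted).days >= 2
        line_in_place = culture >= rec.line_inserted and (
            rec.line_removed is None or culture <= rec.line_removed
        )
        if hospital_acquired and line_in_place:
            return True
    return False
```

Because every step uses only objective criteria from the record, the same inputs always yield the same answer, which is exactly what the human-observer method cannot guarantee.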

A recent study in The Journal of the American Medical Association used both methods to evaluate the same patients.

The results varied dramatically.

The study involved twenty intensive-care units at four medical centers.

According to the human observers, the overall infection rate averaged about three episodes for every 1,000 days of catheter use.

The computer algorithm yielded an average almost three times as high.
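
To make the metric concrete: the rate is simply the number of infections divided by total catheter-days, scaled to 1,000. A quick sketch with made-up numbers (not the study's data):

```python
def infections_per_1000_catheter_days(infections: int, catheter_days: int) -> float:
    """Standard rate: infection episodes per 1,000 days of catheter use."""
    return infections / catheter_days * 1000

# Made-up example: 15 infections over 5,000 total catheter-days
print(infections_per_1000_catheter_days(15, 5000))  # 3.0, i.e. about
# three episodes per 1,000 catheter-days
```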

What’s more, the medical center with the lowest infection rate determined by human observers had the highest rate according to the computer algorithm.

Clearly, something’s wrong.

The researchers chalked the discrepancy up to variability among the human observers. And there's no clear "gold standard" method of observation.

With an estimated 99,000 U.S. deaths resulting from hospital-acquired infections each year, a gold standard would prove helpful.

Lives are at stake, and patients deserve to make truly informed decisions about their health care options.
