We MUST change how we pay for healthcare in the U.S.

Decades of experiments prove that our current system doesn’t work

Every decade or two, U.S. healthcare industry talking heads insist that this time, they’ve found a way to make money and deliver great patient care.

The latest strategy getting healthcare execs worked up is the value-based care model, which assumes that if providers are paid to deliver good health outcomes rather than for every service they deliver and test they order, everybody wins.

If you read the breathless articles pumped out by consulting firms, health insurance companies and industry seers, you’d think we were on the brink of a massive breakthrough — that is, unless you’ve been around long enough to see a baker’s dozen of previous models fail.

In reality, this is just one of the countless times the industry has attempted to reinvent how providers get paid. And after watching this happen over and over again, I’ve become convinced that we’re wasting our time.

The truth is that if we want to change healthcare, we really have to rebuild it more or less from the ground up, and completely rethink how it’s financed. Otherwise, we’ll never be able to get healthcare costs, quality and access into line with the other industrialized countries of the world.

For several decades, the people who pay for healthcare — largely health insurers, employers and the government — have been working to tame the staggering growth in healthcare costs.

Their intent has long been to phase out the “fee for service” payment system, in which doctors and hospitals get paid every time they run a test or deliver care. Health insurers feel that this system encourages providers to pad their income by overtreating patients. (They’re not necessarily wrong, but that’s for another story.)

Starting in the late 1970s, the powers that be began to pressure doctors to deliver as little care and order as few tests as possible. In theory, this was not going to harm patients, as doctors were free to do whatever was truly necessary.

In the real world, though, there was plenty of guesswork involved in medical decision making, and doctors didn’t always have a cut-and-dried standard to cite, so far too often they couldn’t get paid to practice medicine as they thought best.

With this oppressive oversight in place, doctors and patients both got squeezed.

If Doctor A was thought to have requested too many CT scans or ordered lab tests that didn’t seem to meet “clinical necessity” guidelines, she might get a stern talking-to from her practice administrator or even a nastygram from a health insurance company her group worked with.

Meanwhile, the patient’s needs often got lost in the struggle, which sometimes meant that they stayed sick or got worse, pushing medical expenses up further.

When this kind of pressure didn’t cut costs enough, healthcare payers came up with a new way to bring doctors in line.

The ’90s saw the emergence of a new payment model known as “capitation,” in which doctors got a flat monthly payment for each patient that was supposed to cover all of that patient’s medical needs. In essence, insurers fobbed off their risk on doctors.

On the one hand, this model typically offered financial incentives to practices that kept patients healthy, which is certainly a step up from threats and scoldings.

The ugly flip side of capitation contracts, however, was that if a patient consumed a lot of resources, the providers would be on the hook for the costs themselves. To stay afloat, some medical groups began to avoid sicker patients. Others tried to be more efficient but got their clocks cleaned financially anyway.

While insurance companies did manage to fob some of their costs off on doctors, the absolute cost of care didn’t fall to any appreciable extent.

Once again, nothing had truly changed.

By the early 2000s, healthcare leaders had finally admitted that capitation didn’t work as planned. Over the next couple of decades, a new approach would rise from the ashes, allegedly one which was informed by the failures of the past.

This new approach, dubbed “value-based care,” is based on the idea that health purchasers — health insurance companies and the employers they represent — aren’t really buying a specific package of services; they’re buying a result: employee health. (This is similar to the idea that drivers are buying the ability to travel, not wheels and trunks and headlights.)

Under value-based care contracts, insurers pay providers such as doctors and hospitals based on the health outcomes patients achieve and the efficiency with which they accomplish these results. In some cases, providers accept a “bundled” payment that covers the expected costs of an episode of care, such as heart surgery.

This approach is probably better than fee-for-service payments, as it creates strong incentives for avoiding needless procedures, tests and physician encounters. It’s also an improvement on the capitation model, as it doesn’t punish providers who take on sicker patients.

That being said, making value-based care work is expensive for providers, who need, at a minimum, to have advanced technology in place to track and manage their patients. It also does nothing to relieve the financial burdens of patients, which have grown astronomically in recent years with the explosion of high-deductible health plans and steadily rising copay requirements.

While it’s probably too soon to call value-based payment a bust, it’s far from a panacea, and patients are still struggling to get their needs met.

NEXT: In Part 2, I outline some new and better ways to transform healthcare finance in the U.S.
