Can Doctors Choose Between Saving Lives and Saving a Fortune?


America is great, they say! America is medically advanced — they have the best medicine, the best brains, etc. etc. But America is burdened by the most expensive health care in the world. Many Americans go bankrupt after falling sick — especially after getting medical treatment for cancer.

We in the developing countries will suffer a similar fate to the Americans if we follow the so-called Great American medical system!

Dr. Siddhartha Mukherjee wrote this article in The New York Times.

Can Doctors Choose Between Saving Lives and Saving a Fortune?

To understand something about the spiraling cost of health care in the United States, we might begin with a typical conundrum:

Imagine a 60-something man — a nonsmoker, overweight, with diabetes — who has just survived a heart attack. Perhaps he had an angioplasty, with the placement of a stent, to open his arteries. The doctor’s job is to keep the vessels open.

The doctor has two choices of medicines to reduce the risk for a second heart attack.

  • There’s Plavix, a tried-and-tested blood thinner, that prevents clot formation; the generic version of the drug costs as little as 25 cents a pill.
  • And there’s Brilinta, a newer medicine that is also effective in clot prevention; it costs about $6.50 a pill — 25 times as much.

Brilinta is admittedly more effective than Plavix — by about 2 percentage points.

In a yearlong trial of 18,600 patients, 10 percent died from vascular causes, heart attack or stroke on Brilinta, while about 12 percent did on Plavix.

Should the doctor prescribe the best possible medicine, assuming that the man has private health insurance that will pay the bulk of the costs? Or should the doctor try to conserve health care costs by prescribing the cheaper medicine that is nearly as good?
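The trade-off can be put in rough numbers. Here is a minimal back-of-envelope sketch using the figures in the article (25 cents vs. $6.50 per pill, a 2-percentage-point absolute risk reduction over one year); the one-pill-per-day dosing is a simplifying assumption, not a clinical claim:

```python
# Back-of-envelope cost-effectiveness sketch for the Plavix vs. Brilinta choice.
# Prices and risk figures are from the article; one pill per day is assumed
# for simplicity.
plavix_per_pill = 0.25     # generic Plavix, dollars
brilinta_per_pill = 6.50   # Brilinta, dollars

arr = 0.12 - 0.10          # absolute risk reduction over one year (~2 points)
nnt = 1 / arr              # number needed to treat: ~50 patients

extra_cost_per_patient = (brilinta_per_pill - plavix_per_pill) * 365
cost_per_event_avoided = nnt * extra_cost_per_patient

print(f"NNT: {nnt:.0f}")
print(f"Extra drug cost per patient-year: ${extra_cost_per_patient:,.2f}")
print(f"Cost per additional event avoided: ${cost_per_event_avoided:,.0f}")
```

In other words, roughly 50 patients must take the costlier drug for a year, at about $2,281 extra each, to prevent one additional death, heart attack or stroke — on the order of $114,000 per event avoided. Whether that is money well spent is exactly what the doctors below are wrestling with.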

“We thought about this nearly every day when discharging patients from the cardiology unit,” Dhruv Khullar, a newly minted hospital attending, told me. “Some of us believed that a doctor’s job is to deliver the best possible care.

“Others argued that doctors should aim to find some balance between medical benefit, financial cost and social responsibility.

“It’s the kind of question that we aren’t really trained to solve. Are costs something that an individual doctor should do something about? What is a doctor supposed to do?”

Irene Papanicolas, Liana Woskie and Ashish Jha wrote that the United States is a sore-thumb outlier among 11 wealthy nations in medical spending. We spend 18 percent of our G.D.P. on health care, while Australia, Canada, Denmark and Japan seem to make do with about half that amount.

Yet life expectancy in the United States is the lowest in the group, and infant mortality is the highest. Our out-of-control prices have a stifling effect on the economy.

So what is driving the cost? Each time we went to a doctor, it seems, we managed to spend more. Tests were ordered more frequently: we sat inside M.R.I. and CT scanners more often than patients in most other countries.

We had high-cost surgical procedures performed more often than most other populations in the world. Knee replacements, cataract surgeries, cesarean deliveries, coronary-bypass grafts, angioplasty.

The United States ranked among the highest in the use of most of these operations. Many of these procedures also cost more in America (an M.R.I. costs $1,150 in the United States and $140 in Switzerland; it’s hard to insist that an American M.R.I. is eight times as good).

And some of these procedures inevitably led to complications, and then we paid for those complications. The impact on overall life expectancy was evidently minimal.

The United States leads developed nations in what the surgeon and writer Atul Gawande has called an epidemic of “overtesting, overdiagnosis and overtreatment.”

If expensive procedures explain some of the costs accrued by Americans, pharmaceutical prices and spending offer an even more alarming explanation. We spent $1,443 annually per person (yes, you read that number right) on drugs — in part because each medicine costs us more, and in part because we used new drugs that weren’t even available in many other countries.

Humira, the treatment for rheumatoid arthritis, was priced at $2,500 per month in the United States versus $980 in Japan and France.

Lantus, the long-acting form of insulin, cost us $186 per month, four times the price in France. Adding pharmaceutical insult to injury, many more expensive drugs were invented in America — and yet we paid more than any other rich nation to use them ourselves.




New Cancer Drugs — Beware!!!


Should regulators insist on robust evidence that a new drug shows clear benefit to patients as a condition of approval, or are demands for such levels of certainty unrealistic, or even unethical? Marc Beishon reports.

  • Two studies from the US and Europe show that the majority of cancer drugs enter the market without evidence of benefit on overall survival or quality of life, and only about 15% of them have since demonstrated such benefit.
  • Other studies have shown no relationship between price and clinical benefit of FDA-approved drugs.
  • There are just not many new cancer drugs that qualify as real game changers, particularly for solid tumours, although some are certainly huge money spinners for the pharmaceutical companies, owing to eye-watering price-tags.
  • A recent and “ridiculous” example, says oncologist Ian Tannock, is FDA approval for using adjuvant sunitinib for renal cancer. “Of the two trials, a larger one of 2,000 or so patients was totally negative, and a smaller one of 600 was only positive for progression-free survival but not for overall survival, and it has substantial toxicity.”
  • “We do have some great new drugs,” says Tannock. “But I am concerned for patients who have little idea how to judge which ones are effective and end up selling everything to get them.” 
  • He argues that progression-free survival (PFS) findings from trials may be biased, citing the BOLERO-2 trial, which showed that adding everolimus … to exemestane doubled PFS in patients with advanced hormone-receptor-positive, HER2-negative breast cancer. “But toxicity was such that 25% of patients left the trial – and while the PFS was impressive, longer-term survival was negative.
  • “If you have an agent that improves PFS with minimal toxicity, such as aromatase inhibitors, that’s fine, but for those with high toxicity such as everolimus or sunitinib it is misguided to approve them.”



Starbucks must add cancer warning to coffee, says US court


Bad news, coffee drinkers: A California judge has ruled that coffee companies across the state will have to carry a cancer warning label because of a carcinogen that is present in the brewed beverage.

Starbucks Corp and other coffee sellers must put a cancer warning on coffee sold in California, a Los Angeles judge has ruled, possibly exposing the companies to millions of dollars in fines.

A little-known not-for-profit group sued some 90 coffee retailers, including Starbucks, on grounds they were violating a California law requiring companies to warn consumers of chemicals in their products that could cause cancer.

One of those chemicals is acrylamide, a byproduct of roasting coffee beans that is present in high levels in brewed coffee.


Prostate screening saves no lives and may do more harm than good

Screening for prostate cancer does not save lives, and may do more harm than good, a major study has concluded.

The largest ever trial of PSA (prostate specific antigen) tests – which all men over 50 can obtain on request from their GP – found that death rates were identical among men, whether or not they underwent screening.

Inviting symptomless men for the one-off blood test detects some tumours unlikely to be harmful – while still missing others that were fatal, researchers warned.

PSA tests do not save lives, but they do generate enormous revenues for cancer treatment clinics

The researchers studied more than 400,000 British men between the ages of 50 and 69 over a ten-year follow-up period. The control group of 219,439 men was not screened and had 7,853 cases of prostate cancer (3.6 percent). The 189,386 men who were invited for a PSA test were diagnosed more frequently (4.3 percent). Over the follow-up period, the same percentage in each group died from prostate cancer (0.29 percent), suggesting that PSA screening does not save lives and only leads to dangerous over-treatment.
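Working through the reported figures (a rough sketch; the numbers are the ones quoted above) makes the overdiagnosis gap concrete:

```python
# Rough arithmetic on the reported screening-trial figures.
control_n, control_cases = 219_439, 7_853
screened_n = 189_386

control_rate = control_cases / control_n   # ~3.6% diagnosed without screening
screened_rate = 0.043                      # 4.3% diagnosed in the invited group

# Extra diagnoses attributable to screening — with no difference in deaths:
extra_diagnoses = (screened_rate - control_rate) * screened_n

print(f"Control incidence: {control_rate:.1%}")
print(f"Extra diagnoses in the screened group: ~{extra_diagnoses:.0f}")
```

Roughly 1,400 more men in the screened group received a cancer diagnosis — with the biopsies and treatments that follow — without any reduction in prostate cancer deaths.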

Lead author Professor Richard Martin, a Cancer Research U.K. scientist at the University of Bristol, said: “We found offering a single PSA test to men with no symptoms of prostate cancer does not save lives after an average follow up of 10 years.”





A lot of what is published is incorrect … science has taken a turn towards darkness

Dr. Richard Horton, Editor-in-Chief of The Lancet, wrote in 2015:

“A lot of what is published is incorrect.” I’m not allowed to say who made this remark because we were asked to observe Chatham House rules.

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.

As one participant put it, “poor methods get results”.

The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data.

Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours.

Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations.

Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication.

National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.

Can bad scientific practices be fixed? Part of the problem is that no-one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative.

Would a Hippocratic Oath for science help? Certainly don’t add more layers of research red-tape.

But as to precisely what to do or how to do it, there were no firm answers. Those who have the power to act seem to think somebody else should act first. And every positive action (eg, funding well-powered replications) has a counterargument (science will become less creative). The good news is that science is beginning to take some of its worst failings very seriously. The bad news is that nobody is ready to take the first step to clean up the system.




Why Most Published Research Findings Are False

There is increasing concern that most current published research findings are false.

The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field.

In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.

Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.

Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment.

There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof.
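Ioannidis's argument rests on a simple positive-predictive-value calculation. A minimal sketch of the core idea (PPV of a "statistically significant" finding as a function of the pre-study odds R that a probed relationship is true, the study's power 1 − β, and the significance level α; the bias terms from the full paper are omitted here):

```python
def ppv(R, power, alpha):
    """Positive predictive value of a 'significant' finding:
    true positives / (true positives + false positives),
    for pre-study odds R, power, and significance level alpha."""
    true_pos = power * R      # truly real relationships that reach significance
    false_pos = alpha         # null relationships that reach significance anyway
    return true_pos / (true_pos + false_pos)

# A well-powered trial probing a plausible hypothesis (1:10 pre-study odds):
good = ppv(R=0.10, power=0.80, alpha=0.05)   # ~0.62

# A small, underpowered exploratory study probing long-shot hypotheses:
weak = ppv(R=0.01, power=0.20, alpha=0.05)   # ~0.04
```

Even before accounting for bias, a "significant" result from a small exploratory study chasing unlikely hypotheses is far more likely to be false than true — which is exactly the claim in the abstract above.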


By the same author:

Overall, not only are most research findings false, but, furthermore, most of the true findings are not useful.

Medical interventions should and can result in huge human benefit. It makes no sense to perform clinical research without ensuring clinical utility. Reform and improvement are overdue.


Why most clinical research is not useful

It makes no sense to perform clinical research that has no relevance to patient care, so why do we do it, and how can we stop? John Ioannidis ponders the problem and offers some suggestions.

Practicing doctors and other health care professionals will be familiar with how little of what they find in medical journals is useful.

The term ‘clinical research’ is meant to cover all types of investigation that address questions on the treatment, prevention, diagnosis/screening, or prognosis of disease or enhancement and maintenance of health.

Experimental intervention studies (clinical trials) are the major design intended to answer such questions, but observational studies may also offer relevant evidence.

‘Useful clinical research’ means that it can lead to a favorable change in decision making (when changes in benefits, harms, cost, and any other impact are considered) either by itself or when integrated with other studies and evidence in systematic reviews, meta-analyses, decision analyses, and guidelines.

There are many millions of papers of clinical research – approximately 1 million papers from clinical trials have been published to date, along with tens of thousands of systematic reviews – but most of them are not useful.

In order to be useful, clinical research should be true, but this is not sufficient.

Research inferences should be applicable to real-life circumstances. When the context of clinical research studies deviates from typical real-life circumstances, the question critical readers should ask is, to what extent do these differences invalidate the main conclusions of the study?

A common misconception is that a trial population should be fully representative of the general population of all patients (for treatment) or the entire community (for prevention) to be generalizable.

Randomized trials depend on consent; thus, no trial is a perfect random sample of the general population. However, treatment effects may be similar in nonparticipants, and capturing real-life circumstances is possible, regardless of the representativeness of the study sample, by utilizing pragmatic study designs.

Pragmatism has long been advocated in clinical research, but it is rare. Only nine industry-funded pragmatic comparative drug effectiveness trials were published between 1996 and 2010 according to a systematic review of the literature, while thousands of efficacy trials have been published that explore optimization of testing circumstances.

Studying treatment effects under idealized clinical trial conditions is attractive, but questions then remain over the generalizability of the findings to real-life circumstances.

Observational studies (performed in the thousands) are often precariously interpreted as able to answer questions about causal treatment effects. The use of routinely collected data is typically touted as being more representative of real life, but this is often not true. Most of the widely used observational studies deal with peculiar populations (e.g. nurses, physicians, or workers) and/or peculiar circumstances (e.g. patients managed in specialized health care systems or covered by specific insurance or fitting criteria for inclusion in a registry).

Ultimately, observational studies often substantially overestimate treatment effects.

Patient centeredness

Useful research is patient centered. It is done to benefit patients or to preserve health and enhance wellness, not for the needs of physicians, investigators, or sponsors. Useful clinical research should be aligned with patient priorities, the utilities patients assign to different problems and outcomes, and how acceptable they find interventions over the period for which they are indicated.

Value for money

Good value for money is an important consideration, especially in an era of limited resources.