Cancer is a popular topic for the media, as people care and worry about it in equal measure.
News reports help people find out what researchers are working on, and how charitable donations are being spent. They also help generate interest in the amazing science going on. But perhaps most of all, health stories and clinical trial results have a direct impact on people, raising interest in the latest discoveries further.
And when it comes to cancer, the emotion that’s tied to the subject means that scientific results must be discussed in a measured and accurate way. And most of the time that’s exactly what happens.
But occasionally, new results are wrongly headlined with phrases like ‘miracle cure’, ‘wonder drug’ or ‘breakthrough’.
So what should you look out for when reading news about cancer research?
Headlines are designed to catch attention and summarise a story. And with only a few words available, there’s always a chance they’ll be missing context or caveats, which can lead to exaggeration.
In some ways, it’s the least important part of the story. If a headline sounds too good – or bad – to be true, it usually is.
And framing new findings as ‘now scientists say’ can give the wrong impression of how research works.
Take cancer. There are over 200 different types. Each with slightly different causes and possible cures. So there are many labs across the world working on each one. And many different scientists in each lab looking at different aspects of the disease.
These scientists are building knowledge bit by bit, like adding pieces to a puzzle, getting a slightly clearer picture with each experiment. Very rarely does anything happen in seismic shifts. Disagreements will happen, but the body of evidence around a topic points to a consensus.
So, if a new study goes completely against this, it’s not automatically wrong, but it does raise the question: why didn’t previous studies reach the same conclusion?
And you might not have heard about those other studies. There’s a bias towards publishing ‘positive’ results that show an effect – e.g. cushions cause cancer – over ‘negative’ results that don’t show an effect – e.g. cushions don’t cause cancer. This bias exists in scientific journals as well as the media.
And it makes sense; positive results are more exciting or newsworthy. But negative results are just as important – if they aren’t given attention then scientists will waste time asking questions that have already been answered.
So, while the weight of evidence may be that cushions don’t cause cancer, if one poor quality study suggests the opposite then it could easily be the one you end up hearing about.
New results come out all the time that tip the balance towards an answer. How much they tip the balance depends on the quality of the work – nothing is perfect, and studies will rarely be the final, definitive say.
Here are 6 things to look out for to help you judge a study, and the media coverage it receives, for yourself.
1. Who?
Who carried out the research, and who funded them? Knowing who is commenting is also important. If the Worldwide Cushion Association is telling you about a new study they funded showing cushions are good for your health, it doesn’t mean that they’re wrong, but they have a vested interest.
Stories often include comments from experts who weren’t involved in the study – these opinions can help clarify how the results fit into the wider context, and whether they’re worth paying attention to. But still be aware of who’s commenting on the study.
2. What?
What did the researchers do? Was the study looking at cells in a dish, mice or patients?
These are all crucial stages of drug testing. New drugs can’t be tested in patients straight away – cell and animal studies are needed to check if experimental treatments are safe, effective and worth further investigation.
But how a drug works in cells and animals won’t be exactly the same as in a group of people.
Consider what else is at play. In an ideal experiment only one thing will be changed at a time, so that you know any differences are because of that one change.
But when looking at effects of real world factors such as alcohol, diet or exercise, it’s impossible to change just one thing. Lots of different changes happen each day across large groups of people, such as exposure to pollutants, viruses and medicines. These are just a few examples, and they’ll also vary between different people.
Information in these studies will often be self-reported, relying on people’s memories and honesty. How much did you have to drink last week? How much did you exercise last month? Even if you avoid the temptation to make yourself appear healthier, would what you reported be entirely accurate?
So, it’s tricky, but not impossible, to try to pin down how much different things affect cancer risk. Scientists can try to account for the things that cloud the picture, but there will always be gaps. And that’s why this type of research takes a long time, and needs to involve a lot of people.
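The confounding problem above can be made concrete with a small simulation. This is a purely hypothetical sketch: an invented hidden factor (here labelled “city living”) makes people both more likely to own cushions and more likely to be exposed to pollutants. Only the pollutants affect disease risk, yet cushions still end up looking harmful. All the numbers are made up for illustration.

```python
import random

random.seed(0)

# Hypothetical set-up: "city living" (a hidden confounder) raises both
# cushion ownership and pollutant exposure. Pollutants drive disease risk;
# cushions have no effect at all.
n = 100_000
cases_with, owners = 0, 0
cases_without, non_owners = 0, 0

for _ in range(n):
    city = random.random() < 0.5                        # hidden confounder
    has_cushion = random.random() < (0.8 if city else 0.2)
    polluted = random.random() < (0.7 if city else 0.1)
    disease = random.random() < (0.02 if polluted else 0.005)  # only pollution matters
    if has_cushion:
        owners += 1
        cases_with += disease
    else:
        non_owners += 1
        cases_without += disease

rate_with = cases_with / owners
rate_without = cases_without / non_owners
print(f"disease rate, cushion owners:     {rate_with:.4f}")
print(f"disease rate, non-owners:         {rate_without:.4f}")
```

Even though cushions do nothing in this toy model, owners show a noticeably higher disease rate, because cushion ownership is a marker for the real cause. Careful studies try to measure and adjust for factors like this, but they can only adjust for the ones they know about.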
3. Where?
Scientists aim to publish their results in journals, where their findings are scrutinised by other scientists – a kind of quality assurance called peer review. These publications should give enough information for another scientist to repeat the experiment. This verification gives an idea of whether the original results are robust.
But scientists share their work in other ways, particularly at conferences. Often these results are preliminary or don’t have as much information as would be in a published paper, and aren’t yet peer reviewed.
But these can be reported in the news with the same prominence, even though the results might be a long way from the finished article.
4. When?
When was the study carried out, and how long did it run for? A week? A month? A year? Generally, the longer the better – long-term effects might be missed if you only look at a short timeframe.
And if the study looks at people, are the data from the last few years, or from decades ago? An older study might have had more time to look at long-term consequences, but a more recent study could mean that the findings are more relevant to how people live now.
5. How many?
How many subjects took part in the research? And who, or what, were those participants? Was it 10 mice? 100 patients? 1,000? Generally, the more subjects a study includes, the more reliable its results.
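The reason bigger studies are more reliable can be sketched with a quick calculation: the uncertainty around an observed rate shrinks roughly with the square root of the number of subjects. The 30% response rate below is an invented figure, and the formula is the standard normal approximation for a proportion, used here only as an illustration.

```python
import math

observed_rate = 0.30  # illustrative only: 30% of subjects responded

for n in (10, 100, 1000, 10000):
    # Approximate 95% margin of error for a proportion (normal approximation)
    margin = 1.96 * math.sqrt(observed_rate * (1 - observed_rate) / n)
    print(f"n = {n:>5}: 30% plus or minus {margin * 100:.1f} percentage points")
```

With 10 subjects, a “30% response rate” could plausibly be anywhere from near zero to almost 60%; with 10,000 subjects, the same headline figure is pinned down to within about one percentage point.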
6. How much?
Studies looking at how lifestyle factors such as smoking, obesity, alcohol or certain foods affect our risk of developing a disease are often a front-page favourite.
As well as checking the number of people in these studies and the time period they cover, pay attention to the size of any risk being reported.
- The imaginary headline ‘Cushions double the risk of cancer’ sounds scary. This is what’s called a relative risk.
- If you were told cushions double the chance of developing cancer from 1 in a million to 2 in a million, that’s still a very small absolute risk. And for many people, living without cushions wouldn’t be worth this tiny increase.
When reading another fictional headline ‘Bagel smoothies halve cancer risk’, ask yourself: what’s the absolute risk? How many bagel smoothies do you need to drink to reduce your risk? Is that worth the extra effort, price and weird taste?
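The cushion headline above can be turned into two lines of arithmetic, using the same fictional numbers: a doubling of risk (the relative figure) that amounts to one extra case per million people (the absolute figure).

```python
# Fictional numbers from the imaginary headline: a scary relative risk
# can correspond to a tiny absolute change.
baseline_risk = 1 / 1_000_000   # 1 in a million
cushion_risk = 2 / 1_000_000    # "doubled" risk

relative_risk = cushion_risk / baseline_risk
absolute_increase = cushion_risk - baseline_risk

print(f"relative risk:     {relative_risk:.0f}x  ('cushions double the risk!')")
print(f"absolute increase: {absolute_increase:.7f}  (one extra case per million people)")
```

Both numbers are true at the same time; which one the headline leads with makes all the difference to how alarming it sounds.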
Make up your own mind
None of this is trying to talk down research or the news stories it generates. Science should be as accessible as possible, and we’re lucky that cancer news attracts as much interest as it does. But this leaves room for distortion.
Most reporting will be accurate and responsible, and even when it isn’t it doesn’t mean that there’s malice behind any mistakes. Communicating science is complicated, and we’re not saying we get it right every time.
It’s important to think about the wider context of any new findings, as well as a study’s positives and flaws. This blog post is just a start – there are plenty more things to think about and links below can also help you evaluate research.
But hopefully this helps you decide if something really is a breakthrough, or if more research is needed.