GUIDE: How to craft accurate science journalism out of press releases

Some research press releases may be more about institutional reputation than science – and in depleted newsrooms, this inaccurate PR could become news. Here’s how to get the science straight.

There is a lot of “churnalism” in science journalism. Also known as “copy-paste” journalism, this is when media outlets publish some or all of press releases word for word, without acknowledging the source or investigating any of the claims made.

A recent study out of South Africa’s Stellenbosch University compared science articles with the press releases they were based on. It found that in more than half the articles, over 50% of the text was copied from the relevant press release.

Media houses can improve their reporting on scientific topics by hiring freelance science reporters, who bring with them years of experience and a thick black book of contacts. But how can general reporters ensure that their reporting is accurate? 

Here are some pointers on how to write accurate science journalism based on a press release. 

1. Find the original research paper

First, track down the original research paper. The press release should reference it, with a link. Otherwise, contact the university press officer: their details should be in the release. 

You can also contact the paper’s authors, named in the press release. If you have questions about their findings, don’t hesitate to ask. 

The journal in which a paper is published tells you a lot about the calibre of the research. Not all journals are created equal. Read our guide to identifying a predatory journal, one that publishes poor research without peer review.

2. Read the paper

It seems obvious, but read the paper. Many important questions can only be answered by the paper itself. Check if the study has been retracted or if there are corrections. This information is usually listed prominently at the top of the paper.

Veteran science journalist Alexandra Witze explains how to read a scientific paper on the website of The Open Notebook, a non-profit organisation that helps science journalists hone their craft, whatever their experience. Here are some things to look out for.

Has the paper been peer-reviewed?

Traditional academic publishing can take months. This is partly because of peer review, where independent experts scrutinise the findings and methodology of a study. (Note: Predatory journals often skip peer review so they can fast-track publication and turn a profit.) 

During the Covid-19 pandemic, many researchers have sped up the release of their findings by posting papers on preprint servers. These websites allow academics to publish results as soon as they are ready – useful in a pandemic, when speedy access to new information could save lives. 

But some journalists have written news articles on these preprints without disclosing that the findings were not vetted by other researchers and that the papers’ methodology could have problems.  

Science journalist Wudan Yan writes in the New York Times: “[There] is a growing audience for these papers that are not yet fully baked, and those readers may not understand the studies’ limitations.”

What is the sample size?

The sample size of a study can indicate how meaningful its results are. It refers to the number of units – anything from people to journal articles or even chameleons – that researchers studied to come up with their results. 

The larger the sample size, the more reliable the findings tend to be. A trend detected in a group of 5,000 people, for example, carries far more weight than one observed in just five people.
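The point can be made numerically. The sketch below (a simplified illustration, using the standard normal-approximation formula for a proportion; the function name and figures are ours, not from any study) shows how much less certain the same observed trend is in a sample of five than in a sample of 5,000:

```python
import math

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95% margin of error for a proportion p
    observed in a sample of size n (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# The same 60% trend, measured in two very different samples:
small = margin_of_error(0.6, 5)      # n = 5
large = margin_of_error(0.6, 5000)   # n = 5000

print(f"n=5:    60% plus or minus {small:.0%}")   # roughly 43 points either way
print(f"n=5000: 60% plus or minus {large:.0%}")   # roughly 1 point either way
```

With only five people, the uncertainty is so large that the "trend" could easily be noise; with 5,000 it is pinned down to about a percentage point.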

Search the paper for the term “in mice”

Does the research say “in mice”? Mice are widely used in biomedical research. Journalists and press officers often report findings without mentioning that the studies were done on mice – not people.

A frustrated scientist even set up the Twitter account @justsaysinmice to call out press releases and articles that failed to mention that results of studies were found only in mice.  

“There are people with breast cancer reading stories about breast cancer,” @justsaysinmice founder James Heathers told Stat, a health and science website affiliated with the US newspaper the Boston Globe.

“There are people who are pregnant reading stories about exercising while pregnant. And these people get a great deal of advice. And if your advice to them consists entirely of, ‘This happened, and it happened in a particular strain of mice, and in a particular set of models’ and then you just call the mice ‘patients’, there’s a point at which that graduates from bad reporting into some kind of misrepresentation.”

If there is a change in risk, is it absolute or relative?

“Relative risks are often reported in newspaper headlines, but without the context of absolute (or baseline) risk, this information is meaningless,” says the European Food Information Council, which provides science-based information about the food industry.

Health News Review, a media watchdog that works to improve critical reading of health news, gives the following example.

A headline says, “New wonder drug reduces heart attack risk 50%.” But that is relative risk. If the headline accurately reported the absolute risk, it would be: “New wonder drug reduced heart attacks from two out of every 100 people to one out of every 100 people.” In other words, the drug reduced the incidence of heart attack in a sample from 2% to 1%. 
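The arithmetic behind that example is simple enough to spell out. This minimal sketch (the function name is ours, chosen for illustration) computes both figures from the same two numbers:

```python
def risk_reductions(control_rate: float, treated_rate: float):
    """Return (absolute, relative) risk reduction.

    absolute: how many fewer events per person overall
    relative: the drop as a share of the original risk
    """
    absolute = control_rate - treated_rate
    relative = absolute / control_rate
    return absolute, relative

# Headline example: heart attacks fall from 2 in 100 to 1 in 100.
absolute, relative = risk_reductions(2 / 100, 1 / 100)
print(f"Absolute risk reduction: {absolute:.0%}")  # 1%
print(f"Relative risk reduction: {relative:.0%}")  # 50%
```

The same drug, the same trial: "50% lower risk" and "one percentage point lower risk" are both true, but only the second tells readers how much difference the drug makes to them.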

Uncertainty

Some science journalism or communication may exaggerate scientists’ findings in a way that ignores their uncertainties. 

In the traditional scientific method, scientists form a hypothesis to explain things they observe in the natural world and then test it with observations and experiments. If the hypothesis is repeatedly supported by evidence, it may become a theory – the theory of gravity, for example, or the theory of evolution.

But the natural world is complex. There are still many things scientists don’t know – and good scientists acknowledge this. That’s why, at the end of a paper, responsible researchers will explain the study’s limitations and the factors that may limit its findings.

When you contact the paper’s authors for comment (see step 1), ask them if there are any limitations. And when you contact other experts, ask if they see any limits in the research.

Causation versus correlation

Scientists are careful about making direct links – saying that event A causes event B, a causal relationship. Establishing cause and effect requires a lot of evidence.

Instead, they are more likely to say there was a correlation between event A and event B – there is a relationship between the two. 

Identifying correlation as causation is another feature of hyped (and incorrect) science communication.

In 2018, Time magazine tweeted that “drinking coffee—even decaf—may help you live longer”. But the study Time quoted “only shows a correlation between drinking coffee and a lower risk of early death”, a pharmacologist explained in an article on the Conversation. “It doesn’t show coffee was the cause of the lower risk.”

Who funded the research?

At the end of the paper, the authors usually declare who funded their research and whether they have a conflict of interest. This is where you will find out whether a piece of research saying that eating lots of chocolate helps people lose weight was in fact funded by a chocolate company.

3. Speak to the paper’s lead author

It can be difficult to tell who the lead author of a scientific paper is. Authorship conventions differ between fields, but the lead author is generally the person who contributed most to the work. Often the best way to identify them is simply to ask – the press officer, or the university’s communications team.

The corresponding author, named on the paper, may not have done most of the work. But this is the person chosen as the contact for any questions about the paper.

Press releases by individual universities are written to show the university in a positive light. If one university’s researcher worked on a project in collaboration with academics from other institutions, the university may exaggerate the contribution of its researcher.

Check this with the corresponding author, the lead author or another senior person on the research team.

4. If you do nothing else, get outside comment

Getting comment from someone not involved in the study or the press release is one of the most important steps in reporting on new research. It will help you to find out if the research is actually up to scratch and worth reporting. 

There are a number of ways to find the right person to speak to. Here are a few suggestions:

  • Look at the citations in the paper for someone else who has worked in this field, or whose work the researchers reference.
  • Don’t ask the authors of the study to recommend someone to speak with. They are likely to recommend a colleague or someone who will not want to offend them by being critical. It is better to speak to someone outside the university where the research was conducted.
  • Do an online search of the topic to find out who the prominent researchers are. You can search for non-local academics by narrowing the search to an international web address: “site:.ac.ug” for Uganda, “site:.ac.uk” for the United Kingdom, or “site:.edu” for the United States.
  • Research shows that journalists disproportionately use men as sources. There is a strong push within the journalism community to include more diversity in reporting. Quote This Woman+ is a South African not-for-profit that curates an online database of female sources.

© Copyright Africa Check 2020. Read our republishing guidelines. You may reproduce this piece or content from it for the purpose of reporting and/or discussing news and current events. This is subject to: Crediting Africa Check in the byline, keeping all hyperlinks to the sources used and adding this sentence at the end of your publication: “This report was written by Africa Check, a non-partisan fact-checking organisation. View the original piece on their website”, with a link back to this page.