Do studies with negative results have value or not? We hope that the majority of our readers would agree that high-quality results originating from properly designed and conducted studies do have value, irrespective of whether the outcome is positive or negative (null).
A recent post by John LaMattina, a former head of R&D at Pfizer, argues that high-quality negative results are a major contributor to scientific progress, even if it takes years of work and billions of dollars (literally) to achieve such results. Such negative data sets are certainly highly visible and have a major impact on their respective fields. But what if there is no convincingly strong evidence (positive or negative)? We have indeed seen the impact of decisions made on the basis of limited or missing evidence in drug development.
Furthermore, in an area quite distant from biomedical research, we are now regrettably observing decisions that are not based on sufficient evidence, along with their consequences:
Stuttgart, Hamburg, Berlin, Frankfurt, Bonn, Cologne … the list of German cities that ban cars with [not so old] diesel-powered engines keeps getting longer. In Cologne, the NO2 concentration in the air reached up to 62 micrograms per cubic meter in 2017 (the EU has set a threshold of 40 micrograms; note that, in cigarette smoke, NO2 levels reach 300,000 micrograms per cubic meter). In Bonn, the value at some point reached 47 micrograms, a level of NO2 declared to be harmful to our health.
It looks like the evidence behind the NO2 threshold regulations may not be as strong as we hoped (link, in German). According to this commentary, the existing evidence is based solely on epidemiological studies in which regions with cleaner and dirtier air are compared in terms of population morbidity and mortality. There are, of course, all the usual adjustments in place to account for various confounding factors, BUT correlation studies remain correlation studies, and they cannot establish causal relationships. Further, and perhaps even more remarkably, the results of these epidemiological studies are also being challenged regarding the reliability of minor relative risk increases (e.g. 1.5%) in the presence of major confounders.
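To illustrate why a relative risk increase of that size is so hard to interpret, here is a minimal back-of-the-envelope sketch. All numbers in it are purely hypothetical and chosen for illustration only; they are not taken from the studies discussed above. The point is simply that a modest, unmeasured difference in a single confounder between regions can by itself produce an apparent relative risk increase of roughly this magnitude, even if NO2 has no effect at all.

```python
# Purely hypothetical sketch: an unmeasured confounder (here, smoking prevalence)
# that differs slightly between a "clean-air" and a "dirty-air" region can create
# a small apparent relative risk even if NO2 itself has zero effect on mortality.

BASE_RISK = 0.010   # assumed baseline annual mortality risk for non-smokers
SMOKER_RR = 1.5     # assumed relative risk of mortality for smokers

def expected_risk(smoking_prevalence: float) -> float:
    """Population mortality risk as a mix of smoker and non-smoker strata."""
    return BASE_RISK * ((1 - smoking_prevalence) + smoking_prevalence * SMOKER_RR)

clean = expected_risk(0.20)   # hypothetical smoking prevalence, clean-air region
dirty = expected_risk(0.23)   # hypothetical smoking prevalence, dirty-air region

print(f"Apparent relative risk with no true NO2 effect: {dirty / clean:.3f}")
# -> about 1.014, i.e. a ~1.4% "increase" driven entirely by the confounder
```

If residual confounding of this magnitude cannot be ruled out, an observed relative risk increase of around 1.5% is very difficult to attribute to NO2 with any confidence.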
Lack of evidence is of course not evidence of absence. But should a decision really be made based on “scientific evidence” that is not there yet?
Along the same lines, the Environmental Protection Agency (EPA) proposed a rule to only use scientific studies with “publicly available” data when it develops regulations. This has sparked a huge debate on whether the proposal would prevent the EPA from considering studies that analyse private health information, including those that underpin air pollution standards.
In this commentary, the Michael J. Fox Foundation (MJFF) explained why the proposed rule would most likely lead to the exclusion of important studies from consideration and therefore adversely impact the decision-making process.
It goes without saying that environmental policies should be based on solid, robust data sets, but there are many studies for which exposing the underlying data is infeasible, counterproductive or even dangerous, and the EPA may have taken the wrong path.