It seems to me that the problems with science as it is done today are open secrets, and can essentially be summarized as:
1: Publication pressure. There is too much emphasis on creating publications. The end result is that people rush out unfinished work, or end up diluting their findings by unnecessarily spreading them across several papers. People are also discouraged from undertaking projects that may take a long time to "pay off" with a publication.
2: Scientific networking. In scientific fields, people get to know each other, and those relationships still heavily influence the process. Fame and reputation matter when getting published, not to mention friendships with publishers or reviewers.
3: Bias towards excitement/positive results. This, I believe, is not entirely the fault of the scientific community. If left to their own devices, I think there would be plenty of scientists willing to double check old findings, or publish negative results. However, it's hard to convince people to pay you or employ you if your work is perceived as "boring," "negative," or as some sort of failure.
There are a handful of smaller issues (e.g. excessive emphasis on "impact factor") but I suspect that they are mostly symptoms of the above issues.
I've always believed that the whole "publication model of science" is due for review. It seems to be a product of the 18th century rather than something we'd decide was a good idea today. If we do it right, I feel like the positive results bias would immediately go away.
I don't think we can "fix" issue #2 in any really meaningful way. Even if you were to take people's names off of publications, they would still chat with each other at meetings or through collaborations. I think the best thing to do is not try to prevent people from influencing each other, but simply to make the whole process more transparent.
The problem is not the publication model; it is the incentive structure, which encourages scientists to rush out publications without checking their data in enough detail, or puts them under so much pressure to generate “exciting” results that they falsify them.
I will share an anecdote about a paper I was involved in. The work was done in collaboration with another lab within my school. One of the results suggested that one of the instruments was out of alignment and possibly giving false readings. I raised this with the head of the other lab, and after speaking with his postdoc he said that everything was fine. I asked to see the calibration data (there was none), but I was assured that everything was fine and they were going to submit it as it was. At that point I went thermonuclear and said that if the paper was submitted without the dubious values being checked, I would write to the journal asking for it to be withdrawn. This caused a huge ruckus within the school, including visits from the head of school and the dean telling me I was being “unreasonable”. I stood my ground, and the postdoc eventually ran the calibration experiments, which showed the instrument was out of alignment and the results were wrong.
Of course, this burned a lot of bridges for me; career-wise, it would have been better to just shut up and let the paper go out wrong.
I'm glad you stood your ground. But what I don't get is the mindset of everyone who wanted to publish the potentially-faulty data. The moment anyone tries to build on the result, its faultiness will be known. Is that not more embarrassing?
The reason is pressure to get the papers out as fast as possible or else you won’t get any more grants. It should be embarrassing to publish garbage, but for lots of people who have succeeded in the current system it appears not to be.
Everyone involved was rather sheepish afterwards except the postdoc. He had felt much the same as me, but didn't feel he could raise the issue with his boss. Privately he was very grateful that I had stood up to his boss, since he was under enormous pressure to pump out the data. All round, not good.
Sometimes I wonder if I'm in a bubble. My PI thanked me when I told him I had discovered that our data was systematically biased and it would take weeks to correct the issue.
Yes, most PIs are happy to avoid the embarrassment and only want to publish good data.
The problem is not that most scientists aren't trying to do the right thing; it's that they are under enormous pressure to pump out results. It only takes a few succumbing to that pressure to destroy the public's faith in science, and the consequences of that are catastrophic.
We must solve this problem or we will not have science.
I know very little about academia. But in the context of today's world of information and collaboration, it seems to me there has never been a better time to (A) come up with something better than the publication and (B) put a lot more effort into reproducing results independently. I mean, what is the whole publication, review, and reproduction system if not an early form of crowdsourcing?
To take a simplistic example: a publication norm which includes instructions for reproduction, ideally requiring as few resources as possible.
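For instance (a sketch only; the field names below are hypothetical, not any journal's actual schema), such a norm could be a small machine-readable manifest bundled with each paper:

    import json

    # Hypothetical reproduction manifest a journal could require with a paper.
    # All field names are illustrative, not any real journal's schema.
    manifest = {
        "environment": {
            "instruments": ["spectrometer, model X"],  # required equipment
            "software": ["python>=3.10", "numpy"],     # pinned dependencies
        },
        "steps": [
            "calibrate the instrument against a reference sample",
            "collect 30 replicates per condition",
            "run analysis.py to regenerate Figure 2",
        ],
        "expected_outputs": ["figure2.csv"],
        "estimated_cost_usd": 500,  # the norm could push to minimize this
    }

    print(json.dumps(manifest, indent=2))

A reviewer, or even a script, could then check such a manifest for completeness before the paper is accepted.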
Instructions for reproduction are mandatory already. That said, you probably can't pick up any paper and reproduce the experiments exactly without asking the authors a question or two, at least in biology.
Systems are getting increasingly complex as well. In my lab, it literally takes several weeks for people to learn how to do our assay, and that's with constant feedback from an experienced user. Every step is published but that still doesn't help when things don't go according to plan. Also, not every lab has the same equipment. Even if we provided free training, a lab would still need to drop $300k to get all the machines required, which are not common.
Not every result is worth reproducing either. If someone publishes a paper that shows several lines of evidence for the same thing and they've done all reasonable controls, and it doesn't disagree with any existing models, why would you reproduce that? Outside of deliberate fraud, it's a pretty solid bet that it's true, and you can save enormous amounts of time and money by proceeding to build off of it instead of reproducing it first. And that's really what it comes down to: no one's paying you to do this kind of work. Governments would have to fund it, because you're fundamentally asking for twice as much work to be done, which will cost twice as much money (and time).
But take heart: when someone DOES make an outrageous claim, it's very common for labs to try to reproduce it (or really, disprove it). See [1] and [2] for recent examples.
If I were a fraud, this is exactly what I would do. Take an existing paper and just make up results that confirm the work, with a slight twist to make it publishable. Rinse and repeat, and nobody will ever catch you.
It's rare for bad results to get called out. The usual fate of bad research is to fade into obscurity.
Publishing questionable results quickly has many benefits (if you happen to be right) and few consequences. Just don't draw too much attention to yourself by over-hyping.
Yep, if I had not put my foot down, it probably would have slipped by and never been noticed until some poor unfortunate PhD student tried to build their research off it.
Another anecdote: a certain famous professor (since deceased) at my alma mater managed to destroy at least three PhD students' careers via fraud. He had faked the initial data and then put them to work on projects based on it. The whole thing only came to light (internally only, as it was all hushed up) when he died and his students moved to a new supervisor.
Yes and no. What I envision is a more collaborative process where the faulty data (and your calibration concerns) could be available for scrutiny before your work was officially finished.
I feel like if we did it right, it would be possible to define "units of scientific work" to be something other than "finished publications." An incentive structure designed to maximize the new units would place more value on verification, collaboration, and negative results.
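As a sketch of what that might look like (the unit types and credit weights here are invented for illustration, not a worked-out proposal):

    from dataclasses import dataclass

    # Hypothetical "units of scientific work" beyond the finished paper.
    # Unit types and weights are invented for illustration only.
    CREDIT_WEIGHTS = {
        "novel_result": 1.0,
        "negative_result": 0.8,  # deliberately close to a positive finding
        "replication": 0.7,      # verification earns real credit
        "dataset_release": 0.5,
        "peer_review": 0.3,
    }

    @dataclass
    class WorkUnit:
        kind: str
        description: str

    def total_credit(units: list[WorkUnit]) -> float:
        """Sum the credit earned across a researcher's contributions."""
        return sum(CREDIT_WEIGHTS[u.kind] for u in units)

    record = [
        WorkUnit("replication", "re-ran a published alignment experiment"),
        WorkUnit("negative_result", "no effect observed under condition B"),
    ]
    print(total_credit(record))  # 1.5, comparable to one novel result

The point of the weights is that an incentive structure maximizing this total would no longer treat verification and negative results as worthless.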
A system that enables (good) risk and tolerates failure, while still clearly recognising failure, is relatively rare and powerful. The first two are relatively well understood in business, but I don't think the last one is.
In most emergent human systems there are mechanisms that discourage risk and failure, and that obscure failure: face saving, diffusion of responsibility, and the like. These exist in commercial, public, and political organisations, and are often baked into their core.
Allowing failure and burying it are very different in terms of their dynamic effects. Ultimately I think success requires reward, even if it is intrinsic, which means burying failure doesn't help. There's a reality that science is risky; not everyone's talent, luck, or instinct is equal.
4: Grant bureaucracy and politics. I have heard of professors and PhD students who spend 10% to 20% of their time writing grant applications and fulfilling related bureaucratic requirements like progress reports. Furthermore, they tend to focus on topics that increase the likelihood of getting the grant, instead of pursuing what they honestly believe would be best. In the US, this leads many to claim their research is security-relevant, whereas in the EU, researchers try to find a sustainability angle in their research in order to get their grants.
4. The publishing model is optimized for human peer review, not machine peer review. Publishing machine-friendly experimental evidence is intrinsically the same thing as improving reproducibility.
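A minimal sketch of what "machine peer review" could mean (the claimed values and measurements below are made up for illustration):

    import statistics

    # If raw data ships with the paper in structured form, a machine can
    # re-derive the claimed summary statistic instead of trusting the PDF.
    # All numbers below are invented for illustration.
    published_claim = {"mean": 4.2, "n": 5}
    raw_measurements = [3.9, 4.1, 4.4, 4.3, 4.3]  # hypothetical deposited data

    assert len(raw_measurements) == published_claim["n"]
    recomputed = statistics.mean(raw_measurements)
    assert abs(recomputed - published_claim["mean"]) < 0.05, (
        f"claimed {published_claim['mean']} vs recomputed {recomputed:.2f}")
    print(f"claim checks out: mean = {recomputed:.2f}")

Every such check a machine can run is one less thing a human reviewer has to take on faith.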
There is also a political-correctness bias: some research cannot be easily questioned, because the questioner's intent can be misrepresented by those who want the research to stand or who are too sensitive about the subject. The example given in the article was a perfect demonstration. Things that tend to reinforce what we believe, or what we want others to believe we believe, are easy to support.
Still, as for this door-to-door study, the Wilder effect is a good example of the bias you get when asking people sensitive questions.
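A toy simulation of that kind of bias (the 40% true support and 15% misreporting rate are arbitrary assumptions, not measured values):

    import random

    # Toy model of social-desirability bias in a door-to-door survey:
    # some true supporters of the "sensitive" option hide their preference.
    random.seed(0)
    TRUE_SUPPORT = 0.40    # assumed true rate in the population
    MISREPORT_RATE = 0.15  # assumed share of supporters who misreport
    N = 100_000

    reported = 0
    for _ in range(N):
        supports = random.random() < TRUE_SUPPORT
        if supports and random.random() < MISREPORT_RATE:
            supports = False  # hides the true preference from the interviewer
        reported += supports

    print(f"true support: {TRUE_SUPPORT:.0%}, surveyed: {reported / N:.1%}")
    # surveyed support comes out around 34%, systematically below the true 40%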