To an alarming degree, science is not self-correcting (economist.com)
257 points by martincmartin on Oct 17, 2013 | 138 comments



This is an excellent overview article on an important topic. It mentions many of the most influential authors on the topic of accuracy of scientific publications.

Hacker News readers may enjoy "Warning Signs in Experimental Design and Interpretation"[1] by Peter Norvig, a LISP hacker who is now director of research at Google, on how to interpret scientific research. Norvig's essay is my all-time favorite link to share in a Hacker News comment, because every day we see submissions here of preliminary studies that can be analyzed with Norvig's checklist of issues to look for when reading about a scientific finding.

I discuss psychology research weekly with a group of psychologists who study human behavior genetics in a "journal club" (graduate seminar course). Those researchers have told me about other researchers who are trying to clean up the published literature in psychology, for example Jelte Wicherts, whose article "Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science"[2] in an open-access journal suggests general procedures to improve scientific publishing, for example by changing the incentive structure around reviewing papers submitted for publication. Another helpful researcher on statistical tests to verify results is Uri Simonsohn. The papers he and his colleagues produce[3] are thought-provoking, pointed, and sometimes laugh-out-loud funny.

[1] http://norvig.com/experiment-design.html

[2] Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker and Denny Borsboom. Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci., 03 April 2012 doi: 10.3389/fncom.2012.00020

http://www.frontiersin.org/Computational_Neuroscience/10.338...

[3] http://opim.wharton.upenn.edu/~uws/


This beautiful (parent) comment, in contrast to the main article, actually contains links and references to the items it refers to. Is there some compelling reason I don't understand why _so_ many magazines / websites, even high profile reputable ones, cannot manage to link or cite?


Science Exchange: https://www.scienceexchange.com/reproducibility

Ioannidis' article on false pos/neg and stat.power: http://www.plosmedicine.org/article/info:doi/10.1371/journal...

Bohannon's "Who's afraid of a little peer review": http://www.sciencemag.org/content/342/6154/60.full (fake submissions)

Also related I think is this: http://neurotheory.columbia.edu/~ken/cargo_cult.html ;-)


I suspect the reason is that outbound links drive traffic away from ads and internal links. The more visitors you have, the more profit you lose by adding outbound links... It's sad, and I wish there were some way to fix the incentives.


That's exactly what the people in charge fear in my industry (I work in online publishing). They fear that people will find the linked resources better, so that they may never even return.

Kind of like the fear of becoming obsolete. But instead of testing and maybe even becoming better, they just forbid us from linking out.


I find this particularly galling since many of these publications have fact checkers who have gone to the effort of identifying claims in the original article, tracking down the evidence to prove/disprove them, and checking that the article doesn't materially misrepresent the evidence.

This is the internet; it shouldn't cost any more to let me see the fact checkers' report, and it would save me a massive amount of time reproducing their work. Expandable margin notes on a per-paragraph basis would be perfect.


> Warning Sign I5: Taking p too Seriously

What does it say about X when you have 10 studies saying X is true with p < .05, and another 10 studies saying that X is false with p < .05? It seems that the more seriously you take p values at face value when results conflict, the more rapidly the set of things that can be considered 'scientifically proven' approaches zero.
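
Here is a quick, purely illustrative sketch of my own (in Python; the effect size, sample sizes, and study count are made-up assumptions) of why a pile of conflicting and inconclusive p < .05 results is exactly what underpowered studies of a small real effect tend to produce:

    # Simulate many underpowered studies of the same small true effect and
    # count how each one comes out at the p < .05 threshold.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect, n_per_group, n_studies = 0.2, 25, 1000  # assumed values

    sig_pos = sig_neg = inconclusive = 0
    for _ in range(n_studies):
        treatment = rng.normal(true_effect, 1.0, n_per_group)
        control = rng.normal(0.0, 1.0, n_per_group)
        t, p = stats.ttest_ind(treatment, control)
        if p < 0.05:
            sig_pos += 1 if t > 0 else 0
            sig_neg += 1 if t < 0 else 0
        else:
            inconclusive += 1

    # Typically only about 1 study in 10 is "significant", the occasional one
    # points the wrong way, and the rest are inconclusive.
    print(sig_pos, sig_neg, inconclusive)

Taking each study's p value at face value would make this literature look hopelessly contradictory, even though every simulated study is measuring the same real (but small) effect.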

Also, while I do like Norvig's essay, all the errors he goes through are fairly pedestrian. If you read the modern scientific literature on research, the rabbit hole goes vastly deeper. (Assuming researchers researching researchers are more trustworthy than researchers.)


> If you read the modern scientific literature on research, the rabbit hole goes vastly deeper.

Oh, indeed. Heisenberg used to say, "data does not imply theory."[1] Once you deeply internalize that, you realize you can't really be that, ahem, certain about anything in scientific thought outside of the limited domain of how scientific truth is defined (namely that the model has predictive value, which is entirely different from whether it has ontological value).

[1] http://archive.org/details/PhysicsPhilosophy


Try telling that to economists who claim to discover macroeconomic theories through econometric models and call it "empirical"


Nothing is ever "proven". We merely make statements that become increasingly hard to show to be false, based on human judgement of experimental methods.

A p value is the probability of observing data at least as extreme as what was actually seen, assuming the null hypothesis (chance alone) is true; it is not the probability that the observation came about by chance rather than due to a hypothetical model. P values have many problems and it is best to think about them with a solid understanding of probability.
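
A minimal sketch of that definition (my addition, not the parent commenter's; the group sizes and iteration count are arbitrary): when the null hypothesis is true, p values are uniformly distributed, so about 5% of null studies clear p < .05 purely by chance.

    # Run many "studies" where both groups come from the same distribution
    # (the null is true) and see how often p < .05 anyway.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    p_values = np.array([
        stats.ttest_ind(rng.normal(0.0, 1.0, 30),
                        rng.normal(0.0, 1.0, 30)).pvalue
        for _ in range(10_000)
    ])

    print((p_values < 0.05).mean())  # roughly 0.05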

If there were a case where you have as many conflicting results as you say, it would be indicative of a very active scientific community and problem. People would be working hard to resolve it!


Norvig missed an excellent chance to title this paper "The structure and interpretation of science experiments"


Norvig's essay is an absolute must-read. You may learn more counter-intuitive facts in the next hour than in the last week. For convenience to mobile users, here's the link again: http://norvig.com/experiment-design.html


For people interested in reproducibility in computational linguistics and NLP, I recommend http://lingpipe-blog.com/2013/09/25/fokkens-et-al-on-reprodu... and the article linked within. One of the authors of "Replication Failure" has another, slightly older but really awesome paper: http://www.d.umn.edu/~tpederse/Pubs/pedersen-last-word-2008....


In my experience at the university performing graduate research -

* Newness is prized

* Replicating old science is not prized

* Funding and papers happen for breakthroughs

In software engineering -

* many bugs in code creep through exacting peer reviews.

Recapping the short list: new things make you money, reviewing things is error prone, replicating old things doesn't make you money until someone wants to rely on them. Hmmmm. The disincentives to replicate unapplied research speak for themselves.

What then is to be done beyond hand-wringing and moaning?

Several things have to happen: First, grants need to be given for replication of research - replication needs to be a thing that is frequently done. Second, papers (dis)proving prior results (or disproving, period) need to be a recognized category in journals. I do not mean disproving someone else's pet theory in favor of yours; I mean disproving the result, period, regardless of whether it helps advance your particular line of work.

Speaking as someone who's currently in industry (and is looking towards going back for the PhD), I encourage academics to open up and/or push the area of "negative results" as a recognized category of paper. If you have graduate students who can't reproduce prior work - please have them publish that!

There have been occasional comments about Journals of Negative Results; maybe those could come to fruition sometime. :-)


As an experimental physicist, I want to point out that negative results do get published -- for example, much of the experimental work on gravity amounts to the showing of no measurable difference from theoretical expectation, to a high degree of accuracy [1] -- but writing the papers for them is much harder than writing up positive results. The reason is simple: you have to convincingly show that your failure to show a result is not just due to your mistakes.

Here is an analogy to a trivial situation. Which is more convincing?

Positive result: By following the steps in the documentation, I installed MS Word on my computer. Therefore, MS Word can be installed on my computer.

Negative result: I followed the steps in the documentation, but MS Word still doesn't work on my computer. Therefore, MS Word cannot be installed on my computer.

If you're like me, you barely pause after reading the positive result, but the negative one brings to mind piles of questions: did you have the right version for your OS? Do you have enough disk space? Does your computer work properly in other respects? A really tricky problem could take ages to figure out. At some point it's probably going to seem wiser just to abandon the problem, and get a different computer or a different program.

So, as a scientist, when you're faced with a negative result, you know you have a battle ahead of you. If you don't think anyone will care very much about your result, moving on to the next project may seem to be the right choice. It's a tough situation, and I sympathize with anyone facing it. And is it the right choice for science as a whole? Sometimes it isn't, but sometimes it is.

[1] For one such example, check out the Eotvos experiment and its many descendants. http://en.wikipedia.org/wiki/Eötvös_experiment


Nobody has yet mentioned a very well known negative result... the Michelson-Morley experiment, a complicated and time consuming endeavour.


Even positive results aren't always convincing when they go against the scientific mainstream. For example, consider the Avery/Macleod experiment that showed genes and chromosomes (and thus heredity) were made of DNA - a positive result, easy to analyze the methods and results - and frequently ignored.

It wasn't until the far simpler and more unambiguous Hershey/Chase experiment - one that even physicists could understand - was done. But oddly, it was a negative result when it was published: "protein is not the heredity molecule", although by reasonable exclusion one could easily conclude that DNA was almost certainly the heredity molecule.

Even then, a lot of other evidence was required for the mainstream community to accept this. Protein bias has a long history and persists today.


Exactly this. Tons of people would love to publish negative results. But when you're in the situation with something not working, the actual mechanics of proving it doesn't work is a much harder problem.


I am sometimes amused to read failed replication studies which do not calculate their own statistical power -- so they do not know whether, assuming the original study was correct, they will have the sample size to reliably detect an effect. A failed replication then says nothing about the original conclusions.
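For what it's worth, estimating that power doesn't take much. Here's a rough simulation sketch of my own (the reported effect size and the replication's sample size below are made-up numbers), which assumes the original effect is exactly as reported:

    # Power of a planned replication, estimated by simulation under the
    # assumption that the original study's standardized effect size is real.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    reported_effect = 0.4   # assumed effect size from the original study
    n_per_group = 20        # assumed sample size of the replication
    trials = 5000

    hits = 0
    for _ in range(trials):
        treatment = rng.normal(reported_effect, 1.0, n_per_group)
        control = rng.normal(0.0, 1.0, n_per_group)
        if stats.ttest_ind(treatment, control).pvalue < 0.05:
            hits += 1

    # Comes out around 0.2: this replication would usually "fail" even if the
    # original effect were exactly as reported, so its failure tells us little.
    print(hits / trials)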

Statistical error is a hobby of mine. The Economist mentions a few pervasive errors, but there are many more. I've been writing a guide:

http://www.refsmmat.com/statistics/

One amusing example is the oft-quoted statistic that 3 million Americans use a gun in self-defense each year. The true figure is several orders of magnitude smaller. Follow the link for more details:

http://www.refsmmat.com/statistics/p-value.html#taking-up-ar...
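
The gist, with a back-of-the-envelope sketch of my own (the population, true rate, and misreporting rate below are purely illustrative, not the figures from the linked page): when an event is rare, even a tiny rate of false "yes" answers in a survey swamps the true positives once you scale up to the whole population.

    adults = 250_000_000   # assumed adult population
    true_rate = 0.0005     # assumed true rate of defensive gun use
    false_yes = 0.01       # assumed 1% of non-users mistakenly answer "yes"

    real_incidents = adults * true_rate
    spurious_yes = adults * (1 - true_rate) * false_yes

    # ~125,000 real incidents vs ~2.5 million spurious "yes" responses:
    # the survey total is dominated by misclassification, not reality.
    print(int(real_incidents), int(spurious_yes))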


That's great work and I want more of it!


The version I linked to is about half the size of my work-in-progress, and I'd be happy to share the draft with anyone interested in 25,000+ words on doing stats wrong. My email is in my profile.

Right now I'm a first-semester graduate student in statistics, so my writing has slowed down a bit; once I get some confidence in what I've learned I'll try to wrap it up. I'm not sure how or where to publish it, though.


Start a blog and publish it openly. That's what I will be doing with my lab research. My mind was swayed when a friend's brother (aaronsw) took his life at least partly for this reason.


Nice article, but copying an xkcd image without copying the image's tooltip text is criminal.


Which is: "'So, uh, we did the green study again and got no link. It was probably a--' 'RESEARCH CONFLICTED ON GREEN JELLY BEAN/ACNE LINK; MORE STUDY RECOMMENDED!'"


Um, you mean in the sarcastic sense?


The tooltip text is half the joke a lot of the time.


The truth machine is broken. How do we fix it?

And why is it that whenever I see a list of the "top 10 most important problems" to solve, this isn't on it? Most educated people take the veracity of published science as given, and we clearly know that's a false assumption.

I know we need more transparency in science - sharing of data and code, and negative results. But institutionally, I don't know how we get there with the tools we have.

Part of the reason the current system stays in place, despite failing at its charter mission, is billions of dollars of annual subsidies. Changing the way research dollars are allocated would change the structure of the academic enterprise, but that is incredibly hard to do. Few systems have as much momentum.


Most educated people take the veracity of published science as given

That's the root problem right there.


It's not just the subsidies, but how they are awarded - right on with the "structure of the academic enterprise".

Consider: To get promoted to professorship, you have to be selected by slightly hoarier professors; to be selected, you have to have published works, which are (anonymously) reviewed by other professors, and be awarded grants by (anonymous) review committees of other professors. You probably went to grad school under the tutelage of professors in the same circle... And at least in the past, scientists were sufficiently generalist that you had a wide pool of fellows reviewing your work at all stages; now we have siloed subdisciplines (like "chemical biology" - which, mind you, is not the same as "biological chemistry" or "biochemistry"), and the emergence of "interdisciplinary research", which somehow, instead of encouraging generalism, promoted dilettantes who couldn't hack it in either of their parent fields...

Is there any wonder why research is increasingly unreliable?


So there's an interesting topic hidden in your comment. What is truth? I don't mean to ask that in a flippant way. I mean to ask it in a way that has an answer - or at least a reliable set of answers. Of course this is a philosophical debate of long standing, but it has a lot of relevance for today.

What do you mean by truth? How do you attain to truth?

Is truth what is objectively perceived? If so, how do you determine what is the shared subjective perception of the exterior world? Is truth possible to attain to? Should we instead focus on finding and sharing working (up to tolerance) models of the reality we encounter?

This is sort of a big debate in philosophy of science circles. You can see traces of it in Popper, Feyerabend, and a few others you can rummage up on Wikipedia. These are the questions that frame how you do research, how you present research, and the expectations of reliability of research.


In my experience, when most engineering-oriented people use the word truth, they (wittingly or unwittingly) mean the pragmatic definition of "allows us to make stronger-than-previous predictive models about something". Most of the argument in philosophy of science circles strikes me as meta-interesting, not eminently relevant to the practice of science.

Especially when dealing with psychology, a lot of those "what, exactly, is an electron" sort of deeply epistemological statements are not really asked, because we have enough trouble with the simplistic models we have.

So yeah, my recommendation (for what it is worth) is to always assume "most predictive model available" when someone says truth, and be aware that you are making that assumption and that simplification.


That's a reasonable approach when talking with people who have some day-to-day experience with data and systems. Unfortunately, this doesn't per se hold true when talking with people from other disciplines. Some people hold truth as a platonic thing: it's either true or not true, and new research disproving old things implies falsehood, not a less accurate model. Tackling the elephant in that room, I'm not even talking about religious discussions, just normal discussions of physics research being overturned.

A historian could speak more pertinently to this, but my understanding is that in 50s-60s "middle" USA, scientists were pretty much treated as oracles of divinity by a great deal of the common population.


I agree and have run into that problem myself. Unfortunately, fixing that impression takes time and to my knowledge there isn't any real way to explain the problem with "Platonic truth" as a tangent off of a discussion around research journals, for instance. It requires a full step back and a couple hours of discussion to explain it at all, if you haven't been exposed to it before.

If anyone has any recommendations on that front, I am all ears.


Actually, I don't think we have to go down that rabbit hole. The current journal system is built such that a published paper is essentially a claim that this result is reproducible and statistically significant in some sense. A philosopher can and should then go and question what exactly that means in practice, whether it's the best standard, etc etc, but in the meantime, it is also acceptable to take the system's claims and observe that they do not match the reality of the journal system on its own terms, without any particular need to dig into philosophy.

In fact, I'm personally not convinced it's the best standard, but, nevertheless, it is a good start to hold journal papers to the standards they proclaim.

Worry about what the best theory is when they show some ability to implement any theory!


Truth is discovered. The process of arriving at the discovery of truth is where the perceptions and subjectivity play a role.

For example, water has certain properties based on the conditions surrounding it. These properties exist whether you know about them or not. If you can, through observation and/or measurement, discover these properties, you have discovered truth. If you never discover these properties, it does not affect the existence of the properties.


> The truth machine is broken. How do we fix it?

> And why is it that whenever I see a list of the "top 10 most important problems" to solve, this isn't on it?

The truth machine has always been substantially broken; what has distinguished the scientific process is that it works at all. Our historical view of science suffers from absolutely enormous selection bias: in general we are only ever interested in following the stories which trace the threads of truth as they track through an absolute maelstrom of error, caused by nepotism, cronyism, prejudice, or a thousand other uncorrected human faults. 'Publish or perish' is only a modern progression of faults which have always been there.

That's not to say that finding improvements to the scientific process is not a hugely important question, but we should spare ourselves the drama of thinking our age has broken the system in some uniquely terrible way.


We remember the big results of science: Newton's gravity, Einstein's special relativity, Darwin's natural selection, the Alvarez hypothesis of dinosaur extinction, etc. In doing so we forget the orders of magnitude more research results which were eventually proven wrong, or inconsequential.

Most of the papers that are wrong today will be caught eventually, or simply won't matter in the long run. The truth machine is not broken, it's just bigger and slower than most people realize.


Move to a bayesian approach for experiment design?


The fundamental problem as I see it: science can either produce a high rate of unreliable results or a low rate of reliable results, but the market demands a high rate of reliable results.

The market produces the current situation because the rate of results is much easier to observe than their reliability.

No one wants to fund a study only to get no result out of it. So they end up getting an unreliable result instead.


>The fundamental problem as I see it: science can either produce a high rate of unreliable results or a low rate of reliable results, but the market demands a high rate of reliable results.

Well said, but the market will accept a high rate of unreliable results because no one knows better.


Interesting. Does this seem like an opportunity to measure reputation from influence, results and funding utilization? Is any site (lexis, etc) already trying this?


The Structure of Scientific Revolutions by Thomas Kuhn has some interesting commentary on how invalid scientific models of our world are eventually (and painfully) discharged. It's the book that coined the term "paradigm shift". This article tends to focus more on bad science, but I think the book is still at least partially relevant.


I agree with you that Kuhn is still worth reading, but perhaps a better intro to Philosophy of Science can be found in Laudan's "Science and Relativism". He is not dismissive of Kuhnian-style relativism, but he is critical of it and helps you see how it fits into the larger debate.

At least that's what I remember. It's been a while since college :/

http://www.amazon.com/Science-Relativism-Controversies-Philo...


I argue the best primer is Feynman's "The Meaning of It All: Thoughts of a Citizen-Scientist" and "The Pleasure of Finding Things Out". I'm not just linkbaiting HN's love for Feynman: Feynman was an active, practicing scientist and, besides being introspective, he had a great knack for explaining things in a way accessible to non-scientists. Kuhn is worth reading, though, if you are a scientist and want a more in-depth, formally laid out analysis of the philosophy of science.



I'm not a fan of Kuhn, largely because he can be read as saying that, indeed, scientific models that get "overthrown" in a paradigm shift are therefore "invalid". (In the extreme, this leads to complete relativism, the idea that there is no such thing as a "valid" model at all. Kuhn himself has waffled on this point: sometimes he says he didn't really mean to take that extreme view, but other times he says things that are really hard to make sense of in any other way.) That leads people to believe that, for example, Newtonian physics must be "invalid" because it was "overthrown" by relativity. But it's really hard to square that claim with the fact that Newtonian physics (along with many other supposedly "invalid" models) gets used every day to make accurate predictions.

I think the bad science described in the article is, at least in part, a reflection of our losing sight of that ultimate objective: we build scientific models of the world in order to make accurate predictions. It's not enough to say, well, I used all the right statistical techniques, I used all the right double-blind control procedures, my results have been replicated. Those things are all necessary, but they're not sufficient; they're not the goal, they're only the starting point. The goal is to make accurate predictions. I think we get a lot of bad science because we don't enforce that requirement enough.


I was under the impression that models are thrown out in a paradigm shift because they aren't robust enough to account for edge cases that start out as weird (and ignored) outliers to the original model. These outliers eventually become glaringly apparent to the mainstream. Ideally, the new model resulting from a paradigm shift supersedes the old model while modeling the edge cases that the older model was unable to account for. Don't take my word for it, though. It's been a long time since I've read the book ;-).


I was under the impression that models are thrown out in a paradigm shift

But they aren't always thrown out in a paradigm shift; that's the point. Newtonian physics was not thrown out in the paradigm shift to relativity. Our understanding of why Newtonian physics works as well as it does within its domain of validity changed; but the fact that it does work within its domain of validity did not change.

Kuhn picks a number of examples where that was not the case--where the old model did get thrown out (for example, the paradigm shift from Aristotelian physics to Galilean/Newtonian physics, and the paradigm shift from the Ptolemaic to the Copernican model of the Solar System). But he tried to apply particular features of those examples to all paradigm shifts, which doesn't work.

Ideally, the new model resulting from a paradigm shift supersedes the old model while modeling the edge cases that the older model was unable to account for.

In the case of relativity, it did account for edge cases that Newtonian physics couldn't; but it didn't supersede Newtonian physics, as I noted above. We still use Newtonian physics where its predictions are accurate enough for our purposes, which is most of the time. It's only in particular domains (such as GPS, to pick an example ordinary people are familiar with) that we need relativity to get accurate enough results.


The "painful" bit is key. There's a difference between constantly self-correcting and massively self-correcting, in a sudden, irregular way (known as "punctured equilibrium") only occasionally.


What a weird article. I agree with nearly every individual fact within it, but after reading it, I still don't know the point of the piece. I don't want to jump to conclusions, but given that this piece is mostly criticism of science the part of me that associates "conservatives" with "anti-intellectual reactionaries" suspects a hidden agenda to undermine science or give climate change "skeptics" something to point at and yell about [1].

Do scientists get funding to replicate research? No, not usually.

Does replication still happen? Yes; replication of previous work is often step 0 of new research. It happens all the time.

Do replication results get published? No, not usually. Unless it's a clear rebuttal of a famous paper.

Does this matter? Debatable. Science proceeds slowly, and bad papers tend to be forgotten unless they're easily replicable. It just takes time.

Does this mean that we can't trust science? Absolutely not. You just can't trust individual papers, prima facie. This is something that scientists are supposed to learn as undergrads. Just because it's in a journal (even a "good" journal, like Science or Nature -- some would say especially if it's in one of those journals) doesn't mean it's right.

What does this tell us about armchair science? It's utterly useless. The reason that you have to be a professional scientist is not because professional scientists are smarter, but because they have a huge depth of knowledge about a particular field. Any working scientist can point out published, cited papers in their field that are total crap. But you can't tell that a paper might be crap unless you've spent years reading all of the literature in the field. Yet people on the internet persist in thinking that they can cherry-pick a single paper from arXiv or PubMed, and make sweeping conclusions about a field.

[1] indeed, the usual HN climate-change critics are in this thread, pointing at and yelling about how science is "broken".


the usual HN climate-change critics are in this thread, pointing at and yelling about how science is "broken".

I don't think the current situation in climate science shows that "science is broken". I think it shows that climate scientists have been too quick to claim the credibility of Science when their work does not measure up to that standard. (Climate science is certainly not the only field that does this: much of what passes for "scientific research" in the social sciences doesn't meet the standard either.) That's not to say we shouldn't be trying to figure out how the climate works; it's saying that you don't claim the mantle of Science until you can back it up.


> climate scientists have been too quick to claim the credibility of Science when their work does not measure up to that standard

What the fuck are you talking about?

Observations and models are repeated and challenged far more in climate science than most other fields. It is a good example of science working exactly as it should. I'm not sure where you think climate science has gone astray but considering how expensive and difficult controlled experiments in climate science are, I think climate science does an amazingly good job.

To clarify: yes, there are lots of bad papers in climate science (see Sturgeon's Law), but science does not require every paper to be accurate (in fact, you need a few extremist ideas to try to shake things up). Science requires rigorous retesting of theories to narrow the target, and climate science is one field where that's happening all the time. Climate science's continuous observations do a pretty good job of sidelining inaccurate papers within a couple years (a quick turnaround, all things considered).

Climate science's problem is non-scientists misrepresenting, misinterpreting and misunderstanding it. The science itself is exactly what you'd want for every scientific field.


Observations and models are repeated and challenged far more in climate science than most other fields.

You're kidding, right? Please show your work. [Edit: I saw your correction, and have adjusted my quote above, but my response is still the same: please show your work.]

science does not require every paper to be accurate (in fact, you need a few extremist ideas to try to shake things up).

I quite agree, and I wasn't trying to claim that all, or even most, papers published in other fields are accurate.

Climate science's problem is non scientists misrepresenting, misinterpreting and misunderstanding. The science itself is exactly what you'd want for every scientific field.

I disagree, but I doubt we're going to resolve the question here.


> Observations and models are repeated and challenged far more in climate science than any other field.

That may be an overstatement; see the Standard Model of physics.


You're right, it was overstatement. I've corrected to "most other fields".


+1 on "what the fuck are you talking about?"

Climate science even got a physics prof from Berzerkely (who was convinced that only physicists knew how to do science right) who went in with an anti-climatology bias, and was funded by the Koch brother, and he replicated climatology results in the "BEST" study -- to the point where journals like Nature turned down publishing the BEST study because it was not new information and only confirming already well known scientific facts. So a Koch-backed study has actually determined that climate science really is science. That's like the Catholic church in the 1600s funding someone to investigate Galileo and publishing that it looks like the Earth goes around the Sun after all...


a Koch-backed study has actually determined that climate science really is science

Reference, please?



Thanks for the links. I'll add one link, direct to the project page, since it has links to the actual data, as well as much more detail than is in the articles:

http://berkeleyearth.org/

I note two things on a first read-through:

(1) The project confirmed the data on temperature trends, but the error bar sizes are only reasonable back to about 1860; further back than that, the error bars get quite large. This is glossed over in the statements of findings: for example, the summary of findings (http://berkeleyearth.org/summary-of-findings) says: "Global land temperatures have increased by 1.5 degrees C over the past 250 years", but the error bars in 1750 are about +/- 1.5 C. You can't conclude anything reliably when the error bars are as large as the signal you think you see in the data.

(I also note, btw, that this conclusion is mis-stated in the NYT op-ed by Muller: it says "Our results show that the average temperature of the earth’s land has risen by two and a half degrees Fahrenheit over the past 250 years, including an increase of one and a half degrees over the most recent 50 years". Neither of those statements are consistent with the summary of findings page on the project site, which claims, and shows, 1.5 C rise over the past 250 years, as above, and less than 1 C rise over the past 50 years.)

(2) The project looked at statistics on temperature trends to test whether the "urban heat island" effect might have biased the conclusions; since they saw the same trends when looking at rural areas only, they concluded that it didn't. But that isn't the only criticism Anthony Watts made of the individual station data: he claimed that the locations of many individual thermometers, urban and rural, were poorly chosen (the classic case is one that was placed in the hot air exhaust stream of an air conditioning unit). The Berkeley project did not go and eyeball individual stations, as Watts did, so they had no way of assessing this effect.


They looked at siting effects as well:

http://www.scitechnol.com/2327-4581/2327-4581-1-107.php

They used Watts own siting data and showed that both poor and good sites showed the same temperature trend over time.

The data back to 1860 are also sufficient to establish the current warming trend and through other analysis (e.g. Meehl, et al's work) to establish the climate sensitivity. Climate science as a discipline does not hinge on knowing exactly how warm it was 250 years ago within +/- 0.5C.


Thanks for the link, I'll take a look at it.

The data back to 1860 are also sufficient to establish the current warming trend

Only if you assume that there are no significant cyclic effects on longer time scales.

Climate science as a discipline does not hinge on knowing exactly how warm it was 250 years ago within +/- 0.5C.

It does if you're claiming, as the IPCC does based on the input of climate scientists, that temperatures now are warmer than they have been in at least several thousand years.


This is not as relevant as you're implying - if there are cyclic effects on long time scales, you're implying they're highly exponential - long periods of no warming, and then sudden periods of rapid warming (oddly correlated with the rise in human CO2 emissions at that).

There's no postulated mechanism for how this could be the case - where is the extra energy or heat retention coming from and managing to come into play so suddenly since the 1970s?

Whereas, if one accounts for anthropogenic CO2, it explains it superbly.


if there are cyclic effects on long time scales, you're implying they're highly exponential

I'm implying no such thing. Consider an analogy: I look at the temperature trend from February through May and conclude that I have enough data to show a significant warming trend. I therefore predict that by November, the planet will be dangerously overheated.

Obviously I have ignored a cyclic effect on a longer timescale. Is it "highly exponential" as you describe?

long periods of no warming, and then sudden periods of rapid warming

This is not what the data shows; at least, not if you look at all the data, not just a few selected temperature reconstructions that are questionable (at best) anyway. The data in full, looking at a number of different fields, shows a cycle with a period of about 800 to 1000 years; the last iteration of the cycle consisted of the Medieval Warm Period and the Little Ice Age, and we are nearing the next maximum of the cycle now.

This is not to say that human activities (of which CO2 emissions are only one component: why is it that nobody talks about land use?) have had no effect on the climate. (Arguably, humans terraforming the planet has been a significant factor in keeping another Ice Age from starting.) But if you're using the wrong baseline, it's tough to separate the human effects from the other effects.


Except no one is simply curve fitting the data - you haven't proposed any alternative mechanism that leads to a multi-decadal warming cycle.

That's what anthropogenic climate change is - since the only effect that suitably explains warming is increased eCO2 in the atmosphere.

Absent (effective)CO2 increases, you can't explain the warming trend. There shouldn't be one as pronounced.

If you're going to say "well what if there's a long cycle" - then well, what is it exactly? Where is all this additional heat retention in the atmosphere coming from? There are no multi-decadal phenomena on Earth which can explain the temperature record.

EDIT: The data for example, does not show a 1000 year cycle. You're inferring there's a cycle, by doing what you accuse others of - pointing to a graph, and declaring that because it looks "roughly" cyclical it is.


Absent (effective)CO2 increases, you can't explain the warming trend.

I understand that that is the "official" view of climate science, but that doesn't mean it's right. I don't think we're going to resolve that sort of dispute here, but if your only argument is from authority--"climate science says so"--then that's not enough for me. I don't think climate science as a field has exercised sufficient care to be just taken at their word.

If you're going to say "well what if there's a long cycle" - then well, what is it exactly?

Nobody knows for sure; we don't understand how the climate works well enough for that. But the fact that we don't have a theory that explains the cycle doesn't mean there isn't one.

Where is all this additional heat retention in the atmosphere coming from?

Actually, the amount of heat being retained in the atmosphere is miniscule on a planetary scale. The significant heat sink is the oceans. You have to be careful distinguishing between heat and temperature.

As for where it's coming from, as above, we don't understand all the causal factors at work; the fact that we have studied one intensively (CO2) does not mean that one must be the only significant one. (The IPCC report itself lists a number of significant causal factors as having a "low" level of scientific understanding.)

The data for example, does not show a 1000 year cycle.

Whose data? I understand that the "Hockey Stick" does not show a 1000 year cycle, but the "Hockey Stick" is not data; it's a (questionable at best) reconstruction of temperatures from data. We do not have direct temperature data from 1000 years ago, and as far as indirect data, there is plenty to indicate that there was indeed a Medieval Warm Period and a Little Ice Age, as I said.


"Only if you assume that there are no significant cyclic effects on longer time scales."

We know about cyclical effects on longer time scales like Milankovitch cycles; we understand solar variation and volcanic effects. We know about the Younger Dryas and Dansgaard-Oeschger cycles. We know about shorter term oscillations like the AMO and PDO.

You apparently know nothing of any of this, and have never heard of Muller's BEST study even though it was splattered all over the New York Times a few years back. Yet, you feel competent enough to make statements about how climatology isn't a real science.

We also know about something called the Dunning-Kruger effect.


you feel competent enough to make statements about how climatology isn't a real science.

I didn't say climatology wasn't a real science. I said it's not an accurate enough science to justify making policy decisions with huge consequences. Plenty of other real sciences are in the same state; that's to be expected, since it's a phase every real science has to go through in order to become accurate enough.


a) I'm not the author of the study.

b) You expect us to believe that you have developed substantive critiques of a 3-year research project in 24 minutes of reading. (the delta between our post timestamps)

c) IMO whatever credibility you might have had to debate the science in detail is forfeited by your un-ironic reliance on the blog of a retired TV weather guy. EDIT: this is wrong, I confused you with another poster in this thread who had previously linked to the Watts blog. My apologies.


You expect us to believe that you have developed substantive critiques of a 3-year research project in 24 minutes of reading.

I don't expect you to "believe" anything. I expect you to look at the items I pointed out and see what you think of them. If you don't agree with them, feel free to give specific reasons why not.

your un-ironic reliance on the blog of a retired TV weather guy.

I didn't bring Watts into it; the Berkeley project did. I was merely commenting on something that Watts had emphasized but I didn't see mention of in the first set of links (lamontcg gave another link that does deal with that specific issue, I'll take a look at it).


"I expect you to look at the items I pointed out and see what you think of them. If you don't agree with them, feel free to give specific reasons why not."

You need to do some research first. The fact that you had never heard of the BEST study indicates that you don't know the most superficial details about climate science and this debate (ones that were splattered across the New York Times). I'm not going to do the work to educate you. I expect YOU to do research before you demand that I produce a 250-paper annotated bibliography of climate science and its justification. Come back when you can explain the radiative physics behind CO2 being a greenhouse gas and how it can be shown in a table-top experiment, and when you know what Milankovitch cycles are. Explain what caused the last Ice Age to end; be prepared to discuss the PETM. Explain the theories behind the warming that led up to the Eocene optimum 49 million years ago and the subsequent cooling after that. Climate science is more than just arguing about anthropogenic global warming. If you want a place to start, go here:

http://www.amazon.com/Earths-Climate-William-F-Ruddiman/dp/0...

Then start hacking away at these:

http://www.amazon.com/Paleoclimatology-International-Geophys... http://www.amazon.com/Principles-of-Planetary-Climate-ebook/...

Try getting off the Internet and exposing yourself to some actual science and not arguments about science.


Climate science is more than just arguing about anthropogenic global warming.

I entirely agree; but to the extent that climate science is being used to justify policy decisions with huge consequences, it is about anthropogenic global warming, since that's what the political debate is about.


Sorry, that was a snap response. Once again I've taught myself the value of posting slowly.

c) is wrong; I mistook you for someone else in the discussion. I noted so in an edit.

For the rest, I think lamontcg put it better than I could have.


c) is wrong; I mistook you for someone else in the discussion. I noted so in an edit.

I just saw the edit, thanks for clarifying.


Did you bother reading the article? Because it specifically says that not only are the usual soft targets like psychology a problem, but so are hard STEM targets like cancer research.

Also, what's with the capitalising of Science like it's some kind of club or organisation? "Science" is not a proper noun - making it one is facilitating dogma.


so are hard STEM targets like cancer research.

Yes, that's true; I shouldn't have singled out social sciences.

what's with the capitalising of Science like it's some kind of club or organisation?

It's meant to convey the fact that climate scientists basically tell politicians "look, Science says you have to do X, Y, Z or we're doomed". They are claiming that their results are good enough to drive policy choices that have huge consequences, and politicians believe it because it's Science--they see it as the same as if astronomers came to them and said "look, we've calculated that asteroid A is going to hit the Earth unless we do X, Y, and Z; we've checked the numbers every which way and there's no other option." But climate science as a field is simply not accurate enough to justify that.


There is a book about the "bubble" of string theory research written by a physicist, but the title and author slip my mind right now.

It discusses the very phenomenon that research grants go to increasingly bigger groups that can be trusted to deliver. And also string theory research itself which did get a ton of grant money during the 1990's even though the results never had impact outside the bubble itself.

Edit, some searching helped me find it: Lee Smolin, The Trouble with Physics http://en.wikipedia.org/wiki/The_Trouble_with_Physics


Where did this article mention climate change?

If you agree with every fact in the article, why would you look for a hidden agenda?

Your comment makes you seem a bit paranoid.


I also think the linked article was somewhat strange. Or maybe just linkbaity. The title can be paraphrased as "To an alarming degree, science is not self-correcting." However, the article actually describes the way science is supposed to work, which is self-correcting.

The main criticism in the article seems to be that, in much of current scientific practice, the self-correction doesn't happen immediately. Well, it would be helpful to know: (1) how does the current state of self-correction compare to different times in the past - 50 years ago, 100 years ago, 200 years ago? (2) what happens over the long term to papers that aren't refuted immediately but which later turn out to be obviously wrong? Do they hang around and have a negative impact on the progress of science? I'm sure there are more questions.

Another strangeness in the article: The article begins by raising a concern over a 1998 "priming" study. In the third paragraph, the article states that there have been nine (9!) subsequent studies that have failed to replicate the 1998 study's result. Is that not an example of scientific self-correction working quite well? The article complains, "Either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry." Yes, something is awry. But it's hard to see how it's the scientific method/process itself, which seems to be doing exactly what it's supposed to: calling the 1998 paper into question. (Is the main concern here just that more bad or questionable science is being published than in the past? I'm sure that's true, but there are also many more studies being published in this era and it seems not surprising that overall quality would decline.)

Like some others at HN, I get concerned when I see pieces like this that the science-haters will latch on to and use to discredit the scientific method itself. The article itself _should not_ be taken that way, by anyone who understands science. But there are many who don't understand science, who are anti-science, etc.


"The main criticism in the article seems to be that, in much of current scientific practice, the self-correction doesn't happen immediately."

Wrong. This is not the main criticism of the article. It is not a problem if self-correction doesn't happen immediately. It is, however, a problem if the rate of self-correction is not fast enough to compensate for the ratio of 'horrifyingly wrong' to 'generally correct' papers.


> The article begins by touting a concern over a 1998 "priming" study. In the third paragraph, the article states that there have been nine (9!) subsequent studies that have failed to replicate the 1998 study's result. Is that not an example of the scientific self-correction working quite well?

Ideally, you'd like to see the original study being discredited before the point where 9 separate studies have exposed serious problems with it (and it's not clear that it's been discredited yet). Papers continue to pick up citations even after they've been formally retracted; that's also not a good example of self-correction working well even though the retraction process is supposed to be self-correction. Correction hasn't happened until people stop believing in problematic papers.


He's saying that he wonders if the point of the article is to give ammo to those who deny scientific studies in a non-scientific manner.

Something they can point to in order to rebut scientific studies they don't believe are valid.

I too didn't understand the point of this article, nor how it showed that science is not self-correcting. To me it just talked about some issues with scientific research but didn't support its headline at all. I've never heard anyone claim that science was self-correcting in the short term; these things take decades and sometimes even centuries to self-correct.

And honestly that's ok. Science is all about having the best answer we can have, not the right answer. It's impossible to know we are right beyond the shadow of a doubt. We need to scrutinize how we do research and improve our methods as we figure out how, but that doesn't mean science has some major issue. That's just how it works and how it has historically worked, as far as I know.


With a title "To an alarming degree..." it immediately feels like a persuasive paper. Who thinks it's alarming? From the sound of it, everyone should (or does) think this is alarming.

But it may be a stretch to say this is someone trying to discredit global warming.

Science isn't broken. People claim what they see, sometimes they are wrong, sometimes they are right. But eventually we find out the truth.


It didn't mention climate change. I know from past experience that this line of argument -- indeed, any that casts doubt on scientific research -- is a favorite of the "climate skeptic" world. Moreover, I know that the Economist has a center-right political bias.

Maybe they're advocating for something specific here (it's honestly unclear), but given that the Economist almost never writes anything of substance about science, and given their political bias, it isn't at all paranoid to wonder if this article has a subtext. It's called "being an intelligent consumer of media".


They have a "Science and Technology" section in every single issue. Frankly, it contains some of the best, most scrupulously accurate science writing for a general audience that I've seen anywhere.

Also, just two weeks ago, they published a "leader" (primary article expressing the editorial position of the paper) that was very critical of climate change deniers.

[1] http://www.economist.com/news/leaders/21587224-all-means-que...


Evidently you don't consume the Economist very often.

http://www.economist.com/blogs/babbage/2013/09/ipcc-climate-...

To put it in the language you seem to prefer, Bill O'Reilly thinks the Economist leans left. I disagree with Mr. O'Reilly's employers and will not send them traffic, but support for this claim is easily googled.

ETA: The link above is more flip than I'd like. Try this:

http://www.economist.com/news/leaders/21587224-all-means-que...


The Economist is conservative (erm, right) by European standards. It just happens that it comes out as fairly left by American standards, given that we are so far to the right of the continent.


I don't think that the Economist is a reactionary right-wing publication (at least, not the editorial board), but I do think there are people on its staff who are sympathetic to the climate change denialists.

That said, I think the piece is puzzling, not that I'm sure it's a right-wing hatchet job on science.


I am a "liberal" and a huge fan of The Economist. I don't find this to be an accurate assessment. Someone else already linked to their climate change stance, and in my experience their economic stances are generally supported in the academic community.


Their lean is far more center than right, and their right-wing nature is more in the European liberal style (think the German FDP, not the CDU/CSU and definitely not the Republican party's views on environmentalism). Very pro-business, but not socially conservative or anti-environmentalism.


When you're trying to protect science by finding nefarious motives among those who criticize it, and you start not with whether the criticism is fair and to the point but with whether it can be used by your ideological enemies, you're doing it wrong. (If you're in a political meeting, you're doing it exactly right: if the facts are on your side, pound the facts; if not, pound the table.) But if you're concerned about science as a way to expand our objective knowledge, and not as a way to provide dressing for your political ideas, whatever they may be, it is wrong.


It sort of amuses me that there are HN climate change denialists, but there aren't really any HN creationists.


Well, it's hard to set up a legitimate scientific experiment to prove/disprove something like climate change. The best we can hope for is observations over time.

Another issue that's problematic is that when you look at the geological evidence, the earth has actually been through several heating/cooling cycles. I mean, the last glacial period was fairly recent, we know the earth is warmer now, and we strongly suspect it was warmer before the glacial period.

Furthermore, no proper experiment is possible to explain why the earth is warming at the current rate, if the rate is natural or caused by humans, etc...

Questioning the human causes of climate change doesn't automatically lead to creationism, it's simply a rational question to ask.


It all starts with the term "climate change". Back in the day it was global warming, but when reality didn't fit the models, suddenly climate change became more palatable. That is a minor peeve, actually. The killer factor is the hysteria. It's not that I deny that man is changing the climate - heck, we change just about everything else in our environment, climate would be the exception. I just can't bring myself to defend the position that we must reduce per capita energy usage. Reducing energy usage is equivalent to de-evolution.

Putting it another way, environmentalists defend the position that we can't change anything about the environment to such an absolute extreme that I smell religious-like zealotry.

Fortunately, the solution is a couple of decades away, through electric transportation, coupled with solar/wind renewables. No CO2 but also no "pristine Earth" nonsense.


The IPCC - which was set up in the 1980s. Note the name.

The questions you ask are legitimate.


There are a few, but they tend to keep pretty quiet. HN tolerates creationism/evolution denialism far less than it tolerates climate change denialism.


It makes sense, though. AGW denialism is very much a part of "pro-business" ideology, because it's uncomfortable to accept things that don't align with your worldview, even when there's a mountain of evidence behind it.

HN has no particular attraction for fundamentalist Christians.


I don't have a problem with climate change as a science, or even as something we should be concerned about. My larger issues are with the hysteria, and the anti-corporatism that tends to go along with it.

For the record, I'm pretty anti-corporation, but for different reasons. I feel that, given the problems people face in their individual lives, their communities, broader society, and the earth as a whole, it comes down to: what can we change easily today, what can we change with minimal impact tomorrow, and what directions can we take to accomplish a given set of goals farther into the future?

First, what are the negative impacts of global warming? Can these be addressed in a more localized level? If they need to be addressed globally, how can that be prioritized and accomplished?

Second, have the issues and solutions been prioritized by impact:cost analysis?

Third, given impact:cost analysis and prioritization, are there smaller steps with a larger impact:cost ratio that can improve a problem in the near term and that could have a higher priority?

It may seem really insensitive to take a step back and look at things objectively.. but the "OMG, my grandkids won't ever see real trees!!!" approach is just as polarizing and ineffective as a whole as the religious nutters.


Indeed - there's very little money to be made from the Intelligent Designer, so there's little to no appeal. Climate change raises the specter of regulation.


>HN has no particular attraction for fundamentalist Christians. //

Totally OT as it is, what are you getting at here?


Hacker News has climate change "denialists" because climate models are terrible at predicting global temperature[1][2]. Once scientists gain the ability to make accurate predictions, more logic-minded hackers will go along. Hackers like predictive models with a good track record, a la Nate Silver. They dislike emotional bullying.

[1] http://wattsupwiththat.com/2013/10/14/90-climate-model-proje...

[2] http://www.economist.com/news/science-and-technology/2157446...


Nate Silver comes out very clearly in support of climate change theory in his book The Signal and the Noise, and explains why, even as he goes carefully through the problems and limitations of many climate models.

Criticising models is not the same as claiming that anthropogenic global warming is false. Indeed, Nate Silver makes the point that climatologists are their own greatest critics -- they collectively know the limits of their models better than almost everyone else.

So I'm not really sure why he's in your comment. If you've read what he writes, then you'll need to be building counterarguments to much of what he wrote on the topic; "models have problems" is just part of it.


It's quite clear that "hackers" don't understand enough about the research to make informed criticisms.

Cherry picking two measurements of temperature data does not make for a convincing rebuttal against the overwhelming independent lines of evidence summarized in (among other places), the IPCC reports.


Argumentum ad verecundiam.


Not actually a fallacy, even the informal kind, despite the Latin name.


Ah yes. I need to listen to authority and ignore the fact that observed data are falling out the bottom side of their models' 95% confidence intervals. This is how science is done in the modern age.


Prove your claim. Cite something other than a picture you found on a blog.


This is from another blog, but the figures are from draft versions of the latest IPCC report:

http://climateaudit.org/2013/10/08/fixing-the-facts-2/

The figure that was removed shows that measured temperatures are falling outside a 95% confidence interval from the multi-model mean.

This does not mean the climate will not warm, or that CO2 is not the major factor (or at least a major factor), but it does mean the models used for 100-year projections have failed after 15 years and are running too hot.
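To make concrete what's being argued about, here is a minimal sketch (synthetic data, NumPy assumed; not the IPCC's actual methodology) of what "observations falling outside the ensemble's 95% range" amounts to: compute a pointwise envelope across model runs and check where the observed series lands.

    # Toy check (hypothetical data): does an observed series fall outside the
    # central 95% range of a multi-model ensemble?
    import numpy as np

    rng = np.random.default_rng(0)

    # 30 hypothetical model runs x 15 years of temperature anomalies (deg C)
    ensemble = 0.02 * np.arange(15) + rng.normal(0.0, 0.10, size=(30, 15))
    observed = 0.01 * np.arange(15) + rng.normal(0.0, 0.05, size=15)

    # Pointwise 2.5th and 97.5th percentiles across the ensemble members
    lo, hi = np.percentile(ensemble, [2.5, 97.5], axis=0)
    outside = (observed < lo) | (observed > hi)

    print(f"{outside.mean():.0%} of years fall outside the ensemble's 95% envelope")

Whether the real observations do or don't fall outside such an envelope is exactly what the linked post and its critics disagree about; the sketch only shows the mechanics, not the answer.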

It should be cause for celebration. The highest warming scenarios are considerably less likely.

This also does not mean that an emissions tax and spending on R&D on low emissions energy sources is unwise. But it should be factored into cost benefit analysis of climate change response.

Ultimately response to climate change is a public policy problem and is not just based on scientific data and projections.

Really, the response boils down to one question, how much is each of us willing to pay per year to respond? $10, $100, $1000, $10000?


Ok, so you're citing three pictures on a blog. I guess I'm supposed to be 300% more convinced?

But seriously: you're cherry picking one set of mean global temperature data to dispute one graph in one IPCC report, for one line of evidence (of many). And your argument isn't rigorous or statistical -- some guy hand-drew some lines over the graphs and asserts that the models don't fit the data, even though the observed data lands pretty cleanly within the range of the models. And even if the models are off here, it doesn't invalidate the hypothesis of AGW. You can't win, even if you make a good argument here.

The reason nobody but climate denialists read these arguments is because they're pedantic and facile, not because there's a vast scientific conspiracy against you.


Climate models are not intended to precisely predict global temperature on short time scales, so the fact that they cannot does not disprove their accuracy or utility.

Furthermore, global climate models are tools for a particular kind of research, not the basis for the theory of global warming--which predates electronic computers by about 40 years.

Evolutionary theory cannot reliably predict complex outcomes either. For example, despite the relative biological simplicity of a flu virus, scientists struggle to predict the strain(s) that will be most virulent each year. But I never see posts on HN holding this failure up as proof that evolution is not believable.


One can argue that science will always have a hard time constructing an accurate model for predicting weather and climate, because of their enormous inherent complexity and nonlinear behavior:

> [...] we provide a number of illustrative examples and highlight key mechanisms that give rise to nonlinear behavior, address scale and methodological issues, suggest a robust alternative to prediction that is based on using integrated assessments within the framework of vulnerability studies and, lastly, recommend a number of research priorities and the establishment of education programs in Earth Systems Science.

http://link.springer.com/article/10.1023/B:CLIM.0000037493.8...


>but because they have a huge depth of knowledge about a particular field. Any working scientist can point out published, cited papers in their field that are total crap.

No. I promise you, there are scientists who literally cannot bring themselves to believe that any of the papers are crap, who buy into the "Science as authority" dogma. There are also scientists who do not have a depth of knowledge in any field, not even their own.


Thanks for avoiding the strawman "denier" nonsense, but I think you're looking for the phrase "CAGW-informed-legislation critics."

Why would a rational person criticize the assumption that climate changes?


"Climate change" in this context is short-hand for "human modified climate change" and generally refers to climate change accelerated by human production of CO2.

That said, criticising assumptions is perfectly rational.


Perhaps I meant to say "practical person" instead.


>Science proceeds slowly, and bad papers tend to be forgotten unless they're easily replicable. It just takes time.

Tell that to the large group of anti-vaccine zealots.


Technology development has gone through a revolution over the last decade or two. The process by which teams create things with a high degree of precision and detail has been found to be overwhelmingly social in nature. That is, while technical skills are irreplaceable, the social aspects of developing new things are much more critical to success than the technical aspects.

Many technologists do not get this. To them, anything important must by definition involve some kind of new tech. They fail to see that even in breakout technology development, the social way we construct teams and interact with each other (and our market) is critical.

Turns out humans are social animals. And this nature has impacts on all kinds of things, even things that are completely analytic.

Science has yet to make this leap. We're still at the stage where the details of the science are published and gushed over. We focus on the science itself instead of the much more interesting and powerful thing: the social structure of how we are conducting our scientific research.

There is a slight bit of light on the horizon. People from technical backgrounds are taking a hard look at how we organize our information and work. There are calls for more open science, and calls to rethink how we make funding decisions. There are calls for scientists to stop trying to be priests of knowledge -- arbiters of all that is true -- and to work more as servants. These are all lessons taken from mistakes the tech community has made and suffered from (and in many cases still suffers from). Let's hope this trend continues.


Given the tech community's tendency towards populism over genuine technical merit, I don't think it is a good model for discovering truth about the world. The tech community is still busy reinventing the '70s poorly, let alone making any real progress.

Social is only important in so far is it facilitates communication and progress towards technical understanding.


This article rehashes much of the content of Ben Goldacre's book 'Bad Pharma'; however, it does not propose any solutions (Goldacre does). There is plenty that can be done with legislation, as a customer of healthcare, and in clinical trials. If all trials, good results or bad, were published, then we would be halfway there.

http://www.badscience.net/2013/10/why-and-how-i-wrote-bad-ph...

The article mentions the company 'Amgen' and how they were not able to reproduce some earlier trial results. Alarm bells went off for me right there at the word 'Amgen'. They were the company behind the wonder drug EPO, of Lance Armstrong fame. That story is truly fascinating and is best pieced together from the book 'Blood Medicine':

http://www.bloodmedicine.info/

...and from the cycling scandal books that have come out recently, e.g. Tyler Hamilton's.


I wonder if much of it is not being corrected because it simply isn't important enough? Like the example of "think about a professor before an exam to get higher scores". I suspect that if, for example, some medication against HIV were found not to work, it would be thrown out quickly.


The guy who wrote Zen and The Art of Motorcycle Maintenance had a problem with the scientific method too:

"Through multiplication upon multiplication of facts, information, theories and hypotheses it is science itself that is leading mankind from single absolute truths to multiple, indeterminate, relative ones. The major producer of the social chaos, the indeterminacy of thoughts and values that rational knowledge is supposed to eliminate, is none other than science itself."

I used to think (10 years ago) he was wrong or concerned about something that would happen far in the future beyond my lifespan.


Robert Pirsig.

I'm re-reading ZatAoMM now, first time in about 20 years.

I'm not entirely convinced that Phaedrus's dilemma poses a true conflict. Science is evolution toward a more complete truth, and it's expected that any given step will be incomplete. Of conflicting or alternate theories, often one will be simpler than the other (Occam's Razor). In other cases, there may be more than one correct answer (wave/particle duality). I don't see either circumstance toppling science.


Thanks for mentioning this book. I think Robert Pirsig's book should be on the short list of "must reads" for any true hacker.


I haven't read the article, yet (in a bit of a hurry ATM...), but a crucial part of science is (independent) reproduction and verification.

Any number of things stand in the way of this: funding and funding restrictions, counter-productive IP laws, the "cult of celebrity" (to the detriment of those quiet souls who "quietly grind away"), etc.

I've been screwed over more than once by "medical science" and medical "best practices", so I have been somewhat sensitized to incongruities in this area. A recent, emerging big one: Statins, and more so the premise upon which they became ubiquitous, stand upon increasingly shaky ground -- at least with regard to their current, widespread prescription and use.

E.g. http://www.peoplespharmacy.com/2013/03/09/895-the-great-chol...


The only important discussion to have is "how can we improve our scientific method." Complaining about deficiencies without offering improvements is not particularly useful. There is no alternative to science.

The posters here who are suggesting that people should not trust science ought to be shunned. It is good to be skeptical of particular studies, but if you discard "science" you are reverting to a position of complete ignorance. Always remember that without an alternative model, if you discard a body of research the only valid fallback is that you don't know anything.


>Always remember that without an alternative model, if you discard a body of research the only valid fallback is that you don't know anything.

In the field I am in (sports biomechanics), this would be a vast improvement on things.


And this is why http://xkcd.com/836/ could have a fourth scene at the graveyard. Top hat man and his girlfriend stand by his tombstone. Girlfriend says to top hat man something like, "that study turned out to be bogus" or "he was in the control group"


Science isn't a study; it's the way to make a study. That's like saying the coin toss is bullshit because you didn't win it.


Given all the talk about "Open access" and reforming peer review, one of the most important things to fix should be publication bias. Positive and negative results both need to be published, along with all the tools and data.

Does anyone know if there are groups aiming for that standard?


I suppose it is naive to believe that scientific papers should not be published until they are peer reviewed and, more importantly, their results duplicated.

If this is impractical, there should be separate sections in journals for non-replicated and replicated results, or perhaps there need to be separate tiers, with something like a "Gold Tier" containing only papers that have had their results replicated multiple times in multiple countries.

Probably wishful thinking, but it seems clear, something needs to change.


Probably not a great idea to learn about science in a magazine called The Economist.

(Yes, I am familiar with this famous and well-liked magazine. Its reputation does not rest on its science reporting.)


Their starting example of priming reminds me of the post/video "Fixing Toxic Online Behavior in League of Legends" [1], where the makers of LoL discuss (among other things) testing how well priming was reducing toxic behavior.

They had absurd results like changing the color of intro text giving (IIRC) ~30% reductions, and were (IMO) obviously accidentally cherry picking.

1: https://news.ycombinator.com/item?id=5616541


How about a karma/bittorrent system for researchers to encourage replication? In such a system, the "application fee" for submitting your experiment to a journal would be that you must also replicate someone else's experiment that was also submitted to the journal. This system could scale arbitrarily.
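A toy sketch of how that pairing could work (all names and types here are hypothetical, not any journal's actual system): submitting a paper enqueues it, and in exchange the author is handed the oldest pending submission by someone else to replicate.

    # Toy "replicate-to-submit" queue (all names hypothetical).
    from collections import deque
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Submission:
        author: str
        title: str
        replications: list = field(default_factory=list)  # who has replicated it

    class ReplicationQueue:
        def __init__(self):
            self.pending = deque()  # submissions still awaiting a replication attempt

        def submit(self, paper: Submission) -> Optional[Submission]:
            """Accept a new paper; in exchange, assign its author a replication task."""
            task = None
            # Hand out the oldest pending paper not written by the same author.
            for candidate in list(self.pending):
                if candidate.author != paper.author:
                    task = candidate
                    self.pending.remove(candidate)
                    break
            self.pending.append(paper)
            return task  # None means nothing suitable to replicate yet

        def report_replication(self, task: Submission, replicator: str):
            task.replications.append(replicator)

    q = ReplicationQueue()
    q.submit(Submission("alice", "Priming and exam scores"))
    task = q.submit(Submission("bob", "Stereotype threat replication"))
    if task:
        q.report_replication(task, "bob")  # bob owes a replication of alice's study

One obvious wrinkle (raised in the reply below) is matching by field and by how long a replication takes, which this sketch ignores.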


This would not be a realistic solution, because experiments can take an extremely long time to run. It would make everyone (at best) half as productive as before, since they would have to spend the time necessary to carry out the replication. This would only be worsened if what you were required to replicate was not in an area similar to your own expertise.


20 m€ on rigor:

Institute the "ten {wo,}man rule"

http://impruvism.com/world-war-z/

And more readily, easily reproducible experiments.

And regular joke / high-quality fake papers.


Like democracy, science is the worst way to discover truth… apart from all the others.


This would only be interesting if it were true that all truths are scientific ones. Of course, this is an absurd proposition. Logical abstraction is not per se science, but rather an art in itself. Cue irony... another counter-example. Etc.


Status quo apologetics. Ewww.




