Bad Science

CasualBystander

Celestial
There are a number of members who don't know how bad science is done; this is indicated by a deluded belief in things like global warming.

70-89% of "science" studies are bad science.

This thread is a tribute to those scientists who take perfectly good data, and warp it into unrecognizable shapes.

We will start with the food edition:
Sliced And Diced: The Inside Story Of How An Ivy League Food Scientist Turned Shoddy Data Into Viral Studies
Siğirci analyzed the data over and over until she began “discovering solutions that held up,” he wrote. Her tenacity ultimately turned the buffet experiment into four published studies about pizza eating, all cowritten with Wansink and widely covered in the press.

But that’s not how science is supposed to work.
 

nivek

As Above So Below
Science should be more objective within the subjective topic being researched and studied...
 

nivek

As Above So Below
Sliced And Diced: The Inside Story Of How An Ivy League Food Scientist Turned Shoddy Data Into Viral Studies
Siğirci analyzed the data over and over until she began “discovering solutions that held up,” he wrote. Her tenacity ultimately turned the buffet experiment into four published studies about pizza eating, all cowritten with Wansink and widely covered in the press.

But that’s not how science is supposed to work.

Now because of his actions he's had five papers retracted and 14 corrected; his career is in jeopardy and he is being investigated...
 

CasualBystander

Celestial
Nature had to change their article standards because 70% of their studies weren't reproducible.

Amgen could reproduce only 6 of 53 landmark, heavily cited studies that it was using as a basis for its research.

Since Nature is a premier publication, that ~30% reproducibility figure is probably generous; the average paper likely fares much worse.

Basically every study of reproducibility ends up in the 10-30% range for p < 0.05 results. If those results reflected real effects studied at decent power, the large majority should replicate, and given that some scientists are testing the obvious or using conservative standards, the rate should be higher still.

But it isn't.
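A quick way to see why replication rates can land so far below 95% is a toy simulation. All numbers below are invented purely for illustration: assume only 10% of tested hypotheses are real effects, real effects are studied at roughly 80% power, and only p < 0.05 results get published.

```python
import random

random.seed(42)

# Hypothetical numbers, for illustration only: 10% of tested hypotheses
# are real effects, and a real effect shifts the test statistic by ~2.8,
# which gives roughly 80% power at the 1.96 (p < 0.05) cutoff.
N_HYPOTHESES = 5000
FRACTION_TRUE = 0.10
EFFECT_Z = 2.8
CUTOFF = 1.96

published, replicated = 0, 0
for _ in range(N_HYPOTHESES):
    mu = EFFECT_Z if random.random() < FRACTION_TRUE else 0.0
    # Original study: only "significant" results get published.
    if abs(random.gauss(mu, 1)) > CUTOFF:
        published += 1
        # Independent replication of the same experiment.
        if abs(random.gauss(mu, 1)) > CUTOFF:
            replicated += 1

rate = replicated / published
print(f"published: {published}, replication rate: {rate:.0%}")
```

Under these made-up assumptions only about half of the published findings replicate, with no fraud involved at all: the published pool gets diluted by false positives, each of which had only a 5% chance of appearing but got thousands of chances across the null hypotheses.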

The problem, as someone who analyzed a couple thousand science papers noted, is that 90% of "science" studies are some form of advocacy, and only 1% actually follow the scientific method.

Peer review has turned into rubber-stamping or gatekeeping, depending on which side of a policy debate a paper supports. This isn't helpful.

And I am too honest to be a scientist. I'm an engineer.
 

Dundee

Fading day by day.
Nature had to change their article standards because 70% of their studies weren't reproducible.

Amgen could reproduce only 6 of 53 landmark, heavily cited studies that it was using as a basis for its research.

Since Nature is a premier publication, that ~30% reproducibility figure is probably generous; the average paper likely fares much worse.

Basically every study of reproducibility ends up in the 10-30% range for p < 0.05 results. If those results reflected real effects studied at decent power, the large majority should replicate, and given that some scientists are testing the obvious or using conservative standards, the rate should be higher still.

But it isn't.

The problem, as someone who analyzed a couple thousand science papers noted, is that 90% of "science" studies are some form of advocacy, and only 1% actually follow the scientific method.

Peer review has turned into rubber-stamping or gatekeeping, depending on which side of a policy debate a paper supports. This isn't helpful.

And I am too honest to be a scientist. I'm an engineer.
I will bow out of that particular topic as I know next to nothing about it. So the floor is yours CB. See you when you deny climate change, if I am permitted an opinion that is.
 

CasualBystander

Celestial
I will bow out of that particular topic as I know next to nothing about it. So the floor is yours CB. See you when you deny climate change, if I am permitted an opinion that is.
Well...

I believe CO2 produces a small beneficial warming.

But I am not a member of the "Cult of Anthropogenic Global Warming" (CAGW), otherwise known as the climate cult or warmunists, as you apparently are.

And it should really be COAGW not CAGW.

I have repeatedly asked for proof that the harm is greater than the benefit of 60+% increased plant growth.

I have asked for proof that there is any proven downside.

Crickets chirping is all I get as a response.

You still believe the Antarctic is melting, even though NASA has been forced to admit it is gaining 83 Gt of ice a year. You don't understand Munk's enigma, nor have you tried simulating changes to the earth's moment of inertia, so you don't realize that claims of net Antarctic melting are refuted by physics and are simply wrong.
 

Castle-Yankee54

Celestial
I will bow out of that particular topic as I know next to nothing about it. So the floor is yours CB. See you when you deny climate change, if I am permitted an opinion that is.

Of course you are... I value your opinion... the earth is in a short-term warming trend.
 

Castle-Yankee54

Celestial
I will bow out of that particular topic as I know next to nothing about it. So the floor is yours CB. See you when you deny climate change, if I am permitted an opinion that is.

Actually there has seldom been a time when the climate hasn't been changing one way or the other.
 

CasualBystander

Celestial
https://www.theguardian.com/commentisfree/2011/sep/23/bad-science-ben-goldacre

These news stories were based on a scientific paper by Sigman in The Biologist. It misrepresents individual studies, as Professor Dorothy Bishop demonstrated almost immediately, and it cherry-picks the scientific literature, selectively referencing only the studies that support Sigman's view. Normally this charge of cherry-picking would take a column of effort to prove, but this time Sigman himself admits it, frankly, in a PDF posted on his own website.

Let me explain why this behaviour is a problem. Nobody reading The Biologist, or its press release, could possibly have known that the evidence presented was deliberately incomplete. That is, in my opinion, an act of deceit by the journal: but it also illustrates one of the most important principles in science, and one of the most bafflingly recent to emerge.

Here is the paradox. In science, we design every individual experiment as cleanly as possible. In a trial comparing two pills, for example, we make sure that participants don't know which pill they're getting, so that their expectations don't change the symptoms they report. We design experiments carefully like this to exclude bias: to isolate individual factors, and ensure that the findings we get really do reflect the thing we're trying to measure.

But individual experiments are not the end of the story. There is a second, crucial process in science, which is synthesising that evidence together to create a coherent picture.

The lead cherry picker of our time is Michael Mann.
 

CasualBystander

Celestial
The statistical error that just keeps on coming | Ben Goldacre

How often? Nieuwenhuis looked at 513 papers published in five prestigious neuroscience journals over two years. In half the 157 studies where this error could have been made, it was. They broadened their search to 120 cellular and molecular articles in Nature Neuroscience, during 2009 and 2010: they found 25 studies committing this fallacy, and not one single paper analysed differences in effect sizes correctly.
...
But the darkest thought of all is this: analysing a "difference in differences" properly is much less likely to give you a statistically significant result, and so it's much less likely to produce the kind of positive finding you need to look good on your CV, get claps at conferences, and feel good in your belly. Seriously: I hope this is all just incompetence.


If the change as a result of an experiment, say a 15% increase in one group, isn't significant, then a 15-point difference between that group and a second group (15% vs 30%) isn't significant either.

The experiment caused a statistically significant effect in the second group.

But there is no statistically significant difference between the first and second groups, and that difference is what has to be tested directly.
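The fallacy is easy to demonstrate with invented numbers. In the sketch below (plain pooled t statistics computed by hand, critical values from a standard t table), group A's change is significant on its own and group B's is not, yet the direct test of the A-versus-B difference is nowhere near significant:

```python
from math import sqrt
from statistics import mean, stdev

# Made-up per-subject change scores for two groups of 10 (illustrative only).
group_a = [2, 3, 4, 3, 2, 4, 3, 3, 2, 4]          # small but consistent ~+3
group_b = [-5, 8, -3, 7, -6, 9, -4, 6, -7, 10]    # noisy, mean ~+1.5

def one_sample_t(xs):
    """t statistic for H0: mean change = 0."""
    return mean(xs) / (stdev(xs) / sqrt(len(xs)))

def two_sample_t(xs, ys):
    """Pooled two-sample t statistic for H0: equal means."""
    n1, n2 = len(xs), len(ys)
    sp2 = ((n1 - 1) * stdev(xs) ** 2 + (n2 - 1) * stdev(ys) ** 2) / (n1 + n2 - 2)
    return (mean(xs) - mean(ys)) / sqrt(sp2 * (1 / n1 + 1 / n2))

t_a = one_sample_t(group_a)            # ~11.6 -> significant (t crit, df=9: 2.262)
t_b = one_sample_t(group_b)            # ~0.68 -> not significant
t_ab = two_sample_t(group_a, group_b)  # ~0.67 -> difference NOT significant (df=18: 2.101)

print(t_a, t_b, t_ab)
```

Reporting "significant in A but not in B" as if it proved the groups differ is exactly the error Nieuwenhuis found; the only legitimate claim comes from the third test.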
 

CasualBystander

Celestial
Good article on science. Supposed to be a defense of science, but it covers a lot of problems.

Science Isn’t Broken

[Image: truth-vigilantes-retractions-mobile.png]
 

nivek

As Above So Below
Good article on science. Supposed to be a defense of science, but it covers a lot of problems.

Science Isn’t Broken

[Image: truth-vigilantes-retractions-mobile.png]

Agreed, this shows something twofold: the scientific community is catching many bogus research papers, but it also shows there are many lame-duck alleged scientists trying to publish bogus papers, which I think is intentional for whatever reason...
 

CasualBystander

Celestial
Agreed, this shows something twofold: the scientific community is catching many bogus research papers, but it also shows there are many lame-duck alleged scientists trying to publish bogus papers, which I think is intentional for whatever reason...

It is like this (even if we ignore the careerism and gatekeeping that block papers that buck mainstream views): if you don't show a significant result, you can't publish.

So when you get a result around p = 0.07 (which is not significant), there is a strong urge to twiddle with the data so you can publish.

After all, you want to meet your publishing requirement and keep your university job. And you are sure your result is actually significant, aren't you?

One suggested solution is to go to p < 0.01 or less so that twiddling with data doesn't produce papers.

This will make studies more expensive due to larger sample sizes, etc. But meaningful results will actually be meaningful.
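A small simulation (invented numbers, for illustration) shows why the stricter threshold helps. If a researcher can try 20 equally "reasonable" analyses on data that contains no real effect at all, the chance that at least one of them clears p < 0.05 is about 1 - 0.95^20 ≈ 64%; at p < 0.01 the same fishing expedition only pays off about 18% of the time:

```python
import math
import random

random.seed(0)

def p_value(sample):
    """Two-sided p for H0: mean 0, known sd 1 (simple z test)."""
    z = abs(sum(sample)) / math.sqrt(len(sample))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

TRIALS, ANALYSES, N = 2000, 20, 30
hits_05 = hits_01 = 0
for _ in range(TRIALS):
    # 20 different "reasonable" ways to slice data with NO real effect in it.
    ps = [p_value([random.gauss(0, 1) for _ in range(N)]) for _ in range(ANALYSES)]
    if min(ps) < 0.05:
        hits_05 += 1
    if min(ps) < 0.01:
        hits_01 += 1

frac_05 = hits_05 / TRIALS   # expect ~1 - 0.95**20, about 0.64
frac_01 = hits_01 / TRIALS   # expect ~1 - 0.99**20, about 0.18
print(frac_05, frac_01)
```

The stricter cutoff doesn't eliminate data twiddling, but it makes the fishing expedition far less likely to land a publishable "result" by luck alone.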
 

CasualBystander

Celestial
From the previous link (that the retraction table came from):
As you manipulated all those variables in the p-hacking exercise above, you shaped your result by exploiting what psychologists Uri Simonsohn, Joseph Simmons and Leif Nelson call “researcher degrees of freedom,” the decisions scientists make as they conduct a study. These choices include things like which observations to record, which ones to compare, which factors to control for, or, in your case, whether to measure the economy using employment or inflation numbers (or both). Researchers often make these calls as they go, and often there’s no obviously correct way to proceed, which makes it tempting to try different things until you get the result you’re looking for.

Seems pretty clear that outside pressure can influence results.

I was talking to a developer who did number crunching for studies; he moved to a policy consulting position because he got disgusted.

Studies that he was involved with that got the "wrong" result were either buried, or the authors were asked to "rework" the data.
 