Computer Chess Club Archives



Subject: Re: More on the "bad math" after an important email...

Author: Roger D Davis

Date: 17:20:48 09/03/02




>I don't know if you faked the results to look better or not. Maybe I don't
>want to know. But whatever may come of it, there is little scientific ground
>for them to stand on, IMHO.
>
>--
>GCP

Wow, this comment is in exceptionally bad taste. You don't question the
scientific integrity of a researcher lightly, particularly in a public forum.

I can tell you my own experience. My dissertation was on methods of creating
short forms of personality tests. The test I chose to work with was one on
which we had a large amount of archival data, about 1000 subjects.

The whole point of my dissertation was to show that short forms of psychological
instruments are psychometrically unstable, and that while the scores on a
10-item abbreviated version of a 30-item scale might indeed correlate highly
with the full scale in group data, at the level of the individual the
variability of scores was such that a great many people who were classified as
having a personality disorder on the longer scale were not thus classified by
the shorter scale, and vice versa. In other words, the short forms were
unusable in a clinical situation.
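
To make that concrete, here is a rough simulation sketch of the effect. This is
not my actual analysis: it assumes Python with NumPy, and the item counts,
noise level, and diagnostic cutoff are made-up illustrative values.

  # Illustrative sketch only: a latent trait plus item noise, with a
  # 10-item short form drawn from a 30-item scale. All parameters here
  # are assumptions, not values from the original study.
  import numpy as np

  rng = np.random.default_rng(0)
  n_subjects = 1000  # roughly the size of our archival sample

  # Each subject has a latent trait; each item is trait plus noise.
  trait = rng.normal(size=n_subjects)
  items = trait[:, None] + rng.normal(scale=1.5, size=(n_subjects, 30))

  full_score = items.sum(axis=1)           # 30-item scale
  short_score = items[:, :10].sum(axis=1)  # 10-item short form

  # Group-level agreement looks excellent...
  r = np.corrcoef(full_score, short_score)[0, 1]
  print(f"correlation between forms: r = {r:.2f}")

  # ...but individual classification at a cutoff does not. Here the top
  # 15% on each form is classified as "disordered" (an arbitrary cutoff).
  cut_full = np.quantile(full_score, 0.85)
  cut_short = np.quantile(short_score, 0.85)
  dx_full = full_score >= cut_full
  dx_short = short_score >= cut_short

  disagree = (dx_full != dx_short).mean()
  print(f"classified differently by the two forms: {disagree:.1%}")

Under these assumptions the two forms correlate at about r = .94, yet a
noticeable fraction of subjects land on different sides of the cutoff depending
on which form is scored, which is exactly the clinical problem.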

Unfortunately, my committee missed this point of my dissertation. And since I
wasn't particularly interested in arguing with them, and since dissertations
typically just sit on the shelves gathering dust anyway, I let it go.

My experience since then has confirmed what I learned from my committee: the
researcher usually knows his or her research better than anyone else. I have
seen many journal editors insist on including details that have little or no
real relevance to the article, or insist that this or that be altered, even
though, from the perspective of the author, this created some form of
distortion. Typically, the power dynamics at work are such that junior authors
make the changes asked of them by senior professors simply out of respect, and
senior professors make junior authors jump through such hoops to pay their
dues. That's life.

In psychological journals, there are failures of replication all the time, and
the reasons are well known. They typically have to do with two researchers who
use slightly different ways of doing things and end up with divergent results.
Sometimes the operational definitions of their constructs are slightly
different, sometimes their methodologies differ, or any of a thousand other
things.

I'm convinced that faking of results is, thankfully, quite rare in scholarly
journals.

Roger Davis, PhD







