President Larry Millstein called the 2342nd meeting of the Society to order on February 20, 2015 at 8:01 pm. He announced the order of business and introduced new members of the Society. The minutes of the previous meeting were read and approved. President Millstein discussed the 17th meeting of the Society on December 16, 1871. He then introduced the evening's speaker Marcia McNutt, Editor-in-Chief of Science publications. Her lecture was entitled "Validating Research: Steps for Restoring Trust in Experimental Results."
Dr. McNutt began her lecture by pointing out that the scientific enterprise is changing. It has become globalized and more competitive, and funding is harder to secure. At the same time it is more collaborative, as the average number of authors on papers has doubled in the last 20 years. There is increasing pressure on researchers to publish more often and to publish more important results. She asked how researchers will be able to resist cutting corners and inflating the importance of their research, and how they will ensure the quality of their collaborators' work. As the number of annual publications has grown exponentially for decades, it has strained the peer review system and made it increasingly difficult to ensure quality. Publications today must deal with results that cannot be reproduced, conflicts of interest, retractions, and shaken public trust. She presented two examples that illustrate these problems.
The first example was Andrew Wakefield. In 1998 he co-authored a paper published in the Lancet purporting to show that subjects administered the MMR vaccine were more likely to develop autism. There was considerable skepticism about the results before publication, and a critique was published alongside the article. Despite this cautionary note, Wakefield's institution widely publicized the results, and Wakefield presented them as "a moral issue." His co-authors' concerns were ignored. Further extensive studies repudiated Wakefield's results, and it was discovered that Wakefield had been funded by an attorney specifically to find a link between the vaccine and autism. The paper was eventually retracted, Wakefield was discredited, and his medical license was revoked. But the damage was done.
What went right in this example, Dr. McNutt said, was peer review, journalists uncovering Wakefield's conflict of interest, and Wakefield being sanctioned and losing his license. But what went wrong was that the Lancet lacked clear conflict disclosure rules, it took far too long to retract the paper, Wakefield's institution sensationalized the results for its own gain, and political leaders failed to inform the public convincingly of the crucial importance of the vaccine to public health.
The second example was Haruko Obokata and her publication on culture conditions that allegedly turned differentiated cells into stem cells. What went right in this example was that the scientific community quickly discovered the problems in her work, her institution investigated and found that she had falsified her results, she was punished, and Nature retracted the paper. What went wrong, Dr. McNutt said, was that peer reviewers failed to question the researcher's extraordinary claims, and institutions failed to investigate them.
So, Dr. McNutt asked, how do we ensure that science publications are reliable, and what does that mean? "Repeatability" means that others who analyze the same data by the same methodology obtain the same results. Repeatability does not address all problems, however, and the gold standard is "replication," which means that others independently carry out the same study de novo and obtain the same results. Replication, however, is not possible in all fields.
If fraud is discovered, there is general agreement on the appropriate course of action: retraction, sanctions, and discrediting. But what is the best course of action when a publication cannot be replicated even though no fraud occurred? This can happen when crucial information is omitted, when there are unrecognized variables, or because of genuine mistakes. Dr. McNutt recounted several examples of this and noted that it is becoming a more significant problem as publication volume grows.
Dr. McNutt also noted that the statistical analyses used in many science publications will inevitably produce some false positives and false negatives. False negatives are less of a problem because negative results are rarely published. Positive results are more problematic, as journals such as Science seek interesting and exciting results. Fortunately, there is growing awareness of the need to bring negative outcomes to light and to decrease the likelihood that false positives are published.
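The point about statistical false positives can be made concrete with a short simulation (an illustrative sketch, not part of the lecture): even when there is no real effect at all, a conventional p < 0.05 significance threshold will flag roughly 5% of experiments as "positive" results by chance alone.

```python
import random

# Illustrative sketch: simulate many "null" experiments, where both groups
# are drawn from the same distribution, and count how often a simple
# significance test at the ~0.05 level declares a spurious "effect".

random.seed(42)

def one_null_experiment(n=50):
    """Two groups from the same N(0, 1) distribution: any 'effect' is noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5          # standard error of the mean difference
    return abs(mean_diff / se) > 1.96  # "significant" at roughly p < 0.05

trials = 10_000
false_positives = sum(one_null_experiment() for _ in range(trials))
print(f"False positive rate: {false_positives / trials:.3f}")  # close to 0.05
```

Because journals preferentially publish positive findings, these chance hits are exactly the results most likely to reach print, which is why the false-positive side of the ledger is the more troubling one.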
Clearly, repeating experiments, or at least the data analysis, will uncover problems. However, funding for this work is scarce, and Science has been collaborating with other organizations to find solutions. She also suggested that, for honest errors, a less stigmatizing label than "retraction," such as "withdrawn by author," is needed.
In conclusion, Dr. McNutt said that we need incentives to encourage scientists to verify other scientists' published work, to encourage technical comments on publications, and to reward scientists who consistently produce high-quality, reproducible results. Pertinent data must be more comprehensively available, and there has to be transparency about potential sources of bias. Alleged cases of fraud need to be investigated rapidly and appropriate actions taken without delay. Scientific journals can collaborate with other parties to ensure the quality of science publication, much as well-choreographed dancing partners must work in concert to get good results.
After the question and answer period, President Millstein thanked the speaker, made the usual housekeeping announcements, and invited guests to join the Society. At 9:32 p.m., President Millstein adjourned the 2342nd meeting of the Society to the social hour.
External Communications Director & Recording Secretary