My input to the ASC publication committee

The ASC Publication Committee asked for input on policy and process for publication complaints. The background is of course the now retracted papers in Criminology for reasons detailed by Justin Pickett, and the statements published by the ASC, as well as the video from the forum on scientific integrity. I have previously commented upon it here and here.

I submitted the following to the ASC Publication Committee:

Dear ASC publications committee,
First of all, I am glad to see the ASC taking steps to improve its procedures, and I appreciate you giving everyone the opportunity to provide input.
 
One important issue in the recent debates is access to data and the reproducibility of results. Re-analysing the original data is clearly crucial when there are allegations of research misconduct. At a more general level, when such re-analysis proves difficult, it also becomes clear that the data used in the publications were not sufficiently well documented in the first place. I think this needs to improve.
 
There are now strong moves towards what is often referred to as “open science”. Obviously, if data were made publicly available in the first place, it would be much easier for others to check the results. However, while making data openly available to all is in many respects desirable, it is very often not possible with the kinds of sensitive data criminologists typically use. But much of the ethos of “open science” reflects general principles of science, and some minimum measures should be taken even without committing to any specific “open” framework. At the core is the documentation of research procedures, including data collection and data management. The focus should be on such documentation, and I would like to see some minimum standards of reporting implemented for all studies.
 
Others have probably provided more thorough suggestions, but I think the following could be a good starting point. My suggestions are simple and should not require much additional effort by anyone (neither authors nor editors). I suggest that all published quantitative studies should include the following information:
a) Regardless of data source, there should be a note detailing how others can get access to the same data. If special permissions need to be obtained, information on where to apply must be provided, as well as the main conditions for access. If data cannot be made available to others, the reason must be stated. If data will be made available to others at some later point in time, information on when and how should be included.
b) When and by whom the data were collected. If a survey company has been hired, there should be some reference to the contract or other documentation.
c) If data have been obtained from an existing study (e.g. Add Health or NLSY), there should be a reference to when and how the data were handed over, including specifications of sub-samples (when relevant). Thus, others should be able to get access to the exact same data.
d) If data have been obtained from administrative records, there should be references to who handed over the data, including dates, permissions, etc.
e) Most studies require ethics approvals. References to such approvals should always be provided.
f) Reproducible code should be made available for all studies regardless of data availability. This code should at least cover the estimation procedures, but preferably the entire workflow from raw data to end results. Whether the code is stored as supplementary files at the journal or in some repository is of no importance, as long as the location is specified.
 
These suggestions are primarily relevant for quantitative studies, but some would apply to qualitative studies as well. Similar guidelines appropriate for qualitative studies should also be created.
 
Please recognize that I expect all researchers to be able to provide this information with minimal effort. It is simply a matter of providing basic documentation. Indeed, if researchers cannot do so, then journals such as Criminology should not publish the article at all, simply because the study is not well documented. I consider this a minimum requirement.
 
I would also like to see journals offer conditional acceptance of articles based on pre-registration, but that would require a bit more work on the principles. I consider pre-registration, too, a kind of documentation of ideas and plans. I do not think it should be mandatory, only encouraged.
 
I think Criminology would benefit from this in at least three major ways: 1) increasing the quality of the work published, simply by making the studies more reproducible and better documented; 2) increasing the status of the journal, gaining an international reputation for being at the forefront of this development; 3) increasing trust in the results published in the journal.

I should probably have added that if there are complaints regarding errors in the above documentation (which cannot be fixed within a couple of weeks or so), retraction should be considered based on that alone.

I could have referred to e.g. the Open Science Framework (which is great), and others have probably written more thoroughly on such issues. But I think such documentation is so basic that it is embarrassing it is not already a standard requirement.
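To show how little point (f) of my suggestions would actually demand, here is a minimal, hypothetical sketch of a workflow script that goes from raw records to a reported estimate in one runnable file. All names and data are made up for illustration; this is not any study's actual code.

```python
# Hypothetical sketch: a single script covering the workflow from raw
# data to the reported estimate. Toy data; not from any actual study.

RAW = [  # stand-in for reading the raw data file
    {"id": "a", "y": 2.0},
    {"id": "b", "y": 4.0},
    {"id": "c", "y": None},  # missing outcome
    {"id": "d", "y": 6.0},
]

def clean(records):
    """Data management step: drop records with missing outcomes."""
    return [r for r in records if r["y"] is not None]

def estimate(records):
    """Estimation step: here simply the mean of y."""
    ys = [r["y"] for r in records]
    return sum(ys) / len(ys)

if __name__ == "__main__":
    cleaned = clean(RAW)
    print(len(cleaned), estimate(cleaned))  # 3 4.0
```

Anyone handed the raw data and this one file could rerun every step and check the reported number, which is the whole point of the documentation requirement.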

The former flagship journal Criminology

I’m so incredibly disappointed in the journal Criminology. It is meant to be the flagship journal of our field, but it is clearly not up to the task these days.

The journal is published by Wiley, so let’s start by reviewing the publishing house’s general policy on retractions here: https://authorservices.wiley.com/ethics-guidelines/retractions-and-expressions-of-concern.html. Just take a look at the first point:

“Wiley is committed to playing its part in maintaining the integrity of the scholarly record, therefore on occasion, it is necessary to retract articles. Articles may be retracted if:

– There is major scientific error which would invalidate the conclusions of the article, for example where there is clear evidence that findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error).”

What I know about the story has been public for a while. In July, Justin Pickett posted this paper on SocArXiv here: https://osf.io/preprints/socarxiv/9b2k3/ , explaining that an earlier paper has fundamental errors. Strikingly, the survey had about 500 respondents, but the article reports n = 1,184. While I can understand that errors can lead to duplicates, I do not understand how it can happen without anyone noticing. Pickett details numerous other errors and asks for the article to be retracted. That seems like a perfectly reasonable request, and I fail to see how it could be declined. But it has been.
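A simple duplicate check on respondent identifiers would catch this kind of inflated sample size immediately. A hypothetical sketch (the IDs and counts are made up, not from the actual data):

```python
# Hypothetical illustration: counting duplicate respondent IDs to see
# how raw row counts can overstate the number of unique respondents.
from collections import Counter

def dedupe_report(ids):
    """Return (raw_n, unique_n, ids_appearing_more_than_once)."""
    counts = Counter(ids)
    dupes = {i: c for i, c in counts.items() if c > 1}
    return len(ids), len(counts), dupes

# Toy data: 8 raw rows, but only 5 unique respondents.
raw_ids = ["r01", "r02", "r03", "r02", "r04", "r03", "r03", "r05"]
raw_n, unique_n, dupes = dedupe_report(raw_ids)
print(raw_n, unique_n, dupes)  # 8 5 {'r02': 2, 'r03': 3}
```

A check this trivial, run before reporting n, is exactly the kind of basic data management that proper documentation and reproducible code would make routine.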

A story in The Chronicle Review (behind a paywall, but also available here) reveals astonishing statements from the chief editor, David McDowall, who even says he has not read Pickett’s letter thoroughly. Any editor receiving such a letter should be highly alarmed and should consider all details very carefully. Apparently, the editorial team does little or nothing. Or at least: it fails to communicate that it is doing anything.

I find the following quote particularly disturbing:

First, McDowall seems to think a correction of errors has the goal of ruining other people’s careers. I have to say that Pickett’s letter seems to me sober and to the point. Pickett gave his co-author a more than fair chance to make the corrections himself before publishing his note. It seems like a last resort, not a blood sport at all. If the authors had just admitted the errors and agreed to retract, it would have been a regrettable mistake; now it is a scandal.

Second, a flagship journal should never publish “complete gibberish”! That some (or even many) articles turn out to be wrong, fail to replicate, or contain errors is not that surprising (although not desirable, of course), but “complete gibberish” should not occur. If it nevertheless happens, those articles should be retracted.

The unwillingness of the journal’s chief editor to take this matter seriously reveals a serious lack of concern with the truth. That should be unacceptable to Wiley as well as the American Society of Criminology.

I am just so very, very disappointed.

P.S. I do not have any solutions to the systemic problems here, but improvements should be easy. Criminology as a field has to improve in terms of making data available with full documentation and reproducible code. That would make errors detectable sooner.

No market mechanisms accounted for in Plan S

The so-called cOAlition-S, consisting of 13 research funding organizations and three charitable foundations, recently launched its Plan S, which has the aim of creating a major shift in publishing practices. (The “S” stands for “science, speed, shock, solution”.) It was launched on 4th September this year, and the implementation plan was published 27th November. It has received a fair amount of criticism, and more importantly: a great deal of uncertainty and concern. (See our report here, and others here and here. For those reading Norwegian, some debate is collected here.) While it is easy to agree on the ideal aims of the plan, it is harder to judge its realism, its unintended consequences, and how the publishing industry will adapt. Here in Norway, about 1,000 scientists have signed a letter demanding that the Norwegian research council make a range of clarifications and produce a thorough report on the plan’s consequences. So far, the Norwegian research council has refused to do so. Neither is any report available on the homepage of cOAlition-S. In short: the consequences have not been clarified, and certainly not in the open. I find this latter point quite ironic.

While making research results openly available to the public and policy makers is obviously necessary, not all means are, as is generally recognized, justified by a good cause. Plan S is a specific plan demanding that all funded research be published in Gold Open Access outlets. Here is the reasoning from cOAlition-S:

Universality is a fundamental principle of science (the term “science” as used here includes the humanities): only results that can be discussed, challenged, and, where appropriate, tested and reproduced by others qualify as scientific. Science, as an institution of organised criticism, can therefore only function properly if research results are made openly available to the community so that they can be submitted to the test and scrutiny of other researchers. Furthermore, new research builds on established results from previous research. The chain, whereby new scientific discoveries are built on previously established results, can only work optimally if all research results are made openly available to the scientific community.

Publication paywalls are withholding a substantial amount of research results from a large fraction of the scientific community and from society as a whole. This constitutes an absolute anomaly, which hinders the scientific enterprise in its very foundations and hampers its uptake by society. Monetising the access to new and existing research results is profoundly at odds with the ethos of science (Merton, 1973). There is no longer any justification for this state of affairs to prevail and the subscription-based model of scientific publishing, including its so-called ‘hybrid’ variants, should therefore be terminated. In the 21st century, science publishers should provide a service to help researchers disseminate their results. They may be paid fair value for the services they are providing, but no science should be locked behind paywalls!

I believe this sentence is at the heart of the plan: “Monetising the access to new and existing research results is profoundly at odds with the ethos of science”. It is easy to agree in principle. However, I do not see why monetising publishing is much better. Keep in mind that several big OA publishing houses are indeed commercial. Their source of income is article processing charges (APCs) instead of subscriptions. Thus, someone pays for publishing, one way or another. Plan S is portrayed as an ambitious plan to change the publishing models in science more broadly. There will be an end to “paying for research twice”, as they say. I am less convinced a new publishing model will necessarily change the amount paid, though. But it will change who pays. Importantly, the plan does nothing about the for-profit nature of publishing as such. cOAlition-S says it will produce a report on article processing charges, but that is yet to come. They do say, though, that they believe prices will go down because of competition. Some market mechanisms are assumed to be involved, then.

Let’s assume Plan S works perfectly as intended: The whole publishing industry will be transformed and move to Gold Open Access. Subscription-based journals will perish. What is a likely scenario in this case?

Here are some basic conditions: 1) all costs and profits need to be covered by article processing charges; 2) from the publishing houses’ perspective, the customers are no longer the libraries but the individual researchers; 3) the research councils put a ceiling on how much article processing charge they will fund; 4) existing top journals will switch to Gold OA or new top journals will emerge, so some kind of publishing hierarchy will remain.

Then there is a basic question of how markets work: if demand goes up, so does the price, right? The very few generally recognized high-quality Gold OA journals will be able to charge more in article processing fees precisely because they are considered high quality. There is prestige in publishing in the best outlets, and such publications tend to have greater impact on the research community. That means both research institutions and individual researchers will be willing to pay to get published there even if the research councils do not support it. Moreover, as long as such publications help you land a job (or tenure), researchers might even be willing to pay from their own pockets, as it might pay off in the longer run. For this reason, it is reasonable to expect that APCs for high-ranking journals might become very high while low-ranking journals might even be cheap. These are ordinary market mechanisms. However, Plan S states that compliance will be monitored and sanctioned, so that will restrict the prices, I suppose. Which leads us to the next point.

The supply side will probably also increase. Small profits on each single article might be compensated by publishing more. Actually, high profits on each article would also motivate publishing more. Importantly, the founders of Plan S state that in the age of the internet, publishing costs are low and there is no need to charge much. This also means that scaling up is cheap. All journals could just publish a lot more. Why reject papers unless they’re absolutely crap? Any mediocre paper might be published for some extra dollars. Why not? Indeed, some journals do so already.

I am concerned it will create an environment where unneeded journals will thrive: journals that might not be quite predatory, although not necessarily far from it. The direct economic incentive will be to publish each paper, and too few papers will be rejected. There is already too much junk out there; the last thing we need is to lower the bar for getting published. I do not see how Plan S will handle any of these concerns. There is no credible plan for market regulation. I do not mind some regulation, and I think it is generally important, but market regulation is not that easy! It is way too optimistic to think the plan will have the desired consequences when no specific concerns have been detailed. The cOAlition-S homepage is currently strikingly void of information beyond the ten principles and the implementation plan, and both are pretty vague and in general terms. I am not sure it is a plan at all, but rather some high-sounding language made into regulations. A plan that sets out to change the economic model for scientific publishing should pay close attention to the market mechanisms with which it interferes.

From what I have seen so far, cOAlition-S has not done any analysis of how the plan might work in that regard. At least, such analyses have not been published anywhere close to Gold Open Access, if published at all. An open debate and explicit considerations should be made in writing, open for anyone to see. I do not at all understand why cOAlition-S decides not to be open!

I would also like to point out that how compliance will be monitored and sanctioned is very opaque. What would it take to curb the market? It might require rather rough measures, and they might be directed at the individual researcher. It remains open how far cOAlition-S will be willing to go to ensure compliance.

I believe there are lots of problems with our publishing system. I could go on a bit about that. However, I fail to see Plan S solving any of my concerns with the current system, and I am worried it will even worsen some of the problems.
