My input to the ASC publication committee

The ASC Publication Committee asked for input on policy and process for publication complaints. The background is of course the now retracted papers in Criminology, for reasons detailed by Justin Pickett, as well as the statements published by the ASC and the video from the forum on scientific integrity. I have previously commented on this here and here.

I submitted the following to the ASC Publication Committee:

Dear ASC publications committee,
First of all, I am glad to see the ASC taking steps to improve its procedures, and I appreciate you giving everyone the opportunity to provide input.
 
One important issue in the recent debates is access to data and the reproducibility of the results. Re-analysing the original data is clearly crucial when there are allegations of research misconduct. At a more general level, such difficulties also make it clear that the data used in the publications were not sufficiently well documented in the first place. I think this needs to improve.
 
There are now strong moves towards what is often referred to as “open science”. Obviously, if data were made publicly available in the first place, it would be much easier for others to check the results. However, while making data openly available to all is in many respects desirable, it is very often not possible with the kinds of sensitive data criminologists typically use. But much of the ethos of “open science” reflects general principles of science, and some minimum measures should be taken even without committing to any specific “open” framework. At the core is the documentation of research procedures, including data collection and data management. The focus should be on such documentation, and I would like to see some minimum standards of reporting implemented for all studies.
 
Others have probably provided more thorough suggestions, but I think the following could be a good starting point. My suggestions are simple and should not require much additional effort from anyone (neither authors nor editors). I suggest that all published quantitative studies should include the following information:
a) Regardless of the data source, there should be a note detailing how others can get access to the same data. If special permission needs to be obtained, information on where to apply must be provided, along with the main conditions for access. If the data cannot be made available to others, the reason for this must be stated. If the data will be made available at some later point in time, information on when and how should be included.
b) When the data were collected and by whom. If a survey company was hired, there should be a reference to the contract or other documentation.
c) If the data were obtained from an existing study (e.g. Add Health or the NLSY), there should be a reference to when and how the data were handed over, including specifications of sub-samples (when relevant). Thus, others should be able to get access to the exact same data.
d) If the data were obtained from administrative records, there should be references to who handed over the data, including dates, permissions, etc.
e) Most studies require ethics approval. References to such approvals should always be provided.
f) Reproducible code should be made available for all studies regardless of data availability. This code should at least cover the estimation procedures, but preferably the entire workflow from raw data to end results. Whether the code is stored as supplementary files at the journal or in a repository does not matter, as long as the location is specified. (A rough sketch of what such a script might look like follows below.)
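To make point f) a bit more concrete, here is a rough sketch (in Python, purely for illustration) of what a minimal, self-documenting analysis script could look like. All file names, variable names and the model are hypothetical; the point is simply that the path from the delivered raw data to the reported estimates is written down and can be rerun by others.

```python
# analysis.py -- hypothetical sketch of a minimal reproducible supplement.
# Every file name, variable and model below is made up for illustration.

import pandas as pd
import statsmodels.formula.api as smf

# Data provenance (points b/c/d above): state in the code itself where the data
# came from, when they were received, and under which permission, e.g.
# "Register extract received from the data provider 2019-01-15, permission
#  no. XX/XXXX; see README for access conditions."
RAW_FILE = "data/raw_extract.csv"  # hypothetical path to the delivered data


def prepare(path: str) -> pd.DataFrame:
    """Document every step from raw data to the analysis sample."""
    df = pd.read_csv(path)
    df = df.drop_duplicates(subset="person_id")  # guard against duplicated cases
    df = df[df["age"].between(18, 65)]           # sample restriction, stated explicitly
    return df


def estimate(df: pd.DataFrame):
    """The estimation procedure reported in the paper (here: a simple logit)."""
    model = smf.logit("offended ~ married + age + C(education)", data=df)
    return model.fit()


if __name__ == "__main__":
    sample = prepare(RAW_FILE)
    results = estimate(sample)
    print(results.summary())  # these numbers should match the published tables
```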
 
These suggestions are primarily relevant for quantitative studies, but some would apply to qualitative studies as well. One should also create similar guidelines appropriate for qualitative studies.
 
Please recognize that I expect all researchers to be able to provide this information with minimal effort. It is simply a matter of providing basic documentation. Indeed, if researchers cannot do so, then journals such as Criminology should not publish the article at all, simply because the study is not well documented. I consider this a minimum requirement.
 
I would also like to see journals offer conditional acceptance of articles based on pre-registration, but that would require a bit more work on the principles. I also consider pre-registration a kind of documentation of ideas and plans. I do not think it should be mandatory, only encouraged.
 
I think Criminology would benefit from this in at least three major ways: 1) increase the quality of the work published, just by making the studies more reproducible and better documented; 2) increase the status of the journal, gaining an international reputation for being at the forefront of this development; and 3) increase trust in the results published in the journal.

I should probably have added that if there are complaints regarding errors in the above documentation (which cannot be fixed within a couple of weeks or so), retraction should be considered based on that alone.

I could have referred to, for example, the Open Science Framework (which is great), and others have probably written more thoroughly on such issues. But I think this kind of documentation is so basic that it is embarrassing it is not already a standard requirement.

Taking the raised concerns seriously – but not regretting other statements?

Yesterday, the American Society of Criminology posted two statements regarding the Chronicle of Higher Education article of September 24. The first is by the ASC executive committee, stating support for how the editorial team is handling the matter and ensuring that the process follows the COPE framework. This is very good. Even though Criminology is not a member of COPE, their guidelines are very sensible and similar to Wiley’s guidelines. COPE has a flow chart that describes the process.

The second statement is from the co-editors of Criminology. It explains how the journal handles cases where concerns are raised about an article. The main approach is a comment-and-reply model, where critics submit their comment to the journal and the original author is offered the opportunity to reply. They also state that this is not appropriate in all instances, and that additional steps may be necessary, including retraction if the evidence is strong. This is all fine, and I agree.

The comment from the co-editors also details the timeline from when they received an anonymous email on May 29, 2019 up to today. They also emphasize that they issued a statement on July 26 notifying readers that an investigation was under way.

This is all good. I expect nothing less.

However, the statements do not really comment directly on the article in the Chronicle, although they concern the same topic. I guess their main message is simply to assure us that they are pursuing the case, as is clear from the following statement:

“Social media attention to Dr. Pickett’s online statement led to what we perceive as a rush to judgment against the authors and the journal, including the mischaracterization that we are not taking the issue seriously and are not committed to resolving it.  Nothing could be further from the truth. We have taken several steps aimed at obtaining a fair and transparent resolution.”

From my point of view, the editorial statement of July 26 was fine, and I trusted the journal to conduct an appropriate investigation, as stated. I did think it was now taking rather a long time, but I have no problem accepting that there might be good reasons for that.

I was alarmed and disappointed only when I read the article in The Chronicle. There were stories, speculations and rumors that are not the responsibility of the journal, whether true or false. Criminology is not to blame for any of that. However, the chief editor, David McDowall, was quoted in the article saying things that gave the impression that Criminology did not carry out an appropriate investigation. I believe it was precisely his statements that made people doubt whether Criminology took the issue seriously. I think there are three main points:

First, the chief editor was quoted questioning Pickett’s personal motives. It seemed like McDowall actively defended Stewart, and tried to make Pickett look bad. Given that the journal’s investigation is not ready, it is highly inappropriate for the chief editor to make such statements.  

Second, the chief editor was quoted as claiming that the journal has published “complete gibberish” before, referring to one specific instance. He even seems to be fine with that, as it appeared to be an argument against retracting the article in question. Let’s just hope he was misquoted.

Third, the chief editor was portrayed as “no fan of the move toward more scrutiny in the social sciences, which he sees as overly aggressive”. That was not a direct quote, but there is a direct quote where he refers to such scrutiny having a “blood-sport aspect to it” (which obviously does not sound positive). Scrutiny should be at the heart of social science, and so should reproducibility and accountability. While I do expect journals to handle such instances in a professional manner (no blood-sport), it is hard to accept that the chief editor is not in favor of such scrutiny.

My point here is that the statement from the co-editors does not address these three concerns arising from the quotes in the Chronicle. It would be good to know whether the chief editor was misquoted or cited out of context. Or maybe he was just sloppy and did not really mean those things, or even regrets that it came out that way. Whatever. Do he and the journal stand by these things or not? I would have hoped that the statement from the co-editors would 1) apologize for prematurely questioning Pickett’s motives in public, and hopefully also state that this was never the intention, 2) affirm that Criminology does not accept publishing “complete gibberish”, and will now also look into the other article mentioned by the chief editor to check whether that was actually the case, and 3) affirm that Criminology supports the move toward increased scrutiny in the social sciences.

In any case, the co-editors have been very clear that they are taking the issue seriously, and the ASC executive committee ensures the process will follow the COPE guidelines. I trust that is happening.

Clearly, there are ways of improving research integrity and accountability without any aspects of blood-sport. Some improvements might even be easy. I might come back to that in a later post.

UPDATE: The chief editor has just sent an email to all ASC members in which he clarifies that some of the words he used were regrettable and do not reflect what he really means, either about editorial policy or about the persons involved. That is good! It goes a long way toward answering my concerns in this blog post.

The former flagship journal Criminology

I’m so incredibly disappointed in the journal Criminology. It is meant to be the flagship journal of our field, but it is clearly not up to the task these days.

The journal is published by Wiley, so let’s start by reviewing the publishing house’s general policy on retractions here: https://authorservices.wiley.com/ethics-guidelines/retractions-and-expressions-of-concern.html. Just take a look at the first point:

“Wiley is committed to playing its part in maintaining the integrity of the scholarly record, therefore on occasion, it is necessary to retract articles. Articles may be retracted if:

– There is major scientific error which would invalidate the conclusions of the article, for example where there is clear evidence that findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error).”

What I know about the story has been public for a while. In July, Justin Pickett posted this paper on SocArXiv here: https://osf.io/preprints/socarxiv/9b2k3/ , explaining that an earlier paper has fundamental errors. Strikingly, the underlying survey had around 500 respondents, yet the article reports n = 1,184. While I can understand that errors can lead to duplicated cases, I do not understand how that could happen without anyone noticing. Pickett details numerous other errors and asks for the article to be retracted. That seems like a perfectly reasonable request, and I fail to see how it could be declined. But it has been.

A story in The Chronicle Review (behind a paywall, but also available here) reveals astonishing statements from the chief editor, David McDowall, who even says he has not read Pickett’s letter thoroughly. Any editor receiving such a letter should be highly alarmed and should consider all the details very carefully. Apparently, the editorial team is doing little or nothing. Or at least: it fails to communicate that it is doing anything.

I find the quotes from the chief editor particularly disturbing, for two main reasons:

First, McDowall seems to think that a correction of errors has the goal of ruining other people’s careers. I have to say that Pickett’s letter seems to me to be sober and to the point. Pickett gave his co-author a more than fair chance to make the corrections himself before publishing his note. It looks like a last resort, not a blood sport at all. If the authors had simply admitted the errors and agreed to retract, it would have been a regrettable mistake; now it is a scandal.

Second, a flagship journal should never publish “complete gibberish”! That some (or even many) articles turn out to be wrong, fail to replicate or contain errors is not that surprising (although not desirable, of course), but “complete gibberish” should not occur. If it nevertheless happens, those articles should be retracted.

The unwillingness of the journal’s chief editor to take this matter seriously reveals a serious lack of concern with the truth. That should be unacceptable to Wiley as well as the American Society of Criminology.

I am just so very, very disappointed.

P.S. I do not have any solutions to the systemic problems here, but some improvements should be easy. Criminology as a field has to improve in terms of making data available with full documentation and reproducible code. That would make errors detectable sooner.

A comment on Laub et al on marriage effects on crime

The just-published Oxford Handbook of Developmental and Life Course Criminology includes a chapter in which John Laub, Zachary Rowan and Robert Sampson give an update on the age-graded theory of informal social control, which has dominated the field of life course criminology for the past couple of decades.

A key proposition of the theory is that life-course transitions can represent turning points in a criminal career, and marriage is the transition that has received the most attention in the empirical literature. A few years ago, I wrote a critical review together with my colleagues, Jukka Savolainen, Kjersti Aase and Torkild Lyngstad. In their book chapter, Laub et al are clearly critical of our review. I am a bit flattered that they bothered criticising us, but I do have a few comments.

First, Laub et al. correctly point out that we are unsatisfied with the existing studies when it comes to estimating causal effects, and they do not really contradict our claim. Nevertheless, they point out that we “do not offer a viable alternative to build on this existing knowledge” (p. 302). That might be right, but I do not think that is our responsibility either. I think those who advocate a theory also have the responsibility to provide convincing empirical evidence.

Second, I think we actually did suggest a viable alternative. Importantly, we doubt that a causal effect of marriage on crime can be estimated at all, since it is hard to see how there could be any plausible exogenous variation. (I do not rule that out completely, but I am still waiting to see such a study.) Instead, we suggest checking the preconditions for the theory to be true. For example, one suggested mechanism is that the spouse opposes criminal behaviour and exercises social control. If so, a survey of spouses’ attitudes to offending and how they react to their husbands’ behaviour would provide relevant empirical evidence on whether the premises of the theory hold. Providing any such evidence would make the theory more plausible. (If the spouses are favourable to crime and/or do not exercise any meaningful control over their husbands, then that mechanism is not supported. Otherwise, it is corroborated.) So, a viable alternative would be to check the preconditions more carefully empirically. That would still not provide an estimate of a causal effect, it is true, but it might be the best we can do.

Third, Laub, Rowan and Sampson state that “to rule out evidence that only comes from non-randomized experiments is to rule out most of criminology” (p. 302). Now, that does not quite follow from our argument. Estimates of causal effects can only be provided if there is some kind of exogenous variation to exploit. A causal theory can be corroborated in other ways, but that is not easy either. A careful descriptive study might provide evidence that is inconsistent with competing theories. Empirical findings that are equally consistent with a selection effect (or other competing theories) do not really count as a test of the theory.

Fourth, they refer to my joint work with Jukka Savolainen where we show that change occurs prior to employment rather than as a response to it, which Torkild Lyngstad and I also showed regarding the transition to marriage. Laub et al point out that Laub and Sampson (2003) acknowledge that ‘turning points’ are a part of a gradual process, and that turning points “are not a constant once set in motion, and they vary through time” (p 307). While this might sound reasonable, it also makes it a bit hard to understand what a turning point is. If changes in offending before marriage (or work) are consistent with the theory, then I am not sure it is possible to say when a turning point occurs. That makes it harder to empirically test the theory.

Fifth, Laub et al hint that since it is almost only studies using Norwegian register data that show the pattern of decline prior to a turning point, it might be something particular to the Norwegian setting. We actually suggested in our review that family formation patterns in the Nordic countries differ from those in the US (see our article, page 438). While the context might indeed be important, I do not think that is the main reason why so few other studies have found the same pattern. Actually, we argue that our findings are consistent with previously published results. Earlier studies should be repeated using an approach similar to ours: simply check the timing of the change in offending relative to the transition in question. Until that is done, there is no basis for claiming that the Norwegian patterns differ from other contexts. (They might, but we do not know yet.)

Sixth, Laub et al discuss the role of cohabitation and make an argument similar to the one in our review article: that cohabitation often serves as a ‘trial period’ or a ‘stepping stone’ towards marriage, and if it works out the couple will often marry. But Laub et al’s discussion of the evidence focuses on whether the marriage effect carries over to cohabiting couples, a discussion that does not take into account that marriage is an increasingly selective state, nor that it is becoming increasingly difficult to say when we should expect to see changes in offending.

In sum, I do appreciate Laub et al. making the effort to discuss specific arguments from our work. However, I am not quite convinced. I actually tend to think that a romantic partner, a good job, and changes in life situation more generally might have an effect on crime. I find that reasonable, and I hope it is true. I am nevertheless not persuaded by the existing empirical evidence, and I am hesitant to make claims about ‘turning points’. I do believe the empirical evidence can be improved by: 1) checking the timing of change, and 2) empirically investigating the specific preconditions for the mechanisms at work.

No market mechanisms accounted for in Plan S

The so-called cOAlition-S, consisting of 13 research funding organizations and three charitable foundations, recently launched its Plan S, which aims to create a major shift in publishing practices. (The “S” stands for “science, speed, shock, solution”.) It was launched on 4 September this year, and the implementation plan was published on 27 November. It has received a fair amount of criticism, and more importantly, it has generated a great deal of uncertainty and concern. (See our report here, and others here and here. For those reading Norwegian, some debate is collected here.) While it is easy to agree with the ideal aims of the plan, it is harder to judge its realism, its unintended consequences, and how the publishing industry will adapt. Here in Norway, about 1,000 scientists have signed a letter demanding that the Norwegian research council make a range of clarifications and produce a thorough report on the plan’s consequences. So far, the Norwegian research council has refused to do so. Nor is there any report available on the homepage of cOAlition-S. In short: the consequences have not been clarified, and certainly not in the open. I find this latter point quite ironic.

Making research results openly available to the public and policy makers is obviously necessary, but as is generally recognized, not all means are justified by a good cause. Plan S is a specific plan demanding that all funded research be published as Gold Open Access. Here is the reasoning from cOAlition-S:

Universality is a fundamental principle of science (the term “science” as used here includes the humanities): only results that can be discussed, challenged, and, where appropriate, tested and reproduced by others qualify as scientific. Science, as an institution of organised criticism, can therefore only function properly if research results are made openly available to the community so that they can be submitted to the test and scrutiny of other researchers. Furthermore, new research builds on established results from previous research. The chain, whereby new scientific discoveries are built on previously established results, can only work optimally if all research results are made openly available to the scientific community.

Publication paywalls are withholding a substantial amount of research results from a large fraction of the scientific community and from society as a whole. This constitutes an absolute anomaly, which hinders the scientific enterprise in its very foundations and hampers its uptake by society. Monetising the access to new and existing research results is profoundly at odds with the ethos of science (Merton, 1973). There is no longer any justification for this state of affairs to prevail and the subscription-based model of scientific publishing, including its so-called ‘hybrid’ variants, should therefore be terminated. In the 21st century, science publishers should provide a service to help researchers disseminate their results. They may be paid fair value for the services they are providing, but no science should be locked behind paywalls!

I believe this sentence is at the heart of the plan: “Monetising the access to new and existing research results is profoundly at odds with the ethos of science”. It is easy to agree in principle. However, I do not see why monetising publishing is much better. Keep in mind that several big OA publishing houses are commercial. Their source of income is article processing charges (APCs) instead of subscriptions. Thus, someone pays for publishing, one way or another. Plan S is portrayed as an ambitious plan to change the publishing models in science more broadly. There will be an end to “paying for research twice”, as they say. I am less convinced that a new publishing model will necessarily change the amount paid, though. But it will change who pays. Importantly, the plan does nothing about the for-profit nature of publishing as such. cOAlition-S says it will produce a report on article processing charges, but that is yet to come. They do say, though, that they believe prices will go down because of competition. Some market mechanisms are assumed to be involved, then.

Let’s assume Plan S works perfectly as intended: The whole publishing industry will be transformed and move to Gold Open Access. Subscription-based journals will perish. What is a likely scenario in this case?

Here are some basic conditions: 1) All costs and profits need to be covered by article processing charges. 2) From the publishing houses’ perspective, the customers are no longer the libraries, but the individual researchers. 3) The research councils put a ceiling on how much they will fund in article processing charges. 4) Existing top journals will switch to Gold OA, or new top journals will emerge, so some kind of publishing hierarchy will remain.

Then there is a basic question of how markets work: if demand goes up, so does the price, right? The very few generally recognized high-quality Gold OA journals will be able to charge more in article processing fees precisely because they are considered high quality. There is prestige in publishing in the best outlets, and such publications tend to have greater impact on the research community. That means both research institutions and individual researchers will be willing to pay to get published there even if the research councils do not cover it. Moreover, as long as such publications help you land a job (or tenure), researchers might even be willing to pay out of their own pockets, as it might pay off in the longer run. For this reason, it is reasonable to expect that APCs for high-ranking journals might become very high, while low-ranking journals might even be cheap. These are ordinary market mechanisms. However, Plan S states that compliance will be monitored and sanctioned, so I suppose that will restrict prices. Which leads us to the next point.
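As a toy illustration of this demand-side argument, consider the following sketch with entirely made-up numbers: a funder caps the APC it will reimburse, willingness to pay rises with journal prestige, and any gap is covered by institutions or the researchers themselves. Under those assumptions, the cap does little to restrain prices at the top end.

```python
# Toy illustration only: all numbers are invented; this is not an analysis of Plan S.

FUNDER_CAP = 2000  # hypothetical ceiling (in euros) on the APC a funder will reimburse

# Hypothetical journals with a prestige score between 0 and 1.
journals = [("Low-ranked journal", 0.2), ("Mid-ranked journal", 0.5), ("Top journal", 0.9)]


def willingness_to_pay(prestige: float) -> float:
    """Assume authors and institutions will pay more for prestigious outlets
    (the career value of the publication); here a simple increasing function."""
    return 1000 + 6000 * prestige ** 2


for name, prestige in journals:
    apc = willingness_to_pay(prestige)           # the price the journal can charge
    funder_share = min(apc, FUNDER_CAP)
    out_of_pocket = max(0.0, apc - FUNDER_CAP)   # covered by the institution or the researcher
    print(f"{name:20s} APC ~ {apc:6.0f}   funder pays {funder_share:6.0f}   "
          f"institution/author pays {out_of_pocket:6.0f}")
```

With these made-up numbers, the hypothetical top journal charges roughly three times the cap and the difference is paid locally, which is exactly the kind of ordinary market mechanism the plan does not seem to address.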

The supply side will probably also expand. Small profits on each single article might be compensated for by publishing more. Actually, high profits on each article would also be an incentive to publish more. Importantly, the founders of Plan S state that in the age of the internet, publishing costs are low and there is no need to charge much. This also means that scaling up is cheap. All journals could simply publish a lot more. Why reject papers unless they are absolutely crap? Any mediocre paper might be published for some extra dollars. Why not? Indeed, some journals do so already.

I am concerned it will create an environment where unneeded journals will thrive: journals that might not be quite predatory, although not necessarily far from it. The direct economic incentive will be to publish every paper, and too few papers will be rejected. There is already too much junk out there; the last thing we need is to lower the bar for getting published. I do not see how Plan S will handle any of these concerns. There is no credible plan for market regulation. I do not mind some regulation, and I think it is generally important, but market regulation is not that easy! It is far too optimistic to think the plan will have the desired consequences when no specific concerns have been spelled out. The cOAlition-S homepage is currently strikingly void of information beyond the ten principles and the implementation plan, and both are pretty vague and general. I am not sure it is a plan at all, but rather some high-sounding language made into regulations. A plan that sets out to change the economic model for scientific publishing should pay close attention to the market mechanisms with which it interferes.

From what I have seen so far, cOAlition-S has not done any analysis of how the plan might work in that regard. At least, any such analyses have not been made openly available – let alone as anything resembling Gold Open Access. An open debate and explicit considerations should be set out in writing and be open for anyone to see. I do not at all understand why cOAlition-S chooses not to be open!

I would also like to point out that how compliance will be monitored and sanctioned is very opaque. What would it take to curb the market? It might require rather rough measures – and they might be directed at the individual researcher. It remains open how far cOAlition-S will be willing to go to ensure compliance.

I believe there are lots of problems with our publishing system, and I could go on about them. However, I fail to see how Plan S solves any of my concerns with the current system, and I am worried it will even make some of the problems worse.

Our paper on the paradox is out now

Our paper on the “Weisburd paradox” is now out in the Journal of Quantitative Criminology. Mikko and I had initially put our own attempt in the drawer, since it turned out that Gelman had written a much better working paper on the same thing. Some additions were then required for publication, and Gelman offered us the chance to help out in the final rounds. We’re grateful for the opportunity. The story is here, here, and here.

 
