The vast majority of hours spent have nothing to do with any editorial function. All reviewers volunteer their time to review submitted papers, so that's not a cost. Typesetting to a publishable state is not even included in some cases (see a3_nm's comment). What else is left? Filtering through already-reviewed papers to list out already-typeset documents? Why are researchers paying $2k+ for that?
I work for one of these publishers doing all of the nothing you're complaining about, so to clarify: how do you suppose we determine who reviews which paper? How do we convince them to actually write the review? How do we know the review is any good? Also, we do typeset and improve prose quality, not sure what those other slackers are doing...
> how do you suppose we determine who reviews which paper? How do we convince them to actually write the review? How do we know the review is any good?
At least in theoretical computer science these tasks are all done by the volunteer conference organizers.
It's possible that this is different in fields that primarily publish in journals, but it certainly seems like the volunteer approach works just fine.
I have also never seen a paper get improved typesetting or prose. The reviewers may reject based on bad writing, or the journal may reject due to LaTeX warnings, but never more than that.
There are many academic fields where people work long hours for little pay, but I imagine theoretical CS is much more comfortable. I'm not surprised people have time and effort to spare, but they are still getting paid, you know. It just happens that the resume byline "conference organizer" is valuable enough in that field.
Sometimes conferences pay a publishing company to do the review process for submissions, by the way. I'm not sure that your submissions are actually getting processed for free by conference volunteers.
I'd rather not disclose details of my employer, sorry.
No conference in CS pays the publishing company (usually IEEE or ACM) to pick reviewers. None whatsoever. Instead, program committee (PC) chairs are selected, and they pick PC members who are responsible for reviews. All of this occurs on a volunteer basis, no matter which institution the PC chairs/members belong to.
> just happens that the resume byline "conference organizer" is valuable enough in that field.
Being editor of a journal/PC chair is a valuable thing in any field, not just CS.
> Sometimes conferences pay a publishing company to do the review process for submissions, by the way. I'm not sure that your submissions are actually getting processed for free by conference volunteers.
Well, I'm sure, because I also review for the same conferences and journals that I submit to!
I don't think anyone is saying that publishers should do their work for free or that they're totally useless.
The problem is the business practices of the big publishers: asking huge amounts of money for access to journals, and forcing universities to buy bundles of journals, so that to get access to the interesting articles they also have to subscribe to stuff they don't want, and so on.
What field are you in? Because at least in mine (information retrieval/machine learning) all the stuff you're talking about is done by volunteers, not paid staff.
> The vast majority of hours spent has nothing to do with any editorial functions.
Unfortunately, I have to admit that most of the money I pay for scholarly papers goes to Springer/Elsevier shareholders and not the editors who barely edit.
But the principle still remains. Reading through crap papers so you don't have to is hard work that should be paid for. (And editors have to do this before sending stuff out to reviewers, or else the reviewers drop out.)
And I would rather pay the editors out of my own pocket than expect the authors to do it, or rely on an ad-supported mechanism, because if you're not the customer, you're the product.
> The editor has to put in the hours to reject the crap that people submit
Does that process need to be any different from PRs on github? Can't all that be distributed across a team of researchers working on the topic of the journal, just as PRs on github are distributed across all developers working on the software?
> Does that process need to be any different from PRs on github? Can't all that be distributed across a team of researchers working on the topic of the journal, just as PRs on github
It's not the same as a github PR because wading through a mountain of badly written papers is work researchers do not want to do. That's what a publisher (like Elsevier) does. See the example of Elsevier employee Angelica Kerr.[1] It should be easy to see that scientists would consider it a waste of their time to do Angelica Kerr's work.
I didn't get a chance to respond to the reply by allenz that suggested that journal chiefs should hire their own editing and administration staff. Again, that "solution" makes the same mistake of thinking scientists will do something they have no interest in doing. They don't want the hassles of "contracting" an editing team.
Research publishing is not a "software problem" that's solvable by a "software platform solution." You can't solve it by applying social-platform mechanics such as github PRs and/or StackOverflow/Reddit-style votes. (My parent comment tries to explain this.[2])
So I read your links. How is Angelica Kerr's job any different from a spam filter? We've got production-grade techniques for handling that now, and without going into technical details, everything you mentioned as being in her set of tasks could be implemented using ML and NLP techniques, most of them not even that cutting-edge. In many ways, an algorithmic approach could do her job better: some "crackpot" articles will not be crackpot but genius, and if you give researchers the ability to tweak the filter by adjusting its sensitivity/specificity to their own tastes, you increase the chances of getting the next breakthrough out there.
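To make that concrete, here's a rough sketch of the kind of baseline I have in mind (Python/scikit-learn; the labeled corpus, function name, and threshold are all made up for illustration, and a real deployment would need far more data and careful evaluation):

    # A deliberately simple baseline, not a production system: TF-IDF features
    # plus logistic regression, with a tunable decision threshold.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled corpus: (abstract_text, label), label 1 = obvious crackpot.
    labeled_abstracts = [
        ("Darwin's Theory of Evolution is proven wrong by Trump", 1),
        ("We give a tight lower bound for approximate nearest-neighbor search", 0),
        # ...a real system would need thousands of labeled examples...
    ]
    texts, labels = zip(*labeled_abstracts)

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    def flag_for_desk_rejection(abstract, threshold=0.9):
        """Flag an abstract only if the model is confident it's junk.

        Raising `threshold` trades sensitivity for specificity: fewer false
        rejections of unusual-but-genuine work, at the cost of passing more
        junk on to human editors. Each journal or reviewer could tune it.
        """
        prob_crackpot = model.predict_proba([abstract])[0][1]
        return prob_crackpot >= threshold

The specific model hardly matters; the point is that the filtering knob (the threshold) can be handed to the people who actually read the papers, instead of being fixed by a publisher's staff.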
The actual problem, which I think you implicitly identify when talking about Nature and Cell, is the "prestige" factor. As long as researchers are motivated by the prestige level of a journal, because they fear being ostracized by their peers or not getting recognition for their research (which has monetary and career costs), I think it will be very difficult to convince anyone to switch, regardless of how effective the platform could be.
I haven't thought about that problem before so I'm not sure how to address it as of now.
To whom should you send it in order to get an expert review?
Which of those people has an ax to grind with one of the authors?
Of the remaining people, who do you think you could convince to invest a day or more to carefully evaluate the paper?
Okay, so you got replies back from two of your carefully selected referees (after two months of badgering one of them):
Referee A thinks the paper is great, insightful, and advances the field, but wants extensive changes.
Referee B thinks the paper is derivative drek and should be rejected because his friend C has already done something similar.
Your journal publishes twenty similar articles a week but receives three hundred a week. The careers of the authors are partially on the line, as is the prestige of the journal, the attention of the readership, and the future submission of articles by prospective authors.
Good luck training a neural net to do this well. I suspect a neural net can be trained to reject the worst crackpots, but not much more than that without also rejecting insightful but unusual, important papers.
The problem is that most of the (substantial) work you highlight above is done by the academic community (mostly paid by the public purse), while most of the (substantial and above-market) profits go to the private publishers.
> In addition to basic copy-editing, she prevents crackpot articles such as "Darwin's Theory of Evolution is proven wrong by Trump" from reaching the journal editors and wasting their time. (She may inadvertently forward some bad articles but she has enough education to reject many of them outright to minimize the effort by the journal's scientists.)
Do I read it right that an Elsevier manager with no degree in the subject rejects research papers without consulting the editors? To me as a scientist, this is a big red flag and a concern about the journal's quality. While the example given here looks obvious (and probably contrived), most real examples are less so. And while recognizing those takes little time and effort for a trained eye, this task certainly should not be left to a non-specialist in the subject. It will not save much time and will create more concerns than it resolves.
The editor has to put in the hours to reject the crap that people submit, and those hours don't come free.