Yet Another Rant About CS Reviewing (YARACSR)

Inspired by a retweet from Tom Dietterich. There have been lots of complaints about reviewing for CS conferences: it's random, there are lots of biases, you have to be one of the in-crowd already. All of these are true, but they are also true of reviewing in any other field.

But I do think the CS conference system isn't great for the field. That's not new; the purpose of this post is to add detail about the dynamics that I suspect are at play: dynamics that aren't so different from those of the journal systems in other fields, but that nonetheless serve to make the field more random, and more insular, than others.

To begin with, CS conference deadlines certainly do provide incentives to get research done and written up (as an advisor of PhD students, I definitely sympathise with this!), but the same process also generates papers that are less complete, less well thought out, and more incremental than journal publications. The sheer volume of submissions to CS conferences also puts pressure on the refereeing system, reducing the thoughtfulness and deliberation of the review process. The insistence on giving all papers the same treatment, while apparently democratic, accentuates this: the editorial board of a journal weeds out clearly unsuitable papers and then makes more nuanced judgements about whom to use as referees, reducing the overall burden and increasing the relevance of referee expertise.

But Tom's tweet really reflected a complaint that I have long had about the CS review system, even as a very occasional reviewer: the lack of iteration. I've handled no small number of papers that could have been great with more work, but without the opportunity to iterate a couple of times, they just didn't meet the standard.

Now you could certainly say that iteration occurs between CS conferences. I'm frankly skeptical that this happens much, as opposed to authors simply submitting much the same manuscript to the next conference. But more than that, I think it misunderstands the review process: review is only partly about making the paper better. A good portion of iterative review is a conversation between the authors and the referees, one that clarifies the ideas in the paper and allows both parties to understand each other better.

Isn't this just reviewing improving the paper? Not really. I think of it as being a bit like teaching: the ideas need to be "turned around" until they're presented in a way that clicks with the referee. I've often been struck by how much the fate of a paper lies in how its introduction is written; if the referee doesn't see the point initially, they're prone to misread the rest of the material. And what counts as "seeing the point" often seems pretty idiosyncratic to the particular referee.

I'm not sure that this iteration always makes the paper better for readers other than that particular referee (although it now makes sense to both referee and authors, so perhaps that's something?). So is this better than a one-off review process? I think it does decrease the randomness, to some extent. I doubt that there is a presentation that works for everyone, but iteration does help get around presentation issues and judge papers on their ideas. The perceived randomness of CS conferences is partly about "did I happen to get referees for whom my presentation worked?", whereas that matters less in an iterative review process. The randomness is made worse because, at some level, we all understand this to be the case, and that reduces the incentive to make genuine improvements when we can always say "it's just that these particular referees didn't get it." The dynamic also tends to make the discipline more insular: subjects further from the referee's way of thinking (and statisticians publishing in CS conferences see this a lot) get much shorter shrift.

There is, however, a further aspect that adds to this: the time pressure of refereeing for CS conferences means that you don't write reviews that are as long or as thoughtful. That means authors also get much less by way of specifics ("you really should do this"), which makes the process feel more adversarial and provides far less guidance indicating that there really are things that can be done to make the paper better. Slowing down really can be beneficial.

I gave up on CS conferences a while ago, out of frustrations both as an author and as a referee. And I've had the same rants as oh so many people. I used to think that was a failing on my part, until I met increasing numbers of researchers whose work I greatly admire who said that they can't get published in CS conferences either. None of this makes journal reviewing perfect: it's still full of biases and still depends on who is assigned as editors and reviewers for a particular paper. But I still think it might do a bit better, both from taking more time and from the ability to push back in both directions. I'd love to see that studied, though how, and by what metrics, would be difficult to work out.
