Why bother reviewing papers?

Author

Oren Bochman

Published

Friday, December 20, 2024

Keywords

papers, meta, review


Ground-Breaking vs. Record-Breaking

While all papers should break new ground, I follow (Kuhn 1962) in differentiating between what is considered a jump and what is a gradual improvement. Most papers that announce a new SOTA result are gradual improvements, essentially tweaks to well-understood experiments, while truly ground-breaking papers are rare. Often we see the authors taking an innovation from another field (perhaps a theoretical result) and applying it to a new domain. Less frequently, the author connects several previously unrelated results or techniques and does something considered impossible. Finally, there are papers whose authors ignore prior art and develop several original ideas, creating a new field (as Nash did with non-cooperative game theory).

Kuhn, Thomas S. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Arora, Sanjeev, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018. “Linear Algebraic Structure of Word Senses, with Applications to Polysemy.” https://arxiv.org/abs/1601.03764.

Ground-breaking papers introduce new capabilities that aren't yet widely used; e.g., in (Arora et al. 2018) the authors introduce sparse coding for word embeddings. However, research from information science indicates that it can take a few years for results to circulate, get cited, and be adopted, so such a paper often seems irrelevant later on, once newer research can get more powerful results. Record-breaking papers (the most common kind) beat the SOTA by a few fractions of a point, typically arrive in black-box models, and their 'black magic' is neither fully understood nor transferable, and so is hard to apply. That is why I find it interesting to go back to the paper that broke the ground.
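To make the sparse-coding idea concrete, here is a minimal sketch. It assumes scikit-learn and random stand-in vectors; the paper itself uses a k-SVD solver on real GloVe/word2vec embeddings, so the dimensions and names below are illustrative only.

```python
# Sketch: sparse coding of word embeddings, in the spirit of Arora et al. (2018).
# Assumption: random vectors stand in for real embeddings, and scikit-learn's
# dictionary learner stands in for the paper's k-SVD solver.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 50))  # 500 toy "word vectors" of dimension 50

# Learn an overcomplete dictionary of "discourse atoms"; each word vector is
# approximated by a sparse combination of a few atoms (candidate word senses).
coder = DictionaryLearning(
    n_components=100,              # number of atoms in the dictionary
    transform_algorithm="omp",     # orthogonal matching pursuit for sparse codes
    transform_n_nonzero_coefs=5,   # at most 5 active atoms per word
    random_state=0,
)
codes = coder.fit_transform(embeddings)  # sparse coefficients, shape (500, 100)
atoms = coder.components_                # learned dictionary, shape (100, 50)

# The nonzero coefficients of a word's code indicate which atoms (senses) it mixes.
print(np.count_nonzero(codes[0]), "active atoms for word 0")
```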

Learning to think like a researcher

The way top-tier researchers like Christopher Manning, Geoffrey Hinton, Christopher Bishop, and David MacKay came up with their breakthroughs is frequently motivated and outlined in their papers. The idea can be as simple as making a nonlinear analogue of PCA 1 for visualizing neural-network representations, as in t-SNE. Or, as in (Pennington, Socher, and Manning 2014), getting embeddings to work from a global co-occurrence matrix rather than a moving window. Or as complicated as introducing causal regret for credit assignment among agents solving social dilemmas. In many ways, the intellectual journey is more fascinating than any specific magic trick.

1 You do know how regular PCA works, right?

Pennington, Jeffrey, Richard Socher, and Christopher Manning. 2014. “GloVe: Global Vectors for Word Representation.” In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), edited by Alessandro Moschitti, Bo Pang, and Walter Daelemans, 1532–43. Doha, Qatar: Association for Computational Linguistics. https://doi.org/10.3115/v1/D14-1162.
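As a concrete illustration of that shift, here is a minimal sketch of building global co-occurrence counts over a corpus; the toy sentences and window size are my own, and the real GloVe model then fits word vectors and biases to the log of these counts.

```python
# Sketch: the global co-occurrence statistics behind GloVe
# (Pennington, Socher, and Manning 2014). Toy corpus; illustrative only.
from collections import Counter

corpus = ["the cat sat on the mat", "the dog sat on the log"]
window = 2  # symmetric context window

cooc = Counter()  # X[i, j]: how often word j appears near word i, corpus-wide
for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                cooc[(w, tokens[j])] += 1

# GloVe then fits vectors and biases so that, for each co-occurring pair,
#   w_i . w~_j + b_i + b~_j ≈ log X_ij
# with a weighting that down-weights rare and very frequent pairs.
print(cooc[("the", "cat")])  # 1 in this toy corpus
```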

Reviewing papers is not as good as taking a class with these researchers, if you can manage that. However, teaching is often not the forte of the most gifted researchers; sometimes their finest hours come when writing papers. Moreover, in class, teachers are very much limited by the material they can present: students can only cover so much new mathematics in a course. In a paper, they write for their peers, so all bets are off; the material can be as advanced as needed and touch on many disparate fields. Skyrms's book on the evolution of signaling systems routinely touches on game theory, evolution, information theory, sociology, and classical philosophy, leaving readers to catch up on their own and to dive into the literature if they are dissatisfied with some of the author's claims. Most courses are simply not as advanced as most papers.

I took courses by Christopher Manning and Geoffrey Hinton, and yet most of their publications are just as useful to review. Merely reading them is not enough if you want to assimilate some of their creativity or problem-solving approaches; reviewing them provides a unique insight into the workings of their minds. It is fascinating and inspiring to see how these influential figures think about problems and how they approach them. Their papers offer a perspective that can't be found elsewhere, and this understanding can be a great source of motivation for your own research. What I like about these two authors is that they can take old ideas and techniques and figure out how to use them in new settings: t-SNE came from PCA, and GloVe came from topic modelling.

I won't say that ground-breaking papers are easy to read: some are written with great clarity, while others, like the LSTM papers, are notoriously difficult to understand. But you may discover that working through the paper itself beats watching videos or reading blog posts by others.

Some points to consider when reviewing a paper:

  1. The big picture.
  2. The main innovation.
  3. What is new for you as a reader.
  4. Anything you feel was left out.
  5. Anything you disagree with or would have done differently.

Literature Reviews

Reading literature reviews in fast-moving areas like ML or deep learning can provide a good overview of recent developments. They tell how the field has changed and delineate the landmark approaches and the papers in which they arrived.

More Techniques

Another interesting aspect of many papers is their use of algorithms or techniques that I am unfamiliar with. Some talented authors come from diverse backgrounds and will reference many fascinating ideas, algorithms, and techniques; taking a few minutes to check these out can be well worth it.

List of Papers I want to look at

Word-Sense Disambiguation

Citation

BibTeX citation:
@online{bochman2024,
  author = {Bochman, Oren},
  title = {Why Bother Reviewing Papers?},
  date = {2024-12-20},
  url = {https://orenbochman.github.io/reviews/meta/meta.html},
  langid = {en}
}
For attribution, please cite this work as:
Bochman, Oren. 2024. “Why Bother Reviewing Papers?” December 20, 2024. https://orenbochman.github.io/reviews/meta/meta.html.