CiteULike Ranking System

Posted by Timothée on 17 Jan 2007 | 3 comments

On January 16th, Richard Cameron, founder and developer of the well-known social publication-management website CiteULike, announced the release of a new feature: paper ranking via user votes. This feature, although based on a very good idea and meant to serve users by increasing the visibility of hot papers, is, in my own humble opinion, biased and released far too early.

As we have discussed many times on the C@fe des sciences mailing list, the best way to rank papers is highly controversial. Many people view CiteULike as a homepage, intended solely to store their PDF files and manage their bibliography via the blessed tag function. As a matter of fact, this is how I explained CiteULike to my fellow students when presenting the advantages of the tool in an email a few weeks ago.

If we look carefully, apart from seeing the tags and authors you share with other users, and knowing how many users share an item with you, CiteULike was not so social. We can easily understand Richard Cameron's reasoning in implementing this new collective-voting functionality. As he said on the CUL mailing list:

The aim is to try to provide a place where researchers can always find something worth reading in an idle moment. Those papers which everyone in your field reads and talks about? You can be first to find them by using this feature (or, at least, that’s the plan).

From now on, when submitting a paper to your CiteULike database, you will be asked to classify it into a field, just as in arXiv. This is the first, and in my opinion the worst, weakness of this system. The fields are very few in number: Computer Science, Biological Sciences, Social Sciences, Medicine, Engineering and Technology, Economics and Business, Arts and Humanities, Mathematics, Physics, Psychology, Philosophy. As a biologist, I still wonder how relevant the hottest paper of the moment can really be when the vote involves all researchers in my discipline…

At this very moment, it is a review published in an old issue of Nature Rev Gen, about structural variations in the human genome. It could be highly interesting to read, I don’t doubt it, but as an immunologist, it is of little interest to me and my immediate research concerns. If I want to discover a new paper related to my own little corner of research, I am obviously in the wrong place, whatever R. Cameron says. CiteULike relies on a wonderful (yet incomplete) tag system, so why not apply this voting system to tags?

The second weakness was pointed out by Dario Taraborelli: the system is not spam-proof at all! Imagine an average lab of, say, twenty people, each voting for the others’ papers. Knowing that the average vote count per paper is 4 or 5, you can easily imagine the bias a single lab could introduce. R. Cameron immediately explained that he expected the community to downvote any paper that is out of place. That is probably the most obvious reaction, indeed, but I still think the resulting bias will be substantial.

A good point, however, is that bookmarking an article counts as a positive vote for it. The more CiteULike users include a paper in their bibliography, the higher its score. Obviously, if you bookmark an article, you consider it interesting, so you vote for it. Moreover, as the number of users rises, and it surely will now that access to the categories is part of every page header, this should not introduce an important bias.

D. Taraborelli proposed that the ranking system use the number of citations. I don’t think this is a brilliant idea, for two main reasons: first, because it would be difficult to implement inside CUL, but mostly because this is probably one of the most biased estimators ever. Being cited by a Nobel laureate or by a PhD student publishing his or her first paper counts exactly the same. Also, being cited in a highly specialized journal, even in a cutting-edge paper, won’t be as good as being cited in a blockbuster such as Nature, Science or Cell. Of course, there is also a significant lag between publication and the moment when a paper is most cited by others (even if, in certain journals, cross-citations between articles in the same issue are frequent). Knowing this, the voting system R. Cameron chose to implement is far more reactive (who said ”far more Web 2.0”?).

In conclusion, I must say that the system, if not completely functional, is good. A beta, some said, but a good and promising beta. Almost everybody agrees that a tag-based ranking system, in place of the brand-new field-based one, would be a valuable enhancement. As R. Cameron said on the CUL mailing list, it is all about finding an equilibrium between specialisation and impact. A classification with just Biology is far too coarse. Conversely, one with Biology -> Immunology -> Membrane receptors -> CD86 -> Splicing variants would be relevant to about four people in the whole world. Maybe a limit of two levels would be acceptable, even though I am not sure that many people would be comfortable with an arXiv-like classification. Maybe this time it is best to just wait and see.

More than just a CiteULike-related issue, this new development raises the question of the best way to rank articles (and therefore authors!). The new vision brought by the appearance of social systems such as CiteULike will necessarily be followed by new ranking methods. I see the first implementation of this feature as a first draft, upon which a more complete, stable and trustworthy online ranking system can be debated and built. In any case, this confirms that the interaction between the end user (the reader) and the material is changing. After PLoS’s peer commentary and Nature’s aborted initiative in this direction, CiteULike’s ”I read, therefore I vote” system is a highly interesting experiment in how new indicators of a paper’s actual value (meaning its actual use) could be influenced by readers, and not only by the title of the journal the paper is published in.

As the subject is web-related, I allowed myself to blog about it in Shakespeare’s language. As a non-native speaker, I am open to corrections of any kind.


3 comments

  1. As you mentioned, the purpose of social tools such as CiteULike is perhaps more to enable “collective intelligence” through the magic of the actions of millions of individuals, and to connect and track related papers, tags and so on, than to ask each of us to vote for his or her favorite paper, which is exactly why I prefer del.icio.us and its hotlist to Digg and Digg-like websites! In CiteULike, I like to watch tags or people I’m interested in rather than having the pseudo-voting-buzz stuff. On the other hand, if I want hot papers with comments and discussions, I’d rather look into blogs and specialized tools like Postgenomic (for instance, Postgenomic’s literature aggregator http://www.postgenomic.com/papers.php ).

    P.S. A few mistakes I have noticed:
    - “arXive” instead of “arXiv” (and what about a hyperlink?)
    - “if the vote involve” instead of “if the vote involves”
    - “to classify him” instead of “to classify it”
    - “hotest” instead of “hottest”
    - “functionnal” instead of “functional”

  2. Well, trains are made for moving from point A to point B, not for writing in English, I assume :) Thank you, anyway.

    Speaking of neighbourhoods in CiteULike, here is a site that could be of interest: CiteULike neighbors (just fill in your username).

    Richard Cameron came up with a whole bunch of new ideas, mostly about the best way to score papers filed under different categories. The concept seems promising:

    A paper filed under Bio -> Cell Biol with 20 hits, and under Bio -> Physio with 3 hits, will obtain a score of 23 hits in Bio, plus its respective score in each subcategory. And as you can see, the subdivision requested by a great majority of members is in progress.
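    Purely as an illustration, here is a minimal sketch of how such roll-up scoring could work; the data structure and the summing rule are my own assumptions based on the example above, not CiteULike’s actual code:

    ```python
    from collections import defaultdict

    def rollup_scores(votes):
        """Aggregate per-subcategory hit counts into parent-category totals.

        `votes` maps a (parent, subcategory) pair to a hit count, e.g.
        {("Bio", "Cell Biol"): 20, ("Bio", "Physio"): 3}.
        Each subcategory keeps its own score, and the parent gets the sum.
        """
        scores = dict(votes)                    # per-subcategory scores stay as they are
        parent_totals = defaultdict(int)
        for (parent, _sub), hits in votes.items():
            parent_totals[parent] += hits       # parent accumulates all of its children
        scores.update(parent_totals)
        return scores

    # The example from the comment: 20 hits in Bio -> Cell Biol, 3 in Bio -> Physio
    print(rollup_scores({("Bio", "Cell Biol"): 20, ("Bio", "Physio"): 3}))
    # {('Bio', 'Cell Biol'): 20, ('Bio', 'Physio'): 3, 'Bio': 23}
    ```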

    It could be more informative than tag-based voting alone, mainly because a tag like 2beta (one of mine, actually MHC-related) is far less likely to be shared by many people than evolution (maybe the most used tag on CiteULike).

    I kept the best for the end: the value of a vote now depends on its age, in order, of course, to prevent citation classics from permanently holding first place. Each vote now has a half-life of a few days, so the vote list is more likely to become a v-hotlist (that pun is admittedly poor).
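    Again only as a sketch, assuming a simple exponential decay (the three-day half-life and the exact formula are illustrative assumptions, not CiteULike’s actual implementation):

    ```python
    from datetime import datetime, timedelta

    def decayed_weight(age_days, half_life_days=3.0):
        """Weight of a single vote, halving every `half_life_days`."""
        return 0.5 ** (age_days / half_life_days)

    def hotness(vote_timestamps, now, half_life_days=3.0):
        """Sum of time-decayed vote weights; recent votes dominate older ones."""
        return sum(
            decayed_weight((now - t).total_seconds() / 86400, half_life_days)
            for t in vote_timestamps
        )

    # Example: votes cast 0, 3 and 6 days ago with a 3-day half-life
    now = datetime.now()
    votes = [now, now - timedelta(days=3), now - timedelta(days=6)]
    print(round(hotness(votes, now), 2))  # 1 + 0.5 + 0.25 = 1.75
    ```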

    The social-software inspiration is perhaps due to the fact that R. Cameron is fond of such tools and tries to apply their ideas to his baby. Anyway, with a few tricks, I think the two approaches are neither incompatible nor irrelevant.

  3. And what about a number-of-visits ranking system? It would have other kinds of bias, of course, but it would easily point out the “hottest” paper (as opposed to “the best”), as initially intended.

    Oh, and “if we look good” looks a bit strange; maybe “if we look carefully”? I hope it helps.


