Balkinization  

Wednesday, February 09, 2011

Protean Rankings in the Economy of Prestige

Frank Pasquale

Paul Caron brings news of the ranking system from Thomas M. Cooley School of Law, which pegs itself at #2, between Harvard and Georgetown. Caron calls it "the most extreme example of the phenomenon we observed [in 2004]: in every alternative ranking of law schools, the ranker's school ranks higher than it does under U.S. News." I just wanted to note a few other problems with such systems, apart from what I've discussed in earlier blog posts and articles on search engine rankings.

Legendary computer scientist Brian W. Kernighan (co-author of the classic textbook on the C programming language) wrote a delightful editorial on rankings last fall:

In the 1980s, statisticians at Bell Laboratories studied the data from the 1985 “Places Rated Almanac,” which ranked 329 American cities on how desirable they were as places to live. (This book is still published every couple of years.) My colleagues at Bell Labs tried to assess the data objectively. To summarize a lot of first-rate statistical analysis and exposition in a few sentences, what they showed was that if one combines flaky data with arbitrary weights, it’s possible to come up with pretty much any order you like. They were able, by juggling the weights on the nine attributes of the original data, to move any one of 134 cities to first position, and (separately) to move any one of 150 cities to the bottom. Depending on the weights, 59 cities could rank either first or last! [emphasis added]


To illustrate the problem in a local setting, suppose that US News rated universities only on alumni giving rate, which today is just one of their criteria. Princeton is miles ahead on this measure and would always rank first. If instead the single criterion were SAT score, we’d be down in the list, well behind MIT and California Institute of Technology. . . . I often ask students in COS 109: Computers in Our World to explore the malleability of rankings. With factors and weights loosely based on US News data that ranks Princeton first, their task is to adjust the weights to push Princeton down as far as possible, while simultaneously raising Harvard up as much as they can.
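To see how little it takes, here is a toy version of that exercise in Python. The schools, attributes, and numbers below are invented for illustration (they are not actual US News data or weights), but the mechanism is exactly what Kernighan describes: the same scores, run through different weights, yield opposite orderings.

def rank(scores, weights):
    """Rank items by the weighted sum of their normalized attribute scores."""
    totals = {
        name: sum(w * a for w, a in zip(weights, attrs))
        for name, attrs in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Three made-up attributes per school, each already scaled to 0-1:
# (alumni giving rate, SAT scores, faculty resources)
scores = {
    "Princeton": (1.00, 0.90, 0.85),
    "Harvard":   (0.70, 0.95, 1.00),
    "MIT":       (0.55, 1.00, 0.90),
}

# Weight alumni giving heavily and Princeton comes out on top ...
print(rank(scores, (0.8, 0.1, 0.1)))   # ['Princeton', 'Harvard', 'MIT']

# ... weight SAT scores heavily instead and Princeton drops to last.
print(rank(scores, (0.0, 0.9, 0.1)))   # ['MIT', 'Harvard', 'Princeton']

With three items and three attributes the trick is transparent; with 329 cities and nine attributes, as in the Places Rated data, the space of plausible-looking weightings is vast, which is how the Bell Labs statisticians could crown 134 different winners.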


Kernighan has also given recent talks on innumeracy, describing how easily misleading or false numerical indicators can distort important debates.

So ranking systems can be structured to produce whatever results their architects prefer, and once structured, they invite gaming. Perhaps only a diversity of rankings can solve that problem. But as James F. English shows in his book The Economy of Prestige, "alternative rankings" must in many cases migrate toward the opinions of the establishment rankings or risk irrelevance. Network power makes it difficult to break out of the pack, however worthy the effort may be.

As Balkin and Levinson have shown in a different context, in the world of "citology," "it's a jungle out there!"

We can at least be thankful that the extant ranking criteria are public, so we can detect arbitrary weighting. In other spheres of life, ranking systems are secret, a problem I'm exploring in a book I'm writing called The Black Box Society. For an entertaining example of a secret scoring system, check out this article on the "web's social scorekeepers":

People have been burnishing their online reputations for years, padding their resumes on professional networking site LinkedIn and trying to affect the search results that appear when someone Googles their names. Now, they're targeting something once thought to be far more difficult to measure: influence over fellow consumers.


The arbiters of the new social hierarchy have names like Klout, PeerIndex and Twitalyzer. Each company essentially works the same way: They feed public data, mostly from Twitter, but also from sites like LinkedIn and Facebook, into secret formulas and then generate scores that gauge users' influence. Think of it as the credit score of friendship or, as PeerIndex calls it, "the S&P of social relationships."


Zach Bussey, a 25-year-old consultant, started trying to improve his social-media mojo last year. "It is an ego thing," says Mr. Bussey, who describes himself as a social-media "passionisto." One of the services he turned to was TweetLevel, created by public-relations firm Edelman. It grades users' influence, popularity, trust and "engagement" on a scale of 1 to 100. He decided to try to improve his score by boosting the ratio of people who follow him to the number he follows. So he halved the number of people he was following to 4,000. His TweetLevel score rose about 5 points and his Klout score jumped from a 51 to a 60.


The Klout rankings remind me of the Avvo rankings for lawyers (which I discussed in this book). I can't say too much about either, because their algorithms are secret. But I think any system that ranks Justin Bieber (K=100) over Lady Gaga (K=90) is inherently suspect.

X-Posted: Concurring Opinions.
