Monday, November 26, 2007

Skepticism About Leiter's Citation Rankings

Brian Tamanaha

Like Mary Dudziak, I’m skeptical about Brian Leiter’s latest citation rankings of law professors. In response to Dudziak’s concerns about the study, Leiter claims that “The study is a ‘true measure’ of what it purports to measure, namely, impact in legal scholarship.”

It is more accurate to say that citation counts measure citations. Whether citations reflect impact is an altogether different question.

My objection to the use of citations as a proxy for “impact” is not the claim that articles and books may have an influence without being cited in law review articles, although this is clearly the case. [I have read and learned from Isaiah Berlin, for example, but have never cited him].

Rather, the problem has to do with the bizarre citation practices that have developed in U.S. law reviews. Law reviews typically require that almost every assertion be backed up by a reference; articles often have in excess of 400 footnotes, nearly one for every sentence [invited and symposium pieces escape these constraints].

As a result, law professors are required to produce reams of citations, even for commonplace assertions, a task they sometimes push off on research assistants. Over time, stock or standard citations develop, which are cited again and again. An easy way to come up with a citation is to plumb (or loot) the footnotes of earlier articles on the subject. A lot of parasitic opportunism of this kind takes place because it is an efficient way to come up with the required footnote. [Concrete evidence of this practice: an article I wrote a dozen years ago was erroneously identified as co-authored in an early article that cited it, and to this day it is still occasionally cited as co-authored, which suggests that the people who cite the article are not actually reading it but are merely "borrowing" the (erroneous) cite from earlier articles.]

Owing to this (common?) practice, the fact that a book or article is cited does not necessarily indicate that it was read by the law professor who cited it. Even if the professor actually reads it, moreover, the citation does not mean the article or book cited had any impact on the professor, particularly when the citation is produced after the passage was written. Again, many sources are cited solely because a citation is required by law reviews.

A more refined measure of impact or influence would count only the times when a source is actually discussed in the article in some fashion, even minimally. While there is much reason to doubt that a stock cite (or single reference in a string cite) is a reliable measure of impact, there is no question that actually engaging with the article or book constitutes impact.

Even if this problem is corrected, there are other serious problems with Leiter’s citation study as a measure of impact on legal scholarship.

Consider, for example, Leiter’s ranking of Critical Theorists. Roberto Unger is ranked 20th, with 480 citations. Setting aside what one might think of the merits of critical theory, it is absurd to suggest that the “true measure” of Unger’s impact in this field places him behind all the others cited. His Knowledge and Politics and Law in Modern Society influenced a generation of critical theorists (and others), although these works might not be cited very often today. This example alone demonstrates that the citation study is deeply flawed as a measure of impact.

Take a look at the “Law & Philosophy” ranking. A case can be made that Duncan Kennedy (1290 citations) and Roberto Unger, both relegated (or banished?) by Leiter to the Critical Theorists list, should also have been included on this list (both placing in the top ten, with Kennedy second). Leiter will no doubt assert that they do not engage in “legal philosophy” proper, which is a plausible claim, though by no means uncontroversial (Nussbaum and Waldron, on the list, also do much work that does not fit within a narrow definition of "legal philosophy"). Even conceding this, one might ask why such a narrowly defined category was utilized that excludes such important contemporary legal theorists.

Another general problem with the ranking is that many people are cited for work in other fields: Raz for moral theory; Waldron for political theory; Leiter for his rankings; and so forth. This is true for many professors, not just those in legal philosophy. Leiter does not correct for this, which undermines the accuracy of the rankings (relative position and who makes the cut).

Leiter, to his credit, admits that there are flaws in his citation ranking, although he nonetheless believes that it offers a valid way to measure scholarly impact.

However, it may well be that there is no way to truly measure scholarly impact, which is impossible to quantify. If so, even a measure that is arguably “better” than its competitors is worse than no measure at all because it falsely suggests that it is measuring something which cannot be measured.

Our culture suffers from a ferocious ranking fetish. Leiter’s citation study feeds the beast, when we should instead be starving it.


Sounds like an example of the classic maxim:

There are lies; there are damned lies; and then, there are statistics.

As a practicing criminal defense attorney, my problem with the rankings in criminal law and procedure is the practical experience level of the professors ranked. Any of them voir dire a jury? Cross-examine a hostile witness? File, brief, and argue a motion to suppress? Writ up a judge? It's a bit hard to take these folks seriously from here in the trenches.


Hey, all of you law professor types: one of you ought to join in the discussion over at Glenn Greenwald and Atrios and the Anonymous Liberal and Firedoglake concerning Joe Klein's idiocy over at the Time Magazine Swampland blog regarding the FISA legislation before Congress.

In a column to be published in Time Magazine, he utterly mischaracterized the FISA legislation and then slammed the Democrats for being weak on national security based on his mischaracterization. He actually accuses them of giving foreign terrorists the same rights as Americans. As Greenwald said:

"The most obvious and harmful inaccuracy was his claim that that bill 'would require the surveillance of every foreign-terrorist target's calls to be approved by the FISA court' and that it therefore 'would give terrorists the same legal protections as Americans.' Based on those outright falsehoods, Klein called the House Democrats' bill 'well beyond stupid.'"

Why doesn't one of you join in the criticism of Joe Klein for his idiocy? His piece is slated to be published in the next issue of Time Magazine, helping reinforce among millions of readers the stereotypical critique of the Democrats as being soft on terror.

Klein refuses to make an honest correction, and he admits that he got his interpretation from a Republican congressional staffer. He appears not to have consulted any Democrats or, say, this web site or any of the others that could have explained the facts to him.

He deserves to be made an example of -- to take a shot across the bow of the media whores to let them know that they are not going to sabotage another Democratic election unchallenged.

A trenchant analysis of exactly how wrong Klein is -- and why and how inexcusable his "error" is -- coming from one of you would be very powerful.

Who cares? Is Balkinization really worthy of this discussion of shameless academic-penis-measuring?

As an example of how citation count can mislead as to impact, I wrote a law review article which had a footnote with the outcome of a particular question in every state. This article so far has had 12 citations--11 for the summary in this single footnote and only 1 for the substantive topic addressed in the article. The "impact" of the article should best be measured by just counting that 1 citation, not the other 11.

BT is absolutely right: Leiter's methodology is nonsense. And I'd go farther than BT does in condemning Leiter's fascist tendency to relegate to the trash heap any approaches to law & philosophy that aren't on all fours with the particular kind of training he had in analytic philosophy.

I think the part of Leiter's study that bothers me most is that he thinks students care about these rankings. Why on earth should 99% of students (those without academic aspirations) care if their teachers are much-cited (or impactful) scholars?

And if it's true that students don't care, then who are these rankings for? Are they just a glory list, similar to a law firm sending around a list of who billed the most hours this month? Is there any actual information here for scholars?

(Were Leiter to permit comments on his blog, I could have asked these questions in a more appropriate forum.)
