Balkinization  

Wednesday, September 22, 2010

Doubts about the New "Scholarly Impact" Ranking

Brian Tamanaha

A new law school ranking is out that purports to objectively measure the “scholarly impact” of the “Top 70” law faculties. The ranking has generated a great deal of interest and commentary among legal academics. Reflecting this interest, the authors’ paper has been downloaded over a thousand times in little more than a week (earning it a high rank for number of SSRN downloads within two weeks of posting). While the authors concede that their study, which extends Brian Leiter’s ranking of the top 25 law faculties, has limitations, they assert that it is superior to the US News ranking.

The US News ranking might well be garbage, as many law professors complain, but asserting that the new ranking is superior to garbage is not a persuasive argument in its favor. If we are to take it seriously, it must constitute a valid way of measuring what it purports to measure: “scholarly impact.” I will raise a few doubts about the validity of this new ranking system.

A core theme of their paper is that several law faculties are treated unfairly by the US News Academic Reputation rating. “Based on Scholarly Impact Scores,” the authors write, “several law faculties appear to be significantly under-valued law schools.” For example:

Presumably due to its recent entry on the scene, the University of St. Thomas (Minnesota) is the most dramatically under-valued law school among the top 40 in scholarly impact. The University of St. Thomas enters into the First Tier at #38 in the Scholarly Impact Ranking, while being ranked by U.S. News in the Third Tier.

The authors of the study, it bears noting, are professors at St. Thomas. I sympathize with their frustration. St. Thomas has an excellent faculty (I have visited the school, and two dear friends of mine, leading scholars in their respective fields, are on its faculty).

What the authors fail to mention, however, is the reverse implication of their contention. For every law faculty that is under-valued, there must be a law faculty that is over-valued. This raises uncomfortable questions. Are the authors claiming that their faculty has a greater “scholarly impact” than, say, the law faculties at Iowa, Alabama, San Diego, William & Mary, Fordham, Florida, Wake Forest, Wisconsin, Boston College, Tulane, etc.? (just a few of the 150 or so law faculties that score below St. Thomas).

A more dramatic example will help illustrate my point. Cal-Irvine, a brand new law school, is ranked 9th by the “scholarly impact” study. Would the authors say that the Cal-Irvine faculty (impressive as it is) has a greater “scholarly impact” than the law faculties at Cornell, Duke, Michigan, Pennsylvania, UCLA, Virginia, Vanderbilt, Texas, Georgetown, Minnesota, Illinois, etc.? (all ranked below Cal-Irvine).

These claims strike me as patently implausible. And I suspect many law professors would have the same negative reaction. This contrary intuition suggests that perhaps there is something wrong with how the authors have constructed the ranking or with what they claim the ranking represents (of course, it is also possible that my contrary intuition is wrong). If the authors are not in fact making these claims, then what are they claiming?

This is not just about where St. Thomas and Cal-Irvine stand. All of the law faculties mentioned above are talented and prodigious. The proposition that each can be precisely slotted relative to the others by “scholarly impact” calls out for skepticism.

Not surprisingly, the closer one looks at the study the less sound it appears. To produce their ranking (following Leiter’s methodology), the authors count the number of times tenured law professors at each school are cited in the Westlaw law journal database (more accurately: they count the number of times a professor’s name is mentioned, including acknowledgments and other non-substantive references). They multiply the mean citation count of the tenured faculty by 2 and add this to the median citation count.
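In other words, the study’s score for a faculty is twice the mean of its tenured professors’ citation counts plus the median of those counts. A minimal sketch of that arithmetic follows; the five-person faculty and its citation figures are invented for illustration and appear nowhere in the study.

```python
from statistics import mean, median

def weighted_citation_score(citation_counts):
    """Leiter-style score: twice the faculty mean plus the faculty median.

    citation_counts holds one total per tenured professor, counted the way
    the study counts them (name mentions in the Westlaw law journal database).
    """
    return 2 * mean(citation_counts) + median(citation_counts)

# A hypothetical five-person faculty, purely for illustration:
print(weighted_citation_score([400, 250, 180, 120, 90]))  # 2 * 208 + 180 = 596
```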

If we are truly measuring the “scholarly impact” of a law faculty—and if we assume that citations are a good proxy for scholarly impact (more on this later)—then it seems that the most straightforward way to compile the ranking is simply to add up the total number of citations for all professors on each faculty.

The smaller schools will protest that this would put them at a disadvantage, but why should that be a reason to adjust how we count? If the ranking purports to measure the scholarly impact of each law faculty, size matters. Cal-Irvine has a small group of frequently cited professors (boosted by the presence of the most highly cited person in legal academia, Erwin Chemerinsky), but it can hardly be maintained that their “impact” exceeds that of larger faculties with a greater total number of citations (think of Texas or Georgetown). It might even be said that the study design hands an unwarranted advantage to small faculties that possess a couple of highly cited scholars (because the mean—which counts double—is more subject to influence on a small faculty).
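To make the small-faculty effect concrete, consider a toy comparison built on citation figures that are entirely hypothetical (nothing like them appears in the study). One star hire moves the double-weighted mean of a ten-person faculty far more than that of a fifty-person faculty:

```python
from statistics import mean, median

def weighted_score(counts):
    return 2 * mean(counts) + median(counts)

small = [150] * 9   # hypothetical 9-person faculty, 150 citations apiece
large = [150] * 49  # hypothetical 49-person faculty, same per-person count
star = 2000         # one highly cited hire (an invented figure)

print(weighted_score(small + [star]), sum(small + [star]))  # 820.0  3350
print(weighted_score(large + [star]), sum(large + [star]))  # 524.0  9350
```

On the study’s formula, the ten-person faculty handily outranks the fifty-person faculty even though it has roughly a third of the total citations.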

Another counter-intuitive oddity lies in the details of the study.

The authors presumably would agree that there is no real difference in “scholarly impact” between schools ranked right next to each other. It seems reasonable to assume, for example, that the faculties ranked 27th and 28th are roughly equal, especially since their citation scores are not far apart. A real difference in “scholarly impact” exists only between schools that have markedly different citation scores—like the 200 points that separates the school ranked 14th (Penn) from the school ranked 23rd (Emory). Right?

Not so fast. It turns out that Yale (#1) scores 200 points higher than Harvard (#2). Is it plausible to say that Yale exceeds Harvard in “scholarly impact” by a significant margin? Consider further that Yale’s weighted score is almost 600 points above Columbia’s (#6). Yale’s score exceeds Columbia’s by more than a third. For perspective, note that the next school on the list, Berkeley (#7), is 600 points above Wisconsin (#55). In terms of citation score, then, Yale is superior to Columbia in “scholarly impact” to the same extent that Berkeley is superior to Wisconsin.

It should be evident that there is something problematic with measuring “scholarly impact” in this way. I doubt that even the folks at Yale believe they are so superior to Harvard and Columbia (and Chicago, Stanford, NYU, etc.). These fabulous faculties are interchangeable (literally, professors at these schools regularly move from one to the other, and back). Indeed, in this respect the US News Academic Reputation rating appears to be more accurate than the scholarly impact ranking because the former has Yale and Harvard in a tie, followed closely by Chicago, Stanford and Columbia in a tie.

Another problem with the study becomes apparent when we understand why Yale fares so well. Substantially more articles are published in law journals on constitutional law than on any other subject. Consequently, constitutional law scholars are cited far more often than other scholars. (When ranked by citation count, for example, the twentieth-ranked constitutional law scholar has more citations than the first- or second-ranked scholar in almost all other fields.) Yale scores so highly in large part owing to its extraordinary cadre of constitutional law scholars—with five faculty members placing among the top ten in citation counts in this category. Yale has a terrific faculty across the board, of course, with leading scholars in every field. But so do its peer schools. What sets Yale apart in citation counts is its particular strength in constitutional law, combined with the fact that constitutional law is the most written-about topic.

Is it sensible to construct a “scholarly impact” score that is skewed by academic fashions? The infatuation of law review editors with constitutional law is not quite a fad—it has been dominant for some time—but I have never seen a persuasive argument that constitutional law scholarship is the most important or influential legal field. Perhaps a more accurate ranking of “scholarly impact” would equalize the weight of citation counts across legal fields.
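One way such an equalization might be implemented (sketched here with field averages and a faculty roster that are entirely made up, since the study supplies no such figures) is to express each professor’s citation count as a multiple of the average count in his or her principal field before applying the formula:

```python
from statistics import mean, median

# Hypothetical per-field average citation counts and a hypothetical
# three-person faculty; none of these numbers come from the study.
field_average = {"constitutional": 800, "tax": 150, "trusts": 100}
faculty = [("constitutional", 1600), ("tax", 300), ("trusts", 100)]

# Each count becomes a multiple of its field's average, so a field that
# happens to attract many citations no longer dominates the score.
normalized = [count / field_average[field] for field, count in faculty]

print(round(2 * mean(normalized) + median(normalized), 2))  # 5.33
```

How the field averages themselves would be set, and whether the correction would cure more distortion than it introduces, is itself an open question.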

I don’t have an answer to this question. I raise it to show, once again, that this new ranking system is problematic in fundamental ways.

When raising these doubts, I have deliberately stayed away from a pivotal underlying question: Is a citation a reliable indication of “scholarly impact”? The legal academy follows a notoriously bizarre citation system: virtually every assertion in an article must be supported by a citation (example: “The rule of law is an important notion around the world today.” See Tamanaha, On the Rule of Law). This requirement inflates citation counts; law professors often rely upon research assistants and law review editors to produce the obligatory citations; the students often produce these citations by raiding the footnotes of other articles that previously made the same point; a source cited once can thereby become a source cited many times (dirty secret: one cannot assume that a source cited was actually read by the professor who cites it). This underside of citation practices—which throws doubt on the proposition that citation counts are a reliable measure of scholarly impact—is well known.

Some of my concern would be allayed if the authors renamed the study with a more accurate title: “Ranking of Law Faculties by Mean (x 2) plus Median Citation Counts.” To claim anything more than that is unsupported.

The authors might respond that if I don’t like their study I should devise a better one. But that would miss my overarching point: the entire exercise is misguided. Scholarly impact—a judgment that depends upon various concrete and inchoate factors—cannot be measured in terms refined enough to produce a valid ordinal ranking. We can make broad brush assessments (supported by relevant data) about the superior productivity of certain law faculties in comparison to others, but little more can be said.

Legal academics excoriate US News for its ranking. The right course of action, it seems to me, is not to construct an alternative ranking of law faculties—particularly when every such ranking will suffer from significant distortions—but instead to eschew the ranking enterprise altogether. An ordinal ranking of law faculties is particularly inapt (US News ranks law schools, not law faculties as such).

Alas, the new ranking will have predictable consequences, some of them unseemly or downright pernicious. Soon, a number of law school websites and promotional mailings will proudly tout their impressive “Scholarly Impact” ranking. Deans will brainstorm ways to increase the citation counts of their faculty (easy tips: hire more constitutional law scholars; encourage your faculty to liberally cite colleagues or thank them by name). The citation counts of potential hires will be considered in lateral hiring decisions. Law schools will be forced to pay a premium to keep highly cited professors on their rosters for purposes of the count (even if in name and paycheck only); more of these folks will get simultaneous appointments on two faculties (both schools are credited with the full citation count). Professors will point to their high citation counts as grounds for a bigger raise (deans will use low citation counts to justify a lower raise). And so on.

The authors might protest that they cannot be blamed for misuses of their ranking system, but that hardly matters in the intensely competitive environment of legal academia, which puts pressure on law schools to maximize every measure that factors into anything that bears the dreaded title “Ranking.”

As the collective over-emphasis on citations gathers momentum, it will be too late to remind ourselves that the “scholarly impact” ranking has no more merit than a superficial beauty contest that fixates disproportionately on a couple of flashy features. The ranking will have a negative impact on legal academia simply because it exists and cannot be ignored.

The authors are not doing something better than US News—they are repeating its error of producing a ranking of dubious validity without heed to its negative consequences. The harm inflicted on law schools by US News will be compounded by an additional set of harms that follow from this new “scholarly impact” ranking. This time, however, the wounds will be self-inflicted because law professors are constructing and promoting the new ranking (lending it a patina of credibility).

Comments:


These are all excellent points, Prof. Tamanaha. I just wanted to point out one large additional cost of normalizing the use of and reliance on such citation-count statistics: they would be an immensely tempting and ultimately damaging proxy for law review editors to use in the "blind" submissions process.

If, as is commonly acknowledged, law review editors are already unduly influenced by the prestige of the authors submitting (usually ascertained by CV), the effect of citation rank (which would at some point become standard information on a CV) would be to aggravate the problem. One could easily see editors at top journals adopting rules-of-thumb such as not considering anyone in the top-X of citation rank.

All this would, of course, push the importance of citation rankings even more, further incentivize gaming the system, and produce even more damaging status-stratification, and less attention to the merits of pieces, in law publication than currently exists...
 

Sorry, in the previous comment meant to say "not considering anyone not in the top X of citation ranks."
 

I'd have more sympathy with academics in their complaints about all these sorts of rating systems if academia had a history of providing good, concrete, usable information to students, prospective faculty members, donors, and taxpayers that would allow outsiders to discern the differences in quality between various schools.

However, that has never happened. For instance, I remember that when I was applying to colleges and then again law schools, I received lots of information from the admissions offices of various institutions, but almost all of it was basically ad copy. US News at least tries to give people useful information that might help them make better decisions.

If academics really want to decrease the influence of ratings that they believe are misleading, the only way they will ever succeed in doing this is to create their own metrics that give people information they need and allow them to make comparative decisions. And I wouldn't hold my breath waiting for this to happen.
 


I totally support the new ranking system, except for the part where Yale and Harvard are on top of Chicago. Once they fix the bug that created such a clearly anomalous result, it will be perfect.
 

I would like to see more metadata transparency in ranking systems. In reading the St. Thomas paper's grid of top names at each law school, several appeared to me to reflect the news cycle and public debate on the blogs rather than research that demonstrates a discernible quantum of excellence.

However, if approaches such as St Thomas' new instrument reflect a greater instantaneity of understanding the tapestry of many branches of the law, I support such fresh efforts simply for the diversification they add to the dialog about quality and quantity of curricula and faculty.
 

Brian Tamanaha's argument against the scholarly impact ranking we at the University of St. Thomas recently released hinges in large part on the following assertion of implausibility: "Cal-Irvine, a brand new law school, is ranked 9th by the 'scholarly impact' study. Would the authors say that the Cal-Irvine faculty (impressive as it is) has a greater 'scholarly impact' than the law faculties at Cornell, Duke, Michigan, Pennsylvania, UCLA, Virginia, Vanderbilt, Texas, Georgetown, Minnesota, Illinois, etc.? (all ranked below Cal-Irvine). These claims strike me as patently implausible."

Why? Why is that "patently implausible"? Why is it so obvious that every single one of these law faculties must be endorsed as manifestly making a stronger scholarly impact than California-Irvine? Beyond the echo chamber that entrenches past law school reputations, what concrete basis is there for this conclusion? Brian draws upon his "intuition" about the relative strengths of scholarly activity at various law schools, while dismissing objective evidence that may contradict the conventional wisdom. In our view, this common tendency toward reliance on gut feelings and anecdotes only confirms that a methodical measure of scholarly impact is valuable and a worthwhile challenge to too-easy assumptions.

With respect to California-Irvine, certain qualifications and limitations should be emphasized, which were forthrightly stated by Brian Leiter when he posted the top 25 scholarly impact ranking earlier this spring. (I should emphasize here, however, that I speak only for myself and my comments shouldn’t be attributed to Brian Leiter.) First, whether looking at the top 25 in scholarly impact or at the extension of that ranking to the top 70 that we at the University of St. Thomas recently released, those law faculties that are ranked closely together should not be seen as significantly different, if at all, from each other. So no one should assert, and I do not, that California-Irvine has a "greater 'scholarly impact' than" all of the other schools that Brian Tamanaha lists above, several of which are nestled together in the top 25 rankings. Second, because it is so new and is still building its faculty, California-Irvine is the one law faculty for which scholarly impact scores were calculated based on prognostications according to a formula that, again, was candidly described when the top 25 ranking was posted. Third, we’ve never claimed that scholarly impact scores say everything about scholarly prominence of a law faculty and instead have carefully outlined in our report various limitations and qualifications.

That being said, instead of resting on subjective and historical impressions, I'd suggest that anyone truly interested in the subject spend a couple of hours looking at the faculty assembled at California-Irvine (their faculty web page is very well-organized and easy to navigate), checking faculty C.V.s, running some Westlaw searches, reading some of their published works, etc.

The pertinent scholarly impact question is whether the work of the present roster of faculty at a law school is percolating among other legal scholars and helping define and contribute to the national scholarly discussion in the legal literature.

Whether based on scholarly impact scores or an alternative measure of scholarly productivity -- both of which are addressed in our scholarly impact study report -- is it plausible that the present law faculty gathered at California-Irvine may have a greater scholarly impact than many, perhaps most, of the law schools typically ranked in the top 40 or so? In a word, "yes."
 
