Can Crunching Help You Prepare for a Colbert Interview?
Ian Ayres
Greetings from London, where Super Crunchers has come out under the alternative subtitle "How Anything Can Be Predicted." There are at least two readings here (neither of which I particularly like). A Balkin-like wag could say that statistical analysis is so easily manipulated that it can predict anything you want it to. Or it can be taken to mean that Super Crunching can usefully produce predictions about any and every process.
Crunching numbers can’t help you predict everything, but it can uncover the hidden causes behind a lot more things than you might imagine.
There’s almost an iron law that people have trouble imagining how number crunching could help them make decisions. People tend to think that whatever they do is too specialized, too unquantifiable for computers to help. It’s easier for us to imagine how it could help someone else than ourselves.
Stepping back, number crunching is most likely to help when there are lots of examples from the past and when the thing that you want to predict is a function of several causal factors. It turns out that humans are pretty bad at putting the proper weights on multiple causes.
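To make that weighting point concrete: a regression is the standard tool for learning those weights from past examples. Here is a minimal Python sketch on made-up data — every number and variable name below is invented for illustration, not drawn from any real study:

```python
import numpy as np

# Illustrative only: 200 fake "past examples," each described by three
# candidate causal factors, plus an outcome we want to predict.
rng = np.random.default_rng(0)
n = 200
factors = rng.normal(size=(n, 3))          # e.g., traits of each case, scaled
true_weights = np.array([0.7, 0.1, -0.4])  # weights a human would struggle to guess
outcome = factors @ true_weights + rng.normal(scale=0.5, size=n)

# Ordinary least squares recovers the weights from the examples:
X = np.column_stack([np.ones(n), factors])  # add an intercept column
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("estimated intercept and weights:", coefs.round(2))
```

Given a new example's factor values, the fitted model applies those weights the same way every time — the consistency that human judgment tends to lack.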
For fun, I tried to see if number crunching could help predict what kinds of questions Stephen Colbert asks when he interviews people at the end of his show. So my 12-year-old coauthor Henry and I coded up more than 250 questions. Let me be clear. This is not Super Crunching. This dataset is minuscule compared to the terabytes of data that are often mined these days. And I only controlled for a handful of variables. This exercise is more a provocation than scholarship.
Nonetheless, here’s what I found.
On average, Colbert asked hostile questions (e.g., “Isn’t it true that . . .”) 31% of the time. His questions were self-referential 39% of the time. His questions took a premise to a logical extreme a whopping 56% of the time. And his questions were grammatically framed as statements about 39% of the time.
But what’s more interesting is that these percentages varied depending on the type of guest.
When the guest was identifiably liberal, Colbert was . . . .
20.5% more likely to ask a hostile question (p < .01); and 15.7% more likely to ask a self-referential question (p < .05).
[There were no statistically significant effects for identifiable “conservative” guests.]
When the guest was put forth as an “expert,” Colbert was . . .
14.9% less likely to ask a hostile question (p < .05); and 15.0% less likely to frame the question as a statement (p < .05).
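For readers wondering how gaps like these are assessed for significance on a sample of roughly 250 questions, one standard approach is a two-proportion z-test (the post doesn't say which test was actually used). The sketch below uses invented counts chosen only to mimic the reported magnitudes; they are not the actual data:

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented counts for illustration only (the post reports rates, not counts):
# suppose 34 of 75 questions to liberal guests were hostile, versus
# 44 of 175 questions to all other guests.
hostile = [34, 44]
totals = [75, 175]

z_stat, p_value = proportions_ztest(hostile, totals)
gap = hostile[0] / totals[0] - hostile[1] / totals[1]
print(f"gap = {gap:.1%}, z = {z_stat:.2f}, p = {p_value:.4f}")
```

With counts of that rough size, a 20-point gap comes out highly significant (p well under .01), which is consistent with the levels reported above.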
It’s easy (after the fact, once you've seen the results) to say to yourself, "I could have predicted he would ask liberals more hostile questions." So to keep you honest, you can test your ability to predict before seeing the statistical results.
Post a comment and tell me if you think there are any differences in how he treats men vs. women, or in how he treats famous vs. non-famous guests. I'll post the results later on.

Posted 1:04 AM by Ian Ayres
Comments:
That's fun. I'll bite. I guess he'll take a premise to a logical extreme more often with famous guests than with non-famous ones.
Maybe the analysis should focus on how you decided to frame the questions, and on how you present your results.
If 31% of the questions are hostile, and a 'liberal' guest draws 20% more hostile questions, that moves the percentage up to about 36%. So the increase is _only_ 5% of the total questions. Wow! See, the lying part comes from putting percentages everywhere as if they all have the same basis.
Because of my high cholesterol numbers, statistically I have a 200% increase in my risk of death over what it would be with lower numbers. But overall, my chance of death is 3% over 10 years (up from 1%). Looking at all the numbers, my chance of death is lower than the average for a person 10 years younger than I am. Should I take a statin drug to lower the numbers? If the risk of side effects is 1%, but my absolute risk reduction is only 2% (not a 67% reduction), what should I do? Maybe avoiding hostile questions from Colbert, or just watching them directed at others, will do the trick.
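To spell out the relative-versus-absolute distinction this comment is drawing, here is a quick sketch using the numbers above (all figures are the commenter's hypotheticals, not real medical data):

```python
# Relative vs. absolute risk, using the hypothetical numbers above.
baseline_risk = 0.01   # 1% chance of death over 10 years with lower cholesterol
elevated_risk = 0.03   # 3% with high cholesterol

relative_increase = (elevated_risk - baseline_risk) / baseline_risk
absolute_increase = elevated_risk - baseline_risk
print(f"relative increase: {relative_increase:.0%}")        # 200%
print(f"absolute increase: {absolute_increase:.0%} (percentage points)")  # 2

# A statin cutting the 3% back to 1% is a 67% relative reduction but only
# a 2-point absolute reduction, weighed against a 1% side-effect risk.
relative_reduction = (elevated_risk - baseline_risk) / elevated_risk
print(f"relative reduction from statin: {relative_reduction:.0%}")  # ~67%

# The same ambiguity applies to the interview numbers: "20% more likely"
# reads very differently relative to the 31% base rate vs. as an absolute gap.
base_rate = 0.31
print(f"relative reading: {base_rate * 1.205:.1%}")  # ~37.4%
print(f"absolute reading: {base_rate + 0.205:.1%}")  # ~51.5%
```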
Looking at these numbers alone obscures the fact that a qualitative judgment has to be made to classify the questions in the first place. There were no quantitative measures used to determine whether Colbert asked a "hostile" question (a dubious category, considering the post-modern nature of the Colbert persona). These judgments depend on word content (possibly quantifiable, but not quantified here), as well as on facial expressions and body language (much harder to quantify).
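One standard check on this worry is inter-rater reliability: have two people code the same questions independently and measure their agreement beyond chance, for example with Cohen's kappa. A minimal sketch with hypothetical labels (nothing here comes from the actual coding):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical example: two coders independently label the same ten
# questions as hostile (1) or not (0); kappa measures agreement beyond chance.
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
coder_b = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]
print("Cohen's kappa:", cohen_kappa_score(coder_a, coder_b))
```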
This sounds precisely like political science . . . for better and often for worse, I'm afraid. Of course your hypothetical is practical (if you are asked to guest on Colbert), while most political science proxies are rather inane.
I agree with Zathrus that Colbert is an odd choice here, given that he plays a character whose "opinions" the real Stephen Colbert is often trying to mock. So the actual intent behind questions such as "Isn't it true that people who say what you just said hate America?" isn't really hostile.
Maybe that doesn't matter, because the point is to predict certain types of question, whether they are meant sincerely/literally or not. But I would wonder about accurate classifications.
I also wonder where this repeated Colbert question fits on the scale:
"George W. Bush: great president, or greatest president ever?"
I find it a bit difficult to believe your categorization of "hostile" questions (apparently based on syntax) is accurate. For example, while "Isn't it true that X" frames a hostile question most of the time, the question can easily be a softball if X is supposed to be a parody of a conservative point of view or otherwise obviously absurd.
I'm not a big Colbert fan, but from what I've seen of the show he seems to ask a nontrivial number of fake hardball questions. If your description of how the questions were coded is accurate, I find it difficult to take the results seriously.
Why are we avoiding the real question about Colbert? We need to know whether his claim that he doesn't see color is true. Is there a difference between the questions he asks people based on their skin color?
[For the irony-impaired or people who haven't seen the show, Colbert's supposed inability to see skin color is one of his longest-running gags.]