Tuesday, December 11, 2007

Why Don’t Sports Teams Use Randomization?

Ian Ayres

Here's a cross post of an entry I just put up on the Freakonomics blog.

Randomized trials are the gold standard of medical testing, but so far no one has come up with a single example of a team that has used a randomized controlled trial to test alternative sports strategies.

I hereby offer a free copy of Super Crunchers to the first person who can point me to a published randomized study of sports strategy. (I might even send one to the first person who can convince me that such a study has been done, even if it hasn't been published.)

I'd be happy to help a coach of virtually any sport, at any level, design and run a randomized test of alternative strategies.
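The design of such a test is simple in outline: randomize which strategy gets used on comparable plays, then compare success rates. Here is a minimal sketch in Python, where the play names, success probabilities, and sample size are all hypothetical stand-ins for real game data:

```python
import random
from statistics import NormalDist

random.seed(42)  # seeded only so the example is reproducible

def simulate_play(success_prob):
    """Stand-in for a real play outcome (1 = the play succeeded)."""
    return 1 if random.random() < success_prob else 0

# Assumed true success rates -- unknown to the coach in a real trial.
trials = [("sweep", 0.45), ("dive", 0.55)]
results = {name: [] for name, _ in trials}

for _ in range(200):  # 200 comparable snaps
    name, p = random.choice(trials)  # the randomization step
    results[name].append(simulate_play(p))

def z_test(a, b):
    """Two-proportion z-test; returns the z statistic."""
    pa, pb = sum(a) / len(a), sum(b) / len(b)
    pooled = (sum(a) + sum(b)) / (len(a) + len(b))
    se = (pooled * (1 - pooled) * (1 / len(a) + 1 / len(b))) ** 0.5
    return (pa - pb) / se

z = z_test(results["sweep"], results["dive"])
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

Because the coin flip, not the coach's hunch, decides which play is called, any difference in outcomes can be attributed to the plays themselves rather than to the situations in which the coach chose to use them.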


I suspect the reason most teams don't use random strategies is that randomization doesn't allow the players to make in-the-moment decisions that exploit an opponent's weakness.

If a football team is really bad at defending the run, you don't want to run at them in a random pattern; you want to run as much as possible. And if the quarterback gets to the line and sees that the other team is expecting a run, you don't want to force him to execute a play that he knows is going to fail.

Of course, if your QB is facing the Miami Dolphins, he can run pretty much any play he wants. The Dolphins really suck.

Didn't Mike Tyson once say that everyone has a plan until I hit them?

After that the strategy is randomized.

Beyond that, it is definitely going to be difficult, mostly because the losing team always appears to have run the double-blind randomized trial, whereas the winning team appears to have unsealed the keys to that trial before the game.

Do you consider chess a sport? Although it isn't usually a team sport, there are lots of randomized trials performed by computers in chess.

This stuff is above my pay grade, but I believe there has been a lot of this kind of work on penalty kick strategies in soccer (including a paper by Chiappori, Groseclose, and Levitt).

I think there are some players that use the random strategy but they'd be at a disadvantage to admit it, no?

Oops. I guess you meant BESIDES that Levitt study. I didn't read your full NYT post until after I replied here in the blog.

Prof. Ayres,

Great post. I've often wondered about this too (but only in a cursory way). Quantitative analysis is slowly gaining traction in sports decision making, but there is resistance.

After practicing sports law for several years, I recently returned to school to earn a Ph.D. I follow tennis closely (mainstream and academic). Offhand, the earliest "randomization" article I know of is by Jackson and Mosurski (1997): "Heavy Defeats in Tennis: Psychological Momentum or Random Effects."

Regards, Ryan

Yes, I've been thinking about alternative strategies for baseball. Why is there always a closing pitcher but never an opener? Is there a rule against it? Why aren't there more trick plays like the one in the movie "Rookie of the Year," where the first baseman carries the ball back to the base after a meeting at the mound for an easy pickoff? In basketball, how about building a team entirely of good three-point shooters and shooting nothing but threes? Thoughts, anyone?

When I was a kid, in elementary school, I read a book from my family's bookshelf, _How to Lie With Statistics_ (1954). Maybe this fed my skepticism or just put me on guard; I don't know. But number crunching itself needs to be subjected to some analysis.

The key to useful analysis is first to collect data as accurately as possible.

Analysis may reveal a potential trend, but analysis that has the end goal of producing a trend is already biased, regardless of the desires of the analyst. What do I mean by this? Analysis is not an end product, but merely a way to sharpen your original hypothesis. The cycle must start again to test (prove) the hypothesis.

Maybe after a number of iterations the analysis becomes more valid, but not on the first iteration, and maybe never in the hands of a disinterested analyst looking to discover a real link between cause and effect.

As I said, the key is taking data, but what that means is not instantly obvious. Taking data (recording observations) implies that the researcher is paying attention to something. It might be the wrong thing, and the instruments might not be very accurate, but the researcher is paying attention.

Contrast this with a disinterested analyst. Possibly the analyst has no domain-specific knowledge, no unrecorded operational knowledge with which to adjust the raw analysis. This could be good or bad, but without consultation with the researchers whose measurements produced the data, raw analysis is not necessarily meaningful.


Every player/team should include some randomness as part of their strategy. If you don't, you develop patterns that can be exploited.
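This kind of pattern-proofing is just a mixed strategy in the game-theory sense: fix the proportions, then let a random draw make each individual call. A minimal sketch in Python, with entirely hypothetical play names and weights:

```python
import random

# Hypothetical mix of play types: fixed long-run proportions,
# but each individual call is a fresh random draw.
play_weights = {"run": 0.5, "short_pass": 0.3, "deep_pass": 0.2}

rng = random.Random(0)  # seeded only so the example is reproducible

def call_play():
    plays = list(play_weights)
    weights = [play_weights[p] for p in plays]
    return rng.choices(plays, weights=weights, k=1)[0]

calls = [call_play() for _ in range(1000)]
# Over many calls the empirical mix approaches the intended weights,
# while no sequence of past calls reveals the next one to the defense.
print({p: round(calls.count(p) / len(calls), 3) for p in play_weights})
```

The point of the weights is that you can still lean on an opponent's weakness (run more against a bad run defense) without ever becoming predictable on any single snap.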

There are no controlled experiments in professional sports because winning real games is too important. However, there is a lot of analysis (among smart teams) of natural experiments in actual play.

Commenters seem to misunderstand you to mean randomized, game-theoretic play. If I'm the one who misread, I give you Bill Walsh's scripted games, dating from the late '70s at the latest.

About seven years ago, Mark Cuban responded to a fan, a vision-research scientist, who suggested that the wiggle sticks used to distract opposing free-throw shooters would be more effective if deployed in unison, as a wave, rather than wiggled randomly. The scientist proposed a controlled trial of random vs. organized wiggling. Cuban had the fans follow the scientist's advice, and the sticks were waved in unison. The opponents' free-throw percentage for the game actually increased, so the experiment was terminated at n = 1. I have forgotten the name of the scientist who related this story, but someone in the Mavericks' home office should be able to confirm it.
Considering all the voodoo in sports, I think teams generally don't have the stomach for randomized controlled trials.


Your comment reminded me of a question asked of the loser: Would you do it again the same way? No. Why? It didn't work. Doh. When the measure of success is winning or losing, it is very difficult to overcome this.

Maybe it could work in an individual sport like golf, where it is easy to measure more than just win/lose. On each hole you can measure yourself against the field and against par. I understand the handicap system is very well tuned: each course and each hole have more than just a par, they have handicaps, so you can judge yourself 18 times a round and compare that to yourself and to everyone else over time.

In addition, in golf there is at least a deliberate intent to implement a strategy on each shot, and the player knows how well the strategy worked. These strategies are usually developed over time with a coach and the caddy. In one tournament you get four tries at each hole: lots of data, relatively free of the confounding of sports where your opponent can influence your success. Also, the golfers not in the running to win are more willing to discuss their strategy.

Excellent posts. I particularly liked the wiggle-sticks idea. It would be easy for the student cheerleaders at Duke to randomize over the dozen or so distractions to see which is most effective at reducing free-throw percentage.

wcw is correct that many of the comments fail to distinguish between randomized strategies and randomized tests of which strategies work best. Of course lots of teams play random strategies (and there are empirical studies testing whether their strategies are in fact random). But there haven't been randomized tests of whether one (random or non-random) strategy works better than another.

Lots of interesting explanations for the absence of randomization. The one that doesn't work for me is that the game is too important. We use randomized tests in medicine when life and death are at stake. And professional teams could pay minor-league and college teams to run tests for them (and pay them to keep the results quiet).

Mike Marshall (the pitcher, not the batter) suggested throwing a random selection of pitches to prevent the hitter from being able to guess what was coming next.
