Balkinization  

Friday, March 09, 2007

The Autoadmit controversy: Some notes about social software, code, and norms

JB

The Washington Post recently ran a story about Autoadmit.com, a bulletin board site on which prospective students at colleges and law schools talk about the admissions process. Several Yale law students have been targeted for harassment and defamation at the site; links to one student's pictures have been placed on the site [not uploaded to the site as previously reported], and commenters have made lewd comments about her appearance, her breasts, and her alleged sexual habits. After the student complained, other commenters exhorted people to follow her around and take pictures of her when she goes to the gym to work out. The site has also hosted various racist and bigoted comments, not to mention plenty of other comments that are in extremely bad taste.

The Washington Post story reflects an all too familiar tale about social software: a lot of people use the site anonymously and therefore feel they bear no responsibility for how they harm other people. When you have that combination of anonymity and unaccountability, and when nobody in the community-- including the site administrator-- stands up for and signals the importance of decent behavior on the site, you inevitably get bad results.

I'm a professor of First Amendment law, so let me make one thing clear at the outset: People may have free speech rights to make some or even most of these comments. But we should never confuse whether one has a constitutional right to say something with the question of whether one is right in saying it. Comments that harass people and invade their privacy are completely inappropriate; moreover, they have nothing to do with the goal of the site, which is to share useful information about the admissions process.

It is very unlikely that the operators of Autoadmit can be held liable for the comments posted on the site; nor should they be, in my opinion. But that is really not the most important question. The real question is whether the site administrators should, as a matter of common decency, work to change social norms or to change the code on their site to prevent the site from being used to harass people and invade their privacy. Doing this does not violate anybody's free speech rights. In fact, what it does is make the site more useful for people's speech about admissions and encourage a better kind of information sharing.

One of the central insights of cyberlaw is that how you structure a social software system has a great deal to do with how valuable the system turns out to be. A second insight is that code can't do everything; social norms matter just as much. If the site administrator or the other users of the site made clear that they do not condone harassing behavior, that would go a long way toward making people behave. At the same time, code and norms interact in important ways. Redesigning the code can help beneficial norms develop and sustain them over time. And that leads to my basic point: the site administrator controls the code, and he or she can go a long way toward encouraging good social norms.

The problem with the owners of Autoadmit is not that they have done something illegal; it is that they have failed to design and maintain their system properly. They have implemented what turned out in practice to be a bad design whose problems should have become obvious to them; moreover, once its flaws became apparent, they failed to lift a finger to fix things. They have completely abdicated their responsibility to encourage socially beneficial norms in the community that uses the site. As a result, the site is far less valuable than it could be, and it allows a small group of immature miscreants to injure people at will. The owners of the site may bear no legal responsibility for this, but that does not mean that they bear no responsibility at all.

Let me say something about the prevailing law. Under section 230 of the Communications Decency Act (part of the 1996 Telecommunications Act), Autoadmit is probably immune from suit for hosting comments that are defamatory or harassing. Although section 230 does not immunize some violations of federal electronic privacy law, it probably does apply to the ordinary privacy tort of disclosure of true, embarrassing private facts. Autoadmit is probably also not liable for the use of the site to encourage other persons to stalk one of the female Yale law students and take pictures of her at her gym.

Since Autoadmit can't be sued, the only targets of legal action are the actual persons who posted the comments. It may be hard to find them because they used pseudonyms. In theory, the aggrieved law students could try to subpoena the ISPs the commenters used in order to discover their identities. But that might not succeed.

A better approach, I think, would be to recognize that this case is about code and social norms. People should put moral pressure on the site administrators of Autoadmit to denounce bad behavior on the site and to change the code on the site to encourage good behavior and to limit comments that harass and invade people's privacy.

There are two obvious ways to recode the system. One involves moderation. The other involves identified anonymity. Each solution has its costs and benefits. Although neither solution is required by law, each is perfectly legal. Moreover, if the site owners institute either solution, they are insulated from liability under the very same statute, section 230, which holds site owners harmless if they remove content, even if they do so selectively.

The first way to deal with harassing and defamatory comments is for site owners to moderate comments. The difficulty is that this can be very time-consuming given the volume of comments. However, the site owners can create a system that delegates moderation to a team of volunteers, as Slashdot does. This takes much more programming work, but Slashdot shows that such a system can be very successful at self-policing. It essentially uses community norms backed by code to give people incentives to behave themselves.
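
To make the mechanism concrete, here is a minimal sketch in Python of how such delegated moderation might work. Everything in it-- the Board and Comment classes, the mod-point budgets, the display threshold-- is an illustrative assumption, not Slashdot's actual code.

    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        author: str
        text: str
        score: int = 1                                 # comments start at a neutral score
        moderators: set = field(default_factory=set)   # volunteers who have already scored it

    class Board:
        DISPLAY_THRESHOLD = 0   # readers hide comments scored below this line

        def __init__(self):
            self.comments: list[Comment] = []
            self.mod_points: dict[str, int] = {}   # volunteer -> remaining points

        def post(self, author: str, text: str) -> Comment:
            comment = Comment(author, text)
            self.comments.append(comment)
            return comment

        def grant_mod_points(self, volunteer: str, points: int = 5) -> None:
            # Hand out small, periodic budgets so no single moderator dominates.
            self.mod_points[volunteer] = points

        def moderate(self, volunteer: str, comment: Comment, up: bool) -> None:
            # Each act of moderation costs a point; volunteers may not score
            # their own comments or the same comment twice.
            if self.mod_points.get(volunteer, 0) <= 0:
                raise PermissionError("no moderation points left")
            if volunteer == comment.author or volunteer in comment.moderators:
                raise PermissionError("cannot moderate this comment")
            comment.score += 1 if up else -1
            comment.moderators.add(volunteer)
            self.mod_points[volunteer] -= 1

        def visible(self) -> list[Comment]:
            return [c for c in self.comments if c.score >= self.DISPLAY_THRESHOLD]

The point of the design is the incentive structure: scarce moderation points and one-score-per-comment rules keep any single user from silencing others, while the display threshold lets the community's aggregate judgment, rather than a censor, decide what most readers see.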

In fact, even the Slashdot solution may not be necessary. Moderation doesn't have to be all that comprehensive to work. It merely has to signal what appropriate behavior in the community is. Even if the site owners simply announce that they will occasionally remove some comments that, in their opinion, go way over the line, they will be making an important statement about social norms. At the end of the day, social software sites rely heavily on social expectations. Autoadmit has degenerated into a place where people feel they can get away with exhorting others to stalk female law students, harass them online, and invade their privacy. Clearly something has gone wrong here. The site administrators should lead by example. They should make a statement that this conduct is unacceptable and encourage the rest of the community to join in. Failing to do even this much to discourage bad behavior is not protecting free speech. It is simply shirking responsibility.

The second way to deal with harassing comments on bulletin board systems like Autoadmit is to use identified anonymity. This requires posters to adopt pseudonyms connected to verifiable e-mail accounts. (This is what Google now does on Blogger, for example.) A verifiable e-mail address means that the identity of the commenter can be traced. Indeed, if this system is adopted, the site owner could even institute a notice-and-takedown system along the lines of section 512(c) of the Digital Millennium Copyright Act (DMCA). But this, too, might be unnecessary, because the mere fact that one has to provide a traceable e-mail address should deter most people from misbehavior.
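
For readers who want to see what identified anonymity might look like under the hood, here is a minimal sketch in Python. The secret key, the confirmation link, the send_mail helper, and the storage dictionaries are all hypothetical stand-ins, not Blogger's or any other site's actual design.

    import hashlib
    import hmac
    import secrets

    SECRET_KEY = secrets.token_bytes(32)        # known only to the site owner
    pending: dict[str, tuple[str, str]] = {}    # token -> (pseudonym, email)
    verified: dict[str, str] = {}               # pseudonym -> email, held privately

    def sign(token: str) -> str:
        # Sign the token so a forged confirmation link is useless.
        return hmac.new(SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()

    def register(pseudonym: str, email: str) -> str:
        # Issue a one-time token and (notionally) mail out the confirmation link.
        token = secrets.token_urlsafe(16)
        pending[token] = (pseudonym, email)
        link = f"https://board.example/confirm?t={token}&sig={sign(token)}"
        # send_mail(email, link)   # hypothetical mail helper
        return link

    def confirm(token: str, sig: str) -> bool:
        # Activate the pseudonym only if the signed token round-trips intact.
        if not hmac.compare_digest(sig, sign(token)) or token not in pending:
            return False
        pseudonym, email = pending.pop(token)
        verified[pseudonym] = email   # traceable by the owner, invisible to readers
        return True

    def may_post(pseudonym: str) -> bool:
        return pseudonym in verified

Readers still see only pseudonyms; the mapping from pseudonym to e-mail address sits with the site owner, who can produce it in response to a subpoena or a takedown complaint.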

There are probably other ways to change social norms and to rework the code to discourage harassing behavior. I invite suggestions in the comments, and please remember, this blog is also a social software system, so play nice!

Comments:

Autoadmit has long been a staging area for this kind of behavior. It is ironic that a board that professes also to be a gathering place for the elite minds of T14 schools is known for such abhorrent behavior. Posting photos from a Harvard admitted students reception and rating the women 1-10, listing the most ridiculous African-American names, or singling out a student who was questioned about URM status-- these are personal and intrusive acts that should be moderated.
 

All of which is excellent reason that *all* people with dignity and with respect for our fellow human beings need to work together to improve our communities and workplaces. It's so unfortunate that in many legal workplaces and in academia, the burden of combating hatred and injustice perpetrated by the immature and deviant few-- who diminish everyone by their actions-- falls on affinity groups like the law school women's group or BLSA. I hope the legal community can use the awareness created by the Autoadmit controversy to move beyond such a narrow vision of where "our" interests lie.

Discussion forthcoming at the Legally Female conference on Saturday March 31 - including panels on "The Importance of Alliances: The Place of Men in a 21st Century 'Women's' Movement" and "Technology as Tool" (which will discuss the opportunities and downsides of internet and other technologies changing the legal workplace). Info/registration at www.legallyfemale.com.
 

There are, unsurprisingly, services that keep track of free email accounts, and thus allow moderators to ensure that their users don't rely on them.
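
The check itself is trivial to write; here is a minimal sketch in Python, assuming a hypothetical blocklist of throwaway-mail domains (the real services maintain much larger, regularly updated lists):

    # Illustrative only: real blocklists contain thousands of domains.
    DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

    def is_disposable(email: str) -> bool:
        domain = email.rsplit("@", 1)[-1].lower()
        return domain in DISPOSABLE_DOMAINS

    assert is_disposable("troll@mailinator.com")
    assert not is_disposable("student@yale.edu")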

Salon did it for years with TableTalk (before it became pay to play), and they had a worldwide audience.

To say it's impossible to implement is nonsense. It's already implemented in many places.
 

I came here via a link in the discussion at Feministe, where I have been saying exactly what you're saying -- only with smaller words, because I wasn't sure how many of the visitors there are painful newbies.

What is striking to me is how resistant most of the young lawyers and lawyer-wannabes (not just the XOXO posters) are to the idea that extra-legal social norms can have any force. See, e.g., Pierre Stallworth's comment above that "moderation would never work".

The fact is that moderation has a long track record of working elsewhere on the 'net, so why does it fail here?

It's not because the posters are "children", as Pierre suggests -- I asked an 11-year-old if it's OK to put someone's real name, address & phone number on the Internet if they didn't ask you to. She said, "Of course not! Everyone knows *that*."

AutoAdmit isn't full of children; it's full of adults who don't have the social consciences of children. It's so strikingly in contrast to the many, many other moderated groups I've seen on the 'net that I'm coming to the conclusion that there's something fundamentally wrong with the way American law students are selected and trained. Basically, you-all are getting *way* more than your share of sociopaths and near-sociopaths: people who see social rules (including laws) as something to be manipulated, but who don't seem to have internalized social norms.

This may explain why, of the many lawyers I've known, the only ones who really seem happy in their work are the public interest lawyers. The rest of you are spending too much of your time with assholes.
 

It's not obvious that Section 230 immunizes the AutoAdmit owners from liability. If it can be shown that they encouraged the use of their site for purposes of defamation, in order to drive traffic to it, then I see no immunity from suit under sec. 230(c)(1).
 

"Cohen said he no longer keeps identifying information on users because he does not want to encourage lawsuits and drive traffic away."

Comments like this might form the basis for a conspiracy to defraud claim. Even if section 230 precludes liability for the defamatory comments, a court willing to do equity could order a message board that is a conduit for unlawful conduct to retain basic user information, which could then be turned over, under court order, upon the correct showing.
 

Jimmy: your experience with XO doesn't explain-- at all-- why personally harassing comments should not be "censored." You can give people quite a bit of leeway and still draw a line somewhere. You seem to be suggesting that the only two options are completely unfettered speech, even when that speech is injurious, defies any and all social norms, and serves no purpose whatsoever; and Orwellian speech-control. There's no reason the site can't continue to function (1) as a community and (2) as a provider of useful information about law firms and law schools if Anthony Ciolli were to start minimally respecting the privacy of a handful of female law students, and their wish not to be sexually harassed or stalked-- not unless XO is in fact full of sociopaths (you would seemingly resist such a description; I might not).
 

Actually, Jimmy's comments do help me see a reason for the general level of harassment at AutoAdmit.

Jimmy's experience shows that the AutoAdmit boards include seriously valuable information -- but "information wants to be free". The harassing atmosphere raises the cost of that information for certain groups, especially women -- who are a very large proportion of the people who might theoretically benefit from that information.

Harassment is a way of having a purportedly open and honest resource that is nonetheless, in practice, only useful to a certain subgroup, mostly white males.

So in that sense this behavior is not sociopathic at all, but a rational strategy for dealing with the threat of free information.

But they're still assholes.
 

Jimmy:

"No one has to go to the site. There are plenty of heavily censored sites a law student can choose from, so I don't see how this raises the 'price of information.'"

You've said that AutoAdmit has more comprehensive info than other sites. Therefore, a woman (for instance) who wants that information must pay the price of dodging through the mounds of crap there. It is a non-monetary price, but it's still a price.

The fact that it's not an overtly commercial site shouldn't make any difference to the site's social norms: there are a metric gazillion free sites out there that are still well-moderated. All that's needed is a group of sensible people who can agree on moderation parameters. Other communities can do this; why can't AutoAdmit?

My reference to public interest lawyers is an observation based mostly on people farther along in their careers. In particular, when leafing through my Ivy League alma mater's 25th reunion book -- which included a *lot* of lawyers -- it was striking how much *happier* the public interest lawyers seemed to be in their careers, even though they're bound to be making less money. And that was how it seemed seeing them face-to-face, too: the PI lawyers tend to smile and bounce more than their cohorts.
 

Jimmy --

A woman doesn't have to be personally harassed to experience emotional costs from going to AutoAdmit. The posts there create a hostile environment even for women who are just lurking. This stress is what raises the cost of information for women, even lurkers.

As for the rest, I'm just trying to give enough info so you can see where my impressions are coming from. The important point is not the Ivy Leagueness, but that they're based on people a lot older than you are, old enough so we can see how each other's lives are shaking out.

What I see is that having a calling, a vocation, makes people happier than just having a job, and that PI lawyers are more likely to feel that their work is a vocation.
 

"It is very unlikely that the operators of Autoadmit can be held liable for the comments posted on the site; nor should they be, in my opinion. But that is really not the most important question."

Actually, it IS the most important question. The issue of Section 230 immunity is far from resolved, and when it is, a lot of internet companies may find themselves in bankruptcy because they refused to acknowledge the incredible risks they have been taking as distributors of defamation-- which is what AutoAdmit is alleged to be in this case.

Distributor liability for defamation has been the norm ever since defamation laws were developed in this country in the mid-19th century. That we passed these laws as an alternative to DUELING to defend our reputation should convey the original intent of the laws. The damage is hardly trivial.

The genesis of Section 230 immunity was to overturn Stratton Oakmont v. Prodigy, where Prodigy was held liable. Then came Zeran v. AOL, and the courts were off to the races. These rulings, and the law, however, came in the pre-Google era, and dealt with ISPs who had contact information for the original source. That is not true of search engines or even web boards.

In Barrett v. Rosenthal, in California, the Superior Court refused the immunity, but the state Supreme Court reversed, claiming that federal precedent required them to. They expressed severe reluctance at making this ruling, and expressly invited Congress to revisit Section 230.

In a recent fair housing case in Chicago, Craigslist was found not liable under the Fair Housing Act, but the court spent several pages attacking Section 230 in its ruling, hinting that it would not be sympathetic to it in other cases. The 7th Circuit declined to grant the immunity in a Dow Jones case, but the case was decided on other grounds.

The Supreme Court has never addressed Section 230. I have a case in the Third Circuit (Parker v. Google, 04-cv-3918 E.D.Pa., I think the appeal is 06-2246), where a ruling is due at any moment (overdue, in fact). I claimed that Google shouldn't be immune under Section 230 because they couldn't identify the original source of the defamation. Indeed, right now, someone can use an anonymous remailer, or a public computer, or other means, to be untraceable, "googlebomb" someone, and there is no way to sue the source, while Google claims immunity. This is not just, either, since Google could remove the statements with a takedown notice. Still other folks just post from hosts in other countries (like Bulgaria) to make discovery a nightmare.

You also write:

"The real question is whether the site administrators should, as a matter of common decency, work to change social norms or to change the code on their site to prevent the site from being used to harass people and invade their privacy. Doing this does not violate anybody's free speech rights. In fact, what it does is make the site more useful for people's speech about admissions and encourage a better kind of information sharing."

That's a fairy tale. Without the weight of legal action, people will fall through the cracks, and be even more marginalized since their problems will be more the exception than the rule.

And you write:

"The problem with the owners of Autoadmit is not that they have done something illegal; it is that they have failed to design and maintain their system properly. They have implemented what turned out in practice to be a bad design whose problems should have become obvious to them; moreover, once its flaws became apparent, they failed to lift a finger to fix things. They have completely abdicated their responsibility to encourage socially beneficial norms in the community that uses the site. As a result, the site is far less valuable than it could be, and it allows a small group of immature miscreants to injure people at will. The owners of the site may bear no legal responsibility for this, but that does not mean that they bear no responsibility at all."

When women are getting stalked and threatened, with pictures of them posted online, and no one steps in, this empowers the harassers. Section 230 immunity was never designed for negligence claims, but was "umbrellaed" in along with the distributor immunity that the courts invented based on a Congressional intent that never existed.

Other fundamental principles of law include holding those who cause the most damage responsible for that damage. If a board has a small audience that views defamation in context, that is nothing compared to someone's employer searching their name and finding the board and the defamation when they otherwise wouldn't have.

At least 90 percent of the damage this woman has endured has been caused by Google, which uses the postings to build traffic and revenue. To make them immune when they could stop the harm goes against the principles of justice, and against every defamation precedent from the Supreme Court dating back to the 1800s.
 
