Counterpoint: algorithms are not free speech

In the New York Times, Tim Wu advances a fairly nuanced argument about the risks of letting technology companies claim First Amendment protection for the products of their algorithms, something I discussed in a recent column. Tim worries that if an algorithm's product, such as a page of search results, is considered protected speech, then it will be more difficult to rein in anticompetitive or privacy-violating commercial activity:

The line can be easily drawn: as a general rule, nonhuman or automated choices should not be granted the full protection of the First Amendment, and often should not be considered “speech” at all. (Where a human does make a specific choice about specific content, the question is different.)

Defenders of Google’s position have argued that since humans programmed the computers that are “speaking,” the computers have speech rights as if by digital inheritance. But the fact that a programmer has the First Amendment right to program pretty much anything he likes doesn’t mean his creation is thereby endowed with his constitutional rights. Doctor Frankenstein’s monster could walk and talk, but that didn’t qualify him to vote in the doctor’s place.

Computers make trillions of invisible decisions each day; the possibility that each decision could be protected speech should give us pause. To Google’s credit, while it has claimed First Amendment rights for its search results, it has never formally asserted that it has the constitutional right to ignore privacy or antitrust laws. As a nation we must hesitate before allowing the higher principles of the Bill of Rights to become little more than lowly tools of commercial advantage. To give computers the rights intended for humans is to elevate our machines above ourselves.

I think that this is a valuable addition to the debate, but I don't wholly agree. There is clearly a difference between choosing what to say and designing an algorithm that speaks on your behalf, but programmers can and do make expressive choices when they write code. A camera isn't a human eye; it's a machine that translates the eye, and the brain behind it, into a mechanical object, and yet photos are still entitled to protection. A programmer sits down at a powerful machine and makes a bunch of choices that prefigure its output, and can, in so doing, design algorithms that express political messages (for example, algorithms that automatically parse elected officials' public utterances and rank them for subjective measures like clarity and truthfulness), artistic choices (algorithms that use human judgment to perform guided iterations through aesthetic options to produce beauty), and other forms of speech that are normally afforded the highest level of First Amendment protection.
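To make that concrete, here's a minimal, purely hypothetical sketch of the first kind of algorithm: a toy "clarity" ranker for politicians' statements. The hedge-word list and the weights are inventions of mine for illustration, but that's the point: every one of those constants is an editorial judgment the programmer bakes into the code.

```python
# Hypothetical sketch: a toy "clarity" ranker for public statements.
# Every constant below (the hedge-word list, the weights) is an editorial
# judgment made by the programmer -- the expressive choice at issue here.

# Words the (hypothetical) programmer has decided signal evasiveness.
HEDGE_WORDS = {"arguably", "somewhat", "perhaps", "allegedly", "may", "might"}

def clarity_score(statement: str) -> float:
    """Score a statement: shorter sentences and fewer hedge words rank higher."""
    words = statement.lower().split()
    if not words:
        return 0.0
    hedges = sum(1 for w in words if w.strip(".,;:!?") in HEDGE_WORDS)
    # Weighting hedges twice as heavily as raw length is itself a
    # subjective, expressive choice, like an editor's house style.
    return 1.0 / (len(words) + 2 * hedges)

statements = [
    "We will balance the budget by 2030.",
    "We may perhaps consider somewhat reducing the deficit, arguably.",
]

# Rank from clearest to least clear, per the programmer's criteria.
for s in sorted(statements, key=clarity_score, reverse=True):
    print(f"{clarity_score(s):.3f}  {s}")
```

Nothing about the machine decides which words count as weasel words or how harshly to punish them; those opinions belong to the human who wrote the program, which is exactly why the output looks more like speech than like a "nonhuman or automated choice."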

That is not to say that algorithms can't produce illegal speech — anticompetitive speech, fraudulent speech — but I think the right way to address this is to punish the bad speech, not to deny that it is speech altogether.

And while we're on the subject, why shouldn't Frankenstein's monster get a vote of its own: not as a proxy for the doctor, but in its own right?

Free Speech for Computers?

(via /.)

(Image: Frankenstein Face Vector, a Creative Commons Attribution (2.0) image from vectorportal's photostream)