Aye, Robot, or Can Computers Contract?
Mark Rasch, 2007-11-16


Under traditional contract law, you can agree to the terms of a contract expressly, whether by word ("sign here") or by deed ("click 'I agree'"). You can also agree to the terms of a contract through some bargained-for action. For example: "By entering these premises, you agree to be bound by the rules of decorum, which are clearly posted here." You don't have to come in, but if you do, you are likely bound.

Most cases involving "clickwrap" contracts revolve around whether the terms of the agreement are clearly and conspicuously posted and whether a reasonable person could have been aware of them; only secondarily do courts ask whether the terms themselves are reasonable, or whether a person had a reasonable alternative to the goods or services provided. In those cases, the fact that the terms could have been read generally precludes a defense of "I didn't read it."

The problem for automated or robotic actions is that the robot -- unless sent out for that purpose -- has neither the ability nor the intention to bind its creator. Science fiction writer Isaac Asimov's First Law of Robotics is "A robot may not injure a human being or, through inaction, allow a human being to come to harm." I presume Asimov included legal as well as physical harm in this rule. Imagine a circumstance where I send you an email that says, "If you agree to [whatever], just hit 'reply.'" If you deliberately hit reply because you intend to be bound, then you likely are. If you accidentally hit reply, you can probably argue that you had no intent to be bound -- unless you accept the benefit of the bargain and do nothing. But what if you have an "autoreply" agent on your email? Since there was no "meeting of the minds," you would likely argue that you were not bound, because you never read the inbound email that purported to bind you. Chicken? Egg? Who knows.
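To see just how mechanical that "acceptance" is, here is a minimal sketch of such an autoreply agent. All names and messages are hypothetical, invented for illustration; no real mail API is involved.

```python
# Hypothetical sketch: an autoreply agent "accepts" an offer its owner never read.

def autoreply(inbound: dict) -> dict:
    """Reply to every inbound message, sight unseen -- the owner reads nothing."""
    return {
        "to": inbound["from"],
        "subject": "Re: " + inbound["subject"],
        "body": "I am out of the office and will respond on my return.",
    }

# An offer styled "hit 'reply' to agree" arrives...
offer = {
    "from": "sender@example.com",
    "subject": "Terms of service",
    "body": "If you agree to these terms, just hit 'reply.'",
}

# ...and the agent's mechanical reply looks exactly like acceptance.
reply = autoreply(offer)
print(reply["to"], reply["subject"])
```

Note that the agent never inspects the body of the message: the "reply" that arguably forms a contract is generated without anyone, human or machine, ever reading the offered terms.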

Remember the Second Law of Robotics? "A robot must obey orders given to it by human beings except where such orders would conflict with the First Law." If you send out a spider, a worm, or an autobot, and it is just following your directions, aren't you bound?

As a practical matter, we can expect courts to say that you can't avoid contract liability simply because you used an automated program. Fair enough. If you as a human know, or are reasonably aware, of contract terms or conditions, you can't write a program to get around them. Unfortunately, the court will likely next look at whether the terms of the clickwrap contract are "reasonable." As I said before, that's all well and good if I agreed to the terms -- which is the real issue for a court to decide.

Ultimately, if all of these terms and conditions, EULAs, and the like are to be considered enforceable, they will have to be enforceable against humans and their avatars alike. Asimov had to add a "Zeroth" Law of Robotics: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Too bad that doesn't apply to lawyers.



Mark D. Rasch is an attorney and technology expert in the areas of intellectual property protection, computer security, privacy and regulatory compliance. He formerly worked at the Department of Justice, where he was responsible for the prosecution of Robert Morris, the Cornell University graduate student who created the so-called Morris Worm, and for the investigations of the Hannover hackers featured in Clifford Stoll's book, "The Cuckoo's Egg."
Copyright 2010, SecurityFocus