DL Seminar | Spyware vs. Spyware
By Yan Ji | PhD Student in Computer Science, Cornell Tech
It is a challenge for the legal system to find a coherent way of distinguishing bad programs, which prevent users from doing what they want, from good programs, which disable other software in order to help users achieve their goals. The current legal theory for drawing this distinction is user autonomy. In his talk entitled “Spyware vs. Spyware” at the Digital Life Seminar on November 7, James Grimmelmann, a professor of law at Cornell Tech and Cornell Law School, guided us through a close look at program-vs-program conflicts and at where current legal theories of what users want go wrong.
Here is a thing.
Programs running on people’s computers can interfere with each other’s functioning in numerous ways. For example, Zoom, a videoconferencing application, automatically installs a local web server on each user’s computer. Once the Zoom installer has created that server, any webpage the user visits can make a request to it, and such a request can launch Zoom. Even if the user uninstalls Zoom, the web server remains behind and can promptly reinstall the application. Apple eventually delivered a silent overnight update that removed the web server so it no longer works. This is the kind of case Prof. Grimmelmann calls spyware vs. spyware. Other examples include antivirus software, ad blockers, and bots and anti-bot software in World of Warcraft.
Why the thing matters.
If you have two programs that each, in some cases, want to disable the other, you may ask what law and policy should do about this. Such conflicts can give rise to unauthorized-use claims under the Computer Fraud and Abuse Act, or to trespass-to-chattels tort claims. There could also be a breach of contract when users, or some other software, disable a program that users agreed at installation not to uninstall. If a modified or disabled program counts as a derivative work, interference may amount to copyright infringement. If a program may or may not be malware, classifying it as malware could give rise to a defamation claim. And if a program disables its competitor, that could be an antitrust issue. There are many relevant bodies of law here, but if they don’t all give the same answer, how do you decide which one gets it right? The general structure of these legal regimes is two programs with the user somewhere in between, each trying to establish a better claim to be on the user’s computer.
How to think about the thing.
Prof. Grimmelmann introduced three heuristics for what a legal system should focus on when deciding which of two conflicting programs belongs on the computer and which should be modified or disabled. Each has something right about it, but each also turns out to be incomplete. The first: bad programs are bad. The legal system should focus on identifying what software can and can’t do, and it should prohibit people from selling or trying to install bad software that prevents users from accessing their data or using other valuable software.
However, there is a large range of ambiguous cases. For instance, remote desktop software can be used by legitimate technical support inside an organization to provide real help, but it can also be abused for scams. It is not that good software and bad software have different functionalities; there is no clear technical borderline between them. An alternative answer is to let users decide for themselves. Thus, the second heuristic is to give users the freedom to tinker: users should be free to install the software of their choice. For example, CleanMyMac helps users cleanly uninstall software they don’t want. But deference to users’ choices doesn’t say, on its own, what counts as a choice. It has no clear way of picking sides between software that improves functionality and software that doesn’t.
This brings us to the third heuristic, the theory of agreement: how do users express consent to what happens on their computer? The leading legal theory relies on the click-through, under which you agree to a bunch of terms and those terms govern the legal rules for your interaction. The problem is that people may not know what they are agreeing to. It takes hours to read those terms, and even users who read the full agreement don’t know what the terms imply. Therefore, the theory of click-to-agree is not enough to meet the needs of software-vs-software cases.
What to do about the thing.
Users have diverse skills and goals. Some have technical skills and want things set up their own way; some want to be protected from malware; some want to enter into fairly binding agreements; others want nothing to do with software that makes demands of them. Prof. Grimmelmann pointed out that spyware-vs-spyware cases are interesting because they involve conflicting expressions of user consent. User consent has always been problematic, and especially so when more than one program can make a plausible claim to be carrying out the user’s intentions. Good software helps users do what they want to do, so software law should focus on users’ goals. Although user choice is an essential value, some choices may be harmful to others, the result of mistake, fraud, or coercion, or merely implicit, and software law should place limits on such choices. Click-to-agree, one of the most widely used choice architectures, is a bad one: it doesn’t surface the right questions. A better theory would be more realistic about what users are trying to do, and more willing to say that certain choices count as consent while others do not.
Back to the Zoom and Apple case: Apple allows users to uncheck the box for automatic installation of security patches, so users could still rationally choose to keep Zoom’s server. Automatic security updates with a system-preference opt-out strike a reasonable balance between beneficence and respect. This, Prof. Grimmelmann concluded, is a place where the law can learn from industry in some important ways.