Law in the Internet Society

Imagining a Regulatory Regime For Data

-- By ZainHaq - 12 Nov 2017


Today, the internet runs on data. That this is a dangerous development for human freedom, and that it is possible to imagine a world in which this is not the case, is the focus of this course. In the spirit of not letting the perfect be the enemy of the good, however, I want to think about regulating a data-driven world. To this end, I will suggest cornerstone principles for governmental regulation that could defuse the most dangerous parts of this ticking time bomb.

In particular, I believe there is already a highly regulated domain – environmental protection – that can guide agencies looking to regulate data collection, storage, and usage. After briefly discussing three aspects of this regulatory regime that could guide data regulators, I will consider three objections and respond to each.

Environmental Protection Analogies

I believe that the increasingly public effects of data usage mirror the public effects of private pollution that the US began to confront in the 20th century, and that similar principles could underlie regulation of those effects.

In the same way that people share the harms of contaminants in the soil, water, and air, greater data collection and usage means that, increasingly, we are all going to feel data's pull – whether we want to or not. As collection grows more robust and analysis more sophisticated, the predictive capabilities of these systems will reach people who aren't even in them: a new user need only exhibit patterns similar to those of existing users for the new user's privacy to come under threat.

To protect the public from the effects of chemical releases, the EPA primarily regulates actions that impact public health – a limited mandate that data regulators could roughly mirror. While all data collection presents risks, the most powerful and dangerous uses combine the knowledge companies collect about their users to generate predictions. Focusing regulation on those uses preempts (to a degree) complaints of regulatory overreach and provides structural clarity for the regulators' mission.

Lastly, when companies attempted to disclaim responsibility for shoddy disposal and spills – shifting blame among creators, transporters, and disposers – the EPA stepped in and declared, in many circumstances, every participant in the chain liable. Such a broad approach could also make sense in the world of data regulation because of the ease with which data moves and transforms. Many companies that control data (especially established, old-line firms) lack deep expertise in handling it, so they often outsource both collection and analysis to second and third parties. Those parties may then combine that knowledge with knowledge from others – including, sometimes, direct competitors. I believe all parties involved – the collectors, the analyzers, and the beneficiaries – could and should be liable. Some may argue that such a rule would scare away prospective data collectors – as I well think it should. If the requirement to handle data responsibly (or else be liable for it) scares a player out of the game, that player probably shouldn't have considered playing.

Objections & Conclusions

So let’s say that data’s externality effects make data regulation analogous to environmental regulation. Let’s say that, by limiting regulation to the public effects of data, we can limit complaints of regulatory overreach. And let’s say we impose responsibility widely, to seal the system’s leaks. Who might oppose such a system? Three groups, each with its own basis of opposition, come to mind.

The first group, the established tech giants, could argue that this regulatory body will imperil the future of their businesses. They could (will) say that regulators have no idea what the risks are, that these regulations make data collection worthless, and that user experiences (and prices) will suffer as a result. Instead of responding to each of these individually, I would point to both short- and long-term history. In the short term, I’d point to these companies’ own stories: analysts doubted the internet giants would ever monetize, and then the giants found advertising – and did it (debatably) better than any existing player in that market. In the long run, we’ve seen countless industries adapt to, and even benefit from, standards that raise collective industry responsibilities. I would also suggest that such a regime would protect these companies in the long run from both startups and the advocates who would push for a data-free internet.

So what of these startups – will they actually be harmed? The truth is, maybe. If upstart apps can’t quickly bootstrap data on their users to target ads, they’ll be in a tough spot when it comes to monetizing. But this is a hard capability for most startups to fully develop anyway – and most startups fail, in part for this very reason. The data they collect may well be for naught regardless, and if they do go under, no entity is left to bear responsibility for the harms they may have done. By imposing this standard – and thereby pushing startups to entrust data handling to competent, accountable players – we let startups focus on what they do best without worrying about the liability their data might create down the road.

And what of the advocates for a data-free rework of the internet, who worry that this regime precludes ever reaching a data-free world? I would say two things. First, institutional regimes like this often operate as one-way ratchets: crises lead to more disclosure, more liability, and a less data-driven internet. Second, getting the US to adopt standards at home would spur the country to press the issue globally, putting targets on the backs of the worst offenders.

Ultimately, the fight on this question is global. Recruiting US support in the fight for data privacy will embolden the forces of freedom moving forward. But before the country can lead that effort abroad, it must first feel that it is doing right at home – and this regime moves us a giant leap closer to that goal.



r3 - 22 Dec 2017 - 01:12:25 - ZainHaq