
DL Seminar | Radical Markets

Updated: Jan 8, 2019

By Natalie Chyi | Master's Student | Cornell Tech

Illustration by Gary Zamchick

DL Seminar Speaker Glen Weyl

Glen Weyl believes the primary source of inequality today is the massive imbalance of power between capital and labour. Because capital is concentrated in the hands of the few, most people are unable to work freely and are instead oppressed by capital. Compounding this, labour has been devalued, especially online, where there is an expectation that everyone’s work should be given away for free. As a result, the flourishing data economy is built on a quasi-feudal relationship in which technology platforms are the lords and individuals are their servants. When the platform is free, you pay for it through the data you provide - everything you create in the online space of these platforms belongs to the owners of that space. It is therefore no surprise that a dramatic share of income in tech companies goes to the owners of that capital rather than to labour - the labour share at Facebook is only 15%, compared with 20% at Google and Microsoft. If companies like these represent the future of the economy, as many think they do, then Weyl warns that it’s a future with almost unimaginable levels of inequality.

Beyond increasing inequality, this feudalistic data economy is also economically inefficient, because it doesn’t function as a competitive labour market. The biggest impediment to a more productive data economy, Weyl suggests, is that people don’t know the value they’re creating. This is intentional on the part of large tech platforms, which keep the work hidden from “us” - the people who are both producing and consuming it - so that the functioning of the platform and the production of content seem almost like magic. His idea is that if people realised their actions were creating value, they would do additional work and further improve the products they’re already contributing to.
This incentive comes when people have a real economic stake in the outcome and are compensated for the data they provide and the labour they expend in producing it. If we were able to change the way data is conceptualised and valued, the balance of power could shift significantly. The current conceptual frame sees and treats data as capital, but Weyl advocates treating data as labour instead. Under data as capital, we think of data as “exhaust to be picked up by whoever is smart enough to make useful” (in this case, tech platforms and data brokers). Under data as labour, we would see data as intentionally created through users’ actions and efforts, whose fruits should belong first to the individual contributors. Treating data as labour would also have implications for how AI is predicted to displace jobs. Rather than making the majority of human workers “obsolete”, AI could be considered another production technology that “raises individual productivity by allowing people to amplify the value of their data”. So if we appropriately valued the producers and production process of data, Weyl believes the whole economy would be more efficient. If people were compensated for their data, they would produce better data, because they would feel they had a stake in the data economy. It would also mean that income would be directed to labour rather than capital.

Of course, changing the theoretical narrative around data is not enough on its own - there must be accompanying mechanisms that allow individuals to actually be compensated for their data. Weyl mentions three countervailing powers that could address this: competition, government action, and collective action. He focuses particularly on the last of these as a solution, in the form of data unions acting as mediators of individual data (MIDs).
He envisions a diverse range of these organisations that would facilitate the collective organisation of individuals and protect their agency against the hegemonic power of technology companies. They would play three roles: collective bargaining, training to help individuals deal with choice overload, and certifying the quality and trustworthiness of the data people produce. A number of organisations of this type have popped up in recent years, including datawallet, DigiLocker, EQITII, and others. However, Weyl doesn’t feel these organisations embody the principles that would ensure adequate protection of individuals. One such principle is that organisations should work in a fiduciary capacity: they should have an exclusive obligation to represent the best interests of their members, and should not put themselves in any position of conflict. This would apply in legal, financial, and structural ways. For example, an organisation should not be funded through transaction fees on sales of data, because that is fundamentally inconsistent with its role. Other guiding principles include competence, rent sharing, quality standards, operating with the goal of longevity, protecting inalienable provenance, and an appreciation of biological and cognitive realism. He also believes competition is crucial: there should be a diverse range of MIDs protecting the interests of different groups, rather than one large organisation that mediates for everyone.

While I think MIDs have a role to play in ensuring a more liberal digital economy (especially if required to act as fiduciaries), I have a few questions about Weyl’s conception of them. He mentioned that a primary role of MIDs should be to provide training that helps individuals deal with choice overload.
He explained this to mean that a fiduciary should manage your attention for you, helping people filter out what the right uses of their time and attention should be. Some examples of the “right” uses of time he gave included completing digital tasks that are micro-productive, or tasks that could earn the user money (such as providing data). The suggestion that MIDs should help people assess what they should or should not be doing with their time seems problematic to me, and inconsistent with the aim of creating a more equitable digital economy. I am also not sure how pushing people to do more work in their spare time would reduce inequality.

I also find it disappointing that he doesn’t address how personal data is used by tech companies, and how this data could be (and already is) misused. He mentions that monetary compensation would incentivise individuals to contribute more and better data because they would feel they have a stake. But he doesn’t say whether that stake includes a say in how their data is used or who it is shared with, and he doesn’t include safeguarding this as a major role of MIDs. When something like Cambridge Analytica happens, or when it gets out that 23andMe is selling customer data to big pharma companies, individuals are unhappy that their personal information is being used in ways they do not want, not because they weren’t paid for providing that data. Simply paying people for their data does nothing to solve the problem of its misuse; in fact, incentivising people to provide more data just increases the amount of their information that could be misused. He also doesn’t give any insight into how the pricing of data would work - the monetary compensation to each person is likely to be a small sum (it was estimated that data scientist Aleksandr Kogan paid less than two cents for each Facebook profile he used), so how much of a “stake” would this actually be?
And which groups of people would be most incentivised to give up more of their personal information?

To conclude, I found Weyl’s presentation extremely interesting, and I especially enjoyed (and agreed with) his argument for a paradigm shift to see data as labour rather than as capital. But I am skeptical of the roles Weyl thinks MIDs should play: facilitating data monetisation (which could lead to exploitation), deciding how people should spend their time (which I’m not sure is relevant to making a just data economy), and policing the “quality” of the data people provide (which seems to be more for the benefit of companies than of users). I appreciate that his argument is an economic one, but it would be interesting to see how MIDs could expand their scope of protection for users if principles of data privacy and consumer protection were incorporated as primary concerns.
