
Dilemmas of Digital Platform Power

Updated: Oct 13, 2021


By Amy B.Z. Zhang (Cornell Tech)


Our lives increasingly involve interactions with digital platforms of various kinds, with functions ranging from ride-hailing to social media. At the same time, we are becoming increasingly aware of the amount of power these platforms wield on both the supply and demand sides they help to channel. Gig work platforms, for example, have repeatedly demonstrated the degree of impact they can have on the livelihoods of those providing services, in addition to the users of those services; examples include Uber/Lyft, Substack, OnlyFans, and, in a sense, delivery services. E-commerce platforms have been the primary sources of our consumption in the past year, dictating what we did or did not have access to; they also wield significant influence over suppliers, and some major players in particular are accused of exploiting sellers, driving product homogeneity, and even contributing to the decline of independent businesses in physical communities. Then there are the social media sites that have almost never left the spotlight amid recent waves of momentous events, all of which bring to debate the roles these platforms played and/or should play in various capacities: from transmitting misinformation and disinformation, with heightened severity amid a global pandemic, to serving as vehicles for polarizing political messages, particularly in extreme cases such as January 6 and the Taliban.


What is the origin of power for digital platforms?


How this power came to be, and its specific dynamics, are of course complex and site-dependent, but at the most fundamental level the root lies in the platforms' roles as information exchanges. By maintaining a space to host information, a platform dictates how information from suppliers enters and how it gets retrieved by consumers. This is the aspect of power we will focus on in this discussion. Note that we use "information" in the general sense, which can include product listings, ride requests, and content of various media. Similarly, "supplier" and "consumer" refer generally to any entities creating and consuming the corresponding information; the roles can even be interchangeable: on a social media platform, for example, a person can be simultaneously a supplier and a consumer.


Digital platforms hold significant control over the flow of information between suppliers and consumers. At the base level, they decide which content (and often which actors from both sides) is allowed in the space. Furthermore, platforms dictate the level of connectivity between the two sides; this can be through strict separation, for example based on criteria such as age or subscription status, but also through less direct influences such as the format of information. Here is where an additional layer comes in, and in fact the special edge digital platforms hold over their more traditional counterparts. Because of the scale of information made possible by the connectivity of the internet and decentralized production, digital platforms often must select only a subset of information to present to consumers. This step of information filtering has increasingly become a critical part of operations, with direct consequences for the adoption of the service and for the bottom line. The success of a digital platform, beyond strategic considerations such as finding a suitable niche, thus also depends largely on the quality of the technology used in this information retrieval process, which is where the use of algorithms takes center stage. This includes matching and dynamic pricing algorithms in two-sided markets such as ride-hailing, and personalized recommender systems used on e-commerce and content platforms. Generally speaking, the central functionality is often achieved with a specialized machine learning algorithm that is trained on historical data by minimizing a pre-defined objective.
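As a minimal sketch of that last sentence (and only a sketch: the data, model size, and hyperparameters below are invented for illustration, and real platform systems are far more elaborate), here is a recommender trained on historical interactions by minimizing a pre-defined objective, in this case squared reconstruction error via matrix factorization:

```python
import numpy as np

# Hypothetical historical interactions: (user, item, observed feedback) triples.
interactions = [(0, 0, 5.0), (0, 2, 3.0), (1, 0, 4.0), (1, 1, 1.0), (2, 2, 5.0)]
n_users, n_items, n_factors = 3, 3, 2

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, n_factors))  # user embeddings
V = rng.normal(scale=0.1, size=(n_items, n_factors))  # item embeddings

lr, reg = 0.05, 0.01
for _ in range(200):  # SGD: minimize squared error on the observed history
    for u, i, r in interactions:
        pu, qi = U[u].copy(), V[i].copy()
        err = r - pu @ qi
        U[u] += lr * (err * qi - reg * pu)
        V[i] += lr * (err * pu - reg * qi)

# "Retrieval": score every item for user 0 and surface the top of the ranking.
scores = U[0] @ V.T
print("items ranked for user 0:", np.argsort(-scores))
```

Everything downstream of this ranking step, including which items a user ever sees, is determined by the chosen objective and the history the model was trained on.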


This is also the step where the use of algorithms on digital platforms most often comes under fire, drawing accusations ranging from lack of interpretability and accountability to being a tool of manipulation. Next, using recommender systems as an example, we will attempt to calibrate what some reasonable concerns are, in order to understand the implications of addressing them. We focus here on the inherent concerns with the use of technological solutions themselves, and thus will not treat the case where this control is explicitly exploited (such as to artificially manufacture popularity). Those are of course real perils that have come with the use of behind-the-scenes technology, which is why they merit entirely separate discussions to do justice to their complexity.


What should we be concerned about?


Under this limited assumption that technology on digital platforms is used neutrally, for natural purposes such as increasing user satisfaction for suppliers and consumers, what are the sources of concern when platforms utilize algorithms to capitalize on the power of scale?


Recommender systems, as a representative and familiar form of information retrieval algorithm, do have characteristics that can have concerning effects, as discussed in the presentation from earlier this year. Some are due to the mechanism employed; popularity bias is one example, where more popular items are recommended more and the quality of recommendations is better for users with mainstream taste, fueling more homogeneous content pushes and viral phenomena. Others are tied more fundamentally to the underlying goal the algorithms are set for, namely to recover what users have expressed interest in previously. For example, when recommended content closely mimics historical interactions, a user's exposure can become increasingly centered on what they have already expressed interest in; in the context of information on a societal level, this can fuel what is known as the filter bubble, where all that the user sees are confirmations of beliefs they already hold, limiting their awareness of alternative views.
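To see how popularity bias can arise purely from the mechanism, consider the following toy simulation (an assumption-laden illustration, not a model of any real platform): fifty equally good items, a recommender that always surfaces the currently most-clicked five, and users who mostly click from the slate shown to them.

```python
import numpy as np

rng = np.random.default_rng(1)
clicks = np.ones(50)  # 50 equally good items, one initial click each

for _ in range(5000):
    if rng.random() < 0.9:                  # user clicks from the recommended slate
        slate = np.argsort(-clicks)[:5]     # "recommend the 5 most-clicked items"
        clicks[rng.choice(slate)] += 1
    else:                                   # occasional organic discovery
        clicks[rng.integers(50)] += 1

top_share = np.sort(clicks)[-5:].sum() / clicks.sum()
print(f"top-5 items capture {top_share:.0%} of all clicks")
```

Even with no difference in quality, the early leaders absorb most of the clicks, a rich-get-richer dynamic.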


However, it must be noted that these are not new tendencies created by technology, but rather are largely human in nature. For example, even in a world without internet, popular content is still more likely to be transmitted farther along social networks, as more people pass it on through word-of-mouth. Similarly, psychologists have studied the phenomenon of confirmation bias, defined on Wikipedia as “the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values”, long before the widespread use of machine learning.


Is it really more concerning, then, when these behaviors are induced by technology? Unfortunately, it is, most prominently because of the connectivity and scale that empower the platforms in the first place. Take popularity bias as an example. Whereas fads used to sweep through one specific segment or local community, such as a single high school, they are no longer stopped by physical barriers, and oftentimes spread beyond even the associated segment of age or class. On the flip side, lesser-known content becomes even harder to discover, dampening value transmission and consequently creation. This becomes more troubling when combined with our attraction to the exciting and sensational, manifesting for example in the spread of fake news.


In fact, this points to the secondary peril of algorithms: their reinforcing effect on past patterns, which, by interacting with our less desirable tendencies, can lead to troubling results. Take as an example what we mentioned before about our preference for exciting and sensational content. When encountering a title making outlandish claims, one may consciously recognize it as clickbait, but nevertheless be curious about the content. Clicking signals a preference for similar content and may result in more such articles showing up in the future. Now, can they guarantee their curiosity will never take over again? If not, the cycle continues. The same goes for temptations in general, which compete for our attention against activities that we place higher value on but that are also more effortful. In fact, the compounding effects tend to be two-fold. On the one hand, tempting content can occur disproportionately often because it takes less effort to produce and is often shorter in form (consider funny video clips vs. hour-long educational workshops), possibly resulting in skewed training signals. On the other hand, recommendations act as external cues, which can have a strong nudging effect on our behavior, especially when they trigger associations with pleasure, increasing our likelihood to engage. We can see the effect of this reinforcing cycle in the addictive usage patterns many of us experience firsthand, whether with video, social media, or content platforms.
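The clickbait cycle just described can be written down as a toy feedback loop (the click probabilities and the proportional-exposure rule here are assumptions chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
clicks = {"educational": 1.0, "clickbait": 1.0}  # the model's interest estimates

for _ in range(2000):
    total = clicks["educational"] + clicks["clickbait"]
    # The feed shows each type in proportion to its estimated interest...
    shown = "clickbait" if rng.random() < clicks["clickbait"] / total else "educational"
    # ...and curiosity wins slightly more often for the tempting item (0.3 vs 0.2),
    # so every lapse feeds the estimate and tilts the next feed further.
    if rng.random() < (0.3 if shown == "clickbait" else 0.2):
        clicks[shown] += 1

print(clicks)  # clickbait's share of the feed grows despite the user's stated values
```

A small, consistent edge in click-through is enough for the loop to compound; no manipulation needs to be designed in.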


However, the effect can also be more profound, most notably when such reinforcing cycles affect social dynamics. This happens when a user who is predisposed towards a certain viewpoint favors content that aligns with that viewpoint, and the same dynamic takes place: more content expressing this view gets recommended, which in turn prompts the user to engage with it more, compounding into what is commonly called an "echo chamber". The effect is most pronounced on social media platforms, where interactions happen between people, resulting in almost a third level of reinforcement through the desire for confirmation. Some criticisms of algorithms used by social media platforms call into question the role of the echo chamber effect in creating what is perceived as increased political polarization; a growing number of empirical studies have been conducted, with varying conclusions (for examples see the 2020 survey), and efforts have been made towards modelling the effects mathematically (see, e.g., recent work on misinformation and references within; also distilled in a Vox article). Nuanced discussion is needed to fairly dissect the issue, but there are at least two caveats to note. First, to reiterate, this behavior is human in nature and happens independent of technology. Even in physical interactions, people who do not wish to engage with dissenting opinions choose to shun anyone they disagree with. While technology again makes it easier to find a much larger group of like-minded people no matter the opinion, it also makes different viewpoints more accessible if one chooses to look for them. Second, of course, the incentive of platforms to design algorithms that explicitly exploit such psychological tendencies for the sake of engagement must be kept in check. A separate point of consideration concerns personal identity and free will. The ready availability of technological guidance is leading to more and more delegation, in some cases even of the capacity to think and decide. While it may feel like one is made wiser by the volume of information and expert knowledge one can lean on, without sufficient awareness, one's identity can be shaped as one implicitly conforms to whom the algorithm categorizes them as. After all, they are choosing to read this content out of free will; or are they?


Also related to reinforcing effects is the lack of true intelligence in current AI systems, in the sense of an ability to initiate change. While this should be cause for relief for anyone worried about evil AI taking over the world, it poses a different danger when algorithms are used to make decisions without human intervention, since they reproduce the past without the ability to reflect and correct. This can perpetuate old patterns, even the problematic ones, effectively freezing past failures in place. A highly illustrative example is a platform that matches job candidates to roles based on past successful hires; if the task is handed over to an algorithm blindly, the results might never escape past paradigms, favoring anything from an education at certain schools to being white and male.
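A minimal sketch of that hiring example (with entirely synthetic data and an assumed bias strength) shows how a model fit to past decisions inherits their favoritism:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Hypothetical history: skill is what should matter, but past hiring
# decisions also favored membership in group A, independent of skill.
group = rng.integers(0, 2, n).astype(float)   # 1.0 = group A, 0.0 = group B
skill = rng.normal(size=n)
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n) > 1.0).astype(float)

# A naive linear model fit to these labels learns the favoritism too:
X = np.column_stack([skill, group, np.ones(n)])
w, *_ = np.linalg.lstsq(X, hired, rcond=None)
print(f"learned weight on skill: {w[0]:.2f}, on group membership: {w[1]:.2f}")
```

The nonzero weight on group membership is learned purely from the historical labels; nothing in the objective asks whether that pattern should persist.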


What are the implications for solutions?


The aim of discussing these example concerns with the algorithms used on digital platforms is not only to look more objectively at some of the unique harms that might result from their power, but also to illustrate the underlying complexity of these issues. It is hopefully clear that there is no easy fix that can fundamentally "solve" the concerns of platform power; in particular, certainly none that requires action from only a single entity, be it regulators, researchers, companies, or users.


For example, in the research community, the limitation of optimizing for accuracy alone is indeed recognized, and there is extensive work on building recommender systems that incorporate alternative criteria. To list some examples: diversity, which reduces the homogeneity of the set of things being shown; novelty, to induce more variation from one's past interactions or from what is most popular; fairness, so there is a more "equitable" distribution of content to audiences; and so on.


Such efforts are certainly highly valuable. However, these methods often come at a cost in recommendation accuracy, and in addition can be more complex and computationally expensive, creating obstacles to practical adoption. More fundamentally, the algorithm-directed way of thinking requires quantifying both the goal and the evaluation criteria. Suppose, for example, there is a diversity-conscious method; it could produce an algorithm that mixes 1 part diversity with 9 parts accuracy, or any other proportion. But what is the right split, and who should have the authority to decide? Indeed, is the given definition of diversity even valuable? After all, the evidence demonstrated for these methods is usually in terms of specific mathematical constructs, which are not easily translated into uncontested normative terms. One approach some companies are taking is to collect user opinion surveys. However, as mentioned earlier, some of the concerning effects discussed result from human nature. So it is conceivable, for example, to see individual users prefer content more aligned with their own views, even though as a society we worry about the consequences of insufficient exchange of opinions.
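To make the "1 part diversity to 9 parts accuracy" question concrete, here is a sketch of a greedy re-ranker in the spirit of maximal marginal relevance (the scores, similarities, and weights are all invented, and this is only one of many possible formalizations of "diversity"):

```python
def rerank(scores, sim, k, lam):
    """Greedy MMR-style re-ranking: lam parts accuracy, (1 - lam) parts diversity."""
    selected, candidates = [], list(range(len(scores)))
    for _ in range(k):
        def value(i):  # relevance minus a penalty for resembling already-picked items
            penalty = max(sim[i][j] for j in selected) if selected else 0.0
            return lam * scores[i] - (1 - lam) * penalty
        best = max(candidates, key=value)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy catalogue: items 0 and 1 are near-duplicates with the top relevance scores.
scores = [0.95, 0.94, 0.60, 0.55]
sim = [[1.00, 0.98, 0.10, 0.20],
       [0.98, 1.00, 0.15, 0.10],
       [0.10, 0.15, 1.00, 0.30],
       [0.20, 0.10, 0.30, 1.00]]
print(rerank(scores, sim, k=3, lam=0.9))  # 9:1 split keeps the near-duplicate
print(rerank(scores, sim, k=3, lam=0.5))  # 1:1 split trades it for variety
```

Changing lam changes which items users ever see; the code itself cannot answer what the right value is, or whether penalizing similarity is the right notion of diversity in the first place.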


Now take the example of regulators. We mentioned before that it is important for them to curb companies' incentive to design algorithms that exploit human psychology for engagement, but as a direct consequence of the preceding discussion, the subtlety needed in the precise terms of such regulations must not be overlooked.


Another potential action for regulators is to try to break up big technology platforms, a recent example being the FTC's lawsuit against Facebook. It looks to be a hard case to argue that there is excessive market power, since competitors certainly exist when measured by the size of the user population or by active time. What is hard to measure and point to, however, is how much of a monopoly on information a given platform has over any given individual. If one platform controls most of a person's interactions with the world, it can essentially determine their perceived reality. The omnipresence and information density of the internet easily dwarf real-life contacts for many individuals these days, so if someone treats one single platform as the net, what they access through that platform can effectively shape their reality. If it sounds absurd that one platform could be an individual's only window onto the internet, we must remember that those most active online are often not the best representatives of the whole population. Many people simply do not have the time and energy to sift through the overwhelming amount of information out there, to say nothing of the capacity. For them, what is reasonable may be limited to straightforward principles, so they look to sources they feel assured by or groups they feel a sense of belonging to. Since breaking up a big platform does not break up a person's exposure, the information monopoly over an individual can easily persist. Moreover, as the same voices seem to occupy all platforms nowadays, the sources they consult may not change regardless of the platform.


Final thoughts...


The technologies that lend power to platforms can also be causes for concern, but as the discussion of these example concerns hopefully illustrates, the issues are complex, a product of the interwoven effects of technology, business incentives, and, most importantly, human psychology. As such, we cannot rely on any single entity to produce solutions that mitigate these potential harms, and certainly no fix will be easy. So it is crucial that, rather than pointing fingers, we reflect on what these issues expose about our current ways, both as individuals and as a society, and seek means to amend them going forward. It is undoubtedly true that the capabilities of technology have outpaced our preparedness as a society, but such is the trend of any revolution. Now that we have moved past the initial celebration of the benefits, growing pains may simply be unavoidable as we recognize problems and try to catch up. Rather than focusing on blame, it is much more important that we reflect deeply, as only humans can, and see this as both a challenge and an opportunity to explore a path that moves society forward as a whole.


Amy B.Z. Zhang

DLI Doctoral Fellow (2020/21)

Cornell Tech

bz275@cornell.edu




