Digital Life Initiative

DL Seminar | From Civics to COVID: Dynamics of Misinformation


By Jamie Geng

Cornell Tech


From Civics to COVID: Dynamics of Misinformation


The discussion revolved around findings from a multi-stakeholder partnership called the Election Integrity Partnership (EIP). EIP's mission was to rapidly detect high-velocity and potentially impactful false and misleading narratives related to voting. The findings centered on how incidents became narratives, the rise of bottom-up misinformation, the dynamics of repeat spreaders, and the ways in which platform policies shape message propagation.


Renée opened the talk with the work her team did last year for the EIP, which began 100 days before the presidential election. The project was very narrowly scoped to understanding misinformation and disinformation related to voting.


How incidents became narratives


According to Renée's team's findings, this was a reproducible process that happened over and over again. People would document incidents that were then shoehorned into broader narratives; those narratives would in turn be spun into conspiracy theories. These conspiracies would often become conventional wisdom in certain echo-chamber communities online.


The rise of bottom-up misinformation


In the team's findings, bottom-up misinformation moved in the opposite direction of top-down dynamics. Top-down dynamics follow the traditional pathway in which information emerges in the media: the media reports a story, and then people begin to discuss it. Bottom-up misinformation, by contrast, begins with a narrative that goes viral within certain online communities and ultimately makes its way to mainstream media, typically when the hyperpartisan press or significant influencers pick it up and it begins to attract press coverage. The result is a cycle in which media and social media bounce information back and forth.


The dynamics of repeat spreaders


In the team's findings, repeat spreaders are people in the hyperpartisan media layer and influencer space who reproduce the bottom-up misinformation process, making narratives go viral repeatedly. One interesting dynamic of hyperpartisan media is that social platforms allow a single person to become, in effect, a media outlet of one with millions of followers. Renée noted that, in their findings, this was happening through domestic networked activism.


Platform policies shape message propagation


What is allowed to propagate is shaped by policy; what actually goes viral and reaches massive numbers of people is determined by the combination of platform affordances and policies.

In their report, the team went through a number of examples showing how the progression from concern to suspicion to accusation to massive viral conspiracy happened over and over again through this very participatory process.


In the process, content would be created, disseminated, and amplified, sometimes leading to real-life action. There were both top-down and bottom-up pathways, with repeat spreaders drawn from the emergent class of influencers and hyperpartisan media. In the team's findings, significant, impactful figures were part of that mid-tier influence and amplification process, telling people that they were going to see fraud. In the narratives specifically related to allegations of fraud, most of the repeat spreaders were right-leaning influencers; they would pop up repeatedly and frequently, and these narratives would go viral.


Their work also included a comparative analysis of all the platforms' misinformation policies related to the election. The platforms differ in what each considers appropriate. Many of them made their policies stronger and clearer over time, and also laid out much more specific enforcement tools, such as labeling content, that they planned to use. But the existence of a policy and enforcement mechanisms doesn't mean they were effectively enforced. The Gateway Pundit was mentioned as an example of demonstrably and easily falsifiable claims being made repeatedly by the same actor; in such cases the enforcement mechanism becomes distorted, amounting to repeatedly labeling that actor's posts. The research saw such a repetitive daily drumbeat alleging "the steal" that the team felt platforms should pay much closer attention to those dynamics in the future. Cross-platform complexity is another challenge: because different platforms have different policies, content permitted on one platform might violate another platform's policy.


Renée also mentioned areas for improvement, such as how to improve transparency. Concerns about censorship might lead to migration to other platforms. There are also concerns about false positives and opportunistically manufactured concerns. There is a lot of debate in the literature about echo chambers on mainstream social platforms, but on some platforms these communities are extremely homogeneous.


Renée closed by noting that, for the team, the work was as much about researching and understanding the dynamics of misinformation as about investigating what a multi-stakeholder effort to address or combat these types of false and misleading narratives could look like. They communicated with a variety of stakeholders through Jira to assess election misinformation: a case would be stated clearly in a Jira ticket, an email would go out, and the issue would be routed to whoever was supposed to address it. That was how they facilitated rapid responses.
