
DL Seminar | Misinformation and the Conservative as Victim

Updated: Apr 24, 2021




By Frank C. Barile

Cornell Tech


Defining Misinformation in the Age of Proliferating Technology


Part I: Introduction


Background

On March 18, 2021, Cornell Tech’s Digital Life Initiative (“DLI”) welcomed Professor Ari Waldman of Northeastern University School of Law for a presentation entitled “Misinformation and the Conservative as Victim”.[1] Professor Waldman is Professor of Law and Computer Science and Faculty Director of the Center for Law, Innovation and Creativity.[2]


Hypothesis

Professor Waldman’s presentation was, by his own admission, nascent; it was posited as the beginnings of a research project, seeking comment and inspiration from the Cornell Tech DLI community. Professor Waldman hypothesizes that misinformation is largely the province of conservative news media, and that conservatives weaponize discourse by manipulating victimization. The argument is that legal institutions are undermined by such misinformation, and that the legal community has been thwarted in its efforts to combat conservative misinformation, largely because fake news and lies misinform both courts and lawmakers. The draft hypothesis contemplates how current institutions are vulnerable to misinformation, given that many were created before social media and virality.


Framing the Issue

This writing is my attempt to assist Professor Waldman by providing a framework for research, albeit an incomplete one. The draft hypothesis raises many questions and issues, most of which Professor Waldman has likely already contemplated or is expert in. In an attempt to participate in the discussion and respond to Professor Waldman’s entreaties, below are some thoughts, questions, and analysis from a non-expert, offered with the intention of fostering a fruitful discussion that results in targeted research and progress. It is a mere framework of thought, focusing on defining misinformation and the troubles with regulating speech in a free-expression society. This writing admittedly raises many rhetorical questions, yet sadly proposes few elegant solutions.


Terminology

The general conservative/liberal and Democrat/Republican dichotomies require caution.[3] Political affiliation is nuanced; not all conservatives believe in the same ideals, just as not all conservatives are as extreme as others. Similarly, not all Republicans are conservative. The dichotomy also omits moderates and those who don’t fit on a spectrum at all. Consider the Nolan Chart[4], which posits that there are gradations between the traditional affiliations, and in fact goes further to create categories of libertarians and authoritarians. The danger of such generalizations is that they ignore those who don’t fit neatly into a category and attribute policy preferences without any regard for detail. Lastly, labels can doom discourse from the start; eschewing them may well foster discussion and empathy.



PART II: Defining Misinformation


History

Misinformation is hardly new; in fact, its first occurrence is impossible to cite because it must have been prehistoric. This raises the question: why is misinformation a prominent discussion now? Some posit that misinformation earnestly became a topic of concern during the 2016 Presidential election.[5] But again, opinions and lies undoubtedly existed well before that event. Most argue that the new channels of misinformation- Twitter, Facebook, Instagram, and the like- amplify it, meaning the misinformation problem itself is static; the only distinction now is its greater degree of impact. Some may argue (rightly or wrongly) that such channels are in fact progressive creations, as most sprouted from American tech hubs or were funded by the tech industry, a bastion for progress.[6]


Misinformation

Professor Waldman admits that a definition of misinformation is elusive. Without such a definition, it is difficult to have an informed discussion on regulating misinformation. Stakeholders can’t agree on a concrete definition[7], foiling conversation and making any progress arduous. What is clear is that misinformation is a form of speech. Speech is a continuum that contains opinions, facts, lies, truths, and all of the gradations in between. The quandary is further complicated by the overlap between such types of speech- an opinion can be partially fact; even lies and truths surely have gradations. Aside from gradation, what counts as an opinion naturally varies not just amongst like-minded people, but also across cultural divides, age, color, socio-economic status, and other demographic factors.


Threshold Issue

If misinformation needs to be defined before we can even discuss it, it certainly requires a definition before we can legislate it. How can the state regulate what it cannot define? When presented with a definitional predicament (albeit in a different context), Justice Potter Stewart famously spawned a standard of flexibility: “…I know it when I see it”.[8] The danger of creating any boundaries on misinformation is that human error may naturally inject its own bias into a definition. This not only thwarts a workable definition, but may serve to anger political opponents. Angering political opponents is a sport, not just amongst elected officials[9], but also largely across American discourse. Both history and the current state of American polarization evidence that sport will resolve neither political differences nor misinformation.


Intentions

Defining misinformation presents other quandaries too. Must one intend to misinform? Construing one’s intentions after the fact is a slippery slope. Assuming we can even define misinformation, what can the law do to fix it? Do we intend to regulate misinformation (requiring no intention to misinform), or disinformation (intentional misinforming)?[10] Who gets to decide what someone else was thinking at the time of speech?


Gradation

Another issue with defining misinformation is gradation. Some speech is patently false; some is arguably false. If a New Yorker said you can arrive in California by heading east, that could be a true statement- it is either a matter of perspective or perhaps a technicality. Regulating misinformation effectively might then require defining such perspectives or technicalities. Is a human better equipped to judge these, or perhaps an algorithm? If the latter, who gets to program the algorithm?


Subjectivity

Aside from facts, true or false, there exist opinions. Subjectivity allows two opposing views to diverge on value judgment. Opinions surely must be exempt from misinformation, as they likely cannot be false. However, as mentioned earlier, opinion can be mixed with fact. In such gradation, the open question is whether the falsehood renders the entire opinion misinformation. Secondly, what even constitutes an opinion, such that it is exempt from misinformation? This of course applies to the new world of editorial journalism, where the boundaries of fact and opinion are increasingly porous and hard to decipher[11]. Who gets to decide what is an opinion?


Perception

Consider this droll wordplay: “There are 10 types of people in the world; those who understand binary, and those who don’t”. Depending on the reader’s interpretation, “10” can have divergent meanings.[12] That’s why it’s so clever; it’s a double entendre. Regardless, it proves that perception differs. Whose perception controls here?
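
To see the wordplay mechanically, here is a minimal Python sketch (my own illustration, not part of the seminar): the same string “10” resolves to different numbers depending on the numeral base the reader supplies.

    # The string "10" has no fixed value until a reader supplies a
    # numeral base -- the "perception" that controls its meaning.
    for base in (2, 10):
        print(f'"10" read in base {base} is {int("10", base)}')
    # "10" read in base 2 is 2
    # "10" read in base 10 is 10

The interpretation lives entirely in the reader, not in the text; nothing in the string itself tells us which base to apply.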


Harm

Consider a common modern scenario- a teenager sends a harmless image via mobile phone to another teenager, who, at the touch of a button, runs an application that enhances the image by altering facial hair, skin tone, eyewear, and jewelry. The image is then published online, but no longer accurately represents the truth. All actions were firmly intentional, but perhaps without intention to cause harm. Nonetheless, harm could result. This act could rise to the level of misinformation under the above definitions. Is this harmless fun that accidentally qualifies as misinformation? Is it expressive art? Sure, the stakes are different in this scenario, but the extremity illustrates the unintended consequence.


Contemporaneity

The above largely covers speech made during or after an event. What about future statements of intention? President Trump may have threatened future violence by stating that “when the looting starts, the shooting starts”[13]. What if the statement is insincere, though? Consider someone insincerely threatening future violent action, such as President Reagan joking “We begin bombing in 5 minutes”[14]. How do we decipher one’s state of mind at the moment speech is made? Opinion raises the same issue. Life Magazine in 1948 printed a headline stating “Our Next President; Thomas E. Dewey”.[15] Surely this was a statement of opinion, as Life Magazine could not have known the election result before the votes were cast or tallied.[16] The publication was opining on who might win the ensuing election. Yet, as stated, it still qualifies as misinformation under most definitions. This extreme case raises the question of whether such forward-looking, aspirational statements should qualify as misinformation. And if so, can an algorithm properly distinguish the above cases?


Strict Liability

If intentions don’t matter, then a far wider exposure to strict liability exists. Every tweet, every forwarded email, and every piece of gossip passed on is potentially actionable. The New York Times publishes a daily list of corrections of prior speech- running 11 corrections for March 18, 2021[17]. Surely the erroneous prior speech doesn’t rise to the level of misinformation; it may have been inadvertent. Yet it is misinformation by many definitions; it was false, and the New York Times printed it for distribution to the public. Should we regulate the New York Times for its multiple daily errors? NBC’s Brian Williams, 12 years after the fact, misremembered being in a helicopter that was hit by ground fire[18]; at best, that may qualify as misinformation.


Due Diligence

Should publishers be required to perform due diligence on sources before publishing speech, given that the US is a free-expression regime? Again consider the 1948 presidential election; this time the Chicago Daily Tribune’s headline: “Dewey Defeats Truman”[19]. While dated, the example remains relevant because it exemplifies a publisher repeating a factually untrue statement, leaving open the question of whether it knew or should have known it was untrue. That raises the question: should there be a recklessness standard, whereby the publisher is liable if it should have known[20] the news was untrue?[21] Perhaps this would impose a diligence burden on publishers. Yet Professor Waldman’s draft hypothesis correctly notes that nearly anyone with a mobile phone and an internet provider can be a publisher, with great reach to most ends of the globe. Can hundreds of millions of Americans all be held to this standard every time they post something? This illustrates the Sisyphean struggle in front of society today.


Impeachability

By way of example, the New York Times reported a quote stating that former CNN President Jeff Zucker strong-armed CNN into becoming an anti-Trump network.[22] This raises two issues: if false, has the New York Times impeached itself by printing a false statement that rises to the level of misinformation? If true, should CNN be impeached as a source for evincing an editorial bent that interferes with factual reporting? If Zucker had a “feud” with Trump[23], how could we expect objectivity from CNN with Zucker making editorially biased decisions? And if a source claimed Zucker forced an agenda into the news to intentionally distort the truth, what makes that source more impeachable than CNN?


Newsworthiness

There is also a question of newsworthiness- can the media republish misinformation, whether or not it knows it’s untruthful, simply because it is newsworthy[24]? What if the untruth is the news? For example, if Trump claims on Twitter that a combination of drugs may cure a virus[25], and the Boston Globe reports on the story[26], has the Boston Globe spread misinformation?[27] Applying Potter Stewart’s flexible standard, probably not; but it may depend on how the news is presented (context) in that story.


Context

Does context matter? Some speech can be true or false depending on its surroundings. The world’s leading social media enterprise, Facebook, thinks context matters when judging truth.[28] Again, there may be gradations here, as truths and lies can overlap, just as fact and opinion can. Because human bias can make for imperfect judgment, who will be the arbiter of context? If we appoint a political extremist, the likely result is vitriol that only serves to divide. If we appoint a moderate, surely the extremists will take umbrage.


Expression

Another complication is expressive art, whether that is humor, satire, comedy, or anything in between. If an artist depicted lies on a public wall[29], that could be misinformation. Or is it exempt as expressive art? Perhaps satire or comedy is similarly exempt. If so much can easily claim exemption as expressive art, misinformation rules are in peril. The alternative- placing value judgments on expressive arts- could be a fool’s errand.


Deepfakes

A relatively recent issue is what to do about deepfakes. Deepfakes are electronic representations of a person’s likeness, materially altered to appear to engage in speech that likely never occurred.[30] The technology has developed to the point that a novice user can create deepfakes with alarming accuracy.[31] Surely the creator (whether human or machine) intended to falsify a representation of the original, which may render all deepfakes disinformation, at least under the above definitions. Deepfakes are especially troublesome because they can portray a speaker as supporting a cause (perhaps even a reprehensible one), and the viewer has little reason to question the authenticity. Once the speech is falsified, it can be difficult to claw back or to “un-see”, rendering many solutions too late and ineffectual. Should all deepfakes be regulated, actionable, or even criminalized?


Misleading

Perhaps the thorniest issue is speech that is true but misleading. Decades ago, a psychologist opined in a lecture I attended: “in 1850, 100% of the people who ate carrots died”. This is an incontestably true statement. It is also misleading because it implies implausible causation, drawing the reader to a conclusion that is partially false.[32] This category of speech is yet another hazard of regulating what is and isn’t misinformation. Similarly, if a newspaper publishes a story of a plane crash when it has knowledge that the plane was maliciously shot down, the speech isn’t untrue, but it could be a material distortion of the truth. While the statement isn’t false, it so underinforms the reader that the speech is materially deficient. Perhaps here Potter Stewart’s standard would provide a flexible inquiry to mete out justice.
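
To make the carrot statistic’s sleight of hand concrete, here is a minimal sketch (with hypothetical cohort numbers of my own; the lecture supplied none): the claim is true, but the unstated base rate is identical for those who never ate a carrot, so no causal inference is warranted.

    # Hypothetical 1850 cohorts (illustrative numbers, not real data).
    carrot_eaters, abstainers = 1_000_000, 1_000_000

    # Everyone alive in 1850 has since died, regardless of diet.
    dead_eaters, dead_abstainers = carrot_eaters, abstainers

    print(f"Carrot eaters who died: {dead_eaters / carrot_eaters:.0%}")   # 100%
    print(f"Abstainers who died:    {dead_abstainers / abstainers:.0%}")  # 100%
    # "100% died" implies causation only if the comparison group's
    # rate differs; here it does not.

The misleading power of the statement comes from omitting the comparison group, not from any falsehood in the figure itself.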



PART III: Existing Approaches


Options

If we can’t properly define misinformation, then perhaps we are left with two options. One is maintaining the status quo of a free-expression society that fundamentally guarantees rights to free speech, resulting in no effective solution or progress. The other is Potter Stewart’s flexible standard: “I know it when I see it”. The former is faulty in that it allows misinformation to proliferate; the latter is rife with disparate enforcement and political bias.


Lawmaking

Even if all of the above issues of defining misinformation are resolved- who pens the law? Who enforces it, and to what parameters? If the solution doesn’t encompass the large majority of stakeholders, it may be destined for failure. Thus, the solution should not be solely in the hands of one political faction (however large), but should instead resolve the issue for the many. Even that solution has obvious flaws; just because something is popular doesn’t necessarily make it just or righteous.


Hybrid Approach

Perhaps there is a hybrid approach- an attempt to define misinformation, with a committee tasked to adjudicate appeals that aren’t clearly accounted for. This is essentially what Facebook has created with its Oversight Board. The Oversight Board is populated with a broad mix of the political spectrum, endeavoring to represent the broader mores of society.[33] It is not without its own flaws[34], though, as human bias[35] may prevail.[36] As a corollary, perhaps academic research has a better chance of success if it seeks a solution for all, not just one faction. Perhaps replicating the Oversight Board model- even with its blemishes- is the wisest course if no better option exists.[37]



PART IV: Objectivity and Solutions


Source Primacy

Professor Waldman implied that claims of fraud during the 2020 elections were false. Some may refute this with particular evidence, for example a Heritage Foundation study[38] revealing over 1,300 instances of voter fraud, resulting in over a thousand convictions. The source is certainly impeachable for lack of objectivity, but the threshold inquiry is why any source should have primacy over another. This cleanly reflects the aforementioned human bias, and is the very same reason humans cannot dispassionately enforce misinformation laws. An ancillary question is who decides that a source of speech is even associated with a political affiliation. As an opening bid, any research should at least provide evidence of why certain sources must take precedence over others.


Human Bias

What if a “moderate” source presented the Heritage Foundation study? Or perhaps a more progressive publication? Addressing such an inquiry would be telling because it isolates whether it is the source or the content that is impeachable. Put differently, why might any hypothesis assume the primacy of non-conservative media? Does such media have a clean record of not misinforming the public, given the difficulty in defining misinformation and the aforementioned untruths? Regardless of the outcome, these inquiries, if answered, would add objectivity to a researched conclusion.


Target Accuracy

Subjectivity leads to value judgments, and value judgments are opinions that can flare tempers. A research project that generalizes that conservatives are all liars is fatally flawed at the outset- it is a conclusion in search of analysis. Instead, a more balanced approach that embraces an immutable truth- that opinions and lies can emanate from all kinds of people- will place the target over all of the culprits, not half of them. How can a problem be solved by analyzing half of it?


Content-Neutral

The issue of misinformation, as defined herein, is widespread and thus can’t be the sole province of any one faction. It is therefore a content-neutral issue; it shouldn’t matter what the speech conveys politically, or what its source or history is. As a corollary, any solution should also be content-neutral, hinging on whether speech is false or not, without regard to policy or source. The focus of research should similarly be on misinformation generally, not conservative misinformation. Resolving the issue of conservative misinformation will naturally solve just that, and no more. A deficient analysis can only result in an unbalanced, one-sided discussion that departs from the historical discourse that engenders policy and law.


Conundrum

Solving the misinformation issue can appear circular and futile. At one extreme, if speech is left largely unregulated, misinformation may prevail. At the other, regulated speech may subvert fundamental freedoms and the unfettered exchange of ideas and, more dangerously, result in a one-sided discussion of policy ideals. Even if we somehow define and regulate misinformation, the enormous amount of existing content- and the exponential future content- makes the chances of eradicating misinformation low. However, just because a task appears futile doesn’t mean we shouldn’t act to remedy society’s ills. Thus, we likely need to take action, which entails a detailed discussion on defining misinformation. Perhaps the best starting point is finding anything common- maybe agreement that we even need to take action at all- and then building on that consensus.


Polarization

Professor Waldman repeatedly expressed a dislike for certain news media organizations and attributed them to conservatives. The draft hypothesis is therefore reflective of current polarization; efforts aren’t made to find common ground or an inclusive, diverse solution; the task is merely to correct the record for a favored faction. Instead, we should focus our efforts on finding commonality and progress- a solution that a critical mass can agree with, not just one half of American society or one large political faction.[39] A solution for half the population will result in just that: an incomplete solution, leading to more division in society. Solutions that are exclusive and homogenous necessarily violate the ideals of inclusion and diversity.


Bipartisanship

Sweeping, bombastic generalizations that paint a large portion of the population with one brush- whether true, false, or somewhere in between- alienate readers right from the title. This risks not communicating the true message to the very ears that need to hear it, and instead results in an echo chamber of self-fulfilling accord. This compartmentalization further drives the wedge between factions. Displaying imagery of maligned conservative leadership may strike a nerve with the bicoastal elite and the urban intelligentsia, but it does little to unify our divided electorate or resolve any underlying conflicts. Without balance, one risks endangering the solution.


Tolerance

Professor Waldman freely admitted that the fledgling hypothesis isn’t fully formed. The effort to reach out to the Cornell Tech community for input is reflective of aisle-crossing, bipartisan behavior and is laudable as a model for modern discourse. Professor Waldman graciously accepted differing views from the audience, whether or not they aligned with his policy preferences. Toleration of opposing views is respectful, educational, and will lead to a workable solution that suits the many, not the few. A fundamental pillar of secondary education is open discussion and the exchange of perspective. We should embrace the differing ideals and beliefs of those who disagree with us politically. Inspired by Professor Waldman’s example, I welcome any and all discussion, regardless of its content or merit, in response to this writing. In fact, Cornell University[40], Cornell Tech[41], and Northeastern University[42] all maintain policies welcoming all viewpoints, no matter how controversial. Perhaps the very problem of misinformation discussed here can be resolved with the same policy- tolerance. As a result, I very much appreciate and thank Professor Waldman for his efforts and for enlightening me on this topic.


PART V: Conclusion


Professor Waldman has initiated the benevolent but difficult task of tackling an information problem that has grown exponentially in recent years. The stakes are high, and getting higher. It is difficult to resolve misinformation if we cannot clearly define it, or agree on the factors that may inform such a definition (intentions, context, newsworthiness, etc.). Existing approaches are imperfect, yet inaction will not resolve the problem. For a solution to a widespread problem to be effective, it must encompass the mores of the widespread public, and nothing less. If the solution is focused too narrowly, it may only serve to alienate a large portion of the public. Free discussion and tolerance of opposing opinion better embrace inclusion and diversity and can lead to an effective solution for all stakeholders.


[1] https://www.dli.tech.cornell.edu/seminars/Misinformation-and-the-Conservative-as-Victim
[2] https://www.northeastern.edu/law/faculty/directory/waldman.html
[3] As a result, I will use such labels in this writing in response to Professor Waldman’s usage.
[4] https://en.wikipedia.org/wiki/Nolan_Chart
[5] https://news.stanford.edu/2017/01/18/stanford-study-examines-fake-news-2016-presidential-election/
[6] https://www.nytimes.com/2017/09/06/technology/silicon-valley-politics.html
[7] https://www.politico.com/story/2018/01/16/americans-fake-news-study-339184
[8] https://www.law.cornell.edu/supremecourt/text/378/184
[9] https://www.politico.com/news/magazine/2021/03/21/owning-the-libs-history-trump-politics-pop-culture-477203
[10] https://www.businessinsider.com/misinformation-vs-disinformation
[11] https://www.journalism.org/2018/06/18/distinguishing-between-factual-and-opinion-statements-in-the-news/
[12] https://en.wikipedia.org/wiki/Mathematical_joke#Jokes_with_numeral_bases
[13] https://www.facebook.com/DonaldTrump/posts/i-cant-stand-back-watch-this-happen-to-a-great-american-city-minneapolis-a-total/10164767134275725/
[14] https://www.theguardian.com/world/2014/aug/14/ronald-reagan-bombing-russia-joke-archive-1984
[15] https://www.trumanlibrary.gov/photograph-records/64-1-03
[16] https://www.chicagotribune.com/featured/sns-dewey-defeats-truman-1942-20201031-5kkw5lpdavejpf4mx5k2pr7trm-story.html
[17] https://www.nytimes.com/section/corrections
[18] https://www.nytimes.com/2015/02/05/business/media/brian-williams-apologizes-for-saying-he-was-shot-down-over-iraq.html
[19] Another publication, the Journal of Commerce, printed eight articles about “President Dewey”. https://web.archive.org/web/20070706152417/http://www.joc.com/history/p14.asp
[20] The New York Times was a participant in a landmark 1964 Supreme Court case regarding the publication of factual inaccuracies. The plaintiff’s claim was dismissed for failing to evidence that the New York Times knew it printed a false statement or was reckless in doing so. https://supreme.justia.com/cases/federal/us/376/254/
[21] This instance presents another interesting question of whether the publication of a photo of the erroneous headline is also in fact misinformation, when it clearly shares a falsehood with the public.
[22] https://www.nytimes.com/2021/02/04/business/media/cnn-jeff-zucker.html
[23] https://www.hollywoodreporter.com/features/cnn-chief-jeff-zucker-unveils-plan-dominate-digital-new-shows-a-25m-youtuber-donald-trump-c
[24] https://law.justia.com/cases/new-york/court-of-appeals/1999/94-n-y-2d-296-0.html
[25] https://www.nbcnews.com/politics/donald-trump/twitter-removes-tweet-highlighted-trump-falsely-claiming-covid-cure-n1235075
[26] https://www.bostonglobe.com/2020/10/08/nation/5-things-know-about-antibody-cocktail-president-trump-received-treat-his-coronavirus/
[27] If yes, then the author of this writing has similarly misinformed the public merely by discussing a published untruth.
[28] https://www.lawfareblog.com/facebook-oversight-boards-first-decisions-ambitious-and-perhaps-impractical
[29] https://www.huffpost.com/entry/trump-wall-of-lies-public-art-manhattan_n_5f9cc0e4c5b6bef9f18d8255
[30] https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them
[31] https://abcnews.go.com/US/cheerleaders-mom-created-deepfake-videos-allegedly-harass-daughters/story?id=76437596
[32] The implication is that eating carrots caused the deaths of 100% of those who ate carrots in 1850, when in fact everyone alive in 1850 has since died, regardless of diet.
[33] https://www.newyorker.com/preview/article/5d0b9b93240799cc25a950a4?status=draft&cb=951745
[34] The Oversight Board is created by and funded by Facebook and reviews Facebook decisions for compliance with Facebook policies. The rulings are advisory recommendations. Facebook has the authority to revoke the Oversight Board’s charter and remove board members. Facebook is tasked with enforcing the Oversight Board’s recommendations. https://about.fb.com/wp-content/uploads/2019/09/oversight_board_charter.pdf
[35] At the inception of the Facebook Oversight Board, Facebook employees were “angry” with the appointment of a retired Republican judge appointed by George W. Bush. https://www.newyorker.com/preview/article/5d0b9b93240799cc25a950a4?status=draft&cb=951745
[36] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3639234
[37] A critical distinction between the Oversight Board and government regulation is that the Oversight Board isn’t necessarily subject to First Amendment limitations.
[38] https://www.heritage.org/voterfraud
[39] https://news.cornell.edu/stories/2021/03/president-clinton-us-dogfight-democracy
[40] https://www.dfa.cornell.edu/sites/default/files/policy/CCC.pdf
[41] https://studentservices.tech.cornell.edu/academics/campus-policies/
[42] http://www.northeastern.edu/osccr/wp-content/uploads/2020/08/Code-of-Student-Conduct-2020-2021.pdf

