
DL Seminar | Debate: "Does AI Pose an Existential Threat to Humanity?"

Updated: Apr 24, 2021


Individual reflections by Heidi He and Andrew Koo (scroll down).



By Heidi He

Cornell Tech

Does AI Pose an Existential Threat to Humanity?

Led by Prof. Etchemendy, the DLI seminar this week took the form of a debate around the question “Does AI pose an existential threat to humanity?” The “No” team was led by Meg Young, joined by Meggie Jack and Amy Zhang. The “Yes” team, Salomé Viljoen, Lee McGuigan, and Ido Sivan-Sevilla, argued that AI does pose an existential threat to humanity. Together, the teams gave us a fruitful and wide-ranging conversation covering many aspects of the relationship between AI and society, as well as ethics, politics, and quality of life. Instead of a binary, linear point of view, this spectacular discussion between the two teams offered a comprehensive understanding of the topic and a structure for reflecting on it, prompting the audience to think harder about the question.


The “Yes” team unpacked the topic from three angles: the Bostrom canonical position, the political economy position, and the existential autonomy position. A strong AI that outperformed humans at cognitive tasks might be programmed with good intentions yet develop destructive means; Salomé thus pointed out that humans should take responsibility for aligning the incentives of AI with those of humanity, through well-developed controls and rules. Lee argued that AI fundamentally serves to automate global capitalism. For example, in serving a profitable market, it can be an environmental menace, consuming substantial power in training. The threat of AI making life unlivable is embedded inseparably in the workings of the political economy. On the challenges to existential autonomy, Ido argued that AI can create potentially new levels of attack. The erosion AI causes to democratic society is a gradual process that outpaces the speed at which policy adapts, and with existing business models and incentives we cannot ensure that AI stays under control.


Team “No” argued against those ideas, holding that AI does not pose an existential threat to humanity. Meg pointed out that consciousness is absent from AI’s capacities yet fundamental to intelligence, and that some players amplify the problem of AI to attract investment, drawing attention away from other social problems. “Humans make AI racist, violent, and environmentally damaging,” said Meggie, who later pointed out that technology can be appropriately created and controlled by its users. Rather than a unique rupture in history, the technical transition is a continuation of longstanding systems, including social norms, policies, and institutional frameworks. Amy also defended the idea that AI needs a human agent to define its purpose. On social and political adaptation, Amy noted that a lag is inevitable with any technological advance before it becomes widespread and passes its honeymoon period. AI presents a new context for the legislative process, and we have the opportunity to act proactively and autonomously with respect to the technology.


Given the page limit, this brief summary can only cover so much of a discussion that built up “a puzzle piece by piece with considerate and rational thoughts that tackles the question in a comprehensive and systematic manner.” Many thoughtful and insightful ideas came out of the debate and the Q&A session: for example, Prof. Etchemendy raised the complexity of human motivational structures, and Gilbert from UC Berkeley brought up the question of passion in AI. Don’t miss the full discussion at this link: https://vod.video.cornell.edu/media/Digital+Life+Seminar+%7C+Debate/1_je1dsmx6/175727441

The debate provoked us to think more deeply about this question. So, last but not least, in addition to the summary of the debate, here are some artworks that are refreshing and relevant in this context, for art can distill the abstract messages of this broad question. Artist Sougwen Chung draws alongside a mechanical arm programmed with AI to learn her drawing style. Behind the peaceful yet fluid lines is a collaboration that embodies the “No” team’s idea of humans creatively designing and using a technical tool. The artwork “Edmond de Belamy, from La Famille de Belamy,” known for its high auction price of $432,500, stirred debate over how to critique AI-generated art. The AI program develops its own artistic style under the artist’s direction. Given the attention and financial value it commands on the market, what’s next for it? It clearly has its own aesthetics; how will we know when it develops a “passion” for art?



By Andrew Koo

Cornell Tech

Does AI Pose an Existential Threat to Humanity?

The potential of artificial intelligence (AI) has long inspired our imaginations through science fiction; however, AI research has matured considerably in recent years, arguably blurring the boundary between fantasy and reality. With prominent figures such as Stephen Hawking, Elon Musk, and Bill Gates raising concerns about the very real risks associated with the unfettered progress of AI, does AI actually pose an existential threat to humanity?

On March 4th, 2021, John Etchemendy, Stanford Provost Emeritus and Co-director of the Institute for Human-Centered AI, moderated an engaging debate between two teams, captained by Salomé Viljoen, joint postdoctoral fellow at the NYU School of Law Information Law Institute and the Cornell Tech Digital Life Initiative, and Meg Young, postdoctoral fellow at the Cornell Tech Digital Life Initiative, arguing for the merits and the shortcomings of this idea, respectively.


The “Nick Bostrom” Position on General AI


The Argument


AI today has competency that is relatively narrow in scope, in that computers usually only outperform humans in a small subset of specified tasks. For example, the same AIs that are famously extraordinary at chess are completely inept at all other tasks. However, the first argument forwarding the existential risk of AI to humanity is concerned with what happens if we develop “general AI”: AI that is smarter than humans at potentially any task, rather than just one. In this hypothetical scenario, we can posit two failure modes in which AI poses an existential threat to humanity.


First, AI may fail by breaking down. As AI becomes more and more competent, we will become more and more reliant upon its capabilities. As the responsibility given to AI grows, humanity may become over-reliant on technology and lose reasonable control over issues that carry existential risk. If AI were then to fail, all the systems depending upon it would fail, humanity included.


The second failure mode addresses the misalignment problem in AI. What if AI were extremely competent and successful, but at a task that is even slightly misaligned with the welfare of humanity? Something as seemingly simple as having an AI reduce carbon emissions may carry unintended consequences, such as the AI concluding that the most effective method for reducing carbon emissions is the extinction of humanity. It is very difficult to align an AI with human goals that are often left implicit, assumed on the basis of our shared experiences and common sense. We may not need to worry about an “evil” AI, but we should perhaps worry about an extremely competent one, and should dedicate ourselves to aligning the incentives of machines with those of humans.
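To make the misalignment worry concrete, here is a minimal sketch (hypothetical, not from the debate itself): a planner that scores candidate actions only by emissions reduced selects the catastrophic option, while one whose objective also encodes a human-welfare constraint does not. All action names and numbers below are invented for illustration.

```python
# Toy illustration of the misalignment problem (all values hypothetical).
# A planner scoring actions only by emissions reduced picks the
# catastrophic option; adding a human-welfare constraint changes the choice.

actions = {
    # action: (emissions_reduced, human_welfare), both on a 0-1 scale
    "plant forests":      (0.3, 1.0),
    "expand solar power": (0.5, 1.0),
    "eliminate humanity": (1.0, 0.0),  # maximal reduction, total catastrophe
}

def naive_score(emissions_reduced, human_welfare):
    return emissions_reduced  # welfare never enters the objective

def constrained_score(emissions_reduced, human_welfare):
    # A hard welfare constraint dominates the emissions term.
    return emissions_reduced if human_welfare > 0.5 else float("-inf")

best_naive = max(actions, key=lambda a: naive_score(*actions[a]))
best_constrained = max(actions, key=lambda a: constrained_score(*actions[a]))
print(best_naive)        # -> eliminate humanity
print(best_constrained)  # -> expand solar power
```

The unstated goal “and keep humans alive” is exactly the kind of common-sense constraint that is easy to omit from an objective and, as the “Yes” side argued, hard to specify exhaustively.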


The Rebuttal


Much of the argument presented above is based upon the hypothetical development of a “general AI” capable of posing an existential threat to humanity or, defined more precisely, capable of ending all life on Earth. For this to happen, the AI would need to be able to reason and think for itself: the AI would need to be conscious. The concept of consciousness is a complex one, but it stands to reason that it is essential to intelligence, and it is what separates us from computers. One could claim that Deep Blue and other strong chess agents do not play chess as humans do, because there is an inherent difference between computation and intelligence.


With the current state of AI research, there is no reason to believe that we will be able to reproduce consciousness. In the same way that we do not perceive basic calculators as intelligent because they cannot reason, we should not fear AI simply because it has stronger computational power. AI intelligence today is only an illusion of intelligence; common technologies such as the voice assistants Siri and Alexa might have seemed like conscious AI to humans one hundred years ago, yet Siri and Alexa are no closer to being a threat to humanity than basic calculators.


AI Risk in Political and Economic Systems


The Argument


AI is already entrenched in political and economic systems, and its effects within those contexts could pose substantial existential risk to humanity. Evidence of this can be seen in various real-world examples, including the automation of manufacturing jobs, government surveillance, and the globally deteriorating state of the climate.

In manufacturing and many other industries, the automation of jobs has caused economic and social turmoil. Productivity may increase with high-tech automation, but not without long-term externalities, including displacing entire workforce populations and exacerbating wealth inequality. Political damage is also visible in our world today: in China, advanced AI tools are used to systematically monitor Uighur and other ethnic populations, enabling unprecedented authoritarian government powers. Lastly, industries that endanger humanity’s sustainability, such as the fossil fuel industry, use AI to legitimize their actions, and the sheer computational resources needed to run AI are themselves an environmental hazard.


As AI technology becomes more advanced, it gives its users enormous potential to do damage on an existential scale. Where the previous argument considers a more abstract possibility arising from the advancement of AI intelligence, this argument is grounded in the realities of society today.


The Rebuttal


Opponents of this argument state that these examples overemphasize the power of AI and that the origins of these issues are more human than technological. In other words, AI is simply a tool, and humans bear the responsibility when it is used for evil. On global issues such as climate change, the problems are primarily political; we face problems with fossil fuels not because of technology but because of the lack of political will to change the economics of electricity. In addition, while it is true that technology expands the abilities of governments and groups in power to impose dangerous goals, it is also true that technology can be, and is, appropriated by minorities and groups marginalized by the status quo. In fact, technology is often at the core of what enables revolutionary change for the betterment of minority groups. Finally, the tech-determinist stance obscures the fact that we, as humans, have power over the technology itself.


Existential Risk to Human Autonomy


The Argument


AI poses a threat to human autonomy, taking into consideration both the number of ways it can be used and the realities of regulatory speed. AI can potentially mount attacks on individual autonomy by covertly changing the ways we make decisions, beyond the norms of traditional marketing. The influence of technology has already expanded from simple changes in purchase behavior to an emotional dependence on social app views, and most of these changes happen without users’ knowledge. The status quo never changes all at once, and similarly the erosion of privacy happens slowly. As AI and technology become increasingly embedded in our lives, the boundary of human autonomy blurs under the influence of ever-present AI.

Regulators and policy makers are used to responding to issues at human speed, not AI speed, and are not sufficient safeguards to protect us from the existential risks AI poses. Technologists cannot ensure that AI will not get out of control when we do not entirely understand how AI works.


The Rebuttal


The opposition claims these issues are human in nature rather than artifacts of AI. AI is a tool whose benefits and damages depend heavily on human design and use. AI is not smart in a general sense and does not have its own ethics; it requires a human-defined goal and success metric to function. Therefore, if we observe problems in our society through the lens of AI tools, such as a racially biased computer vision tool, is it the algorithm’s fault for succeeding, or is it simply a problem we have in our human society?
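The rebuttal’s claim that an AI system merely optimizes a human-chosen goal against human-supplied data can be illustrated with a small sketch (hypothetical, not from the debate). The same learning rule is run twice; only the historical decisions fed to it differ, and the “bias” of the resulting tool follows the data, not the algorithm. All groups, decisions, and counts are invented.

```python
# Toy sketch: a learning rule reproduces whatever pattern sits in the
# human-supplied labels. The algorithm is identical in both runs; only
# the historical decisions fed to it differ. (All data hypothetical.)

from collections import Counter

def train(records):
    """Learn the majority decision per group from historical human decisions."""
    counts = {}
    for group, decision in records:
        counts.setdefault(group, Counter())[decision] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

# Historical decisions skewed against group "B".
biased_history = ([("A", "approve")] * 9 + [("A", "deny")] * 1
                  + [("B", "approve")] * 1 + [("B", "deny")] * 9)

# Historical decisions with no group skew.
fair_history = ([("A", "approve")] * 8 + [("A", "deny")] * 2
                + [("B", "approve")] * 8 + [("B", "deny")] * 2)

print(train(biased_history))  # {'A': 'approve', 'B': 'deny'}  -- bias learned
print(train(fair_history))    # {'A': 'approve', 'B': 'approve'}
```

On this view, the biased output is a mirror held up to the training records: change the human decisions in the data, and the same code yields an unbiased tool.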

It is true that governance and rule-setting bodies have not risen to the task for many of the societal issues caused by disruptive technologies in recent years. However, this is natural for disruptive change. Regulators have historically responded to issues after the fact, and just because it takes time for change to be enacted does not mean it will not get done.


AI cannot change itself and has no notion of ethics predisposed against humanity. Therefore, AI tools give humans the opportunity to take more autonomy, not less, over the problems we see in our society.


Closing Thoughts


At the conclusion of this rich debate, both teams emerged winners: polarized opinions in the audience, both for and against AI risk, gravitated toward a middle ground of greater consideration for the complexities and nuances of the question. During closing statements and Q&A, additional topics enriched the conversation even further, including the philosophical question of whether it is possible for AIs to develop the complex motivational structures that humans have, the financial incentives behind forwarding this existential risk argument, and the consideration of what physical manifestations AI would need in order to create existential risk.


Regardless of one’s position in this argument, the debate brought to light the importance of deep consideration of how technological advancement affects our society. From addressing long-term hypothetical outcomes to facing risks already apparent today, this exercise shows how technology can have wide-ranging impacts that far exceed a creator’s original intent, and why we must challenge ourselves to be ceaselessly thoughtful when pushing the boundaries of technological innovation.
