MODERATOR
John Etchemendy
TEAM 'YES'
Salome Viljoen (Captain)
Lee McGuigan
Ido Sivan-Sevilla
TEAM 'NO'
Meg Young (Captain)
Maggie Jack
Amy Zhang
RECORDING
Condensed Transcript From DLI's Inaugural Debate:
JOHN ETCHEMENDY: Now, when I was growing up, there was very little doubt. The Jetsons' robot Rosie cleaned house and cooked dinner and wouldn't hurt a fly. And Robby the Robot was just there to help. And Asimov's three laws kept all of the rest of them in line. Then something happened. It started with Skynet and the Terminator. But pretty soon they had all gone astray. Then it leaped from fiction to the real world. Suddenly we had people like Stephen Hawking saying the development of full artificial intelligence could spell the end of the human race. Elon Musk warned that AI would overtake humans within the next five years. And Bill Gates wrote, "I am in the camp that is concerned about Superintelligence. I agree with Elon Musk and don't understand why some people are not concerned." On the other hand, many, if not most, of the leaders of AI itself were unconcerned. For example, Yann LeCun, a pioneer of deep learning, says, "Worry about the Terminator distracts us from the real risks of AI."
To adjudicate these questions, we have two teams led respectively by Salome Viljoen and Meg Young.
SALOME VILJOEN: Thank you, John. I'm going to be presenting a few opening remarks before jumping into the substantive position that I'm in charge of for today. As John has noted, we're going to be presenting some key positions in the debate over whether AI poses an existential threat to humanity. And we're going to be presenting this in the form of a stylized debate. So team Yes, which I'm team captain of, will be arguing that AI does pose an existential threat to humanity. And to make this argument, we're going to be advancing three arguments. First, I will be presenting what we call the Bostrom position, which is kind of the canonical set of concerns, raised by philosopher Nick Bostrom as well as plenty of his co-thinkers, that an intelligent general AI may turn on and exterminate humanity if appropriate design and governance measures are not put into place now. My colleague Lee McGuigan will be presenting what we call the political economy position: that, like climate change, AI does pose an existential threat, but one that, also like climate change, can be understood as the product of economic and militaristic systems whose incentives and imperatives drive AI production. Meeting the existential threat posed by AI therefore requires addressing the incentives and imperatives of economic production and militaristic engagement that AI's development and adoption enacts. And finally, my colleague Ido Sivan-Sevilla will present what we're calling the existential autonomy position: that AI already poses an existential threat to humanity as smart programmed environments subject us to techno-social engineering. These smart systems change how we think, perceive, and act, remaking us to be more compatible with machines at the expense of vital aspects of what makes us human.
So team No, in turn, will be rebutting these positions. My colleague Meg Young will be refuting my Bostrom, or canonical, position. My colleague Maggie Jack will be refuting Lee's political economy position. And my colleague Amy Zhang will be refuting the existential autonomy position. And two final caveats before we jump in. First, please note that not all of us are presenting views that we hold personally, or even views that are the primary subjects of our research. And second, while we will be adversaries in some sense in this debate, the debate has been a lot of fun for us to prepare, and we hope the audience will engage in that spirit of play as much as in the spirit of competition. So with that, I'm going to turn to presenting and arguing for the canonical AI existential threat position.
While most AI today is what we would call narrow or weak AI, the long-term goal of a lot of researchers is to create general or strong AI. Narrow AI can be understood to outperform humans at a very specific task, such as playing chess or solving an equation. But general AI would be defined as an artificial intelligence system that would outperform humans at nearly every cognitive task. So what if that wish or that research goal comes true? Intelligent general AI may greatly improve human life. It could relieve humans of degrading work. It could solve problems like war, disease, and poverty that we as humans have been unable to solve. These are certainly, I think, the animating goals of a lot of AI researchers and developers. But if the research objective of general AI is achieved, AI could far exceed human intelligence and continue to do so beyond our comprehension or ability to govern. After all, designing smarter AI is itself a task of human cognition. So if AI exceeds human cognition in this task, it will be able to undergo self-improvement and potentially develop superintelligence that leaves us far behind. The possibility of superintelligence poses an existential issue, because intelligence is vital to power and control. This raises a real concern, a real challenge, if our research goal to create a general AI comes true, and it is core to the threat posed by intelligence. For instance, humans aren't the fastest creatures, we aren't the strongest creatures, we aren't the deadliest predators by physical design. We are the smartest creatures, which is what has allowed us to become the dominant species on Earth. So if we take the lessons of the Anthropocene seriously, we should take the importance of intelligence in machines seriously.
What are the threats that intelligent machines might pose? Now, with narrow AI, that sort of threat of failure or of a dangerous AI might be a nuisance, even a considerable nuisance. But the more tasks that AI takes on, the bigger the possible risks from its failure. So if general AI succeeds in becoming better than humans at all cognitive tasks, its failure could be catastrophic in two key ways. First, if this general AI fails by breaking down, it could lead to a breakdown of all the systems that rely on it. This could include many key infrastructures that future humans will rely on. But worse, superintelligent AI may be successful, and it may either be programmed with, or develop on its own, goals that diverge from the goals and the values of humanity. Thus, the gravest existential risk AI poses isn't that AI might fail or that AI might turn evil, but that AI turns competent with goals that are misaligned with the goals of humanity. This, again, could occur in one of two ways. It might be programmed to be intentionally devastating to humanity. We might be concerned that autonomous weapons, which are systems programmed to kill, could achieve this goal, such that weapons that are intentionally designed to be very difficult to switch off fall into the hands of an enemy or are developed absent international controls over these capabilities. AI weapons such as these could cause mass casualties. Now, this is already a risk with narrow AI, but this risk would only grow in scale as AI becomes more intelligent and more autonomous.
Second, AI may be programmed to do something that at first glance seems beneficial, but it may develop destructive means for achieving this otherwise apparently beneficial goal. So, for example, we could program the general AI to do something like minimize carbon production. And in the course of pursuing this goal, the AI realizes that it could most effectively minimize carbon production by eliminating human life. Or we could even think of something simple, like optimizing traffic, which leads the AI to deactivate all of our cars. And as humans attempt to stop it from deactivating all of our cars, it begins to see human life as a threat to achieving this otherwise straightforward and potentially beneficial goal. Now, these are stylized examples, but they illustrate that it's very difficult to fully align a programmed goal for an AI with all the exact caveats and boundaries assumed in the human articulation of that goal. A system doing what we program it to do and a system doing what we mean for it to do are, in fact, two very different things. So this is why a lot of researchers and prominent industry leaders who express a concern about AI's possible existential threat are worried not about evil AI, but competent AI, and that carries with it all the risks of inadvertently weaponizing this gap between what we ask of competent systems and what we mean when we make such requests.
Now, does all of this mean that the problem is around the corner? No. Given the state of AI research, it's clear that it isn't. But it does mean that we should plan now to develop controls and methods to align the incentives of humanity with those of intelligent machines. So all of this is to say that as we develop more and more intelligent machines, they may well pose an existential threat, and that existential threat deserves a concerted intellectual effort to meet it. So with that, I'm going to turn it over to my colleague Meg Young to just brutally refute all of my arguments. Thank you.
MEG YOUNG: Thank you for that, Salome, and thank you everyone for being here. As captain of Team No, a few remarks. Here, I will refute Salome's presentation of the Bostrom position by arguing that consciousness is not possible in AI, which are computational tools, and that instead the existential risk debate represents a fundraising frenzy that will give way to another AI winter. My colleague Maggie Jack will refute Lee's political economy argument by tracing how these concerns are technologically determinist and ignore the role of law and policy. And finally, my colleague Amy Zhang will refute Ido's existential autonomy argument by underscoring that this argument diminishes humankind's creativity and ability to wield new technologies as tools to our own ends.
Returning to what Salome has asserted, let's pause for a moment to take in the argument that AI poses an existential risk to humanity. An existential threat is one that will destroy life on earth, that will literally wipe it out. This argument warns that AI will become superintelligent, overpower its human creators to pursue its own unimaginable ends, and menace life as it takes over the planet. But what would need to be true for that to be possible? AI would need to be intelligent in a meaningful sense, meaning it can reason, form its own goals, and pursue those goals across contexts. In other words, it would need to be able to think for itself. The philosopher John Searle calls the idea that AI will ever be able to do that, quote, "an enormous philosophical confusion about the correct interpretation of AI technology." He points out that consciousness is essential to intelligence, and that without it, even an advanced system like IBM's Deep Blue is not playing chess in the same way a chess master like Garry Kasparov is. Instead, it is just performing a computation. We know that AI is not conscious and not at risk of becoming so, because consciousness is an enduring mystery in philosophy, neuroscience, and psychology. Basic science is not able to characterize consciousness and where it comes from in vivo, so why do we think that computer scientists will be able to recreate it in silico?
Proponents of superintelligent AI argue that the technology is already on course to emulate human intelligence. They say that because computational power is becoming cheaper and faster, eventually machine learning systems like neural nets will function akin to a human brain and exceed it in unpredictable ways. But this argument misapprehends what intelligence is. It is much more than the ability to solve problems. A driverless car, while much more computationally intensive, is no closer to being sentient than a calculator is; both are machines purpose-built to solve problems, and both are equally unlikely to plot to kill humankind. Instead, increases in computational power are merely bringing us a better and more convincing illusion that AI is superintelligent. To people decades ago, Siri and Alexa would have seemed akin to Rosie, the humanoid robot on The Jetsons. But knowing Siri and Alexa as we do today, we know they cannot be called "intelligent" in any meaningful sense at all. So yes, we are on course to get better simulations of intelligence, but they'll have none of the underlying capability necessary for consciousness itself, and as such, they pose no risk of overthrowing humanity.
So we might ask, why? Why does this myth of AI as an existential risk have traction? It has gained a foothold in our culture because it is a very profitable myth. It works the same way a lot of political lies do: making money and garnering attention by making people scared. People who propagate this idea stand to profit themselves from raising the alarm about the existential risk of AI, because investment in AI is an investment in their own technologies. Elon Musk, Stephen Hawking, and Bill Gates: these people already have access to money, power, and computational technology. Funding this problem redounds to their own economic and social benefit. Oxford's Future of Humanity Institute, Cambridge's Leverhulme Centre for the Future of Intelligence, and Boston's Future of Life Institute are all, in part, billionaire-funded. Large donors who invest in the threat of an existential risk of AI amplify the perception of its gravity and imminence and take political capital away from other pressing social problems.
But this isn't the first time this has happened; as a society we've been here before, in successive past waves of hype about AI spurring further investment. These gave way to "AI winters" when funding was right-sized in recognition of AI demonstrating more modest computational capabilities than were promised. Sometimes proponents of this theory argue that AI already poses a threat to humanity at its current capabilities. But I'll flag for you that that argument relies on an important dodge: setting aside the question of whether AI can be intelligent. In this way, they already concede the point that it is not and will not be. Remember, an existential threat means to literally wipe humanity out. That AI will not do that is true on its face. So under pressure, proponents tend to change what they mean by an existential threat to accommodate other types of threats that AI poses, or to expand the definition of what AI is. I encourage you all to be on the lookout for this when weighing the opposing arguments. I turn it over now to Lee McGuigan from the opposing team.
LEE McGUIGAN: Thank you very much. That's actually a perfect segue into what I want to talk about. Because the issue before us is not to debate some abstract artificial intelligence. As the history of computing tells us, the definition of AI changes, and it's always historically specific. As a generic term, it simply marks the perceived boundaries between human and technological capacities for symbol processing, cognition, consciousness, and other boundary concepts as recognized at a given time and place. That means we're debating the threats of actually existing AI, which means a loosely grouped set of tools for making and acting upon statistical predictions and inferences, entangled with social relations and embedded in a political economy. And for the most part, those tools we call AI today are primarily, not exclusively, but most fundamentally applied as a means of automating global capitalism and coercive state power, mainly through surveillance, profiling, and weapons of war. These are not the only applications of AI, but they're clearly the most prominent, the best financed, and the most impactful. I'm sure we're all familiar with the scenes from modern factories and product distribution warehouses, where AI systems and robotics impose inhuman labor discipline on low-wage workers who must conform to machinic embodiments of corporate management. That means that today, as we speak, the weaponization of AI by global capital has made everyday life nearly unlivable for millions of people worldwide. On a longer timescale, the outlook is even more dire.
But again, we don't need to speculate about potential uses of AI to witness this trajectory now. For example, companies like Amazon and Microsoft provide AI products and services to large companies in the natural resource extraction industries, such as oil and gas. They're using AI tools and techniques to accelerate fossil fuel production, and using the allure and mystique of AI as part of the public relations effort to make extractive industries seem more modernized and legitimate, in spite of everything we know about their environmental and human exploitation. In other words, in addition to facilitating extraction, AI also provides a legitimizing halo for this industry. And that's not even to mention the devastating emissions from the energy required simply to power artificial intelligence and machine learning systems, whatever their purpose. Some computer science researchers have found that the process of training and optimizing a single machine learning model for natural language processing can release as much carbon dioxide into the atmosphere as five automobiles would emit over their entire useful lives, based on US averages. So that is to say that AI is an environmental menace even when it's not being used in natural resource extraction and fossil fuel production, as it currently is. And I trust we're all familiar with the ways that technologies like facial recognition have been used for policing, repression, and other forms of political violence in the US and around the world. The brutal campaign against the Uighurs in China, which makes use of AI for widespread surveillance and profiling, is perhaps the best-known and most grim example.
What this all means is that we're talking about how science, math, and technology are being developed and leveraged to increase, both extensively and intensively, some forms of human organization and domination that are unequivocally on a trajectory to destroy human life and the habitability of this planet for many other organisms. Let's be clear, without a radical departure caused by organized intervention, the future promised by the continuation of high-tech global capitalism is no future at all. The future of global capitalism is the destruction of the world. Actually-existing AI is first and foremost an extension of that project of expanding and accelerating commodification, extraction, and control.
We don't need to speculate about a runaway Superintelligence in the image of Skynet to appreciate how AI, as it currently exists, embedded in and inseparable from the political economy of its applications, is a hazard to the existence of life on this planet. In the shorter term, through the magnification of labor discipline, state violence, and authoritarian rule, AI is also helping capital and the state subject humans (particularly racialized groups and ethnic minorities) to intolerable levels of misery—arguably to the extent that we would consider them existential threats, making life essentially unlivable. If we're serious about debating something real, debating AI as it actually exists in the world and not science fiction fantasies or personal impressions of what AI could be, as if it were a neutral and abstract resource floating in the ether of possibility, then I cannot fathom a defensible objection to the argument that AI, as an instrument of capital and empire, is on track to precipitate the annihilation of life on Earth. We, as people, may take actions to interrupt that trajectory. But that would mean that we have also interrupted or changed the meaning of AI as it actually exists. And so that would be another debate. With that, I hand the challenge over to Maggie Jack.
MAGGIE JACK: Thanks, Lee, Meg, and Salome. Building on Meg's first rebuttal, I want to suggest that the question itself, "Is AI an existential threat?", is misleading. The question, and team Yes's approach to the question, overemphasizes the importance and power of AI. We are not saying that AI is not powerful or detrimental. Nor are we saying that it isn't important to understand the technical properties or material power of AI. We are saying that the origins of its detriment come more from social, ideological, and essentially human causes than technological ones.
I'm going to make three points to illustrate my argument against the technologically determinist leanings of team Yes and to attempt to work towards a more balanced understanding of the co-production of technology and society.
Point one is that humans make AI racist, violent, and environmentally damaging. The political economy of platforms is harmful to many marginalized groups because of the economic model of racial capitalism and the ideologies of patriarchy and white supremacy in which we live. These are political problems: the problems that are built into the tech are secondary. The same can be said for why AI accounts for environmental harm. There is no political will to prioritize changing the electricity infrastructure that currently runs computation on dirty energy, the health and environmental effects of which disproportionately impact poor communities of color around the world.
We, team No, argue that the digital turn, and the same is true of this so-called AI turn, does not represent a revolution, a break or gap in history, but rather a continuation of long-standing structures of power and privilege. Here I want to quote Ruha Benjamin from her 2019 book "Captivating Technology":
“In rethinking the relationship between technology and society, a more expansive conceptual tool kit is necessary, one that bridges science and technology studies (STS) with critical race studies, two fields not often put in direct conversation. This hybrid approach illuminates not only how society is impacted by technological development, as techno-determinists would argue, but how social norms, policies, and institutional frameworks shape a context that make some technologies appear inevitable and others impossible. This process of mutual constitution wherein techno-science and society shape one another is called coproduction.”
Point two is that technology can be appropriated creatively by users. The technologically determinist stance that team Yes takes overemphasizes the brittleness of technology. For example, feminist scholars have shown how tech has been appropriated creatively by women in domestic settings.
Judy Wajcman gives the example of the microwave oven, which was developed for food preparation in US Navy submarines. When the developers of the microwave turned to the domestic market, they thought of single men using it for reheating food. It was sold as a 'brown good' next to TV equipment and other leisure goods. Women then appropriated this tool for their own domestic work in cooking, and its space in the department store moved to the 'white goods' primarily oriented towards housewives.
Similarly, many scholars of transnational HCI or STS (I can point to some of my colleagues at the DLI, like Anthony Poon and Nicola Dell) have studied and shown the ways that emerging technology users in the Global South use tech in ways that are creative, unexpected by their designers in the Global North, particularly Silicon Valley, and appropriate to their setting. There is a subtle James Scott-esque 'weapons of the weak' power in subversive resistance there. For instance, in 2016, I observed how mom and pop shops in Phnom Penh, Cambodia, appropriated Facebook to buy, sell, and deliver goods in ways that were appropriate to the city, but that also conformed to long-standing gender dynamics and transport infrastructures.
Why does it matter to debate whether it is technology, society, or users that primarily determine the social impacts of an emerging tool? Point three is that the technological determinist stance that AI is an existential threat obscures the fact that we have political power over it. Effective resistance to AI comes through political initiatives (i.e., voting people into power who will regulate and enforce regulation such as antitrust and privacy law), worker organizing (unions, cooperatives, etc.), and mutual aid within neighborhoods and communities.
To recap my three points: Point one is that humans make AI racist, violent, and environmentally damaging. Point two is that technology can be appropriated creatively by users. Point three is that the technological determinist stance that AI is an existential threat obscures the fact that we have political power over it. And now I'll hand it over to Ido, back to team Yes.
IDO SIVAN-SEVILLA: Thanks, Maggie. Hi, everyone. It's good to see everybody. Thank you for this great opportunity. So I will start my six minutes of fame by just relating to what Maggie said. I think context does matter, right? We're not debating whether AI poses an existential threat to life on Mars. We are here on earth. We know the power relations between governments, corporations, and citizens, and it's not a matter of whether AI is posing an existential threat right now as we speak, March 4th, 12:39 PM. None of us can really measure or capture that. Instead, I think we should debate whether AI can pose an existential threat to society given its development trajectory and the inability of regulators and policymakers to keep up with the speed of technological development and pose meaningful checks and balances to govern risks for society in such a data-driven capitalist world. And I'm going to argue that the answer is definitely yes. AI can in fact pose an existential threat, given the way it can be applied to undermine human autonomy, potentially manipulating populations in new and unprecedented ways. Now couple that with the already proven inability of public policy systems to stop previous technological threats, such as the commodification of personal information and the creation of information monopolies like Google and Facebook that have institutionalized themselves to become embedded in every corner of our online lives, and so on.
So, I'm not optimistic, and I'm going to build my argument around three main points:
Number one, for me, the fact that AI can potentially create new levels of attack on individual autonomy is an existential threat to society. AI can be used covertly to influence another person's decision-making, just by targeting and exploiting the decision-making vulnerabilities of the individual. Applying such hidden influence on individuals is nothing new, but AI potentially takes this at least one step further. Our digital traces are no longer just our purchasing behavior or the way we engage with websites. In our contemporary, all-inclusive digital society, digital traces are also our physical movements, locations, heartbeat, facial expressions, and so on. The comprehensive set of our traces is secretly compiled and tracked in searchable databases available to God knows who. In a world where governments, politicians, and corporations constantly race for our attention, the highly personalized choice environments offered by platforms, for instance, are very attractive mediums to push individuals toward certain content, nudging them to make decisions without their knowledge.
Dark patterns are just one example where this is being used. It is important to note that AI systems that are based on personal information are not a thing of the future. They have been here for a while, utilizing the personal information of users in order to deliver profit for their creators: the Facebook news feed, Amazon's voice assistant (Alexa), targeted advertising; these are all examples of artificially intelligent systems, since they are able to autonomously do the right thing at the right time for their creators. Over time, user behavior is studied at scale, allowing AI creators to predict the best measures to control and segment people according to their needs. A recent CHI paper showed a causal bidirectional link between emotions and mobile app use. Now, if a group of researchers was able to do so, rest assured that creators of AI algorithms will find marvelous ways to exploit our emotions. Autonomy, defined as the capacity to make meaningful independent decisions, perhaps the core of liberal democratic societies, is at great risk in the age of AI. And this is, for me, without a doubt, an existential threat to society.
Number two, the status quo of liberties in democratic societies never changes at once, but history shows the inability of policymakers to act as gatekeepers and stop interested stakeholders from utilizing technology against the public interest. For example, Dan Solove (2013), in his book about privacy and national security in the US, already showed us how the erosion of privacy by the US government is a gradual process: building up layer after layer, with gatekeepers failing time and again to do the right thing for society. Nowadays, the struggle to regulate facial recognition is a recent example. It is only now being regulated in Massachusetts, for the first time in US state history, after fundamentally undermining the rights of many. It takes time for policy systems to adapt to technological changes. We are used to making policy at human speed, not at machine speed, but machine speed is what we're expecting from our policymakers in the age of AI.
Number three, the settings, the business models, and the incentives are all there for actors to take that behavioral route by using AI to influence individuals. We're building a monster and cannot ensure it won't get out of control. For example, the creators of AlphaGo, Google DeepMind's Go-playing algorithm, claim they already do not understand how the algorithm actually operates. This is the blessing and the curse of AI. We should ensure that we're using technology and are not used by technology, by deciding what are the acceptable and non-acceptable uses of AI. But again, history shows that the chances of getting this right, fairly soon, are pretty low. The settings and the environment in which AI develops just increase the threat of AI to autonomy and therefore increase the likelihood that AI will pose an existential threat.
AMY ZHANG: All right, thank you, Ido. Those were some great points. So again, as my teammates have said previously, we by no means say that these are not valid concerns that should be cause for alarm and immediate action. But in fact, as you have already pointed out, these are things that are very much human in nature. To build on what my teammate Maggie already said, at its current trajectory, AI is still a tool. So its merits and damages are heavily, or may we say entirely, dependent on how we design it and how we use it. As humans who, as you said, should have autonomy, we should ultimately be the ones to bear the responsibility to face up to these issues and try to solve them.
First of all, AI as a tool is not smart. It needs you to tell it exactly (1) the specific goal to achieve, (2) a meticulously defined success metric, and (3) the information you prepare and feed in for it to use as a basis. Then to what extent can you say that AI does bad things? For example, if a company is allowed to set maximizing the time spent in an app as its ultimate goal, is it the algorithm's fault for achieving it? Or if we task a model to mimic our speech by showing it all of Twitter to use as examples, and it produces language that mirrors some of the toxicity, can we call the model racist? What it really comes down to is how these systems are designed, and whether it is done with the level of awareness and sensitivity required in really thinking through these human aspects.
Indeed, an important part of this relies on checks and balances put in by the government and other rule-setting agencies. And it is true that it can be frustrating to feel like they're not measuring up, which could then lead to a pessimistic outlook that they never will and this will never work. But there are two important points to consider here. Number one, there will be a lag in legislative action with any technological advance, almost by definition. A nascent technology first needs time to emerge, to grow, and to become widespread enough that it matters to the general public. Then, only once it gets past the point where people are just immersed in the excitement and the benefits, because it does provide real benefits, do problems become apparent and gain traction. That's when the agencies can go in and try to understand them, which in itself is not easy. Then a second point to consider is that this is creating a new context, and any change in context is very difficult for legislators to grapple with. Keep in mind that even the processes that seem very established today have gotten here only after being studied and refined over a long period of time. One should expect this case to be no different. So yes, it will take time, but that does not mean that we will never get there.
In addition, it is very important to bear in mind that society itself is adaptive. Humans are adaptive. This is not the first time that technological breakthroughs have become a disruptive force in human society. Yes, it's true that such disruptions bring out previously dormant conflicts, in a way that can feel like a crisis. But it is also a chance for us to re-examine and to make changes. We have done this in history, and we have seen through history that it's by facing up to these challenges that societies progress. We have seen this through many revolutions, and this will be another one. One is right to argue that it will all be on a much larger scale, but that exponential rate of change is part of the nature of progress, and part of what societies have the potential to adapt to. So if anything, the concern with AI is that it's still unable to change itself. This means that unless we do something about it, past mistakes could be frozen in place and propagated, and that could be the actual risk to our progress as societies. That's why, again, the burden of responsibility is on us humans.
Thus, while it's true that AI is surfacing many problems that should be faced with alarm and urgency, this should in fact be a prompt for us to exercise more human autonomy, not less. As individuals, it should be an opportunity for us to think more critically, for example about how we get information, or how we process external nudges from ads, and so on. And it should be an opportunity to be more reflective about how we're interacting with the world, with each other, and with ideas. As a society, it is even more important to treat this as the opportunity to really take a hard look at certain issues, which we may not have paid close attention to or have intentionally tried to ignore, as well as at some of the structures which we accepted as normal but which have now come under examination for what needs to be fixed.
So, if we're concerned about challenges to our identity as intellectual beings, then not exercising the core capabilities that come with that identity, namely the autonomy to think and to grow, would be the real existential threat to humanity. Now I will pass it on to Salome for an overview.
SALOME VILJOEN: Thanks, Amy. So, team No has raised several, I think, very compelling points. They note that we're a long way from computation becoming consciousness. They emphasize the ideological and political power we wield over the trajectory of our technological development. And they rightfully point out that one man's autonomy erosion is another woman's adaptation to new technological systems. That's definitely not the same thing as being wiped out.
Now, I don't think team Yes disagrees with these points. Where we do depart from team No is in thinking about how to characterize and meet the threat that AI poses. We believe it's minimizing the extent to which technologies can enact and exacerbate political goals to say things like: guns don't kill people, people do; AI doesn't pose existential threats, people do. Understanding how technologies materialize and coalesce a project, and taking those materializations seriously, is not the same thing as dismissing the human role in creating those political technologies.
We also believe that it's historically reductive to imagine that because the problem is a long way away, it doesn't matter. We are mindful that refusing to downplay AI's ability to enact and amplify human-led destruction isn't technologically determinist. It's taking seriously what the threat of bad governance of intelligent systems, as an existential threat, demands, both of us as scholars and researchers, and of all of us as members of our larger technologically mediated society. So with that as a sort of overview from team Yes, I'm going to pass it over to Meg for similarly quick remarks before we open all of this up for Q and A.
MEG YOUNG: Thank you so much, Salome. Everyone, let's just take a moment to reflect on this example that Salome gives us, that guns don't kill people, people kill people. I think that's an important jumping-off point for what team No has presented here, which is a more enduring point about the social and political contexts in which this technology is presented, contexts in which AI isn't an existential threat to humanity. Humanity is an existential threat to humanity.
As Maggie points out, humans are racist, violent, and environmentally damaging. And we have an opportunity, as Amy says, to take AI and to change it with political will, and not to do so would be an abdication of our own ability to respond and act autonomously with respect to technology. So I have a final question for you all. Who stands to profit from convincing you that AI is an existential threat to humanity? We know, as Ido said, that we need to make sure we can use technology instead of being used by technology. But don't buy it, don't buy this idea that technology can launder responsibility for a specific company's exploitation and pass it off to AI as an independent agent, rather than holding the creators of AI themselves responsible for the actions of their AI. If we're building AI that's anti-social and has a negative effect on society, we can pass laws and guardrails to rein in the people who are doing that, instead of inflating the perceived agency of AI and passing the buck onto an inert computational device.
So instead of being taxed or scaffolded by laws, the people who are propagating these myths have managed to raise funds off the idea that AI is an existential threat. And I'd like us all to follow Amy's inspiration in saying that this is a chance to intervene in this moment and to circumscribe not the technology, but the actors responsible for creating it. Thank you. I'll turn it over now to John.
JOHN ETCHEMENDY: All I can say is, wow, that discussion was outstanding and wide ranging.
I actually have a question for each team. For Salome, I'd like to focus on the idea of a rogue AI, because it seems to me that's the key idea, as opposed to the idea of people misusing AI in one way or another, which of course is going to happen. That's possible. But whenever I hear people talk about a rogue AI, a superintelligent AI, I am concerned that there's something that they're just not considering, and that is the complexity of motivational structures. One of the things is that actions require more than beliefs and rationality; you have to be motivated to act. And I think we fail to appreciate the incredible complexity of how we work and how we're motivated. As Hume first observed, humans act not because of their beliefs alone, but because of beliefs combined with motivations: their desires, goals, instincts, urges, passions, as Hume put it. And that's the result of a long evolutionary history. And that's something I believe no AIs will ever have. And we humans, the creators of AI, have no motivation to give a comparably complex set of motivations to these artificial agents that we create. And I take comfort in that.
Okay, so Meg, I want to ask a question. A couple of times you raised the issue of, look, you have to ask who's making money off of this threat? It's people who want to scare you, and who collect money either for movies and things like that, or for their research labs and so forth. I worry about that argument. I worry terribly about that argument, because can't we use that same argument to criticize the view that climate change poses an existential threat? I mean, there are lots of research institutes that have gotten their funding from wealthy people who are afraid of climate change. But that doesn't mean that we necessarily don't have an existential threat. So those are the final questions I'd like to throw out there.
SALOME VILJOEN: Thanks, John. And thank you everyone for a great debate. I will caveat that I'm by no means an expert in the existential threat posed by AI; this is not my regular beat as a scholar, but I'll do my best to meet the question. I think what people hold as the canonical view of the threat that general AI poses would respond to your question by saying that it's precisely this complexity of motivational structure, precisely the disconnect between what Meg beautifully calls the in vivo conditions of consciousness that motivate us to act and the in silico set of programmed computational goals, that keeps people up at night worrying about the gaps between the programmed goals and, maybe, the stated goals. So I guess one rearticulation of your question is that this gap, and our lack of knowledge about how that gap gets filled in vivo, is precisely the problem. That's the threat.
MEG YOUNG: Thank you, John. I think that's such an important point that you make about climate change labs. And I just quickly want to close by saying that I think there's a conflict of interest in the AI field in a way that there isn't in climate change labs, wherein the people who are closest to AI research stand to profit from inflating the idea that AI is an imminent social threat to us, and are thereby able to take on more resources for AI. When you have industrialists and people who themselves own AI products closely involved with the research, that is not structurally similar to climate change research.
John Etchemendy
Denning Co-Director,
Stanford Institute for
Human-Centered Artificial Intelligence
Salome Viljoen
Joint Postdoctoral Fellow,
Digital Life Initiative, Cornell Tech
& New York University
Meg Young
Postdoctoral Fellow,
Digital Life Initiative, Cornell Tech
Maggie Jack
Visiting Postdoctoral Fellow,
Digital Life Initiative, Cornell Tech
Lee McGuigan
Assistant Professor,
University of North Carolina, Chapel Hill
(DLI Alum)
Ido Sivan-Sevilla
Assistant Professor,
University of Maryland
(DLI Alum)
Amy Zhang
Doctoral Fellow,
Digital Life Initiative, Cornell Tech
Cornell Tech | 2021