by Lauryn Bray
On June 6, Humanities Washington and Bickersons Brewhouse hosted “AI Anxiety: How Should We Think About Artificial Intelligence?” a panel discussion about artificial intelligence (AI) and what its development might mean for the future of humanity. The panelists included University of Washington professors Andrea Woody, divisional dean of the social sciences and professor of philosophy; Chirag Shah, professor in the Information School and founding co-director of the Center for Responsibility in AI Systems & Experiences (RAISE); Ryan Calo, professor in the School of Law; and Aylin Caliskan, assistant professor in the Information School. Their discussion focused primarily on the facts, misconceptions, and fears surrounding the implementation of AI and its potentially detrimental consequences for humanity.
Last month, the Biden-Harris Administration announced new efforts to advance responsible AI research, development, and deployment. The announcements included several documents: an update to the White House Office of Science and Technology Policy’s (OSTP) 2019 National Artificial Intelligence Research and Development Strategic Plan; a Request for Information soliciting public comments to be considered when reassessing U.S. national priorities and planning future actions on AI; and a new report from the U.S. Department of Education’s Office of Educational Technology with recommendations for addressing the use of AI in schools.
AI has appeared consistently across pop culture media since the inception of the science-fiction genre. Early science-fiction stories, like Mary Shelley’s Frankenstein, played with the idea of AI in the form of a man-made, anthropomorphic organism; as the genre evolved alongside technology, however, fears around AI shifted to a digital context.
For some people, stories like Harlan Ellison’s “I Have No Mouth, and I Must Scream” and films like I, Robot and M3GAN may act as omens of an impending reality in which technology suddenly aims to kill us all, while other stories and films, like Disney’s WALL-E, depict AI technology as gentle, good-natured, and loving, possibly more so than humans.
Just last week, the Center for AI Safety released a statement that says, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement, signed by AI experts, developers, journalists, and policymakers, eerily suggests that the fear of technology becoming conscious and killing us all might not be irrational.
However, according to the panelists, this fear is misplaced; we should instead focus on the consequences AI is already having.
Caliskan said discrimination is already a primary issue with AI systems. “I focus on bias in artificial intelligence, and humans are biased,” said Caliskan. “In 2016, we discovered that artificial intelligence systems that are feeding on large-scale language data, in particular, will automatically learn implicit biases in a way that is similar to humans; and, accordingly, machines become biased. This is quite problematic, because especially with generative AI or the tests they are performing, they end up discriminating in consequential decision-making tasks that are related [to], for example, job candidates, resume screening, predictive policing, language generation, machine translation, and so on.”
Caliskan continued, “All the biases that exist in society get transferred to machines, and they don’t get transferred directly — they get transferred in ways that are skewed because they are trained on data that is collected from the internet, which doesn’t have a proper representation of society.”
Panelists said another concern that should take priority is the lack of legislation surrounding AI and the resulting lack of accountability.
“When a self-driving car hits a pedestrian, who takes responsibility? The driver? Well, they weren’t driving. That’s the whole point of self-driving cars,” Calo said. “So is that the manufacturer of the car, the software developer, the local authority that authorizes the car to be on the road? We don’t have that framework.”
Although bias, accountability, and misinformation may not be as exciting to talk about as doomsday scenarios, Calo argued that conversations about mass extinction as a consequence of the misuse of AI technology are a distraction from the real harm being done by AI today.
“My thing with this extinction idea: Yes, it could happen if we’re not careful with this, but the harms of AI are already happening. Now. What are we doing about that?” said Calo. “It seems like just a distraction, in a way. They’re taking away from the real discussion that needs to happen about what AI is currently doing, not just what it might do 10 years from now or 100 years from now.”
Lauryn Bray is a writer and reporter for the South Seattle Emerald. She has a degree in English with a concentration in creative writing from CUNY Hunter College. She is from Sacramento, California, and has been living in King County since June 2022.
📸 Featured Image: Photo via LookerStudio/Shutterstock.com