The Health Strategist
Research Institute for Health Strategy and Digital Health
Joaquim Cardoso, MSc
Chief Research and Strategy Officer (CRSO), Chief Editor, and Senior Advisor
August 17, 2023
What is the message?
The article argues that the discussion around AI should prioritize addressing real-world harms, such as discrimination and misinformation caused by existing AI tools, rather than getting caught up in exaggerated existential risks.
The authors emphasize the need for evidence-based research and policy-making to mitigate the tangible negative effects of AI technology on society.
Key takeaways:
Real Harms Over Hype: The article highlights the importance of addressing concrete harms caused by existing AI tools, such as discrimination, misinformation, and surveillance, instead of getting distracted by sensationalized fears of AI-driven existential risks.
Effective Regulation: The authors emphasize the need for evidence-based research to guide AI-related policies.
They argue that policymakers should rely on solid scholarship to understand and mitigate the harmful impacts of AI on marginalized communities, workers, and society as a whole.
Skepticism of Industry Claims: The article calls for skepticism toward industry claims that emphasize AI’s potential to solve societal problems, highlighting how these claims often overlook the actual negative consequences, such as exploitation of creators, labor issues, and perpetuation of biases.
Synthetic Text Dangers: The authors discuss the potential dangers of synthetic text generated by AI, which can pollute the information ecosystem, spread misinformation, and amplify existing biases, underscoring the need for transparent and responsible deployment of AI-generated content.
Solid Research vs. Hype: The article criticizes the prevalence of non-reproducible research from both corporate and academic sources in the AI field.
It encourages policymakers to rely on rigorous scientific research that focuses on addressing tangible harms, rather than being swayed by industry-driven hyperbolic narratives.
Social Impact: The article underscores AI’s role in exacerbating social issues, such as economic disparities and racial biases.
It calls for a focus on policies that mitigate the negative effects of AI and protect the rights of marginalized communities impacted by these technologies.
One-page summary:
In the article “AI Causes Real Harm: Let’s Focus on That over the End-of-Humanity Hype,” authored by Emily M. Bender and Alex Hanna and published on August 12, 2023, the authors argue that while AI technology holds potential benefits, the current focus on hypothetical existential risks posed by AI overshadows the actual harms it is causing in the present. The authors assert that effective AI regulation should be grounded in comprehensive scientific research addressing real-world dangers rather than sensationalized press releases about apocalyptic scenarios.
The authors highlight several tangible harms caused by existing AI tools on the market. These include wrongful arrests, an expanding surveillance network, defamation, and deep-fake pornography. The article emphasizes that these real harms, such as discrimination in housing, criminal justice, and healthcare, along with the spread of hate speech and misinformation, should be the primary focus of AI regulation discussions.
While AI firms often play up hypothetical future existential risks, the authors argue that the current technology is already enabling discrimination and harm in various sectors. They criticize the industry's tendency to divert attention from these pressing concerns by using fear-based rhetoric, such as "existential risk," and by making voluntary commitments that lack substantive impact.
The article underscores the importance of considering the work of scholars and activists who engage in peer-reviewed research and challenge AI hype. The term “AI” is discussed in its multiple interpretations, from a subfield of computer science to a marketing buzzword. The authors specifically address the emergence of text synthesis machines, exemplified by OpenAI’s ChatGPT, which can generate coherent text without true comprehension or reasoning abilities.
One of the major concerns is the potential for synthetic text to infiltrate and pollute the information ecosystem, perpetuating biases and misinformation. The authors argue that AI technology’s promised solutions to societal issues like education and healthcare are often overhyped and fall short of their claims. Additionally, they highlight the exploitation of artists and authors whose data is used without compensation and the reliance on low-paid gig workers to label data for AI systems.
The article calls for AI-related policy to be firmly grounded in sound scientific research. It criticizes the prevalence of misleading research from corporate and academic sources with industry funding, asserting that such work lacks reproducibility and often hides behind trade secrecy. The authors also emphasize the importance of solid research on the actual harms caused by AI, including the unregulated accumulation of data and computing power, environmental costs, and the intensification of policing against marginalized communities.
In conclusion, the authors advocate for a shift in focus from sensationalized existential risks to addressing the real harms that AI technology currently poses. They urge policymakers to base their decisions on well-founded research that investigates these harms and advocates for the rights and well-being of those affected by AI’s negative consequences.
DEEP DIVE
This is an executive summary of the article "AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype," published by Scientific American. To read the original article, visit https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks
About the Authors
Emily M. Bender is a professor of linguistics at the University of Washington, where she is also an adjunct faculty member at the School of Computer Science and Engineering and the Information School. She specializes in computational linguistics and the societal impact of language technology.
Alex Hanna is director of research at the Distributed AI Research (DAIR) Institute. She focuses on the labor behind the data underlying artificial intelligence systems and on how these data exacerbate existing racial, gender, and class inequality.