AI’s Controllable Nature Dispels Existential Threat, Study Finds

A recent study conducted by researchers at the University of Bath and the Technical University of Darmstadt offers reassurance that large language models (LLMs) like ChatGPT are not an existential threat to humanity. The research, presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), concludes that these models are inherently controllable and predictable due to their inability to learn independently or acquire new skills without explicit instruction.

LLMs’ Strengths and Limitations

The study emphasizes that while LLMs exhibit remarkable language proficiency and can diligently follow instructions, they lack the capacity to autonomously master new skills. This finding directly challenges the prevalent notion that LLMs could evolve to possess complex reasoning skills and thereby pose a threat to humanity. Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, underscored that concerns about LLMs developing hazardous abilities like reasoning and planning are unwarranted.

The research team, led by Professor Iryna Gurevych, ran experiments testing LLMs’ ability to complete tasks they had not encountered before, so-called emergent abilities. The findings showed that these capabilities are largely attributable to in-context learning (ICL), in which models perform a task by following examples supplied in the prompt. This ability, combined with instruction following and linguistic proficiency, accounts for both the models’ strengths and their limitations.
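
To make ICL concrete, here is a minimal sketch in Python of how a few-shot prompt is typically assembled: the model’s weights are never updated; it is simply shown worked examples inside the prompt and asked to continue the pattern. The sentiment-labeling task, function name, and examples are illustrative assumptions, not taken from the study.

```python
# Minimal sketch of in-context learning (ICL): no training occurs; the
# "learning" is just pattern completion over examples placed in the prompt.

def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble demonstrations plus a new query into a single prompt string."""
    lines = ["Label each movie review as Positive or Negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Label:")  # the model is expected to complete this line
    return "\n".join(lines)

demos = [
    ("An absolute triumph from start to finish.", "Positive"),
    ("Two hours of my life I will never get back.", "Negative"),
]

print(build_few_shot_prompt(demos, "The plot dragged, but the acting was superb."))
```

Because nothing about the model changes between prompts, what looks like a newly acquired skill is, on the study’s account, the model following the demonstrated pattern rather than independently learning anything new.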

Addressing Real Risks, Not Perceived Threats

Despite the absence of an existential threat, the study acknowledges the potential for LLMs to be misused, such as in the creation of fake news or the facilitation of fraud. Dr. Tayyar Madabushi cautioned against enacting regulations based on perceived existential threats, instead advocating for a focus on addressing the tangible risks associated with AI misuse.

Professor Gurevych added that while AI does not pose a threat in terms of emergent complex thinking, it remains crucial to control the learning process of LLMs and direct future research towards other potential risks. The study recommends that users furnish LLMs with explicit instructions and examples for complex tasks, as relying on them for advanced reasoning without guidance is likely to lead to errors.
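
The practical upshot of that recommendation can be sketched as follows. The hypothetical Python snippet below contrasts a bare query, which the study suggests is likely to produce errors, with one that supplies an explicit rule and a worked example; the shipping scenario and all prompt wording are illustrative assumptions, not drawn from the paper.

```python
# Sketch of the study's practical advice: for complex tasks, give the model
# explicit instructions plus a worked example rather than a bare question.

# Bare query: per the study, relying on unguided "reasoning" invites errors.
unguided = "Which order ships first?"

# Guided query: an explicit rule and one worked example, then the real case.
guided = """You are checking shipping priority. Rule: orders marked EXPRESS
ship before STANDARD; ties break by earlier order date.

Example:
  Order A: STANDARD, placed 2024-03-01
  Order B: EXPRESS, placed 2024-03-05
  Answer: Order B (EXPRESS outranks STANDARD).

Now apply the same rule:
  Order C: EXPRESS, placed 2024-04-02
  Order D: EXPRESS, placed 2024-03-30
  Answer:"""
# Expected completion: Order D (both are EXPRESS, so the earlier date wins).

print(guided)
```

The guided version leaves the model nothing to infer beyond applying the stated rule to the new case, which is exactly the kind of instruction-plus-example usage the researchers found LLMs handle reliably.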

The research provides a clearer understanding of LLMs’ capabilities and limitations, encouraging the continued development and deployment of these technologies without undue fear of existential risks.

About the author

Lakshmi Narayanan

Lakshmi holds a BA in Mass Communication from Delhi University and has over eight years of experience covering the societal impact of technology. Her articles, which examine the broader implications of tech advancements for society and culture, have been featured in major academic and popular media outlets.
