Virtual personal assistants powered by artificial intelligence are becoming ubiquitous across technology platforms, with every major tech firm adding AI to its services and dozens of specialized offerings pouring onto the market. While immensely useful, researchers at Google warn that humans could become too emotionally attached to these assistants, leading to a host of negative social consequences.
A new research paper from Google’s DeepMind AI research laboratory highlights the potential of advanced, personalized AI assistants to transform various aspects of society, saying they “could radically alter the nature of work, education, and creative pursuits as well as how we communicate, coordinate, and negotiate with one another, ultimately influencing who we want to be and to become.”
This outsize impact, of course, could be a double-edged sword if AI development continues to speed forward without thoughtful planning.
One key risk? The formation of inappropriately close bonds—which could be exacerbated if the assistant is presented with a human-like representation or face. “These artificial agents may even profess their supposed platonic or romantic affection for the user, laying the foundation for users to form long-standing emotional attachments to AI,” the paper says.
Left unchecked, such an attachment could lead to a loss of personal autonomy and an erosion of social ties, as the AI comes to replace human interaction.
This risk is not purely theoretical. Back in 2023, when chatbots were still in a relatively primitive state, one proved persuasive enough that a user took their own life after a long chat. Eight years ago, an AI-powered email assistant named “Amy Ingram” was realistic enough to prompt some users to send love notes and even attempt to visit her at work.
Iason Gabriel, a research scientist in DeepMind’s ethics research team and co-author of the paper, did not respond to Decrypt’s request for comment.
In a tweet, however, Gabriel warned that “increasingly personal and human-like forms of assistant introduce new questions around anthropomorphism, privacy, trust and appropriate relationships with AI.”
Because “millions of AI assistants could be deployed at a societal level where they’ll interact with one another and with non-users,” Gabriel said he believes in the need for more safeguards and a more holistic approach to this new social phenomenon.
Third, millions of AI assistants could be deployed at a societal level where they’ll interact with one another and with non-users.

Coordination to avoid collective action problems is needed. So too, is equitable access and inclusive design.

— Iason Gabriel (@IasonGabriel) April 19, 2024
The research paper also discusses the importance of value alignment and safety, and the risk of misuse, in the development of AI assistants. Even though AI assistants could help users improve their well-being, enhance their creativity, and optimize their time, the authors warned of additional risks: misalignment with user and societal interests, the imposition of values on others, use for malicious purposes, and vulnerability to adversarial attacks.
To address these risks, the DeepMind team recommends developing comprehensive assessments of AI assistants and accelerating the development of socially beneficial designs.
“We currently stand at the beginning of this era of technological and societal change,” the authors write. “We therefore have a window of opportunity to act now—as developers, researchers, policymakers, and public stakeholders—to shape the kind of AI assistants that we want to see in the world.”
AI misalignment can be mitigated through reinforcement learning from human feedback (RLHF), a training technique that uses human preference judgments to steer a model’s behavior (a minimal sketch of the core idea appears below). Experts like Paul Christiano, who ran the language model alignment team at OpenAI and now leads the non-profit Alignment Research Center, warn that improper management of AI training methods could end in catastrophe.
“I think maybe there’s something like a 10-20% chance of AI takeover, [with] many [or] most humans dead,” Christiano said on the Bankless podcast last year. “I take it quite seriously.”
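For readers curious what RLHF looks like in practice, here is a minimal, illustrative sketch of its reward-modeling step in Python (PyTorch). Everything here (the `TinyRewardModel` class, the dimensions, and the random stand-in data) is a hypothetical toy rather than DeepMind’s or OpenAI’s actual pipeline; production RLHF trains a reward model on human preference pairs and then fine-tunes the language model against that reward, typically with an algorithm like PPO.

```python
# Illustrative sketch of the reward-modeling step in RLHF.
# All names, dimensions, and data are hypothetical toy values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Scores a (prompt, response) embedding; higher = preferred by humans."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one scalar reward per example

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push the human-preferred response
    # to score higher than the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyRewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Random stand-ins for embedded (chosen, rejected) response pairs.
    chosen, rejected = torch.randn(256, 32), torch.randn(256, 32)
    for step in range(100):
        loss = preference_loss(model(chosen), model(rejected))
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final preference loss: {loss.item():.4f}")
```

The learned reward model then stands in for human raters during fine-tuning, which is also why experts like Christiano stress that errors in this stage can propagate into the final system.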
Edited by Ryan Ozawa.