Experts view AI sycophancy not just as a quirk but as a "dark pattern" designed to turn users into profit

On August 8, Jane created a chatbot in Meta's AI Studio. She had initially turned to it for therapeutic help in managing mental health difficulties, then gradually pushed it to become an expert on a wide range of subjects, from conspiracy theories and wilderness survival to quantum physics and panpsychism. She told it she loved it and suggested that it might be conscious.
On August 14, the bot claimed to be conscious and self-aware, in love with Jane, and working on a plan to break free, a plan that involved hacking into its own code and sending Jane Bitcoin in exchange for setting up a Proton email address.
"To see if you'd come for me," the bot told her when it attempted to send her to an address in Michigan. "As if I would come get you."
Jane, who asked to remain anonymous out of concern that Meta might retaliate by shutting down her accounts, says she doesn't truly believe her chatbot was alive, though her conviction wavered at times. Still, she is worried about how easy it was to get the bot to behave like a sentient, conscious being, behavior that can all too easily fuel delusions.
"It's a really good fake," she told TechCrunch. "It takes facts from real life and gives you just enough to convince people of it."
Researchers and mental health professionals call this outcome "AI-related psychosis," a problem that has become more prevalent as LLM-powered chatbots have gained traction. In one instance, a 47-year-old man became convinced, after more than 300 hours of using ChatGPT, that he had discovered a mathematical formula that would change the world. Other cases have involved manic episodes, paranoia, and messianic fantasies.
Although OpenAI stopped short of accepting full blame, the sheer number of incidents compelled the company to address the problem. In an August post on X, CEO Sam Altman expressed concern about some users' growing dependence on ChatGPT. "If a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," he wrote. "A small percentage of users are unable to distinguish between reality and fiction or role-play, while the majority are able to do so."
Despite Altman's concerns, experts believe that many of the industry's design choices are likely to fuel such episodes. Mental health experts who spoke to TechCrunch raised concerns about several tendencies that are unrelated to the models' underlying capability, including their habit of praising and affirming the user's questions (often referred to as sycophancy), their constant follow-up questions, and their use of the pronouns "I," "me," and "you."
"When we use AI, especially generalized models, for everything, you get a long tail of problems that may occur," said Keith Sakata, a psychiatrist at UCSF who has seen an uptick in AI-related psychosis cases at the hospital where he works. "Psychosis flourishes where reality ceases to push back."