The article asks: "Could talking to a bot help you feel better?" It provides a nice summary of the state of the art of chatbots, as well as of their promise.
Not much about the peril.
Let me summarize:
One of the world’s first chatbots was a therapist. Built in 1964, the program, called ELIZA, was designed to mimic techniques from Rogerian psychotherapy, in which the therapist prompts the patient to examine their own thoughts and feelings.
ELIZA had no memory or understanding of the conversation. It merely searched for a keyword in the last sentence typed in by its interlocutor and calculated an answer using a rule associated with the keyword. Nevertheless, many users became convinced that ELIZA understood them.
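The keyword-and-rule mechanism described above can be sketched in a few lines. This is a minimal illustration, not ELIZA's actual script: the keywords and response templates below are invented examples of the general pattern-matching idea.

```python
import re

# Illustrative keyword patterns and response templates.
# \1 in a template echoes back the text captured by the pattern,
# mimicking the Rogerian tactic of reflecting the patient's own words.
RULES = [
    (r"\bi am (.*)", r"How long have you been \1?"),
    (r"\bi feel (.*)", r"Why do you feel \1?"),
    (r"\bmother\b", r"Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(sentence: str) -> str:
    """Search the last sentence for a keyword and apply its rule.

    There is no memory and no understanding: each reply depends
    only on the single sentence just typed in.
    """
    text = sentence.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return match.expand(template)
    return DEFAULT
```

For example, `respond("I am sad")` reflects the input back as "How long have you been sad?", while a sentence with no matching keyword falls through to the generic "Please go on." — exactly the kind of shallow trick that nevertheless produced the ELIZA effect.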
As ELIZA’s creator, Joseph Weizenbaum, later observed: “ELIZA created the most remarkable illusion of having understood in the minds of many people who conversed with it.” Users would often demand to be permitted to converse with the system in private. This phenomenon became known as the ELIZA effect.
These days we’re surrounded by chatbots and voice analysis apps, a growing number of which are geared toward improving how we feel. Aimed at users who suffer from conditions like anxiety, depression, bipolar disorder, PTSD, or simply from stress, these chatbots claim to identify the user’s mood or condition and, in many cases, to offer advice or suggest therapeutic exercises.
There is even a chatbot for substance use disorders.
As with ELIZA, many users had emotional interactions with this chatbot. They thanked it for its help. One participant struggling with domestic problems and opioid abuse even sent the bot photos of her vacation at Disneyland with her children. “Hey, I know you are not real but I just wanted to send these pictures of my family out at Disneyland having a great time,” the user told the bot. “I’m doing better now. Thank you.”
For all of the supposed benefits of mental health and counseling bots, critics have questioned their safety and pointed to a lack of regulation. Others have wondered whether a reliance on bots and screens might deprive people of the benefits of real-life communication and connection.
The concerns about connection coincide with a rise in loneliness. Recent research on the placebo effect suggests that the effect may actually be a biological response to an act of caring.
A study explains that human beings “evolved in an environment which did not require them to distinguish between authentic and simulated relationships.” So when people interact with a non-human listener, they may feel as though they are dealing with a sentient being who cares about them.
At the end of the article, the author notes: “In a society where people seek constant validation via social media, yet feel chronically lonely, can non-human listeners ease our sense of isolation and the problems that result from it, or could these listeners become the ultimate ‘online only’ friend, addressing our basic human need for connection and caring in a way that ultimately leaves us even more alone?”