An AI is training counselors to deal with teens in crisis

The chatbot uses GPT-2 for its baseline conversational abilities. That model is trained on 45 million pages from the web, which teaches it the basic structure and grammar of the English language. The Trevor Project then trained it further on all the transcripts of previous Riley role-play conversations, which gave the bot the materials it needed to mimic the persona.
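
This two-step recipe, general pretraining on web text followed by fine-tuning on domain transcripts, is a standard way to build a persona chatbot. The sketch below shows roughly what that fine-tuning step could look like using the open-source Hugging Face libraries; The Trevor Project has not published its code, so the library choice, the file name riley_transcripts.txt, and every hyperparameter here are illustrative assumptions, not its actual pipeline.

```python
# A minimal sketch of fine-tuning pretrained GPT-2 on role-play transcripts,
# assuming the Hugging Face "transformers" and "datasets" libraries.
# All file names and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

# Step 1: start from pretrained GPT-2, which already carries the general
# English structure and grammar it learned from its web training corpus.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Step 2: continue training on role-play transcripts (one conversation per
# line in this sketch) so the model picks up the persona's voice and storyline.
transcripts = load_dataset("text", data_files="riley_transcripts.txt")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = transcripts.map(tokenize, batched=True, remove_columns=["text"])

# Standard causal language modeling: the model learns to predict each next
# token in the transcripts, which is what makes its replies mimic them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="riley-gpt2",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    save_strategy="epoch",
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()

model.save_pretrained("riley-gpt2")
tokenizer.save_pretrained("riley-gpt2")
```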

Throughout the development process, the team was surprised by how well the chatbot performed. There is no database storing details of Riley’s bio, yet the chatbot stays consistent because every transcript reflects the same storyline.

But there are also trade-offs to using AI, especially in sensitive contexts with vulnerable communities. GPT-2, and other natural-language algorithms like it, are known to embed deeply racist, sexist, and homophobic ideas. More than one chatbot has been led disastrously astray this way, the most recent being a South Korean chatbot called Lee Luda that had the persona of a 20-year-old university student. After quickly gaining popularity and interacting with more and more users, it began using slurs to describe the queer and disabled communities.

The Trevor Project is aware of this and designed ways to limit the potential for trouble. While Lee Luda was meant to converse with users about anything, Riley is very narrowly focused. Volunteers won’t deviate too far from the conversations it has been trained on, which minimizes the chances of unpredictable behavior.

This also makes it easier to comprehensively test the chatbot, which the Trevor Project says it is doing. “These use cases that are highly specialized and well-defined, and designed inclusively, don’t pose a very high risk,” says Nenad Tomasev, a researcher at DeepMind.

Human to human

This isn’t the first time the mental health field has tried to tap into AI’s potential to provide inclusive, ethical assistance without hurting the people it’s designed to help. Researchers have developed promising ways of detecting depression from a combination of visual and auditory signals. Therapy “bots,” while not equivalent to a human professional, are being pitched as alternatives for those who can’t access a therapist or are uncomfortable confiding in a person.

Each of these developments, and others like them, requires thinking about how much agency AI tools should have when it comes to treating vulnerable people. And the consensus seems to be that at this point the technology isn’t really suited to replacing human help.

Still, Joiner, the psychology professor, says this could change over time. While replacing human counselors with AI copies is currently a bad idea, “that doesn’t mean that it’s a constraint that’s permanent,” he says. People “have artificial friendships and relationships” with AI services already. As long as people aren’t being tricked into thinking they are having a discussion with a human when they are talking to an AI, he says, it could be a possibility down the line.

In the meantime, Riley will never face the youths who actually text in to the Trevor Project: it will only ever serve as a training tool for volunteers. “The human-to-human connection between our counselors and the people who reach out to us is essential to everything that we do,” says Kendra Gaunt, the group’s data and AI product lead. “I think that makes us really unique, and something that I don’t think any of us want to replace or change.”
