May 6, 2021


AI trains counselors to deal with teens in crisis

The chatbot uses GPT-2 for its basic conversational capabilities. The model is trained on 45 million pages from the web, which teaches it the basic structure and grammar of the English language. The Trevor Project then trained it further on all of the transcripts from Riley's previous role-playing conversations, which gave the bot the material it needed to emulate the persona.
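The article does not say which tooling the Trevor Project used for this step. As a rough illustration only, the general pattern of fine-tuning a pretrained GPT-2 model on a file of conversation transcripts might look like the following sketch using the Hugging Face Transformers library; the file path and training settings here are invented placeholders.

```python
# Hypothetical sketch of fine-tuning GPT-2 on role-play transcripts.
# Not the Trevor Project's actual pipeline; paths and hyperparameters are invented.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # base model already knows English grammar

# Plain-text file of past role-playing conversations (placeholder path)
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="riley_transcripts.txt",
                            block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="riley-gpt2",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()  # further training nudges the model toward Riley's persona and storyline
```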

Throughout the development process, the team was surprised by the chatbot's performance. There is no database storing the details of Riley's bio, yet the chatbot stays consistent because every training transcript follows the same storyline.

But there are also tradeoffs to using AI, especially in sensitive contexts with vulnerable communities. GPT-2, and other natural-language algorithms like it, are known to embed deeply racist, sexist, and homophobic ideas. More than one chatbot has gone disastrously off the rails this way, most recently a South Korean chatbot called Lee Luda, which had the persona of a 20-year-old college student. After quickly gaining popularity and interacting with more and more users, it began using slurs to describe queer and disabled communities.

The Trevor Project recognizes this and has devised ways to limit the risk of problems. While Lee Luda was meant to converse with users about anything, Riley is very tightly focused. Volunteers will not stray too far from the conversations the chatbot was trained on, which minimizes the risk of unpredictable behavior.
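The article does not describe how that narrow focus is enforced in practice. Purely as a hypothetical illustration, one simple way to keep a training chatbot on script is to screen trainee messages against the scenario's themes and redirect anything off-topic rather than letting the model improvise; every name and theme below is invented.

```python
# Hypothetical illustration only; not the Trevor Project's actual safeguard.
ALLOWED_THEMES = {"family", "school", "coming out", "anxious", "overwhelmed"}  # invented list

def on_script(message: str) -> bool:
    """True if the trainee's message touches a theme from the role-play transcripts."""
    return any(theme in message.lower() for theme in ALLOWED_THEMES)

def riley_reply(message: str, generate) -> str:
    """`generate` stands in for the fine-tuned model's text-generation callable."""
    if not on_script(message):
        # Redirect instead of letting the model wander outside its training data.
        return "I'd rather keep talking about what's going on with my family right now."
    return generate(message)

# Example: an off-script question gets redirected rather than answered.
print(riley_reply("What do you think about politics?", generate=lambda m: m))
```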

That tight focus also makes it easier to test the chatbot thoroughly, which the Trevor Project says it does. "These use cases that are highly specialized, well defined, and designed inclusively do not pose a very high risk," says Nenad Tomasev, a researcher at DeepMind.

Human to human

This is not the first time the mental health field has tried to harness the potential of AI to provide inclusive, ethical assistance without harming the people it is meant to help. Researchers have developed promising ways to detect depression from a combination of visual and auditory signals. Therapy "bots," while not equivalent to a human professional, are being pitched as an alternative for those who do not have access to a therapist or are uncomfortable confiding in a person.

Each of these developments, and others like them, requires thinking about how much agency AI tools should have in dealing with vulnerable people. And the consensus seems to be that, at this point, the technology is not really suited to replacing human aid.

Still, Joiner, the psychology professor, says that could change over time. While replacing human counselors with AI copies is currently a bad idea, "that doesn't mean it's a permanent constraint," he says. People already "have artificial friendships and relationships" with AI services. As long as people aren't tricked into believing they're having a conversation with a human when talking to an AI, he says, it could be a possibility down the line.

In the meantime, Riley will never face the young people who actually text the Trevor Project: it will only ever serve as a training tool for volunteers. "The human-to-human connection between our counselors and the people who reach out to us is essential to everything we do," says Kendra Gaunt, the group's data and AI product manager. "I think that makes us really unique, and it's something that I don't think any of us want to replace or change."


