AI tools are analyzing thousands of bullying cases across Japan – part of a push to usher in a new machine learning-powered anti-bullying effort.
It was a heartbreaking tragedy that shook Japan – a 13-year-old boy took his own life after being bullied. Earlier this year, two former classmates of the boy were ordered to pay just over $300,000 in damages to the teen’s family. The Otsu District Court in Japan delivered the ruling, recognizing that the bullying caused the victim’s death.
The city has turned to AI tools to prevent similar tragedies, according to an article by the Japan Times.
“Through an AI theoretical analysis of past data, we will be able to properly respond to cases without just relying on teachers’ past experiences,” said Otsu Mayor Naomi Koshi.
AI tools are expected to analyze 9,000 bullying cases, examining the grade and gender of both victims and perpetrators, with the goal of identifying the forms of bullying most likely to escalate. More than 410,000 cases of bullying were reported in Japan in the 2017 financial year, and these incidents are exacting a tragic toll: of the 250 students who took their own lives that year, ten had been bullied at school.
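As a rough illustration of the kind of analysis described above – grouping past cases by attributes such as grade and gender to find patterns that tend to escalate – here is a minimal Python sketch. The records, field names, and threshold are all hypothetical stand-ins, not Otsu's actual data or system.

```python
from collections import defaultdict

# Hypothetical toy records standing in for past bullying cases.
# Fields: victim grade, victim gender, and whether the case escalated.
cases = [
    {"grade": 7, "gender": "M", "escalated": True},
    {"grade": 7, "gender": "M", "escalated": True},
    {"grade": 7, "gender": "F", "escalated": False},
    {"grade": 8, "gender": "M", "escalated": False},
    {"grade": 8, "gender": "F", "escalated": True},
    {"grade": 8, "gender": "F", "escalated": False},
]

def escalation_rates(cases):
    """Group past cases by (grade, gender) and return the share that escalated."""
    totals = defaultdict(int)
    escalated = defaultdict(int)
    for c in cases:
        key = (c["grade"], c["gender"])
        totals[key] += 1
        if c["escalated"]:
            escalated[key] += 1
    return {k: escalated[k] / totals[k] for k in totals}

rates = escalation_rates(cases)

# Flag groups whose historical escalation rate exceeds a chosen threshold,
# so teachers could pay closer attention to new cases matching those profiles.
high_risk = {k for k, r in rates.items() if r >= 0.5}
print(sorted(high_risk))
```

A production system would use far richer features and a trained model rather than simple group rates, but the underlying idea – letting historical data, not only teacher experience, signal which cases warrant early intervention – is the same.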
In the United States, the number of children who were hospitalized after attempting suicide doubled between 2007 and 2017, according to a study co-authored by Dr Brett Burstein, a pediatric emergency room physician.
Burstein suggested a similar trend was appearing in Canada. Fardous Hosseiny of the Canadian Mental Health Association attributed the increase to a range of factors, including bullying.
While technology can curb these trends, it’s also part of the problem.
The rise of cyberbullying – a term coined by Canadian educator Bill Belsey – means that bullying follows children home with cruel taunts online.
In the US, 59 percent of teens report being bullied online. On Instagram, 42 percent of teenage users have experienced bullying.
To stamp out harassment, Instagram uses machine learning to scan photos to check for bullying. It’s not the first time the photo-sharing app has enlisted AI tools to address abuse. In 2017, Instagram introduced an offensive comments filter to detect and hide abusive comments.
Bullying doesn’t end at school. There are frequent revelations of harassment at major tech firms in Silicon Valley, and almost half of the women working in Europe’s tech sector experienced discrimination, according to the 2018 State of European Tech report.
But AI tools are giving victims avenues for reporting harassment. Employers have already relied on channels like SMS to support vulnerable staff – for example, the Alert-a-buddy service pushes automated texts to employees before they travel or when they’re working alone remotely. Now machine learning is helping victims of workplace harassment wrest back control.
Spot is an AI-powered chatbot that lets employees record cases of harassment and send time-stamped messages to HR anonymously. Another tool, the Botler AI bot, is trained on 300,000 US and Canadian court case documents; it helps victims determine whether they have a strong legal case and produces an incident report for HR or law enforcement. The first version achieved 89 percent accuracy, according to a BBC article.
Machine learning is one of the most promising forms of artificial intelligence, and it’s playing an increasingly key role in human interactions. On top of better tackling bullying and harassment, artificial intelligence can give businesses an edge, too. Organizations are implementing AI solutions to improve support and enhance the customer experience. To learn more about how your business can have AI-powered conversations, read our recent article on how chatbots can meet your clients’ needs.