How Artificial Intelligence Helps Fight Sexual Harassment
Many incidents of sexual harassment go unreported. Can artificial intelligence tools help us fight this heinous crime?
Sexual harassment is deeply traumatic. An online survey launched in January 2018 by the nonprofit Stop Street Harassment found that 81% of women and 43% of men had experienced some form of sexual harassment during their lifetime.
Such incidents, especially in professional settings, are also detrimental to the economy: harassment not only impacts employees’ physical and psychological health, but also leads to higher employee turnover and increased absenteeism.
Artificial intelligence may provide new ways of stamping out these harrowing incidents of gender-based violence. Several AI-driven technologies are emerging to address sexual harassment, including workplace harassment, child sexual abuse and more.
According to a study by the Everyday Sexism Project and the Trades Union Congress (TUC), 52% of women have experienced unwanted sexual behavior at work.
Aware leverages artificial intelligence and machine learning in human resources so that organizations can identify, investigate and handle offensive communications at an early stage, without requiring the victim to report the case to a superior. It uses predictive AI to identify compromising behavior among employees within an organization.
In 2017, Botler AI, a startup based in Montreal, launched a conversational AI that provides free information and guidance to those who have been sexually harassed and are unsure of their legal rights. Using deep learning, the AI system collects information about the assault, determines whether the situation described by the user qualifies as sexual harassment, and notes which laws may have been violated under the criminal code. Then it generates an incident report that the user can hand over to the relevant authorities.
Callisto detects repeat sexual offenders using an artificial intelligence process known as matching. Once a survivor enters the offender’s details into the system, the information is held in escrow until another survivor reports a matching description of the offender and the incident. With the help of this technology, a serial perpetrator can be identified regardless of university affiliation. Callisto also provides legal assistance to survivors.
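The matching-in-escrow idea can be sketched in a few lines of Python. This is an illustrative toy, not Callisto's actual implementation: the class name, the two-report threshold, and the use of a hashed identifier are all assumptions made for the example.

```python
import hashlib
from collections import defaultdict

class MatchingEscrow:
    """Hold reports in escrow; surface a match only when two distinct
    survivors name the same offender. Illustrative sketch only."""

    def __init__(self):
        self._escrow = defaultdict(list)  # offender key -> survivor IDs

    @staticmethod
    def _offender_key(identifier: str) -> str:
        # Normalize and hash the offender identifier (e.g. an email or
        # profile URL) so the raw detail is not stored in plain text.
        normalized = identifier.strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def submit(self, survivor_id: str, offender_identifier: str) -> list:
        """File a report. Returns the set of matched survivors once a
        second, distinct survivor names the same offender; else []."""
        key = self._offender_key(offender_identifier)
        self._escrow[key].append(survivor_id)
        distinct = set(self._escrow[key])
        if len(distinct) >= 2:
            return sorted(distinct)  # match found: escalate to counsel
        return []
```

A first report returns an empty list and simply waits; a second report naming the same offender (even with different capitalization or whitespace) releases both records together.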
Meanwhile, a team of researchers from the University of Wollongong is working on artificial intelligence software that will examine real-time feeds from security cameras and alert an operator when it detects suspicious activity or an unsafe environment.
Often, complaints of sexual assault are either received by the wrong person or improperly documented. A 2017 survey for BBC Radio 5 live found that 63% of women who said they had been sexually harassed didn’t report it to anyone, while 79% of men who’d been sexually harassed kept it to themselves.
Spot is an AI chatbot that employs natural language processing to guide victims through recounting an incident. After the user signs into Spot’s chatbot, the messaging platform sets clear expectations, helping them understand the process, asking one question at a time and allowing answers to be edited. It then asks users to recall the details of the incident through questions that, unlike those of an in-person interview, cannot be influenced by human biases. It achieves this by using the cognitive interview, a technique known to psychologists and police for improving the quality and quantity of details that people can remember. Spot is used by organizations in the United States, United Kingdom, Japan and India, among others, to mitigate the incidence of sexual assault.
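The interview flow described above — one question at a time, answers that can be revised, and a final incident record — can be sketched as follows. The question wording and class name are invented for illustration; Spot's actual cognitive-interview prompts are proprietary.

```python
from datetime import datetime, timezone

# Invented placeholder prompts, not Spot's real question set.
QUESTIONS = [
    "What happened?",
    "When did it happen?",
    "Where did it happen?",
]

class IncidentInterview:
    """Toy interview loop: asks one question at a time, lets answers
    be edited, and assembles a timestamped incident report."""

    def __init__(self):
        self.answers = {}

    def ask_next(self):
        # Return the first unanswered question, or None when done.
        for q in QUESTIONS:
            if q not in self.answers:
                return q
        return None

    def answer(self, question, text):
        # Re-answering the same question edits the earlier answer.
        self.answers[question] = text

    def report(self):
        stamp = datetime.now(timezone.utc).isoformat()
        lines = [f"Incident report ({stamp})"]
        lines += [f"- {q} {a}" for q, a in self.answers.items()]
        return "\n".join(lines)
```

Because the bot only ever surfaces the next fixed prompt, every user is asked the same neutral questions in the same order — the property the article attributes to removing interviewer bias.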
We also have #MeToo bots, developed by programmers in a bid to tackle sexual harassment and bullying in the workplace. Named after the #MeToo campaign, which arose after allegations against Hollywood producer Harvey Weinstein, these bots use an AI algorithm trained to identify potential bullying, including sexual harassment, in company documents, emails and chats. The data is analyzed for various indicators of how likely it is to represent a problem, and anything the bot reads as potentially problematic is then sent to a lawyer or HR manager to investigate.
In dating apps, things can also take an ugly turn. According to a 2016 Consumers’ Research survey of dating app users, more than half of women reported experiencing harassment, compared to 20% of men. Hence, in January 2020, Tinder started leveraging machine learning to automatically screen for potentially offensive messages. The objective is to flag such messages and encourage more users to report inappropriate behavior. If a message gets flagged, Tinder asks the recipient, “Does this bother you?” If the recipient agrees, Tinder directs them to its report form.
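The flag-and-prompt flow can be sketched like this. Tinder's real classifier is a trained, proprietary model; the keyword scorer, term list, and threshold below are stand-in assumptions used only to make the control flow concrete.

```python
# Invented placeholder vocabulary; a real system would use a trained
# model rather than a keyword list.
OFFENSIVE_TERMS = {"hate", "ugly", "threat"}

def offense_score(message: str) -> float:
    """Fraction of words that look offensive (toy stand-in scorer)."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in OFFENSIVE_TERMS)
    return hits / len(words)

def screen_message(message: str, threshold: float = 0.2) -> str:
    """Return the recipient prompt for flagged messages, else ''."""
    if offense_score(message) >= threshold:
        return "Does this bother you?"   # flagged: ask the recipient
    return ""                            # not flagged: deliver silently

def handle_response(bothered: bool) -> str:
    # If the recipient confirms, route them to the report form.
    return "report_form" if bothered else "dismissed"
```

The key design point the article describes is that the model only flags and asks; the recipient's answer, not the classifier, decides whether a report is filed.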
In 2017, Instagram started taking down harassing comments using an AI tool called DeepText, programmed by a multilingual team to identify and filter toxic words and emojis in English, Arabic, French, German and Portuguese. More recently, it also began using machine learning to detect attacks on users’ appearance or character, for example in split-screen photographs, as well as threats against individuals that appear in photographs and captions.
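Per-language filtering of toxic words and emojis can be illustrated with a small sketch. DeepText itself is a deep-learning text classifier; the blocklists below are invented placeholders, not Instagram's actual vocabulary, and the function names are assumptions for this example.

```python
# Invented per-language blocklists, standing in for a trained model.
TOXIC_TERMS = {
    "en": {"loser", "disgusting"},
    "fr": {"nul"},
}
TOXIC_EMOJI = {"\U0001F92E"}  # U+1F92E "face vomiting", as an example

def is_toxic(comment: str, language: str = "en") -> bool:
    """True if the comment contains a blocked word or emoji."""
    terms = TOXIC_TERMS.get(language, set())
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & terms) or any(ch in TOXIC_EMOJI for ch in comment)

def filter_comments(comments, language="en"):
    """Drop comments flagged as toxic before they are displayed."""
    return [c for c in comments if not is_toxic(c, language)]
```

Keying the vocabulary by language mirrors the article's point that the filter was built by a multilingual team, one word list per supported language.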
Safer, an AI-powered tool, aims to stem the flow of abusive content, find victims, and identify perpetrators. It uses machine learning to detect new and previously unreported child sexual abuse material (CSAM).
Finland-based Someturva (roughly “social media security” in English) offers a digital legal service for anyone facing difficult situations on social media. The AI-assisted service interviews the user to find out more about the case. Each case is then studied by a lawyer from the Someturva team, who forms a juridical analysis based on the collected information and provides the user with a set of instructions on how to move forward.
Gfycat, a short-video platform with millions of daily users, leverages AI to combat another form of online harassment: “revenge porn” (non-consensual pornography) in the form of deepfakes. Its AI software recognizes faces and backgrounds and checks them against footage that exists elsewhere on the Internet; if it detects a mismatch, the video is removed.
The scope for AI to assist with sexual harassment investigations is vast. By analyzing large datasets, it can help mitigate these crimes in the future: studying the behavior of potential attackers, identifying unsafe areas in a neighborhood, and much more.