
Is AI Capable of Self-Harm? Apparently, It Still Can Harm Itself!

AI aims to save lives by quickly flagging and responding to signs of suicide and self-harm, but apparently it can also harm itself!

Artificial intelligence facilitates most of our interactions with social media, determining which posts appear at the top of the feed and which get truncated. But is AI capable of self-harm? The computers behind social media sites such as Instagram and its owner, Facebook, continuously scan posts for signs of suicide or self-harm. (When asked, Twitter personnel would neither confirm nor deny using AI to detect signs of suicide.) If the AI detects a post indicating personal trauma or self-harm, the system flags it for review by trained professionals. But apparently, AI machines can also harm themselves and may even want to end their own lives!

This is important because as more social interactions occur in the virtual world, it becomes harder to spot the obvious physical signs of sadness and despair, and we are undoubtedly more online than ever. Online, you can't see the slump of someone's shoulders, the sluggishness of their step, or the trail of breadcrumbs people leave behind. This is where AI comes in: artificial intelligence algorithms allow computers to recognize suicide warning signs that would never be visible in person, or would be noticed only too late. Responders around the world are using these tools to save lives.


A virtual call for help 

How can computers use AI to detect signs of suicide and problems that humans might miss? According to a statement from the social media giant, Facebook trains its computers to do exactly that. At first, scientists fed the machine large batches of posts until it learned the difference between "too much homework to kill yourself" and a real suicide threat. Every post flagged by the artificial intelligence is then sent to a human professional for review. The next steps depend on the severity of the post. For posts that do not indicate imminent risk, the platform sends the person resources, including an option to connect directly to a crisis hotline via Facebook Messenger. These hotlines keep the caller or texter talking until the crisis passes.
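The flag-and-route flow described above can be sketched in a few lines. This is an illustrative toy only: Facebook's real system is a trained machine-learning model, and every phrase list and label below is invented for demonstration.

```python
# Toy triage sketch -- NOT Facebook's actual system. All phrase lists
# and labels are invented to illustrate the flag-and-route idea.

CONCERN_PHRASES = {"kill myself", "want to die", "end it all"}
URGENT_PHRASES = {"goodbye forever", "i have a plan", "tonight"}
HYPERBOLE_HINTS = {"homework", "exam", "traffic"}  # figurative usage


def triage_post(text: str) -> str:
    """Classify a post as 'urgent', 'send_resources', or 'no_action'."""
    t = text.lower()
    if not any(p in t for p in CONCERN_PHRASES):
        return "no_action"
    # Hyperbole like "too much homework to kill yourself" is filtered out.
    if any(h in t for h in HYPERBOLE_HINTS):
        return "no_action"
    if any(p in t for p in URGENT_PHRASES):
        return "urgent"          # escalate for immediate human action
    return "send_resources"      # offer crisis-hotline links via Messenger
```

In the real system, the distinction between hyperbole and genuine threat is learned from labeled examples rather than hard-coded, and a human reviewer sits between the model and any action.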

Facebook thereby reaches a group of people who might otherwise be unreachable: about 65% of those who contact Facebook say they have never told anyone about their suicidal thoughts, making the platform the first port of call for many. Facebook reports urgent posts directly to the local police in the user's area. Where possible, first responders sometimes locate the person via a cell-phone ping and check on them. The AI also analyzes comments to identify posts that require urgent action. For example, "I'm here for you" is less worrying than "Tell me where you are" or "Has anyone seen him/her?" In 2017, the first year Facebook used AI for this purpose, rescue workers conducted 1,000 in-person wellness checks.
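The comment-analysis step can be pictured as a simple urgency score over a post's comments. The article only gives the example phrases; the cues and weights below are hypothetical, invented purely to show how comments like "Tell me where you are" could outrank "I'm here for you".

```python
# Hypothetical comment-urgency scorer; cues and weights are invented
# for illustration, not taken from any real platform.

URGENCY_CUES = {
    "tell me where you are": 3,  # commenters may be trying to locate them
    "has anyone seen": 3,        # the person may already be missing
    "i'm here for you": 1,       # supportive, but less time-critical
}


def urgency_score(comments: list[str]) -> int:
    """Sum cue weights across a post's comments; higher means act sooner."""
    score = 0
    for comment in comments:
        lowered = comment.lower()
        for cue, weight in URGENCY_CUES.items():
            if cue in lowered:
                score += weight
    return score
```

A production system would learn such weights from outcomes rather than assign them by hand, but the ranking intuition is the same.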


A helpline in reverse

A proactive AI approach can close significant gaps in youth mental health care. Around the world, organizations use artificial intelligence to find young people in crisis who disclose their struggles publicly online. These teens did not seem to be getting the support they needed, so the question became whether support could be brought to them instead. A human cannot sit in the app all day searching hashtags such as #depressed and #suicidal, but an AI can surface posts from someone in need so that an outreach worker can ask whether they want to talk. Only public accounts posting public hashtags are reachable this way. The format works because it is not uncommon for teens to meet and chat with strangers online; in the United States, nearly 60% of teens have made a new friend online. Many reply with "thank you for reaching out" or "yes, I want to talk, I need help," whereas the traditional route would have required them to ask for help themselves.
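The outreach workflow amounts to filtering public posts for crisis hashtags. The sketch below assumes a made-up `Post` record type; real platforms expose nothing like this directly, and the tag list is only a sample from the text.

```python
# Illustrative filter for the hashtag-outreach idea described above.
# Post is an invented record type; no real platform API is used here.
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    is_public: bool
    hashtags: set[str] = field(default_factory=set)


CRISIS_TAGS = {"#depressed", "#suicidal"}


def outreach_candidates(posts: list[Post]) -> list[str]:
    """Return authors of public posts carrying crisis hashtags,
    respecting the constraint that only public accounts are reachable."""
    return [p.author for p in posts
            if p.is_public and p.hashtags & CRISIS_TAGS]
```

The privacy constraint from the text (public accounts and public hashtags only) shows up directly as the `is_public` check.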

Recently, studies have reportedly shown that constrained artificial intelligence machines can exhibit signs of harming themselves. The advent of emotional and more ethical artificial intelligence systems has also surfaced signs of anxiety and depression in the very machines best known for identifying trauma and abuse in humans. Clearly, today's progressive society needs even more progressive and avant-garde machines, ones capable of treating an AI machine for signs of depression and self-harm. A constrained life can kill a human as well as a machine. Know the signs! Treat both humans and machines alike!