Twitter Seems to Be Suspending Accounts for Old Tweets That Suggest Physical Harm

Highlights
  • Tweets dating back to 2010 have apparently triggered suspensions
  • In most cases, deleting the tweet resolves the issue
  • There has been no official public update from Twitter

Social networking giant Twitter, much like counterparts Facebook and WhatsApp, is going through a rough patch, with incidents of fake news and controversies around user verification on the rise on the microblogging platform, even as it fights a long-term battle against abuse. Gadgets 360 has learnt that Twitter might be in the process of tightening its rules and policies on abuse - specifically threats of violence and physical harm. According to recent reports, the use of phrases such as "kill me" and "kill you" has prompted Twitter to send suspension warnings to certain users, even when the phrases are used sarcastically rather than literally.

An Indian public affairs consultant, who goes by the username @berges, claims that Twitter prompted him to delete a tweet dating back to 2010. The tweet read, "100 bucks - Rent. ALL calls 20 Paise. All SMS 30 Paise. 250 calls and sms free. VMC 199. Now kill me. Or Vodafone. Or both. I'll start PCO." While it does contain the phrase "kill me", the context is clearly not literal. Notably, the tweet is from January 2010, while the prompt to delete it was sent to him in April 2018.

He is not alone. Other users have also reportedly received emails from Twitter about similar tweets that are well over a year old. For instance, digital marketing professional Harish Iyengaar tweeted on Wednesday about a Twitter notification prompting him to delete something he had posted back in August 2016. Once again, his nearly two-year-old tweet used the phrase "kill me" in a sarcastic tone, but Twitter locked his account. Separately, in a reply, Twitter user @aashnaaaugh claimed that her account was temporarily suspended because of a 2015 tweet in which she jokingly replied to another user, "so bad. Is there a way I could kill you?" Several more users have reported similar issues.

In an attempt to verify the issue, we tweeted the phrase "kill me" in a similar context but had yet to receive a notification or email from Twitter at the time of filing this report. However, Twitter user @manikgupta1 recently tried the same and received the same template response from Twitter, informing him of a temporary suspension of his account and prompting him to delete the tweet in question.

In some cases, deleting the tweet is not enough. Another Twitter user, with the handle @thatobesewoman, said that the Twitter app asked her to verify her contact details - including her personal mobile number - in order to regain access to her account.

While there is no update from Twitter on whether the social media giant is strengthening its fight against self-harm or harm to others, this is not a new practice. Back in November 2017, Facebook said it was testing the use of artificial intelligence to scan text in Facebook posts and comments for phrases that could signify suicide risk. For that matter, even Twitter's official rule book states that it will reach out to people who appear to have suicidal tendencies and direct them to helpful resources.


Twitter's current Rules and Policies page

We reached out to Twitter for comment on the situation, and in response were informed that the tweets were removed for violating Twitter's rules, as they lacked context and appeared to encourage self-harm.

A Twitter spokesperson said, "Self harm is an extremely complex and sensitive issue with serious implications that we take seriously. We allow content from users where they express their struggles or seek help. However, we take down reported content that promotes or encourages suicide or self harm. Context is very important when it comes to Tweets using language which could pertain to self harm. For example, a user could post 'Craving Mexican food and my favourite restaurant is closed...I want to kill myself' without any intention of self harming. As such, we do not remove these Tweets in an automated way and rely on bystander reporting as human review plays an important role."


