Despite its immense popularity, Twitter has faced controversy over the millions of inflammatory posts it hosts. However, the social media platform may have found a way to curb the problem. The company is reportedly working on a feature that would analyse the words used in a tweet and prompt the user to reconsider their choice of words if it finds anything offensive.
The company has also said that the experiment is currently in testing on the iOS version of the app, hinting that the feature could roll out to iPhone users soon.
Limited availability
However, Twitter has described it as a limited experiment, which suggests the company might release the feature with a future update or roll it back if things go wrong.
"When things get heated, you may say things you don't mean. To let you rethink a reply, we're running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it's published if it uses language that could be harmful," reads the statement from the official Twitter support handle.
No parameters
The announcement, however, is somewhat vague, since Twitter hasn't explained the parameters it will use to flag offensive language. Twitter's previously published rules mention violent threats, extremism, terrorism, child sexual exploitation, abuse, content promoting suicide or self-harm, and hateful conduct.
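Twitter has not disclosed how the detection actually works. Purely as an illustration, the simplest version of such a prompt could be a word-list check run on a draft reply before it is published. The word list, function names, and logic below are hypothetical assumptions for the sketch, not Twitter's actual implementation.

    # Hypothetical sketch of a pre-publish language check.
    # Twitter has not disclosed its method; the word list and
    # names here are illustrative assumptions only.

    OFFENSIVE_WORDS = {"idiot", "stupid", "moron"}  # placeholder list

    def needs_revision_prompt(reply_text: str) -> bool:
        """Return True if the draft reply contains a flagged word."""
        words = {w.strip(".,!?\"'").lower() for w in reply_text.split()}
        return not words.isdisjoint(OFFENSIVE_WORDS)

    if __name__ == "__main__":
        draft = "You are such an idiot!"
        if needs_revision_prompt(draft):
            print("This reply may contain harmful language. Revise before sending?")
        else:
            print("Reply published.")

In practice, a production system would more likely rely on a trained text classifier than a fixed word list, since a word list misses context and misspellings; the sketch only shows the shape of the check-then-prompt flow described in Twitter's statement.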
As a popular social media platform, Twitter sees millions of potentially hateful messages every day. Despite being aware of the heated exchanges that play out on the service, Twitter does not screen such messages unless they violate its rules and terms of service. It does, however, regularly offer users suggestions on how to step back and avoid confrontations.
Twitter is also working on redesigning the arrangement of the like, retweet, and reply icons to make the app more engaging. That change is likewise being tested on the iOS and web versions of Twitter.