In another initiative to curb cyberbullying, Facebook-owned Instagram has rolled out a new feature that warns users if their caption is potentially offensive. In such cases, the user is given an option to revise the caption before sharing the post on the social media platform.
The caption warning feature uses an Artificial Intelligence (AI) algorithm that recognizes different forms of bullying on the platform and flags a notification reading: "This caption looks similar to others that have been reported." The user is then shown options to 'Edit' or 'Learn More', or they can share the caption without making any changes.
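Instagram has not disclosed how its detection model works, but the flow it describes, scoring a draft caption against previously reported ones and surfacing a warning with edit, learn-more, and share-anyway choices, can be pictured with a small illustrative sketch. Everything below, including the reported-caption list, the similarity measure, and the threshold, is a hypothetical stand-in, not Instagram's actual system.

```python
# Illustrative sketch only: a toy caption check modeled on the flow Instagram
# describes. The reported captions, similarity measure, and threshold are
# hypothetical stand-ins, not Instagram's real detection system.
from difflib import SequenceMatcher

# Hypothetical examples of captions previously reported for bullying.
REPORTED_CAPTIONS = [
    "you are so dumb and ugly",
    "nobody likes you, just quit",
]

SIMILARITY_THRESHOLD = 0.6  # assumed cutoff for showing the warning


def caption_warning(draft: str):
    """Return a warning payload if the draft resembles reported captions, else None."""
    best = max(
        SequenceMatcher(None, draft.lower(), reported).ratio()
        for reported in REPORTED_CAPTIONS
    )
    if best < SIMILARITY_THRESHOLD:
        return None  # no warning; the caption is shared as usual
    return {
        "message": "This caption looks similar to others that have been reported.",
        "options": ["Edit", "Learn More", "Share anyway"],
    }


if __name__ == "__main__":
    print(caption_warning("you're so dumb and ugly lol"))  # likely triggers the warning
    print(caption_warning("great day at the beach"))       # likely returns None
```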
"In addition to limiting the reach of bullying, this warning helps educate people on what we don't allow on Instagram, and when an account may be at risk of breaking our rules," Instagram writes in its blog.
Adam Mosseri, Head of Instagram, said, "It's our responsibility to create a safe environment on Instagram." For now, Instagram plans to roll out the anti-bullying feature in selected countries, with global availability expected in the next few months.
A series of actions against cyberbullying
Considering the popularity of the photo-sharing app among youngsters, Instagram has lately been working on a set of features to make the platform safer for its users. In October, Instagram launched its 'Restrict' feature, which allows users to restrict people who bully them through abusive comments or offensive posts. Users can restrict such accounts by swiping left on a comment, through the Privacy tab in Settings, or directly on the profile of the account they intend to restrict.
Instagram also restricted people under 18 years of age from viewing posts from celebrity influencers that promote cosmetic surgery and weight-loss products.