Instagram has announced a range of new tools it says will help protect users from abuse on the platform.
At the centre of the update is a new feature called Limits, which will give people the ability to automatically hide comments and direct message requests from users who do not already follow them, or who have only recently followed them.
The firm said it had been designed to stop waves of abuse from accounts that “pile on in the moment”.
The Facebook-owned service has also strengthened the in-app warnings shown to those who attempt to post abuse, telling users they face having their account removed if they continue to send abusive comments. It is also rolling out its Hidden Words filter tool to all users globally, allowing people to filter out words, phrases and emojis they don’t want to see.
Instagram said the aim of the tools was to give people more control while ensuring they feel safe when using the site.
The update comes amid ongoing scrutiny of social media and how it handles abuse following the racist attacks on England footballers Marcus Rashford, Jadon Sancho and Bukayo Saka after the Euro 2020 final.
The Limits feature will be rolled out to all Instagram users globally starting on Wednesday and will let people choose how long to hide comments and message requests from non-followers and from those who only started following them in the last week.
Instagram’s public policy manager for Europe, Tom Gault, told the PA news agency that the Limits tool was being introduced to address situations like the aftermath of the Euro 2020 final, when public figures see a sudden spike in targeted comments and message requests in the wake of an event.
“Our own research, as well as feedback from public figures, shows that a lot of the negativity directed at high-profile people comes from those who don’t follow them or who recently followed them,” he said.
“And this is the kind of behaviour that we saw after the Euros final.”
But he added that the company had also found some public figures did not want to cut off comments and messages entirely because they often received many messages of support.
“Often the incident that caused the spike in comments is one which also leads to huge volumes of supportive messages from longer-term followers. People still want to hear from that community as well,” he told PA.
“So, that is why we genuinely think this feature will be so effective: it means you can hear from your returning followers, while limiting contact with people who might be coming to your account to target you.”
The tool could also be expanded in the future to automatically prompt a user to turn on Limits when the platform detects they may be experiencing a spike in comments and direct messages.
Writing in a blog post announcing the new features, Instagram boss Adam Mosseri said: “We don’t allow hate speech or bullying on Instagram, and we remove it whenever we find it.
“We also want to protect people from having to experience this abuse in the first place, which is why we’re constantly listening to feedback from experts and our community, and developing new features to give people more control over their experience on Instagram, and help protect them from abuse.
“We hope these new features will better protect people from seeing abusive content, whether it’s racist, sexist, homophobic or any other type of abuse.
“We know there’s more to do, including improving our systems to find and remove abusive content more quickly, and holding those who post it accountable.
“We also know that, while we’re committed to doing everything we can to fight hate on our platform, these problems are bigger than us.
“We will continue to invest in organisations focused on racial justice and equity, and look forward to further partnership with industry, governments and NGOs to educate and help root out hate. This work remains unfinished, and we’ll continue to share updates on our progress.”