Instagram Launches New Parental Alert Feature for Self-Harm and Suicide-Related Searches
Instagram will alert parents if their teenagers repeatedly attempt to search for terms related to suicide or self-harm within a short timeframe, according to a company announcement on Thursday, 26 February 2026. The feature will roll out over the coming weeks and will be available to parents enrolled in Instagram's "parental supervision" programme.
Meta, the social media platform’s parent company, stated that the new alert is designed to ensure parents are aware if their teenager repeatedly attempts to access such content, enabling them to provide necessary support. Searches that can trigger alerts include phrases promoting suicide or self-harm, phrases indicating a teenager may be at risk of self-injury, and terms such as “suicide” or “self-harm”.
Parents will receive alerts via email, SMS, or WhatsApp, depending on the contact information they have provided, along with in-app notifications. The notifications will include resources designed to help parents approach conversations with their teenagers, according to Meta as reported by TechCrunch on Friday, 27 February 2026.
In a blog post, the company stated that it analysed search behaviour on Instagram and consulted experts from its Suicide and Self-Harm Advisory Group. Instagram has set a threshold requiring multiple searches within a short period before an alert is sent, an approach it describes as deliberately cautious.
The company acknowledged that in some cases parents may receive notifications even when there is no genuine cause for concern. However, Instagram views the approach as an appropriate starting point and will continue to monitor feedback to ensure the policy works effectively.
The alert feature will begin rolling out next week in the United States, United Kingdom, Australia, and Canada, before expanding to other regions by the end of this year. Instagram also plans to introduce similar notifications when teenagers attempt to discuss suicide or self-harm with the artificial intelligence (AI) feature within the application.
This step comes amid various lawsuits against Meta and several other major technology companies accused of harming teenage mental health. During a hearing at the US District Court for the Northern District of California this week, Instagram head Adam Mosseri faced sharp questioning from plaintiffs' attorneys in an ongoing social media addiction case. He was questioned about delays in launching basic safety features, including nudity filters for private messages sent to teenagers.
Meanwhile, in a separate hearing at Los Angeles County Superior Court, it emerged that Meta's internal studies found that parental supervision and controls have limited impact on compulsive social media use by children. The studies also noted that children experiencing stressful life events are more likely to struggle to regulate their social media use appropriately.