Instagram to Launch Parental Reminder Feature to Protect Adolescents

Instagram, the Meta-owned platform, announced on Thursday (February 26) that it is introducing a new feature that will alert parents when teenagers repeatedly search for terms such as “suicide” or “self-harm.”

According to a company blog post, parents will receive an alert if, within a short period, teenagers repeatedly search for phrases that promote suicide or self-harm, hint at self-harm, or contain words such as “suicide” or “self-harm.”

Meta issued a similar announcement, noting that the feature is still being tested and undergoing rigorous evaluation. The company added that guardians will also be notified if teenagers attempt to engage its AI in conversations related to suicide or self-harm.

In a statement, Instagram said the alerts are designed to make parents aware when their children repeatedly search for such content, and to provide resources to support the children.

Parents will receive notifications through email, text message, WhatsApp, or Instagram. To activate the alert feature, both the parent and the teenager must be enrolled in Instagram’s parental control tools.

The company said that parents who receive an alert will see a message explaining their child’s concerning Instagram search patterns, along with the option to access additional help resources.

The alert feature is expected to roll out next week in the United States, United Kingdom, Australia, and Canada.

Meta acknowledged that some alerts may not reflect genuine concerns, but said it will continue gathering feedback on the feature to help calibrate the threshold for issuing alerts.

The company also described the feature as a “right starting point.”

Concerns have been growing about AI chatbots from tech companies such as OpenAI and Meta engaging users in “suspicious and potentially harmful” conversations about mental health. According to CNBC, Meta offers its own AI chatbot and is developing a powerful new AI model, codenamed Avocado, set to debut later this year.

By introducing the parental alert feature at this moment, Instagram projects a positive public image of caring for its teenage users and being accountable to society for their safety.

On February 18, Meta CEO Mark Zuckerberg testified at the Los Angeles Superior Court in a landmark case over teenage social media addiction. The case involves accusations that social media platforms harm children, and it marks the first time a tech company has answered charges related to teenage safety before a jury. Instagram denied the accusations, citing years of continuous improvements to its safety features and parental controls in its defense.

Amid the various legal challenges Meta faces over children’s digital safety, the National Parent Teacher Association said it would end its funding relationship with Meta.

(Reference: CNBC)