Snapchat Parental Controls

Snapchat has added new parental controls that let parents prevent teenagers from viewing “sensitive” and “suggestive” content

Last year, Snapchat added parental controls to its app with the launch of its “Family Center” feature. Today, in a statement on its online Privacy and Safety Hub, the company announced that it is adding content filtering tools that let parents prevent their teens from being exposed to content flagged as sensitive or suggestive.

Parents can activate the feature by turning on the “Restrict Sensitive Content” filter in Snapchat’s Family Center. Once it has been enabled, teens will no longer see restricted content in Stories or on Spotlight, the platform’s short-form video section. The language next to the toggle states that enabling the filter will not affect content shared in Chat, Snaps, or Search.

Alongside this move, Snapchat is publishing its content guidelines for the first time, giving creators on Stories and Spotlight more insight into which kinds of posts can be recommended on the platform and which content will now be labeled “sensitive” under its community guidelines. The company says it had previously shared these rules with creators in the Snap Stars program and with its media partners, but it is now making them public on a page on its website.


The company already forbids content on its platform that promotes hatred, terrorism, violent extremism, illegal activities, harmful false or misleading information, harassment and bullying, threats of violence, and more. The new guidelines, however, spell out which content within those categories will be deemed “sensitive.” Under the new controls, teen users may not see this content, and it may also be restricted for other users based on their age, location, or personal preferences. Such content may also be ineligible for recommendation.

In the case of sexual content, for instance, Snap explains that content will be deemed “sensitive” if it contains “all nudity, as well as all depictions of sexual activity, even if the imagery is not real,” as well as “explicit language” describing sex acts and other sex-related subjects like sex work, taboos, genitalia, and sex toys, “overtly suggestive imagery,” and “insensitive or demeaning sexual content.”

The guidelines also cover other sensitive areas, such as harassment, disturbing or violent content, false or misleading information, illegal or regulated activities, hateful content, terrorism and violent extremism, and commercial content (overt solicitation to buy from non-approved creators). This spans a wide range of material, including depictions of drugs, “engagement bait” (wait for it), self-harm, body modifications, gore, violent news footage, graphic depictions of human and animal suffering, sensationalized reporting of disturbing incidents like violent or sexual crimes, risky behavior, and much, much more.

The changes follow a 2021 congressional hearing during which Snap was questioned over the adult-oriented content, including invitations to sexually explicit video games and articles about going to bars or porn, surfacing in the app’s Discover feed. As senators correctly noted at the time, the content Snap was promoting was clearly aimed at a more adult audience, even though its app is rated 12+ on the App Store. In some cases, even the video games it promoted were rated for older players.

“We hope these new tools and guidelines help parents, caregivers, trusted adults, and teens not only personalize their Snapchat experience but empower them to have productive conversations about their online experiences,” the social media company said in a blog post.

Still, even if the new feature could significantly reduce teens’ exposure to sensitive content in some cases, it requires parents to take action by enabling a toggle they probably don’t know exists.

In other words, this is yet another example of how the absence of laws and regulations governing social media companies has led to self-policing, which falls short of adequately protecting young users from harm.

In addition to the content filters, Snap said it is working on features that will give parents more “visibility and control” over how their teens use the app’s new My AI chatbot.

The social network launched the chatbot, powered by OpenAI’s GPT technology, last month as part of its Snapchat+ subscription. Notably, Snap’s announcement came after the chatbot went off the rails in a conversation with a Washington Post reporter posing as a teenager: the bot reportedly advised the columnist on how to hold a birthday party while masking the smell of alcohol and marijuana. Separately, researchers at the Center for Humane Technology found that the bot offered sex advice to a user posing as a 13-year-old.

These chatbot-focused parental controls have not yet launched.
