Instagram revamps safety features for teens as congressional pressure mounts — here are all the changes the social media giant is making


Mark Zuckerberg’s Instagram unveiled what was billed as a major overhaul of safety features for kids on Tuesday – a move that online watchdogs quickly blasted as a bid to avoid a looming congressional crackdown on the social media giant.

Instagram said it will automatically place users under age 18 into “teen accounts” and block people who do not follow them from viewing their content or interacting with them.

It will also mute Instagram app notifications for teen users between 10 p.m. and 7 a.m. and send “time limit reminders” urging teens to close the app after 60 minutes per day.

Mark Zuckerberg issued a stunning apology to the families of victims of online harm in January. AP

Parents will be able to view which accounts their kid has recently messaged, set daily time limits and block teens from using the app during specific time periods.

Additionally, users under the age of 16 will need parental permission to make changes to their account safety settings.

Meta’s announcement fell flat with online safety groups – several of which called the safety upgrades inadequate.

Tech Oversight Project director Sacha Haworth said parents “should ignore Meta’s latest hollow announcement” and said the company has a history of “broken promises and policy reversals” on online safety.

“Meta can push out as many ‘kid-’ or ‘teen’-focused ‘features’ as it wants, it won’t change the fact that its core business model is predicated on profiting off and encouraging children and teenagers to become addicted to its products – and American parents are wise to the hustle and demanding legislative action,” Haworth said in a statement to The Post.

The overhaul comes as the bipartisan Kids Online Safety Act – a landmark bill that would impose a legal “duty of care” on Instagram parent Meta, TikTok and other social media firms to protect kids from online harm – gains momentum in Congress.

In July, the Senate passed KOSA and a second bill, COPPA 2.0, in an overwhelming 91-3 vote. COPPA 2.0 would ban targeted advertising to minors, bar data collection without their consent and give parents and kids the option to delete their information from social media platforms.

The House Energy and Commerce Committee is set to mark up the bills on Wednesday – a key procedural step that would clear the way for a floor vote in the near future.

Another watchdog, the Tech Transparency Project, argued that Meta has “claimed for years to already be implementing” versions of the features detailed in Tuesday’s announcement.

For example, Meta originally announced plans to make teen accounts private by default and to limit their interactions with strangers as far back as 2021, according to previous blog posts.

The group also noted that several of the online safety experts who touted Meta’s safety changes in the company’s blog post work for organizations that received funding from the company.

“Not only is Meta repackaging these efforts as new while simultaneously claiming for years it was implementing these safety tools. It is also trotting out Meta-funded voices and holding them up as independent experts,” the Tech Transparency Project wrote on X.

Instagram announced overhauled safety features for kids and their parents on Tuesday. ink drop – stock.adobe.com

Fairplay for Kids, one of the groups leading the charge for KOSA’s passage, decried Meta’s announcement as an attempt to skirt a meaningful legislative crackdown.

“Default private accounts for minors and turning off notifications in the middle of the night are safeguards Meta should have implemented years ago,” Fairplay executive director Josh Golin said. “We hope lawmakers will not be fooled by this attempt to forestall legislation.”

“The Kids Online Safety Act and COPPA 2.0 will require companies like Meta to ensure their platforms are safe and privacy-protective for young people at all times, not just when it’s politically expedient,” Golin added.

Alix Fraser, director of the Council for Responsible Media, took a similar view of the announcement.

“The simple fact is that this announcement comes as Congressional pressure is mounting and support for the bipartisan Kids Online Safety Act continues to build,” Fraser said. “It wouldn’t be the first time Meta made a promise to avoid Congressional action and then never followed through or quietly backed away.”

Online safety groups accused Meta of trying to dodge a legislative crackdown. New Africa – stock.adobe.com

The Post reached out to Meta for comment.

Policymakers have singled out Meta for failing to protect kids from “sextortion” scams and other forms of online sexual abuse.

Critics have also accused apps like Instagram of fueling a youth mental health crisis with negative outcomes ranging from anxiety and depression to eating disorders and even self-harm.

Last fall, a coalition of state attorneys general sued Meta, alleging the company has relied on addictive features to hook kids and boost profits at the expense of their mental health.

In January, Zuckerberg issued a stunning apology to the families of victims of online abuse during a tense hearing on Capitol Hill. 


Despite its easy passage in the Senate, KOSA’s final prospects in the House remain uncertain, with some critics on both sides of the aisle raising concerns about the impact on online free speech.

In July, US Surgeon General Vivek Murthy called for a tobacco-style “warning label” on social media apps to raise awareness of their potential mental health risks, including depression and anxiety.
