Australia’s eSafety Commissioner Criticizes Social Media Giants Over Child Abuse Content

Australia’s internet watchdog has accused major social media companies of failing to tackle child sexual abuse material on their platforms, singling out YouTube for its lack of responsiveness. In a report released on Wednesday, the eSafety Commissioner revealed that YouTube and Apple neither tracked user reports of such content nor provided details on their response times. The findings prompted the Australian government to include YouTube in its recent social media ban for users under 16, reversing an earlier exemption for the platform.

eSafety Commissioner Julie Inman Grant condemned the companies for not prioritizing child safety, stating, “When left to their own devices, these companies aren’t prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services.” She compared the situation to other industries, arguing that no other sector would be allowed to operate while enabling such crimes. Google, however, disputed the claims, asserting that YouTube proactively removes over 99% of abusive content before it is flagged or viewed.

The report also highlighted Meta, the owner of Facebook, Instagram, and Threads, which has policies against graphic content but was found to have safety gaps. The eSafety Commissioner has required several tech giants, including Apple, Google, Meta, and Microsoft, to report on their efforts to combat child exploitation. The review uncovered deficiencies such as failures to block livestreamed abuse, inadequate reporting mechanisms, and a failure to deploy hash-matching technology, which detects known abuse material, across all of their services.

Despite prior warnings, some companies, including Apple and YouTube, had neither addressed these gaps nor provided requested data on user reports and safety personnel. Inman Grant expressed frustration, stating that these platforms had ignored critical questions about their handling of abuse reports. The regulator’s findings underscore ongoing concerns about the tech industry’s accountability in protecting vulnerable users, particularly children, from harmful content.