Charlie Kirk shooting shows social platforms’ moderation limits

Sep 15, 2025

WASHINGTON, D.C.: The shooting of conservative activist Charlie Kirk at a Utah college event has reignited debate over how social media platforms handle violent and graphic content. Within minutes of the attack, disturbing videos of the moment were circulating widely across X, Facebook, TikTok, Instagram, YouTube, and even Truth Social, often autoplaying in feeds and reaching millions of users who had not sought them out.

Days later, many clips were still easy to find despite takedown efforts, underscoring the challenge platforms face when shocking events unfold in real time. Some platforms removed videos if they were deemed to glorify violence, while others added warning labels or age restrictions. But inconsistent enforcement left many clips accessible.

“It was not immediately obvious whether Instagram, for example, was just failing to remove some of the graphic videos … or whether they had made a conscious choice to leave them up,” said Laura Edelson, assistant professor of computer science at Northeastern University. “Those videos were circulating really widely.”

The situation highlights a long-running problem. Tech firms faced similar backlash after a 2019 mass shooting in Christchurch, New Zealand, was livestreamed, and after videos of suicides, murders, and fights spread online. Policies remain ambiguous. Meta, which owns Facebook, Instagram, and Threads, does not ban such clips outright; instead, it applies warning labels and restricts viewing for users under 18. YouTube said it was removing some videos that lacked news context, restricting others to logged-in adult viewers, and promoting news sources to help people “stay informed.”

TikTok said it deployed extra safeguards, including removing close-up graphic clips, adding content warnings, and preventing the videos from surfacing in its “For You” feed.

Still, algorithms designed to maximize engagement continue to amplify content that sparks strong reactions. “This is the world that we have all made,” Edelson said. “The person who gets to decide what’s newsworthy on Instagram is Mark Zuckerberg. The person who gets to decide what stays up on X is Elon Musk.” She noted that platforms have cut back on human moderators, leaving AI systems that can both over- and under-police sensitive content.

The U.S. has no broad rules banning violent imagery, leaving enforcement largely to the platforms, and minors can often sidestep age restrictions simply by misreporting their ages. Other jurisdictions have gone further. The United Kingdom’s new Online Safety Act requires platforms to shield children from harmful material, including depictions of serious violence, with penalties of up to 18 million pounds (US$24.4 million) or 10 percent of global annual revenue, whichever is greater. The law, in effect since March, also allows senior managers to be held criminally liable.

The European Union’s Digital Services Act, which began applying to the largest platforms in 2023, also holds them more accountable. It requires platforms to give users easy tools to report illegal material, such as terrorist content or child sexual abuse, and to act quickly on those reports. However, it stops short of mandating proactive takedowns of violent imagery.

The widespread and lingering availability of the Kirk footage suggests that platforms, even with existing safeguards, are still struggling to balance public interest, user safety, and the relentless pull of algorithms that reward viral engagement.