In an apparent effort to ensure their heinous actions would “go viral,” a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as other copies continued to spread like a virus.

The original Facebook Live broadcast was eventually taken down, but not before its 17-minute runtime had been viewed, replayed and downloaded by users. Copies of that footage quickly proliferated to other platforms, like YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms worked to take some copies down, other versions were re-uploaded elsewhere.

The episode underscored social media companies’ Sisyphean struggle to police violent content posted on their platforms. “It becomes essentially like a game of whack-a-mole,” says Tony Lemieux, professor of global studies and communication at Georgia State University.

Facebook, YouTube and other social media companies have two main ways of checking content uploaded to their platforms. First, there’s content recognition technology, which uses artificial intelligence to compare newly uploaded footage to known illicit material. “Once you know something is prohibited content, that’s where the technology kicks in,” says Lemieux. Social media companies also augment their AI technology with thousands of human moderators who manually check videos and other content.

Still, social media companies often fail to recognize violent content before it spreads virally, letting users take advantage of the unprecedented and instantaneous reach offered by the very same platforms trying to police them. Neither YouTube, Facebook nor Twitter answered questions from TIME about how many copies of the Christchurch video they had taken down.

New Zealand police said they were aware the video was circulating on social media, and urged people not to share it. “There is extremely distressing footage relating to the incident in Christchurch circulating online,” police said on Twitter. “We would strongly urge that the link not be shared.” Mass shooters often crave notoriety, and each horrific event brings calls to deny assailants the infamy they so desire.

Facebook said that the original video of the attack was only taken down after the company was alerted to its existence by New Zealand police, indicating that an algorithm had not noticed the video. “We quickly removed both the shooter’s Facebook and Instagram accounts and the video,” a Facebook spokesperson said. (Four arrests were made after the Christchurch shooting, and it remains unclear whether the shooter who live-streamed the attack acted alone.)

While websites like YouTube and Vimeo have strict policies about uploading violent and graphic content, such as footage of murders, executions and accidents, LiveLeak for years had no such restraint. On Wednesday, however, after 15 years of operation, the infamous video-sharing website shut down, with visitors redirected to a new “social video factory” site called ItemFix.

From the video of Saddam Hussein’s hanging to the beheading of James Foley, LiveLeak often sparked controversy with the videos users uploaded onto its platform. LiveLeak co-founder Hayden Hewitt explained the move in a statement published on ItemFix.

“The world has changed a lot over these last few years, the Internet alongside it, and we as people,” Hewitt wrote. “The thing is, it’s never been less than exhilarating, challenging and something we were all fully committed to. Nothing lasts forever though and – as we did all those years ago – we felt LiveLeak had achieved all that it could and it was time for us to try something new and exciting.”