Mace & Crown | October 21, 2017

Live Violence and Crime Appearing on Facebook

Audra Reigle | Assistant Technology Editor

Social media is a huge part of our day-to-day lives. From sharing our achievements with the world to venting about the day's frustrations, it lets us share information with family and friends around the world. However, social media can also be used to share stories of violence. With livestreaming services being added to Facebook and other social media outlets, violence can be streamed to millions of people, intentionally or otherwise.

In a recent and disturbing trend, incidents of violence and even death have been streamed live on social media. Locally, a shooting was caught on Facebook Live, according to the Virginian-Pilot. The victims were smoking and listening to music in their car when they were shot at. The stream continued for an hour after the shooting.

In Georgia, 13-year-old Malachi Hemphill accidentally shot himself while streaming on Instagram Live and died of his injuries, according to the New York Post. His mother and sister discovered that he had been streaming, but the stream cut off after the incident.

Last summer, when Philando Castile was shot by police, his girlfriend, Diamond Reynolds, streamed the incident on Facebook Live. That footage is “expected to be used in the prosecution of the Minnesota police officer charged with second-degree manslaughter and two counts of dangerous discharge of a firearm,” according to USA Today. Such moments are difficult to keep from spreading because they are broadcast directly to social media.

Also last summer, Antonio Perkins was shot to death in Chicago while streaming on Facebook Live, according to Engadget. Facebook typically removes videos that sensationalize violence, but this one was left up because Facebook “believe[d] it [would] boost awareness of violence and its consequences.” That was Facebook’s stance on such incidents until recently. Other than a warning before the video starts, there are no barriers to stop people from watching it, and unlike live TV, there is no one to cut the feed when violence strikes.

The most recent incident of violence streamed on Facebook Live comes out of Cleveland, where an elderly man was fatally shot during a Facebook Live stream on April 16, according to Engadget. Facebook released a statement to Engadget: “This is a horrific crime and we do not allow this kind of content on Facebook. We take our responsibility to keep people safe on Facebook very seriously, and are in touch with law enforcement in emergencies when there are direct threats to physical safety.” Since the Engadget article was posted, the shooter has been found dead in Pennsylvania, according to CNN. The Cleveland incident appears to be the moment when Facebook decided to change its policy on violence on its platforms.

Facebook’s policy on violent content “prohibits content that glorifies [or] promotes violence, only permitting violent content that is considered to be in the public interest.” There are “teams of content moderators who are trained to remove content that violates the company’s policies,” and the company has also started using artificial intelligence to help find such content. In the case of the Cleveland shooting, the videos were deleted and the shooter’s account was deactivated, but by then the video had already spread across the internet.

Training artificial intelligence takes time, so Facebook will also improve its reporting system to help users report content “that violates our standards as easily and quickly as possible,” according to Wired. In the meantime, more moderators will be added to review content and help train the AI.

Since the artificial intelligence will take time to train, Facebook and similar streaming services will need to rely on humans to moderate streams. They will also rely on users to report violent content when they see it so that it can be removed faster. Hopefully, human moderators will handle violent content well until artificial intelligence is ready to take over.