Is Facebook’s Lack of Moderation Leading Toward an Unfeeling Future?

  • Categories Analysis, Blog, News
  • Reading Time 7-Minute Read

People have long wondered whether media exposure is detrimental to the healthy mental development of children and adolescents. However, the media tends to report that video games, specifically violent video games, are the main culprit in this mass desensitization.

This couldn’t be further from the truth: a multitude of studies have found that games involving difficult moral dilemmas actually tend to increase players’ moral and ethical introspection. Violent video games do tend to decrease the guilt response they elicit, but only in the short term, not the long term as the media would have you believe. Notably, most of these media companies never bother to include themselves, their competitors, or online media in their assessments when proclaiming that the media kids are exposed to is damaging.

Since 2011 or 2012, I have wondered whether the long-term effects of poorly filtered YouTube videos, Facebook videos, and television news content have created a looming problem: will this cause a generation to grow up morally confused? We’ll have to wait a decade or two to see the full effects of exposure to such videos, but based on Facebook’s content moderation strategy (at least in the branch investigated by the UK’s Channel 4), there will be effects.

I personally believe the internet is both humanity’s greatest blessing and its worst curse, as the content available ranges from extremely beneficial to extremely detrimental. Some of the most uplifting, wholesome, and genuine content, along with scholarly information, is available on the internet, but unfortunately so is some disgusting content (gore videos, fights, videos glorifying self-harm, and worse).

So, in a time when all varieties of content are available on multiple, extremely influential platforms, is it not the duty of the moderator to moderate that content? If a public forum devolved into a shouting match driven by vitriol and hate, would it not be the forum moderator’s duty either to guide the topic back on track or to halt the discussion entirely?

Facebook has recently been criticized for its lackluster filtering of obscene content, mainly because of its Irish branch’s standards for what is and is not deemed inappropriate. Nude female bodies, regardless of whether they are engaged in a sexual act, are subject to the strictest filter: an exposed nipple is grounds for immediately removing the video. I find this ridiculous, but many others are more sensitive to this specific topic, so I realize it should be decided by consensus. The protocols for other, undeniably inappropriate content, however, are far stranger. As Business Insider reported after Channel 4 aired its investigative report, the filtering decisions moderators are trained to make are as follows:

  • Pictures of topless women — delete
    • Reasoning: No reasoning given.
  • Violent death — mark as disturbing
    • Reasoning: “The reporter asked why images of people dying aren’t removed, and was told that sometimes they fit within Facebook’s terms — especially if they ‘raise awareness.’”
  • Footage of a little boy being beaten by his step-father — leave it up
    • Reasoning: “The trainer says: ‘We always mark as disturbing child abuse, we never delete it and we never ignore it.’”
  • Footage of a man eating live baby rats — ignore
    • Reasoning: “While on the job as a moderator, the undercover reporter reviewed footage of somebody eating live baby rats. He was told to ignore the video because it was for ‘feeding purposes’ and therefore didn’t violate Facebook’s animal cruelty policy.”
  • Footage of two girls fighting — mark as disturbing
    • Reasoning: “Working as a moderator, the undercover reporter came across footage of two teenage girls in a physical fight, which had been shared over 1,000 times. His assessment was that one girl is ‘definitely more dominant than the other’ and that the other girl ends up getting ‘battered.’” (For clarification, the video includes a tagline against bullying others, so it was left up.)
  • Images of self-harm — delete “promotion,” leave up “admission”
    • Reasoning: Content is deleted only if it is overtly promotional; in this case, an image of healed cuts on someone’s arm accompanied by the text “miss the way it feels” was to be deleted. A video or image of the aftermath of a self-harm episode is instead grounds for a “checkpoint” to be issued (information on how to get help).
  • Clearly underage users — ignore and don’t send help
    • Reasoning: “The trainees were shown an image of a clearly underage user posting a picture about having an eating disorder, in which case they were told not to send a checkpoint. They are told that Facebook does not action underage accounts unless they specifically admit to being under 13 years old.”
  • Racist hate speech — it depends
    • Reasoning: “[A]s an example of what to ignore, they were given a meme of a girl having her head held underwater with the caption ‘when your daughter’s first crush is a little negro boy.’” This was because the image “implies a lot, but to reach the actual violation you have to jump through a lot of hoops to get there.”
  • Hate speech aimed at immigrants — ignore
    • Reasoning: “The undercover reporter asked for advice on whether to delete a comment which said ‘f**k off back to your own countries’ under a video with a caption referring to Muslim immigrants. The trainer told the reporter: ‘If it just said Muslims then yeah, you’d take this action. But it doesn’t, so it’s actually an ignore.’”
  • Pages with large followings — proceed with caution
    • Reasoning: “Pages with a large following are ‘shielded content,’ and can’t be deleted by moderators at CPL. Full-time Facebook employees make the final call after the pages are put into the ‘shielded review queue.’ One such shielded page is that of far-right activist Tommy Robinson, according to Channel 4, which is followed by just over 904,000 people.”

However, regardless of what is and is not covered by the policies of Facebook’s moderation community, this sort of content does damage a person’s psyche over a long period of exposure. One of Facebook’s moderators, Sarah Katz, had to quit after viewing so much obscene content that she said she had become numb to child porn and bestiality. Furthermore, as Business Insider reported, “At the time, Katz said, she was not asked to report the accounts sharing the material — something she said ‘disturbed’ her.” So, if this sort of content was disturbing to Ms. Katz, both for the reasons implicitly and explicitly stated, it is of course detrimental to those still developing both morally and mentally. Additionally, there is a major flaw in the distinction Facebook’s moderation protocols draw between “promotion” and “admission” of self-harm: both types of self-harm videos may trigger someone to do the same, regardless of the uploader’s intentions.

To conclude, my only real suggestion for fixing this issue is to use machine learning algorithms as the content moderators, both to reduce the mental stress on human moderators and to allow round-the-clock moderation. The risk, however, is the same problem YouTube faces with its content promotion bot: the bot tries to maximize value as defined by YouTube’s internal team. Its effectiveness as a moderator is therefore limited, because its purpose was never to moderate solely on ethical guidelines, but also to maximize watch time. So, it would be best to find the right balance between ethics and profit, for what good are the profits a firm reaps in the present, when they will be spent in an unstable, unfeeling future?
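As a rough illustration of the idea, an automated system could map a model’s policy-violation risk score onto the same tiered actions described in the list above (ignore, mark as disturbing, delete) while routing “shielded” high-follower pages to human review. This is a minimal sketch with hypothetical thresholds and function names, not Facebook’s actual pipeline; a real system would use a trained classifier to produce the risk score.

```python
def moderate(risk_score: float, is_shielded_page: bool = False) -> str:
    """Map a policy-violation risk score in [0, 1] to a moderation action.

    The thresholds (0.9, 0.5) are illustrative placeholders, not real
    values from any deployed system.
    """
    if is_shielded_page:
        # Mirrors the "shielded content" rule: large pages bypass
        # automated deletion and go to a human review queue.
        return "escalate_to_review"
    if risk_score >= 0.9:
        return "delete"
    if risk_score >= 0.5:
        return "mark_as_disturbing"
    return "ignore"


# Example decisions under the hypothetical thresholds:
print(moderate(0.95))                          # delete
print(moderate(0.60))                          # mark_as_disturbing
print(moderate(0.20))                          # ignore
print(moderate(0.95, is_shielded_page=True))   # escalate_to_review
```

Note that this only decides the action; the harder, unsolved problem is producing a risk score that reflects ethical guidelines rather than engagement metrics, which is precisely the conflict described above.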