How Facebook’s leaked guidelines could make a big difference in preventing suicide

Until just a few days ago, only Facebook staff and insiders knew the details of the company’s controversial moderation policies. 

Now, whether Facebook likes it or not, those guidelines have been thrust into the spotlight by the Guardian, which published leaked internal documents advising moderators on how to handle troubling content of all stripes. That includes one of the most sensitive kinds of content: livestreamed self-harm or suicide. 

In general, criticism of Facebook’s moderator guidelines focuses on the fact that the company has little business incentive to remove offensive content, since perceived censorship makes its platform less attractive to users. That conflict of interest is a valid concern when it comes to hate speech. It doesn’t work the same way, however, when applied to the problem of people livestreaming suicidal thoughts or behavior. 

In those cases, Facebook needs to carefully balance the need to help users in crisis against the need to shield bystanders from trauma. Its guidelines for self-harm and suicide represent progress toward doing that effectively, but Facebook has the power to do even more. 

If the company embraces data collection, research, and transparency, experts say, Facebook can advance our understanding of effective suicide prevention perhaps more than any research effort that has come before.

Users can report suicidal content to Facebook moderators.

This possibility is what motivates Daniel J. Reidenberg, executive director of the nonprofit suicide prevention group SAVE. For the past decade, Reidenberg has provided feedback to Facebook about mental health and suicide prevention issues. (While Facebook declined to make a staff member available to discuss its self-harm and suicide moderation guidelines, a spokesperson did identify Reidenberg as one of the unpaid experts the company has consulted.) 

“If somebody that you deeply cared about was in pain, if they were struggling and hurting, would you want there to be everything possible done to help them?” says Reidenberg, describing his view of handling live self-harm and suicidal content on Facebook. “For me, I’d want to have everything possible done … to take every moment of opportunity to help them.” 

You can see Reidenberg’s influence in Facebook’s guidelines for self-harm and suicide. They permit moderators to leave up such content until it’s no longer possible to help the user in question, at which point the video is taken down. The recommendations also allow moderators to consider whether the video is newsworthy in some way. 

Essentially, the guidelines maximize the opportunity for a friend or bystander to offer practical or emotional support to someone in crisis, but also recognize the risk that unsuspecting users may be traumatized by seeing violent imagery in their News Feed. 

Reidenberg acknowledges that this strategy has little basis in scientific research, but that’s because there aren’t yet studies that tell us which kinds of interventions are most effective at stopping a suicide attempt — much less what works best for this scenario on social media. It may well be the support of a person’s closest friends or family, but Reidenberg argues that it could also be words of encouragement from a distant high school friend. 

Until research can make a convincing case for who’s best positioned to make a successful intervention, Reidenberg says Facebook should operate as if everyone has the potential to make that life-saving difference. Shutting down a livestream of self-harm or suicide, in other words, might also shut people off from help that could turn the moment around. 

The only problem is that the research we do have suggests that people already vulnerable to suicidal thoughts or behavior who encounter graphic details about suicide may be more likely to subsequently make an attempt. That “contagion effect” is in addition to whatever emotional or psychological trauma anyone might experience after witnessing self-harm or suicide on social media. 

Facebook’s User Operations Safety Team workers look at reviews at Facebook headquarters in Menlo Park, California.

Image: Paul Sakuma/AP/REX/Shutterstock

To minimize such exposure to users, Facebook moderators are instructed to remove videos once it’s no longer possible to intervene. That way, the content can’t keep circulating when it no longer serves the clear purpose of supporting someone engaged in self-harm, or of reporting the behavior to Facebook so the company can try to reach the user directly or contact the authorities. It doesn’t, however, guarantee that people vulnerable to suicidal thoughts or behavior won’t stumble across a live act of self-harm. 

The compromise between protecting bystanders and creating opportunities to aid those in crisis might strike some as odd, given the recent outrage over the gratuitous depiction of suicide in the Netflix series 13 Reasons Why. After all, mental health experts and organizations, including SAVE, were concerned that scene might negatively affect young viewers. 

But Reidenberg says the difference between fiction and real life is an important one; whereas no one can stop the lead character Hannah from dying by suicide, we might be able to help someone who is crying out for help on Facebook. Still, he adds, it’s key that streaming platforms take the contagion effect seriously and continue to find ways to minimize it. 

This may seem like completely uncharted territory, but April C. Foreman, a licensed psychologist and board member of the American Association of Suicidology, says we can draw lessons about how to respond from education professionals who prepare for instances of self-harm in the classroom. After accounting for everyone’s safety, the goal is to “remove the audience,” either by taking the person in crisis out of the immediate setting or by moving the crowd away from it. Then you provide resources to those affected by the incident. 

“That is sort of how you do your duty by everybody, without silencing a need to get help but also not traumatizing everyone else,” says Foreman, who has not worked with Facebook. 

She believes the company’s guidelines are a good start but will require refining as user behavior and the technology evolve. Ideally, she’d like to see Facebook find a “middle path” that strategically limits the audience for live self-harm or suicide to a group of “trusted” contacts and “skilled” helpers. 

“We’re trying to know what’s best without the evidence,” Foreman says. “There’s probably going to have to be trial and error … and it’s unfair to Facebook to not take into consideration how new this is.” 

But that, Foreman says, is exactly why Facebook has the potential to make a huge difference in suicide-prevention efforts. By developing these guidelines, watching what works, and then making adjustments, Facebook will have generated incredibly valuable information about how people respond to interventions — and how bystanders are affected by seeing traumatic scenes of non-fatal and fatal self-harm.

Foreman hopes that Facebook will collect, analyze, and publish the data so that researchers can learn from its experiences. “That’s the actual part where I’d want to hold them more accountable,” she says. After all, if the company’s insights show how to individualize interventions so they’re most effective, that could be “truly groundbreaking.” 

Reidenberg has a similar perspective and laments how few transformational leaps scientists have made in understanding suicide over the past few decades. That’s partly because the research is chronically under-funded and partly because of ethical concerns over how to study suicide. 

To Reidenberg, Facebook’s potential role in bringing suicide-prevention efforts into the 21st century is a remarkable opportunity. While critics see a social media company dangerously making it up as it goes along, he sees employees who’ve dedicated themselves to understanding what happens when its users broadcast self-harm on its platform — and what to do in response. Indeed, the company tells its users what to do when they see a friend post about suicide and recently announced that it would hire 3,000 more people to review reports of objectionable content, including self-harm. 

“We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help,” Monika Bickert, head of global policy management at Facebook, said in a statement. 

Reidenberg says Facebook consults with numerous experts in the field from diverse backgrounds, including researchers, clinicians, educators, and people who’ve attempted suicide and lived.

“They take this incredibly seriously,” he says of Facebook employees charged with taking on this challenge. “I don’t believe any of them got into this line of work thinking they would be confronted by this issue, and yet they’re working on it diligently.”

How those efforts play out internally with the company’s moderators is indeed an important part of its chances for success. But this also represents a high-stakes proposition for an entire field of research on one of the most important scientific questions of our time. 

“They have the ability to do something we’ve never been able to do before,” Reidenberg says. 

Let’s hope, then, that Facebook gets it right. 

If you want to talk to someone or are experiencing suicidal thoughts, text the Crisis Text Line at 741-741 or call the National Suicide Prevention Lifeline at 1-800-273-8255. Here is a list of international resources. 
