
Facebook & Instagram's #failed Attempt to Prevent User Suicide




In a 2018 commentary published in the journal Annals of Internal Medicine, two researchers questioned the ethics and transparency of Facebook's suicide prevention system. The algorithm reportedly flags users deemed to be at high risk of self-harm, triggering a process in which the company notifies local authorities to intervene. At the time of the 2018 report, Facebook said around 3,500 self-harm cases (not including those reported through Facebook's Instagram app) had been flagged by its algorithm and resulted in local authorities being called on to intervene.



Back in 2017, Facebook began testing a machine learning algorithm designed to track a user's activity on the platform and flag the person if it identifies an imminent risk of self-harm. Once flagged, the case is passed to a human team for evaluation, and if deemed urgent, Facebook contacts local authorities to intervene.
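
Facebook hasn't published the internals of this pipeline, but the flow it describes (automated scoring, review by a human team, and escalation to authorities only for urgent cases) can be sketched roughly as follows. Everything in this Python sketch, including the Post fields, the keyword-based risk_score, the thresholds, and the triage outcomes, is a hypothetical illustration of that flow, not Facebook's actual implementation.

```python
# A minimal illustrative sketch of the flow described above: automated scoring,
# human review, then escalation. Every name, field, phrase list, and threshold
# here is hypothetical; Facebook has not published its model or review criteria.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    user_id: str
    text: str
    comments: List[str] = field(default_factory=list)


def risk_score(post: Post) -> float:
    """Stand-in for the machine learning classifier.

    A real system would score text, comments, and engagement signals with a
    trained model; this keyword check only marks where that step sits."""
    danger_phrases = ("want to die", "end it all", "goodbye forever")
    hits = sum(phrase in post.text.lower() for phrase in danger_phrases)
    return min(1.0, 0.6 * hits)


def reviewer_deems_urgent(post: Post, score: float) -> bool:
    """Placeholder for the human evaluation step; in practice this is a
    trained reviewer's judgment, not a numeric rule."""
    return score >= 0.9


def triage(post: Post, flag_threshold: float = 0.5) -> str:
    """Route a post through the pipeline: score -> human review -> escalation."""
    score = risk_score(post)
    if score < flag_threshold:
        return "no_action"
    # Flagged content goes to a human review team rather than straight to
    # emergency services; only urgent cases trigger a call to local authorities.
    if reviewer_deems_urgent(post, score):
        return "notify_local_authorities"
    return "offer_support_resources"


if __name__ == "__main__":
    post = Post(user_id="u123", text="I can't take it, I want to end it all")
    print(triage(post))  # prints "offer_support_resources" under these toy thresholds
```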


By late 2018, Facebook was calling the experiment a great success and had deployed it in many countries around the world – but not in Europe, where the new GDPR rules deem it a privacy violation. After a year the company reported around 3,500 cases in which emergency services had been notified of a potential self-harm risk. The specifics of these 3,500 cases were not clear. What percentage of them resulted in Facebook actually stopping a fatal case of self-harm?


A New York Times review of four specific police reports from cases instigated by Facebook's algorithm suggests the system is far from reliably successful. Only one of the four cases studied resulted in Facebook helping police identify the location of an individual live-streaming a suicide attempt and intervene in time. Two other cases were too late, and the fourth turned out to be entirely incorrect, with police arriving at the doorstep of a woman who claimed to have no suicidal intent. The police, not believing the woman's statements, demanded she come to a local hospital for a mental health evaluation.


Dan Muriello, one of the engineers on the Facebook team that developed the algorithm, says the system doesn't mean Facebook is making any kind of health diagnosis; rather, it simply works to connect those in need with relevant help. "We're not doctors, and we're not trying to make a mental health diagnosis," says Muriello. "We're trying to get information to the right people quickly."


Ian Barnett, a researcher from the University of Pennsylvania, and John Torous, a psychiatrist working with Harvard Medical School, penned the commentary, which suggests Facebook's suicide prevention tools constitute the equivalent of medical research and should be subject to the same ethical requirements and transparency of process.


The authors cite a variety of concerns around Facebook's suicide prevention effort, from a lack of informed consent from users regarding real-world interventions to the potential for the system to target vulnerable people without clear protections. Underpinning all of this is a profound lack of transparency. Neither the general public nor the medical community actually knows how successful the system is, or whether social harms are being generated by police being called on unwitting citizens. Facebook claimed in 2018 that it doesn't even track the outcomes of calls to emergency services, citing privacy issues, so what is even going on here?


"Considering the amount of personal medical and mental health information Facebook accumulates in determining whether a person is at risk for suicide, the public health system it actives through calling emergency services, and the need to ensure equal access and efficacy if the system does actually work as hoped, the scope seems more fitting for public health departments than a publicly traded company whose mandate is to return value to shareholders," the pair conclude in their commentary. "What happens when Google offers such a service based on search history, Amazon on purchase history, and Microsoft on browsing history?"


Mason Marks, a visiting fellow at Yale Law School, is another expert who has been raising concerns over Facebook's suicide prevention algorithms. Alongside the potential privacy issues of a private company generating this kind of mental health profile on a person, Marks presents some frightening possibilities for this kind of predictive algorithmic tool.


"For instance, in Singapore, where Facebook maintains its Asia-Pacific headquarters, suicide attempts are punishable by imprisonment for up to one year," Marks wrote in an editorial on the subject last year. "In these countries, Facebook-initiated wellness checks could result in criminal prosecution and incarceration."


Ultimately, all of this leaves Facebook in a tricky situation. The social networking giant may be trying to take responsibility for the negative social effects of its platform; however, it seems to be caught in a no-win scenario. As researchers call for greater transparency, Antigone Davis, Facebook's Global Head of Safety, has suggested that releasing too much information about the algorithm's process could be counterproductive.

"That information could could allow people to play games with the system," Davis said to NPR. "So I think what we are very focused on is working very closely with people who are experts in mental health, people who are experts in suicide prevention to ensure that we do this in a responsible, ethical, sensitive and thoughtful way."


At this point it is all well and good for Facebook to state the goal of working with experts in ethical and sensitive ways, but the response so far from experts in the field is that no one has any idea what the technology is doing, how it is generating its results, who is reviewing those results, or whether it is actually causing more harm than good. All we know for sure is that at least 10 people a day around the world are having the police or emergency services show up on their doorstep after being called by Facebook.


The results of their 'intervention'? Not so great


The "horrifying" epidemic of misery emerged after kids became exposed to sites such as Facebook and Twitter on their phones a decade ago, experts say. Child suicide rates have soared by up to 150% over a decade that coincides with the adoption of social media sites.

Self-harm among girls aged 10 to 14 has almost tripled. Stats show an alarming spike in the number of kids in the US being admitted to hospital after cutting themselves or otherwise self-harming. For girls aged 15 to 19, there has been a 62 per cent increase since 2009. Among pre-teens aged 10 to 14, the increase is 189 per cent – nearly triple.



Even more horrifying, we're seeing the same pattern with suicide. And that pattern points to social media.

Deaths by suicide in the US are up 70 per cent in older teenage girls compared with the first decade of the century. In pre-teen girls, suicide has risen by 151 per cent.

Jonathan Haidt, professor of ethical leadership at New York University's Stern School of Business, said the pattern coincides with growing use of mobile devices.


He said: "Gen Z, the kids born after 1996 or so, those kids are the first generation in history that got on social media in middle school.


"How do they spend their time? They come home from school, and they're on their devices.


"A whole generation is more anxious, more fragile, more depressed.


"They are much less comfortable taking risks. The rates at which they get drivers licenses have been dropping.


"The number who have ever gone out on a date or had any kind of romantic interaction is dropping rapidly.



"This is a real change in a generation. And remember for every one of these, for every hospital admission, there's a family that is traumatized and horrified - 'My God, what is happening to our kids?'."

Tim Kendall, former president of Pinterest and former director of monetization at Facebook, is one of a number of tech wizards who have come to see the danger of their creations.

He tells the documentary: "It's plain as day to me - these services are killing people. And causing people to kill themselves."
