Part 4 – When Facebook finds out first
How private companies are using data to get involved in suicide risk assessment
Article in Review: D’Hotman, D. & Loh, E. (2020). AI-enabled suicide prediction tools: a qualitative narrative review. BMJ Health Care Inform, doi:10.1136/bmjhci-2020-100175
Summary: This article is the final in a four-part series examining D’Hotman and Loh’s (2020) narrative review of artificial intelligence (AI) and its capability for suicide prediction using data gathered from medical records and social media. The first three articles covered a range of factors, including the challenges of suicide ‘prediction’ and the use of data from medical sources (such as medical records) and from social media content. After reflecting on D’Hotman and Loh’s (2020) discussion of mental and physical health conditions, we conclude the series with this article, which summarises their observations on the capability for Facebook to be developed into a ‘suicide prevention tool’.
Advancements in Artificial Intelligence (AI) have opened up novel ideas and opportunities for developing tools to help us assess suicide risk in individuals. In this final blog on the article by D’Hotman and Loh (2020), we take a look at how private social media companies are using data to try to assist with suicide risk prediction, and we assert that such usage should be subject to independent scientific review.
Social media used as a diagnostic aid
There has been promising research on the use of social media data to aid diagnoses of mental illness, including:
• Major depressive disorder
• Eating disorders
• Bipolar affective disorder
• Borderline personality disorder and others
Biomedical Informatics Insights study
So it is no wonder that research is underway to understand how social media might be used to assess the risk of a suicide attempt or completion. One study of interest, published in Biomedical Informatics Insights, applied natural language processing and both supervised and unsupervised machine learning to attempt to identify suicide risk in data taken from:
• Fitbit and others
Although this study had some limitations, particularly the limited demographic diversity among study participants, the results were impressive. The study found that this data analysis process may be up to ten times more accurate at correctly predicting suicide attempts than clinician averages: an accuracy rate of 40-60% vs. 4-6% for clinicians.
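To make the idea of supervised text classification concrete, here is a deliberately minimal, illustrative sketch of flagging posts by word frequency. This is not the study’s actual method (which used natural language processing and far more sophisticated models); all posts, labels, and function names below are invented for this example.

```python
# Toy supervised text classifier, in the broad spirit of the NLP approaches
# described above. All training data here is synthetic and hypothetical.
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(posts, labels):
    # Count word frequencies per class (1 = flagged for review, 0 = not).
    counts = {0: Counter(), 1: Counter()}
    for post, label in zip(posts, labels):
        counts[label].update(tokenize(post))
    return counts

def predict(counts, post):
    # Score each class by how many of the post's words it has seen;
    # the class with the higher score wins.
    scores = {label: sum(c[w] for w in tokenize(post))
              for label, c in counts.items()}
    return max(scores, key=scores.get)

# Hypothetical training examples (not from any real dataset).
posts = ["feeling hopeless and alone", "great day at the park",
         "cannot go on like this", "excited about the new job"]
labels = [1, 0, 1, 0]

model = train(posts, labels)
print(predict(model, "so hopeless today"))  # → 1 (flagged in this toy model)
```

Real systems replace the word counts with learned feature weights and validate against clinical outcomes, which is precisely why independent review of their accuracy matters.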
Data sets are growing
It’s a promising finding for an assessment done through social media alone. Furthermore, our capacity to find patterns is likely only to improve as more data is acquired through public channels and as systems become better at handling large datasets and complex algorithms. Big social media companies already have access to even larger volumes of data and more advanced analytical tools. Many search engines now direct people to suicide prevention resources when certain keywords and terms are submitted.
Data from wearable technologies
One area for further investigation is the use of wearable data to determine real-time suicide risks. Popular wearable devices include the Apple Watch and the Fitbit. Personal health apps compile extensive information about a wearer’s:
• Heart and other biomedical indicators
In one small study of patients in a mental health inpatient ward, data from Fitbit, Apple and Facebook were linked to give a more cohesive picture of users’ mood, sleep behavior, step count, and technology use. Although it was a small study, it demonstrated a technically feasible pathway for machine learning to assess suicide risk from this data.
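The record-linkage step the study describes can be sketched very simply: per-day records from each source are merged into one feature row per user-day. The sources, field names, and values below are all hypothetical, chosen only to illustrate the linking pattern.

```python
# Illustrative sketch: linking per-day records from multiple hypothetical
# sources (wearable + social) into one combined row per (user, day).
# All data here is invented for the example.
fitbit = {("u1", "2020-01-01"): {"steps": 3200, "sleep_hours": 4.5}}
social = {("u1", "2020-01-01"): {"posts": 12, "night_activity": True}}

def link(*sources):
    merged = {}
    for src in sources:
        for key, fields in src.items():
            # Merge this source's fields into the row for that user-day.
            merged.setdefault(key, {}).update(fields)
    return merged

rows = link(fitbit, social)
print(rows[("u1", "2020-01-01")])
# → {'steps': 3200, 'sleep_hours': 4.5, 'posts': 12, 'night_activity': True}
```

A combined row like this is what a machine learning model would then consume; the hard parts in practice are consent, identity matching across platforms, and handling missing sources, none of which this sketch addresses.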
Facebook and suicide prevention
Facebook has one of the most in-depth suicide prevention programs, which has been on the platform for more than ten years. In November 2017, in response to the live streaming of suicide attempts, Facebook stepped up its efforts with a more detailed prediction tool. Other users can now report posts of concern, while a machine learning tool is also used to identify them. It is claimed this tool is more accurate at recognizing posts that indicate the creator is at risk of a suicide attempt than the human report process.
In some cases, human reviewers have used geo-location data to provide immediate assistance to those in a situation of immediate risk. Facebook reported that this service was provided to 100 people in its first month of operation.
Information is provided to those who are assessed as needing it
When a post is assessed as indicating its creator is at risk of suicide, the user in question is provided with information about support services, including hotline numbers, resources, chat options, and tips. Facebook has stated these tools were developed in collaboration with mental health services; however, there has been no independent review or assessment of this methodology or process. Facebook reports that it has invited public contributions to the development of these tools. It also claims that user privacy is a key concern in the creation of these resources: the tool does not take into account “only me” data or demographic data, nor does it alert friends or family about an identified suicidal intent.
Ensuring ethical and appropriate use of data
All of this knowledge can complement clinical services and treatments, and create a knowledge database rich and detailed enough to give insight across large population groups. All of this underscores the need for the scientific community to closely monitor how private companies are using the suicide risk-related data they acquire. These efforts should be subject to independent review and ethical consideration to ensure they are safe, effective, and permissible. Further research is now required to determine how people’s diverse contexts, cultures, and settings can be taken into account.