
Part 2 – Learning from machine learning

The use of non-medical data to predict suicide and suicide risk

Article in Review: D’Hotman, D. & Loh, E. (2020). AI enabled suicide prediction tools: a qualitative narrative review. BMJ Health Care Inform, doi:10.1136/bmjhci-2020-100175

Summary: This article is the second of a four-part series examining artificial intelligence (AI) and seeking to better understand the capability of using such tools for suicide prediction, from a medical and a social perspective. The first article considered some of the challenges facing ‘suicide prediction’, before diving into medical prediction tools. In the current article, we reflect on the use of data from non-medical sources (rather than medical sources such as medical records) and what it may mean to use social media content for suicide ‘prediction’. Parts 3 and 4 of the series will then look at specific health/mental health conditions and at Facebook specifically as a ‘suicide prevention tool’.

___________________________________________________________________

An emerging body of evidence suggests that language patterns on social media, along with particular patterns of smartphone use, can indicate psychiatric issues. Researchers and tech companies are racing to identify ways in which they might assess suicide risk through online activity.

Assessing data from non-medical sources

AI and data tools can be used to analyse information from social media and web-browsing behaviour in an effort to identify users who are at risk. Systems can then deploy information (such as counselling services or advice) as an intervention. Several studies have demonstrated potential efficacy in using non-medical data to predict suicide risk. In most cases, data is gathered through natural language processing, which examines the language people use in the content they write online. Examples would include identification of mentions of suicide attempts, suicidal ideation, or discussion of suicidal themes. Machine learning techniques can then be used to compare and contrast findings and to look for patterns that might help our understanding of risk severity. A simplified sketch of this approach is shown below.
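
To make this concrete, the sketch below shows, in highly simplified form, how natural language processing features and a machine learning classifier might be combined to flag suicide-related language in online posts. It is not the pipeline used in any of the studies reviewed; the example posts, labels and model choice (TF-IDF features with logistic regression via scikit-learn) are illustrative assumptions only, and any real system would require clinical validation and human review.

# A minimal, hypothetical sketch of NLP-based risk flagging (Python / scikit-learn).
# The posts and labels below are invented placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = post contains suicidal themes, 0 = it does not.
posts = [
    "I can't see any way out of this anymore",
    "Had a great day at the beach with friends",
    "I keep thinking about ending it all",
    "Looking forward to the concert next week",
]
labels = [1, 0, 1, 0]

# TF-IDF converts free text into word-frequency features; the classifier then
# learns which language patterns are associated with the at-risk label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new, unseen post. The probability is only a rough signal of severity
# that would still need to be reviewed by a trained counsellor.
new_post = ["I don't want to be here tomorrow"]
print(model.predict_proba(new_post)[0][1])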

Considerations when analysing social media content

Of course, one of the challenges with what people post on social media is that it is difficult, if not impossible, to determine what is truthful. There is also no easy way to verify what is said on social media against relevant medical and other records. Some studies in this area have achieved a ‘gold standard’, where the medical professionals involved have unanimously agreed that the individual concerned is at risk of suicidal behaviour.

Text-based communication tools

Crisis Text Line is a nonprofit text message support service operating in America, Canada, Ireland and South Africa. Machine learning algorithms are used to help researchers and counsellors determine when a reference to suicide is a real indication of risk, rather than a joke or other expression of emotion. More than 54 million messages have been analysed, enabling counsellors to determine, typically within three messages, whether they should alert emergency services. The analysis has shown that users who mention “ibuprofen” or “Advil” are 14 times more likely to need emergency assistance than a person using the word “suicide.” Similarly, a person using the crying face emoji is 11 times more likely to need emergency services than a person using the word “suicide.”
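
The figures above are relative risks: how much more often conversations containing a given term lead to an emergency-services alert, compared with conversations containing a baseline term (here, the word “suicide”). The short sketch below illustrates the arithmetic with invented counts; it is not drawn from Crisis Text Line's data.

# A minimal sketch of the relative-risk arithmetic behind figures such as
# "14 times more likely". All counts below are invented for illustration.

def relative_risk(escalated_with_term, total_with_term,
                  escalated_with_baseline, total_with_baseline):
    """Escalation rate for a term, divided by the rate for a baseline term."""
    rate_term = escalated_with_term / total_with_term
    rate_baseline = escalated_with_baseline / total_with_baseline
    return rate_term / rate_baseline

# Hypothetical counts: conversations mentioning "ibuprofen" versus "suicide",
# and how many of each led to an emergency-services alert.
print(relative_risk(140, 1000, 10, 1000))  # prints 14.0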

Other examples

  • Radar is an app that can alert users when a friend has exhibited signs of suicide risk on Twitter
  • The Trevor Project incorporates machine learning into text-based counselling so that counsellors can more quickly determine the risk profile of the LGBTIQ young people who use its services

Data concerns and privacy when using non-medical data

There is also the issue of patient consent to having their social media information associated with their medical records. People are becoming increasingly accustomed to having data shared through applications, particularly Facebook. When 2717 people attending a hospital emergency department were asked whether they would share their social media details for linkage to their health records, 37% said yes. More work needs to be done on the validity and ethics of using social media tools in this particular health context.

Extending the reach

Population-wide suicide prediction is likely to offer ethical and useful applications of AI in a community-wide context, informing resources, policy and medical services. The Canadian government has recently launched a project to gather data to identify suicide hotspots and inform the allocation of resources to high-need areas.

Here in Australia, in May 2019 the research centre Turning Point, in conjunction with Monash University and Eastern Health, was awarded $1.21 million from Google to establish a world-first suicide surveillance system. The system will use AI techniques to code suicide-related ambulance data to help identify trends and hotspots, and the government intends to use this information to help inform intervention strategies and public health policy. As part of this public-private collaboration, Google provided funding, along with coaching and support from its team of AI experts.

Part 3 – AI use in relation to specific concerns
