Can AI predict suicide?
Artificial intelligence (AI) and machine learning are increasingly being used in healthcare, raising hopes that technology could help identify people at risk of suicide and self-harm earlier than ever before. However, a new study suggests that machine learning tools are not yet reliable enough at predicting suicidal behaviour to be useful in everyday care: the hope that AI could help doctors spot people at high risk and intervene earlier has, so far, not been fulfilled.
For decades, clinicians have used various risk assessment tools to try to predict suicide or self-harm, but these have generally struggled to do so accurately. Machine learning was expected to improve this by analysing large amounts of health information and spotting hidden patterns. However, the latest findings show that these newer approaches perform no better than traditional methods.
In simple terms, the algorithms are much better at identifying people who are unlikely to harm themselves than those who actually will. Many individuals who later went on to self-harm or die by suicide were incorrectly labelled as “low risk”. At the same time, a large number of people flagged as “high risk” never went on to harm themselves, meaning the tools also generate many false alarms.
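To see why both problems can occur at once, it helps to remember that suicide and self-harm are statistically rare in any screened population, so even a classifier with seemingly reasonable accuracy produces far more false alarms than true alerts. The sketch below uses purely illustrative numbers (a hypothetical population, event rate, sensitivity and specificity, not figures from the study) to show how a tool can reassure correctly in most "low risk" cases while still missing many true cases and flooding services with false positives.

```python
# Illustrative sketch only: hypothetical numbers, not results from the study.
# Shows how a rare outcome makes "high risk" flags unreliable even when the
# classifier's headline sensitivity and specificity look acceptable.

population = 100_000
prevalence = 0.005        # assumed: 0.5% of those screened later self-harm
sensitivity = 0.50        # assumed: half of true cases are flagged "high risk"
specificity = 0.90        # assumed: 90% of non-cases are labelled "low risk"

cases = population * prevalence              # 500 people who go on to self-harm
non_cases = population - cases               # 99,500 who do not

true_positives = cases * sensitivity                 # correctly flagged: 250
false_negatives = cases - true_positives             # missed, labelled "low risk": 250
false_positives = non_cases * (1 - specificity)      # false alarms: 9,950
true_negatives = non_cases - false_positives         # correctly reassured: 89,550

# Positive predictive value: how often a "high risk" flag is actually right.
ppv = true_positives / (true_positives + false_positives)
# Negative predictive value: how often a "low risk" label is actually right.
npv = true_negatives / (true_negatives + false_negatives)

print(f"Missed cases (false negatives): {false_negatives:.0f}")
print(f"False alarms (false positives): {false_positives:.0f}")
print(f"Positive predictive value: {ppv:.1%}")   # roughly 2.5%
print(f"Negative predictive value: {npv:.1%}")   # roughly 99.7%
```

Under these assumed numbers, "low risk" labels are right more than 99% of the time, yet only about one in forty "high risk" flags corresponds to someone who actually goes on to self-harm, and half of the true cases are missed altogether, which mirrors the pattern the researchers describe.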
The researchers conclude that relying on these predictions could be misleading and potentially harmful if they are used to decide who receives care or support. Current clinical guidelines already advise against using risk prediction alone to guide treatment, and this study reinforces that advice.
Source: PLOS Medicine