A participatory initiative to include LGBT+ voices in AI for mental health

Andrey Kormilitzin, Nenad Tomasev, Kevin R. McKee, & Dan W. Joyce

Excerpt

Artificial intelligence (AI) can help clinicians improve healthcare decision-making by integrating data from high-volume, heterogeneous electronic health records (EHRs). However, there is growing evidence that AI solutions in healthcare carry considerable risk of harm for people belonging to racial, ethnic, sexual and gender minority communities, harms that can exacerbate existing inequalities.

The safety and fairness of AI algorithms for LGBT+ communities have so far been largely unaddressed and under-discussed. These considerations are especially important when developing tools to support the delivery of mental healthcare, given that mental health difficulties such as self-harm and suicidal distress are more prevalent among LGBT+ people, in part because of stressors including victimization and isolation.

Tools to support clinical decisions require high-fidelity, granular and representative patient data. Many LGBT+ organizations endorse the collection of data on sexual orientation and gender identity to advance population health. Nonetheless, collecting and using such data raises privacy concerns, especially when the data include sensitive and sometimes stigmatized characteristics. EHR systems should be adapted both to respect privacy and to reflect the diverse ways in which LGBT+ people express their identities.
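
As a purely illustrative sketch (not taken from the article), the short Python fragment below shows one way an EHR schema could treat sexual orientation and gender identity fields as optional and self-described, with research use gated behind an explicit opt-in. All names here (SOGIRecord, research_view and the individual fields) are hypothetical, chosen only to make the design point concrete.

# Hypothetical sketch: self-reported SOGI fields in an EHR record.
# Every field is optional, free-text self-description is allowed alongside
# coded categories, and secondary (research) use requires an explicit opt-in.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SOGIRecord:
    """Self-reported SOGI data; all fields are optional and patient-editable."""
    sexual_orientation: Optional[str] = None    # coded value, e.g. "bisexual"
    gender_identity: Optional[str] = None       # coded value, e.g. "non-binary"
    self_description: Optional[str] = None      # free text in the patient's own words
    pronouns: Optional[str] = None
    consent_to_share_for_research: bool = False # explicit opt-in, off by default


def research_view(record: SOGIRecord) -> Optional[SOGIRecord]:
    """Return SOGI data only when the patient has opted in to research use."""
    return record if record.consent_to_share_for_research else None


# Example: a patient describes their identity in their own words and has not opted in.
patient = SOGIRecord(gender_identity="non-binary",
                     self_description="genderfluid, uses they/them",
                     pronouns="they/them")
assert research_view(patient) is None  # withheld without explicit consent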

Venue

Nature Medicine

Year

2023

Links

Nature