Mitigating the Effects of Gender Bias in Clinical Trials
The last step in developing a drug or medical treatment is the clinical trial, which confirms the treatment’s safety and efficacy in humans. Though long and expensive, a successful clinical trial is essential for approval by the FDA and other regulatory authorities. Yet clinical trials have long underrepresented women, leading to misdiagnosed diseases and adverse drug reactions.
A new article in the Journal of the American Medical Informatics Association (JAMIA) by researchers at the Henry and Marilyn Taub Faculty of Computer Science at the Technion – Israel Institute of Technology, in collaboration with Dr. Eric Horvitz of Microsoft Research, unveils a special tool that can help compensate for this gender gap, thereby improving medical treatments for women.
Gender bias in clinical trials is not new, and it has in fact worsened following traumatic events, including the Thalidomide affair, in which a drug prescribed to pregnant women to alleviate morning sickness caused numerous birth defects. That tragic episode, which took place in the early 1960s, led to a drastic decline in the number of female participants in clinical trials.
In 1993, laws were passed in the United States that mandated the inclusion of women in these trials and the analysis of results by gender. Yet underrepresentation remains a persistent problem. It’s not just women: certain age groups, ethnic groups, and other demographics are often underrepresented in clinical trials, and in some cases men are underrepresented as well, for example in trials of diseases considered more “feminine,” such as fibromyalgia.
In recent years, machine learning models have been introduced to the world of medicine, aiming to improve medical diagnosis, treatment, and prevention. However, Technion Ph.D. candidate Shunit Agmon, who conducted the research together with Technion alumna and visiting professor Dr. Kira Radinsky, explains that “many of these models are based on biased trials and therefore they ‘inherit’ their biases, and in some cases even amplify them.”
The researchers explored this issue using machine learning tools, including natural language processing (NLP) and vector representations of words (word embeddings), approaches that enable computers to “understand” texts. They applied these methods to 16,772 articles from the PubMed database and assigned each one a “weight” based on the percentage of women in the clinical trial it describes. In this way, they developed an algorithmic tool that enables gender-sensitive use of the clinical literature, correcting for gender bias and improving the suitability of treatments for female patients.
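The paper’s exact weighting scheme and training procedure are not spelled out here; the following is a minimal sketch, assuming a simple oversampling trick on top of gensim’s Word2Vec, of how abstracts might be re-weighted by the share of female trial participants before training word embeddings. The field names, the weighting rule, and the choice of Word2Vec are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' actual pipeline): re-weight PubMed
# abstracts by the fraction of female participants in the trial each one
# describes, then train word embeddings on the re-weighted corpus.
from gensim.models import Word2Vec

# Hypothetical corpus: each record holds a tokenized abstract and the
# share of women among the trial's participants (0.0 - 1.0).
articles = [
    {"tokens": ["aspirin", "reduced", "stroke", "risk", "in", "patients"],
     "female_fraction": 0.22},
    {"tokens": ["fibromyalgia", "symptoms", "improved", "after", "treatment"],
     "female_fraction": 0.81},
    # ... roughly 16,772 abstracts in the real study
]

def weight(female_fraction: float) -> float:
    """Toy weighting rule: up-weight articles whose trials enrolled more women.
    This is an assumption; the paper's scheme may differ."""
    return 1.0 + female_fraction

# Approximate per-document weights by repeating documents proportionally,
# since Word2Vec has no native per-sentence weight parameter.
weighted_corpus = []
for art in articles:
    repeats = max(1, round(weight(art["female_fraction"]) * 2))
    weighted_corpus.extend([art["tokens"]] * repeats)

# Train embeddings on the re-weighted corpus.
model = Word2Vec(sentences=weighted_corpus, vector_size=100, window=5,
                 min_count=1, workers=4, epochs=10)

print(model.wv.most_similar("fibromyalgia", topn=3))
```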
The algorithm substantially improved predictions for women in various settings, including length of hospitalization, re-hospitalization within a month, and correlations between diseases. Although the model was designed to improve predictions for women, it also significantly improved overall clinical predictions for men.
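To make the downstream use concrete, here is a self-contained sketch, under stated assumptions, of how such embeddings could feed a clinical prediction task like 30-day readmission. The random vectors stand in for embeddings trained as in the previous sketch, and the averaged-embedding features and logistic-regression classifier are illustrative choices, not the paper’s actual prediction models.

```python
# Illustrative sketch: using word embeddings as patient features for a
# hypothetical 30-day readmission classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
DIM = 100

# Stand-in embedding table; in practice these would be the trained vectors
# from the re-weighted PubMed corpus.
vocab = ["aspirin", "stroke", "fibromyalgia", "treatment", "hypertension"]
embeddings = {term: rng.normal(size=DIM) for term in vocab}

def embed_record(terms):
    """Average the embedding vectors of a patient's recorded clinical terms."""
    vecs = [embeddings[t] for t in terms if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

# Hypothetical admissions: clinical terms plus a 30-day readmission label.
admissions = [
    (["aspirin", "stroke"], 1),
    (["fibromyalgia", "treatment"], 0),
    (["hypertension", "stroke"], 1),
    (["treatment", "aspirin"], 0),
]

X = np.stack([embed_record(terms) for terms, _ in admissions])
y = np.array([label for _, label in admissions])

# Simple readmission classifier on top of the embedding features.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X)[:, 1])  # predicted readmission probabilities
```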
The researchers expect the JAMIA article to increase awareness of the problem of underrepresentation in research in general and in clinical trials in particular, and to lead to additional solutions that improve the quality of personalized medicine.