Abstract
Radiodiagnostics by machine-learning (ML) systems is often perceived as objective and fair. It may, however, exhibit bias against certain patient sub-groups. Typical reasons for this are the selection of disease features that ML systems screen for, the fact that ML systems learn from human clinical judgements, which are often biased, and the tendency to inappropriately conceptualize fairness in ML as “equality”. ML systems built on such parameters fail to accurately diagnose and address patients’ actual health needs, which depend on patients’ social identities (i.e. intersectionality) and broader social conditions (i.e. embeddedness). This paper explores the ethical obligations to ensure the fairness of ML systems precisely in light of patients’ intersectionality and the social embeddedness of their health, and it proposes a set of interventions to tackle these issues. It recommends a paradigm shift in the development of ML systems that enables them to screen for both endogenous disease causes and the health effects of patients’ relevant underlying (e.g. socioeconomic) circumstances. The paper further proposes a framework of ethical requirements for instituting this shift and ensuring fairness. The requirements center patients’ intersectionality and the social embeddedness of their health, most notably through (i) integrating into ML systems adequate, measurable medical indicators of the health impact of patients’ circumstances; (ii) training on ethically sourced, diverse, representative and correct patient data concerning relevant disease features and medical indicators; and (iii) iterative, socially sensitive co-exploration and co-design of datasets and ML systems involving all relevant stakeholders.