Abstract for: Taking a step back: A systemic look at bias in machine learning

Existing methods to identify bias in Machine Learning (ML) neglect to examine the subjectivities of the data scientists and the wider social context within which the problem is located. We argue that this omission leads to bias being inadvertently introduced into datasets, and we seek to provide a systemic means of addressing this gap in the literature. A mixed-methods field study was carried out within a grocery store chain in the United States to test the proposed approach. Initial results of the study demonstrate that enabling stakeholders involved in ML design to learn about their subjective frames of reference and the wider social environment within which the problem was located helped them become more aware of, first, how their particular biases could inadvertently become embedded in the training/test datasets and, second, how societal biases in the environment could become reinforced in those datasets. The initial results also show that using systems tools during the ML process can help produce datasets that are less biased. Analyzing and documenting the subjective frames of reference of data scientists can also give other researchers insight into the subjective frames through which the original data scientists prepared their datasets and reached their conclusions.