Threads of Assumption: Gender Bias and AI
Technological development, especially in the form of artificial intelligence (AI), opens new possibilities for service design. However, the AI-reliant systems embedded in the products, services, and technologies we design replicate harmful constructions of race and gender inherent in the datasets used to train them (Crawford, 2021). While most AI researchers treat datasets as "operational" relative to the "lionized work of building novel models" (Sambasivan et al., 2021), research artist Hannah Davis advocates for "ideal or experimental worldviews," creating new datasets "with data and labels that model an ideal or healthier society, not just mirror how it is today" (Davis, 2020).

In response, Threads of Assumption, a multi-year interactive art project, collects and codes stories of gender bias. For our initial dataset, we gathered intimate conversations through a digital platform, but traditional Natural Language Processing (NLP) and sentiment analysis could not evaluate those stories for gender bias (the sketch below illustrates why). Starting in 2021, we therefore developed a workshop format for collecting qualitative data, centered on reflection and storytelling, to build a training dataset for AI models that identify gender bias.

Drawing on our expertise as artists and designers, we cultivate a meditative space for contributing sensitive stories. Participants experience how hands-on activities, such as writing and weaving, can personalize abstract and sensitive topics and make complexity tangible and accessible. They leave with a blueprint of best practices for facilitating a qualitative data collection workshop, tools for integrating physical making into workshop design, and an expanded understanding of how gender bias affects them, their communities, and the service design process.
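To make that limitation concrete, the sketch below runs an off-the-shelf sentiment analyzer (NLTK's VADER) over invented statements of the kind participants might share. This is a hypothetical illustration, not the project's actual pipeline: it shows that polarity scores measure positive or negative tone, which is orthogonal to whether a statement encodes gender bias.

```python
# Minimal sketch (assumed example, not the project's pipeline) of why generic
# sentiment analysis is a poor proxy for gender bias: it scores emotional tone,
# not social meaning.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Invented, illustrative statements -- not quotes from project participants.
stories = [
    "My manager said I was surprisingly good at math for a woman.",
    "They asked who would watch my kids if I took the promotion.",
]

for story in stories:
    scores = sia.polarity_scores(story)
    # Compound scores near zero (or even positive) are typical for statements
    # like these, even though both encode gender bias -- hence the need for a
    # purpose-built, human-labeled dataset rather than generic sentiment labels.
    print(f"{scores['compound']:+.2f}  {story}")
```

This gap is why the project builds its own coded dataset: bias lives in the social meaning of a story, not in its emotional valence, so the labels must come from human reflection rather than off-the-shelf sentiment models.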
By
Estefania Ciliotta
CfD Team Members
Affiliate Members
U-Meleni Adebo, Maria Finkelmeir, Sofie Hodara, and Martha Rettig
Research Areas
Design + AI