
LUCID Framework | From User Reviews to User-Centered Generative Design

From User Reviews to User-Centered Generative Design: Automated Methods for Augmented Designer Performance + ADA Tech (Impact Engine)

This project imagines user-centered design processes in which the latent needs of myriad users are automatically elicited from social media, forums, and online reviews, and translated into new concept recommendations for designers. It will advance the fundamental understanding of whether AI can augment the performance of designers in early-stage product development by investigating two questions: (1) Can we build and validate novel natural language processing (NLP) algorithms for large-scale elicitation of latent user needs, with cross-domain transferability and minimal need for manually labeled data? (2) Can we build and validate novel deep generative design algorithms that capture the visual and functional aspects of past successful designs and automatically translate them into new design concepts? The convergence research team is well positioned to undertake these questions, with expertise across the four disciplines of engineering, computer science, business, and design.

View the project here.

CfD Team Members: Paolo Ciuccarelli, Estefania Ciliotta

Other Team Members: Lu Wang, Tucker Marion, Mohsen Moghaddam

PhD Students: Ryan Bruggeman, Yi Han, Parisa Ghasemi

Summer Intern: Purti Hardikar

Partners: Northeastern College of Engineering, D’Amore-McKim School of Business at Northeastern, University of Michigan

Threads of Assumption: Gender Bias and AI

Threads of Assumption is a multi-year interactive art project that collects and codes stories of gender bias. The initial dataset collected intimate conversations using a digital platform. However, traditional Natural Language Processing (NLP) and sentiment analysis could not evaluate gender bias. Starting in 2021, the team developed a workshop format for collecting qualitative data, centering on reflection and storytelling, in order to build a training dataset for AI models to identify gender bias. The team relied on their expertise as artists and designers to cultivate a meditative space for contributing sensitive stories. In the workshop, participants experience how hands-on activities, like writing and weaving, can personalize abstract and sensitive topics. Participants leave with a blueprint of best practices for facilitating a workshop for qualitative data collection, tools for integrating physical making into workshop design, and an expanded understanding of how gender bias affects them, their communities, and the service design process.

View the project here.

CfD Members: Estefania Ciliotta

Other Members: Sofie Hodara, Martha Rettig, Maria Finkelmeier, U-Meleni Mhlaba-Adebo

Presented at ServDes 2023.
