
Building a Visual Metaphor Machine

By Pedro Cruz, John Wihbey, April Qian

In 2024, using text-to-text, text-to-image, or even image-to-image AI generators is nothing new. Most of these tools, however, offer only a superficial understanding of the context they are given. What if we could take a raw dataset and harness AI to generate a portrait of the data in an automated and systematic way? Our engine aims to do just that by mapping data attributes to visual properties such as color, shape, and motion. The resulting visual output will be rich in narrative and emotion, and both computationally and visually sound.
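As a rough illustration of the core idea, here is a minimal sketch of an attribute-to-visual mapping; the field names (category_id, volume, rate), the bounds, and the Glyph structure are hypothetical placeholders of our own, not drawn from the project itself:

```python
from dataclasses import dataclass

@dataclass
class Glyph:
    hue: float     # color, driven by a categorical attribute
    radius: float  # shape/size, driven by a magnitude attribute
    speed: float   # motion, driven by a rate-of-change attribute

def normalize(value, lo, hi):
    """Clamp and rescale a raw attribute value into [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def map_record(record, bounds):
    """Map one data record's attributes onto visual properties."""
    return Glyph(
        hue=normalize(record["category_id"], *bounds["category_id"]),
        radius=5 + 20 * normalize(record["volume"], *bounds["volume"]),
        speed=normalize(record["rate"], *bounds["rate"]),
    )

# Hypothetical usage with made-up bounds and a single record.
bounds = {"category_id": (0, 9), "volume": (0, 1000), "rate": (0, 50)}
print(map_record({"category_id": 3, "volume": 420, "rate": 12}, bounds))
```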

In this project, we build on previous work such as Charticulator (Microsoft) and browser-based research studies to develop a two-part visual engine. The first engine generates a data stream that captures aggregated, ecosystem-level behavior from the dataset. The second engine consumes this stream to model individual behavior, introducing the stochastic properties inherent in real-world scenarios.
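A minimal sketch of that two-stage pipeline, under stated assumptions: the activity field, the window size, and the Gaussian jitter model are our own illustrative choices, not the project's actual design:

```python
import random
import statistics

def ecosystem_stream(records, window=10):
    """Stage 1: emit one aggregate value (mean activity) per window,
    summarizing ecosystem-level behavior over time."""
    for i in range(0, len(records), window):
        chunk = records[i:i + window]
        yield statistics.fmean(r["activity"] for r in chunk)

def individual_agents(stream, n_agents=5, jitter=0.1, seed=42):
    """Stage 2: derive per-agent values from each aggregate level,
    adding Gaussian noise to mimic real-world stochastic variation."""
    rng = random.Random(seed)
    for level in stream:
        yield [level + rng.gauss(0.0, jitter) for _ in range(n_agents)]

# Hypothetical usage: 100 synthetic records drive a frame-by-frame output.
records = [{"activity": 0.5 + 0.01 * i} for i in range(100)]
for frame in individual_agents(ecosystem_stream(records)):
    print(frame)  # each frame is one list of per-agent values
```

The design point is the separation of concerns: the aggregate stream stays deterministic and reproducible, while all randomness is confined to the second stage, where it stands in for individual variation.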

The Visual Metaphor Machine project aims to redefine the boundaries of data exploration and visualization. As a proof of concept, we will visualize a vast dataset drawn from browser and mobile studies, showcasing both an ecosystem-level visualization of online behavior in an adaptive multi-agent system and aggregated individual user profiles portrayed through movement-based visual outputs. This exercise will help us fine-tune our visual language, train our visual metaphor engine, and unlock new insights and narratives hidden within data streams.
