
Visualizationary: Automating Design Feedback for Visualization Designers Using LLMs

Sungbok Shin

Sanghyun Hong

Niklas Elmqvist

Room: Hall E1

Keywords

Data visualization, Visualization, Computational modeling, Training, Measurement, Filters, Predictive models, Image color analysis, Translation, Large language models

Abstract

Interactive visualization editors empower users to author visualizations without writing code, but they do not provide guidance on the art and craft of effective visual communication. In this article, we explore the potential of using an off-the-shelf large language model (LLM) to provide actionable and customized feedback to visualization designers. Our implementation, Visualizationary, demonstrates how ChatGPT can be used for this purpose through two key components: a preamble of visualization design guidelines and a suite of perceptual filters that extract salient metrics from a visualization image. We present findings from a longitudinal user study involving 13 visualization designers (6 novices, 4 intermediates, and 3 experts) who authored a new visualization from scratch over several days. Our results indicate that providing guidance in natural language via an LLM can aid even seasoned designers in refining their visualizations.
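
The sketch below illustrates the two-component idea described in the abstract, not the authors' actual implementation: a guideline preamble plus image metrics from simple perceptual filters are combined into a feedback prompt for an LLM. The guideline text, the specific metrics, the model name, and the function names are illustrative assumptions.

```python
# Minimal sketch of the architecture described in the abstract (assumptions labeled).
import numpy as np
from PIL import Image
from openai import OpenAI

# Placeholder guidelines; the paper's actual preamble is not reproduced here.
GUIDELINE_PREAMBLE = (
    "You are a visualization design assistant. Apply established guidelines: "
    "maximize the data-ink ratio, use colorblind-safe palettes, and label axes clearly."
)

def perceptual_metrics(path: str) -> dict:
    """Compute simple stand-in metrics from the visualization image."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    gray = img.mean(axis=2)
    # Edge density as a rough proxy for visual clutter.
    edges = np.abs(np.diff(gray, axis=0)).mean() + np.abs(np.diff(gray, axis=1)).mean()
    # Count of distinct quantized colors as a rough proxy for palette complexity.
    colors = len(np.unique((img * 7).round().reshape(-1, 3), axis=0))
    return {"edge_density": round(float(edges), 4), "distinct_colors": int(colors)}

def design_feedback(image_path: str) -> str:
    """Ask the LLM for actionable feedback grounded in the extracted metrics."""
    metrics = perceptual_metrics(image_path)
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": GUIDELINE_PREAMBLE},
            {"role": "user", "content": (
                f"Image metrics: {metrics}. "
                "Give three concrete suggestions to improve this visualization."
            )},
        ],
    )
    return response.choices[0].message.content

# Example usage: print(design_feedback("my_chart.png"))
```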