Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Qing Chen, Ying Chen, Ruishi Zou, Wei Shuai, Yi Guo, Jiazhe Wang, Nan Cao

Room: Bayshore II

Time: 2024-10-17T12:54:00Z
Exemplar figure: To capture the information of a single visualization, we designed the Chart2Vec model. The input embedding module transforms the raw data into a vector format containing both fact schema and fact semantics; the encoder module then applies feature pooling and feature fusion to produce the final vector representation.
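The pipeline described in the caption, token-level input embedding followed by pooling and fusion, can be sketched in a few lines of PyTorch. Everything below (module names, dimensions, mean-pooling, the MLP fusion head) is an illustrative assumption, not the paper's exact architecture.

```python
# A minimal sketch of a two-stage chart encoder: separate embeddings for
# structural tokens (fact schema) and natural-language tokens (fact
# semantics), mean-pooled and fused into one vector. All names and sizes
# are hypothetical.
import torch
import torch.nn as nn

class Chart2VecSketch(nn.Module):
    def __init__(self, schema_vocab=128, word_vocab=10000, dim=128):
        super().__init__()
        self.schema_emb = nn.Embedding(schema_vocab, dim)  # fact schema tokens
        self.word_emb = nn.Embedding(word_vocab, dim)      # fact semantics tokens
        # Feature fusion: concatenate pooled features, project with a small MLP.
        self.fusion = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, schema_ids, word_ids):
        # Feature pooling: mean-pool variable-length token sequences
        # into fixed-size vectors.
        schema_vec = self.schema_emb(schema_ids).mean(dim=1)
        word_vec = self.word_emb(word_ids).mean(dim=1)
        return self.fusion(torch.cat([schema_vec, word_vec], dim=-1))

model = Chart2VecSketch()
z = model(torch.randint(0, 128, (4, 12)), torch.randint(0, 10000, (4, 30)))
print(z.shape)  # torch.Size([4, 128]) -- one embedding per chart
```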
Keywords

Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding

Abstract

The advances in AI-enabled techniques have accelerated the creation and automation of visualizations over the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address these issues, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the co-occurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.
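As a rough illustration of the multi-task setup mentioned in the abstract, the sketch below combines an unsupervised triplet loss that pulls co-occurring charts together in embedding space with a supervised co-occurrence classification loss. The specific loss functions, weights, and margin are assumptions for illustration; the paper defines its own training tasks.

```python
# Hedged sketch of a multi-task objective over chart embeddings.
# alpha/beta weighting and the triplet margin are illustrative choices.
import torch
import torch.nn.functional as F

def multitask_loss(anchor, positive, negative,
                   cooccur_logits, cooccur_labels,
                   alpha=1.0, beta=1.0):
    # Unsupervised task: a chart from the same dashboard/story (positive)
    # should embed closer to the anchor than an unrelated chart (negative).
    l_triplet = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)
    # Supervised task: binary prediction of whether a chart pair co-occurs,
    # given logits from some pairwise head (not shown); labels are floats.
    l_cooccur = F.binary_cross_entropy_with_logits(cooccur_logits,
                                                   cooccur_labels)
    return alpha * l_triplet + beta * l_cooccur

# Example with random tensors: batch of 4 embeddings of dimension 128.
a, p, n = (torch.randn(4, 128) for _ in range(3))
loss = multitask_loss(a, p, n, torch.randn(4), torch.ones(4))
```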