DCINR: a Divide-and-Conquer Implicit Neural Representation for Compressing Time-Varying Volumetric Data in Hours
 Jun Han
 Fan Yang

 DOI: 10.1109/TVCG.2025.3564255
Room: Hall M1
Keywords
Spatiotemporal phenomena, Training, Data models, Optimization, Data compression, Image coding, Data visualization, Partitioning algorithms, Graphics processing units, Entropy
Abstract
Implicit neural representation (INR) has become a powerful paradigm for effectively compressing time-varying volumetric data. However, the optimization process can span days or even weeks due to its reliance on coordinate-based inputs and outputs for modeling volumetric data. To address this issue, we introduce a divide-and-conquer INR (DCINR) that shortens the compression of time-varying volumetric data to hours. Our approach starts by dividing the data set into a set of non-overlapping blocks, whose size is determined by maximizing the average network capacity. A block selection strategy then weeds out redundant blocks, reducing computation cost without sacrificing performance. Each selected block is modeled in parallel by a tiny INR, with the size of the INR adapted to the information richness of the block. After optimization, the optimized INRs are used to decompress the data set. Evaluated across various time-varying volumetric data sets, DCINR surpasses learning-based and lossy compression approaches in compression ratio, visual fidelity, and other performance metrics. Additionally, it operates within a compression time comparable to that of lossy compressors, achieves extreme compression ratios ranging from thousands to tens of thousands, and preserves features with high quality.
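The partition-and-select stage described in the abstract can be sketched as follows. This is a hypothetical toy illustration, not the authors' implementation: the volume is split into non-overlapping blocks, and a simple per-block standard-deviation threshold stands in for the paper's block selection strategy (the names `partition_blocks`, `select_blocks`, and `min_std` are assumptions for illustration).

```python
# Toy sketch of a divide-and-conquer partition stage (hypothetical, not the
# DCINR authors' code). A flat list stores an X*Y*Z scalar volume; blocks with
# near-zero value spread are treated as redundant and skipped, so that only
# informative blocks would each receive their own tiny INR.
import math


def partition_blocks(volume, dims, block):
    """Yield (origin, values) for each non-overlapping block of edge `block`."""
    X, Y, Z = dims
    for x0 in range(0, X, block):
        for y0 in range(0, Y, block):
            for z0 in range(0, Z, block):
                vals = [volume[(x * Y + y) * Z + z]
                        for x in range(x0, min(x0 + block, X))
                        for y in range(y0, min(y0 + block, Y))
                        for z in range(z0, min(z0 + block, Z))]
                yield (x0, y0, z0), vals


def select_blocks(volume, dims, block, min_std=1e-3):
    """Keep origins of blocks whose value spread exceeds `min_std`."""
    selected = []
    for origin, vals in partition_blocks(volume, dims, block):
        mean = sum(vals) / len(vals)
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
        if std > min_std:  # stand-in for the paper's selection criterion
            selected.append(origin)
    return selected


# Usage: an 8x8x8 volume that is constant except inside one block; only the
# block containing the varying voxel survives selection.
vol = [0.0] * (8 * 8 * 8)
vol[(5 * 8 + 1) * 8 + 1] = 1.0  # voxel (5, 1, 1) lies in block origin (4, 0, 0)
print(select_blocks(vol, (8, 8, 8), block=4))
```

In the paper's pipeline each selected block would then be fit by a small coordinate network sized to the block's information richness; here the selection step alone shows how redundant blocks avoid any training cost.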