Efficient Out-of-memory Sparse Tensor Decomposition for Massively Parallel Architectures

Date and time: 
Tue, May 31 2022 - 12:00pm
220 Deschutes
Dr. Jee Choi, UO CIS
University of Oregon

This study presents a novel framework for accelerating fundamental tensor decomposition (TD) operations on massively parallel GPU architectures. In contrast to prior work, the proposed Blocked Linearized CoOrdinate (BLCO) format enables efficient out-of-memory computation of tensor algorithms using a unified implementation that operates on a single tensor copy. Our adaptive blocking and linearization strategies not only meet the resource constraints of GPU devices, but also accelerate data indexing, eliminate control-flow and memory-access irregularities, and reduce kernel launch overhead. To address the substantial synchronization cost on GPUs, we introduce an opportunistic conflict resolution algorithm that discovers and resolves conflicting updates across threads on the fly, without keeping any auxiliary information or storing non-zero elements in specific mode orientations.
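The linearization idea above can be illustrated with a minimal sketch: each nonzero's multi-mode coordinate is packed into a single integer key, so indexing touches one value instead of one index per mode. This is a hypothetical Python illustration only; the function names (`linearize`, `delinearize`) and the simple bit-field layout are assumptions for exposition, not the actual BLCO implementation.

```python
# Hypothetical illustration of coordinate linearization (not the BLCO code):
# pack each mode index of a nonzero into bit fields of one integer key,
# so a 3-mode coordinate (i, j, k) becomes a single linearized index.

def mode_bits(dims):
    """Bits needed to represent each mode's index range."""
    return [max(1, (d - 1).bit_length()) for d in dims]

def linearize(coord, dims):
    """Pack a multi-mode coordinate into one integer key."""
    key = 0
    for idx, b in zip(coord, mode_bits(dims)):
        key = (key << b) | idx
    return key

def delinearize(key, dims):
    """Recover the original coordinate from a linearized key."""
    coord = []
    for b in reversed(mode_bits(dims)):
        coord.append(key & ((1 << b) - 1))
        key >>= b
    return tuple(reversed(coord))

# Round-trip check on a toy 3-mode tensor shape.
dims = (1000, 500, 200)
coord = (123, 45, 67)
assert delinearize(linearize(coord, dims), dims) == coord
```

On a real GPU implementation the key would be a fixed-width (e.g. 64-bit) integer, and the blocking strategy would split tensors whose packed coordinates exceed that width; the sketch above only shows the packing arithmetic itself.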

As a result, our framework delivers superior in-memory performance compared to the prior state of the art, and is the only framework capable of processing out-of-memory tensors. On the latest Intel and NVIDIA GPUs, BLCO achieves a 2.12–2.6× geometric-mean speedup (up to 33.35×) over the state of the art on a range of real-world sparse tensors.


I am an Assistant Professor in the Department of Computer and Information Science at the University of Oregon. During my PhD, I worked on designing parallel and scalable algorithms for scientific applications, and modeling their performance and energy efficiency on the latest high-performance computing (HPC) systems. After graduation, I worked as a research staff member at the IBM T. J. Watson Research Center on designing and optimizing tensor decomposition algorithms for Big Data analytics. My current research focuses on developing performance-portable parallel algorithms and data structures for streaming data analysis on shared- and distributed-memory systems and accelerators.