Future is Sparse: Methods and Tools for Sparse Computations

This workshop will be co-located with Supercomputing’23 in Denver, Colorado.

Abstract

Many real-world computations involve sparse data structures in the form of sparse matrices, graphs, or sparse tensors. In the computational sciences, sparse matrices are commonly used for numerically solving partial differential equations. Likewise, many approaches to deep learning on graph representations, notably graph neural networks (GNNs), have been proposed. More general, multi-dimensional sparse tensors are currently at the heart of data-driven fields such as AI and deep learning. Achieving high performance on such sparse data structures is well known to be challenging and remains an open research problem. This workshop will gather a group of experts researching various aspects of this topic and aims to present attendees with an overview of state-of-the-art research activities. More importantly, the workshop will provide a forum for interaction among SC participants, so that new ideas can be generated to push the state of the art forward.

Workshop Scope

Sparse computations that involve sparse matrices, graphs, or tensors are characterized by a low ratio between the volume of computational work executed and the memory traffic incurred. As memory performance increasingly lags behind compute performance on modern computing platforms, it is increasingly challenging to achieve high performance for sparse computations. For example, even the fastest supercomputers on the Top500 list achieve 3% or less of their peak performance on the conjugate gradient solver, a well-known sparse-computation kernel. The challenge is even greater for real-world problems where the data access patterns are irregular. Examples include sparse matrices that arise from discretizing partial differential equations over unstructured computational meshes, sparse graphs representing unevenly connected entities, and sparse tensors arising from missing information in high-dimensional data.
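
To make the low compute-to-memory ratio concrete, consider sparse matrix-vector multiplication (SpMV) over the common compressed sparse row (CSR) format. The minimal sketch below (plain C, with illustrative names rather than any particular library's API) performs about two floating-point operations per stored nonzero while loading from three arrays, one of them through an irregular index, which is why the kernel is bound by memory bandwidth rather than arithmetic throughput.

    /* Minimal CSR SpMV sketch: y = A * x.
       row_ptr, col_idx, and vals are the standard CSR arrays;
       all names are illustrative, not tied to a specific library. */
    void spmv_csr(int n, const int *row_ptr, const int *col_idx,
                  const double *vals, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            /* Roughly 2 flops per nonzero, but three memory reads:
               vals[j], col_idx[j], and an irregular gather from x. */
            for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++)
                sum += vals[j] * x[col_idx[j]];
            y[i] = sum;
        }
    }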

Although there exist methodologies for understanding or improving the performance of some sparse computations, such as insightful architecture/application modeling and matrix reordering, these methods are typically too coarse-grained and often not easily applicable to sparse computations with irregular memory access patterns. Moreover, the availability of heterogeneous systems that include powerful accelerators widens the design space, particularly for this type of computation. Lately, research on optimizing sparse computations and developing sparsity-aware tools for modern HPC hardware architectures has gained new momentum. One example is the ongoing EuroHPC-JU funded project SparCity (https://sparcity.eu/).

Sparsification is also a new trend in large-scale deep learning. The growing cost of model training has led several researchers to explore methods for reducing DNN size by pruning. Sparsification reduces both the memory footprint and the training time. This half-day workshop therefore serves as a discussion forum for the latest research developments and major open problems at the intersection of HPC and AI.
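
As a simple illustration of pruning, the sketch below (plain C; the function name and the threshold-based criterion are illustrative assumptions, not a method prescribed by the workshop) applies magnitude pruning: weights whose absolute value falls below a threshold are zeroed out, after which the weight tensor can be stored and processed in a sparse format.

    #include <math.h>
    #include <stddef.h>

    /* Minimal magnitude-pruning sketch: zero out every weight whose
       absolute value is below `threshold` and report how many were
       pruned. Real pipelines typically prune iteratively and
       fine-tune the remaining weights afterwards. */
    size_t prune_by_magnitude(double *weights, size_t n, double threshold)
    {
        size_t pruned = 0;
        for (size_t i = 0; i < n; i++) {
            if (fabs(weights[i]) < threshold) {
                weights[i] = 0.0;   /* weight removed from the model */
                pruned++;
            }
        }
        return pruned;
    }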

The workshop aims to gather academic researchers, software developers, R&D staff from the HPC industry, and domain scientists who rely on sparse computations in their daily work. The common denominator among presenters and attendees is the wish to obtain high performance for sparse computations. Topics of interest include, but are not restricted to:

  • performance characterization and prediction, including the use of ML
  • optimization strategies covering algorithms, data structures, parallelization
  • automated code generation for general-purpose and specific hardware
  • software libraries and tools for implementing optimization strategies and real-world sparse matrix, graph, and tensor computations
  • sparsification in deep learning training and inference 
  • sparse matrix/graph/tensor datasets and generators 
  • real-world instances of sparse data
  • real-world applications of sparse computations

Expected Outcomes from the Workshop

This workshop thus provides a timely forum, with the following expected outcomes:

  • Exchange of the latest research ideas and results
  • Inspiration for new ideas and strategies
  • Initiation of a dedicated yet open research community for this topic
  • Performance enhancement of sparse computations in real-world applications
  • Development of new implementation strategies that target emerging hardware architectures
  • Collaboration between HPC researchers, deep learning experts and domain scientists who leverage sparse computation in their applications
  • An interactive panel of experts in the field to stimulate lively discussions
  • Knowledge transfer between different fields that utilize sparse data structures
  • Industry involvement in the discussion, providing custom solutions for sparse applications

Format

We plan a half-day workshop with a keynote speaker and invited talks, followed by a panel.
More info coming soon.

Schedule

8:30 am – 9:00 am Welcome & Introduction
9:00 am – 9:40 am Keynote
9:40 am – 10:00 am Invited Talk (1)
10:00 am – 10:30 am Break
10:30 am – 10:50 am Invited Talk (2)
10:50 am – 11:10 am Invited Talk (3)
11:10 am – 11:55 am Panel
11:55 am – 12:00 pm Closing Remarks