Learning sparsity of representations with discrete latent variables

Zhao Xu, Daniel Onoro Rubio, Giuseppe Serra, Mathias Niepert

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Deep latent generative models have attracted increasing attention due to their capacity to combine the strengths of deep learning and probabilistic models in an elegant way. The data representations learned with these models are often continuous and dense. However, in many applications sparse representations are expected, such as learning sparse high-dimensional embeddings of data in an unsupervised setting, or learning multiple labels from thousands of candidate tags in a supervised setting. In some scenarios, there may be a further restriction on the degree of sparsity: the number of non-zero features of a representation cannot be larger than a pre-defined threshold L0. In this paper we propose a sparse deep latent generative model, SDLGM, to explicitly model the degree of sparsity and thus learn the sparse structure of the data under a quantified sparsity constraint. The resulting sparsity of a representation is not fixed, but adapts to the observation itself under the pre-defined restriction. In particular, we introduce, for each observation i, an auxiliary random variable Li, which models the sparsity of its representation. The sparse representations are then generated with a two-step sampling process via two Gumbel-Softmax distributions. For inference and learning, we develop an amortized variational method based on an MC gradient estimator. The resulting sparse representations are differentiable, so the model can be trained with backpropagation. Experimental evaluation on multiple datasets for unsupervised and supervised learning problems shows the benefits of the proposed method.
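The abstract's two-step sampling idea can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, logit parameterization, and the straight-through-style hard degree selection are assumptions filled in around the stated idea (first sample a sparsity degree Li bounded by L0, then sample the non-zero features, each step via a Gumbel-Softmax relaxation). NumPy is used for a self-contained demo, so gradient flow is not shown.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Relaxed one-hot sample from a categorical distribution via the
    Gumbel-Softmax trick: perturb logits with Gumbel noise, then softmax."""
    rng = rng if rng is not None else np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + gumbel) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

def sample_sparse_representation(sparsity_logits, feature_logits, L0,
                                 tau=1.0, rng=None):
    """Two-step sampling sketch: (1) draw a sparsity degree L_i <= L0,
    (2) draw L_i non-zero feature positions; both via Gumbel-Softmax."""
    rng = rng if rng is not None else np.random.default_rng()
    # Step 1: relaxed sample over the allowed degrees {1, ..., L0}.
    degree_probs = gumbel_softmax(sparsity_logits[:L0], tau, rng)
    L_i = int(np.argmax(degree_probs)) + 1  # hard degree (straight-through style)
    # Step 2: accumulate L_i relaxed one-hot draws into a soft sparsity mask.
    mask = np.zeros_like(feature_logits, dtype=float)
    for _ in range(L_i):
        mask += gumbel_softmax(feature_logits, tau, rng)
    return np.clip(mask, 0.0, 1.0)
```

With a temperature tau near zero each draw approaches a hard one-hot vector, so the mask selects at most L_i of the candidate features; in a real model the two logit vectors would be produced by an inference network and trained end-to-end.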
Original language: English
Title of host publication: 2021 International Joint Conference on Neural Networks (IJCNN)
Number of pages: 9
ISBN (Electronic): 9781665439008, 9780738133669
ISBN (Print): 9781665445979 (PoD)
Publication status: Published - 20 Sept 2021
Event: 2021 International Joint Conference on Neural Networks (IJCNN) - Shenzhen, China
Duration: 18 Jul 2021 - 22 Jul 2021

Publication series

Name: International Joint Conference on Neural Networks (IJCNN)
ISSN (Print): 2161-4393
ISSN (Electronic): 2161-4407


Conference: 2021 International Joint Conference on Neural Networks (IJCNN)

Bibliographical note

Funding Information:
The work of NEC Laboratories Europe was partially supported by H2020 MonB5G project (grant agreement no. 871780). The research of G. Serra was supported by H2020 ECOLE project (grant agreement no. 766186).

Publisher Copyright:
© 2021 IEEE.


Keywords

  • Visualization
  • Supervised learning
  • Neural networks
  • Memory
  • Probabilistic logic
  • Particle measurements
  • Data models
  • Amortized Variational Inference
  • Sparsity of Representation
  • Deep Latent Generative Models

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence


