Tensor decompositions and sparse log-linear models
James E. Johndrow, Anirban Bhattacharya, David B. Dunson
Ann. Statist. 45(1): 1-38 (February 2017). DOI: 10.1214/15-AOS1414


Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced-rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridges existing PARAFAC and Tucker decompositions and provides a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
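The latent structure (latent class) factorization mentioned above writes the joint pmf as a mixture over a latent class: P(y1, …, yp) = Σ_h ν_h Π_j λ^(j)_{h, y_j}, which is exactly a nonnegative PARAFAC decomposition of the probability tensor with rank at most the number of latent classes. A minimal NumPy sketch (all dimensions and parameter values below are illustrative, not from the paper):

```python
import numpy as np

# Illustrative sizes: p = 3 categorical variables, each with d = 4 levels,
# and a latent class variable with k = 2 levels.
rng = np.random.default_rng(0)
p, d, k = 3, 4, 2

# Latent class weights nu (a pmf over classes) and, for each variable j,
# a k x d matrix of conditional pmfs lambda^(j)[h, :] given class h.
nu = rng.dirichlet(np.ones(k))
lam = [rng.dirichlet(np.ones(d), size=k) for _ in range(p)]

# Assemble the joint probability tensor as a sum of k rank-1 components:
# P(y1, ..., yp) = sum_h nu[h] * prod_j lam[j][h, y_j]
P = np.zeros((d,) * p)
for h in range(k):
    comp = lam[0][h]
    for j in range(1, p):
        comp = np.multiply.outer(comp, lam[j][h])  # outer product builds rank-1 term
    P += nu[h] * comp

# P is a valid joint pmf whose nonnegative rank is at most k.
assert np.isclose(P.sum(), 1.0)
```

The nonnegative rank of P (the minimal k admitting such a representation) is the quantity the paper relates to the support of a sparse log-linear model for the same table.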



Received: 1 April 2014; Revised: 1 November 2015; Published: February 2017
First available in Project Euclid: 21 February 2017

zbMATH: 1367.62180
MathSciNet: MR3611485
Digital Object Identifier: 10.1214/15-AOS1414

Primary: 62F15

Keywords: Bayesian, categorical data, contingency table, graphical model, high-dimensional, latent class analysis, low rank, PARAFAC, sparsity, Tucker

Rights: Copyright © 2017 Institute of Mathematical Statistics
