
Representing Cause-and-Effect in a Tensor Framework

M. Alex O. Vasilescu

May 29, 2019 at 11:30 AM
McConnell Engineering Room 437


Natural images are the compositional consequence of multiple causal factors related to scene structure, illumination, and imaging. Tensor algebra, the algebra of higher-order tensors, offers a potent mathematical framework for explicitly representing and disentangling the causal factors of data formation. This capability allows intelligent agents to better understand and navigate the world, an important tenet of artificial intelligence and an important goal of data science. Theoretical evidence has shown that deep learning is a neural network implementation equivalent to multilinear tensor decomposition, while a shallow network corresponds to linear tensor factorization (a.k.a. CANDECOMP/PARAFAC tensor factorization).
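To make the linear case concrete, the CANDECOMP/PARAFAC (CP) factorization mentioned above can be sketched with a minimal alternating-least-squares loop in NumPy. This is an illustrative sketch, not the speaker's implementation; the function names (`unfold`, `khatri_rao`, `cp_als`) and the fixed iteration count are assumptions made for the example.

```python
import numpy as np

def unfold(X, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest (C order).
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(mats):
    # Column-wise Kronecker product of matrices that all have R columns.
    R = mats[0].shape[1]
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, R)
    return out

def cp_als(X, rank, n_iter=300, seed=0):
    # CP via alternating least squares: each factor is refit in turn
    # while the others are held fixed.
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in X.shape]
    for _ in range(n_iter):
        for n in range(X.ndim):
            others = [factors[m] for m in range(X.ndim) if m != n]
            Z = khatri_rao(others)          # (prod of other dims) x rank
            G = np.ones((rank, rank))
            for M in others:
                G *= M.T @ M                # Hadamard product of Gram matrices
            factors[n] = unfold(X, n) @ Z @ np.linalg.pinv(G)
    return factors

def cp_reconstruct(factors):
    # X[i,j,...] = sum_r A1[i,r] * A2[j,r] * ...  for any tensor order.
    shape = tuple(F.shape[0] for F in factors)
    return khatri_rao(factors).sum(axis=1).reshape(shape)
```

A shallow network in the sense of the talk corresponds to this single sum-of-rank-one-terms model, whereas the multilinear decompositions discussed below add a full core tensor coupling the modes.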

Tensor factorizations have been successfully applied to numerous computer vision, signal processing, computer graphics, and machine learning tasks. The tensor approach was first employed in computer vision to recognize people from the way they move (Human Motion Signatures, 2001) and from their facial images (TensorFaces, 2002), but it may be used to recognize any object or object attribute.

We will also discuss several multilinear representations of cause-and-effect, such as Multilinear PCA, Multilinear ICA (not to be confused with computing ICA by employing tensor methods, an approach typically employed to reparameterize deep learning models), and Compositional Hierarchical Tensor Factorization, as well as the multilinear projection operator, which is important for performing recognition.
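A multilinear decomposition of the kind underlying Multilinear PCA can be sketched via the M-mode SVD (HOSVD): one orthonormal factor matrix per causal mode, obtained from the SVD of each unfolding, plus a core tensor coupling them. This is a generic sketch under that standard construction, not the speaker's code; names such as `mode_multiply` and `hosvd` are illustrative.

```python
import numpy as np

def unfold(X, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest (C order).
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_multiply(X, M, mode):
    # Mode-n product: multiply matrix M into the n-th axis of tensor X.
    out = M @ unfold(X, mode)
    shape = [M.shape[0]] + [s for i, s in enumerate(X.shape) if i != mode]
    return np.moveaxis(out.reshape(shape), 0, mode)

def hosvd(X, ranks=None):
    # M-mode SVD: the n-th factor holds the leading left singular vectors
    # of the mode-n unfolding; the core couples all modes.
    ranks = ranks or X.shape
    U = []
    for n in range(X.ndim):
        u, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
        U.append(u[:, :ranks[n]])
    core = X
    for n, Un in enumerate(U):
        core = mode_multiply(core, Un.T, n)
    return core, U
```

With full ranks the decomposition is exact; truncating the per-mode ranks yields a Multilinear-PCA-style compression in which each factor matrix captures the variation of one causal factor (e.g. person, viewpoint, illumination in TensorFaces).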


M. Alex O. Vasilescu received her education at the Massachusetts Institute of Technology and the University of Toronto.

Vasilescu introduced the tensor paradigm to computer vision, computer graphics, and machine learning, and extended the tensor algebraic framework by generalizing concepts from linear algebra. Starting in the early 2000s, she re-framed the analysis, recognition, synthesis, and interpretability of sensory data as multilinear tensor factorization problems suitable for mathematically representing cause-and-effect and demonstrably disentangling the causal factors of observable data. The utility and value of the tensor framework have been further underscored by theoretical evidence showing that deep learning is a neural network approximation of multilinear tensor factorization, while shallow networks are linear tensor factorizations (CP decomposition).

Vasilescu’s face recognition research, known as TensorFaces, has been funded by the TSWG, the Department of Defense’s Combating Terrorism Support Program, and by IARPA, the Intelligence Advanced Research Projects Activity. Her work has been featured on the cover of Computer World and in articles in the New York Times, the Washington Times, and elsewhere. MIT’s Technology Review named her a TR100 honoree, and she was co-awarded a National Academies Keck Futures Initiative grant.