When | Friday, 27th of September
Where | Room R6-66, Congress Centre U Hájků, Prague

10:55 - 11:35 | Morten Mørup | Tensor Decompositions for Machine Learning and the Modelling of Neuroimaging Data
11:35 - 12:15 | Lieven de Lathauwer | Advances in (Numerical) Linear Algebra
12:15 - 13:45 | Lunch Break
13:45 - 14:25 | Taylan Cemgil | Probabilistic Latent Tensor Factorization with Applications to Audio Processing and Source Separation
14:25 - 14:45 | Denis Krompass | Non-Negative Tensor Factorization with RESCAL
14:45 - 15:00 | Spotlight Talks
15:00 - 15:45 | Poster Session and Coffee Break
15:45 - 16:25 | Steffen Rendle | Factorization Machines
16:25 - 17:05 | Pauli Miettinen | Boolean Tensor and Matrix Factorization
17:05 - 17:15 | Discussion
Tensors, as generalizations of vectors and matrices, have become increasingly popular in many areas of machine learning and data mining, where they are employed to approach a diverse range of difficult learning and analysis tasks. Prominent examples include learning on multi-relational data and large-scale knowledge bases, recommendation systems, computer vision, mining of Boolean data, neuroimaging, and the analysis of time-varying networks. The success of tensor methods is strongly related to their ability to efficiently model, analyse and predict data with multiple modalities. To address specific challenges and problems, a variety of methods has been developed in different fields of application.

This workshop is intended to serve as a basis for an interdisciplinary exchange of methods, ideas and techniques, with the goal of developing a deeper understanding of tensor methods in machine learning, further advancing existing approaches, and enabling new approaches to important problems. A particular focus of the workshop is to uncover the principles underlying tensor methods, their applications and associated problems. The workshop is intended for researchers in the machine learning, data mining and tensor communities to discuss novel methods and applications as well as theoretical advances.
The workshop consists of contributed talks, poster sessions and a number of invited talks covering important work and recent developments in tensor methods. Furthermore, the workshop will include open discussion sessions to encourage the exchange of ideas and the development of a common understanding of problems and methods among the participants.
We invite the submission of short and regular papers to the workshop. Submitted papers should be at most 5 pages long (extended abstracts) or 10 pages long (regular papers) and formatted according to the Springer LNAI guidelines.
All submitted manuscripts will be peer-reviewed by members of the program committee. Selected papers will be presented as full-length or spotlight talks during the morning and afternoon sessions of the workshop. All authors of selected papers are also invited to participate in the poster sessions. Topics of interest include, but are not limited to:
To submit your manuscript via EasyChair, please follow the link:
For paper submission, please consider the following deadlines:
Tensor decompositions have several advantages over (two-way) matrix factorization methods for unsupervised learning and exploratory data analysis, including uniqueness of the solution and the ability to explicitly exploit the multi-way structure that is lost when some of the modes of the tensor are collapsed in order to analyze the data with matrix factorization approaches. This talk will focus in particular on tensor decompositions for the modeling of neuroimaging data, where important challenges include extracting consistent, reproducible patterns of activation across trials, subjects, and/or conditions. Emphasis will be given both to the extension of tensor decomposition methods for the modeling of latency and shape changes in EEG and fMRI and to the modeling of multi-subject brain connectivity using non-parametric relational modeling approaches.
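As background for the decompositions discussed in this talk, here is a minimal NumPy sketch (not from the talk itself) of the Canonical Polyadic / PARAFAC decomposition of a 3-way tensor, fitted by alternating least squares; the function names and sizes are illustrative assumptions:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest (C order)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product; row (i*J + j) equals A[i, :] * B[j, :]."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, n_iter=500, seed=0):
    """Rank-R CP/PARAFAC decomposition of a 3-way tensor by alternating
    least squares: each factor matrix is the least-squares solution of the
    matricized system X_(n) = U_n * khatri_rao(other factors)^T."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

The resulting factors reconstruct the tensor as `np.einsum('ir,jr,kr->ijk', A, B, C)`; for EEG, the three modes would typically be channels, frequencies and time/trials.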
Recently, important progress has been made in understanding the conditions under which tensor decompositions are unique. The uniqueness properties of decompositions such as the Canonical Polyadic Decomposition are at the heart of tensor-based signal processing, data analysis and machine learning. We briefly sketch the state of the art.
Important progress has also been made recently in numerical multilinear algebra. It has been recognized that tensor product structure allows very efficient storage and handling of the Jacobian and (approximate) Hessian of the cost function. On the other hand, multilinearity allows global optimization in (scaled) line and plane search. Although there are many possibilities for decomposition symmetry and factor structure, these can be handled conveniently. We demonstrate the algorithms using Tensorlab, a MATLAB toolbox for tensors and tensor computations that we have recently released.
Tensor factorization approaches have shown high predictive accuracy in several important machine learning problems. However, tensor factorization models usually lack flexibility and are restricted to categorical variables.
In this talk, I present factorization machines, which are based on standard feature engineering / design matrices. I will discuss the relationship of factorization machines to standard linear and polynomial models as well as to well-known factorization models. Several learning methods for factorization machines are presented, among them coordinate descent and MCMC inference with Gibbs sampling.
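To make the model concrete, here is a minimal NumPy sketch (illustrative, not code from the talk) of the degree-2 factorization machine prediction, using the well-known reformulation that brings the pairwise interaction term from O(kn²) down to O(kn):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Degree-2 factorization machine prediction:
        y(x) = w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j,
    where row v_i of V is the k-dimensional factor vector of feature i.
    The pairwise term uses the identity
        sum_{i<j} <v_i, v_j> x_i x_j
          = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ],
    which costs O(k n) instead of looping over all feature pairs."""
    s = V.T @ x                   # (k,) per-factor weighted sums
    s2 = (V ** 2).T @ (x ** 2)    # (k,) per-factor sums of squares
    return w0 + w @ x + 0.5 * float(np.sum(s ** 2 - s2))
```

Because the interactions are parameterized through shared factor vectors, the model can estimate the weight of a feature pair that never co-occurs in the training data, which is what distinguishes it from a plain degree-2 polynomial model.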
Boolean matrix decomposition represents a given binary matrix as a Boolean product of two (possibly smaller) binary matrices. Similarly, Boolean tensor decompositions decompose binary tensors into binary factors. The crux of these methods is the use of Boolean algebra, replacing addition with logical OR, which gives the decompositions a more combinatorial flavour.
Boolean matrix and tensor decompositions have been studied and used in many fields, including extremal combinatorics, communication complexity, and psychometrics, to name a few. In recent years, they have seen increased interest in data mining, providing a powerful tool that can be used to generalize many existing data mining problems. The Boolean algebra can help with sparsity and interpretability, and allows modelling different types of behaviour than normal algebra, but its use usually comes with increased computational complexity.
In this talk we go through the basics of Boolean matrix and tensor decompositions, explain the main similarities and dissimilarities between Boolean and normal decompositions, and talk about applications of Boolean tensor decompositions to data mining and information extraction. We will cover the main algorithmic ideas and point out some open problems.
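As a small illustration of the algebra involved (a sketch, not code from the talk), the Boolean matrix product replaces the sum in the ordinary matrix product with a logical OR:

```python
import numpy as np

def boolean_product(B, C):
    """Boolean matrix product: (B o C)[i, j] = OR_k (B[i, k] AND C[k, j]).
    Implemented via the integer product followed by thresholding at zero,
    so any cell covered by at least one rank-1 factor becomes 1."""
    return (B.astype(int) @ C.astype(int) > 0).astype(int)
```

The key difference from normal arithmetic is that overlapping factors do not "add up": a cell covered by two factors is still 1, since 1 OR 1 = 1, whereas the integer product would give 2. The tensor case is analogous, with rank-1 binary tensors combined by elementwise OR.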
Algorithms for decompositions of matrices and tensors are of central importance in machine learning, signal processing and information retrieval. In recent years, tensor methods that compute decompositions of multiway arrays have gained significant popularity (Kolda and Bader, 2009; Cichocki et al., 2008). Notable extensions include coupled factorizations, where multiple observed tensors are factorized collectively; such methods are particularly useful for information fusion.
We will discuss a subset of such tensor models from a statistical modelling perspective, building upon probabilistic generative models and generalised linear models (McCullagh and Nelder). Probabilistic interpretations of factorisation models facilitate the construction of application-specific models. Here, the factorisation is implicit in a well-defined statistical model, and factorisations can be computed via maximum likelihood.
We express a tensor factorisation model using a factor graph, and the factor tensors are optimised iteratively. In each iteration, the update equation can be implemented by a message passing algorithm, reminiscent of variable elimination in a discrete graphical model. This setting provides a structured and efficient approach that enables very easy development of application-specific custom models, as well as algorithms for coupled factorizations. Full Bayesian inference and model selection are also feasible via variational approximations or Markov Chain Monte Carlo (MCMC) methods. Well-known models of multiway analysis such as Nonnegative Matrix Factorisation (NMF), PARAFAC and Tucker, as well as audio processing models (Convolutive NMF, NMF2D, SF-SSNTF), appear as special cases, and new models can easily be developed. We will illustrate the approach with applications in audio and music processing and informed source separation.
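As a concrete instance of the special cases mentioned above, here is a minimal NumPy sketch (illustrative, not code from the talk) of NMF fitted with the classic Lee–Seung multiplicative updates for the Frobenius loss, which can be read as fixed-point iterations of a maximum-likelihood problem under a Gaussian observation model:

```python
import numpy as np

def nmf(V, rank, n_iter=2000, seed=0, eps=1e-9):
    """Nonnegative Matrix Factorisation V ~ W @ H with W, H >= 0,
    using Lee-Seung multiplicative updates for ||V - WH||_F^2.
    Each update multiplies by the ratio of the negative and positive
    parts of the gradient, so nonnegativity is preserved automatically."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Swapping the Gaussian/Frobenius cost for a Poisson (KL) cost yields a different pair of multiplicative updates from the same recipe, which is the kind of model substitution the probabilistic framework in the abstract makes systematic.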
Non-negative data is generated by a broad range of applications today, e.g. in gene expression analysis or imaging. Many factorization techniques have been extended to account for this natural constraint and have become very popular due to their decomposition into interpretable latent factors. Relational data, such as protein interaction networks or social network data, can generally also be seen as naturally non-negative. In this work, we extend the RESCAL tensor factorization, which has shown state-of-the-art results for multi-relational learning, to account for non-negativity by employing multiplicative update rules. We study the performance of these approaches on various benchmark datasets and show that in most cases the non-negativity constraint can be introduced at only a small cost in predictive quality, while significantly increasing the sparsity of the factors compared to the original RESCAL algorithm.
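For orientation, the sketch below shows what multiplicative updates for a non-negative RESCAL-style model X_k ≈ A R_k Aᵀ could look like; this is an illustrative derivation for the squared loss (ratio of the negative and positive gradient parts, as in NMF), not the authors' exact algorithm, and all names and sizes are assumptions:

```python
import numpy as np

def nn_rescal(X, rank, n_iter=500, seed=0, eps=1e-9):
    """Hedged sketch of non-negative RESCAL: X is a (K, n, n) stack of
    relation slices, factorized as X[k] ~ A @ R[k] @ A.T with A, R >= 0.
    Updates multiply by negative-part / positive-part of the gradient of
    sum_k ||X[k] - A R[k] A^T||_F^2, preserving nonnegativity."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = rng.random((n, rank)) + eps
    R = rng.random((X.shape[0], rank, rank)) + eps
    for _ in range(n_iter):
        num = sum(Xk @ A @ Rk.T + Xk.T @ A @ Rk for Xk, Rk in zip(X, R))
        den = sum(A @ Rk @ A.T @ A @ Rk.T + A @ Rk.T @ A.T @ A @ Rk
                  for Rk in R) + eps
        A *= num / den
        AtA = A.T @ A
        for k in range(X.shape[0]):
            R[k] *= (A.T @ X[k] @ A) / (AtA @ R[k] @ AtA + eps)
    return A, R
```

A shared entity matrix A across all slices is the defining feature of RESCAL; the non-negativity tends to zero out many factor entries, which is the sparsity effect the abstract reports.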
To improve the measurement and differentiation of normal and abnormal brain function, we are developing new methods to decompose multichannel electroencephalogram (EEG) recordings into elemental components or “atoms.” We estimate EEG atoms using multiway analysis, specifically parallel factor analysis (PARAFAC). Activation sequences of EEG atoms can identify functional brain networks dynamically, with much finer time resolution than fMRI. For example, EEG atoms activate in specific combinations during the sequential operations of brain networks, such as the Default Mode, Somatomotor, Dorsal Attention and others. Guided by the score values of the identified atoms, we inferred the volumetric brain sources of the selected networks using the sLORETA pseudoinverse algorithm. To confirm network identities, we compared 2-D and 3-D functional network maps derived from EEG atoms to the known functional neuroanatomy of the networks. We find that multichannel EEGs in most individuals can be accounted for by a set of five to six standard atoms, which parallel the classical EEG bands and have unique power spectra and scalp and cortical topographies. We discuss how we may use the activation sequences of these atoms to describe the dynamic interplay of functional brain networks.
This paper proposes a simplified Tucker decomposition of a tensor model for gait recognition from dense local spatiotemporal (S/T) features extracted from gait video sequences. Unlike silhouettes, local S/T features have displayed state-of-the-art performance on challenging action recognition testbeds, and have the potential to push gait ID towards real-world deployment. We adopt a Fisher representation of S/T features, rearranged as tensors. These tensors still contain redundant information, and are projected onto a lower-dimensional space with tensor decomposition. The dimensions of the reduced tensor space can be selected automatically by keeping a proportion of the energy of the original tensor. Gait features can then be extracted from the reduced “core” tensor, and ranked according to how relevant each feature is for classification. We validate our method on the benchmark USF/NIST gait data set, showing performance in line with the best reported results.
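The energy-based rank selection described above can be sketched with a truncated higher-order SVD (HOSVD); this is a generic illustration of the idea (one common way to compute a Tucker decomposition), not the paper's exact method:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncated(T, energy=0.95):
    """Truncated HOSVD / Tucker: for each mode, keep the smallest number of
    leading singular vectors of the mode-n unfolding whose squared singular
    values retain the requested fraction of the energy, then project the
    tensor onto them to obtain the reduced 'core' tensor."""
    factors = []
    for mode in range(T.ndim):
        U, s, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        cum = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(cum, energy) + 1)   # smallest rank reaching the threshold
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # Mode-n product with U^T: contract, then restore the axis order.
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors
```

Features for classification would then be read off the entries of `core`, whose size is controlled entirely by the single `energy` parameter rather than by hand-picked per-mode ranks.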
This paper introduces a new stepwise approach for predicting one specific binary relationship in a multi-relational setting. The approach includes a phase of initializing the components of a logistic additive model by matrix factorization and a phase of further optimizing the components with an additive restriction and the Bernoulli modelling assumption. By using low-rank approximations on a set of matrices derived from various interactions of the multi-relational data, the approach achieves data efficiency and exploits sparse matrix algebra. Experiments on three multi-relational datasets are conducted to validate the logistic additive approach.
In this paper we propose an algorithm for non-linear embedding of affinity tensors obtained by measuring higher-order similarities between high-dimensional points. We achieve this by preserving the original triadic similarities using another triadic similarity function, obtained as a sum of squares of dyadic similarities in a low-dimensional space. We show that this formulation reduces to solving for the nonlinear embedding of a graph with a specific kind of graph Laplacian. We provide an iterative algorithm for minimizing the loss, and also propose a simple linear constraint that prevents trivial zero solutions to the embedding problem, unlike the existing variants of quadratic orthonormality constraints used in the literature, which require eigendecompositions to solve for the embedding.