
Daniela Calvetti
Case Western Reserve University
Title
Dictionary learning: where inverse problems and data science meet.
Abstract
In the current era of big data, considerable effort is devoted to organizing and querying data sets. These efforts are particularly central for inverse problems, in which the goal is to estimate quantities that are only indirectly related to the data. Previously collected or simulated data sets provide insight into what typical data look like, and how much variability in the data can be expected as the unknown of interest varies. In this context, it is natural to think of data sets as entries in a dictionary. Intrinsic knowledge about the data, combined with data science methods, can be used to partition the dictionary into subdictionaries. Matching previously unseen data to labeled dictionary entries can then provide an interpretation of the data. Dictionary matching/learning methods provide a flexible and versatile framework for traditional classification problems, as well as for solving inverse problems where traditional techniques fail, either because the forward model is complex, ill-defined, or difficult to parametrize, or because the data are insufficient for standard methods. To increase computational efficiency and accuracy, dictionary matching can be preceded by a dictionary learning step that yields a reduced dictionary.
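To make the pipeline concrete, the following sketch (in Python, with hypothetical sizes, labels, and synthetic data standing in for a real dictionary; not code from the talk) matches a previously unseen datum to labeled subdictionaries by least-squares residual, after compressing each subdictionary with a truncated SVD as one simple matrix-factorization choice for the dictionary learning step.

import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dictionary: columns are simulated data vectors, with one
# block of indices active per class (hypothetical sizes and labels).
n, per_class, r = 60, 40, 5
labels = ["class A", "class B", "class C"]
subdicts = {}
for k, lab in enumerate(labels):
    template = np.zeros(n)
    template[20 * k: 20 * (k + 1)] = 1.0
    subdicts[lab] = (template[:, None]
                     + 0.3 * rng.standard_normal((n, per_class)))

def reduce_subdictionary(D, r):
    """Dictionary learning step: compress D to its r leading left
    singular vectors (truncated SVD), one simple factorization choice."""
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    return U[:, :r]

reduced = {lab: reduce_subdictionary(D, r) for lab, D in subdicts.items()}

def match(y, reduced):
    """Dictionary matching: assign y to the subdictionary whose reduced
    span explains it best (smallest least-squares residual)."""
    def residual(Q):                      # Q has orthonormal columns
        return np.linalg.norm(y - Q @ (Q.T @ y))
    return min(reduced, key=lambda lab: residual(reduced[lab]))

# Previously unseen datum generated from the "class B" template.
y = np.zeros(n)
y[20:40] = 1.0
y += 0.3 * rng.standard_normal(n)
print(match(y, reduced))                  # expected: class B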
Sparsity of the solutions can greatly speed up the computation and facilitate the interpretation of the dictionary matching process. Hierarchical Bayesian methods, which lead to very effective computations with sparsity-promoting prior models, are naturally suited for dictionary learning applications. In this talk, we review sparsity-promoting methods for solving inverse problems as dictionary matching problems, and discuss how Bayesian modeling error methods and matrix factorization techniques can be used to learn compressed dictionaries.
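As one possible realization of such a sparsity-promoting hierarchical scheme, the sketch below implements an IAS-type (Iterative Alternating Sequential) iteration of the kind developed by Calvetti, Somersalo, and collaborators for conditionally Gaussian priors with gamma hyperpriors: a weighted least-squares update of the coefficients alternates with a closed-form update of the prior variances. All problem sizes, hyperparameter values, and the toy data are illustrative assumptions, not taken from the talk.

import numpy as np

rng = np.random.default_rng(1)

# Toy dictionary-matching problem: y ~ D @ x with a sparse coefficient
# vector x selecting a few dictionary entries (hypothetical sizes).
n, p, sigma = 50, 200, 0.01
D = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[[10, 77, 150]] = [1.0, -2.0, 1.5]
y = D @ x_true + sigma * rng.standard_normal(n)

# Hierarchical model: x_j | theta_j ~ N(0, theta_j), theta_j ~ Gamma(beta,
# theta_star).  Alternate MAP updates of x and theta (IAS-type iteration).
beta, theta_star = 1.6, 1e-4       # illustrative hyperparameter values
eta = beta - 1.5                   # exponent after collecting log-terms
theta = np.full(p, theta_star)
for _ in range(30):
    # x-update: weighted Tikhonov solve with the current prior variances,
    # written in the equivalent n-by-n "kernel" form for efficiency.
    W = D * theta                  # D @ diag(theta)
    x = W.T @ np.linalg.solve(W @ D.T + sigma**2 * np.eye(n), y)
    # theta-update: closed-form stationary point of the Gibbs energy.
    theta = theta_star * (eta / 2 + np.sqrt(eta**2 / 4
                                            + x**2 / (2 * theta_star)))

print(np.flatnonzero(np.abs(x) > 0.1))   # recovered support, ideally [10, 77, 150]

The small scale parameter theta_star drives unneeded coefficients toward zero, while the variance update lets the few active coefficients escape the shrinkage, which is what makes iterations of this type both sparsity-promoting and computationally cheap.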