(Jump to the full list of publications: “Fresh from the oven,” peer-reviewed journals, peer-reviewed conferences, non-peer-reviewed conferences, book chapters, tutorials, preprints and technical reports.)
We study the problem of learning from data that live on low-dimensional manifolds. Loosely speaking, manifolds are smooth surfaces, usually embedded in high-dimensional spaces, and they provide a structured and rigorous way to identify latent, low-dimensional patterns and structures in data. Our preferred learning task in this theme is regression. To this end, we have introduced a novel non-parametric regression framework based only on the assumption that the underlying manifold is smooth; neither explicit knowledge of the manifold nor training data are needed to run our regression tasks. The design extends straightforwardly to reap the benefits of reproducing kernel functions, a well-known machine-learning toolbox, and is general enough to accommodate data with missing entries. We validate our designs on synthetic and real dynamic-MRI data, where the framework achieves excellent performance with a small computational footprint and, interestingly, outperforms even very recent tensor-based and deep-image-prior designs. Several generalizations and novel research directions are currently under study.
See, for example, our papers in IEEE Transactions on Computational Imaging, IEEE Transactions on Medical Imaging, and our very recent preprint in [arXiv] [TechRxiv].
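To fix ideas about the reproducing-kernel machinery mentioned above, the following minimal NumPy sketch runs textbook Gaussian-kernel ridge regression on a toy one-dimensional curve. It is only an illustrative baseline under assumed hyperparameters (sigma, lam); it is not the manifold-based framework of our papers.

```python
# Minimal kernel-ridge-regression sketch (illustrative baseline only; not the
# manifold-based framework described above). Hyperparameters are assumptions.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma**2))

def kernel_ridge_fit(X, y, sigma=1.0, lam=1e-2):
    """Solve (K + lam*I) alpha = y for the representer coefficients alpha."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_test, sigma=1.0):
    """Predict at new points via the reproducing-kernel expansion."""
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# Toy usage: noisy samples of a smooth 1-D curve.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
alpha = kernel_ridge_fit(X, y, sigma=0.7, lam=1e-2)
X_new = np.linspace(-3.0, 3.0, 5)[:, None]
print(kernel_ridge_predict(X, alpha, X_new, sigma=0.7))
```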
Here we study learning from data/features that live on Riemannian manifolds, a special class of manifolds endowed with an inner product on their tangent spaces and, thus, a distance metric. These concepts may appear abstract, but they give us the freedom to employ geometric intuition to address learning tasks in a wide variety of application domains. For example, numerous well-known features in signal processing and machine learning belong to Riemannian manifolds: correlation matrices, orthogonal matrices, fixed-rank linear subspaces and tensors, probability density functions, etc. To keep the discussion concrete, we consider the basic learning tasks of clustering and classification on data taken from network time series, and in particular from multi-layer brain networks. Capitalizing on our prior work on this theme, several research directions are currently under study.
See, for example, our papers in IEEE Open Journal of Signal Processing and Signal Processing.
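As a small illustration of the Riemannian viewpoint, the sketch below computes the standard affine-invariant Riemannian distance between symmetric positive-definite (SPD) matrices, such as correlation matrices. It is a generic building block for clustering/classification on the SPD manifold, not the specific designs of our papers, and the toy matrices are synthetic.

```python
# Affine-invariant Riemannian distance on the SPD manifold (generic building
# block for illustration only; not our clustering/classification designs).
import numpy as np

def spd_sqrt_inv(A):
    """Inverse square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * (1.0 / np.sqrt(w))) @ V.T

def spd_logm(A):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def airm_distance(A, B):
    """Affine-invariant Riemannian distance || logm(A^{-1/2} B A^{-1/2}) ||_F."""
    Ais = spd_sqrt_inv(A)
    return np.linalg.norm(spd_logm(Ais @ B @ Ais), 'fro')

# Toy usage: distances between random correlation-like SPD matrices.
rng = np.random.default_rng(0)
def random_spd(n=4):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, B, C = random_spd(), random_spd(), random_spd()
print(airm_distance(A, B), airm_distance(A, C))
print(airm_distance(A, A))   # (numerically) zero: a point is at distance 0 from itself
```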
We are currently designing novel reinforcement-learning (RL) algorithms for continuous and high-dimensional state/action spaces. Several application domains are under exploration; one example is the problem of dynamically selecting, at each time instance, the “optimal” p-norm to combat outliers in linear adaptive filtering, without any knowledge of the potentially time-varying probability density function of the outliers. To offer designs with a small computational footprint, our RL algorithms are built on reproducing kernel Hilbert spaces (RKHSs). Unlike classical routes, we introduce novel Bellman mappings which sample the state space on the fly, dynamically, without any need for information on the transition probabilities of Markov decision processes. In contrast to the prevailing line of research, which views Bellman mappings as contractions on \(\mathcal{L}_{\infty}\)-norm Banach spaces (no inner product available, by definition), our Bellman mappings are shown to be nonexpansive on the underlying RKHS, thus opening the door to the reproducing property of the RKHS inner product and to powerful Hilbertian tools. Several research directions are currently under study, and results will be reported at several publication venues.
See our preprints in (i) [arXiv] [TechRxiv] and (ii) [arXiv] [TechRxiv].
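For readers who would like a concrete, if simplistic, picture of value functions living in an RKHS, the sketch below performs a standard LSTD-style policy evaluation from sampled transitions of a toy one-dimensional chain. It is a textbook construction under assumed hyperparameters (gamma, sigma, lam); it is not the nonexpansive Bellman mappings introduced in our preprints.

```python
# Kernel-based policy evaluation from sampled transitions (textbook LSTD-style
# sketch for illustration only; not the Bellman mappings of our preprints).
import numpy as np

def rbf(X, Y, sigma=1.0):
    """Gaussian kernel matrix between rows of X and rows of Y."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma**2))

def kernel_td_fixed_point(S, R, S_next, gamma=0.9, sigma=0.5, lam=1e-2):
    """Represent V(s) = sum_i alpha_i k(s_i, s) and solve the (ridge-regularized)
    empirical TD fixed-point equation K alpha = R + gamma K' alpha."""
    K = rbf(S, S, sigma)            # kernel sections at the sampled states
    K_next = rbf(S_next, S, sigma)  # kernel sections at the successor states
    return np.linalg.solve(K - gamma * K_next + lam * np.eye(len(S)), R)

# Toy 1-D chain: states in [0, 1], reward equals the state, dynamics drift right.
rng = np.random.default_rng(1)
S = rng.uniform(0.0, 1.0, size=(40, 1))
S_next = np.clip(S + 0.1 + 0.05 * rng.standard_normal((40, 1)), 0.0, 1.0)
R = S[:, 0]
alpha = kernel_td_fixed_point(S, R, S_next)
queries = np.array([[0.2], [0.8]])
print(rbf(queries, S) @ alpha)      # value estimates at two query states
```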
Motivated by real-world signal-processing and machine-learning problems, we study general stochastic-optimization tasks, that is, the search for “best” solutions in dynamic and “noisy” environments. Rather than using off-the-shelf methods, we prefer to invent our own tools, and our quest for new algorithms may take us down to the basics of convex analysis and fixed-point theory. For example, we have recently introduced a stochastic-optimization framework that addresses the minimization of a composite convex loss over a constraint set which takes the form of the fixed-point set of an affine nonexpansive mapping. Interestingly, both the loss and the constraint set may be of a stochastic nature; for example, “noisy.” To connect our designs with real-world problems, we have recently turned our attention to the problem of outlier rejection in dynamic environments, preferably via methods that operate under a low computational load; one such instance is kernel-based adaptive filtering in signal processing. As an example, by capitalizing on our general stochastic-optimization designs, we have introduced an interesting variation of the classical recursive least squares (RLS), allowing for hierarchical optimization via fixed-point theory and nonexpansive mappings. Several novel research directions and extensions are currently under study.
See, for example, our papers in IEEE Transactions on Signal Processing and the Proceedings of IEEE ICASSP.
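To illustrate the flavor of such hierarchical problems, the sketch below applies a hybrid-steepest-descent-type iteration (in the spirit of Yamada's method) to minimize a toy quadratic loss over the fixed-point set of an affine nonexpansive mapping, here the solution set of an underdetermined linear system. The matrix A, vector b, loss, and step sizes are illustrative assumptions; this is not the RLS variant or the stochastic framework of our papers.

```python
# Hybrid-steepest-descent-type sketch: minimize f(x) = 0.5*||x||^2 over Fix(T),
# where T is an affine nonexpansive mapping (toy illustration only).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 8))                  # underdetermined, hence consistent system
b = rng.standard_normal(5)

beta = 1.0 / np.linalg.norm(A, 2) ** 2           # step size ensuring T below is nonexpansive
T = lambda x: x - beta * (A.T @ (A @ x - b))     # affine nonexpansive map; Fix(T) = {x : A x = b}
grad_f = lambda x: x                             # gradient of f(x) = 0.5 * ||x||^2

x = np.zeros(8)
for k in range(1, 20001):
    y = T(x)                                     # fixed-point step toward Fix(T)
    x = y - (1.0 / k) * grad_f(y)                # diminishing-step gradient correction

print(np.linalg.norm(A @ x - b))                 # feasibility: how close x is to Fix(T)
print(np.linalg.norm(x - np.linalg.pinv(A) @ b)) # distance to the minimum-norm solution
```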