|
| |
Below are the titles and abstracts of the talks (click on a title to read the abstract).
The full program (time schedule, abstracts and poster titles) can also be downloaded from here. A printed version will be available at the conference venue.
|
|
| |
Monday, 15 January
|
08:00-08:45 |
Registration
|
08:45-09:00 |
Welcome message
|
09:00-09:45 |
Pierre-Antoine Absil, Curve Fitting on Riemannian Manifolds
In this talk I will discuss curve fitting problems on
manifolds. Manifolds of interest include the rotation group SO(3)
(generation of rigid body motions from sample points), the set of 3x3
symmetric positive-definite matrices (interpolation of diffusion
tensors) and the shape manifold (morphing). Ideally, we would like to
find the curve that minimizes an energy function E defined as a
weighted sum of (i) a sum-of-squares term penalizing the lack of
fitting to the data points and (ii) a regularity term defined as the
mean squared acceleration of the curve. The Euler-Lagrange necessary
conditions for this problem are known to take the form of a
fourth-order ordinary differential equation involving the curvature
tensor of the manifold, which is in general hard to solve. Instead, we
simplify the problem by restricting the set of admissible curves and
by resorting to suboptimal Riemannian generalizations of Euclidean
techniques.
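In the Euclidean case, the energy described above is quadratic and can be minimized by solving a linear system; the sketch below is a hypothetical discretization in $\mathbb{R}^2$ (sample times, data points, and regularization weight are all assumed for illustration) showing the structure that the Riemannian techniques generalize.

```python
import numpy as np

# Euclidean analogue of the fitting energy in the abstract (hypothetical
# discretization, R^2 instead of a manifold):
#   E(gamma) = sum_i ||gamma(t_i) - p_i||^2 + lam * sum_j ||(D2 gamma)_j||^2,
# where D2 takes second differences (discrete acceleration). On a manifold,
# differences would involve exp/log maps and covariant derivatives.

N = 101                                      # curve discretization points
data_idx = np.array([0, 25, 50, 75, 100])    # indices of the sample times t_i
p = np.array([[0., 0.], [1., 2.], [2., 1.], [3., 3.], [4., 0.]])  # data points
lam = 0.1                                    # regularization weight (assumed)

# Selection matrix S picks the curve values at the data indices: S @ gamma = gamma(t_i).
S = np.zeros((len(data_idx), N))
S[np.arange(len(data_idx)), data_idx] = 1.0

# Second-difference operator, a discrete acceleration.
D2 = np.zeros((N - 2, N))
for j in range(N - 2):
    D2[j, j:j + 3] = [1.0, -2.0, 1.0]

# The energy is quadratic, so its minimizer solves the normal equations:
#   (S^T S + lam * D2^T D2) gamma = S^T p
gamma = np.linalg.solve(S.T @ S + lam * D2.T @ D2, S.T @ p)
print(gamma.shape)        # one 2D curve point per time step
```

The same least-squares-plus-acceleration structure is what becomes hard on a manifold, where the normal equations turn into the fourth-order ODE involving the curvature tensor mentioned in the abstract.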
|
09:45-10:30 |
Björn Andres, Graph Decomposition Problems in Image Analysis
A large part of image analysis is about breaking things into pieces.
Decompositions of a graph are a mathematical abstraction of the
possible outcomes. This talk is about a generalization of the
correlation clustering problem whose feasible solutions relate one-to-one to the decompositions of a graph, and whose objective function puts
a cost or reward on pairs of nodes being in distinct components. It
shows applications of this problem to diverse image analysis tasks,
sketches algorithms for finding feasible solutions for large instances
in practice, and generalizes some well-known properties of correlation
polytopes.
References:
[1] Horňáková A., Lange J.-H. and Andres B. Analysis and Optimization
of Graph Decompositions by Lifted Multicuts. ICML 2017
[2] Levinkov E., Uhrig J., Tang S., Omran M., Insafutdinov E., Kirillov
A., Rother C., Brox T., Schiele B. and Andres B. Joint Graph
Decomposition and Node Labeling: Problem, Algorithms, Applications.
CVPR 2017
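As a toy illustration of how pairwise join rewards and costs induce a graph decomposition, here is a minimal greedy agglomerative heuristic in the spirit of greedy additive edge contraction. It is a sketch only, not the algorithms of [1, 2]; the weight convention (positive favors joining) is an assumption for this example.

```python
import itertools

# Greedy heuristic for correlation clustering: repeatedly contract the pair of
# clusters with the largest positive total inter-cluster weight.
def greedy_cc(n, weights):
    """weights: dict {(u, v): w} with u < v; w > 0 favors joining u and v."""
    cluster = {i: {i} for i in range(n)}          # cluster id -> member set
    def pair_weight(a, b):
        return sum(weights.get((min(u, v), max(u, v)), 0.0)
                   for u in cluster[a] for v in cluster[b])
    while True:
        best, best_w = None, 0.0
        for a, b in itertools.combinations(cluster, 2):
            pw = pair_weight(a, b)
            if pw > best_w:                       # strictly positive gain only
                best, best_w = (a, b), pw
        if best is None:
            break                                 # no attractive merge remains
        a, b = best
        cluster[a] |= cluster.pop(b)              # contract the pair
    return [sorted(c) for c in cluster.values()]

# Attractive edges (0,1) and (2,3), a repulsive edge (1,2):
result = greedy_cc(4, {(0, 1): 5.0, (2, 3): 4.0, (1, 2): -3.0})
print(result)   # → [[0, 1], [2, 3]]
```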
|
10:30-11:00 |
Coffee Break
|
11:00-11:45 |
Edoardo Provenzi, Retinex-like Models in Color Enhancement
In this talk, I will present and compare several different interpretations of the famous Retinex model of color vision, developed by E. Land in the late sixties. In particular, I will underline the difference between physically-based and spatially-based Retinex interpretations with the help of variational techniques. Finally, I will briefly discuss the role of Retinex in the development of perceptually-based contrast measures.
|
11:45-12:30 |
Julianne Chung, Efficient Generalized Golub-Kahan based Methods for Dynamic Imaging
In many applications, the problem is assumed to be static, in the sense that the underlying parameters do not change during the measurement process. However, in many realistic scenarios such as in passive seismic tomography or dynamic photoacoustic tomography, the underlying parameters of interest may change during the measurement procedure. Incorporating prior information regarding temporal smoothness in reconstruction algorithms can lead to better reconstructions. However, this can become computationally intensive, in part, due to the large number of unknown parameters. In a Bayesian framework, explicit computation of the square root and/or inverse of the prior covariance matrix is not possible. In this talk, we describe efficient, iterative, matrix-free methods based on the generalized Golub-Kahan bidiagonalization that allow automatic regularization parameter and variance estimation. We demonstrate that these methods can be more flexible than standard methods and develop efficient implementations that can exploit structure in the prior, as well as possible structure in the forward model. Numerical examples demonstrate the range of applicability and effectiveness of the described approaches.
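The standard (non-generalized) Golub-Kahan bidiagonalization at the core of such methods can be sketched as follows; the generalized variant in the talk additionally works in inner products weighted by the prior and noise covariances, which this plain sketch omits.

```python
import numpy as np

# Golub-Kahan bidiagonalization: builds orthonormal U, V and a lower
# bidiagonal B with A V = U B, using only matrix-vector products (matrix-free).
def golub_kahan(A, b, k):
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alphas, betas = [], []
    beta = np.linalg.norm(b); U[:, 0] = b / beta
    v = A.T @ U[:, 0]
    for j in range(k):
        if j > 0:
            v = A.T @ U[:, j] - betas[-1] * V[:, j - 1]
        alpha = np.linalg.norm(v); V[:, j] = v / alpha; alphas.append(alpha)
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta; betas.append(beta)
    B = np.zeros((k + 1, k))          # (k+1) x k lower bidiagonal factor
    for j in range(k):
        B[j, j] = alphas[j]; B[j + 1, j] = betas[j]
    return U, V, B

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20)); b = rng.standard_normal(30)
U, V, B = golub_kahan(A, b, 5)
print(np.allclose(A @ V, U @ B))      # the fundamental GKB relation holds
```

Regularized solutions are then sought in the small Krylov subspace spanned by the columns of V, which is what makes per-iteration regularization parameter estimation cheap.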
|
12:30-14:00 |
Lunch Break
|
14:00-14:45 |
Marco Cuturi, Generative Modeling with Optimal Transport
We present in this talk recent advances on the topic of parameter estimation using optimal transport, and discuss possible implementations for "Minimum Kantorovich Estimators". We show why these estimators are currently of great interest in the deep learning community, in which researchers have tried to formulate generative models for images. We will present a few algorithmic solutions to this problem.
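A common computational workhorse behind such estimators is entropically regularized optimal transport, solved by Sinkhorn iterations. The following is a minimal sketch of that primitive (with assumed marginals and cost), not the generative-model estimator discussed in the talk.

```python
import numpy as np

# Sinkhorn iterations for entropic optimal transport between histograms a, b
# with cost matrix C and regularization eps.
def sinkhorn(a, b, C, eps=0.05, iters=500):
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)              # match the column marginal
        u = a / (K @ v)                # match the row marginal
    return u[:, None] * K * v[None, :] # transport plan

n = 4
a = np.full(n, 1.0 / n); b = np.full(n, 1.0 / n)   # uniform marginals
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2                 # squared-distance cost
P = sinkhorn(a, b, C)
print(P.sum(axis=1))                               # row sums recover a
```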
|
14:45-15:30 |
Yiqiu Dong, Spatially Varying Parameter Selection for CT Reconstruction
In variational models, the regularization parameter controls the trade-off between smoothness and the preservation of details. In this talk, we propose an iterative method to determine a spatially varying regularization parameter corresponding to the image textures at different scales. The sizes of these homogeneous/texture regions can be adjusted adaptively based on local features. Further, we extend this method to CT reconstruction. Numerical results show that the method performs better at suppressing noise while preserving details in the reconstruction.
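The benefit of a spatially varying regularization weight can already be seen in a simple 1D quadratic denoising model; this is a hypothetical illustration (signal, noise level, and weights all assumed), not the paper's CT model.

```python
import numpy as np

# Denoising with per-location smoothing weights lam[i]:
#   min_u ||u - f||^2 + sum_i lam_i (u_{i+1} - u_i)^2
def denoise(f, lam):
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # forward-difference matrix
    return np.linalg.solve(np.eye(n) + D.T @ np.diag(lam) @ D, f)

rng = np.random.default_rng(1)
n = 200
# Left half: flat (homogeneous region). Right half: oscillating texture.
clean = np.where(np.arange(n) < 100, 0.0, np.sin(np.arange(n) * 0.5))
f = clean + 0.1 * rng.standard_normal(n)

lam_flat = np.full(n - 1, 5.0)                       # strong smoothing everywhere
lam_adapt = lam_flat.copy(); lam_adapt[100:] = 0.1   # weak smoothing in texture

err_flat = np.linalg.norm(denoise(f, lam_flat) - clean)
err_adapt = np.linalg.norm(denoise(f, lam_adapt) - clean)
print(err_adapt < err_flat)   # adaptive weights preserve the oscillations
```

The uniform weight flattens the texture along with the noise, while the adaptive weight keeps strong denoising only in the homogeneous region, which is the trade-off the abstract describes.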
|
15:30-16:00 |
Coffee Break
|
16:00-16:45 |
Remco Duits, Optimal Paths for Variants of the 2D and 3D Reeds-Shepp Car with Applications in Image Analysis
We consider a PDE-based approach for finding minimal paths for the Reeds-Shepp car. In our model we minimize a (data-driven) functional involving both curvature and length penalization, with several generalizations. Our approach encompasses the two and three dimensional variants of this model, state dependent costs, and moreover, the possibility of removing the reverse gear of the vehicle. We prove both global and local controllability results of the models.
Via eikonal equations on the manifold $\mathbb{R}^{d}\times\mathbb{S}^{d-1}$ we compute distance maps w.r.t. highly anisotropic Finsler functions, which approximate the singular (pseudo)-metrics underlying the model. This is achieved using a Fast-Marching (FM) method, building on Mirebeau [5,4]. The FM method is based on specific discretization stencils which are adapted to the preferred directions of the metric and obey a generalized acuteness property. The shortest paths can be found with a gradient descent method on the distance map, which we formalize in a theorem. We justify the use of our approximating metrics by proving convergence results.
Our curve optimization model in $\mathbb{R}^{d}\times\mathbb{S}^{d-1}$ with data-driven cost allows us to extract complex tubular structures from medical images, e.g. at crossings, and from incomplete data due to occlusions or low contrast. Our work extends the results of Sanguinetti et al. [7] on numerical sub-Riemannian eikonal equations and the Reeds-Shepp car to 3D, with comparisons to exact solutions by Duits et al. [2]. Numerical experiments show the high potential of our method in two applications: vessel tracking in retinal images for the case $d = 2$, and brain connectivity measures from diffusion weighted MRI data for the case $d = 3$, extending the work of Bekkers et al. [3] and Duits et al. [6] to 3D. We demonstrate how the new model without reverse gear better handles bifurcations.
Keywords: Finsler geometry on $\mathbb{R}^{d}\times\mathbb{S}^{d-1}$, sub-Riemannian geometry, fast marching, Reeds-Shepp car, keypoints, tracking
Acknowledgements: The research leading to the results of this article has received funding from the European Research Council under the European Community's 7th Framework Programme (FP7/2007-2014)/ERC grant agreement No. 335555 (Lie Analysis). This work was partly funded by ANR grant NS-LBR, ANR-13-JS01-0003-01.
References:
[1] R. Duits, S. Meesters, J. Mirebeau, and J. Portegies, “Optimal paths for variants of the 2d and 3d reeds-shepp car with applications in image analysis,” submitted to JMIV, preprint available on https://arxiv.org/pdf/1612.06137.pdf, 2017.
[2] R. Duits, A. Ghosh, T. C. J. Dela Haije, and A. Mashtakov. On Sub-Riemannian Geodesics in SE(3) Whose Spatial Projections do not Have Cusps. J Dyn Control Syst, 22(4):771–805, October 2016.
[3] E. Bekkers, R. Duits, A. Mashtakov, and G. Sanguinetti. A PDE Approach to Data-Driven Sub-Riemannian Geodesics in SE(2). SIAM J. Imaging Sci., 8(4):2740–2770, 2015.
[4] J-M. Mirebeau. Efficient fast marching with Finsler metrics. Numer. Math., 126(3):515–557, July 2013.
[5] J-M. Mirebeau. Anisotropic Fast-Marching on Cartesian Grids Using Lattice Basis Reduction. SIAM J. Numer.
Anal., 52(4):1573–1599, January 2014.
[6] R. Duits, U. Boscain, F. Rossi, and Y. Sachkov. Association Fields via Cuspless Sub-Riemannian Geodesics in SE(2).
J Math Imaging Vis, 49(2):384–417, December 2013.
[7] G.R. Sanguinetti, E.J. Bekkers, R. Duits, M.H.J. Janssen, A. Mashtakov, and J-M. Mirebeau. Sub-Riemannian
Fast Marching in SE(2). In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, number 9423 in Lecture Notes in Computer Science, pages 366–374. Springer International Publishing, 2015. DOI: 10.1007/978-3-319-25751-8_44.
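The distance-map-then-backtrack pipeline can be illustrated with plain isotropic Dijkstra on a 2D grid; this is a crude stand-in for the anisotropic Finsler fast marching of [4, 5], with an assumed edge-cost discretization.

```python
import heapq

# Dijkstra distance map on a grid with a spatially varying cost.
def distance_map(cost, seed):
    h, w = len(cost), len(cost[0])
    dist = [[float('inf')] * w for _ in range(h)]
    dist[seed[0]][seed[1]] = 0.0
    pq = [(0.0, seed)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i][j]:
            continue                      # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < h and 0 <= b < w:
                nd = d + 0.5 * (cost[i][j] + cost[a][b])   # averaged edge cost
                if nd < dist[a][b]:
                    dist[a][b] = nd
                    heapq.heappush(pq, (nd, (a, b)))
    return dist

# Discrete steepest descent on the distance map recovers the shortest path.
def backtrack(dist, start):
    path = [start]
    i, j = start
    while dist[i][j] > 0.0:
        i, j = min(((i + di, j + dj) for di, dj in
                    ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < len(dist) and 0 <= j + dj < len(dist[0])),
                   key=lambda p: dist[p[0]][p[1]])
        path.append((i, j))
    return path

cost = [[1.0] * 10 for _ in range(10)]    # uniform cost for this toy example
d = distance_map(cost, (0, 0))
print(d[0][9], len(backtrack(d, (9, 9))))
```

In the talk's setting the grid lives on $\mathbb{R}^{d}\times\mathbb{S}^{d-1}$ and the update stencils are adapted to the anisotropic metric, but the overall structure (solve for the distance map, then descend) is the same.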
|
16:45-17:30 |
Selim Esedoglu, Auction Dynamics: A Volume Constrained MBO Scheme
I will discuss a variant of Merriman, Bence, and Osher's threshold dynamics scheme for minimizing multiphase interfacial energies that allows various volume constraints on the individual phases. The upshot is an interesting and new connection between volume constrained multiphase mean curvature motion and auction algorithms that are inspired by market mechanisms. Applications to minimal partition problems in Euclidean space, and to semi-supervised machine learning on graphs for computer vision, will be emphasized. Joint work with Matt Jacobs and Ekaterina Merkurjev.
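The basic two-phase MBO scheme alternates a diffusion step with pointwise thresholding; a minimal sketch with an assumed explicit heat-equation solver follows (the talk's auction step enforcing volume constraints is omitted).

```python
import numpy as np

# Explicit heat-equation steps with a 5-point Laplacian (Neumann boundary
# conditions via edge padding); 0.2 <= 0.25 keeps the scheme stable.
def diffuse(u, steps=10):
    for _ in range(steps):
        p = np.pad(u, 1, mode='edge')
        u = u + 0.2 * (p[:-2, 1:-1] + p[2:, 1:-1]
                       + p[1:-1, :-2] + p[1:-1, 2:] - 4 * u)
    return u

u = np.zeros((64, 64))
u[16:48, 16:48] = 1.0            # initial phase: a square
area0 = u.sum()
for _ in range(20):
    # MBO step: diffuse the indicator, then threshold at 1/2.
    u = (diffuse(u) > 0.5).astype(float)
area1 = u.sum()
print(area0, area1)              # curvature flow rounds the corners and shrinks the set
```

The volume-constrained variant of the talk replaces the plain 1/2-threshold with an assignment that fixes the area of each phase, which is where the auction algorithms enter.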
|
|
| |
Tuesday, 16 January
|
09:00-09:45 |
Andrew Fitzgibbon, Discrete Images, Continuous World: A Basis for Discussion?
Work done with Fabio Viola, John MacCormick, Vitaliy Kurlin and others.
An image provides a finite set of measurements of the world, and our goal in image analysis is often to reconstruct a representation of the world from those measurements. I will argue that this representation of the world, at any scale compatible with these images, is best thought of as continuous. Thus we propose that image reconstruction should minimize a functional of the form
\[\mathcal{E}(f)=\sum_{i=1}^{n}\psi\left (z_{i}-\int_{\Omega} f(u)\cdot \kappa_{i}(u)\mathrm{d}u \right ) +\mathcal{R}(f),\]
where:
- The world is modelled as a function $f:\Omega\mapsto \mathbb{R}^{d}$. For a 2D camera $\Omega$ might be a sphere surrounding the imaging device.
- The image comprises $n$ measurements $\{z_{i}\,|\, 1\le i\le n\}\subset \mathbb{R}$. Note that the measurements live in $\mathbb{R}$, not in chromaticity space, in $k$-space, or pretty much anywhere else.
- Each measurement is equipped with a point-spread function $\kappa_{i}:\Omega\mapsto \mathbb{R}^{d},$ which may be given, or may be inferred as part of the optimization.
- The kernel $\psi(t)$ might be e.g. $t^{2}$, $|t|^{p}$, or $\min (t^{2},\tau)$.
- The regularizer $\mathcal{R}(f)$ can be anything we already use for continuous variational image reconstruction: TV, elastica, etc...
Numerous advantages accrue from this formulation, such as refinement invariance, no loss of rotational symmetry, and, perhaps not least, a better match to reality. I will talk about experiments we've done using a moving-mesh finite element approach, showing the feasibility of optimizing this functional. Many existing approaches can be expressed as special cases of this approach, sometimes requiring amusing or revealing choices of finite element.
|
09:45-10:30 |
Bastian Goldlücke, Variational Inverse Problems in Light Field Analysis
In contrast to a traditional 2D image, a light field is defined on a 4D space of rays instead of a 2D image domain. In particular, different rays can intersect in the same point of a surface, and thus, for a Lambertian scene, the light field should have similar radiance along these rays. This constraint generates a rich structure on the 4D domain, which is strongly linked to scene geometry. Commonly, this structure is exploited to infer depth from a single light field, but it also needs to be respected if one intends to solve inverse problems on ray space.
In this talk, I will first give an introduction to the light field structure and the various transformations used to recover depth and perform occlusion analysis. From this, I will derive a rich set of example applications, such as convex ray space priors which preserve its structure, sparse coding techniques to infer depth for multi-layered scenes, and variational models for intrinsic light field decomposition.
|
10:30-11:00 |
Coffee Break
|
11:00-11:45 |
Bernadette Hahn, Time-Dependent Inverse Problems in Imaging
The reconstruction process in tomography represents a well-known application of the theory of inverse problems and is well understood if the specimen is stationary during the data acquisition process.
However, tomographic modalities are also utilized to visualize structural and functional changes of a specimen, e.g. in medical diagnosis, radiotherapy treatment planning or imaging objects at working stage. In general, this dynamic behavior leads to a temporal undersampling and inconsistent measurements. Thus, if the dynamics are not taken into account, the reconstructed images can suffer from motion artefacts like blurring, ghosting, etc., which can significantly impede a reliable diagnosis.
Consequently, the imaging process has to be modeled and solved as a time-dependent inverse problem. In particular, efficient numerical methods are required which can extract the searched-for information from motion-corrupted tomographic data. This talk addresses these challenges, and examples from computerized tomography and magnetic resonance tomography illustrate the advantages of our approach. In addition, we also analyze peculiarities compared to the classic static case.
|
11:45-12:30 |
Martin Holler, Total Generalized Variation for Manifold-valued Data
Introduced in 2010, the total generalized variation (TGV) functional is nowadays amongst the most successful regularization functionals for variational image reconstruction. It is defined for an arbitrary order of differentiation and provides a convex model for piecewise smooth vector-space data.
On the other hand, variational models for manifold-valued data have recently become popular, and many approaches, such as first- and second-order TV regularization, have been successfully generalized to this setting. Although TGV regularization is generally considered preferable to such approaches, an appropriate extension for manifold-valued data was still missing.
In this talk we introduce the notion of second-order total generalized variation (TGV) regularization for manifold-valued data. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present concrete instances that fulfill the proposed axioms. We prove well-posedness results and present algorithms for a numerical realization of these generalizations to the manifold setup. Further, we provide experimental results for synthetic and real data to further underpin the proposed generalization numerically and show its potential for applications with manifold-valued data.
This is joint work with K. Bredies, M. Storath and A. Weinmann.
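For context, the vector-space second-order TGV functional introduced by Bredies, Kunisch and Pock in 2010, which the talk generalizes to manifold-valued data, can be written as
\[\mathrm{TGV}_{\alpha}^{2}(u) \;=\; \min_{w} \; \alpha_{1}\int_{\Omega}\lvert \nabla u - w\rvert \,\mathrm{d}x \;+\; \alpha_{0}\int_{\Omega}\lvert \mathcal{E}w\rvert \,\mathrm{d}x,\]
where $\mathcal{E}w$ denotes the symmetrized derivative of the vector field $w$. The manifold-valued generalizations of the talk must replace differences such as $\nabla u - w$ by intrinsic constructions, which is precisely what the axiomatic approach formalizes.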
|
12:30-14:00 |
Lunch Break
|
14:00-14:45 |
Barbara Gris, Deformation Prior in Image Matching
A general method to match two images is to estimate a deformation (a
diffeomorphism) transforming the first image into the second one. I
will present how a prior can be introduced in the deformation model
via a structure, named deformation module, capable of generating
deformations of a particular and chosen type. The introduced prior can
for instance correspond to a knowledge about 'realistic' deformations
given the nature of images under study: using deformations generated
by the corresponding deformation module ensures that the first image
is matched into the second one only via realistic deformations.
|
14:45-15:30 |
Dirk Lorenz, Using the Douglas-Rachford Method in Imaging
The Douglas-Rachford method has been around for quite some time, but its
usage in imaging has increased significantly only recently. The method
usually compares favorably to other proximal methods and its convergence
has been analyzed in several settings. It has been observed that the
choice of the stepsize has a great impact on the performance of the
method and we will present a way to tune the stepsize adaptively. To
show convergence of the method with this adaptive choice, we prove a new
convergence result on the non-stationary iteration. Due to the
connection between the Douglas-Rachford method and the alternating
direction method of multipliers (ADMM), our results also apply to ADMM.
Numerical experiments illustrate that the adaptive stepsize choice
usually improves the performance of the method significantly.
|
15:30-16:00 |
Coffee Break
|
16:00-17:30 |
Poster Session
|
|
| |
Wednesday, 17 January
|
09:00-09:45 |
PhD Prize talk: Emmanuel Soubies, On a class of exact continuous relaxations of the l2-l0 criteria
Numerous non-convex continuous relaxations of the l0 pseudo-norm have been proposed in the past for optimization purposes. Considering the l0-regularized least-squares minimization problem (l2-l0), we analyze in this talk such relaxations from the perspective of their fidelity to the initial l2-l0 problem. In particular, we address the following two questions: do the relaxed functionals preserve the global minimizers of the initial one? Do these reformulations introduce unwanted new (local) minimizers? To answer these questions, we derive necessary and sufficient conditions on continuous and separable relaxations of the l0 pseudo-norm which ensure that the associated regularized least-squares functional preserves the global minimizers of the l2-l0 criterion and does not add new local minimizers. From these conditions, we obtain a class of penalties said to be exact with respect to these properties of the relaxed functional. Moreover, we show that these relaxations eliminate some local (non-global) minimizers of the initial functional, which is an interesting property in this context of non-convex optimization. Then, we will focus on the inferior limit of this class and its special properties. Finally, an application to single molecule localization microscopy will be presented.
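To fix intuitions about the l0 pseudo-norm and its relaxations, compare its proximal operator (hard thresholding) with that of the convex l1 relaxation (soft thresholding), which biases even the retained entries. This is a toy contrast only; the exact nonconvex relaxations analyzed in the talk sit between these two extremes.

```python
import numpy as np

def prox_l0(x, lam):
    # argmin_u 0.5*(u - x)^2 + lam*|u|_0: keep x iff 0.5*x^2 > lam,
    # i.e. hard thresholding at sqrt(2*lam).
    return np.where(x ** 2 > 2 * lam, x, 0.0)

def prox_l1(x, lam):
    # argmin_u 0.5*(u - x)^2 + lam*|u|: soft thresholding, which also
    # shrinks the surviving entries by lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.5, 0.1, 1.5])
print(prox_l0(x, 0.5))   # keeps only the two large entries, unchanged
print(prox_l1(x, 0.5))   # keeps them too, but shrunk toward zero
```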
|
09:45-10:30 |
Konrad Polthier, Branched Covering Surfaces
Multivalued functions and differential forms naturally lead to the concept of branched covering surfaces and more generally of branched covering manifolds in the spirit of Hermann Weyl’s book "Die Idee der Riemannschen Fläche" from 1913. This talk will illustrate and discretize basic concepts of branched (simplicial) covering surfaces starting from complex analysis and surface theory up to their recent appearance in geometry processing algorithms and artistic mathematical designs. Applications will touch differential based surface modeling, image and geometry retargeting, global surface and volume remeshing, and novel weaved geometry representations with recent industrial applications.
|
10:30-11:00 |
Coffee Break
|
11:00-11:45 |
Xavier Pennec, Barycentric Subspace Analysis: An Extension of PCA to Manifolds
I address in this talk the generalization of Principal Component Analysis (PCA) to Riemannian manifolds and potentially more general stratified spaces. Tangent PCA is often sufficient for analyzing data which are sufficiently centered around a central value (unimodal or Gaussian-like data), but fails for multimodal or large-support distributions (e.g. uniform on closed compact subspaces). Instead of a covariance matrix analysis, Principal Geodesic Analysis (PGA) and Geodesic PCA (GPCA) propose to minimize the distance to geodesic subspaces (GS), which are spanned by the geodesics going through a point with tangent vectors in a restricted linear subspace of the tangent space. Other methods like Principal Nested Spheres (PNS) are restricted to simpler manifolds but emphasize the need for nestedness of the resulting principal subspaces.
In this work, we first propose a new and more general type of family of subspaces in manifolds that we call barycentric subspaces. They are implicitly defined as the locus of points which are weighted means of $k+1$ reference points. As this definition relies on points and not on tangent vectors, it can also be extended to geodesic spaces which are not Riemannian. For instance, in stratified spaces, it naturally allows principal subspaces that span several strata, which is not the case with PGA. Barycentric subspaces locally define a submanifold of dimension $k$ which generalizes geodesic subspaces. Like in PGA, barycentric subspaces can naturally be nested, which allows the construction of inductive forward nested subspaces approximating data points which contain the Fréchet mean. However, it also allows the construction of backward flags which may not contain the mean. Second, we rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (hierarchies of properly embedded linear subspaces of increasing dimension). For that purpose, we propose an extension of the unexplained variance criterion that generalizes nicely to flags of barycentric subspaces in Riemannian manifolds. This results in a particularly appealing generalization of PCA on manifolds, which we call Barycentric Subspace Analysis (BSA). The method will be illustrated on spherical and hyperbolic spaces, and on diffeomorphisms encoding the deformation of the heart in cardiac image sequences.
|
11:45-12:30 |
Irène Waldspurger, Convergence Rate of the Douglas-Rachford Method for Finding Best Approximating Pairs
Given two convex polyhedrons, we consider the problem of finding two points, one in each of these sets, at minimal distance. Two sequences of points that converge to such a "best approximating pair" can be constructed with the Douglas-Rachford method. In this talk, we will discuss the (worst-case) convergence speed of these sequences, globally (that is, when the initial points of the sequences are arbitrary) as well as locally (that is, when the initial points are close to a best approximating pair). Joint work with Stefanie Jegelka.
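On the real line the phenomenon is easy to see: for two disjoint intervals the Douglas-Rachford iterates themselves diverge, but the "shadow" sequences of projections converge to the best approximating pair. A toy sketch with assumed sets $A = [0,1]$ and $B = [2,3]$:

```python
# Douglas-Rachford for two disjoint convex sets on the real line.
def P_A(x): return min(max(x, 0.0), 1.0)   # projection onto A = [0, 1]
def P_B(x): return min(max(x, 2.0), 3.0)   # projection onto B = [2, 3]

x = 0.5                                    # arbitrary starting point
for _ in range(50):
    y = P_A(x)                             # shadow sequence in A
    z = P_B(2 * y - x)                     # shadow sequence in B (reflect, project)
    x = x + z - y                          # Douglas-Rachford update
print(y, z)                                # → 1.0 2.0, the closest pair
```

Here `x` grows without bound (each update adds the gap vector), while `(y, z)` settles on the pair of nearest points, which is exactly the behavior whose worst-case speed the talk quantifies.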
|
12:30-14:00 |
Lunch Break
|
14:00-14:45 |
Clarice Poon, Multi-dimensional Sparse Super-resolution
This talk is concerned with the super-resolution problem for positive spikes in arbitrary dimensions. More precisely, I will discuss the issue of support recovery for the so-called BLASSO method. While super-resolution is of paramount importance in overcoming the limitations of many imaging devices, its theoretical analysis is still lacking beyond the 1-dimensional case. The reason is that in the 2-dimensional case and beyond, the relative positions of the spikes enter the picture, and one needs to account for these different geometrical configurations. After presenting an algorithmic description of the limit of the associated dual problems as the spikes cluster around a given point, I will present a detailed analysis of the support stability and super-resolution effect in the case of a pair of spikes. This is joint work with Gabriel Peyré.
|
14:45-15:30 |
Jean-Michel Morel, A Theory of Anomaly Detection in Images
Anomaly detection cannot be formulated in a Bayesian framework, which would require simultaneously learning a model of the anomaly and a model of the background. (In the case where there are plenty of examples of the background and of the object to be detected, neural networks may provide a practical answer, but without explanatory power.) In the case of anomalies, we have only one image as the unique source of information on the background, and no examples at all of the anomaly. Thus one is led to learn a background model from very few samples, and to detect an anomaly as a large deviation from this background model. I'll show how the anomaly detection problem can be reduced to the simpler problem of detecting outliers in noise. I'll develop the proposed solution as a logical deduction from the huge literature on anomaly detection. Work carried out in collaboration with Axel Davy, Mauricio Delbracio and Thibaud Ehret.
|
15:30-16:00 |
Coffee Break
|
16:00-16:45 |
Michael Möller, Nonconvex Majorization Minimization via Functional Lifting
High-dimensional optimization problems with a nonconvex objective function are often extremely difficult to solve globally, such that one usually settles for a local minimum or critical point only. Interestingly, if the objective has a certain structure, functional lifting makes it possible to faithfully approximate the original minimization problem by a convex problem in a higher dimensional space, and often yields near-globally optimal solutions. In this talk I will review some of the recent advances in functional lifting and present a novel nonconvex majorization-minimization technique that can extend some advantages of functional lifting techniques to a much larger class of nonconvex composite functions.
|
16:45-17:30 |
Peter Ochs, Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms
We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. Special instances of the algorithm with a Euclidean distance function are, for example, Gradient Descent, Forward-Backward Splitting, ProxDescent, without the common requirement of a "Lipschitz continuous gradient". In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) replacing the Euclidean distance. The algorithm has a wide range of applications including many linear and non-linear inverse problems in image processing and machine learning.
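One concrete non-Euclidean instance of such a Bregman proximal step is entropic mirror descent on the simplex, where the Bregman distance generated by the negative entropy gives a closed-form update. The sketch below is a toy instance with a linear model function and assumed step size, not the full algorithm with line search from the talk.

```python
import numpy as np

# Entropic mirror descent: Bregman proximal steps with the KL divergence
# (generated by the Legendre function sum_i x_i log x_i) on the simplex.
def mirror_descent(grad, x0, t=0.5, iters=200):
    x = x0
    for _ in range(iters):
        x = x * np.exp(-t * grad(x))   # closed-form Bregman proximal step
        x /= x.sum()                   # multiplicative update stays on the simplex
    return x

c = np.array([3.0, 1.0, 2.0])          # minimize <c, x> over the simplex
x = mirror_descent(lambda x: c, np.full(3, 1.0 / 3.0))
print(np.argmax(x))                    # mass concentrates on the cheapest coordinate
```

With the squared Euclidean distance in place of the KL divergence, the same scheme reduces to projected gradient descent, illustrating how the Euclidean special cases listed in the abstract arise.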
|
|
Webmaster: Kostas Papafitsoros
Original webdesign by Caroline Löbhard
|
|