Publications

Articles in Refereed Journals

  • R.A. Vandermeulen, R. Saitenmacher, Generalized identifiability bounds for mixture models with grouped samples, IEEE Transactions on Information Theory, 70 (2024), pp. 2746--2758, DOI 10.1109/TIT.2024.3367433.

  • L. Schweizer, P. Seegerer, H.-y. Kim, R. Saitenmacher, A. Muench, L. Barnick, A. Osterloh, C. Dittmayer, R. Jödicke, D. Pehl, A. Reinhardt, K. Ruprecht, W. Stenzel, A.K. Wefers, P.N. Harter, U. Schüller, F.L. Heppner, M. Alber, K.-R. Müller, F. Klauschen, Analysing cerebrospinal fluid with explainable deep learning: From diagnostics to insights, Neuropathology and Applied Neurobiology, 49 (2023), pp. e12866/1--e12866/16, DOI 10.1111/nan.12866.

  • H.T. Chu, L. Liang, K.-Ch. Toh, L. Yang, An efficient implementable inexact entropic proximal point algorithm for a class of linear programming problems, Computational Optimization and Applications, 85 (2023), pp. 107--146, DOI 10.1007/s10589-023-00459-2.

Contributions to Collected Editions

  • H. Kremer, Y. Nemmour, B. Schölkopf, J.-J. Zhu, Estimation beyond data reweighting: Kernel method of moments, in: Proceedings of the 40th International Conference on Machine Learning (ICML), A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett, eds., vol. 202 of Proceedings of Machine Learning Research, 2023, pp. 17745--17783.

  • P. Dvurechensky, J.-J. Zhu, Kernel mirror prox and RKHS gradient flow for mixed functional Nash equilibrium, in: Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS), S. Dasgupta, S. Mandt, Y. Li, eds., vol. 238 of Proceedings of Machine Learning Research, 2024, pp. 2350--2358, DOI 10.20347/WIAS.PREPRINT.3032.
    Abstract
    The theoretical analysis of machine learning algorithms, such as deep generative modeling, motivates multiple recent works on the Mixed Nash Equilibrium (MNE) problem. Different from MNE, this paper formulates the Mixed Functional Nash Equilibrium (MFNE), which replaces one of the measure optimization problems with optimization over a class of dual functions, e.g., the reproducing kernel Hilbert space (RKHS) in the case of Mixed Kernel Nash Equilibrium (MKNE). We show that our MFNE and MKNE framework forms the backbone that governs several existing machine learning algorithms, such as implicit generative models, distributionally robust optimization (DRO), and Wasserstein barycenters. To model the infinite-dimensional continuous-limit optimization dynamics, we propose the Interacting Wasserstein-Kernel Gradient Flow, which includes the RKHS flow that is much less common than the Wasserstein gradient flow but enjoys a much simpler convexity structure. Time-discretizing this gradient flow, we propose a primal-dual kernel mirror prox algorithm, which alternates between a dual step in the RKHS and a primal step in the space of probability measures. We then provide the first unified convergence analysis of our algorithm for this class of MKNE problems, which establishes a convergence rate of O(1/N) in the deterministic case and O(1/√N) in the stochastic case. As a case study, we apply our analysis to DRO, providing the first primal-dual convergence analysis for DRO with probability-metric constraints.
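
    As a concrete illustration of the primal-dual scheme above, the following is a minimal, hypothetical sketch on a finite support: for the saddle objective L(p, a) = a^T K (p - q) - 0.5 a^T K a, the inner maximum over the RKHS coefficients a equals half the squared MMD between p and q, so a mirror prox (extragradient) iteration can alternate an entropic mirror step on the probability simplex with an ascent step in the RKHS. This is a toy instance of the problem class, not the authors' implementation; the kernel, step sizes, and support are placeholder choices.

      import numpy as np

      # Toy saddle-point problem on a finite support x_1..x_n:
      #   min_{p in simplex} max_{a}  a^T K (p - q) - 0.5 a^T K a,
      # whose inner maximum equals 0.5 * MMD^2(p, q) for the kernel K.
      # The dual RKHS function is f = sum_i a_i k(x_i, .).
      x = np.linspace(-3.0, 3.0, 50)                     # shared support
      K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)  # Gaussian kernel matrix

      q = np.exp(-0.5 * (x - 1.0) ** 2); q /= q.sum()    # target measure
      p = np.ones_like(x) / x.size                       # initial measure (uniform)
      a = np.zeros_like(x)                               # dual RKHS coefficients
      eta, tau = 0.5, 0.05                               # primal / dual step sizes

      def grads(p, a):
          """Partial gradients of the saddle objective at (p, a)."""
          return K @ a, K @ (p - q - a)

      for _ in range(1000):
          # Mirror prox = extragradient: a predictor step ...
          g_p, g_a = grads(p, a)
          p_half = p * np.exp(-eta * g_p); p_half /= p_half.sum()  # entropic step
          a_half = a + tau * g_a                                   # RKHS ascent
          # ... then a corrector step from (p, a) using predictor gradients.
          g_p, g_a = grads(p_half, a_half)
          p = p * np.exp(-eta * g_p); p /= p.sum()
          a = a + tau * g_a

      mmd2 = (p - q) @ K @ (p - q)
      print(f"squared MMD after mirror prox: {mmd2:.2e}")  # should be near zero

    The entropic (multiplicative) update keeps p a probability vector at every iteration, mirroring the primal step in the space of probability measures described in the abstract.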

Talks, Posters

  • J.-J. Zhu, Kernelization, approximation, and entropy dissipation of gradient flows, January 24 - 26, 2024, RIKEN, Center for Advanced Intelligence Project, Japan.

  • L. Liang, A squared smoothing Newton method for semidefinite programming, SIAM Conference on Optimization (OP23), MS331 ``A Newton-type Method for SDP'', May 30 - June 3, 2023, Seattle, USA, June 3, 2023.

  • J.-J. Zhu, Approximating forces of gradient flows for robust machine learning, Variational and Information Flows in Machine Learning and Optimal Transport, November 19 - 25, 2023, Mathematisches Forschungsinstitut Oberwolfach, November 21, 2023.

  • J.-J. Zhu, Duality from distributionally robust learning to gradient flow force-balance, ICML 2023 Workshop on Duality Principles for Modern Machine Learning, July 27 - 29, 2023, Honolulu, USA, July 29, 2023.

  • J.-J. Zhu, From gradient flow force-balance to distributionally robust learning, European Conference on Computational Optimization (EUCCO), Minisymposium ML3 ``Optimization and Machine Learning'', September 25 - 27, 2023, Universität Heidelberg, September 26, 2023.

  • J.-J. Zhu, From gradient flow force-balance to distributionally robust machine learning, Universität Bonn, Mathematisch-Naturwissenschaftliche Fakultät, May 23, 2023.

  • J.-J. Zhu, From gradient flow to distributionally robust optimization, Seminar of the Computer Science Department, University of British Columbia, Vancouver, Canada, June 5, 2023.

  • J.-J. Zhu, Learning with kernel gradient flow, Thematic Einstein Semester Conference on Mathematical Optimization for Machine Learning, September 13 - 15, 2023, Mathematics Research Cluster MATH+, Berlin, September 15, 2023.

  • J.-J. Zhu, Optimization and dynamics: From Euclidean gradient descent to Wasserstein gradient flow, International Workshop of Intelligent Autonomous Learning Systems 2023, August 14 - 17, 2023, Technische Universität Darmstadt, Intelligent Autonomous Systems, Computer Science Department, August 15, 2023.

  • J.-J. Zhu, Principled robust machine learning in new geometries, Leibniz MMS Days 2023, April 17 - 19, 2023, Leibniz-Institut für Agrartechnik und Bioökonomie (ATB), Potsdam, April 17, 2023.

External Preprints

  • E. Gladin, P. Dvurechensky, A. Mielke, J.-J. Zhu, Interaction-force transport gradient flows, Preprint no. arXiv:2405.17075, Cornell University, 2024, DOI 10.48550/arXiv.2405.17075.
    Abstract
    This paper presents a new type of gradient flow geometry over non-negative and probability measures, motivated via a principled construction that combines optimal transport and interaction forces modeled by reproducing kernels. Concretely, we propose the interaction-force transport (IFT) gradient flows and their spherical variant via an infimal convolution of the Wasserstein and spherical MMD Riemannian metric tensors. We then develop a particle-based optimization algorithm based on the JKO-splitting scheme of the mass-preserving spherical IFT gradient flows. Finally, we provide both theoretical global exponential convergence guarantees and empirical simulation results for applying the IFT gradient flows to the sampling task of MMD minimization studied by Arbel et al. [2019]. Furthermore, we prove that the spherical IFT gradient flow enjoys the best of both worlds by providing the global exponential convergence guarantee for both the MMD and KL energy.
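
    As a point of reference for the sampling task mentioned above, the following is a minimal, hypothetical sketch of the plain MMD particle flow of Arbel et al. [2019], against which the IFT flows are positioned: it moves particles along the (Wasserstein) gradient of the squared MMD but, unlike the IFT splitting, never updates particle weights. Kernel bandwidth, step size, and data are placeholder choices, not taken from the paper.

      import numpy as np

      h = 2.0  # Gaussian kernel bandwidth (illustrative)

      def dk(u, v):
          """d/du of the kernel k(u, v) = exp(-(u - v)^2 / (2 h^2)), pairwise."""
          d = u[:, None] - v[None, :]
          return -(d / h**2) * np.exp(-0.5 * d**2 / h**2)

      rng = np.random.default_rng(1)
      z = rng.normal(loc=2.0, scale=0.5, size=200)   # samples from the target
      y = rng.normal(loc=-2.0, scale=1.0, size=100)  # initial particles

      step = 0.5
      for _ in range(500):
          # Descend along the gradient of the MMD witness function
          # (the Wasserstein gradient of MMD^2, up to a constant factor).
          grad = 2.0 * (dk(y, y).mean(axis=1) - dk(y, z).mean(axis=1))
          y -= step * grad
          # An IFT-style splitting would additionally reweight, create, or
          # destroy particle mass here; omitted in this sketch.

      print(f"particle mean {y.mean():.2f} vs target mean {z.mean():.2f}")

    Because the mass of each particle is fixed, this baseline can only transport mass; the mass creation and destruction enabled by the spherical IFT geometry is precisely what the abstract's global exponential convergence guarantees exploit.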

  • L. Liang, D. Sun, K.-Ch. Toh, A squared smoothing Newton method for semidefinite programming, Preprint no. arXiv:2303.05825, Cornell University, 2023, DOI 10.48550/arXiv.2303.05825.

  • Z. Zhong, J.-J. Zhu, Nonlinear Wasserstein distributionally robust optimal control, Preprint no. arXiv:2304.07415, Cornell University, 2023, DOI 10.48550/arXiv.2304.07415.

  • J.-J. Zhu, Propagating kernel ambiguity sets in nonlinear data-driven dynamics models, Preprint no. arXiv:2304.14057, Cornell University, 2023, DOI 10.48550/arXiv.2304.14057.