Provably Convergent Data-Driven Convex-Nonconvex Regularization

Jeremy Budd, Zakhar Shumaylov*, Carola-Bibiane Schönlieb, Subhadip Mukherjee

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

An emerging paradigm for solving inverse problems is to use deep learning to learn a regularizer from data. This leads to high-quality results, but often at the cost of provable guarantees. In this work, we show how well-posedness and convergent regularization arise within the convex-nonconvex (CNC) framework for inverse problems. We introduce a novel input weakly convex neural network (IWCNN) construction to adapt the method of learned adversarial regularization to the CNC framework. Empirically, we show that our method overcomes numerical issues of previous adversarial methods.
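The abstract's central object is a weakly convex regularizer. As a minimal illustration only (this is not the paper's IWCNN construction, and all layer sizes, weights, and the modulus `rho` are hypothetical), a ρ-weakly convex function can be obtained by subtracting a quadratic from a small input-convex neural network (ICNN): by definition, f is ρ-weakly convex exactly when f + (ρ/2)‖·‖² is convex.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy ICNN with one hidden layer. Convexity in x holds
# because the hidden-to-output weights A1 are non-negative and the
# activation (ReLU) is convex and non-decreasing.
W0 = rng.standard_normal((8, 3))
b0 = rng.standard_normal(8)
A1 = np.abs(rng.standard_normal((1, 8)))  # non-negative hidden weights
W1 = rng.standard_normal((1, 3))          # skip connection from input
b1 = rng.standard_normal(1)

def icnn(x):
    """Toy input-convex network: convex as a function of x."""
    z = np.maximum(W0 @ x + b0, 0.0)
    return float(A1 @ z + W1 @ x + b1)

rho = 0.5  # hypothetical weak-convexity modulus

def weakly_convex_reg(x):
    # Convex minus quadratic is rho-weakly convex:
    # weakly_convex_reg(x) + (rho/2)*||x||^2 == icnn(x), which is convex.
    return icnn(x) - 0.5 * rho * float(np.dot(x, x))
```

A quick numerical check of the defining inequality: along any pair x, y, the midpoint value of `weakly_convex_reg` exceeds the chord midpoint by at most (ρ/8)‖x − y‖², which is the midpoint form of ρ-weak convexity.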
Original language: English
Title of host publication: NeurIPS 2023 Workshop on Deep Learning and Inverse Problems
Number of pages: 9
Publication status: Published - 23 Nov 2023

Keywords

  • Inverse problems
  • variational imaging
  • convergent regularization
  • weak convexity
  • input-convex neural networks
  • data-driven regularization
