UNSURE2019
Uncertainty for Safe Utilization of Machine Learning in Medical Imaging

About the MICCAI UNSURE Workshop

Overview

With the rise and influence of machine learning (ML) in medical applications and the need to translate newly developed techniques into clinical practice, questions about safety and about uncertainty in measurements and reported quantities have gained importance. Obtaining accurate measurements is not sufficient in itself: one also needs to establish the circumstances under which these values generalize, or to give appropriate error bounds for them. This is becoming particularly relevant to patient safety, as many research groups and companies have deployed, or aim to deploy, ML technology in clinical practice.

The purpose of this workshop is to raise awareness and encourage research on uncertainty modelling to ensure safety in applications spanning both the MIC and CAI fields. In particular, this workshop invites submissions covering the different facets of this topic, including but not limited to: detection and quantification of algorithmic failures; healthcare risk-management processes (e.g. for CAD systems); robustness and adaptation to domain shifts; evaluation of uncertainty estimates; and defence against noise and mistakes in data (e.g. bias, label errors, measurement noise, inter/intra-observer variability). The workshop aims to encourage contributions across a wide range of applications and types of ML algorithms. The use or development of any relevant ML method is welcome, including, but not limited to, probabilistic deep learning, Bayesian nonparametric statistics, graphical models and Gaussian processes. We also aim to ensure broad coverage of applications in the context of both MIC and CAI, which we categorize into reporting problems (descriptions of image contents), such as diagnosis, measurement, segmentation and detection, and enhancement problems (addition of information), such as image synthesis, registration, reconstruction, super-resolution, harmonisation, inpainting and augmented display.

Details

In the last few years, machine learning (ML) techniques have permeated many aspects of the MICCAI community, leading to substantial progress in applications ranging from image analysis to surgical assistance. However, in medical applications, algorithms ultimately assist life-and-death decisions, and translating such innovations into practice requires a measure of safety. In practice, ML systems often face situations where the correct decision is ambiguous, and principled mechanisms for quantifying uncertainty are therefore required before practical deployment can be envisioned.

Safety is indeed paramount in medical imaging applications, where images inform scientific conclusions in research, and diagnostic, prognostic or interventional decisions in the clinic. However, research efforts have mostly focused on improving accuracy, while systematic approaches for ensuring the safety of automated systems derived from medical imaging remain largely lacking.

Uncertainty quantification has recently attracted attention in the MICCAI community as a promising approach to provide a reliability metric for a system's output, and as a mechanism to communicate the knowledge boundary of such ML systems. Spurred by this emerging interest, the workshop will encourage discussions on uncertainty modelling and alternative approaches for risk management in a wide range of medical applications. It thereby aims to highlight both fundamental and practical challenges that need to be addressed to achieve safer implementations of ML systems in the clinical world.

Call for Papers

Scope

We accept submissions of original, unpublished work on safety and uncertainty in medical imaging, including (but not limited to) the following areas:

  • Uncertainty quantification in any MIC or CAI applications
  • Risk management of ML systems in clinical pipelines
  • Defending against hallucinations in enhancement tasks (e.g. super-resolution, reconstruction, modality translation)
  • Robustness to domain shifts
  • Measurement errors
  • Modelling noise in data (e.g. labels, measurements, inter/intra-observer variability)
  • Validation of uncertainty estimates
  • Active Learning
  • Confidence bounds
  • Posterior inference over point estimates
  • Bayesian deep learning
  • Graphical models
  • Gaussian processes
  • Calibration of uncertainty measures
  • Bayesian decision theory
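To make one of the topics above concrete, here is a minimal, self-contained sketch (not tied to any submitted work) of how predictive uncertainty is often summarized for a classifier: given class probabilities from several stochastic forward passes (e.g. Monte Carlo dropout or a deep ensemble), one computes the entropy of the predictive mean as a total-uncertainty measure and the mutual information as its epistemic part. The function name and the toy inputs are illustrative.

```python
import numpy as np

def predictive_uncertainty(mc_probs):
    """Given mc_probs of shape (T, C) -- class probabilities from T
    stochastic forward passes -- return the predictive mean, the
    predictive entropy (total uncertainty) and the mutual information
    between prediction and model parameters (epistemic uncertainty)."""
    mean = mc_probs.mean(axis=0)                       # predictive distribution
    entropy = -np.sum(mean * np.log(mean + 1e-12))     # total uncertainty
    expected_entropy = -np.mean(
        np.sum(mc_probs * np.log(mc_probs + 1e-12), axis=1))
    mutual_info = entropy - expected_entropy           # epistemic part
    return mean, entropy, mutual_info

# Confident case: all passes agree on class 0.
agree = np.array([[0.95, 0.05]] * 10)
# Uncertain case: passes disagree between the two classes.
disagree = np.array([[0.9, 0.1], [0.1, 0.9]] * 5)

_, h_agree, mi_agree = predictive_uncertainty(agree)
_, h_disagree, mi_disagree = predictive_uncertainty(disagree)
print(h_agree < h_disagree, mi_agree < mi_disagree)  # True True
```

When the passes disagree, both the predictive entropy and the mutual information rise, which is why the latter is commonly used to flag inputs (e.g. out-of-distribution scans) on which the model itself, rather than the data, is the source of uncertainty.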

Submission Format

Submissions must be 8-page papers (excluding references) following the Springer LNCS format. Author names, affiliations and acknowledgements, as well as any obvious phrasing or clues that could identify the authors, must be removed to ensure anonymity. Note that, in contrast to the requirements of the main MICCAI conference, the 8-page limit refers only to the main content; including references and acknowledgements, the submission may exceed 8 pages.

How to submit?

Please submit papers using the paper submission system. The submission deadline is July 19, 2019.

Publication

All accepted papers will be published by Springer as part of the joint MICCAI Satellite Events LNCS proceedings.

Presentation

All accepted papers will be presented at the UNSURE workshop on October 17, 2019 in Shenzhen, China. We will select a number of papers for oral presentation based on the reviewers' suggestions. All authors will additionally have the opportunity to present their work as a poster during the workshop poster session.

Resources

Best paper award

The UNSURE 2019 best paper award was presented to Raghav Mehta and colleagues for their work: Propagating Uncertainty Across Cascaded Medical Imaging Tasks for Improved Deep Learning Inference.

Proceedings

The full Springer proceedings for the UNSURE2019 workshop are available here: https://link.springer.com/book/10.1007%2F978-3-030-32689-0

Video Recordings

The video recordings of the long orals and the keynote talks can be found in the following YouTube playlist.

Presentation Slides

Program

Schedule

  • 12:30-12:35 Welcoming Words
  • 12:35-13:10 Keynote - Koen van Leemput (slides)
  • 13:10-14:00 Oral Session 1
    • 13:10-13:30 (Long Oral) "Propagating Uncertainty Across Cascaded Medical Imaging Tasks for Improved Deep Learning Inference", Raghav Mehta, Thomas D Christinck, Tanya Nair, Paul Lemaitre, Douglas Arnold, Tal Arbel (slides)
    • 13:30-13:35 (Spotlight) "Probabilistic Surface Reconstruction with Unknown Correspondence", Dennis Madsen, Thomas Vetter, Marcel Luthi (slides)
    • 13:35-13:40 (Spotlight) "Probabilistic Image Registration via Deep Multi-class Classification Characterizing Uncertainty", Alireza Sedghi, Jie Luo, Tina Kapur, Parvin Mousavi, William M Wells (slides)
    • 13:40-13:45 (Spotlight) "Reg R-CNN: Lesion Detection and Grading under Noisy Labels", Gregor N Ramien, Paul Jäger, Simon A Kohl, Klaus H Maier-Hein (slides)
    • 13:45-13:50 (Spotlight) "Quantifying Uncertainty of Deep Neural Networks in Skin Lesion Classification", Pieter Van Molle, Tim Verbelen, Cedric de Boom, Bert Vankeirsbilck, Jonas De Vylder, Bart Diricx, Tom Kimpe, Pieter Simoens, Bart Dhoedt (slides)
    • 13:50-13:55 (Spotlight) "A Generalized Approach to Determine Confident Samples for Deep Neural Network on Unseen Data", Min Zhang, Kevin Leung, Zili Ma, Wen Jin, Avinash Gopal
  • 14:00-15:00 Poster Session
  • 15:00-15:35 Keynote - Yingzhen Li (slides)
  • 15:35-16:15 Oral Session 2
    • 15:35-15:55 (Long Oral) "Fast Nonparametric Mutual-Information based Registration and Uncertainty Estimation", Mikael Agn, Koen Van Leemput (slides)
    • 15:55-16:15 (Long Oral) "Out of Distribution Detection for Intra-Operative Functional Imaging", Tim Adler, Leonardo Ayala, Lynton Ardizzone, Hannes Kenngott, Anant Vemuri, Beat Muller-Stich, Carsten Rother, Ullrich Köthe, Lena Maier-Hein (slides)
  • 16:15-16:30 Concluding Remarks

Keynote Speakers

Koen Van Leemput

Technical University of Denmark & Harvard Medical School
Koen Van Leemput obtained his PhD degree from KU Leuven, Belgium, in 2001. Since 2007 he has been a faculty member at the Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, and since 2011 also at the Department of Applied Mathematics and Computer Science, Technical University of Denmark. Between 2007 and 2011 he was also a research scientist at the MIT Computer Science and Artificial Intelligence Laboratory. His research focuses on computational analysis of medical images using Bayesian modeling.

Yingzhen Li

Microsoft Research Cambridge
Yingzhen Li obtained her PhD in machine learning from the University of Cambridge, where she was also a member of Darwin College. At Microsoft Research she works primarily on probabilistic modelling and representation learning. Her main research interests include (deep) probabilistic graphical model design, fast and accurate (Bayesian) inference and computation techniques, and uncertainty quantification for computation and downstream tasks.

UNSURE2019 Organizing Committee

Organizers

Tal Arbel

McGill Centre for Intelligent Machines, McGill University

Christian Baumgartner

Computer Vision Lab, ETH Zurich

Adrian Dalca

CSAIL, MIT and
MGH, Harvard Medical School

Carole Sudre

Department of Biomedical Engineering, King's College London

Ryutaro Tanno

Department of Computer Science, University College London

William (Sandy) Wells

Radiology, BWH, Harvard Medical School

Program Committee

  • Alejandro Granados (King's College London)
  • Angelos Filos (Oxford University)
  • Christian Wachinger (Ludwig Maximilian University of Munich)
  • Daniel Coelho (Imperial College London)
  • Daniel Worrall (University of Amsterdam)
  • Felix Bragman (King's College London)
  • Harry Lin (University College London)
  • Jorge Cardoso (King's College London)
  • Juan Cerrolaza (Accenture)
  • Juan Eugenio Iglesias (University College London)
  • Kerem Can Tezcan (ETH Zurich)
  • Koen van Leemput (Harvard Medical School / TU Denmark)
  • Leo Joskowicz (Hebrew University of Jerusalem)
  • Liane Canas (King's College London)
  • Lucas Fidon (King's College London)
  • Raghav Mehta (McGill University)
  • Reuben Dorent (King's College London)
  • Simon Kohl (DeepMind)
  • Tanya Nair (Imagia)
  • Thomas Varsavsky (King's College London)
  • Zach Eaton-Rosen (King's College London)

Contact

For general inquiries please send an email to: unsure-info@vision.ee.ethz.ch