With the rise and influence of machine learning (ML) in medical applications, and the need to translate newly developed techniques into clinical practice, questions about safety and about uncertainty in measurements and reported quantities have gained importance. Obtaining accurate measurements is insufficient on its own: one also needs to establish the circumstances under which these values generalize, or to give appropriate error bounds for them. This is becoming particularly relevant to patient safety, as many research groups and companies have deployed, or are aiming to deploy, ML technology in clinical practice.
The purpose of this workshop is to develop awareness and encourage research on uncertainty modelling to ensure safety for applications spanning both the MIC and CAI fields. In particular, this workshop invites submissions covering different facets of this topic, including but not limited to: detection and quantification of algorithmic failures; processes of healthcare risk management (e.g. CAD systems); robustness and adaptation to domain shifts; evaluation of uncertainty estimates; and defence against noise and mistakes in data (e.g. bias, label mistakes, measurement noise, inter/intra-observer variability). The workshop aims to encourage contributions across a wide range of applications and types of ML algorithm. The use or development of any relevant ML method is welcome, including, but not limited to, probabilistic deep learning, Bayesian nonparametric statistics, graphical models and Gaussian processes. We also aim to ensure broad coverage of applications in the context of both MIC and CAI, which we categorize into reporting problems (descriptions of image contents), such as diagnosis, measurement, segmentation and detection, and enhancement problems (addition of information), such as image synthesis, registration, reconstruction, super-resolution, harmonisation, inpainting and augmented display.
In the last few years, machine learning (ML) techniques have permeated many aspects of the MICCAI community, leading to substantial progress in applications from image analysis to surgical assistance. However, in medical applications, algorithms ultimately inform life-and-death decisions, and translating such innovations into practice requires a measure of safety. In practice, ML systems often face situations where the correct decision is ambiguous, so principled mechanisms for quantifying uncertainty are required before practical deployment can be envisioned.
Safety is indeed paramount in medical imaging applications, where images inform scientific conclusions in research, and diagnostic, prognostic or interventional decisions in the clinic. However, efforts have mostly focused on improving accuracy, while systematic approaches to ensuring the safety of automated systems derived from medical imaging are largely lacking in the existing body of research.
Uncertainty quantification has recently attracted attention in the MICCAI community as a promising approach to providing a reliability metric for model outputs, and as a mechanism for communicating the knowledge boundary of such ML systems. Spurred on by this emergent interest, the workshop will encourage discussion of uncertainty modelling and of alternative approaches to risk management across a wide range of medical applications. It thereby aims to highlight both the fundamental and the practical challenges that must be addressed to achieve safer implementations of ML systems in the clinical world.
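To make the notion of uncertainty quantification concrete, the following is a minimal, illustrative sketch (not tied to any particular submission or method endorsed by the workshop). It assumes predictions are collected from repeated stochastic forward passes of a classifier, as in Monte Carlo dropout or a deep ensemble, and computes the predictive entropy of the averaged prediction as a total-uncertainty score, with its standard decomposition into aleatoric and epistemic components. The synthetic logits here stand in for real model outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for T stochastic forward passes (e.g. Monte Carlo dropout or a
# deep ensemble) on a single image with n_classes possible labels.
T, n_classes = 20, 3
logits = rng.normal(loc=[2.0, 0.5, 0.1], scale=0.8, size=(T, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Averaged prediction across passes.
mean_probs = probs.mean(axis=0)

# Total uncertainty: entropy of the averaged prediction.
predictive_entropy = -(mean_probs * np.log(mean_probs)).sum()

# Aleatoric part: average entropy of the individual passes.
expected_entropy = -(probs * np.log(probs)).sum(axis=1).mean()

# Epistemic part (mutual information): non-negative by concavity of entropy;
# it is large when the passes disagree, i.e. when the model "does not know".
mutual_information = predictive_entropy - expected_entropy

print(mean_probs, predictive_entropy, mutual_information)
```

A high epistemic score can then drive a risk-management policy, for instance deferring the case to a clinician rather than reporting the prediction.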
We accept submissions of original, unpublished work on safety and uncertainty in medical imaging, including (but not limited to) the following areas:
Submissions must be 8-page papers (excluding references) in the Springer LNCS format. Author names, affiliations and acknowledgements, as well as any obvious phrasings or clues that could identify the authors, must be removed to ensure anonymity. Note that the 8-page limit refers only to the main content; including references and acknowledgements, the submission may exceed 8 pages.