Automated age estimation from MRI volumes of the hand

Publication date: December 2019

Source: Medical Image Analysis, Volume 58

Author(s): Darko Štern, Christian Payer, Martin Urschler

Abstract

Estimating the unknown age of children and adolescents is highly relevant for both clinical and legal medicine. The established radiological methods are based on visual examination of bone ossification in X-ray images of the hand. Our group has initiated the development of fully automatic age estimation methods from 3D MRI scans of the hand, in order to simultaneously overcome the problems of the radiological methods, including (1) exposure to ionizing radiation, (2) the necessity to define new, MRI-specific staging systems, and (3) the subjective influence of the examiner. The present work provides a theoretical background for understanding the nonlinear regression problem of biological age estimation and chronological age approximation. Based on this theoretical background, we comprehensively evaluate machine learning methods (random forests, deep convolutional neural networks) with different simplifications of the image information used as input for learning. Training on a large dataset of 328 MR images, we compare the performance of the different input strategies and demonstrate unprecedented results. For estimating biological age, we obtain a mean absolute error of 0.37 ± 0.51 years for subjects ≤ 18 years of age, i.e. the range in which bone ossification has not yet saturated. Finally, we validate our findings by adapting our best performing method to 2D images and applying it to a publicly available dataset of X-ray images, showing that it is in line with state-of-the-art automatic methods for this task.
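
The paper evaluates both random forests and deep convolutional neural networks; a minimal sketch of the deep-learning variant conveys the core idea of treating age estimation as nonlinear regression on a 3D volume, trained with an L1 loss so that the training objective directly matches the reported mean absolute error in years. The architecture, layer sizes, and names below are illustrative assumptions (in PyTorch), not the authors' exact network.

import torch
import torch.nn as nn

class AgeRegressor3D(nn.Module):
    """Small 3D CNN mapping an MRI volume to a single age estimate in years."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),        # global pooling -> (B, 32, 1, 1, 1)
        )
        self.head = nn.Linear(32, 1)        # regression output: age in years

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f).squeeze(1)

model = AgeRegressor3D()
volume = torch.randn(4, 1, 64, 64, 32)          # batch of 4 toy hand MRI volumes
target_age = torch.tensor([10.5, 14.0, 16.2, 12.3])
loss = nn.L1Loss()(model(volume), target_age)   # L1 = mean absolute error in years
loss.backward()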

Surface-constrained volumetric registration for the early developing brain

Publication date: December 2019

Source: Medical Image Analysis, Volume 58

Author(s): Sahar Ahmad, Zhengwang Wu, Gang Li, Li Wang, Weili Lin, Pew-Thian Yap, Dinggang Shen

Abstract

The T1-weighted and T2-weighted MRI contrasts of the infant brain evolve drastically during the first year of life. This poses significant challenges to inter- and intra-subject registration, which is key to subsequent statistical analyses. Existing registration methods that do not account for these temporal contrast changes are ineffective for infant brain MRI data. To address this problem, we present a method for deformable registration of infant brain MRI. The advantages of our method are threefold: (i) to deal with appearance changes, registration is performed on segmented tissue maps instead of image intensities, where segmentation uses an infant-centric algorithm previously developed by our group; (ii) registration is carried out with respect to both cortical surfaces and volumetric tissue maps, allowing precise alignment of both cortical and subcortical structures; (iii) a dynamic elasticity model is utilized to allow large nonlinear deformations. Experimental results, in comparison with well-established registration methods, indicate that our method yields superior accuracy in both cortical and subcortical alignment.
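
The paper's registration is driven by segmented tissue maps rather than raw intensities and uses a dynamic elasticity model constrained by cortical surfaces. The sketch below illustrates only the first of these ideas, substituting a generic diffeomorphic demons filter from SimpleITK for the authors' deformation model; the toy probability maps stand in for the output of an infant-specific segmentation.

import numpy as np
import SimpleITK as sitk

def as_image(arr):
    return sitk.GetImageFromArray(arr.astype(np.float32))

# Toy white-matter probability maps for two time points (assumed inputs; in
# practice these would come from an infant-specific segmentation algorithm).
fixed_np = np.zeros((32, 64, 64), dtype=np.float32)
fixed_np[8:24, 16:48, 16:48] = 1.0
moving_np = np.roll(fixed_np, shift=4, axis=2)      # same shape, translated

# Slight smoothing gives the demons filter usable gradients at tissue borders.
fixed = sitk.SmoothingRecursiveGaussian(as_image(fixed_np), 1.0)
moving = sitk.SmoothingRecursiveGaussian(as_image(moving_np), 1.0)

demons = sitk.DiffeomorphicDemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.5)                   # smoothing of the update field
displacement = demons.Execute(fixed, moving)

warp = sitk.DisplacementFieldTransform(displacement)
aligned = sitk.Resample(moving, fixed, warp, sitk.sitkLinear, 0.0)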

Computer-aided detection and visualization of pulmonary embolism using a novel, compact, and discriminative image representation

Publication date: December 2019

Source: Medical Image Analysis, Volume 58

Author(s): Nima Tajbakhsh, Jae Y. Shin, Michael B. Gotway, Jianming Liang

Abstract

Diagnosing pulmonary embolism (PE) and excluding disorders that may clinically and radiologically mimic PE pose a challenging task for both human and machine perception. In this paper, we propose a novel vessel-oriented image representation (VOIR) that can improve the machine perception of PE through a consistent, compact, and discriminative image representation, and can also improve radiologists' diagnostic capabilities for PE assessment by serving as the backbone of an effective PE visualization system. Specifically, our image representation can be used to train more effective convolutional neural networks (CNNs) for distinguishing PE from PE mimics, and also allows radiologists to inspect the vessel lumen from multiple perspectives, so that they can report filling defects (PE), if any, with confidence. Our image representation offers four advantages: (1) efficiency and compactness: it concisely summarizes the 3D contextual information around an embolus in only three image channels; (2) consistency: it automatically aligns the embolus in the 3-channel images according to the orientation of the affected vessel; (3) expandability: it naturally supports data augmentation for training CNNs; and (4) multi-view visualization: it maximally reveals filling defects. To evaluate the effectiveness of VOIR for PE diagnosis, we use 121 CTPA datasets with a total of 326 emboli. We first compare VOIR with two other compact alternatives using six CNN architectures of varying depths and under varying amounts of labeled training data. Our experiments demonstrate that VOIR enables faster training of a higher-performing model compared to the other compact representations, even in the absence of deep architectures and large labeled training sets. Our experiments comparing VOIR with a 3D image representation further demonstrate that a 2D CNN trained with VOIR achieves a significant performance gain over 3D CNNs. Our robustness analyses also show that the proposed PE CAD system is robust to the choice of CT scanner and to the physical size of the crops used for training. Finally, our PE CAD system was ranked second in the PE challenge in the category of 0 mm localization error.
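
The three-channel construction can be pictured as resampling planes aligned with the affected vessel. The sketch below is an assumed reading of that idea, not the authors' exact construction: given an embolus location and the local vessel direction, it samples one cross-sectional and two longitudinal planes with NumPy/SciPy and stacks them into three channels. Rotating the in-plane axes about the vessel axis would yield the kind of data augmentation named in advantage (3).

import numpy as np
from scipy.ndimage import map_coordinates

def orthonormal_frame(axis):
    """Return unit vectors (u, v, w) with w along the vessel axis."""
    w = axis / np.linalg.norm(axis)
    helper = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(w, helper); u /= np.linalg.norm(u)
    v = np.cross(w, u)
    return u, v, w

def sample_plane(volume, center, e1, e2, size=32, spacing=1.0):
    """Trilinearly sample a size x size plane spanned by e1, e2 around center."""
    r = (np.arange(size) - size / 2) * spacing
    g1, g2 = np.meshgrid(r, r, indexing="ij")
    pts = center[:, None, None] + e1[:, None, None] * g1 + e2[:, None, None] * g2
    return map_coordinates(volume, pts, order=1, mode="nearest")

def voir_channels(volume, center, axis, size=32):
    u, v, w = orthonormal_frame(np.asarray(axis, dtype=float))
    cross = sample_plane(volume, center, u, v, size)   # perpendicular to vessel
    long1 = sample_plane(volume, center, u, w, size)   # along the vessel
    long2 = sample_plane(volume, center, v, w, size)
    return np.stack([cross, long1, long2])             # (3, size, size)

ct = np.random.rand(64, 64, 64)                        # toy CTPA volume
channels = voir_channels(ct, center=np.array([32.0, 32.0, 32.0]), axis=[0, 0, 1])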

Disentangled representation learning in cardiac image analysis

Publication date: December 2019

Source: Medical Image Analysis, Volume 58

Author(s): Agisilaos Chartsias, Thomas Joyce, Giorgos Papanastasiou, Scott Semple, Michelle Williams, David E. Newby, Rohan Dharmakumar, Sotirios A. Tsaftaris

Abstract

Typically, a medical image offers spatial information on the anatomy (and pathology) modulated by imaging-specific characteristics. Many imaging modalities, including Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can be interpreted in this way. We can venture further and consider that a medical image naturally factors into spatial factors depicting the anatomy and non-spatial factors denoting the imaging characteristics. Here, we explicitly learn this decomposed (disentangled) representation of imaging data, focusing in particular on cardiac images. We propose the Spatial Decomposition Network (SDNet), which factorises 2D medical images into spatial anatomical factors and non-spatial modality factors. We demonstrate that this high-level representation is ideally suited for several medical image analysis tasks, such as semi-supervised segmentation, multi-task segmentation and regression, and image-to-image synthesis. Specifically, we show that our model can match the performance of fully supervised segmentation models using only a fraction of the labelled images. Critically, we show that our factorised representation also benefits from supervision obtained either when we use auxiliary tasks to train the model in a multi-task setting (e.g. regressing known cardiac indices) or when we aggregate multimodal data from different sources (e.g. pooling together MRI and CT data). To explore the properties of the learned factorisation, we perform latent-space arithmetic and show that we can synthesise CT from MR and vice versa by swapping the modality factors. We also demonstrate that the factor holding image-specific information can be used to predict the input modality with high accuracy. Code will be made available at https://github.com/agis85/anatomy_modality_decomposition.
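
A minimal skeleton helps to fix the idea of the two factor types: spatial anatomy channels that keep the image grid, and a small non-spatial modality vector used to re-render them. The module names, shapes, and FiLM-style decoder below are assumptions for illustration, not the published SDNet; the last line shows the latent-space swap behind the MR-to-CT synthesis described in the abstract.

import torch
import torch.nn as nn

class AnatomyEncoder(nn.Module):
    """Maps a 2D image to K spatial factor channels (softmax over channels)."""
    def __init__(self, k=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, k, 3, padding=1),
        )
    def forward(self, x):
        return torch.softmax(self.net(x), dim=1)   # near-categorical spatial maps

class ModalityEncoder(nn.Module):
    """Maps image + anatomy to a small non-spatial modality vector."""
    def __init__(self, k=8, z=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k + 1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, z),
        )
    def forward(self, x, s):
        return self.net(torch.cat([x, s], dim=1))

class Decoder(nn.Module):
    """Re-renders the anatomy channels under a given modality vector."""
    def __init__(self, k=8, z=8):
        super().__init__()
        self.film = nn.Linear(z, 2 * k)            # per-channel scale and shift
        self.render = nn.Conv2d(k, 1, 3, padding=1)
    def forward(self, s, z):
        gamma, beta = self.film(z).chunk(2, dim=1)
        s = s * gamma[..., None, None] + beta[..., None, None]
        return self.render(s)

anat, moda, dec = AnatomyEncoder(), ModalityEncoder(), Decoder()
mr, ct = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
s_mr, s_ct = anat(mr), anat(ct)
z_mr, z_ct = moda(mr, s_mr), moda(ct, s_ct)
fake_ct = dec(s_mr, z_ct)   # swap modality factors: MR anatomy rendered as CT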
