Image Analysis and Modeling

We provide novel solutions to enable efficient and accurate medical image processing, such as segmentation, registration, and noise and artifact correction. We also develop software and tools to facilitate image analysis tasks on brain and musculoskeletal images for neuroscience and orthopedic research.

We proudly offer two free software packages to the public: MRiLab (MRI simulation) and MatrixUser (GUI-based image processing).


Research Highlight

Fully-automated Medical Image Segmentation

Traditional segmentation of medical images is based on the manual delineation of tissue boundaries. Manual segmentation is extremely time-consuming on high-resolution images and often leads to substantial variation due to human error and inconsistency. Our group pioneers the development of automatic approaches for segmenting medical images with the goals of achieving high accuracy, efficiency, and reproducibility. We demonstrate our algorithms on knee, lung, and brain MRI.

Knee MRI

Most recently proposed methods for fully-automated segmentation can be categorized into model-based and atlas-based approaches. Model-based approaches apply a prior statistical shape model of the image object and try to best match the model to the target image. In atlas-based approaches, on the other hand, one or multiple references are generated by aligning and merging manually segmented images into specific atlas coordinate spaces. Despite promising results from model-based and atlas-based segmentation methods, both approaches perform poorly when there is high subject variability or significant differences in local features. In addition, both approaches rely on a priori knowledge of object shapes and incur high computational costs and relatively long segmentation times. Other semi-automated techniques have been used for image segmentation, including “region growing,” “live wire,” “edge detection,” and “active contour” methods. Although these semi-automated methods can achieve good results, they generally require a great deal of user interaction and are therefore time-consuming and labor-intensive compared to fully-automated methods.
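For readers unfamiliar with these semi-automated techniques, the "region growing" idea can be sketched in a few lines of plain Python. This is a generic textbook formulation, not the exact implementation used in any of the compared methods:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity differs from the seed intensity by at most `tol`."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    mask = [[False] * cols for _ in range(rows)]
    mask[seed[0]][seed[1]] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc] \
                    and abs(image[nr][nc] - seed_val) <= tol:
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# A bright 2x2 square on a dark background
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
mask = region_grow(img, (1, 1), tol=1)
print(sum(v for row in mask for v in row))  # 4 pixels grown from the seed
```

The dependence on a user-supplied seed and tolerance is precisely the kind of interaction that fully-automated methods eliminate.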

In early 2017, our group was among the first to explore deep learning for medical image analysis and developed a first-of-its-kind convolutional neural network-based method for high-performance knee MRI segmentation. We proposed a deep learning technique for segmenting bone and cartilage on knee MR images using a convolutional encoder-decoder network, which provides accurate multi-class tissue labels for patellofemoral bone and cartilage. 3D simplex deformable modeling is then applied to refine the network output, preserving the overall shape and maintaining a desirably smooth surface for the musculoskeletal structure. Benchmarked on publicly available knee image datasets against competing segmentation methods, our fully-automated approach achieved remarkably improved accuracy and efficiency for segmenting 3D patellofemoral bone and cartilage, with significantly reduced computing time (less than one minute of processing in our method versus hours in conventional methods). Our approach demonstrated consistent segmentation performance on morphologic and quantitative MR images acquired with different sequences and spatial resolutions, including T1-weighted images, proton density-weighted images, fat-suppressed fast spin-echo images, and T2 maps.
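Segmentation accuracy in benchmarks such as this is commonly summarized with the Dice similarity coefficient. A minimal sketch follows; the exact evaluation protocol of our study is not reproduced here:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 1]])
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(3+3) = 0.667
```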

Later, we extended the segmentation framework to incorporate generative adversarial networks, achieving unsupervised multi-contrast image conversion and segmentation in knee MRI. We also improved the method to allow automated full-knee structure segmentation and parcellation, enabling efficient and highly reproducible regional quantification across all knee anatomies to better characterize regional effects.

The segmentation framework combining the proposed deep learning and deformable modeling approaches
Lung MRI

MRI of the chest can provide a regional assessment of lung ventilation using inhaled oxygen, hyperpolarized noble gases, or fluorinated gases. Oxygen-enhanced MRI using a three-dimensional radial ultrashort echo time (UTE) sequence supports quantitative differentiation of diseased vs. healthy lungs using the whole-lung ventilation defect percent.
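As a rough illustration, the ventilation defect percent can be computed as the fraction of lung-mask voxels whose ventilation signal falls below a threshold. The simple fixed threshold here is an assumption for illustration; actual thresholding strategies vary by study:

```python
import numpy as np

def ventilation_defect_percent(ventilation, lung_mask, threshold):
    """Whole-lung ventilation defect percent: the percentage of
    lung-mask voxels whose ventilation signal is below `threshold`.
    The threshold value is illustrative only."""
    lung = ventilation[lung_mask]
    return 100.0 * np.count_nonzero(lung < threshold) / lung.size

vent = np.array([[0.1, 0.8], [0.9, 0.05]])
mask = np.array([[True, True], [True, False]])
vdp = ventilation_defect_percent(vent, mask, threshold=0.2)
print(vdp)  # one of three lung voxels is below threshold, ~33.3
```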

Despite these rapid advances in pulmonary structural and functional imaging using MRI, the development of fast, reproducible, and reliable quantification tools for extracting potential biomarkers and regional image features has lagged. Segmentation of lung parenchyma from proton MRI is challenging due to modality-specific complexities, including coil inhomogeneity, arbitrary intensity values, local magnetic susceptibility, and reduced proton density caused by the large fraction of air space.

We developed and evaluated a deep learning framework for automated lung segmentation from functional lung imaging using UTE proton MRI to support fast functional quantification. Second, to understand disease-related structural alterations, parenchymal signal-intensity assessments in the upper, middle, and lower lung regions, automatically separated from the whole-lung mask, were compared between diseased and normal groups. We demonstrated that lung segmentation using deep learning enables accurate and faster volumetric and functional measurements relative to the reference method of supervised region growing.
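The regional split described above can be sketched as dividing the lung mask into equal thirds along the superior-inferior axis. This is a simplified illustration; the axis convention and the `regional_means` helper are assumptions, not our published implementation:

```python
import numpy as np

def regional_means(image, lung_mask, axis=0):
    """Split the lung mask into upper/middle/lower thirds along `axis`
    (assumed here to be the superior-inferior direction) and return the
    mean parenchymal signal within each third."""
    other_axes = tuple(a for a in range(lung_mask.ndim) if a != axis)
    idx = np.where(lung_mask.any(axis=other_axes))[0]
    lo, hi = idx.min(), idx.max() + 1
    bounds = np.linspace(lo, hi, 4).astype(int)  # three equal slabs
    means = []
    for b0, b1 in zip(bounds[:-1], bounds[1:]):
        sl = [slice(None)] * lung_mask.ndim
        sl[axis] = slice(b0, b1)
        region = lung_mask[tuple(sl)]
        means.append(float(image[tuple(sl)][region].mean()))
    return means  # [upper, middle, lower]

# Toy 6x2x2 volume whose signal increases toward the lung base
img = np.arange(24, dtype=float).reshape(6, 2, 2)
mask = np.ones((6, 2, 2), dtype=bool)
print(regional_means(img, mask))  # [3.5, 11.5, 19.5]
```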

Lung Segmentation Deep Learning Framework using Multi-Plane Consensus Labeling
Lung UTE
Segmented Lung
Brain MRI

Brain extraction or skull stripping of MRI is an essential step in neuroimaging studies, the accuracy of which can severely affect subsequent image processing procedures. Current automatic brain extraction methods demonstrate good results on human brains but are often far from satisfactory on nonhuman primates, which are a necessary part of neuroscience research.

To overcome the challenges of brain extraction in nonhuman primates, we propose a fully-automated brain extraction pipeline combining a deep Bayesian convolutional neural network (CNN) and a fully connected three-dimensional (3D) conditional random field (CRF). The deep Bayesian CNN serves as the core segmentation engine. As a probabilistic network, it not only performs accurate high-resolution pixel-wise brain segmentation but also measures model uncertainty by Monte Carlo sampling with dropout at the testing stage. The fully connected 3D CRF then refines the probability map from the Bayesian network within the whole 3D context of the brain volume. The proposed method was evaluated on a manually brain-extracted dataset comprising T1w images of 100 nonhuman primates. Our method outperforms six popular publicly available brain extraction packages and three well-established deep learning-based methods, and the improvement over all compared methods was verified by statistical tests. The maximum model uncertainty on nonhuman primate brain extraction has a mean value of 0.116 across all 100 subjects. We also studied the behavior of the uncertainty, which increases as the training set size decreases, the number of inconsistent labels in the training set increases, or the inconsistency between the training and testing sets increases.
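The Monte Carlo dropout idea behind the uncertainty measurement can be illustrated with a toy one-layer model: dropout stays active at test time, many stochastic forward passes are run, and the spread of the predictions serves as a per-voxel uncertainty estimate. The actual segmentation engine is a deep Bayesian CNN; the model, names, and values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, w, n_samples=200, p_drop=0.5):
    """Monte Carlo dropout at test time: keep dropout active, run many
    stochastic forward passes, and return the mean probability (the
    prediction) and its standard deviation (the uncertainty)."""
    probs = []
    for _ in range(n_samples):
        keep = rng.random(w.shape) >= p_drop          # random dropout mask
        logits = x @ (w * keep / (1.0 - p_drop))      # inverted dropout scaling
        probs.append(1.0 / (1.0 + np.exp(-logits)))   # sigmoid -> P(brain)
    probs = np.stack(probs)
    return probs.mean(axis=0), probs.std(axis=0)

x = np.array([[1.0, 2.0], [0.1, -0.1]])   # two "voxels" with two features
w = np.array([2.0, 1.0])
mean_p, uncertainty = mc_dropout_predict(x, w)
print(mean_p.shape, uncertainty.shape)    # one prediction and one
                                          # uncertainty value per voxel
```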

Comparison of the brain masks extracted by our method and the ground truth on a typical subject
The uncertainty map given by the Bayesian network for one subject
Numerical MRI Simulation System

MRiLab is a numerical MRI simulation package developed and optimized to simulate MR signal formation, k-space acquisition, and MR image reconstruction. MRiLab provides several dedicated toolboxes to analyze RF pulses, design MR sequences, configure multiple transmitting and receiving coils, investigate magnetic field-related properties, and evaluate real-time imaging techniques. The main MRiLab simulation platform, combined with the various toolboxes, can be used to customize virtual MR experiments with great flexibility and extensibility, serving as a prior stage for prototyping and testing new MR techniques, applications, and implementations.

MRiLab features a highly interactive graphical user interface for fast experiment design and technique prototyping. High simulation accuracy is achieved by simulating discrete spin evolution at configurable time events using the Bloch equation, the Bloch-McConnell equation, and appropriate models that simulate tissue microstructure and composition. To manipulate large multidimensional spin arrays, MRiLab employs parallel computing, incorporating the latest graphics processing unit (GPU) and multi-threaded CPU techniques.
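A single free-precession time step of the Bloch equation, the basic building block of this kind of discrete spin-evolution simulation, can be sketched as follows. This is a generic textbook formulation in Python, not MRiLab's internal code:

```python
import numpy as np

def bloch_step(M, dt, T1, T2, df, M0=1.0):
    """Advance magnetization M = [Mx, My, Mz] by dt (seconds) under free
    precession: rotation about z at off-resonance df (Hz), followed by
    T1/T2 relaxation toward the equilibrium magnetization M0."""
    phi = 2.0 * np.pi * df * dt                 # precession angle this step
    Rz = np.array([[ np.cos(phi), np.sin(phi), 0.0],
                   [-np.sin(phi), np.cos(phi), 0.0],
                   [ 0.0,         0.0,         1.0]])
    E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)
    E = np.diag([E2, E2, E1])                   # transverse / longitudinal decay
    recovery = np.array([0.0, 0.0, M0 * (1.0 - E1)])
    return E @ (Rz @ M) + recovery

# Fully excited transverse magnetization right after a 90-degree pulse
M = np.array([1.0, 0.0, 0.0])
for _ in range(100):                            # 100 ms of free precession
    M = bloch_step(M, dt=1e-3, T1=1.0, T2=0.1, df=10.0)
print(np.round(M, 3))  # transverse signal decayed by exp(-1),
                       # Mz recovered to 1 - exp(-0.1)
```

Stepping this update at every configurable time event, for every spin in a multidimensional spin array, is what makes GPU and multi-threaded CPU parallelism so valuable.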

You can download MRiLab here.

MRiLab Workflow Diagram
The Simulation Main Console
Presentation at the ISMRM 2019 Open-Source Software Tools Weekend Course

Video Demo for ISMRM 2019 Open-Source Software Tools Weekend Course:

ISMRM 2019 Educational Talk
General Image Analysis and Processing Toolkit

Most medical images (e.g., CT, MRI, PET) include multiple frames representing slices, phases, time points, etc., from the same imaging object. These images can be stored as multidimensional matrices in Matlab thanks to Matlab’s powerful support for multidimensional data representation. However, most of Matlab’s image manipulation functions are limited to, or tailored for, two-dimensional matrices. MatrixUser is a software package featuring functions designed and optimized for manipulating multidimensional real or complex data matrices. MatrixUser provides a convenient graphical environment for easily performing image analysis tasks, including multidimensional image display, matrix (image stack) processing and rendering, and more. MatrixUser is a great lightweight tool for users working on image processing in Matlab.
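The kind of frame-wise multidimensional processing MatrixUser automates can be illustrated in Python with NumPy (MatrixUser itself is Matlab-based, and `apply_2d` is a hypothetical helper, not part of either tool):

```python
import numpy as np

def apply_2d(stack, func):
    """Apply a 2-D operation to every frame of an N-D matrix by
    flattening all leading dimensions, processing each rows x cols
    frame, and restoring the original shape."""
    frames = stack.reshape(-1, *stack.shape[-2:])
    out = np.stack([func(f) for f in frames])
    return out.reshape(stack.shape)

# A toy 4-D "image matrix": slices x phases x rows x cols,
# mimicking the multi-frame data a GUI browser would step through.
vol = np.arange(3 * 2 * 4 * 4, dtype=float).reshape(3, 2, 4, 4)

# Transpose every frame, standing in for a real 2-D filter
flipped = apply_2d(vol, np.transpose)
print(flipped.shape)                            # (3, 2, 4, 4)
print(np.allclose(flipped[0, 0], vol[0, 0].T))  # True
```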

You can download MatrixUser here.

An Example of 3D Human Brain Rendering. Control units on the rendering window are provided for fine-tuning the renderer. The user can select the isosurface threshold, the cutoff connectivity threshold (i.e., an object with fewer total voxels than the threshold is removed from the rendering; ‘@’ is followed by the voxel count of the current largest object), and the object opacity. A set of pushbuttons is also available for changing the surface color, display box, and patches. The default Matlab camera toolbar is provided for adjusting the lighting effect.
Reslice 3D Matrix. This example of 3D reslicing generates a new stack of images in an oblique plane from an axial human knee MRI image stack. Note that the resliced images are extracted from the plane perpendicular to the indicator line in the left window.

We have not published a dedicated paper on MatrixUser. However, if you use MatrixUser in published work, a citation of this website is greatly appreciated.