Computational Environment for Radiological Research (CERR)

Computational Environment for Radiological Research (CERR) provides MATLAB-based computational functionality for radiotherapy dose, imaging, and structure analysis, as well as for the development and deployment of deep-learning segmentation and radiomics models.


Key Features:

  • MATLAB-based platform: Implements radiological research and radiotherapy computational functionality within a MATLAB environment.
  • Deep learning integration: Supports deep-learning-based image segmentation models for automated delineation of anatomical structures in medical images.
  • Model implementation library: Hosts a library of model implementations for image segmentation and outcomes models to enable validation, ensemble creation, and integration into analysis workflows.
  • Feature extraction and control: Implements validated feature extraction for radiotherapy dosimetry and radiomics, with configurable calculation settings to control which features are used in model derivation.
  • Deployment via Singularity containers: Distributes models through Singularity containers with JSON configuration files for model input/output and execution across computing architectures.
  • Comprehensive model support: Includes implementations of radiotherapy models from Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) and radiomics models using Image Biomarker Standardization Initiative (IBSI) features.
  • State-of-the-art segmentation networks: Incorporates segmentation networks such as DeepLab and DeepLabV3+ with a ResNet-101 backbone, as well as other problem-specific architectures.
  • Ensemble multi-view inference: Employs ensemble approaches that combine contextual information from axial, coronal, and sagittal views by averaging probability maps and assigning voxel labels based on the highest combined probability.
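The multi-view ensemble described in the last bullet (average the per-view class probability maps, then assign each voxel the label with the highest combined probability) can be sketched in a few lines. This is an illustrative NumPy sketch, not CERR's MATLAB implementation; the function name, argument names, and array layout are assumptions.

```python
import numpy as np

def ensemble_multiview(prob_axial, prob_coronal, prob_sagittal):
    """Combine per-view softmax probability maps by averaging, then
    assign each voxel the class with the highest combined probability.

    Each input is assumed to be a (num_classes, D, H, W) array of
    class probabilities already resampled to a common voxel grid.
    """
    combined = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    # Per-voxel argmax over the class axis yields the label map.
    return np.argmax(combined, axis=0)
```

In practice the three maps come from models trained separately on axial, coronal, and sagittal slices, so the averaging step is where the views' complementary context is fused.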

Scientific Applications:

  • Radiotherapy treatment planning: Auto-segmentation of anatomical structures in CT scans to support radiotherapy planning, exemplified by head and neck (H&N) applications.
  • Oncology toxicity reduction: Development of auto-segmentation models for swallowing and chewing structures in H&N CT scans to support reduction of radiation-induced dysphagia, trismus, and speech dysfunction.
  • Model validation and radiomics: Validation and implementation of radiotherapy outcome models (including QUANTEC) and radiomics models using IBSI features across sites and modalities.
  • Dose, imaging, and structure analysis: Comparative analysis of dose distributions, imaging, and anatomical structures for radiotherapy research.

Methodology:

Deep learning models are trained on labeled datasets (for example, 194 H&N CT scans) to segment structures such as the masseters, medial pterygoids, larynx, and pharyngeal constrictor muscles, using architectures including DeepLabV3+ with a ResNet-101 backbone trained sequentially to improve localization. Ensemble methods combine axial, coronal, and sagittal model probability maps: the maps are averaged and each voxel is assigned the label with the highest combined probability. Segmentation quality is evaluated using Dice similarity coefficients and Hausdorff distances.
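The two evaluation metrics named above can be stated concretely. Below is a hedged NumPy sketch of the Dice similarity coefficient for binary masks and the symmetric Hausdorff distance between two point sets; the function names and the point-set representation are illustrative, not CERR's API.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, k) point sets,
    e.g. surface voxel coordinates of predicted and true contours."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # Pairwise Euclidean distances between every point in a and b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Max over each set of the distance to the nearest point in the other.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards volumetric overlap, while the Hausdorff distance penalizes the worst-case boundary deviation, so the two metrics are complementary for contour evaluation.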

Details

License:
LGPL-2.1
Programming Languages:
MATLAB
Added:
11/14/2019
Last Updated:
12/10/2020

Publications

Iyer A, Thor M, Haq R, Deasy JO, Apte AP. Deep learning-based auto-segmentation of swallowing and chewing structures. bioRxiv. 2019. doi:10.1101/772178.

Apte AP, Iyer A, Thor M, Pandya R, Haq R, Shukla-Dave A, Yu-Chi H, Elguindi S, Veeraraghavan H, Oh JH, Jackson A, Deasy JO. Library of model implementations for sharing deep-learning image segmentation and outcomes models. bioRxiv. 2019. doi:10.1101/773929.
