BraTS 2018 - Semantic Segmentation on Brain MRI Scans

Open In Colab

![](readme_images/ex1.gif)

Intro

BraTS 2018 utilizes multi-institutional pre-operative MRI scans and focuses on the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas. Furthermore, to pinpoint the clinical relevance of this segmentation task, BraTS’18 also focuses on the prediction of patient overall survival, via integrative analyses of radiomic features and machine learning algorithms.

Dataset

  • The dataset is divided into 2 types of cancer - HGG (High-grade gliomas) and LGG (Low-grade gliomas). For each type, there are 3D MRI scans (voxel volumes) of patients, acquired with different clinical protocols and various scanners from multiple (n=19) institutions. Every patient comes with four filters:

    • Native (T1).
    • Post-contrast T1-weighted (T1Gd).
    • T2-weighted (T2).
    • T2 Fluid Attenuated Inversion Recovery (FLAIR).

    Each filter is an MRI scan that acts as an input feature.

  • The MRI scans were segmented manually by one to four raters, following the same annotation protocol, and their annotations were approved by experienced neuro-radiologists. The segmented data are labeled with 3 different pixel values:

    • GD-enhancing tumor (ET — label 4)
    • Peritumoral edema (ED — label 2)
    • Necrotic and non-enhancing tumor core (NCR/NET — label 1)
  • Image type: NIfTI files (.nii.gz) - see the loading sketch after the dataset tree below

  • Dataset tree:

        .
        ├── HGG
        │   ├── Brats18_2013_10_1
        │   │   ├── Brats18_2013_10_1_flair.nii.gz
        │   │   ├── Brats18_2013_10_1_seg.nii.gz
        │   │   ├── Brats18_2013_10_1_t1.nii.gz
        │   │   ├── Brats18_2013_10_1_t1ce.nii.gz
        │   │   └── Brats18_2013_10_1_t2.nii.gz
        │   ├── Brats18_2013_11_1
        │   │   ├── Brats18_2013_11_1_flair.nii.gz
        │   │   ├── Brats18_2013_11_1_seg.nii.gz
        │   │   ├── Brats18_2013_11_1_t1.nii.gz
        │   │   ├── Brats18_2013_11_1_t1ce.nii.gz
        │   │   └── Brats18_2013_11_1_t2.nii.gz
        │   ├── ...
        ├── LGG
        │   ├── Brats18_2013_0_1
        │   │   ├── Brats18_2013_0_1_flair.nii.gz
        │   │   ├── Brats18_2013_0_1_seg.nii.gz
        │   │   ├── Brats18_2013_0_1_t1.nii.gz
        │   │   ├── Brats18_2013_0_1_t1ce.nii.gz
        │   │   └── Brats18_2013_0_1_t2.nii.gz
        │   ├── Brats18_2013_15_1
        │   │   ├── Brats18_2013_15_1_flair.nii.gz
        │   │   ├── Brats18_2013_15_1_seg.nii.gz
        │   │   ├── Brats18_2013_15_1_t1.nii.gz
        │   │   ├── Brats18_2013_15_1_t1ce.nii.gz
        │   │   └── Brats18_2013_15_1_t2.nii.gz
        │   ├── ...
        ├── survival_data.csv

Models

  • We will use a 3D U-Net model from the winning paper (a rough illustrative sketch follows below).

  • The model was implemented with Keras and TensorFlow 1.x by Suyog Jadhav.
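
For illustration only (this is not the repository's actual model), here is a minimal 3D U-Net sketch in Keras, written against the tf.keras API rather than the original TensorFlow 1.x code; the input shape, filter counts, and depth are arbitrary assumptions:

```python
# Minimal 3D U-Net sketch (illustrative; not the repo's actual architecture).
import tensorflow as tf
from tensorflow.keras import layers


def conv_block(x, filters):
    """Two 3x3x3 convolutions with ReLU, as in a standard U-Net level."""
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    return x


def build_unet3d(input_shape=(64, 64, 64, 4), num_classes=4):
    inputs = tf.keras.Input(shape=input_shape)  # channels-last: the 4 filters

    # Encoder: two downsampling levels.
    c1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling3D(2)(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling3D(2)(c2)

    # Bottleneck.
    b = conv_block(p2, 64)

    # Decoder: upsample and concatenate the skip connections.
    u2 = layers.UpSampling3D(2)(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.UpSampling3D(2)(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)

    # Per-voxel softmax over the segmentation labels.
    outputs = layers.Conv3D(num_classes, 1, activation="softmax")(c4)
    return tf.keras.Model(inputs, outputs)


model = build_unet3d()
model.summary()
```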

TODO:

  • Add a global constants file and change the per-stage constants to use it
  • Split the data by patient names and save the splits to CSV (see the sketch after this list)
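
A minimal sketch of that split, assuming scikit-learn and pandas are available; the output file names are made up:

```python
# Sketch: split patients into train/validation sets by patient name, save to CSV.
# Output file names ("train_split.csv", "val_split.csv") are hypothetical.
import os

import pandas as pd
from sklearn.model_selection import train_test_split

data_root = "."  # directory containing the HGG/ and LGG/ folders
patients = [
    (grade, name)
    for grade in ("HGG", "LGG")
    for name in sorted(os.listdir(os.path.join(data_root, grade)))
]

# Split whole patients (not individual scans/slices) so no patient leaks across sets.
train, val = train_test_split(patients, test_size=0.2, random_state=42)

pd.DataFrame(train, columns=["grade", "patient"]).to_csv("train_split.csv", index=False)
pd.DataFrame(val, columns=["grade", "patient"]).to_csv("val_split.csv", index=False)
```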

Hypotheses:

  • Create two models for the different types of tumors - HGG and LGG.
  • Change the classification from multi-class (NCR/NET, ED, ET) to a binary mask (one channel) - see the sketch after this list.
  • Use a 2D U-Net model.
  • Change the image size to the paper's size (4, 160, 192, 128) with a batch size of 1.
  • Use a different model architecture.
  • Change the normalization of the images.
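
As a rough sketch of two of these ideas (the binary-mask conversion and a per-filter z-score normalization), operating on arrays loaded as in the Dataset section; the function names are made up:

```python
# Sketch: binary-mask conversion and per-filter z-score normalization.
import numpy as np


def to_binary_mask(seg):
    """Collapse the multi-class labels (1, 2, 4) into a single whole-tumor mask."""
    return (seg > 0).astype(np.float32)


def zscore_normalize(image, eps=1e-8):
    """Normalize each filter (channel) of a (4, H, W, D) volume to zero mean, unit variance."""
    mean = image.mean(axis=(1, 2, 3), keepdims=True)
    std = image.std(axis=(1, 2, 3), keepdims=True)
    return (image - mean) / (std + eps)


# Random stand-ins with the paper's (4, 160, 192, 128) size, for demonstration:
image = np.random.rand(4, 160, 192, 128).astype(np.float32)
seg = np.random.choice([0, 1, 2, 4], size=(160, 192, 128))

mask = to_binary_mask(seg)        # shape (160, 192, 128), values 0.0 or 1.0
normed = zscore_normalize(image)  # per-filter zero mean / unit variance
```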
