# GPReconResNet — BraTS 2020 · T1CE (single contrast) · Axial

## Architecture — GPReconResNet
GPReconResNet is a residual reconstruction network adapted for classification:
| Hyperparameter | Value |
|---|---|
| Residual blocks | 14 |
| Starting feature maps | 64 |
| Up/down-sampling blocks | 2 |
| Activation | Leaky ReLU |
| Dropout (residual) | 0.5 |
| Upsampling | sinc interpolation |
| Batch normalisation | ✗ |
| 3-D mode | ✗ (2-D) |
The reconstruction bottleneck forces the network to learn compact, semantically rich representations that are easy to interpret as class-discriminative maps.
## Overview
This model is part of the GPModels family — a set of inherently-explainable convolutional networks for simultaneous brain-tumour classification and weakly-supervised segmentation from multi-contrast MRI. The models were trained and evaluated on the BraTS 2020 dataset.
### ⚠️ Single-contrast model — not used in the paper
This model was trained on T1CE contrast only (1 input channel) as an exploratory experiment. It is not part of the original publication: Weakly-supervised segmentation using inherently-explainable classification models and their application to brain tumour classification, which exclusively uses 4-contrast (T1 · T2 · T1CE · FLAIR) models.
- If you want a quick single-contrast experiment, this model may be useful.
- If you want to reproduce the paper results or need the best performance, use the 4-contrast counterpart: GPReconResNet — 4-contrast
- Input contrast: T1CE only (1 channel)
- Task: Multi-class brain-tumour classification (Healthy / LGG / HGG) and weakly-supervised segmentation
- Orientation: Axial 2-D slices (240 × 240 px)
- Output classes: 0 = Healthy, 1 = LGG (Low-grade glioma), 2 = HGG (High-grade glioma)
- Normalization: Per-image max normalization (divide by slice maximum)
- Preprint (open access): https://arxiv.org/abs/2206.05148
- Code & training details: https://github.com/soumickmj/GPModels
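The per-image max normalization above can be sketched as a small preprocessing helper. This is an illustrative sketch, not code from the repository: the function name `preprocess_t1ce_slice` is hypothetical, and the zero-guard for empty slices is an assumption.

```python
import torch

def preprocess_t1ce_slice(slice_2d: torch.Tensor) -> torch.Tensor:
    """Max-normalise one 240×240 T1CE slice and add batch/channel dims.

    Hypothetical helper: divides by the slice maximum (per-image max
    normalization) and returns a (1, 1, 240, 240) float32 tensor.
    """
    slice_2d = slice_2d.float()
    max_val = slice_2d.max()
    if max_val > 0:                      # guard against all-zero (empty) slices
        slice_2d = slice_2d / max_val
    return slice_2d[None, None, ...]     # (1, 1, 240, 240)
```

Stack several such tensors with `torch.cat` along dim 0 to build a batch for the model.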
## Model Inputs / Outputs
| Property | Value |
|---|---|
| Input shape | (B, 1, 240, 240) — float32, max-normalized, T1CE only |
| Output — train mode | (B, 3) — raw logits |
| Output — eval mode | ((B, 3), (B, 3, H, W)) — logits and spatial heatmap |
| Class order | [Healthy (0), LGG (1), HGG (2)] |
## Usage
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("soumickmj/GPReconResNet_BraTS2020T1ce_Axial", trust_remote_code=True)
model.eval()

# x: (B, 1, 240, 240) — T1CE slice, max-normalised
x = torch.randn(1, 1, 240, 240)

with torch.no_grad():
    logits, heatmap = model(x)        # eval mode: logits (B, 3) + heatmap (B, 3, H, W)

pred_class = logits.argmax(dim=1)     # 0 = Healthy, 1 = LGG, 2 = HGG
```
## Weakly-Supervised Segmentation (Heatmap Mode)
These models are inherently explainable: eval mode bypasses the global max-pooling (GMP) layer, exposing a full-resolution spatial activation map per class. No extra labels or re-training are required.
How it works:

- `model.train()` → GMP applied → returns `logits` of shape `(B, 3)` — classification only
- `model.eval()` → GMP skipped → returns `(logits, heatmap)`, where `heatmap` has shape `(B, 3, H, W)` — one spatial map per class
Heatmap channel order:
0 = Healthy, 1 = LGG, 2 = HGG. The whole-tumour map is obtained by combining channels 1 and 2.
```python
with torch.no_grad():
    logits, heatmap = model(x)                     # heatmap: (B, 3, H, W)

# Whole-tumour map (max over LGG + HGG channels)
wt_map = heatmap[:, 1:, :, :].max(dim=1).values    # (B, H, W)

# Min-max normalise to [0, 1]
wt_flat = wt_map.view(wt_map.size(0), -1)
wt_min = wt_flat.min(dim=1).values[:, None, None]
wt_max = wt_flat.max(dim=1).values[:, None, None]
wt_norm = (wt_map - wt_min) / (wt_max - wt_min + 1e-8)

# Binary mask via threshold
binary_mask = (wt_norm > 0.5).float()              # (B, H, W)
```
For advanced post-processing (multi-Otsu, top-k binarisation, morphological clean-up, per-slice aggregation) see the project repository.
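As one example of the post-processing options mentioned above, top-k binarisation keeps only a fixed fraction of the highest-activation pixels per slice instead of using a global 0.5 threshold. The sketch below is an assumption about how such a step could look in plain PyTorch (the function name `topk_binarise` and the default fraction are illustrative, not from the repository):

```python
import torch

def topk_binarise(wt_norm: torch.Tensor, frac: float = 0.02) -> torch.Tensor:
    """Keep the top `frac` fraction of pixels per slice as foreground.

    wt_norm: (B, H, W) heatmap in [0, 1]; returns a float mask of the same shape.
    Illustrative sketch — thresholds each slice at its k-th largest activation.
    """
    b, h, w = wt_norm.shape
    k = max(1, int(frac * h * w))
    flat = wt_norm.view(b, -1)
    # Per-slice threshold = value of the k-th largest activation
    thresh = flat.topk(k, dim=1).values[:, -1:]       # (B, 1)
    return (flat >= thresh).float().view(b, h, w)
```

Unlike a fixed cutoff, this keeps mask size stable across slices with very different activation ranges.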
## Training Details
| Setting | Value |
|---|---|
| Trained by | Soumick Chatterjee (Pavan) |
| Dataset | BraTS 2020 — non-empty axial slices |
| Input contrast | T1CE only (1 channel) |
| Split | Stratified 75/25 train-test (seed 13); 5-fold CV (fold 0 reported) |
| Optimizer | Adam (lr = 1e-3, weight_decay = 5e-4) |
| Loss | Cross-entropy with class-balanced weights |
| Mixed precision | AMP |
| Max epochs | 300 |
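The class-balanced cross-entropy in the table can be realised by weighting each class inversely to its frequency in the training labels. The snippet below is a hedged sketch of that idea (the helper name `balanced_ce_loss` and the exact weighting formula are assumptions; the repository may compute weights differently):

```python
import torch
import torch.nn as nn

def balanced_ce_loss(labels: torch.Tensor, num_classes: int = 3) -> nn.CrossEntropyLoss:
    """Cross-entropy with weights inversely proportional to class frequency.

    Illustrative sketch: rare classes (e.g. LGG) get larger weights so the
    loss is not dominated by the majority class.
    """
    counts = torch.bincount(labels, minlength=num_classes).float()
    weights = counts.sum() / (num_classes * counts.clamp(min=1))
    return nn.CrossEntropyLoss(weight=weights)
```

With this weighting, a class holding a third of the samples gets weight 1.0; rarer classes get proportionally more.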
## Citation
If you use this model, please cite:
```bibtex
@article{chatterjee2026weakly,
  title={Weakly-supervised segmentation using inherently-explainable classification models and their application to brain tumour classification},
  author={Chatterjee, Soumick and Yassin, Hadya and Dubost, Florian and N{\"u}rnberger, Andreas and Speck, Oliver},
  journal={Neurocomputing},
  pages={133460},
  year={2026},
  publisher={Elsevier}
}
```
## License
MIT — see https://github.com/soumickmj/GPModels for full details.