Tanmay P. Tawade
🚀 I Built My First CNN for Brain Tumor Detection - Here's What Actually Confused Me

🧠 From Theory to Reality

As a final-year E&TC engineering student, I recently built my first Convolutional Neural Network (CNN) project for brain tumor detection using MRI images.

This is not a tutorial.

It's a breakdown of:

  • what I thought I understood
  • what actually confused me
  • what changed after I implemented everything

🤔 Why I Chose This Project

I wanted something that was:

  • Academically meaningful
  • Related to deep learning
  • Practical enough to connect theory with real-world use

Medical image analysis stood out because it's not just technical - it has real-world impact.


📊 Understanding the Dataset (Where I Initially Went Wrong)

Before writing any code, I should have asked:

  • What exactly do the labels represent?
  • Are the images already preprocessed?
  • Is the dataset balanced?

I didn't take these seriously at first - and it caused confusion later.

👉 Lesson:

Understanding your dataset is more important than building the model.
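One concrete way to act on this lesson is to count the classes before training anything. Here's a minimal sketch in plain Python - the `"tumor"`/`"no_tumor"` labels and their counts are hypothetical, standing in for whatever the dataset's folder structure actually provides:

```python
from collections import Counter

def class_balance(labels):
    """Return per-class counts and the ratio of the largest to smallest class."""
    counts = Counter(labels)
    return counts, max(counts.values()) / min(counts.values())

# Hypothetical labels, as they might come from "yes"/"no" MRI folders
labels = ["tumor"] * 155 + ["no_tumor"] * 98

counts, imbalance = class_balance(labels)
print(counts)               # Counter({'tumor': 155, 'no_tumor': 98})
print(round(imbalance, 2))  # 1.58
```

If that ratio is far from 1, a high accuracy number later is already suspect.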


๐Ÿ–ผ๏ธ Sample MRI Data

Dataset Image

Even a quick visual inspection of the data would have helped me spot patterns early.


โš™๏ธ CNNs: What Changed After Implementation

I had studied CNNs before, but coding them changed everything.

Hereโ€™s what became clear:

  • Convolution layers are feature extractors, not magic
  • Pooling reduces dimensions and overfitting, not just “data size”
  • More layers ≠ better performance

👉 Biggest realization:

Small architectural changes can significantly impact results.
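The first two points can be seen without any framework at all. This toy sketch (my own illustration, not code from the project) runs a 2x2 edge-detecting kernel over a tiny "image" and then max-pools the result - the convolution extracts a feature (the dark-to-bright edge), and pooling halves the spatial size while keeping that feature:

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D cross-correlation (what DL libraries call convolution)."""
    kh, kw = len(kernel), len(kernel[0])
    return [
        [sum(img[i + a][j + b] * kernel[a][b] for a in range(kh) for b in range(kw))
         for j in range(len(img[0]) - kw + 1)]
        for i in range(len(img) - kh + 1)
    ]

def max_pool2x2(fmap):
    """2x2 max pooling with stride 2: halves each spatial dimension."""
    return [
        [max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
         for j in range(0, len(fmap[0]) - 1, 2)]
        for i in range(0, len(fmap) - 1, 2)
    ]

# Tiny patch: dark region on the left, bright region on the right
img = [[0, 0, 9, 9, 9]] * 5
edge_kernel = [[-1, 1], [-1, 1]]  # responds to dark->bright vertical edges

features = conv2d_valid(img, edge_kernel)  # 4x4 map, strong response at the edge
pooled = max_pool2x2(features)             # 2x2 map: smaller, edge still visible
print(features[0])  # [0, 18, 0, 0]
print(pooled)       # [[18, 0], [18, 0]]
```

Nothing magical: the kernel is just a weighted sum, and pooling just keeps the strongest local response.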


🧩 A Simple CNN Structure

CNN Architecture

Block diagram of CNN

This helped me finally visualize how data flows through the network.
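A cheap way to visualize that flow in code is to trace shapes through the layers using the standard formula `out = (n + 2*pad - k) // stride + 1`. The layer stack below (224x224 grayscale input, two conv+pool blocks with 16 and 32 filters) is a hypothetical example, not necessarily the project's exact architecture:

```python
def conv_out(n, k, stride=1, pad=0):
    """Output size of a convolution/pooling window along one spatial dimension."""
    return (n + 2 * pad - k) // stride + 1

size, channels = 224, 1  # hypothetical grayscale MRI slice
for filters in (16, 32):
    size = conv_out(size, k=3)             # 3x3 convolution, no padding
    size = conv_out(size, k=2, stride=2)   # 2x2 max pooling
    channels = filters
    print(f"{filters} filters -> {size}x{size}x{channels}")

print("flatten ->", size * size * channels)  # inputs to the dense layers
```

Tracing shapes like this also shows why "more layers" isn't free - each block shrinks the feature map, and the flatten size drives the dense-layer parameter count.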


โš ๏ธ Challenges I Faced

This is where things got real.

  • Overfitting on training data
  • Confusing validation accuracy with real performance
  • Randomly choosing hyperparameters

At one point, I genuinely thought:

“If accuracy is high, the model must be good.”

That assumption was wrong.
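Here's a small sketch of why, using made-up numbers: on an imbalanced test set, a "model" that always predicts "no tumor" looks accurate while being medically useless.

```python
def evaluate(y_true, y_pred):
    """Accuracy plus recall on the positive ('tumor') class."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    tumor_preds = [p for t, p in zip(y_true, y_pred) if t == 1]
    recall = sum(tumor_preds) / len(tumor_preds) if tumor_preds else 0.0
    return accuracy, recall

# Hypothetical imbalanced test set: 90 healthy scans, 10 tumor scans
y_true = [0] * 90 + [1] * 10
y_lazy = [0] * 100          # a "model" that always predicts "no tumor"

acc, recall = evaluate(y_true, y_lazy)
print(acc)     # 0.9 -- looks impressive
print(recall)  # 0.0 -- misses every single tumor
```

High accuracy here is just the class imbalance echoed back; recall on the tumor class is what actually matters.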


๐Ÿ” Why Explainability Became Important

In medical applications, accuracy alone isn't enough.

I started exploring model explainability to answer:

  • Why is the model predicting a tumor?
  • Which part of the image matters most?

Even simple visualization methods helped me trust the model more.
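The core of one such method, Grad-CAM, is short enough to sketch with NumPy: average the gradients of the class score over each feature map to get channel weights, take the weighted sum of the maps, and ReLU it. The random activations and gradients below are placeholders - in a real run they come from the network's last conv layer:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations: (K, H, W) feature maps A_k from the last conv layer
    gradients:   (K, H, W) d(class score)/dA_k
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP of the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for display
    return cam

# Toy stand-ins for real layer outputs: 2 feature maps of size 4x4
rng = np.random.default_rng(0)
heatmap = grad_cam(rng.random((2, 4, 4)), rng.random((2, 4, 4)))
print(heatmap.shape)  # (4, 4) -- upsample to image size and overlay on the MRI
```

The heatmap highlights the spatial regions whose activations pushed the class score up - exactly the "which part of the image matters" question above.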


🧠 Model Interpretation Example

Grad-CAM Output (No Tumor Image)

Grad-CAM Output (Tumor Image)

Seeing highlighted regions made predictions more meaningful.


📈 What This Project Taught Me

  • Machine learning is iterative, not linear
  • Debugging requires patience and observation
  • Reading results is as important as writing code

👉 Most important:

Copying solutions is easy. Understanding them is not.


🔧 What I Plan to Improve Next

  • Better evaluation techniques (beyond accuracy)
  • Cleaner project structure
  • Deeper understanding of explainable AI
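For the first point, "beyond accuracy" in a binary tumor/no-tumor setting usually means precision, recall, and F1 computed from confusion-matrix counts. A minimal sketch, with made-up counts for illustration:

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical test-set counts: 8 tumors caught, 4 false alarms, 2 tumors missed
precision, recall, f1 = prf(tp=8, fp=4, fn=2)
print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.67 0.8 0.73
```

For a screening task, the false negatives (`fn`) are the costly ones, which is why recall deserves its own number instead of being folded into accuracy.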

🔗 Project Reference

You can check the full implementation here:

👉 GitHub Demo

Includes:

  • CNN model implementation
  • Data preprocessing
  • Experimentation and results

💬 Final Thought

I'm not an expert - just someone learning by building.

If you've worked on a CNN project:
👉 What confused you the most in the beginning?

Letโ€™s learn together.


๐Ÿ‘จ๐Ÿปโ€๐Ÿ’ป Author

Tanmay Tawade
