Case study
Fake Image Detector
Texture decomposition and CNN-based detection to separate AI-generated images from real ones using a research-inspired pipeline.
Problem
Fake Image Detector aims to reliably separate AI-generated images (GAN, diffusion, and deepfake outputs) from real photos. It builds an end-to-end pipeline that emphasizes texture artifacts through custom preprocessing and a CNN classifier inspired by the research literature.
Constraints
- Balanced subset drawn from multiple datasets for diversity
- Inputs resized to 256x256 and split into 32x32 patches
- Heavy preprocessing: patch extraction plus multiple filters
- Binary classification with limited labeled sources (Kaggle)
- Notebook-based workflow with path-based reproducibility
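The 256x256 / 32x32 geometry works out to 64 non-overlapping patches per image. A minimal NumPy sketch of the patch split (the resize itself is left to any image library; `extract_patches` is an illustrative helper, not the project's code):

```python
import numpy as np

def extract_patches(img: np.ndarray, patch: int = 32) -> np.ndarray:
    """Split an (H, W, C) image with H and W divisible by `patch`
    into an (N, patch, patch, C) stack of non-overlapping patches."""
    h, w, c = img.shape
    return (img.reshape(h // patch, patch, w // patch, patch, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(-1, patch, patch, c))

patches = extract_patches(np.zeros((256, 256, 3)))  # 64 patches of 32x32x3
```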
Decisions
- Adopted a smash-and-reconstruct pipeline to separate rich vs poor texture regions.
- Applied custom high-frequency and edge filters (A-G) to emphasize texture artifacts.
- Computed filtered differences between rich and poor textures as features.
- Used a CNN classifier with stacked Conv2D and BatchNorm blocks.
- Tracked val_loss for early stopping and saved best checkpoints.
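The first three decisions can be sketched roughly as follows. The variance ranking, the grid size, and the 3x3 kernels standing in for filters A-G are all illustrative assumptions, not the project's exact choices:

```python
import numpy as np

# Illustrative stand-ins for the custom high-frequency filters A-G.
KERNELS = [
    np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float),   # Laplacian
    np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float),    # horizontal edge
    np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float),    # vertical edge
]

def tile(patches: np.ndarray, grid: int) -> np.ndarray:
    """Tile (grid*grid, p, p) grayscale patches into one (grid*p, grid*p) image."""
    p = patches.shape[1]
    return (patches.reshape(grid, grid, p, p)
                   .transpose(0, 2, 1, 3)
                   .reshape(grid * p, grid * p))

def smash_and_reconstruct(patches: np.ndarray, grid: int = 4):
    """Rank patches by pixel variance, then rebuild one rich-texture image
    (highest-variance patches) and one poor-texture image (lowest-variance)."""
    n = grid * grid
    order = np.argsort(patches.reshape(len(patches), -1).var(axis=1))
    return tile(patches[order[-n:]], grid), tile(patches[order[:n]], grid)

def conv2d(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Valid 3x3 convolution implemented with shifted slices."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def texture_feature_difference(rich: np.ndarray, poor: np.ndarray) -> np.ndarray:
    """Per-filter response maps of rich minus poor textures, stacked as channels."""
    return np.stack([conv2d(rich, k) - conv2d(poor, k) for k in KERNELS])
```

The rich-minus-poor difference is what makes the features generator-sensitive: real photos and synthetic images tend to diverge most in how their low-texture regions respond to high-pass filtering.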
Architecture
Input image → Resize 256x256 → Patch split 32x32 → Pixel variance → Rich/poor texture reconstruction → Custom filters A-G → Feature map difference → Trainable conv block (hard_tanh) → CNN classifier → Sigmoid output (real/fake)
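A hedged Keras sketch of the classifier end of the pipeline, matching the decisions above (stacked Conv2D + BatchNorm blocks, a clipped-linear stand-in for the hard_tanh conv block, a sigmoid real/fake score, and val_loss-driven early stopping with best-checkpoint saving). Layer widths, the input shape, and the checkpoint filename are illustrative, not the project's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def build_classifier(input_shape=(126, 126, 3)):
    """CNN over the rich-minus-poor feature maps; sizes are illustrative."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        # Trainable conv block with a hard_tanh-style clipped activation.
        layers.Conv2D(8, 3, padding="same"),
        layers.Activation(lambda x: tf.clip_by_value(x, -1.0, 1.0)),
        # Stacked Conv2D + BatchNorm blocks.
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # real/fake score
    ])

model = build_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping on val_loss and best-checkpoint saving, as in the Decisions.
cbs = [
    callbacks.EarlyStopping(monitor="val_loss", patience=5,
                            restore_best_weights=True),
    callbacks.ModelCheckpoint("best.keras", monitor="val_loss",
                              save_best_only=True),
]
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=cbs)
```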