Tutorial - Aerial Image Analysis
Goal
After this tutorial, you will be able to detect objects and changes in satellite/drone imagery. You will work with real, freely available Sentinel-2 data and build a change detection pipeline.
Prerequisites: NumPy, basic image processing, Image Classification.
Time: 90-120 minutes.
Step 1: Download Sentinel-2 imagery
Sentinel-2 provides free multispectral satellite imagery at 10m resolution, with most locations on Earth revisited roughly every 5 days.
Option A: Manual download from Copernicus Browser
- Go to https://browser.dataspace.copernicus.eu/
- Search for your area of interest
- Filter: Sentinel-2 L2A (atmospherically corrected), cloud cover < 10%
- Download the SAFE format product (or individual bands as GeoTIFF)
Option B: Programmatic download with sentinelhub or eodag
# Using rasterio to load already-downloaded Sentinel-2 bands
import rasterio
import numpy as np

def load_sentinel_band(band_path):
    """Load a single Sentinel-2 band as a numpy array."""
    with rasterio.open(band_path) as src:
        band = src.read(1).astype(np.float32)
        profile = src.profile
    return band, profile

# Sentinel-2 bands at 10m resolution:
# B02 = Blue, B03 = Green, B04 = Red, B08 = NIR
# B04_path = "T35VLG_20250601T093031_B04_10m.jp2"  # example filename

Option C: Use a small GeoTIFF for practice
If you don’t have Sentinel-2 data, create synthetic data to practice the pipeline:
import numpy as np

def make_synthetic_aerial(size=512, n_objects=20):
    """Create a synthetic aerial image with random 'buildings' and 'vegetation'."""
    # Background: brownish terrain
    img = np.random.normal(120, 15, (size, size, 3)).clip(0, 255).astype(np.uint8)
    # More red; int16 intermediate avoids uint8 wraparound on bright pixels
    img[:, :, 0] = np.clip(img[:, :, 0].astype(np.int16) + 20, 0, 255).astype(np.uint8)
    for _ in range(n_objects):
        # Random rectangular "buildings"
        x, y = np.random.randint(20, size - 60, 2)
        w, h = np.random.randint(15, 50, 2)
        gray = np.random.randint(160, 220)
        img[y:y+h, x:x+w] = [gray, gray, gray]
    # Add some "vegetation" patches
    for _ in range(15):
        cx, cy = np.random.randint(50, size - 50, 2)
        r = np.random.randint(20, 40)
        yy, xx = np.ogrid[-r:r, -r:r]
        mask = xx**2 + yy**2 <= r**2
        y_start, x_start = max(cy-r, 0), max(cx-r, 0)
        y_end, x_end = min(cy+r, size), min(cx+r, size)
        patch = img[y_start:y_end, x_start:x_end]
        m = mask[:patch.shape[0], :patch.shape[1]]
        patch[m] = [40, np.random.randint(100, 160), 30]
    return img

Step 2: Load and visualize satellite image bands
Sentinel-2 has 13 spectral bands. The key ones:
| Band | Wavelength | Resolution | Use |
|---|---|---|---|
| B02 | Blue (490nm) | 10m | RGB composite |
| B03 | Green (560nm) | 10m | RGB composite |
| B04 | Red (665nm) | 10m | RGB composite, vegetation |
| B08 | NIR (842nm) | 10m | Vegetation, water detection |
| B11 | SWIR (1610nm) | 20m | Soil moisture, burn scars |
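B11 comes at 20m, so it has to be resampled onto the 10m grid before it can be stacked with the 10m bands. A minimal nearest-neighbor sketch in NumPy (rasterio can also resample on read via its `out_shape` argument; the function name here is ours):

```python
import numpy as np

def upsample_2x(band_20m):
    """Nearest-neighbor upsample a 20m band onto the 10m grid (2x per axis)."""
    return np.repeat(np.repeat(band_20m, 2, axis=0), 2, axis=1)

swir = np.arange(4, dtype=np.float32).reshape(2, 2)  # toy 2x2 "20m" band
swir_10m = upsample_2x(swir)
print(swir_10m.shape)  # (4, 4)
```

Each 20m pixel simply becomes a 2x2 block of identical 10m pixels; for index math like NDVI this is usually good enough, while bilinear resampling gives smoother results.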
import numpy as np
import matplotlib.pyplot as plt

def load_and_visualize_rgb(red_path, green_path, blue_path):
    """Load RGB bands and create a true-color composite."""
    red, profile = load_sentinel_band(red_path)
    green, _ = load_sentinel_band(green_path)
    blue, _ = load_sentinel_band(blue_path)
    # Stack and normalize to 0-1
    rgb = np.stack([red, green, blue], axis=-1)
    # Sentinel-2 L2A values are reflectance * 10000
    rgb = rgb / 10000.0
    rgb = np.clip(rgb * 3.5, 0, 1)  # brightness adjustment
    return rgb, profile
def compute_ndvi(red, nir):
    """Normalized Difference Vegetation Index.
    NDVI = (NIR - Red) / (NIR + Red)
    Range: -1 to 1. Vegetation: 0.2-0.8. Water: < 0. Bare soil: 0-0.2.
    """
    ndvi = (nir - red) / (nir + red + 1e-10)
    return ndvi

def compute_ndwi(green, nir):
    """Normalized Difference Water Index.
    NDWI = (Green - NIR) / (Green + NIR)
    Water bodies: positive. Land: negative.
    """
    return (green - nir) / (green + nir + 1e-10)
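A quick sanity check with made-up reflectance values (the numbers are illustrative, not measurements; the index functions are repeated from above so the snippet runs standalone):

```python
import numpy as np

def compute_ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), as defined above."""
    return (nir - red) / (nir + red + 1e-10)

def compute_ndwi(green, nir):
    """NDWI = (Green - NIR) / (Green + NIR), as defined above."""
    return (green - nir) / (green + nir + 1e-10)

# Healthy vegetation: strong NIR reflectance, weak red -> high NDVI
veg = compute_ndvi(red=np.float32(0.05), nir=np.float32(0.40))
# Water: absorbs NIR -> positive NDWI
water = compute_ndwi(green=np.float32(0.10), nir=np.float32(0.02))
print(round(float(veg), 2), round(float(water), 2))  # 0.78 0.67
```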
# Visualize
# rgb, profile = load_and_visualize_rgb("B04.jp2", "B03.jp2", "B02.jp2")
# red, _ = load_sentinel_band("B04.jp2")
# nir, _ = load_sentinel_band("B08.jp2")
# ndvi = compute_ndvi(red, nir)
# fig, axes = plt.subplots(1, 3, figsize=(18, 6))
# axes[0].imshow(rgb)
# axes[0].set_title("True Color (RGB)")
# axes[1].imshow(ndvi, cmap="RdYlGn", vmin=-0.2, vmax=0.8)
# axes[1].set_title("NDVI")
# nir_composite = np.stack([nir, red, green], axis=-1)
# nir_composite = np.clip(nir_composite / 10000 * 3.5, 0, 1)
# axes[2].imshow(nir_composite)
# axes[2].set_title("False Color (NIR-R-G)")
# plt.tight_layout()
# plt.savefig("sentinel_overview.png", dpi=150)
# plt.show()

What just happened: Beyond RGB, satellite sensors capture near-infrared and shortwave-infrared wavelengths that reveal information invisible to the eye. NDVI quantifies vegetation health, and false-color (NIR-R-G) composites render vegetation bright red and water nearly black.
Step 3: Change detection
Compare two images of the same area from different dates to find what changed.
import numpy as np
import cv2
def pixel_change_detection(image_before, image_after, threshold=30):
    """Simple pixel-difference change detection.
    Works best with pre-aligned images of similar illumination.
    """
    # Convert to grayscale if color
    if image_before.ndim == 3:
        gray_before = np.mean(image_before, axis=2)
        gray_after = np.mean(image_after, axis=2)
    else:
        gray_before = image_before.astype(float)
        gray_after = image_after.astype(float)
    # Absolute difference
    diff = np.abs(gray_after - gray_before)
    # Threshold to get change mask
    change_mask = (diff > threshold).astype(np.uint8) * 255
    # Clean up with morphological operations
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    change_mask = cv2.morphologyEx(change_mask, cv2.MORPH_CLOSE, kernel)
    change_mask = cv2.morphologyEx(change_mask, cv2.MORPH_OPEN, kernel)
    return change_mask, diff
def ndvi_change_detection(ndvi_before, ndvi_after, threshold=0.15):
    """Detect vegetation changes between two dates.
    Negative change: vegetation loss (deforestation, construction, fire).
    Positive change: vegetation growth.
    """
    ndvi_diff = ndvi_after - ndvi_before
    loss_mask = (ndvi_diff < -threshold).astype(np.uint8) * 255
    gain_mask = (ndvi_diff > threshold).astype(np.uint8) * 255
    return ndvi_diff, loss_mask, gain_mask

Handling common problems
- Image alignment: Sentinel-2 images of the same area should be co-registered. If not, align with feature matching (ORB + homography) before differencing.
- Illumination differences: normalize the images, or use ratio-based indices such as NDVI, which are more robust to illumination changes.
- Seasonal changes: vegetation changes seasonally. Compare same season across years, not spring vs winter.
- Cloud cover: mask clouds before comparison. Sentinel-2 L2A includes a Scene Classification Layer (SCL) with cloud masks.
import numpy as np

def mask_clouds_scl(scl_band):
    """Create a clear-sky mask from the Sentinel-2 Scene Classification Layer.
    SCL classes: 4=vegetation, 5=bare_soil, 6=water, 7=cloud_low_prob,
    8=cloud_medium, 9=cloud_high, 10=thin_cirrus, 11=snow_ice.
    """
    clear_classes = [4, 5, 6]  # vegetation, bare soil, water
    mask = np.isin(scl_band, clear_classes)
    return mask

Step 4: Object counting in aerial images
Detect and count objects (vehicles, buildings) in aerial imagery.
Simple approach: thresholding + contours
Works for high-contrast objects on uniform backgrounds.
import cv2
import numpy as np

def count_bright_objects(image, min_area=50, max_area=5000):
    """Count bright objects (e.g., vehicles, buildings) on a dark background.
    Returns: count, list of bounding boxes, annotated image.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    # Adaptive threshold (handles varying illumination)
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, 51, -10
    )
    # Clean up
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Find contours
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if min_area < area < max_area:
            boxes.append(cv2.boundingRect(cnt))
    # Annotate
    annotated = image.copy() if image.ndim == 3 else cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    for x, y, w, h in boxes:
        cv2.rectangle(annotated, (x, y), (x+w, y+h), (0, 255, 0), 2)
    return len(boxes), boxes, annotated

Better approach: YOLO on aerial data
from ultralytics import YOLO

# For aerial imagery, use a model fine-tuned on aerial data,
# or fine-tune one yourself on the DOTA or xView dataset
model = YOLO("yolo11n.pt")
# A standard YOLO model (trained on COCO) works for vehicles and people,
# but misses small aerial objects. Fine-tuning on aerial data helps a lot.
results = model("aerial_image.jpg", conf=0.25)
results[0].show()
print(f"Detected {len(results[0].boxes)} objects")

Step 5: Create NDVI vegetation change map
A practical workflow: monitor vegetation health over time.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

def vegetation_change_report(ndvi_t1, ndvi_t2, date1, date2):
    """Generate a vegetation change analysis report."""
    diff = ndvi_t2 - ndvi_t1
    # Statistics
    mean_change = np.nanmean(diff)
    loss_pct = np.sum(diff < -0.15) / diff.size * 100
    gain_pct = np.sum(diff > 0.15) / diff.size * 100
    print(f"NDVI Change Report: {date1} → {date2}")
    print(f"  Mean NDVI change: {mean_change:+.4f}")
    print(f"  Vegetation loss (NDVI drop > 0.15): {loss_pct:.1f}% of area")
    print(f"  Vegetation gain (NDVI rise > 0.15): {gain_pct:.1f}% of area")
    # Visualization
    fig, axes = plt.subplots(1, 3, figsize=(18, 5))
    axes[0].imshow(ndvi_t1, cmap="RdYlGn", vmin=-0.2, vmax=0.8)
    axes[0].set_title(f"NDVI {date1}")
    axes[1].imshow(ndvi_t2, cmap="RdYlGn", vmin=-0.2, vmax=0.8)
    axes[1].set_title(f"NDVI {date2}")
    norm = TwoSlopeNorm(vmin=-0.5, vcenter=0, vmax=0.5)
    im = axes[2].imshow(diff, cmap="RdBu", norm=norm)
    axes[2].set_title("Change (red=loss, blue=gain)")
    plt.colorbar(im, ax=axes[2], label="NDVI change")
    plt.tight_layout()
    plt.savefig("ndvi_change_report.png", dpi=150)
    plt.show()

Step 6: Tile large images for processing
Satellite images are huge (a full Sentinel-2 tile is 10980x10980 pixels), so you can't process them in one pass. Instead, use a sliding-window (tiling) approach.
import numpy as np

def tile_image(image, tile_size=512, overlap=64):
    """Split a large image into overlapping tiles.
    Returns: list of (tile, x_offset, y_offset).
    """
    h, w = image.shape[:2]
    stride = tile_size - overlap
    tiles = []
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            y_end = min(y + tile_size, h)
            x_end = min(x + tile_size, w)
            # Shift edge windows back so every tile keeps the full tile size
            y_start = max(y_end - tile_size, 0)
            x_start = max(x_end - tile_size, 0)
            tile = image[y_start:y_end, x_start:x_end]
            tiles.append((tile, x_start, y_start))
    return tiles
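A quick check of the tiler on a blank stand-in "image" (the function is condensed from the one above so the snippet runs standalone): on inputs larger than the tile size, every tile is exactly tile_size square, and the returned offsets let you map per-tile detections back to global coordinates.

```python
import numpy as np

def tile_image(image, tile_size=512, overlap=64):
    """Condensed copy of the overlapping tiler defined above."""
    h, w = image.shape[:2]
    stride = tile_size - overlap
    tiles = []
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            # Clamp the window to the image, then shift it back to full size
            y_start = max(min(y + tile_size, h) - tile_size, 0)
            x_start = max(min(x + tile_size, w) - tile_size, 0)
            tiles.append((image[y_start:y_start+tile_size,
                                x_start:x_start+tile_size], x_start, y_start))
    return tiles

img = np.zeros((1100, 1100), dtype=np.uint8)
tiles = tile_image(img, tile_size=512, overlap=64)
print(len(tiles), tiles[0][0].shape)  # 9 (512, 512)
```

A box found at (x, y) inside tile i lives at (x + tiles[i][1], y + tiles[i][2]) in the full image, which is exactly what merge_detections below relies on.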
import cv2
import numpy as np

def merge_detections(all_detections, tile_size, overlap):
    """Merge detections from overlapping tiles, removing duplicates.
    all_detections: list of (boxes, x_offset, y_offset) per tile.
    boxes: list of [x1, y1, x2, y2, conf].
    """
    global_boxes = []
    for boxes, x_off, y_off in all_detections:
        for box in boxes:
            global_boxes.append([
                box[0] + x_off, box[1] + y_off,
                box[2] + x_off, box[3] + y_off,
                box[4],  # confidence
            ])
    if not global_boxes:
        return []
    # NMS to remove duplicates from overlapping regions
    global_boxes = np.array(global_boxes)
    # Use OpenCV NMS (expects boxes as [x, y, w, h])
    indices = cv2.dnn.NMSBoxes(
        bboxes=[[b[0], b[1], b[2]-b[0], b[3]-b[1]] for b in global_boxes],
        scores=global_boxes[:, 4].tolist(),
        score_threshold=0.3,
        nms_threshold=0.5,
    )
    if len(indices) > 0:
        indices = np.array(indices).flatten()
        return global_boxes[indices].tolist()
    return []

OSINT connection
Satellite imagery is a cornerstone of open-source intelligence:
- Monitoring military infrastructure: detect new construction, vehicle staging areas, equipment movement
- Verifying claims: compare declared vs actual site status
- Environmental monitoring: illegal logging, mining, pollution
- Crisis response: assess damage from natural disasters or conflict
- Pattern of life: regular satellite passes reveal routine changes
All of this is possible with free Sentinel-2 data at 10m resolution. Commercial providers (Maxar, Planet) offer sub-meter resolution for more detailed analysis.
Self-test questions
- What is NDVI and why is it more robust than raw pixel differences for vegetation monitoring?
- Why do you need to handle cloud cover when comparing satellite images from different dates?
- What is the sliding window approach, and why is it necessary for satellite imagery?
- How does the spatial resolution (10m for Sentinel-2 vs 0.3m for commercial) affect what you can detect?
Try this next
- New construction detection: Download Sentinel-2 images of a growing city (6 months apart). Use change detection to identify areas where new buildings appeared.
- Fine-tune detector on aerial data: Download DOTA or xView dataset. Fine-tune YOLO on aerial object classes (small vehicles, planes, ships).
- Automated change alerting: Build a script that downloads the latest Sentinel-2 image, compares with a baseline, and flags significant changes. Run it periodically.
Links
- Image Segmentation — semantic segmentation of land cover types
- Object Detection — detection models for aerial targets
- Image Classification — scene classification of aerial tiles
- 3D Vision and Depth — elevation from stereo satellite pairs
- Case Study - CV Pipeline Design — satellite change detection design