# 🚦 Green Wave Traffic Signal Optimizer
ML-based traffic signal synchronization for creating smooth "green waves" along urban arterial corridors.
## Model Description

This repository contains four trained models for predicting optimal signal offsets between consecutive traffic intersections:
| Model | MAE (s) ↓ | RMSE (s) ↓ | R² ↑ | File |
|---|---|---|---|---|
| XGBoost | 1.39 | 2.66 | 0.960 | `models/xgboost_model.joblib` |
| Random Forest | 1.75 | 3.16 | 0.943 | `models/rf_model.joblib` |
| MLP Neural Net | 1.41 | 2.54 | 0.963 | `models/mlp_model.pt` |
| LSTM Sequence | 1.23 | 3.61 | 0.899 | `models/lstm_model.pt` |
All ML models reduce offset prediction error by 60-80% compared to the classical baseline (`distance / speed` formula).
## What is a Green Wave?

A green wave synchronizes traffic signals so vehicles traveling at a recommended speed encounter consecutive green lights without stopping. The key parameter is the **offset**: the time delay between consecutive signals' green phases.

Basic formula: `offset = distance / speed`

ML improves on this by accounting for: congestion-dependent speed, platoon dispersion, queue discharge, weather, and temporal patterns.
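The classical baseline can be sketched in a few lines (the function name `baseline_offset` is illustrative, not part of this repo; wrapping the travel time into the signal cycle is a common convention):

```python
def baseline_offset(distance_m, speed_mps, cycle_length_s):
    """Classical green-wave offset: travel time between intersections,
    wrapped into the signal cycle."""
    return (distance_m / speed_mps) % cycle_length_s

# 300 m at the 50 km/h speed limit (13.89 m/s), 60 s cycle:
print(f"{baseline_offset(300, 13.89, 60):.1f} s")  # 21.6 s
```

This is exactly the `distance / speed` rule the ML models are benchmarked against; it ignores congestion, queues, and weather, which is where the learned models gain their accuracy.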
## Input Features (20 total)

### Static Road Features

- `road_length_m`: Distance between intersections (150-500 m)
- `speed_limit_mps`: Posted speed limit (m/s)
- `n_lanes`: Number of lanes (2-4)
- `cycle_length_s`: Signal cycle length (60-120 s)
- `green_duration_s`: Green phase duration

### Dynamic Traffic Features

- `avg_speed_mps`: Current average vehicle speed
- `queue_length`: Number of queued vehicles
- `n_approaching_vehicles`: Vehicles approaching the intersection
- `traffic_density`: Vehicles per km per lane
- `accumulated_wait_s`: Total accumulated waiting time

### Upstream Signal State

- `upstream_phase`: Current phase (0 = red, 1 = green)
- `upstream_phase_duration_s`: Time elapsed in the current phase
- `upstream_remaining_s`: Time remaining in the current phase

### Temporal Features

- `hour_sin`, `hour_cos`: Cyclical time-of-day encoding
- `dow_sin`, `dow_cos`: Cyclical day-of-week encoding
- `weather`: Weather condition (0 = clear, 1 = rain, 2 = fog, 3 = snow)
- `speed_ratio`: `avg_speed_mps / speed_limit_mps`
- `volume_to_capacity`: `traffic_density / capacity`
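The cyclical time features can be computed as shown below. This assumes a standard sin/cos encoding; the repository's exact phase convention may differ slightly:

```python
import numpy as np

def cyclical_encode(value, period):
    """Map a periodic quantity (hour of day, day of week) onto the unit
    circle, so that e.g. hour 23 and hour 1 end up numerically close."""
    angle = 2 * np.pi * value / period
    return np.sin(angle), np.cos(angle)

hour_sin, hour_cos = cyclical_encode(8, 24)  # 8 am
dow_sin, dow_cos = cyclical_encode(0, 7)     # Monday = 0
print(round(hour_sin, 3), round(hour_cos, 3), dow_sin, dow_cos)
```

The point of the encoding is continuity across the wrap-around: a plain integer hour makes 23:00 and 01:00 look maximally far apart, while the sin/cos pair places them next to each other.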
## Output

- `optimal_offset_s`: Recommended offset in seconds for the next signal
## Usage

### XGBoost (Recommended for production)

```python
import joblib
import numpy as np

# Load model and the feature-column ordering it was trained with
model = joblib.load("models/xgboost_model.joblib")
feature_cols = joblib.load("models/feature_cols.joblib")

# Example: predict offset for a 300 m road segment
features = np.array([[
    300,    # road_length_m
    13.89,  # speed_limit_mps (50 km/h)
    3,      # n_lanes
    60,     # cycle_length_s
    33,     # green_duration_s
    11.11,  # avg_speed_mps (40 km/h)
    5,      # queue_length
    15,     # n_approaching_vehicles
    25,     # traffic_density
    10,     # accumulated_wait_s
    1,      # upstream_phase (green)
    20,     # upstream_phase_duration_s
    13,     # upstream_remaining_s
    0.71,   # hour_sin (8 am)
    -0.71,  # hour_cos
    0.0,    # dow_sin (Monday)
    1.0,    # dow_cos
    0,      # weather (clear)
    0.8,    # speed_ratio
    0.5,    # volume_to_capacity
]])

offset = model.predict(features)
print(f"Recommended offset: {offset[0]:.1f} seconds")
```
### MLP Neural Network

```python
import torch
import joblib
import numpy as np

# Load checkpoint and the scaler used during training
checkpoint = torch.load("models/mlp_model.pt", map_location="cpu")
scaler = joblib.load("models/mlp_scaler.joblib")

# Recreate the model architecture
class GreenWaveMLP(torch.nn.Module):
    def __init__(self, input_dim, hidden_dims=[256, 128, 64]):
        super().__init__()
        layers = []
        prev_dim = input_dim
        for h in hidden_dims:
            layers.extend([
                torch.nn.Linear(prev_dim, h),
                torch.nn.BatchNorm1d(h),
                torch.nn.ReLU(),
                torch.nn.Dropout(0.2),
            ])
            prev_dim = h
        layers.append(torch.nn.Linear(prev_dim, 1))
        self.network = torch.nn.Sequential(*layers)

    def forward(self, x):
        return self.network(x).squeeze(-1)

model = GreenWaveMLP(checkpoint["input_dim"], checkpoint["hidden_dims"])
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()

# Predict (reuses the `features` array from the XGBoost example)
features_scaled = scaler.transform(features)
with torch.no_grad():
    offset = model(torch.FloatTensor(features_scaled))
print(f"MLP offset: {offset.item():.1f} seconds")
```
## Training Details

- Dataset: 100,000 synthetic samples from 20 diverse corridor configurations
- Data generation: physics-based simulator with BPR speed-flow curves, platoon dispersion, weather effects, and stochastic noise
- Train/test split: 80/20
- XGBoost: 500 trees, `max_depth=8`, `lr=0.05`, `subsample=0.8`
- MLP: 3 hidden layers [256, 128, 64], Adam optimizer, `lr=0.001`, 100 epochs
- LSTM: 2-layer LSTM (`hidden=64`), context window of 20, trained on sequential data (10,000 sequences)
## Feature Importance (XGBoost)

- Road length (38.8%)
- Average speed (16.8%)
- Speed limit (14.2%)
- Cycle length (5.2%)
- Number of lanes (4.7%)
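Importances like these can be read from any of the tree models via the `feature_importances_` attribute (exposed by both the XGBoost and Random Forest regressors). A toy illustration on synthetic data, using scikit-learn's Random Forest as a stand-in:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic data where column 0 dominates the target, column 1 matters a
# little, and column 2 is pure noise
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
y = 5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 1000)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.feature_importances_)  # column 0 receives the largest share
```

For the real repo, loading `models/xgboost_model.joblib` with `joblib` and zipping `feature_importances_` with `feature_cols` yields the ranking shown above.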
## Demo

Try the interactive demo: 🚦 Green Wave Optimizer Space
## References
- DTLight (AAAI 2024): Decision Transformer for traffic signal control
- MADT (2026): Multi-Agent Decision Transformer for corridor coordination
- Traffic-R1 (2025): LLM-based traffic signal control with reinforcement learning
- RESCO: Open benchmark for traffic signal control
## License
MIT