# Centralized Network Configuration, Automation, and Observability
A local network control plane and observability system that simulates how modern production networks are configured centrally, automated via declarative intent, and monitored through real-time telemetry.
NetworkControlPlane demonstrates core network control plane concepts through a complete implementation:
- Declarative Desired State: YAML-based configuration that defines network intent (topology, device configs, routing policies)
- Configuration Rendering: Jinja2 template engine transforms declarative state into device-specific configurations
- Network Automation: Netmiko-style device abstraction layer for idempotent configuration deployment
- Network Simulation: Mininet-based topology simulation using Linux network namespaces and virtual Ethernet pairs
- Real-time Telemetry: Collection of latency, packet loss, throughput, interface counters, and path visibility metrics
- Validation Framework: Baseline vs post-change comparison for deterministic network behavior validation
## Architecture

```
┌──────────────────────────────────────────────────────────┐
│                 Control Plane Interface                  │
│                (CLI / Web UI / REST API)                 │
└────────────────────────────┬─────────────────────────────┘
                             │
                             ▼
┌──────────────────────────────────────────────────────────┐
│               Desired State Parser (YAML)                │
│  • Validates YAML schema                                 │
│  • Extracts topology and device configurations           │
└────────────────────────────┬─────────────────────────────┘
                             │
                             ▼
┌──────────────────────────────────────────────────────────┐
│        Configuration Renderer (Jinja2 Templates)         │
│  • Template-based config generation                      │
│  • Device-specific configuration rendering               │
└────────────────────────────┬─────────────────────────────┘
                             │
                             ▼
┌──────────────────────────────────────────────────────────┐
│           Automation Layer (Device Abstraction)          │
│  • SimulatedDevice: Netmiko-style device interface       │
│  • DeviceSession: Configuration deployment               │
│  • Idempotent config application                         │
└────────────────────────────┬─────────────────────────────┘
                             │
                             ▼
┌──────────────────────────────────────────────────────────┐
│                Topology Manager (Mininet)                │
│  • Linux network namespaces (node isolation)             │
│  • Virtual Ethernet pairs (veth) for links               │
│  • Open vSwitch (OVS) for switching                      │
│  • Linux Traffic Control (TC) for link characteristics   │
└────────────────────────────┬─────────────────────────────┘
                             │
                             ▼
┌──────────────────────────────────────────────────────────┐
│                   Telemetry Collector                    │
│  • Latency metrics (ping-based)                          │
│  • Path visibility (traceroute)                          │
│  • Interface counters (ifconfig/ip)                      │
│  • Throughput measurements                               │
└────────────────────────────┬─────────────────────────────┘
                             │
                             ▼
┌──────────────────────────────────────────────────────────┐
│                    Validation Engine                     │
│  • Baseline telemetry capture                            │
│  • Post-change comparison                                │
│  • Deterministic pass/fail validation                    │
└──────────────────────────────────────────────────────────┘
```
## Components

### Desired State Parser

- Purpose: Parses and validates YAML files containing declarative network configuration
- Key Features:
  - YAML schema validation
  - Topology extraction (nodes, links with bandwidth/delay/loss characteristics)
  - Device configuration extraction (interfaces, routes, hostnames)
- Output: Structured Python dictionaries ready for downstream processing
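As an illustration of what this parsing step amounts to, here is a minimal sketch using PyYAML. The function name and the validation checks are illustrative only; the real `DesiredStateParser` performs fuller schema validation.

```python
import yaml  # PyYAML

def parse_desired_state(text: str) -> dict:
    """Parse and minimally validate a desired-state YAML document.

    Illustrative sketch; not the real DesiredStateParser API.
    """
    state = yaml.safe_load(text)
    # Require the two top-level sections the schema defines
    for section in ("topology", "devices"):
        if section not in state:
            raise ValueError(f"missing required section: {section}")
    # Every link must reference declared nodes
    names = {n["name"] for n in state["topology"]["nodes"]}
    for link in state["topology"].get("links", []):
        if link["node1"] not in names or link["node2"] not in names:
            raise ValueError(f"link references unknown node: {link}")
    return state

example = """
topology:
  nodes:
    - {name: h1, type: host}
    - {name: h2, type: host}
  links:
    - {node1: h1, node2: h2, bandwidth: 100, delay: 5ms, loss: 0}
devices:
  h1:
    interfaces:
      - {name: eth0, ip: 10.0.0.1}
"""
state = parse_desired_state(example)
print(len(state["topology"]["nodes"]))  # 2
```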
### Configuration Renderer

- Purpose: Transforms declarative state into device-specific configuration commands
- Technology: Jinja2 template engine
- Key Features:
  - Template-based configuration generation
  - Device-specific template support
  - Extensible template system (default templates in `templates/default.j2`)
- Output: Rendered configuration strings per device
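A sketch of the rendering step with Jinja2. The template text here is illustrative; the shipped `templates/default.j2` may differ.

```python
from jinja2 import Template

# Illustrative device-config template in the spirit of default.j2
template = Template("""\
hostname {{ hostname }}
{% for iface in interfaces -%}
interface {{ iface.name }}
  ip address {{ iface.ip }}
{% endfor -%}
""")

# Render a per-device configuration string from declarative state
config = template.render(
    hostname="h1",
    interfaces=[{"name": "eth0", "ip": "10.0.0.1"}],
)
print(config)
```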
### Automation Layer

- `device.py`: `SimulatedDevice` class providing a Netmiko-style device abstraction
- `session.py`: `DeviceSession` context manager for configuration deployment
- Key Features:
  - Idempotent configuration application
  - Session-based configuration management
  - Device type abstraction (host, switch, router)
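A minimal sketch of the idempotent, session-based deployment model. The stand-in classes below only illustrate the idea; the real interfaces are `SimulatedDevice` and `DeviceSession`.

```python
class Device:
    """Minimal stand-in for SimulatedDevice (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.running_config = set()

class Session:
    """Minimal stand-in for DeviceSession: applies config idempotently."""
    def __init__(self, device):
        self.device = device
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False
    def apply(self, lines):
        # Only push lines not already present in the running config,
        # so repeated deployments are no-ops.
        applied = [l for l in lines if l not in self.device.running_config]
        self.device.running_config.update(applied)
        return applied

dev = Device("h1")
config = ["hostname h1", "ip address 10.0.0.1/24 dev eth0"]
with Session(dev) as s:
    first = s.apply(config)   # both lines pushed
    second = s.apply(config)  # nothing to do: already applied
print(len(first), len(second))  # 2 0
```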
### Topology Manager

- Purpose: Creates and manages the simulated network topology using Mininet
- Technology Stack:
  - Mininet: Network emulation framework
  - Linux Network Namespaces: Isolated network stacks per node
  - Virtual Ethernet Pairs (veth): Virtual links between nodes
  - Open vSwitch (OVS): Virtual switching functionality
  - Linux Traffic Control (TC): Enforces link characteristics (bandwidth, delay, packet loss)
- Key Features:
  - Dynamic topology creation from YAML
  - Link characteristic enforcement (bandwidth limits, latency, packet loss)
  - Node isolation via network namespaces
  - Topology lifecycle management (create, start, stop, cleanup)
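Mininet's `TCLink` applies link characteristics itself; to illustrate what that enforcement amounts to, here is a sketch that translates the YAML link fields into a `tc`/netem command string. The exact qdisc layout Mininet uses may differ.

```python
def tc_commands(dev: str, bandwidth_mbit: int, delay: str, loss: float) -> list:
    """Build the tc invocation that would enforce link characteristics.

    Illustrative sketch of netem's rate/delay/loss options; Mininet's
    TCLink manages the real qdisc configuration.
    """
    opts = [f"rate {bandwidth_mbit}mbit", f"delay {delay}"]
    if loss:  # omit the loss option for a lossless link
        opts.append(f"loss {loss}%")
    return [f"tc qdisc add dev {dev} root netem " + " ".join(opts)]

# A 100 Mbps, 5 ms, lossless link from the example YAML
cmds = tc_commands("h1-eth0", bandwidth_mbit=100, delay="5ms", loss=0)
print(cmds[0])
```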
### Telemetry Collector

- Purpose: Collects real-time network telemetry from the simulated topology
- Metrics Collected:
  - Latency Metrics: RTT, min/max/avg latency, packet loss (via `ping`)
  - Path Metrics: Hop-by-hop path visibility (via `traceroute`)
  - Interface Counters: RX/TX packets, bytes, errors (via `ifconfig`/`ip`)
  - Throughput Metrics: Bandwidth utilization measurements
- Technology: Uses real system tools (`ping`, `traceroute`, `ifconfig`, `ip`) within network namespaces
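A sketch of how latency metrics can be extracted from `ping` output. The canned summary below uses the common Linux `ping` format; the real `TelemetryCollector` parsing may differ.

```python
import re

# Canned output in the common Linux ping summary format (illustrative)
PING_OUTPUT = """\
--- 10.0.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 5.102/5.347/5.612/0.178 ms
"""

def parse_ping(output: str) -> dict:
    """Extract packet loss and RTT statistics from ping's summary lines."""
    loss = float(re.search(r"(\d+(?:\.\d+)?)% packet loss", output).group(1))
    mn, avg, mx, mdev = map(float, re.search(
        r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", output).groups())
    return {"loss_pct": loss, "rtt_min": mn, "rtt_avg": avg,
            "rtt_max": mx, "rtt_mdev": mdev}

metrics = parse_ping(PING_OUTPUT)
print(metrics["loss_pct"], metrics["rtt_avg"])  # 0.0 5.347
```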
### Validation Engine

- Purpose: Validates network behavior by comparing baseline vs post-change telemetry
- Key Features:
  - Baseline telemetry capture
  - Post-change telemetry collection
  - Deterministic comparison logic
  - Pass/fail/warning status determination
  - Detailed validation reports
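The baseline-vs-post-change comparison can be sketched as follows. The threshold names and default values are illustrative, not the real `NetworkValidator` API.

```python
def validate(baseline: dict, current: dict,
             rtt_tolerance_pct: float = 20.0,
             loss_threshold_pct: float = 1.0) -> str:
    """Compare post-change telemetry against a baseline.

    Illustrative thresholds: excessive loss fails outright, a large
    RTT drift warns, otherwise the change passes.
    """
    if current["loss_pct"] > loss_threshold_pct:
        return "fail"
    rtt_delta_pct = abs(current["rtt_avg"] - baseline["rtt_avg"]) \
        / baseline["rtt_avg"] * 100
    if rtt_delta_pct > rtt_tolerance_pct:
        return "warn"
    return "pass"

baseline = {"rtt_avg": 5.3, "loss_pct": 0.0}
print(validate(baseline, {"rtt_avg": 5.6, "loss_pct": 0.0}))   # pass
print(validate(baseline, {"rtt_avg": 9.8, "loss_pct": 0.0}))   # warn
print(validate(baseline, {"rtt_avg": 5.4, "loss_pct": 20.0}))  # fail
```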
### Web Control Surface

- Technology: Flask-based REST API with HTML frontend
- Features:
  - YAML file upload (multipart form data)
  - Topology deployment via web interface
  - Real-time telemetry collection
  - Network validation
  - Topology status monitoring
- API Endpoints:
  - `GET /api/topology/status` - Get current topology status
  - `POST /api/topology/deploy` - Deploy topology from uploaded YAML or file path
  - `POST /api/telemetry/collect` - Collect telemetry metrics
  - `POST /api/validation/validate` - Validate network behavior
## Technology Stack

- Python 3.8+: Core language
- Mininet 2.3.0+: Network emulation framework
- Jinja2 3.1.0+: Template engine for configuration rendering
- PyYAML 6.0+: YAML parsing and validation
- Flask 2.3.0+: Web framework for the UI
- Click 8.1.0+: CLI framework
- Paramiko 3.0.0+: SSH/networking abstractions
## Prerequisites

- Python 3.8+ (or use Docker)
- Linux (recommended) or macOS with Docker
- Mininet (install via `sudo apt-get install mininet` on Linux)

Note for macOS users: Mininet requires Linux kernel features (network namespaces, veth pairs). Use Docker (recommended); see docs/DOCKER_SETUP.md.
## Installation

### Option 1: Using Docker (Recommended)

```bash
# Pull pre-built image from Docker Hub
docker pull varzzz/network-control-plane:latest

# Run the web UI
docker run -it --rm --privileged -p 5001:5001 \
  -v $(pwd):/workspace -w /workspace \
  varzzz/network-control-plane:latest \
  python3 -m network_control_plane.ui
```

Then open http://localhost:5001 in your browser.
### Option 2: Local Python Installation

```bash
# Install dependencies
pip install -r requirements.txt

# Or install as a package
pip install -e .
```

## Usage

### CLI

```bash
# Deploy network configuration from YAML
python -m network_control_plane.cli deploy examples/topology.yaml

# Collect latency telemetry between nodes
python -m network_control_plane.cli ping h1 h2

# Collect path visibility telemetry
python -m network_control_plane.cli trace h1 h2

# Validate network behavior
python -m network_control_plane.cli validate h1 h2
```

### Web UI

```bash
# Start the web control surface
python -m network_control_plane.ui
```

Then open http://localhost:5001 in your browser.

Web UI Features:
- Upload YAML topology files directly through the web interface
- Deploy network topologies with a single click
- Collect real-time telemetry metrics
- Validate network behavior
## Docker

Quick Start:

```bash
# Build and run with Docker Compose
docker-compose up
```

Then access the web UI at http://localhost:5001.

Building from source:

```bash
docker build -t network-control-plane:latest .
docker-compose up
```

See docs/DOCKER_SETUP.md for detailed Docker instructions.
## Example Workflow

1. Define desired state in YAML (see `examples/topology.yaml`):

   ```yaml
   topology:
     nodes:
       - name: h1
         type: host
       - name: h2
         type: host
     links:
       - node1: h1
         node2: h2
         bandwidth: 100
         delay: 5ms
         loss: 0
   devices:
     h1:
       interfaces:
         - name: eth0
           ip: 10.0.0.1
   ```

2. Deploy the configuration to create the simulated topology
3. Collect baseline telemetry (latency, packet loss, path visibility)
4. Inject a failure (link down, congestion, route changes)
5. Collect post-change telemetry
6. Validate network behavior against the baseline
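The steps above can be strung together in a small driver script. All functions below are stubs standing in for the real components; see `examples/example_workflow.py` for the project's actual workflow script.

```python
# Illustrative end-to-end driver with stubbed components.
def deploy(state):
    """Stub for the topology manager: would build the Mininet network."""
    return {"nodes": [n["name"] for n in state["topology"]["nodes"]]}

def collect_telemetry(rtt_avg, loss_pct):
    """Stub for the telemetry collector: would shell out to ping."""
    return {"rtt_avg": rtt_avg, "loss_pct": loss_pct}

def validate(baseline, current, loss_threshold_pct=1.0):
    """Stub for the validation engine: loss beyond threshold fails."""
    return "fail" if current["loss_pct"] > loss_threshold_pct else "pass"

state = {"topology": {"nodes": [{"name": "h1"}, {"name": "h2"}]}}
net = deploy(state)                      # step 2: deploy
baseline = collect_telemetry(5.3, 0.0)   # step 3: baseline telemetry
# step 4: inject a failure (e.g. add loss on the h1-h2 link) ...
post = collect_telemetry(5.4, 25.0)      # step 5: post-change telemetry
print(validate(baseline, post))          # step 6: validate -> fail
```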
## Desired State Schema

The desired state YAML file defines:

- Topology: Nodes (hosts, switches, routers) and links with characteristics
- Devices: Per-device configuration (interfaces, routes, hostnames)

See `examples/topology.yaml` for a complete example.

Key Fields:

- `topology.nodes`: List of network nodes with names and types
- `topology.links`: Links between nodes with bandwidth (Mbps), delay (ms), and loss (%)
- `devices`: Device-specific configurations (interfaces with IP/netmask, static routes)
## Project Structure

```
NetworkControlPlane/
├── network_control_plane/       # Main package
│   ├── desired_state/           # YAML schema and parsing
│   │   └── parser.py            # DesiredStateParser class
│   ├── config_rendering/        # Jinja2 template rendering
│   │   ├── renderer.py          # ConfigRenderer class
│   │   └── templates/           # Jinja2 templates
│   │       └── default.j2       # Default device config template
│   ├── automation/              # Netmiko-style device management
│   │   ├── device.py            # SimulatedDevice class
│   │   └── session.py           # DeviceSession context manager
│   ├── topology/                # Mininet network simulation
│   │   └── manager.py           # TopologyManager class
│   ├── telemetry/               # Telemetry collection
│   │   ├── collector.py         # TelemetryCollector class
│   │   └── metrics.py           # Telemetry data structures
│   ├── validation/              # Validation logic
│   │   └── validator.py         # NetworkValidator class
│   ├── cli/                     # Command-line interface
│   │   └── main.py              # Click-based CLI commands
│   └── ui/                      # Web control surface
│       ├── app.py               # Flask application and REST API
│       └── templates/           # HTML templates
│           └── index.html       # Web UI frontend
├── docs/                        # Documentation
│   └── DOCKER_SETUP.md          # Docker setup instructions
├── scripts/                     # Utility scripts
│   ├── docker-run.sh            # Docker container runner
│   ├── push-to-dockerhub.sh     # Docker Hub publishing script
│   └── start-ovs.sh             # Open vSwitch startup script
├── tests/                       # Test files
├── examples/                    # Example topologies and workflows
│   ├── topology.yaml            # Example desired state
│   └── example_workflow.py      # Example usage script
├── Dockerfile                   # Docker image definition
├── docker-compose.yml           # Docker Compose configuration
├── requirements.txt             # Python dependencies
└── setup.py                     # Package setup
```
## How It Works

### Network Simulation

- Network Namespaces: Each node runs in its own Linux network namespace, providing complete network stack isolation
- Virtual Links: Links are implemented using `veth` pairs with one end in each namespace
- Link Characteristics: Linux Traffic Control (TC) enforces bandwidth limits, delay, and packet loss on links
- Switching: Open vSwitch provides virtual switching functionality for multi-port switches

### Configuration Management

- Idempotent Operations: Configuration deployment is idempotent; repeated deployments produce the same result
- Template-Based: Device configurations are generated from Jinja2 templates, allowing customization per device type
- Session Management: Device sessions ensure atomic configuration changes

### Telemetry Collection

- Real System Tools: Uses actual `ping`, `traceroute`, and `ip` commands within network namespaces
- Metrics: Collects latency (RTT), packet loss, path visibility (hops), and interface statistics
- Real-time: Telemetry is collected on demand via API calls

### Validation

- Baseline Capture: Establishes baseline metrics before changes
- Comparison Logic: Compares post-change metrics against the baseline with configurable thresholds
- Deterministic: Provides clear pass/fail results based on measurable differences
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## License

MIT