Centralized network control plane with declarative YAML configuration, automated topology deployment, and real-time telemetry collection. Simulates production networks using Mininet for testing and validation.

NetworkControlPlane

Centralized Network Configuration, Automation, and Observability

A local network control plane and observability system that simulates how modern production networks are configured centrally, automated via declarative intent, and monitored through real-time telemetry.

Overview

NetworkControlPlane demonstrates core network control plane concepts through a complete implementation:

  • Declarative Desired State: YAML-based configuration that defines network intent (topology, device configs, routing policies)
  • Configuration Rendering: Jinja2 template engine transforms declarative state into device-specific configurations
  • Network Automation: Netmiko-style device abstraction layer for idempotent configuration deployment
  • Network Simulation: Mininet-based topology simulation using Linux network namespaces and virtual Ethernet pairs
  • Real-time Telemetry: Collection of latency, packet loss, throughput, interface counters, and path visibility metrics
  • Validation Framework: Baseline vs post-change comparison for deterministic network behavior validation

Architecture

┌────────────────────────────────────────────────────────┐
│ Control Plane Interface                                │
│ (CLI / Web UI / REST API)                              │
└───────────────────────────┬────────────────────────────┘
                            │
                            ▼
┌────────────────────────────────────────────────────────┐
│ Desired State Parser (YAML)                            │
│ • Validates YAML schema                                │
│ • Extracts topology and device configurations          │
└───────────────────────────┬────────────────────────────┘
                            │
                            ▼
┌────────────────────────────────────────────────────────┐
│ Configuration Renderer (Jinja2 Templates)              │
│ • Template-based config generation                     │
│ • Device-specific configuration rendering              │
└───────────────────────────┬────────────────────────────┘
                            │
                            ▼
┌────────────────────────────────────────────────────────┐
│ Automation Layer (Device Abstraction)                  │
│ • SimulatedDevice: Netmiko-style device interface      │
│ • DeviceSession: Configuration deployment              │
│ • Idempotent config application                        │
└───────────────────────────┬────────────────────────────┘
                            │
                            ▼
┌────────────────────────────────────────────────────────┐
│ Topology Manager (Mininet)                             │
│ • Linux network namespaces (node isolation)            │
│ • Virtual Ethernet pairs (veth) for links              │
│ • Open vSwitch (OVS) for switching                     │
│ • Linux Traffic Control (TC) for link characteristics  │
└───────────────────────────┬────────────────────────────┘
                            │
                            ▼
┌────────────────────────────────────────────────────────┐
│ Telemetry Collector                                    │
│ • Latency metrics (ping-based)                         │
│ • Path visibility (traceroute)                         │
│ • Interface counters (ifconfig/ip)                     │
│ • Throughput measurements                              │
└───────────────────────────┬────────────────────────────┘
                            │
                            ▼
┌────────────────────────────────────────────────────────┐
│ Validation Engine                                      │
│ • Baseline telemetry capture                           │
│ • Post-change comparison                               │
│ • Deterministic pass/fail validation                   │
└────────────────────────────────────────────────────────┘

Technical Components

1. Desired State Parser (desired_state/parser.py)

  • Purpose: Parses and validates YAML files containing declarative network configuration
  • Key Features:
    • YAML schema validation
    • Topology extraction (nodes, links with bandwidth/delay/loss characteristics)
    • Device configuration extraction (interfaces, routes, hostnames)
  • Output: Structured Python dictionaries ready for downstream processing
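To make the parsing step concrete, here is a minimal sketch of the kind of work the parser performs. The helper name and validation rules are illustrative, not the actual DesiredStateParser API, and PyYAML is assumed to be installed:

```python
import yaml  # PyYAML, already a project dependency

def parse_desired_state(text: str) -> dict:
    """Parse a desired-state YAML document and run basic schema checks."""
    state = yaml.safe_load(text)
    if not isinstance(state, dict) or "topology" not in state:
        raise ValueError("desired state must be a mapping with a 'topology' section")
    topology = state["topology"]
    nodes = topology.get("nodes", [])
    links = topology.get("links", [])
    names = {node["name"] for node in nodes}
    # Every link endpoint must reference a declared node.
    for link in links:
        for end in (link["node1"], link["node2"]):
            if end not in names:
                raise ValueError(f"link references unknown node: {end}")
    return {"nodes": nodes, "links": links, "devices": state.get("devices", {})}
```

The output mirrors the structured-dictionary contract described above: downstream components receive plain Python lists and dicts rather than raw YAML.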

2. Configuration Renderer (config_rendering/renderer.py)

  • Purpose: Transforms declarative state into device-specific configuration commands
  • Technology: Jinja2 template engine
  • Key Features:
    • Template-based configuration generation
    • Device-specific template support
    • Extensible template system (default templates in templates/default.j2)
  • Output: Rendered configuration strings per device
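The rendering step can be sketched as follows. The inline template and helper function are illustrative stand-ins for templates/default.j2 and the real ConfigRenderer, assuming Jinja2 is installed:

```python
from jinja2 import Template

# Hypothetical inline template; the project loads its templates from
# config_rendering/templates/ instead.
DEFAULT_TEMPLATE = Template(
    "hostname {{ name }}\n"
    "{% for iface in interfaces %}"
    "interface {{ iface.name }}\n"
    " ip address {{ iface.ip }}\n"
    "{% endfor %}"
)

def render_device_config(name: str, device: dict) -> str:
    """Render one device's declarative config into CLI-style lines."""
    return DEFAULT_TEMPLATE.render(name=name, interfaces=device.get("interfaces", []))
```

Given the h1 device from the example YAML later in this README, this produces a hostname line plus an interface/IP stanza per declared interface.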

3. Automation Layer (automation/)

  • device.py: SimulatedDevice class providing Netmiko-style device abstraction
  • session.py: DeviceSession context manager for configuration deployment
  • Key Features:
    • Idempotent configuration application
    • Session-based configuration management
    • Device type abstraction (host, switch, router)
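Idempotent application can be illustrated with a toy stand-in for the device class (names and the line-set diffing strategy here are assumptions, not the project's actual implementation):

```python
class SimulatedDeviceSketch:
    """Toy stand-in for SimulatedDevice, showing idempotent config apply."""

    def __init__(self, name: str):
        self.name = name
        self.running_config: set[str] = set()

    def apply_config(self, rendered: str) -> list[str]:
        """Push only lines not already in the running config; return what changed."""
        desired = {line.strip() for line in rendered.splitlines() if line.strip()}
        delta = sorted(desired - self.running_config)
        self.running_config |= desired
        return delta
```

Applying the same rendered configuration twice yields an empty delta on the second pass, which is the property the bullet list above calls idempotent.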

4. Topology Manager (topology/manager.py)

  • Purpose: Creates and manages simulated network topology using Mininet
  • Technology Stack:
    • Mininet: Network emulation framework
    • Linux Network Namespaces: Isolated network stacks per node
    • Virtual Ethernet Pairs (veth): Virtual links between nodes
    • Open vSwitch (OVS): Virtual switching functionality
    • Linux Traffic Control (TC): Enforces link characteristics (bandwidth, delay, packet loss)
  • Key Features:
    • Dynamic topology creation from YAML
    • Link characteristic enforcement (bandwidth limits, latency, packet loss)
    • Node isolation via network namespaces
    • Topology lifecycle management (create, start, stop, cleanup)
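To show how YAML link fields map onto Linux Traffic Control, here is a hypothetical command builder. Note that Mininet's TCLink actually combines HTB and netem qdiscs; this sketch emits a single netem command (netem supports a rate option on modern kernels) purely for illustration:

```python
def tc_command(dev: str, bandwidth_mbit=None, delay_ms=None, loss_pct=None) -> str:
    """Build one `tc netem` command enforcing the link characteristics."""
    opts = []
    if bandwidth_mbit is not None:
        opts.append(f"rate {bandwidth_mbit}mbit")   # bandwidth limit
    if delay_ms is not None:
        opts.append(f"delay {delay_ms}ms")          # added latency
    if loss_pct:                                    # skip when 0
        opts.append(f"loss {loss_pct}%")            # random packet loss
    return f"tc qdisc add dev {dev} root netem " + " ".join(opts)
```

For the example link later in this README (bandwidth 100, delay 5ms, loss 0), this yields `tc qdisc add dev h1-eth0 root netem rate 100mbit delay 5ms`.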

5. Telemetry Collector (telemetry/collector.py)

  • Purpose: Collects real-time network telemetry from simulated topology
  • Metrics Collected:
    • Latency Metrics: RTT, min/max/avg latency, packet loss (via ping)
    • Path Metrics: Hop-by-hop path visibility (via traceroute)
    • Interface Counters: RX/TX packets, bytes, errors (via ifconfig/ip)
    • Throughput Metrics: Bandwidth utilization measurements
  • Technology: Uses real system tools (ping, traceroute, ifconfig, ip) within network namespaces
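Since the collector shells out to real tools, the core of latency collection is parsing `ping` output. A minimal sketch (the helper name is illustrative, not the TelemetryCollector API; it targets the Linux iputils output format):

```python
import re

def parse_ping(output: str) -> dict:
    """Extract packet loss and RTT statistics from Linux `ping` output."""
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", output)
    rtt = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", output
    )
    metrics = {"loss_pct": float(loss.group(1)) if loss else None}
    if rtt:
        metrics.update(
            zip(("rtt_min_ms", "rtt_avg_ms", "rtt_max_ms", "rtt_mdev_ms"),
                map(float, rtt.groups()))
        )
    return metrics
```

The same pattern (run a namespace-scoped command, regex-parse its stdout) extends to traceroute hops and `ip -s link` counters.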

6. Validation Engine (validation/validator.py)

  • Purpose: Validates network behavior by comparing baseline vs post-change telemetry
  • Key Features:
    • Baseline telemetry capture
    • Post-change telemetry collection
    • Deterministic comparison logic
    • Pass/fail/warning status determination
    • Detailed validation reports
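The baseline-versus-post-change comparison can be sketched like this. The threshold values and the pass/warning/fail rules are illustrative assumptions, not the NetworkValidator's actual logic:

```python
def validate(baseline: dict, current: dict,
             max_latency_increase_ms: float = 2.0,
             max_loss_increase_pct: float = 1.0) -> str:
    """Compare post-change telemetry to a baseline; thresholds are illustrative."""
    latency_delta = current["rtt_avg_ms"] - baseline["rtt_avg_ms"]
    loss_delta = current["loss_pct"] - baseline["loss_pct"]
    if loss_delta > max_loss_increase_pct:
        return "fail"        # new packet loss beyond tolerance
    if latency_delta > max_latency_increase_ms:
        return "warning"     # reachable, but noticeably slower
    return "pass"
```

Because both inputs are concrete measurements and the thresholds are fixed, the same telemetry always yields the same verdict, which is what makes the validation deterministic.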

7. Web UI (ui/app.py)

  • Technology: Flask-based REST API with HTML frontend
  • Features:
    • YAML file upload (multipart form data)
    • Topology deployment via web interface
    • Real-time telemetry collection
    • Network validation
    • Topology status monitoring
  • API Endpoints:
    • GET /api/topology/status - Get current topology status
    • POST /api/topology/deploy - Deploy topology from uploaded YAML or file path
    • POST /api/telemetry/collect - Collect telemetry metrics
    • POST /api/validation/validate - Validate network behavior

Technology Stack

  • Python 3.8+: Core language
  • Mininet 2.3.0+: Network emulation framework
  • Jinja2 3.1.0+: Template engine for configuration rendering
  • PyYAML 6.0+: YAML parsing and validation
  • Flask 2.3.0+: Web framework for UI
  • Click 8.1.0+: CLI framework
  • Paramiko 3.0.0+: SSH/networking abstractions

Quick Start

Prerequisites

  • Python 3.8+ (or use Docker)
  • Linux (recommended) or macOS with Docker
  • Mininet (install via: sudo apt-get install mininet on Linux)

Note for macOS users: Mininet requires Linux kernel features (network namespaces, veth pairs). Use Docker (recommended) - see docs/DOCKER_SETUP.md.

Installation

Option 1: Using Docker (Recommended)

# Pull pre-built image from Docker Hub
docker pull varzzz/network-control-plane:latest

# Run the web UI
docker run -it --rm --privileged -p 5001:5001 \
  -v $(pwd):/workspace -w /workspace \
  varzzz/network-control-plane:latest \
  python3 -m network_control_plane.ui

Then open http://localhost:5001 in your browser.

Option 2: Local Python Installation

# Install dependencies
pip install -r requirements.txt

# Or install as a package
pip install -e .

Usage

Command Line Interface

# Deploy network configuration from YAML
python -m network_control_plane.cli deploy examples/topology.yaml

# Collect latency telemetry between nodes
python -m network_control_plane.cli ping h1 h2

# Collect path visibility telemetry
python -m network_control_plane.cli trace h1 h2

# Validate network behavior
python -m network_control_plane.cli validate h1 h2

Web UI

# Start the web control surface
python -m network_control_plane.ui

# Then open http://localhost:5001 in your browser

Web UI Features:

  • Upload YAML topology files directly through the web interface
  • Deploy network topologies with a single click
  • Collect real-time telemetry metrics
  • Validate network behavior

Docker Compose

Quick Start:

# Build and run with Docker Compose
docker-compose up

# Access web UI at http://localhost:5001

Building from source:

docker build -t network-control-plane:latest .
docker-compose up

See docs/DOCKER_SETUP.md for detailed Docker instructions.

Example Workflow

  1. Define desired state in YAML (see examples/topology.yaml):

    topology:
      nodes:
        - name: h1
          type: host
        - name: h2
          type: host
      links:
        - node1: h1
          node2: h2
          bandwidth: 100
          delay: 5ms
          loss: 0
    devices:
      h1:
        interfaces:
          - name: eth0
            ip: 10.0.0.1
  2. Deploy configuration to create simulated topology

  3. Collect baseline telemetry (latency, packet loss, path visibility)

  4. Inject failure (link down, congestion, route changes)

  5. Collect post-change telemetry

  6. Validate network behavior against baseline

YAML Desired State Format

The desired state YAML file defines:

  • Topology: Nodes (hosts, switches, routers) and links with characteristics
  • Devices: Per-device configuration (interfaces, routes, hostnames)

See examples/topology.yaml for a complete example.

Key Fields:

  • topology.nodes: List of network nodes with names and types
  • topology.links: Links between nodes with bandwidth (Mbps), delay (ms), and loss (%)
  • devices: Device-specific configurations (interfaces with IP/netmask, static routes)

Project Structure

NetworkControlPlane/
├── network_control_plane/    # Main package
│   ├── desired_state/        # YAML schema and parsing
│   │   └── parser.py         # DesiredStateParser class
│   ├── config_rendering/     # Jinja2 template rendering
│   │   ├── renderer.py       # ConfigRenderer class
│   │   └── templates/        # Jinja2 templates
│   │       └── default.j2    # Default device config template
│   ├── automation/           # Netmiko-style device management
│   │   ├── device.py         # SimulatedDevice class
│   │   └── session.py        # DeviceSession context manager
│   ├── topology/             # Mininet network simulation
│   │   └── manager.py        # TopologyManager class
│   ├── telemetry/            # Telemetry collection
│   │   ├── collector.py      # TelemetryCollector class
│   │   └── metrics.py        # Telemetry data structures
│   ├── validation/           # Validation logic
│   │   └── validator.py      # NetworkValidator class
│   ├── cli/                  # Command-line interface
│   │   └── main.py           # Click-based CLI commands
│   └── ui/                   # Web control surface
│       ├── app.py            # Flask application and REST API
│       └── templates/        # HTML templates
│           └── index.html    # Web UI frontend
├── docs/                     # Documentation
│   └── DOCKER_SETUP.md       # Docker setup instructions
├── scripts/                  # Utility scripts
│   ├── docker-run.sh         # Docker container runner
│   ├── push-to-dockerhub.sh  # Docker Hub publishing script
│   └── start-ovs.sh          # Open vSwitch startup script
├── tests/                    # Test files
├── examples/                 # Example topologies and workflows
│   ├── topology.yaml         # Example desired state
│   └── example_workflow.py   # Example usage script
├── Dockerfile                # Docker image definition
├── docker-compose.yml        # Docker Compose configuration
├── requirements.txt          # Python dependencies
└── setup.py                  # Package setup

Technical Implementation Notes

Network Simulation Details

  • Network Namespaces: Each node runs in its own Linux network namespace, providing complete network stack isolation
  • Virtual Links: Links are implemented using veth pairs with one end in each namespace
  • Link Characteristics: Linux Traffic Control (TC) enforces bandwidth limits, delay, and packet loss on links
  • Switching: Open vSwitch provides virtual switching functionality for multi-port switches

Configuration Deployment

  • Idempotent Operations: Configuration deployment is idempotent; repeated deployments produce the same result
  • Template-Based: Device configurations are generated from Jinja2 templates, allowing customization per device type
  • Session Management: Device sessions ensure atomic configuration changes

Telemetry Collection

  • Real System Tools: Uses actual ping, traceroute, and ip commands within network namespaces
  • Metrics: Collects latency (RTT), packet loss, path visibility (hops), and interface statistics
  • Real-time: Telemetry is collected on-demand via API calls

Validation Framework

  • Baseline Capture: Establishes baseline metrics before changes
  • Comparison Logic: Compares post-change metrics against baseline with configurable thresholds
  • Deterministic: Provides clear pass/fail results based on measurable differences

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT
