The operator surface for the NexusPlatform 66-VM lab — a single ≤25 MB Native AOT binary that introspects, drives, and recovers the lab's Tier-1 (Vault, AD, gateway) and Tier-2 (Docker Swarm + Nomad + Consul + Portainer) control planes. No raw terraform, no vault CLI, no docker stack for daily ops; one tool, predictable verbs, panic buttons everywhere.
Canon: This repo implements Phase 0.F (line 156) of the NexusPlatform blueprint. Read `nexus-platform-plan` first to understand the lab the CLI talks to. New to the tool stack (Vault, Consul, Nomad, Portainer)? See the tool stack glossary for plain-English definitions of each.
Current state (v0.2.1): Two of five master-plan verbs ship — `cluster-status` (live HTTPS introspection of Consul + Nomad + Portainer; v0.1) and `infrastructure {list, status, suspend, resume}` (VMware Workstation control via `vmrun.exe` + a hand-rolled `vms.yaml` reader; v0.2.x). Verified end-to-end against the live cluster with a `suspend → status → resume` round-trip on `foundation/vault-3` showing the correct `suspended` mid-state. v0.2.1 cleared the carryover backlog (Spectre.Console.Cli 0.55 bump + Workstation Pro 17.5+ session-suffixed `.vmem` detection). The remaining three verbs (`failover-test`, `kafka failover`, `demo run/record`) are stubs.
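The v0.2.1 `.vmem` fix above can be sketched in shell. This is an illustrative model only (the CLI itself is C#), built on an assumed mechanic: a suspended VM leaves a memory image next to its `.vmx`, named either `<base>.vmem` or, on Workstation Pro 17.5+, session-suffixed `<base>-<id>.vmem`.

```shell
# Illustrative suspended-state check (assumption: suspend leaves either
# "<base>.vmem" or a session-suffixed "<base>-<id>.vmem" beside the .vmx).
is_suspended() {
  vmx=$1
  dir=$(dirname "$vmx")
  base=$(basename "$vmx" .vmx)
  for f in "$dir/$base.vmem" "$dir/$base"-*.vmem; do
    # an unmatched glob stays literal, so the -e test filters it out
    [ -e "$f" ] && return 0
  done
  return 1
}

# usage: is_suspended /vms/foundation/vault-3/vault-3.vmx && echo suspended
```

Pre-17.5 builds, which drop a plain `<base>.vmem`, still match the first pattern, which is why the v0.2.1 fix could be additive.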
| Layer | Tech | Purpose |
|---|---|---|
| Entry + UX | Spectre.Console.Cli 0.55 + .NET 10 | Verb routing, table rendering, help text, AOT publish root |
| Domain | Nexus.Cli.Core (lib) | Interfaces (`INexusConsulClient`, `INexusNomadClient`, …), `Result<T>`, response records |
| Adapters | Nexus.Cli.Adapters (lib) | `HttpClient` factory pinned to the operator's CA bundle, source-gen JSON, Vault token resolver |
| Tests | xUnit + NetArchTest | Layer-dependency rules, JSON contract round-trips, env-var resolver permutations |
| Distribution | GitHub Releases | `linux-x64.tar.gz` + `win-x64.zip` attached to every tag — single static binary |
| Command | Status | Slice |
|---|---|---|
| `nexus cluster-status` | ✅ v0.1.0 | Live HTTPS to Consul + Nomad + Portainer; tabular health summary |
| `nexus infrastructure list` | ✅ v0.2.0 | Whole-fleet table from `vms.yaml` decorated with live VMware state |
| `nexus infrastructure status <cluster>` | ✅ v0.2.0 | Single-cluster (or single-node via `--node`) state view |
| `nexus infrastructure suspend <cluster>` | ✅ v0.2.0 | `vmrun suspend` with confirm prompt + per-VM glyph; aliased as `suspend-cluster` |
| `nexus infrastructure resume <cluster>` | ✅ v0.2.0 | `vmrun start <vmx> nogui` for every stopped/suspended VM in scope |
| `nexus failover-test` | 🟡 stub | Drive a manager loss + raft re-election, measure RTO (planned v0.3) |
| `nexus kafka failover` | 🟡 stub | East→West DR via MM2 (planned alongside Phase 0.H) |
| `nexus demo run \| record` | 🟡 stub | Idempotent demo orchestrator + VHS/Playwright recorder (planned v0.4) |
Run `nexus --help` for the live verb list against the binary you have installed.
# 1) Authenticate to Vault first (operator's existing flow). nexus-cli reads
# VAULT_TOKEN/VAULT_ADDR/VAULT_CACERT from your environment.
$env:VAULT_ADDR = 'https://192.168.70.121:8200'
$env:VAULT_CACERT = "$HOME\.nexus\vault-ca-bundle.crt"
vault login -method=ldap username=nexusadmin
# 2) Run cluster-status
.\nexus.exe cluster-status
# 3) JSON for scripting
.\nexus.exe cluster-status --json | ConvertFrom-Json
# 4) Drive Workstation VMs via vms.yaml (v0.2)
$env:NEXUS_VMS_YAML = "$HOME\src\nexus-platform-plan\docs\infra\vms.yaml"
.\nexus.exe infrastructure list # whole fleet
.\nexus.exe infrastructure status foundation # one cluster
.\nexus.exe infrastructure suspend foundation --yes # vmrun suspend
.\nexus.exe infrastructure suspend-cluster foundation --yes # alias
.\nexus.exe infrastructure resume foundation --yes

Expected output (live 0.E.4 cluster, 2026-05-07):
─── Cluster status ───────────────────────────────── ● GREEN ───
Consul 6 alive · 0 left · leader: swarm-manager-1
Nomad 3 servers alive · 3 clients ready · leader: swarm-manager-1
Portainer 1 manager-pinned replica · 6 agents · API 200 OK
# Windows
$ver = '0.2.1'
Invoke-WebRequest "https://github.com/grezap/nexus-cli/releases/download/v$ver/nexus-cli-$ver-win-x64.zip" -OutFile nexus.zip
Expand-Archive nexus.zip -DestinationPath C:\Tools\nexus-cli
$env:Path += ';C:\Tools\nexus-cli'

# Linux
ver=0.2.1
curl -sSL "https://github.com/grezap/nexus-cli/releases/download/v$ver/nexus-cli-$ver-linux-x64.tar.gz" | tar xz -C /usr/local/bin
nexus --version

winget and .deb are deferred to v0.3+.
Prerequisites: .NET 10 SDK (global.json pins 10.0.100), pwsh 7+ on Windows.
git clone https://github.com/grezap/nexus-cli
cd nexus-cli
pwsh -File scripts\cli.ps1 publish -Rid win-x64
.\artifacts\win-x64\nexus.exe --version

Verbs supported by `scripts/cli.ps1`: `build`, `publish`, `test`, `lint`, `clean`, `size-check`. `-Rid all` does both linux-x64 + win-x64.
nexus-cli reads only environment variables — no config files, no embedded creds.
| Variable | Required | Purpose |
|---|---|---|
| `VAULT_TOKEN` | `cluster-status` | Operator's Vault token (from `vault login`) |
| `VAULT_ADDR` | `cluster-status` | e.g. `https://192.168.70.121:8200` |
| `VAULT_CACERT` | `cluster-status` (or `NEXUS_CA_BUNDLE`) | Path to PEM bundle of the lab root CA |
| `NEXUS_CA_BUNDLE` | no | Override; same shape as `VAULT_CACERT` |
| `NEXUS_VMS_YAML` | `infrastructure` (recommended) | Absolute path to `nexus-platform-plan/docs/infra/vms.yaml`. If unset, falls back to `../nexus-platform-plan/docs/infra/vms.yaml` from the cwd. |
| `NEXUS_VMRUN_PATH` | no | Override `vmrun.exe` discovery. Defaults to the canonical Workstation Pro install paths on Windows. |
The CLI does not call vault login for you — manage your token externally (per ADR-0004).
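The CA-bundle precedence in the table can be sketched as a tiny resolver. This is only an illustration of the documented order (`NEXUS_CA_BUNDLE` overrides `VAULT_CACERT`), not the CLI's actual C# implementation.

```shell
# Illustrative resolver for the CA-bundle variables above.
# Documented order: NEXUS_CA_BUNDLE (override) wins, then VAULT_CACERT;
# with neither set, HTTPS calls to the lab root CA cannot be verified.
resolve_ca_bundle() {
  if [ -n "${NEXUS_CA_BUNDLE:-}" ]; then
    printf '%s\n' "$NEXUS_CA_BUNDLE"
  elif [ -n "${VAULT_CACERT:-}" ]; then
    printf '%s\n' "$VAULT_CACERT"
  else
    return 1  # no bundle configured
  fi
}
```

Because the override sits in its own variable, an operator can point nexus-cli at a lab-only bundle without disturbing the `VAULT_CACERT` that the `vault` CLI itself reads.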
# default human-readable
nexus cluster-status
# JSON for scripting / piping into jq
nexus cluster-status --json
# verbose: dump per-component HTTP timing
nexus cluster-status --verbose

3 projects + tests; layer rules enforced by NetArchTest:
Nexus.Cli (AOT root) ───▶ Nexus.Cli.Adapters ───▶ Nexus.Cli.Core
(HTTP, Vault, JSON) (interfaces, records)
Nexus.Cli.Core depends only on the BCL.
Nexus.Cli.Adapters may depend on Nexus.Cli.Core.
Nothing depends on Nexus.Cli.
ADR index: docs/adr/index.md. Five ADRs ship with v0.1.0 covering framework choice, AOT cadence, layout, auth model, and the Dapper-on-AOT mandate for future DB I/O.
| Version | Scope |
|---|---|
| v0.1.0 | cluster-status — Consul + Nomad + Portainer read-only; AOT pipeline; size budget; CI |
| v0.2.0 | infrastructure {list, status, suspend, resume} + suspend-cluster alias; vmrun.exe adapter; hand-rolled vms.yaml reader (ADR-0006) |
| v0.2.1 | Spectre.Console.Cli 0.55 bump (breaking-change adoption: CT param + protected override); session-suffixed .vmem detection so post-suspend status correctly reports suspended on Workstation Pro 17.5+ |
| v0.3+ | winget manifest; .deb; --watch flag; deferred to slice cycles |
| v0.3.0 | failover-test; SSH client + raft introspection |
| v0.4.0 | demo run/record — VHS .tape orchestration + Playwright bridge |
| v0.5.0 | kafka failover — pairs with Phase 0.H Kafka ecosystem |
| v1.0.0 | All five master-plan commands stable; panic-button verbs everywhere |
This is a portfolio project authored solely by Grigoris Zapantis. PRs are welcome but the commit author/owner stays single-named per CONTRIBUTING.md.
MIT.
- Spectre.Console — the table rendering and `CommandApp` host
- HashiCorp Vault, Consul, Nomad — the control planes this CLI talks to
- Portainer CE — the lab's Swarm UI
- The `nexus-platform-plan` blueprint — every command in this CLI exists because the master plan specified it