$ KUBECONFIG=../../../../../etc/kubernetes/eks.conf bin/chainsaw test tests/e2e/vault-injector/
Version: 0.2.14
Loading default configuration...
- Using test file: chainsaw-test
- TestDirs [tests/e2e/vault-injector/]
- Quiet false
- SkipDelete false
- FailFast false
- Namespace ''
- FullName false
- IncludeTestRegex ''
- ExcludeTestRegex ''
- ApplyTimeout 5s
- AssertTimeout 30s
- CleanupTimeout 30s
- DeleteTimeout 15s
- ErrorTimeout 30s
- ExecTimeout 5s
- DeletionPropagationPolicy Background
- Template true
- NoCluster false
- PauseOnFailure false
Loading tests...
- vault-injector-test (tests/e2e/vault-injector/)
Loading values...
Running tests...
=== RUN chainsaw
=== PAUSE chainsaw
=== CONT chainsaw
=== RUN chainsaw/vault-injector-test
=== PAUSE chainsaw/vault-injector-test
=== CONT chainsaw/vault-injector-test
| 07:09:03 | vault-injector-test | @chainsaw | CREATE | OK | v1/Namespace @ chainsaw-powerful-kid
| 07:09:03 | vault-injector-test | step-1 | TRY | BEGIN |
| 07:09:03 | vault-injector-test | step-1 | APPLY | RUN | apps/v1/StatefulSet @ monitoring/vault-injector-smoketest
E1209 07:09:03.917760 1074733 panic.go:262] "Observed a panic" panic="runtime error: invalid memory address or nil pointer dereference" panicGoValue="\"invalid memory address or nil pointer dereference\"" stacktrace=<
goroutine 138 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x8390d68, 0xc001762c30}, {0x2c58da0, 0x9f537f0})
k8s.io/apimachinery@v0.34.2/pkg/util/runtime/runtime.go:132 +0xbc
k8s.io/apimachinery/pkg/util/runtime.handleCrash({0x8390e10, 0xc001228460}, {0x2c58da0, 0x9f537f0}, {0x0, 0x0, 0x441060?})
k8s.io/apimachinery@v0.34.2/pkg/util/runtime/runtime.go:107 +0x116
k8s.io/apimachinery/pkg/util/runtime.HandleCrashWithContext({0x8390e10, 0xc001228460}, {0x0, 0x0, 0x0})
k8s.io/apimachinery@v0.34.2/pkg/util/runtime/runtime.go:78 +0x5a
panic({0x2c58da0?, 0x9f537f0?})
runtime/panic.go:792 +0x132
github.com/kyverno/chainsaw/pkg/engine/operations/apply.(*operation).tryApplyResource(0xc000c88568, {0x8390e10, 0xc001228460}, {0x836bdf8, 0xc0015ab6b0}, {0xc001762780})
github.com/kyverno/chainsaw/pkg/engine/operations/apply/operation.go:102 +0x166
github.com/kyverno/chainsaw/pkg/engine/operations/apply.(*operation).execute.func1({0x8390e10?, 0xc001228460?})
github.com/kyverno/chainsaw/pkg/engine/operations/apply/operation.go:86 +0x4c
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2({0x8390e10?, 0xc001228460?}, 0xc000c87e30?)
k8s.io/apimachinery@v0.34.2/pkg/util/wait/loop.go:87 +0x62
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x8390e10, 0xc001228460}, {0x837ddd0, 0xc0000fd800}, 0x0, 0x0, 0xc000c87ec0)
k8s.io/apimachinery@v0.34.2/pkg/util/wait/loop.go:88 +0x237
k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel({0x8390e10, 0xc001228460}, 0x0?, 0x0, 0xc000c87ec0)
k8s.io/apimachinery@v0.34.2/pkg/util/wait/poll.go:33 +0x56
github.com/kyverno/chainsaw/pkg/engine/operations/apply.(*operation).execute(0x0?, {0x8390e10?, 0xc001228460?}, {0x836bdf8?, 0xc0015ab6b0?}, {0x31c3c86?})
github.com/kyverno/chainsaw/pkg/engine/operations/apply/operation.go:85 +0x8c
github.com/kyverno/chainsaw/pkg/engine/operations/apply.(*operation).Exec(0xc000c88568, {0x8390e10, 0xc001228460}, {0x836bdf8?, 0xc0015ab6b0?})
github.com/kyverno/chainsaw/pkg/engine/operations/apply/operation.go:79 +0x3ca
github.com/kyverno/chainsaw/pkg/runner/operations.applyAction.Execute({{{{0x0, 0x0, 0x0}}, {0x0, 0x0}, {0x0}, {{0x0, 0x0, 0x0}}, {{0x0, ...}}, ...}, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/operations/apply.go:47 +0x505
github.com/kyverno/chainsaw/pkg/runner.(*runner).runAction(_, {_, _}, {_, _}, {_, _}, _, _, {0xc000a4d440, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/runner.go:448 +0x2b2
github.com/kyverno/chainsaw/pkg/runner.(*runner).runOperation(_, {_, _}, {0xc000a4d440, 0xc0018f6ae0, {0x7ffc4e3e5516, 0x19}, {0x836bdf8, 0xc0012dcdb0}, {0x0, ...}, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/runner.go:383 +0x4d7
github.com/kyverno/chainsaw/pkg/runner.(*runner).runStep(_, {_, _}, _, _, _, {0xc000a4d440, 0xc0018f6ae0, {0x7ffc4e3e5516, 0x19}, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/runner.go:351 +0xc1d
github.com/kyverno/chainsaw/pkg/runner.(*runner).run.func2.3({_, _}, _, _, _, {0xc000a4d440, 0xc0018f6ae0, {0x0, 0x0}, {0x836bdf8, ...}, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/runner.go:192 +0x13ab
github.com/kyverno/chainsaw/pkg/runner.(*runner).run.func2.4(0xc00182cfc0)
github.com/kyverno/chainsaw/pkg/runner/runner.go:202 +0xcf
testing.tRunner(0xc00182cfc0, 0xc00074db90)
testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 137
testing/testing.go:1851 +0x413
>
| 07:09:03 | vault-injector-test | step-1 | APPLY | DONE | apps/v1/StatefulSet @ monitoring/vault-injector-smoketest
| 07:09:03 | vault-injector-test | step-1 | TRY | END |
| 07:09:03 | vault-injector-test | @chainsaw | CLEANUP | BEGIN |
| 07:09:03 | vault-injector-test | @chainsaw | DELETE | OK | v1/Namespace @ chainsaw-powerful-kid
--- FAIL: chainsaw (6.69s)
--- FAIL: chainsaw/vault-injector-test (6.69s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x28a7c06]
goroutine 138 [running]:
testing.tRunner.func1.2({0x2c58da0, 0x9f537f0})
testing/testing.go:1734 +0x21c
testing.tRunner.func1()
testing/testing.go:1737 +0x35e
panic({0x2c58da0?, 0x9f537f0?})
runtime/panic.go:792 +0x132
k8s.io/apimachinery/pkg/util/runtime.handleCrash({0x8390e10, 0xc001228460}, {0x2c58da0, 0x9f537f0}, {0x0, 0x0, 0x441060?})
k8s.io/apimachinery@v0.34.2/pkg/util/runtime/runtime.go:114 +0x1a9
k8s.io/apimachinery/pkg/util/runtime.HandleCrashWithContext({0x8390e10, 0xc001228460}, {0x0, 0x0, 0x0})
k8s.io/apimachinery@v0.34.2/pkg/util/runtime/runtime.go:78 +0x5a
panic({0x2c58da0?, 0x9f537f0?})
runtime/panic.go:792 +0x132
github.com/kyverno/chainsaw/pkg/engine/operations/apply.(*operation).tryApplyResource(0xc000c88568, {0x8390e10, 0xc001228460}, {0x836bdf8, 0xc0015ab6b0}, {0xc001762780})
github.com/kyverno/chainsaw/pkg/engine/operations/apply/operation.go:102 +0x166
github.com/kyverno/chainsaw/pkg/engine/operations/apply.(*operation).execute.func1({0x8390e10?, 0xc001228460?})
github.com/kyverno/chainsaw/pkg/engine/operations/apply/operation.go:86 +0x4c
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2({0x8390e10?, 0xc001228460?}, 0xc000c87e30?)
k8s.io/apimachinery@v0.34.2/pkg/util/wait/loop.go:87 +0x62
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x8390e10, 0xc001228460}, {0x837ddd0, 0xc0000fd800}, 0x0, 0x0, 0xc000c87ec0)
k8s.io/apimachinery@v0.34.2/pkg/util/wait/loop.go:88 +0x237
k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel({0x8390e10, 0xc001228460}, 0x0?, 0x0, 0xc000c87ec0)
k8s.io/apimachinery@v0.34.2/pkg/util/wait/poll.go:33 +0x56
github.com/kyverno/chainsaw/pkg/engine/operations/apply.(*operation).execute(0x0?, {0x8390e10?, 0xc001228460?}, {0x836bdf8?, 0xc0015ab6b0?}, {0x31c3c86?})
github.com/kyverno/chainsaw/pkg/engine/operations/apply/operation.go:85 +0x8c
github.com/kyverno/chainsaw/pkg/engine/operations/apply.(*operation).Exec(0xc000c88568, {0x8390e10, 0xc001228460}, {0x836bdf8?, 0xc0015ab6b0?})
github.com/kyverno/chainsaw/pkg/engine/operations/apply/operation.go:79 +0x3ca
github.com/kyverno/chainsaw/pkg/runner/operations.applyAction.Execute({{{{0x0, 0x0, 0x0}}, {0x0, 0x0}, {0x0}, {{0x0, 0x0, 0x0}}, {{0x0, ...}}, ...}, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/operations/apply.go:47 +0x505
github.com/kyverno/chainsaw/pkg/runner.(*runner).runAction(_, {_, _}, {_, _}, {_, _}, _, _, {0xc000a4d440, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/runner.go:448 +0x2b2
github.com/kyverno/chainsaw/pkg/runner.(*runner).runOperation(_, {_, _}, {0xc000a4d440, 0xc0018f6ae0, {0x7ffc4e3e5516, 0x19}, {0x836bdf8, 0xc0012dcdb0}, {0x0, ...}, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/runner.go:383 +0x4d7
github.com/kyverno/chainsaw/pkg/runner.(*runner).runStep(_, {_, _}, _, _, _, {0xc000a4d440, 0xc0018f6ae0, {0x7ffc4e3e5516, 0x19}, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/runner.go:351 +0xc1d
github.com/kyverno/chainsaw/pkg/runner.(*runner).run.func2.3({_, _}, _, _, _, {0xc000a4d440, 0xc0018f6ae0, {0x0, 0x0}, {0x836bdf8, ...}, ...}, ...)
github.com/kyverno/chainsaw/pkg/runner/runner.go:192 +0x13ab
github.com/kyverno/chainsaw/pkg/runner.(*runner).run.func2.4(0xc00182cfc0)
github.com/kyverno/chainsaw/pkg/runner/runner.go:202 +0xcf
testing.tRunner(0xc00182cfc0, 0xc00074db90)
testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 137
testing/testing.go:1851 +0x413
Chainsaw version

v0.2.14

Description
Specifying a `cluster` in try steps without `.spec.clusters` configured leads to a segfault, even if a KUBECONFIG env var is supplied.

Steps to reproduce

KUBECONFIG=path/to/kubeconfig bin/chainsaw test tests/e2e/vault-injector/

🗒️ Note that supplying
`--cluster` from the command line with correct parameters does not reproduce this error, which is good/expected.

Expected behavior

Throw an error that it can't map the cluster to a context. Alternatively, perhaps the cluster specified there should be the context? That would break backwards compatibility, I suppose, so should we allow directly referencing a context from a kubeconfig via another key like `cluster_context` instead? Worth noting that, as a new user, I recall originally being confused that the `cluster` key is not a context. I also originally thought this should be caught by the linter, but since the cluster can also be supplied from the command line, catching it in the linter may not be possible.

Screenshots
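For context, the failing shape is roughly the following (file names and the `eks` entry are illustrative, not the actual test files); the panic occurs when the step-level `cluster` has no matching entry under `.spec.clusters`:

```yaml
apiVersion: chainsaw.kyverno.io/v1alpha1
kind: Test
metadata:
  name: vault-injector-test
spec:
  # Nothing registered here, so `cluster: eks` below resolves to nothing
  # and chainsaw segfaults. Declaring the cluster (and optionally a
  # context) avoids the crash:
  # clusters:
  #   eks:
  #     kubeconfig: path/to/kubeconfig
  #     context: my-context
  steps:
  - try:
    - apply:
        cluster: eks
        file: statefulset.yaml
```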
No response
Logs

(see the terminal output at the top of this report)
Slack discussion
No response
Troubleshooting
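On the naming confusion above: in a kubeconfig, `clusters` and `contexts` are separate lists — a context binds a cluster to a user (and optionally a default namespace), so a cluster name alone is not enough to select a context. A minimal kubeconfig with illustrative values:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: eks                 # endpoint + CA only; not selectable by itself
  cluster:
    server: https://example.invalid
users:
- name: admin
  user: {}
contexts:
- name: eks-admin           # binds cluster "eks" to user "admin"
  context:
    cluster: eks
    user: admin
current-context: eks-admin
```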