Prevent deadlocks in the metrics client #44762
Conversation
Force-pushed from cb8e725 to fe83f31 (compare)
songy23
left a comment
Is there a simple way to reproduce it in unit tests (e.g., with a mock meter)? The change makes sense to me.
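A minimal sketch of such a test, assuming hypothetical `client`/`meter` types (not the agent's actual API): the mock meter re-enters the client, which deterministically deadlocks if `Gauge` still holds the client mutex while calling the meter, and completes with the reduced lock scope.

```go
package metricsclient

import (
	"sync"
	"testing"
	"time"
)

// recorder stands in for the meter interface; a hypothetical name.
type recorder interface{ Record(name string, v float64) }

type client struct {
	mu     sync.Mutex
	series map[string]float64
	meter  recorder
}

// Gauge stores the value, then calls the meter *after* releasing the
// lock (the fixed behavior). Moving the Record call back inside the
// critical section makes the test below hang, reproducing the bug.
func (c *client) Gauge(name string, v float64) {
	c.mu.Lock()
	c.series[name] = v
	m := c.meter
	c.mu.Unlock()
	m.Record(name, v)
}

// Snapshot takes the client lock, like a real meter callback might.
func (c *client) Snapshot() map[string]float64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	out := make(map[string]float64, len(c.series))
	for k, v := range c.series {
		out[k] = v
	}
	return out
}

// mockMeter re-enters the client from Record, so Record needs the
// client mutex. sync.Mutex is not reentrant: if Gauge still held it,
// this call would block forever.
type mockMeter struct{ c *client }

func (m *mockMeter) Record(string, float64) { m.c.Snapshot() }

func TestGaugeDoesNotDeadlock(t *testing.T) {
	c := &client{series: map[string]float64{}}
	c.meter = &mockMeter{c: c}

	done := make(chan struct{})
	go func() { c.Gauge("requests", 1); close(done) }()

	select {
	case <-done:
	case <-time.After(2 * time.Second):
		t.Fatal("Gauge appears to deadlock when the meter re-enters the client")
	}
}
```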
Static quality checks ✅
Please find below the results from static quality gates.
Successful checks
15 successful checks with minimal change (< 2 KiB)
On-wire sizes (compressed)
Regression Detector Results
Metrics dashboard
Baseline: adfcd28
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +2.19 | [-0.85, +5.23] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | quality_gate_logs | % cpu utilization | +2.59 | [+1.12, +4.06] | 1 | Logs · bounds checks dashboard |
| ➖ | docker_containers_cpu | % cpu utilization | +2.19 | [-0.85, +5.23] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | +1.20 | [+0.99, +1.40] | 1 | Logs · bounds checks dashboard |
| ➖ | otlp_ingest_metrics | memory utilization | +0.51 | [+0.36, +0.66] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | +0.24 | [+0.18, +0.29] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.04 | [-0.35, +0.44] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.04 | [-0.34, +0.41] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | +0.03 | [-0.20, +0.26] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.02 | [-0.02, +0.07] | 1 | Logs · bounds checks dashboard |
| ➖ | otlp_ingest_logs | memory utilization | +0.01 | [-0.08, +0.10] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.11, +0.13] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.01 | [-0.08, +0.09] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | -0.00 | [-0.08, +0.07] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | -0.00 | [-0.14, +0.13] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.03 | [-0.07, +0.01] | 1 | Logs · bounds checks dashboard |
| ➖ | ddot_metrics_sum_delta | memory utilization | -0.04 | [-0.25, +0.17] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.05 | [-0.45, +0.35] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | -0.10 | [-0.14, -0.05] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.17 | [-0.22, -0.11] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.18 | [-0.24, -0.12] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | -0.35 | [-0.51, -0.18] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | -0.42 | [-0.64, -0.21] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.58 | [-0.66, -0.50] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that, if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
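For example, the docker_containers_cpu row above reports Δ mean % = +2.19 with a CI of [-0.85, +5.23]: the estimate is below the 5.00% tolerance and the interval contains zero, so the row fails the first two criteria and is marked ➖ rather than flagged as a regression.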
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
Force-pushed from fe83f31 to 6a72b2c (compare)
Force-pushed from 6a72b2c to 359edb4 (compare)
drichards-87
left a comment
Left some feedback on the release notes from Docs and approved the PR.
Force-pushed from fcf1505 to 5623f36 (compare)
Note: I verified that the test fails on main, and succeeds with the associated changes. Pushed with
Force-pushed from 2607dc4 to d8cf397 (compare)
Force-pushed from d8cf397 to 8235945 (compare)
Co-authored-by: DeForest Richards <56796055+drichards-87@users.noreply.github.com>
Force-pushed from 8235945 to bbc7914 (compare)
/merge -m squash
View all feedback in Devflow UI.
The expected merge time in
What does this PR do?
Reduce the scope in which the metrics client's mutex is held, to prevent a deadlock when the meter's methods need to acquire another mutex.
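A minimal sketch of the pattern, with hypothetical names (`client`, `meter`, and `Record` are assumptions for illustration, not the agent's actual types):

```go
package metricsclient

import "sync"

// meter has its own internal mutex, the "other mutex" the client's
// callers can end up waiting on.
type meter struct {
	mu sync.Mutex
}

func (m *meter) Record(name string, v float64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	_, _ = name, v // record the data point
}

type client struct {
	mu     sync.Mutex
	series map[string]float64
	meter  *meter
}

// Before: the meter is called while c.mu is held. If another goroutine
// can hold the meter's mutex while waiting for c.mu, the two goroutines
// block each other forever (a classic lock-order inversion).
func (c *client) gaugeBefore(name string, v float64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.series[name] = v
	c.meter.Record(name, v) // acquires meter.mu with c.mu still held
}

// After: c.mu guards only the client's own state; the meter call happens
// outside the critical section, so the two mutexes are never held at once.
func (c *client) gaugeAfter(name string, v float64) {
	c.mu.Lock()
	c.series[name] = v
	m := c.meter
	c.mu.Unlock()
	m.Record(name, v) // no client lock held here
}
```

Capturing `c.meter` under the lock and calling it only after `Unlock` keeps the critical section limited to the client's own state.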
Motivation
Prevent a deadlock we've seen occurring in prod (OTAGENT-753).
Describe how you validated your changes
Reading the change
Additional Notes
I'm assuming that holding the lock is not required when using the meter, just as is done in the `Histogram` and `Count` functions.