Conversation
…event receivers Replace TryGetValue + GetOrAdd(closure) pattern with GetOrAdd<TArg>(static lambda, args) in First* event receiver methods. This removes redundant TryGetValue pre-checks (GetOrAdd already does this internally) and avoids closure allocation by passing context, sessionContext/assemblyContext/classContext, and cancellationToken as explicit tuple args to a static lambda.
Code Review
Good intent, but the implementation won't compile against the current ThreadSafeDictionary.
Critical Issue: Missing Overload
The PR calls GetOrAdd with a 3-argument signature (key, static factory, args):
```csharp
_firstTestInSessionTasks.GetOrAdd("session",
    static (_, args) => InvokeFirstTestInSessionEventReceiversCoreAsync(
        args.context, args.sessionContext, args.cancellationToken),
    (context, sessionContext, cancellationToken));
```

But `ThreadSafeDictionary<TKey, TValue>` only exposes:

```csharp
public TValue GetOrAdd(TKey key, Func<TKey, TValue> func) // 2-arg overload only
```

The 3-arg form (`GetOrAdd<TArg>(TKey key, Func<TKey, TArg, TValue> factory, TArg factoryArgument)`) is a `ConcurrentDictionary` API that `ThreadSafeDictionary` doesn't wrap or expose. This would be a compile error. (The PR's own test plan checkboxes are still unchecked, which suggests this was known.)
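For reference, the `ConcurrentDictionary` overload the proposed pattern relies on has this shape (available on modern .NET targets, e.g. .NET Core 2.0+ / .NET Standard 2.1):

```csharp
// System.Collections.Concurrent.ConcurrentDictionary<TKey, TValue>.
// The factory argument is threaded through explicitly so the lambda can be
// 'static' and capture-free — no closure allocation at the call site.
public TValue GetOrAdd<TArg>(TKey key, Func<TKey, TArg, TValue> valueFactory, TArg factoryArgument);
```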
How to Fix
Add the factory-arg overload to ThreadSafeDictionary:
```csharp
public TValue GetOrAdd<TArg>(TKey key, Func<TKey, TArg, TValue> func, TArg factoryArgument)
{
    if (_innerDictionary.TryGetValue(key, out var existingLazy))
        return existingLazy.Value;

    return GetOrAddSlow(key, k => func(k, factoryArgument));
}
```

This keeps the Lazy-based once-only execution guarantee, adds the allocation-free static-lambda pattern, and mirrors how `ConcurrentDictionary` exposes it. The `factoryArgument` is captured by value in the slow-path closure, but that closure is only created on first insertion (not the hot path), so it's still a net win.
What's Correct
- Removing the manual `TryGetValue` pre-check is valid. `ThreadSafeDictionary.GetOrAdd` already does a fast `TryGetValue` internally (line 79 of `ThreadSafeDictionary.cs`), so the outer check was purely redundant.
- The motivation (eliminate closure allocation on the hot path) is solid and aligns with the project's performance-first principle.
- All three methods (`Session`, `Assembly`, `Class`) are consistently updated.
Summary
The approach is correct; the missing piece is the corresponding overload in ThreadSafeDictionary. Once that's added, this is a clean, well-scoped optimization.
ThreadSafeDictionary.GetOrAdd already performs an internal fast-path TryGetValue, so the manual pre-checks before GetOrAdd were redundant. The static lambda + args approach from the reviewer's suggestion is not applicable here as ThreadSafeDictionary lacks that overload and the Core async methods are instance methods.
Code Review — Follow-up
The second commit addresses the concern from my previous review (missing 3-arg GetOrAdd overload in ThreadSafeDictionary). The PR has been correctly scoped down to just removing the redundant TryGetValue pre-checks.
What changed since last review
The static-lambda + args optimization has been dropped. The commit message explains why:
- `ThreadSafeDictionary` lacks the factory-arg overload
- The `Core` async methods are instance methods, preventing `static` lambda use
This is the right call — don't force an optimization that requires invasive infrastructure changes.
The remaining change is correct
Removing the manual TryGetValue pre-checks is valid. ThreadSafeDictionary.GetOrAdd already does a fast TryGetValue internally, so these three blocks were genuinely redundant:
```csharp
// Removed — GetOrAdd already does this internally
if (_firstTestInSessionTasks.TryGetValue("session", out var existingTask))
{
    return new ValueTask(existingTask);
}
```

No behavioral change, no race-condition risk introduced — the `Lazy<T>`-based once-only execution guarantee in `ThreadSafeDictionary` handles concurrent access correctly regardless.
Minor nit: PR title/description drift
The title still says "use GetOrAdd args overload to eliminate closure allocations" but the actual change is only the pre-check removal. The closure allocations remain (lambdas still capture context, cancellationToken, etc.). Worth updating the title to match, e.g. "perf: remove redundant TryGetValue pre-checks in event receiver methods" — the second commit message already has this right.
Verdict
LGTM with the above caveat. The code change itself is correct, safe, and well-scoped. Just update the PR title/description to accurately reflect what's actually in the diff.
…tic lambdas Add GetOrAdd<TArg>(key, Func<TKey, TArg, TValue>, TArg) overload to ThreadSafeDictionary, matching the ConcurrentDictionary API. Use it in EventReceiverOrchestrator with static lambdas passing context args and 'this' explicitly, eliminating closure allocations in the First* event receiver hot paths.
The TArg overload was delegating to GetOrAddSlow via a wrapping lambda, creating a closure. Inline the slow path directly so the factory args are passed through without an extra allocation, matching how ConcurrentDictionary implements the same overload.
Code Review — Third Pass
Previous concerns are now fully addressed. The PR is in good shape.
What was fixed since last review
Both prior issues are resolved:
- The missing `GetOrAdd<TArg>` overload on `ThreadSafeDictionary` has been added (commit `4f2afbee`).
- The static lambda + args pattern is back in use across all three event receiver methods.
Current implementation review
ThreadSafeDictionary.GetOrAdd<TArg> (lines 87–95)
The fast path (key already exists) is allocation-free: TryGetValue hits the lock-free read path, returns the cached value — no delegates, no tuples, no closures. This is the win.
The slow path at line 94 still wraps the args call:
```csharp
return GetOrAddSlow(key, k => func(k, arg));
```

This creates one closure that captures `func` (already a cached static delegate) and `arg` (the value-type tuple, copied in). `GetOrAddSlow` then creates a second closure for the `Lazy` constructor. So first insertion allocates two closures instead of one — a minor regression vs. the original. This is non-blocking: the slow path runs at most once per key (once per session, once per assembly, once per class), so the impact is negligible.
If you wanted to fully eliminate this, you could add a GetOrAddSlow<TArg> overload:
```csharp
private TValue GetOrAddSlow<TArg>(TKey key, Func<TKey, TArg, TValue> func, TArg arg)
{
    var newLazy = new Lazy<TValue>(() => func(key, arg), LazyThreadSafetyMode.ExecutionAndPublication);
    var winningLazy = _innerDictionary.GetOrAdd(key, newLazy);
    return winningLazy.Value;
}
```

That saves one allocation per first insertion. Probably not worth the complexity here, but worth knowing.
EventReceiverOrchestrator.cs
The tuple args (self: this, context, sessionContext, cancellationToken) are value types — passed on the stack, not heap-allocated. Combined with the static lambda (cached by the compiler as a static field), the hot path is genuinely closure-free.
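A sketch of the call-site shape described above (names follow the earlier snippets; `self` carries the instance so the lambda can stay `static`):

```csharp
// Hedged sketch, not the exact source. The ValueTuple
// (self, context, sessionContext, cancellationToken) lives on the stack;
// the static lambda compiles to a cached delegate, so the hot path
// (key already present) allocates nothing on the heap.
var task = _firstTestInSessionTasks.GetOrAdd(
    "session",
    static (_, args) => args.self.InvokeFirstTestInSessionEventReceiversCoreAsync(
        args.context, args.sessionContext, args.cancellationToken),
    (self: this, context, sessionContext, cancellationToken));
```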
Removing the redundant TryGetValue pre-checks is also correct — ThreadSafeDictionary.GetOrAdd already does a fast TryGetValue internally (line 79 of ThreadSafeDictionary.cs).
Verdict
LGTM. The hot-path optimization is real and correctly implemented. The slow-path double-closure is a non-issue at this usage scale. All three methods are consistently updated. The PR title now accurately reflects the actual change.
Code Review
Summary
Good refactor overall — the code simplification is welcome and the static lambda + value-type tuple approach is sound. A few observations worth considering.
ThreadSafeDictionary.GetOrAdd<TArg> — Closure is reduced, not eliminated
The PR description says "Eliminates closure allocation", but the new overload still creates a closure internally:
```csharp
var newLazy = new Lazy<TValue>(() => func(key, arg), LazyThreadSafetyMode.ExecutionAndPublication);
```

The anonymous lambda `() => func(key, arg)` captures `func`, `key`, and `arg`, generating a closure object on the heap. What the PR actually achieves is:
- Fast path (key exists): was 0 allocations before (the outer `TryGetValue` returned early before the caller's lambda line was reached), is still 0 allocations now (with a cheap stack-allocated ValueTuple). Equivalent.
- Slow path (first write): was 3 allocations (caller closure + `Lazy` + inner closure in `GetOrAddSlow`), is now 2 allocations (`Lazy` + inner closure in `GetOrAdd<TArg>`). Net saving: 1 heap alloc, but this path runs at most once per key.
The real value of this PR is code simplification (removing the redundant outer TryGetValue calls) and the cached static delegate (compiler emits a single static method with no per-call delegate allocation). That's still a genuine improvement — the description just oversells it slightly.
Inconsistent structure with existing GetOrAdd
The existing method separates fast and slow paths:
```csharp
public TValue GetOrAdd(TKey key, Func<TKey, TValue> func)
{
    if (_innerDictionary.TryGetValue(key, out var existingLazy)) // fast path
        return existingLazy.Value;

    return GetOrAddSlow(key, func); // slow path (separate method, easier for JIT to inline fast path)
}
```

The new overload inlines both paths into one public method. For consistency (and to help the JIT inline the fast path while keeping the slow path out of line), consider:
```csharp
public TValue GetOrAdd<TArg>(TKey key, Func<TKey, TArg, TValue> func, TArg arg)
{
    if (_innerDictionary.TryGetValue(key, out var existingLazy))
        return existingLazy.Value;

    return GetOrAddSlow(key, func, arg);
}

private TValue GetOrAddSlow<TArg>(TKey key, Func<TKey, TArg, TValue> func, TArg arg)
{
    var newLazy = new Lazy<TValue>(() => func(key, arg), LazyThreadSafetyMode.ExecutionAndPublication);
    var winningLazy = _innerDictionary.GetOrAdd(key, newLazy);
    return winningLazy.Value;
}
```

Missing XML documentation
Every other public method in ThreadSafeDictionary<TKey, TValue> has <summary> / <param> / <returns> XML doc comments. The new overload has none. Worth adding for consistency.
Call sites — correctness preserved ✅
The removal of the outer TryGetValue checks in EventReceiverOrchestrator is correct: the internal TryGetValue inside GetOrAdd<TArg> provides the same guard. The static lambdas with tuple args are idiomatic and clean.
Overall
The PR is a net improvement: cleaner call sites, one fewer heap allocation on the slow path, and no per-call delegate allocation from the static lambda. The inconsistent structure with the existing pattern and the missing docs are the only things worth fixing before merging.
…o reduce allocations

Lazy<T> with ExecutionAndPublication requires 3 heap allocations per new key:
- The Lazy<T> object itself
- A LazyHelper instance (internal .NET lock-management object, allocated at construction)
- A closure object capturing the factory args (unavoidable with Lazy<T>'s Func<T> API)

Replace with abstract LazyValue base + two sealed subclasses (LazyValueFromFunc, LazyValueWithArg<TArg>) that inline the factory and args as fields, using double-checked locking with a volatile int for thread safety. This reduces per-key slow-path allocations from 3 to 1.

Also use ConcurrentDictionary.GetOrAdd<TState> (static lambda + args) when inserting the LazyValue, so no closure is created at the call site either.

Additionally fix Remove() to not force factory initialization on entries that were never accessed — return default instead of calling lazy.Value when the value was never created.

Remove spurious [DynamicallyAccessedMembers(PublicParameterlessConstructor)] on TValue — there is no new TValue() in this class.
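A minimal sketch of the shape the commit describes, under the assumption that the names (`LazyValue`, `LazyValueWithArg<TArg>`, `_initialized`) match the description — the real implementation may differ in detail:

```csharp
// Hedged sketch of the LazyValue hierarchy described above.
// One allocation per new key: the LazyValueWithArg<...> instance itself.
internal abstract class LazyValue<TValue>
{
    private readonly object _lock = new();
    private volatile int _initialized; // 0 = not created, 1 = created
    private TValue? _value;

    public bool IsValueCreated => _initialized == 1;

    public TValue Value
    {
        get
        {
            if (_initialized == 1) // fast path: one volatile read, no lock
            {
                return _value!;
            }

            lock (_lock) // double-checked locking
            {
                if (_initialized == 0)
                {
                    _value = Create();  // runs at most once
                    _initialized = 1;   // published after the _value write
                }
            }

            return _value!;
        }
    }

    protected abstract TValue Create();
}

internal sealed class LazyValueWithArg<TKey, TValue, TArg> : LazyValue<TValue>
{
    private Func<TKey, TArg, TValue>? _factory;
    private TKey _key;
    private TArg _arg;

    public LazyValueWithArg(TKey key, Func<TKey, TArg, TValue> factory, TArg arg)
        => (_key, _factory, _arg) = (key, factory, arg);

    protected override TValue Create()
    {
        var value = _factory!(_key, _arg);
        _factory = null;   // null out fields so the delegate and args can be collected
        _key = default!;
        _arg = default!;
        return value;
    }
}
```

The fast path relies on release/acquire semantics: the write to `_value` happens before the volatile write `_initialized = 1` inside the lock, so a thread that observes `_initialized == 1` is guaranteed to see the fully written value.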
Code Review — Fifth Pass
All previous concerns have been addressed, and the PR has now evolved into a more ambitious refactor. The custom LazyValue hierarchy is the right call for this use case.
New LazyValue hierarchy — correct and well-reasoned
Replacing Lazy<TValue> with custom subclasses drops allocations per new key from 3 to 1:
| | Old (`Lazy<T>`) | New (`LazyValue`) |
|---|---|---|
| Allocations per new key | 3 (`Lazy<T>` + `LazyHelper` + closure) | 1 (`LazyValueFromFunc` or `LazyValueWithArg<TArg>`) |
| Hot path (key exists) | 0 | 0 |
The double-checked locking pattern in LazyValue.Value is correct:
- `_initialized` is `volatile`, so the read on the fast path acquires a memory barrier
- The write to `_value` happens before `_initialized = 1` inside the lock (release fence on lock exit)
- This guarantees that if the volatile read returns 1, the `_value` write is visible — no torn reads
Nulling out _factory/_key/_arg after Create() to allow GC is a thoughtful touch.
Concern: Remove() has a silent behavioral change
This is a meaningful semantic change. The old `Remove()` called `.Value` unconditionally, forcing initialization and always returning a value; the new version returns `default` when the entry existed but was never initialized. Any caller that relied on `Remove()` returning an initialized value (or triggering initialization) will now silently get `default` instead. Even if no current caller depends on this, it's worth documenting explicitly on the method — or alternatively, always call `.Value` on removal regardless of `IsValueCreated` (consistent with prior behaviour, and it avoids leaving the factory uncalled).
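Concretely, the new shape (per the commit description) is:

```csharp
// New Remove() behaviour: entries whose factory never ran are not force-initialized.
if (_innerDictionary.TryRemove(key, out var lazy) && lazy.IsValueCreated)
{
    return lazy.Value;
}
return default;
```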
Concern: factory may run multiple times under contention
The new GetOrAdd overloads use `ConcurrentDictionary.GetOrAdd(key, factory, arg)`. If two threads race on a new key, `ConcurrentDictionary` may invoke the factory on both threads — only one `LazyValue` wins and the other is discarded. The losing `LazyValue` is harmless: `Create()` only ever runs inside the `Value` getter, which nobody calls on the discarded instance, so the user-supplied factory never executes on the loser and the object is collected without side effects.
The old code pre-created the `Lazy<T>` once per calling thread and passed it as a value — which also meant two objects per racing thread, but the factory was guaranteed not to run on the loser. The new approach is equivalent in practice; just confirm you're comfortable that the factory only ever runs through the `Value` getter of the winning instance.
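The racing behaviour can be sketched like this (names assumed from the discussion above):

```csharp
// Under contention, ConcurrentDictionary.GetOrAdd may call the valueFactory on
// more than one thread, but only one LazyValue is stored; the loser is discarded.
// Since the user factory runs only inside LazyValue.Value (double-checked locked),
// and nobody reads Value on the discarded instance, it never double-executes.
var lazy = _innerDictionary.GetOrAdd(
    key,
    static (k, state) => new LazyValueWithArg<TKey, TValue, TArg>(k, state.func, state.arg),
    (func, arg));

return lazy.Value; // once-only execution is enforced here, not by the dictionary
```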
Minor: XML docs significantly reduced
Previous reviews flagged the new GetOrAdd<TArg> lacked docs; that's now partially addressed with a single-line comment. But this commit also strips the detailed <para> blocks, <remarks>, <example>, and most <param>/<returns> tags from ALL existing public methods. The project's public API surface loses documentation that may be valuable to library consumers. Consider whether this is intentional — if this is an internal type, it's fine; if it's public API, the reductions are significant.
EventReceiverOrchestrator.cs — LGTM
The static-lambda + ValueTuple args pattern is clean and correct. The (self: this, context, sessionContext, cancellationToken) ValueTuple is a value type — no heap allocation on the hot path. CancellationToken is a struct, the contexts are reference types stored inline in the struct. The static lambda is compiled to a cached static method/delegate — no per-call delegate allocation. All three methods are consistently updated.
Overall
The LazyValue refactor is solid and the allocation savings are real. The Remove() behavioral change and the documentation reduction are the main things to address before merging.
The custom implementation saved 2 allocations on a slow path that runs once per unique key - not worth the added complexity over a well-tested primitive. Keep the meaningful changes: GetOrAdd<TArg> overload, fix Remove() not forcing initialization, and remove spurious [DynamicallyAccessedMembers] on TValue.
Updated [TUnit.Core](https://github.com/thomhurst/TUnit) from 1.19.57 to 1.21.6.

<details>
<summary>Release notes</summary>

_Sourced from [TUnit.Core's releases](https://github.com/thomhurst/TUnit/releases)._

## 1.21.6

### Other Changes
* perf: replace object locks with Lock type for efficient synchronization by @thomhurst in thomhurst/TUnit#5219
* perf: parallelize test metadata collection for source-generated tests by @thomhurst in thomhurst/TUnit#5221
* perf: use GetOrAdd args overload to eliminate closure allocations in event receivers by @thomhurst in thomhurst/TUnit#5222
* perf: self-contained TestEntry<T> with consolidated switch invokers eliminates per-test JIT by @thomhurst in thomhurst/TUnit#5223

### Dependencies
* chore(deps): update tunit to 1.21.0 by @thomhurst in thomhurst/TUnit#5220

**Full Changelog**: thomhurst/TUnit@v1.21.0...v1.21.6

## 1.21.0

### Other Changes
* perf: reduce ConcurrentDictionary closure allocations in hot paths by @thomhurst in thomhurst/TUnit#5210
* perf: reduce async state machine overhead in test execution pipeline by @thomhurst in thomhurst/TUnit#5214
* perf: reduce allocations in EventReceiverOrchestrator and TestContextExtensions by @thomhurst in thomhurst/TUnit#5212
* perf: skip timeout machinery when no timeout configured by @thomhurst in thomhurst/TUnit#5211
* perf: reduce allocations and lock contention in ObjectTracker by @thomhurst in thomhurst/TUnit#5213
* Feat/numeric tolerance by @agray in thomhurst/TUnit#5110
* perf: remove unnecessary lock in ObjectTracker.TrackObjects by @thomhurst in thomhurst/TUnit#5217
* perf: eliminate async state machine in TestCoordinator.ExecuteTestAsync by @thomhurst in thomhurst/TUnit#5216
* perf: eliminate LINQ allocation in ObjectTracker.UntrackObjectsAsync by @thomhurst in thomhurst/TUnit#5215
* perf: consolidate module initializers into single .cctor via partial class by @thomhurst in thomhurst/TUnit#5218

### Dependencies
* chore(deps): update tunit to 1.20.0 by @thomhurst in thomhurst/TUnit#5205
* chore(deps): update dependency nunit3testadapter to 6.2.0 by @thomhurst in thomhurst/TUnit#5206
* chore(deps): update dependency cliwrap to 3.10.1 by @thomhurst in thomhurst/TUnit#5207

**Full Changelog**: thomhurst/TUnit@v1.20.0...v1.21.0

## 1.20.0

### Other Changes
* Fix inverted colors in HTML report ring chart due to locale-dependent decimal formatting by @Copilot in thomhurst/TUnit#5185
* Fix nullable warnings when using Member() on nullable properties by @Copilot in thomhurst/TUnit#5191
* Add CS8629 suppression and member access expression matching to IsNotNullAssertionSuppressor by @Copilot in thomhurst/TUnit#5201
* feat: add ConfigureAppHost hook to AspireFixture by @thomhurst in thomhurst/TUnit#5202
* Fix ConfigureTestConfiguration being invoked twice by @thomhurst in thomhurst/TUnit#5203
* Add IsEquivalentTo assertion for Memory<T> and ReadOnlyMemory<T> by @thomhurst in thomhurst/TUnit#5204

### Dependencies
* chore(deps): update dependency gitversion.tool to v6.6.2 by @thomhurst in thomhurst/TUnit#5181
* chore(deps): update dependency gitversion.msbuild to 6.6.2 by @thomhurst in thomhurst/TUnit#5180
* chore(deps): update tunit to 1.19.74 by @thomhurst in thomhurst/TUnit#5179
* chore(deps): update verify to 31.13.3 by @thomhurst in thomhurst/TUnit#5182
* chore(deps): update verify to 31.13.5 by @thomhurst in thomhurst/TUnit#5183
* chore(deps): update aspire to 13.1.3 by @thomhurst in thomhurst/TUnit#5189
* chore(deps): update dependency stackexchange.redis to 2.12.4 by @thomhurst in thomhurst/TUnit#5193
* chore(deps): update microsoft/setup-msbuild action to v3 by @thomhurst in thomhurst/TUnit#5197

**Full Changelog**: thomhurst/TUnit@v1.19.74...v1.20.0

## 1.19.74

### Other Changes
* feat: per-hook activity spans with method names by @thomhurst in thomhurst/TUnit#5159
* fix: add tooltip to truncated span names in HTML report by @thomhurst in thomhurst/TUnit#5164
* Use enum names instead of numeric values in test display names by @Copilot in thomhurst/TUnit#5178
* fix: resolve CS8920 when mocking interfaces whose members return static-abstract interfaces by @lucaxchaves in thomhurst/TUnit#5154

### Dependencies
* chore(deps): update tunit to 1.19.57 by @thomhurst in thomhurst/TUnit#5157
* chore(deps): update dependency gitversion.msbuild to 6.6.1 by @thomhurst in thomhurst/TUnit#5160
* chore(deps): update dependency gitversion.tool to v6.6.1 by @thomhurst in thomhurst/TUnit#5161
* chore(deps): update dependency polyfill to 9.20.0 by @thomhurst in thomhurst/TUnit#5163
* chore(deps): update dependency polyfill to 9.20.0 by @thomhurst in thomhurst/TUnit#5162
* chore(deps): update dependency polyfill to 9.21.0 by @thomhurst in thomhurst/TUnit#5166
* chore(deps): update dependency polyfill to 9.21.0 by @thomhurst in thomhurst/TUnit#5167
* chore(deps): update dependency polyfill to 9.22.0 by @thomhurst in thomhurst/TUnit#5168
* chore(deps): update dependency polyfill to 9.22.0 by @thomhurst in thomhurst/TUnit#5169
* chore(deps): update dependency coverlet.collector to 8.0.1 by @thomhurst in thomhurst/TUnit#5177

## New Contributors
* @lucaxchaves made their first contribution in thomhurst/TUnit#5154

**Full Changelog**: thomhurst/TUnit@v1.19.57...v1.19.74

Commits viewable in [compare view](thomhurst/TUnit@v1.19.57...v1.21.6).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Updated [TUnit](https://github.com/thomhurst/TUnit) from 1.19.57 to 1.21.6.

<details>
<summary>Release notes</summary>

_Sourced from [TUnit's releases](https://github.com/thomhurst/TUnit/releases)._

## 1.21.6

<!-- Release notes generated using configuration in .github/release.yml at v1.21.6 -->

## What's Changed

### Other Changes

* perf: replace object locks with Lock type for efficient synchronization by @thomhurst in thomhurst/TUnit#5219
* perf: parallelize test metadata collection for source-generated tests by @thomhurst in thomhurst/TUnit#5221
* perf: use GetOrAdd args overload to eliminate closure allocations in event receivers by @thomhurst in thomhurst/TUnit#5222
* perf: self-contained TestEntry<T> with consolidated switch invokers eliminates per-test JIT by @thomhurst in thomhurst/TUnit#5223

### Dependencies

* chore(deps): update tunit to 1.21.0 by @thomhurst in thomhurst/TUnit#5220

**Full Changelog**: thomhurst/TUnit@v1.21.0...v1.21.6

## 1.21.0

<!-- Release notes generated using configuration in .github/release.yml at v1.21.0 -->

## What's Changed

### Other Changes

* perf: reduce ConcurrentDictionary closure allocations in hot paths by @thomhurst in thomhurst/TUnit#5210
* perf: reduce async state machine overhead in test execution pipeline by @thomhurst in thomhurst/TUnit#5214
* perf: reduce allocations in EventReceiverOrchestrator and TestContextExtensions by @thomhurst in thomhurst/TUnit#5212
* perf: skip timeout machinery when no timeout configured by @thomhurst in thomhurst/TUnit#5211
* perf: reduce allocations and lock contention in ObjectTracker by @thomhurst in thomhurst/TUnit#5213
* Feat/numeric tolerance by @agray in thomhurst/TUnit#5110
* perf: remove unnecessary lock in ObjectTracker.TrackObjects by @thomhurst in thomhurst/TUnit#5217
* perf: eliminate async state machine in TestCoordinator.ExecuteTestAsync by @thomhurst in thomhurst/TUnit#5216
* perf: eliminate LINQ allocation in ObjectTracker.UntrackObjectsAsync by @thomhurst in thomhurst/TUnit#5215
* perf: consolidate module initializers into single .cctor via partial class by @thomhurst in thomhurst/TUnit#5218

### Dependencies

* chore(deps): update tunit to 1.20.0 by @thomhurst in thomhurst/TUnit#5205
* chore(deps): update dependency nunit3testadapter to 6.2.0 by @thomhurst in thomhurst/TUnit#5206
* chore(deps): update dependency cliwrap to 3.10.1 by @thomhurst in thomhurst/TUnit#5207

**Full Changelog**: thomhurst/TUnit@v1.20.0...v1.21.0

## 1.20.0

<!-- Release notes generated using configuration in .github/release.yml at v1.20.0 -->

## What's Changed

### Other Changes

* Fix inverted colors in HTML report ring chart due to locale-dependent decimal formatting by @Copilot in thomhurst/TUnit#5185
* Fix nullable warnings when using Member() on nullable properties by @Copilot in thomhurst/TUnit#5191
* Add CS8629 suppression and member access expression matching to IsNotNullAssertionSuppressor by @Copilot in thomhurst/TUnit#5201
* feat: add ConfigureAppHost hook to AspireFixture by @thomhurst in thomhurst/TUnit#5202
* Fix ConfigureTestConfiguration being invoked twice by @thomhurst in thomhurst/TUnit#5203
* Add IsEquivalentTo assertion for Memory<T> and ReadOnlyMemory<T> by @thomhurst in thomhurst/TUnit#5204

### Dependencies

* chore(deps): update dependency gitversion.tool to v6.6.2 by @thomhurst in thomhurst/TUnit#5181
* chore(deps): update dependency gitversion.msbuild to 6.6.2 by @thomhurst in thomhurst/TUnit#5180
* chore(deps): update tunit to 1.19.74 by @thomhurst in thomhurst/TUnit#5179
* chore(deps): update verify to 31.13.3 by @thomhurst in thomhurst/TUnit#5182
* chore(deps): update verify to 31.13.5 by @thomhurst in thomhurst/TUnit#5183
* chore(deps): update aspire to 13.1.3 by @thomhurst in thomhurst/TUnit#5189
* chore(deps): update dependency stackexchange.redis to 2.12.4 by @thomhurst in thomhurst/TUnit#5193
* chore(deps): update microsoft/setup-msbuild action to v3 by @thomhurst in thomhurst/TUnit#5197

**Full Changelog**: thomhurst/TUnit@v1.19.74...v1.20.0

## 1.19.74

<!-- Release notes generated using configuration in .github/release.yml at v1.19.74 -->

## What's Changed

### Other Changes

* feat: per-hook activity spans with method names by @thomhurst in thomhurst/TUnit#5159
* fix: add tooltip to truncated span names in HTML report by @thomhurst in thomhurst/TUnit#5164
* Use enum names instead of numeric values in test display names by @Copilot in thomhurst/TUnit#5178
* fix: resolve CS8920 when mocking interfaces whose members return static-abstract interfaces by @lucaxchaves in thomhurst/TUnit#5154

### Dependencies

* chore(deps): update tunit to 1.19.57 by @thomhurst in thomhurst/TUnit#5157
* chore(deps): update dependency gitversion.msbuild to 6.6.1 by @thomhurst in thomhurst/TUnit#5160
* chore(deps): update dependency gitversion.tool to v6.6.1 by @thomhurst in thomhurst/TUnit#5161
* chore(deps): update dependency polyfill to 9.20.0 by @thomhurst in thomhurst/TUnit#5163
* chore(deps): update dependency polyfill to 9.20.0 by @thomhurst in thomhurst/TUnit#5162
* chore(deps): update dependency polyfill to 9.21.0 by @thomhurst in thomhurst/TUnit#5166
* chore(deps): update dependency polyfill to 9.21.0 by @thomhurst in thomhurst/TUnit#5167
* chore(deps): update dependency polyfill to 9.22.0 by @thomhurst in thomhurst/TUnit#5168
* chore(deps): update dependency polyfill to 9.22.0 by @thomhurst in thomhurst/TUnit#5169
* chore(deps): update dependency coverlet.collector to 8.0.1 by @thomhurst in thomhurst/TUnit#5177

## New Contributors

* @lucaxchaves made their first contribution in thomhurst/TUnit#5154

**Full Changelog**: thomhurst/TUnit@v1.19.57...v1.19.74

Commits viewable in [compare view](thomhurst/TUnit@v1.19.57...v1.21.6).

</details>

[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself.
You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Summary
Suggested by @hartmair in #5210 (comment)
- Removed the redundant `TryGetValue` pre-checks before `GetOrAdd` in the three `First*` event receiver methods of `EventReceiverOrchestrator` — `ThreadSafeDictionary.GetOrAdd` already does this internally
- Added a `GetOrAdd<TArg>(key, Func<TKey, TArg, TValue>, TArg)` overload to `ThreadSafeDictionary` (mirrors the `ConcurrentDictionary` API) to allow callers to pass factory args explicitly, avoiding closure allocation at call sites
- Changed `Remove()` not to force factory initialization on entries that were never accessed
- Removed `[DynamicallyAccessedMembers(PublicParameterlessConstructor)]` on `TValue` — there is no `new TValue()` in this class

## Test Plan
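The overload and call-site pattern described above can be sketched as follows, based on the review discussion. The `Lazy`-wrapped `_innerDictionary`, the `GetOrAddSlow` helper, and the `_firstTestInSessionTasks` call site are assumptions taken from that discussion rather than verbatim repository code:

```csharp
// Factory-argument overload modeled on ConcurrentDictionary.GetOrAdd<TArg>.
// Fast path: an existing entry returns without allocating any closure.
// Slow path: the closure over factoryArgument is created only on first
// insertion, preserving the Lazy-based once-only execution guarantee.
public TValue GetOrAdd<TArg>(TKey key, Func<TKey, TArg, TValue> func, TArg factoryArgument)
{
    if (_innerDictionary.TryGetValue(key, out var existingLazy))
    {
        return existingLazy.Value;
    }

    return GetOrAddSlow(key, k => func(k, factoryArgument));
}

// Call-site pattern from the event receiver methods: a static lambda cannot
// capture locals, so the needed state is passed as an explicit tuple argument.
_firstTestInSessionTasks.GetOrAdd(
    "session",
    static (_, args) => InvokeFirstTestInSessionEventReceiversCoreAsync(
        args.context, args.sessionContext, args.cancellationToken),
    (context, sessionContext, cancellationToken));
```

Because the lambda is `static`, the compiler rejects any accidental capture, so the hot path stays allocation-free by construction.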