chore: [M3-10715] - Bump Vite and update Vitest from v3 to v4 #13119
bnussman-akamai merged 16 commits into linode:develop

Conversation
```ts
it(
  'should render client side validation errors for threshold and trigger occurences text field',
  { timeout: 10000 },
```
Thanks @bnussman-akamai - reliance on timeouts can only lead to more flaky tests. Is there a better workaround, like awaiting?
We may be able to drop the explicit timeout on some of the tests, but I'm keeping them for now to try to minimize any new flake. Vitest v4 already seems to be introducing some new flake that I don't know exactly how to solve right now.
@abailly-akamai @bnussman-akamai I haven't looked at this particular case, but TBH I think a lot of our unit test CI flake is caused by performance issues stemming from JSDom + the parallelization of our tests where increasing the timeout really is the only solution in reach.
I'm only speculating because I don't know how to actually prove it, but here's an explanation of my reasoning that I shared with @HanaXu in Slack a few weeks back when she encountered a similar seemingly random unit test failure (#13015):
I'm not sure exactly what's going on, but usually when this happens (i.e. a unit test starts failing with a timeout in CI that passes reliably locally) increasing the test/assertion* timeout fixes it. I have a feeling it will here too.
I'm speculating, but I think that when we run the tests in parallel on CI hardware, JSDom (which is pretty slow and demanding) is actually struggling to render some of our components before the Vitest assertions time out. These failures can be triggered by seemingly random/unrelated changes to other tests that happen to affect the order/timing of test execution, causing multiple particularly demanding tests to run at the same time when they didn't before, leading to these random but persistent timeouts.
One reason I think this is the case is that back when I was still on my Intel MacBook, I used to be able to reproduce these types of failures locally pretty consistently by running multiple specs at a time whereas running only the failing spec would tend to pass. Since upgrading to the M3, I haven't been able to reproduce any of these types of failures, so I'm pretty confident that performance is a factor in some way.
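The knob being discussed here is Vitest's per-test timeout, which can also be raised globally instead of per-`it`. A hedged sketch of what that looks like in a `vitest.config.ts` (the values are illustrative, not the repo's actual config):

```typescript
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Raise the default 5s per-test timeout to absorb slow
    // JSDom renders when many specs run in parallel on CI.
    testTimeout: 10_000,
  },
});
```

A per-`it` timeout (as in this PR's diff) overrides the global value only for the tests that need it, which keeps faster tests failing quickly.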
Thanks - this makes sense to me. I am less worried about these few instances than I am about their proliferation: timeouts could be overused by contributors once the pattern starts spreading. But maybe that's a moot concern at this point.
I very much acknowledge and agree with the observations on the performance issues. I experienced that first hand with my Intel Mac, and we're seeing it on Akamai's internal CI runners. 😅

In the case of this PR, while the performance issues are likely a factor in play, I think there is more to the story. I'm seeing new flake that I haven't seen before, and I'm speculating it's a symptom of Vitest's new pool/threading implementation that was introduced in v4.
To address this new flake, I seem to be having some success with:

- Ensuring Promises like `userEvent.click()` are awaited
- Mocking flags using our `renderWithTheme` wrapper options rather than module mocking
- Fixing some circular imports (primarily in CloudPulse code)
abailly-akamai left a comment
Still a draft, but approving since it seems stable - thanks!
```diff
  );
- user.click(evaluationPeriodInput);
+ await userEvent.click(evaluationPeriodInput);
```
Not awaiting some of these userEvent calls was causing failures/flake. The fix was simply to ensure all userEvent calls are awaited.
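The un-awaited-promise failure mode can be sketched without Testing Library at all (`click`, `flakyTest`, and `stableTest` below are illustrative stand-ins, not the PR's code): an assertion that runs before the async interaction settles sees stale state.

```typescript
// Simulates an async interaction like userEvent.click():
// the state update lands after at least one microtask.
async function click(state: { clicked: boolean }) {
  await Promise.resolve(); // simulate async event dispatch
  state.clicked = true;
}

async function flakyTest(): Promise<boolean> {
  const state = { clicked: false };
  click(state); // not awaited: the "assertion" below runs too early
  return state.clicked; // still false here
}

async function stableTest(): Promise<boolean> {
  const state = { clicked: false };
  await click(state); // awaited: the update is guaranteed complete
  return state.clicked; // true
}
```

In a real component test the timing gap is wider and load-dependent, which is why the failure appears only intermittently on busy CI runners.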
```ts
window.MutationObserver = vi.fn(() => ({
  disconnect: disconnectMock,
  observe: observeMock,
  takeRecords,
}));
```
Something changed with mocking in Vitest v4: MutationObserver is expected to be a class, so vi.fn now causes issues here because the mock it returns isn't a class.

I updated this test to stop mocking MutationObserver and instead assert that scrollErrorIntoViewV2 does what it's expected to do.
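The underlying constructibility issue can be sketched in plain TypeScript (`factoryMock` and `MutationObserverStub` are illustrative names, not from the PR): an arrow-function factory cannot be invoked with `new`, while a class-based stub can.

```typescript
// An arrow-function factory, roughly analogous to the vi.fn mock above:
const factoryMock = () => ({
  observe() {},
  disconnect() {},
  takeRecords: () => [],
});

let constructible = true;
try {
  // Simulates environment code doing `new window.MutationObserver(cb)`:
  new (factoryMock as any)();
} catch {
  constructible = false; // TypeError: factoryMock is not a constructor
}

// A class-based stub satisfies `new`:
class MutationObserverStub {
  constructor(private callback: (...args: unknown[]) => void) {}
  observe() {}
  disconnect() {}
  takeRecords(): unknown[] {
    return [];
  }
}

const observer = new MutationObserverStub(() => {});
```

So if a test genuinely needs to stub MutationObserver rather than avoid the mock entirely, a small class like this is one workable shape.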
```diff
@@ -1,6 +1,5 @@
 import { linodeFactory, nodeBalancerFactory } from '@linode/utilities';

 import { transformDimensionValue } from '../../../Utils/utils';
```
I had to move numerous CloudPulse functions around to avoid circular imports that were causing tests to fail intermittently. No logic was changed; I just had to move things around.
```ts
vi.mock('src/hooks/useFlags', () => {
  const actual = vi.importActual('src/hooks/useFlags');
  return {
    ...actual,
    useFlags: queryMocks.useFlags,
  };
});
```
I could be completely wrong because these errors are hard to track down, but I think this style of mocking was allowing LaunchDarkly to leave some pending fetches that caused test failures.

I switched to using the flags option on our wrapWithTheme to ensure our LaunchDarkly provider is properly set up. This seems to be fixing the failures.
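The general shape of option-based flag injection can be sketched like this (`Flags`, `defaultFlags`, and `buildTestFlags` are illustrative stand-ins, not the repo's actual helpers): instead of module-mocking a hook, the test wrapper merges per-test overrides over defaults, so the real provider stays wired up.

```typescript
type Flags = Record<string, boolean>;

// Defaults the wrapper would supply when a test passes nothing:
const defaultFlags: Flags = { exampleFlag: false };

// Stand-in for a wrapper option like wrapWithTheme({ flags }):
function buildTestFlags(overrides: Partial<Flags> = {}): Flags {
  return { ...defaultFlags, ...overrides };
}

// A test overrides only the flags it cares about:
const flags = buildTestFlags({ exampleFlag: true });
```

Because the provider is still rendered normally, there is no half-mocked module left holding pending fetches when the test tears down.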
jdamore-linode left a comment
Awesome, thanks @bnussman-akamai!
Cloud Manager UI test results

🔺 1 failing test on test run #16

Troubleshooting: use this command to re-run the failing tests:

```
pnpm cy:run -s "cypress/e2e/core/linodes/clone-linode.spec.ts"
```
Description 📝
How to test 🧪
We can trust our CI to mostly test this, but to manually test, you can:
- `pnpm dev`
- `pnpm run --filter linode-manager build`

Author Checklists
As an Author, to speed up the review process, I considered 🤔
👀 Doing a self review
❔ Our contribution guidelines
🤏 Splitting feature into small PRs
➕ Adding a changeset
🧪 Providing/improving test coverage
🔐 Removing all sensitive information from the code and PR description
🚩 Using a feature flag to protect the release
👣 Providing comprehensive reproduction steps
📑 Providing or updating our documentation
🕛 Scheduling a pair reviewing session
📱 Providing mobile support
♿ Providing accessibility support
As an Author, before moving this PR from Draft to Open, I confirmed ✅