
chore: [M3-10715] - Bump Vite and update Vitest from v3 to v4#13119

Merged
bnussman-akamai merged 16 commits into linode:develop from bnussman-akamai:chore/update-vite-and-vitest
Nov 24, 2025

Conversation

@bnussman-akamai (Member) commented Nov 20, 2025

Description 📝

How to test 🧪

We can trust our CI to mostly test this, but to manually test, you can:

  • Verify local development works (pnpm dev)
  • Verify a production build works (pnpm run --filter linode-manager build)
Author Checklists

As an Author, to speed up the review process, I considered 🤔

👀 Doing a self review
❔ Our contribution guidelines
🤏 Splitting feature into small PRs
➕ Adding a changeset
🧪 Providing/improving test coverage
🔐 Removing all sensitive information from the code and PR description
🚩 Using a feature flag to protect the release
👣 Providing comprehensive reproduction steps
📑 Providing or updating our documentation
🕛 Scheduling a pair reviewing session
📱 Providing mobile support
♿ Providing accessibility support


  • I have read and considered all applicable items listed above.

As an Author, before moving this PR from Draft to Open, I confirmed ✅

  • All tests and CI checks are passing
  • TypeScript compilation succeeded without errors
  • Code passes all linting rules

@bnussman-akamai bnussman-akamai self-assigned this Nov 20, 2025
@bnussman-akamai bnussman-akamai requested a review from a team as a code owner November 20, 2025 16:05
@bnussman-akamai bnussman-akamai added Dependencies Pull requests that update a dependency file Security Pull requests that address a security vulnerability labels Nov 20, 2025
@bnussman-akamai bnussman-akamai removed the Security Pull requests that address a security vulnerability label Nov 20, 2025

it(
'should render client side validation errors for threshold and trigger occurences text field',
{ timeout: 10000 },
Contributor

Thanks @bnussman-akamai - reliance on timeouts can only lead to more flaky tests. Is there any better workaround? like awaiting?

Member Author

We may be able to drop the explicit timeout on some of the tests, but I'm keeping them for now to minimize any new flake. Vitest v4 already seems to be introducing some new flake that I don't know exactly how to solve right now.

Contributor

@abailly-akamai @bnussman-akamai I haven't looked at this particular case, but TBH I think a lot of our unit test CI flake is caused by performance issues stemming from JSDom + the parallelization of our tests where increasing the timeout really is the only solution in reach.

I'm only speculating because I don't know how to actually prove it, but here's an explanation of my reasoning that I shared with @HanaXu in Slack a few weeks back when she encountered a similar seemingly random unit test failure (#13015):

I'm not sure exactly what's going on, but usually when this happens (i.e. a unit test starts failing with a timeout in CI that passes reliably locally) increasing the test/assertion* timeout fixes it. I have a feeling it will here too.

I'm speculating, but I think when we run the tests in parallel on CI hardware, JSDom -- which is pretty slow and demanding -- is actually struggling to render some of our components before the Vitest assertions time out, and I think these failures can be triggered by seemingly random/unrelated changes to other tests that just so happen to impact the order/timing of test execution in a way that causes multiple particularly demanding tests to run at the same time when before they didn't, leading to these random but persistent timeouts.

One reason I think this is the case is that back when I was still on my Intel MacBook, I used to be able to reproduce these types of failures locally pretty consistently by running multiple specs at a time whereas running only the failing spec would tend to pass. Since upgrading to the M3, I haven't been able to reproduce any of these types of failures, so I'm pretty confident that performance is a factor in some way.
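
If parallelism pressure really is the cause, one mitigation is to tune it centrally rather than per test. The sketch below uses Vitest's documented `maxWorkers` and `testTimeout` config options, but the values are purely illustrative and are not this repo's actual settings:

```typescript
// vitest.config.ts — hedged sketch, illustrative values only
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Cap parallelism so demanding JSDom specs are less likely to
    // starve each other on CI hardware.
    maxWorkers: 2,
    // Raise the default per-test timeout once, instead of sprinkling
    // `{ timeout: 10000 }` on individual tests.
    testTimeout: 10_000,
  },
});
```

A central default keeps individual specs clean and makes it easy to experiment with CI-only values via environment checks.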

Contributor

Thanks - this makes sense to me. I'm less worried about these few instances than about the proliferation of them: the pattern could be overused by contributors once it starts spreading. But maybe that's a moot concern at this point.

@bnussman-akamai (Member Author) commented Nov 21, 2025

Very much acknowledge and agree with the observations on the performance issues. I experienced that first hand with my Intel Mac and we're seeing it on Akamai's internal CI runners. 😅

In the case of this PR, while the performance issues are likely a factor, I think there is more to the story. I'm seeing new flake that I haven't seen before, and I speculate it's a symptom of Vitest's new pool/threading implementation introduced in v4.

To address this new flake, I seem to be having some success with

  • Ensuring Promises like userEvent.click() are awaited
  • Mocking flags using our renderWithTheme wrapper options rather than module mocking
  • Fixing some circular imports (primarily in CloudPulse code)
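
The first bullet is the most mechanical of the three. Here is a framework-free sketch (not Cloud Manager code; the `click`/`clicked` names are illustrative) of why an unawaited async interaction flakes: the assertion can run before the simulated "click" has applied its state update.

```typescript
// Stand-in for a DOM interaction like userEvent.click(): the state
// update lands on a later task, not synchronously.
let clicked = false;

function click(): Promise<void> {
  return new Promise((resolve) =>
    setTimeout(() => {
      clicked = true;
      resolve();
    }, 0),
  );
}

async function withoutAwait(): Promise<boolean> {
  clicked = false;
  click(); // floating promise: `clicked` is still false below
  return clicked;
}

async function withAwait(): Promise<boolean> {
  clicked = false;
  await click(); // the update has definitely happened by now
  return clicked;
}
```

In this toy version the unawaited variant fails deterministically, but in a real suite the pending update can land at any point in the event loop, which is exactly the random-looking flake described above.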

@bnussman-akamai bnussman-akamai marked this pull request as draft November 20, 2025 21:00
@abailly-akamai (Contributor) left a comment

Still a draft, but approving since it seems stable - thanks!

@bnussman-akamai bnussman-akamai changed the title chore: [M3-10715] - Bump Vite and Vitest chore: [M3-10715] - Bump Vite and update Vitest from v3 to v4 Nov 21, 2025
);

- user.click(evaluationPeriodInput);
+ await userEvent.click(evaluationPeriodInput);
Member Author

Not awaiting some of these userEvent calls was causing some failures/flake. The fix was to just ensure userEvent calls were awaited.

Comment on lines -23 to -27
window.MutationObserver = vi.fn(() => ({
disconnect: disconnectMock,
observe: observeMock,
takeRecords,
}));
@bnussman-akamai (Member Author) commented Nov 21, 2025

Something changed with mocking in Vitest v4: MutationObserver is expected to be a class, so vi.fn is now causing issues here because it isn't a class.

I updated this test to stop mocking MutationObserver and instead just ensure scrollErrorIntoViewV2 does what it's expected to do.
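
For reference, a minimal sketch of the shape such a mock would need under v4. Everything here is illustrative (the PR removed the mock entirely rather than rewriting it): the key point is that the replacement must be constructable, because consumers call `new MutationObserver(cb)`.

```typescript
// Illustrative mock only — not the PR's code. A class satisfies `new`,
// unlike the plain vi.fn() factory that started failing under Vitest v4.
type MutationCb = (records: unknown[], observer: unknown) => void;

class MockMutationObserver {
  constructor(_callback: MutationCb) {}
  observe(_target: unknown, _options?: unknown): void {}
  disconnect(): void {}
  takeRecords(): unknown[] {
    return [];
  }
}

// `new` works here, which is what the stricter v4 behavior expects.
const observer = new MockMutationObserver(() => {});
```

That said, avoiding the mock and asserting on observable behavior, as the PR does, is the more robust fix.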

@@ -1,6 +1,5 @@
import { linodeFactory, nodeBalancerFactory } from '@linode/utilities';

import { transformDimensionValue } from '../../../Utils/utils';
Member Author

I had to move around numerous CloudPulse functions to avoid circular imports that were causing tests to intermittently fail.

No logic was changed, I just had to move things around.

Comment on lines -34 to -40
vi.mock('src/hooks/useFlags', () => {
const actual = vi.importActual('src/hooks/useFlags');
return {
...actual,
useFlags: queryMocks.useFlags,
};
});
Member Author

I could be completely wrong because these errors are hard to track down, but I think this style of mocking was allowing LaunchDarkly to leave some pending fetches that caused test failures.

I switched to using the flags option on our wrapWithTheme to ensure our LaunchDarkly provider is properly set up. This seems to be fixing the failures.
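
A framework-free sketch of that difference (the `renderWithOptions`/`flags` names below are illustrative stand-ins, not Cloud Manager's real API): passing test doubles through the wrapper's options exercises the real provider code path, whereas patching the module globally can leave the real provider half-initialized with work still pending.

```typescript
// Illustrative stand-ins for a render helper that accepts a flags option.
type Flags = Record<string, boolean>;

const defaultFlags: Flags = {}; // stand-in for the real flag source

// The wrapper threads the test's flags through the provider, so no
// module-level state is patched and nothing is left pending afterwards.
function renderWithOptions(
  component: (flags: Flags) => string,
  options?: { flags?: Flags },
): string {
  return component(options?.flags ?? defaultFlags);
}

const banner = (flags: Flags): string =>
  flags.newBanner ? 'new banner' : 'old banner';
```

Usage: `renderWithOptions(banner, { flags: { newBanner: true } })` renders the flagged-on variant without any `vi.mock` call.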


@bnussman-akamai bnussman-akamai marked this pull request as ready for review November 21, 2025 20:22
@jdamore-linode (Contributor) left a comment

Awesome, thanks @bnussman-akamai!

@github-project-automation github-project-automation bot moved this from Review to Approved in Cloud Manager Nov 24, 2025
@linode-gh-bot
Copy link
Collaborator

Cloud Manager UI test results

🔺 1 failing test on test run #16 ↗︎

❌ 1 Failing · ✅ 861 Passing · ↪️ 11 Skipped · 🕐 38m 59s

Details

Failing Tests
Spec: clone-linode.spec.ts
Test: Cloud Manager Cypress Tests → clone linode » can clone a Linode from Linode details page

Troubleshooting

Use this command to re-run the failing tests:

pnpm cy:run -s "cypress/e2e/core/linodes/clone-linode.spec.ts"

@bnussman-akamai bnussman-akamai merged commit d699296 into linode:develop Nov 24, 2025
34 of 35 checks passed
@github-project-automation github-project-automation bot moved this from Approved to Merged in Cloud Manager Nov 24, 2025

Labels

Dependencies Pull requests that update a dependency file
