Try: add Playwright tests #7
Conversation
⛔ Review skipped: review was skipped due to path filters (6 files ignored). CodeRabbit blocks several paths by default; you can override this behavior by explicitly including those paths in the path filters.
📝 Walkthrough

This pull request adds a comprehensive Playwright-based testing infrastructure for visual regression testing. It includes CI/CD workflow configuration, Playwright test setup with mock Google Maps API support, test utilities, and a suite of visual regression tests covering multiple UI states and responsive layouts.
Sequence Diagram

```mermaid
sequenceDiagram
    participant GitHub as GitHub Actions
    participant Node as Node.js
    participant Playwright as Playwright
    participant Server as PHP Server
    participant Browser as Browser
    participant TestPage as test-page.html
    GitHub->>Node: Run npm ci
    GitHub->>Node: Run npx playwright install
    GitHub->>Server: Start PHP -S localhost:8888
    GitHub->>Playwright: Execute playwright test
    Playwright->>Browser: Launch Chromium
    Browser->>Server: GET /tests/test-page.html
    Server->>TestPage: Serve test harness
    TestPage->>Browser: Load with mocked google.maps API
    Playwright->>Browser: Navigate to test page
    Playwright->>Browser: Wait for network idle
    Playwright->>Browser: Capture screenshot (normal state)
    Playwright->>Browser: Click "Show Loading State"
    Playwright->>Browser: Wait for loading text
    Playwright->>Browser: Capture screenshot (loading)
    Playwright->>Browser: Click "Show Error State"
    Playwright->>Browser: Wait for error text
    Playwright->>Browser: Capture screenshot (error)
    Playwright->>Browser: Set mobile viewport
    Playwright->>Browser: Capture screenshot (mobile)
    Playwright->>Browser: Generate test results & artifacts
    GitHub->>GitHub: Upload playwright-report & build artifacts
```

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Summary of Changes
Hello @adamsilverstein, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces Playwright as a new end-to-end testing framework for the project. It sets up a comprehensive test suite including visual regression, component, and WordPress integration tests, along with a custom test runner and necessary configurations.
Highlights
- Playwright Integration: Adds Playwright as a development dependency and configures it for cross-browser testing (Chromium, Firefox, WebKit, mobile viewports).
- Comprehensive Test Suite: Establishes a structured `tests/` directory containing dedicated suites for visual regression, component functionality, and WordPress integration.
- Custom Test Runner: Introduces `scripts/run-tests.js` to provide a convenient command-line interface for executing various types of Playwright tests, including options for visual-only, headed mode, and snapshot updates.
- Testing Utilities: Includes helper functions for mocking APIs (like Google Maps), managing network responses, and taking consistent screenshots, facilitating robust test development.
- Environment Setup: Updates `.gitignore` to properly exclude test artifacts, build outputs, and common development environment files, ensuring a clean repository.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., commenting on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request introduces a comprehensive Playwright test suite, which is a fantastic addition for ensuring the plugin's quality. The structure, including different test types (component, integration, visual), helper scripts, and good documentation, is well-thought-out. However, there are several critical issues related to the test environment setup and the use of anti-patterns like fixed timeouts (waitForTimeout) that could lead to flaky and unreliable tests. The web server configuration seems incorrect for a WordPress environment, and the global setup script doesn't properly handle the server not being ready. Addressing these foundational issues will make the test suite robust and reliable.
```js
webServer: {
  command: 'php -S localhost:8888',
  url: 'http://localhost:8888',
  reuseExistingServer: !process.env.CI,
  timeout: 120 * 1000,
},
```
The webServer command php -S localhost:8888 is not suitable for running a WordPress site, as it serves from the project root directory. A WordPress installation requires a specific document root and a database connection to function correctly. This configuration will likely cause the tests to fail in a CI environment where a pre-existing server isn't available.
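One way to make this viable in CI (a sketch; the docroot path and the `@wordpress/env` alternative are assumptions, not taken from this PR) is to serve an actual WordPress installation rather than the project root:

```javascript
// playwright.config.js (excerpt) — hypothetical paths, adjust to the real install
webServer: {
  // Point PHP's built-in server at a WordPress docroot via -t...
  command: 'php -S localhost:8888 -t /path/to/wordpress',
  // ...or delegate environment setup entirely to wp-env:
  // command: 'npx wp-env start',
  url: 'http://localhost:8888',
  reuseExistingServer: !process.env.CI,
  timeout: 120 * 1000,
},
```

Either way the database still has to exist; `wp-env` handles that for you, while plain `php -S` does not.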
```js
try {
  // Wait for the local server to be ready
  await page.goto('http://localhost:8888');
  console.log('WordPress server is accessible');
} catch (error) {
  console.log('WordPress server not ready yet, continuing...');
} finally {
  await browser.close();
}
```
The setup script continues even if the web server is not accessible. This will cause tests to fail with a generic navigation timeout error, which can be confusing to debug. The setup should either wait for the server with a retry mechanism or fail fast with a clear error message if the server doesn't start within a reasonable time.
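A fail-fast polling helper along these lines could live in the global setup (a sketch; `waitForServer`, its defaults, and the injectable `probe` are illustrative, not part of this PR):

```javascript
// Poll until the server answers, then resolve with the attempt number;
// throw a clear error after maxAttempts so setup fails fast.
// `probe` defaults to a fetch() HEAD request but is injectable for testing.
async function waitForServer(url, { maxAttempts = 30, delayMs = 1000, probe } = {}) {
  const check = probe || (async (u) => {
    const res = await fetch(u, { method: 'HEAD' });
    return res.ok;
  });
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      if (await check(url)) return attempt;
    } catch (err) {
      // Connection refused etc. — fall through and retry.
    }
    if (attempt < maxAttempts) await new Promise((r) => setTimeout(r, delayMs));
  }
  throw new Error(`Server at ${url} not ready after ${maxAttempts} attempts`);
}

module.exports = { waitForServer };
```

Calling this from `global-setup.js` before launching the browser would surface a readable "server not ready" error instead of a generic navigation timeout.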
```js
}, { timeout });

// Additional wait for map to be fully rendered
await page.waitForTimeout(2000);
```
The helper function waitForGoogleMaps uses page.waitForTimeout(), which is a fixed wait. This is an anti-pattern that can lead to flaky tests. The wait should be for a specific element or state, not a fixed duration.
```js
// Wait for a specific element within the map to ensure it's rendered, e.g., a map tile.
await page.waitForSelector('.gm-style > div > div > div:nth-child(1)', { state: 'visible', timeout });
```

```js
});

// Map should not exceed 100% width
expect(styles.width).toBe('100%');
```
This assertion is incorrect because styles.width will be a computed pixel value (e.g., 1200px), not the string '100%'. The test will fail as it's comparing a pixel value to a percentage string.
```js
// A better approach is to check that the element's width is not zero or is within an expected range.
expect(parseInt(styles.width, 10)).toBeGreaterThan(0);
```

```js
headed: args.includes('--headed') || args.includes('-h'),
update: args.includes('--update') || args.includes('-u'),
browser: args.find(arg => arg.startsWith('--browser='))?.split('=')[1] || 'chromium',
help: args.includes('--help') || args.includes('-h')
```
```diff
@@ -0,0 +1,196 @@
+<!DOCTYPE html>
```
This file appears to be a well-structured asset for isolated component testing, with mocks for the Google Maps API and controls for different states. However, it seems to be unused in the current test suite, as the tests navigate to the site root (/) instead. This can be confusing for future maintenance.
Add a standalone test page that mocks the map component in its various states (normal, loading, error), along with Playwright visual regression tests that drive the page through each state and capture screenshots. Tests cover:

- Normal state layout
- Loading state
- Error state
- Mobile, tablet, and desktop viewports

Includes a CI workflow that runs tests and builds the plugin on push/PR.
75666b3 to 537ad9c (Compare)
Actionable comments posted: 5
🧹 Nitpick comments (3)
.gitignore (1)
Lines 20-27: Duplicate ignore entry for `wp-content/uploads/`. This is harmless but can be deduplicated to keep the ignore list tidy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `.gitignore` around lines 20-27: the file contains a duplicate entry `wp-content/uploads/`; remove the redundant line so `wp-content/uploads/` appears only once in the file (locate the duplicate string in the shown diff and delete the extra occurrence), leaving other wp-content entries unchanged.

package.json (1)
Lines 10-17: Consider routing Playwright scripts through `test:runner` to avoid command drift. You already introduced `scripts/run-tests.js`; wiring the mode scripts through it keeps one source of truth for Playwright flags.

♻️ Suggested script consolidation
```diff
-    "test": "playwright test",
-    "test:ui": "playwright test --ui",
-    "test:headed": "playwright test --headed",
-    "test:visual": "playwright test --grep @visual",
-    "test:update-snapshots": "playwright test --update-snapshots",
+    "test": "node scripts/run-tests.js",
+    "test:ui": "playwright test --ui",
+    "test:headed": "node scripts/run-tests.js --headed",
+    "test:visual": "node scripts/run-tests.js --visual",
+    "test:update-snapshots": "node scripts/run-tests.js --update",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `package.json` around lines 10-17: the various Playwright npm scripts ("test", "test:ui", "test:headed", "test:visual", "test:update-snapshots") should be routed through the single entry `test:runner` (`node scripts/run-tests.js`) so flags and behavior are centralized; update each of those script entries to invoke `node scripts/run-tests.js` with a mode or flag (e.g., --mode=ui, --headed, --grep=@visual, --update-snapshots) and ensure `scripts/run-tests.js` parses those flags and maps them to the appropriate Playwright CLI options, leaving `test:runner` as the canonical runner.

playwright.config.js (1)
Lines 7-8: `global-setup.js` and `global-teardown.js` are currently not wired into the Playwright config. These files exist but are not registered in `defineConfig`, leaving them as unused test harness code. Wire them into the configuration or remove them.

Option 1: Direct configuration (simpler)

```diff
 module.exports = defineConfig({
+  globalSetup: require.resolve('./tests/global-setup.js'),
+  globalTeardown: require.resolve('./tests/global-teardown.js'),
   testDir: './tests',
```

Option 2: Project dependencies (recommended for feature completeness)

Convert the setup/teardown to separate projects with `dependencies` instead. See the Playwright docs.

Also applies to: 30-35
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@playwright.config.js` around lines 7 - 8, The Playwright config (defineConfig) is missing wiring for the existing global-setup.js and global-teardown.js files; update the configuration by adding globalSetup and globalTeardown properties referencing "global-setup.js" and "global-teardown.js" (or alternatively convert them into separate projects with dependencies using the Playwright projects/dependencies API) so the functions exported by those files are executed; look for the defineConfig call and add the globalSetup/globalTeardown keys (or refactor to projects with dependencies) to register the setup/teardown logic.
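For reference, Option 2 could look roughly like this (a sketch based on Playwright's project-dependencies API; the project names and `testMatch` patterns are assumptions, not taken from this repo):

```javascript
// playwright.config.js — setup/teardown as dependent projects (sketch)
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  testDir: './tests',
  projects: [
    // The 'setup' project runs first; its paired 'teardown' project runs
    // after every project that depends on 'setup' has finished.
    { name: 'setup', testMatch: /global\.setup\.js/, teardown: 'teardown' },
    { name: 'teardown', testMatch: /global\.teardown\.js/ },
    {
      name: 'chromium',
      use: { browserName: 'chromium' },
      dependencies: ['setup'],
    },
  ],
});
```

This approach shows setup/teardown in the HTML report and traces, which the plain `globalSetup` hook does not.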
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/run-tests.js`:
- Around line 55-57: The close handler currently calls process.exit(code) and
loses failure semantics when Playwright exits via signal (code is null); update
the handler registered on testProcess (testProcess.on('close', ...)) to accept
both (code, signal) and, if signal is set, forward it to the current process
(e.g., process.kill(process.pid, signal)), otherwise call process.exit(code ||
1) to ensure non-zero exit on child failure; update uses of process.exit and add
process.kill as described.
In `@tests/test-page.html`:
- Around line 127-137: The loading spinner in setLoadingState uses a CSS
animation ("animation: spin 1s linear infinite") which causes non-deterministic
visual snapshots; update the inline markup generated in setLoadingState (and the
similar block at lines ~182-190) to remove or disable the animation (e.g.,
remove the animation property or set animation: none / override with
prefers-reduced-motion-safe static styling) so snapshots are deterministic while
keeping the visible spinner shape and text unchanged; reference the spinner
element created in setLoadingState and the corresponding spinner markup in the
other block when making the change.
- Around line 167-180: The three manual helpers (setMobileView, setTabletView,
setDesktopView) only change document.body styles and do not affect
window.innerWidth or media queries; update them to attempt a real viewport
change by calling window.resizeTo(width, height) when available (e.g., call
window.resizeTo(375, window.outerHeight) in setMobileView, 768 for
setTabletView, and use a large width or window.screen.width for setDesktopView),
and fall back to the existing body.style changes if resizeTo is blocked; also
add a visible note on the test page stating "These buttons only simulate width
constraints; use browser DevTools or automated tests (page.setViewportSize) for
true responsive testing" so developers understand the limitation.
- Around line 99-123: Change MockMap's constructor signature to match the real
API by accepting (element, options) instead of (options); update MockMap to
store the passed element (e.g., this.element = element) while keeping setCenter,
setZoom, and fitBounds as no-ops. Ensure the AdvancedMarkerElement mock remains
without a setMap method and continues to expose this.map = null (and addListener
as no-op) so consumers can set marker.map directly; reference MockMap,
MockLatLngBounds, and marker.AdvancedMarkerElement when making these edits.
In `@tests/visual-regression.spec.js`:
- Around line 11-50: Tests are failing because baseline screenshots referenced
by the visual tests in tests/visual-regression.spec.js (tests like "@visual -
Normal state layout", "@visual - Loading state", etc.) are missing; run the
Playwright visual tests in update mode on a Linux/CI-like environment to
generate deterministic baselines (e.g. HEADLESS=1 and consistent viewport) by
executing the test runner with snapshot update enabled (Playwright:
PLAYWRIGHT_UPDATE_SNAPSHOTS=1 or playwright test --update-snapshots), verify the
generated PNGs appear under tests/visual-regression.spec.js-snapshots/, and
commit those snapshot files so the toHaveScreenshot(...) assertions have their
baseline images.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 76dc30c2-0cfa-45dd-85c6-d7b04b1380fb
⛔ Files ignored due to path filters (7)
- `package-lock.json` is excluded by `!**/package-lock.json`
- `tests/visual-regression.spec.js-snapshots/desktop-viewport-chromium-darwin.png` is excluded by `!**/*.png`
- `tests/visual-regression.spec.js-snapshots/error-state-chromium-darwin.png` is excluded by `!**/*.png`
- `tests/visual-regression.spec.js-snapshots/loading-state-chromium-darwin.png` is excluded by `!**/*.png`
- `tests/visual-regression.spec.js-snapshots/mobile-viewport-chromium-darwin.png` is excluded by `!**/*.png`
- `tests/visual-regression.spec.js-snapshots/normal-state-chromium-darwin.png` is excluded by `!**/*.png`
- `tests/visual-regression.spec.js-snapshots/tablet-viewport-chromium-darwin.png` is excluded by `!**/*.png`
📒 Files selected for processing (9)
- `.github/workflows/test.yml`
- `.gitignore`
- `package.json`
- `playwright.config.js`
- `scripts/run-tests.js`
- `tests/global-setup.js`
- `tests/global-teardown.js`
- `tests/test-page.html`
- `tests/visual-regression.spec.js`
```js
testProcess.on('close', (code) => {
  process.exit(code);
});
```
🧩 Analysis chain
🌐 Web query:
In Node.js child_process, for the "close" event, can "code" be null when a process is terminated by a signal, and what is the recommended way to propagate that signal/exit status in wrapper CLIs?
💡 Result:
Yes, in Node.js child_process, for the "close" event (as well as the "exit" event), the "code" argument can be null when a process is terminated by a signal. In this case, the "signal" argument will contain the name of the signal (e.g., 'SIGTERM'), and "code" will be null. One of the two will always be non-null. Official Node.js documentation confirms this behavior for both events: - 'close' event: "code: The exit code if the child exited on its own. signal: The signal by which the child process was terminated." - 'exit' event: "If the process exited, code is the final exit code of the process, otherwise null. If the process terminated due to receipt of a signal, signal is the string name of the signal, otherwise null. One of the two will always be non-null." For wrapper CLIs, the recommended way to propagate the signal/exit status to the parent shell or supervisor is to check the "close" or "exit" event handler. If code is a number, use process.exit(code). If signal is present (code is null), either: 1. Forward the same signal to the current process using process.kill(process.pid, signal). This preserves the exact signal for the wrapper process. 2. Convert the signal to the corresponding POSIX exit code (128 + signal number) using util.convertProcessSignalToExitCode(signal) (available since Node.js v24+), then process.exit(thatCode). This is explicitly recommended in recent docs for cases needing an exit code representation. 
Example for a wrapper CLI:

```js
const { spawn } = require('node:child_process');
const child = spawn('your-command', args, { stdio: 'inherit' });
child.on('close', (code, signal) => {
  if (signal) {
    // Option 1: Forward signal (preserves signal exactly)
    process.kill(process.pid, signal);
    // Or Option 2: Convert to POSIX exit code (Node 24+)
    // const { convertProcessSignalToExitCode } = require('node:util');
    // process.exit(convertProcessSignalToExitCode(signal));
  } else {
    process.exit(code);
  }
});
```

Using `stdio: 'inherit'` ensures stdout/stderr are shared directly. Libraries like spawn-wrap and child-process-ext follow similar patterns, propagating signals or converting to non-zero exit codes. This ensures tools like supervisors (e.g., systemd) receive the correct termination info.
Citations:
- 1: https://nodejs.org/dist/latest-v19.x/docs/api/child_process.html
- 2: https://nodejs.org/api/child_process.html
- 3: https://nodejs.org/docs/latest/api/child_process.html
- 4: https://nodejs.org/docs/v22.11.0/api/child_process.html
- 5: https://beta.docs.nodejs.org/child_process.html
- 6: nodejs/node@8e900af6ba
- 7: util: process signal to exit code utility nodejs/node#60963
🏁 Script executed:
```shell
# Verify the file exists and check the code at the specified lines
if [ -f "scripts/run-tests.js" ]; then
  echo "=== File found: scripts/run-tests.js ==="
  wc -l scripts/run-tests.js
  echo ""
  echo "=== Lines 50-60 (context around 55-57) ==="
  sed -n '50,60p' scripts/run-tests.js | cat -n
else
  echo "File not found at scripts/run-tests.js"
fi
```

Repository: adamsilverstein/icemap
Length of output: 523
🏁 Script executed:
```shell
# Search for any other close event handlers or signal handling in the file
echo "=== Searching for 'close' event handlers ==="
rg "\.on\(['\"]close" scripts/run-tests.js -A 3 -B 1
echo ""
echo "=== Searching for 'signal' handling ==="
rg "signal" scripts/run-tests.js -i
```

Repository: adamsilverstein/icemap
Length of output: 215
Preserve failure semantics when child exits via signal.
When Playwright is terminated by a signal, `code` is `null` and `signal` contains the signal name. Calling `process.exit(code)` with `null` exits with code 0 (success), masking the interruption. Forward the signal explicitly or convert it to an appropriate exit code.
🩹 Suggested close-handler fix

```diff
-testProcess.on('close', (code) => {
-  process.exit(code);
+testProcess.on('close', (code, signal) => {
+  if (signal) {
+    process.kill(process.pid, signal);
+    return;
+  }
+  process.exit(code ?? 1);
 });
```
```js
window.google = {
  maps: {
    Map: class MockMap {
      constructor(options) {
        this.options = options;
        this.setCenter = () => {};
        this.setZoom = () => {};
        this.fitBounds = () => {};
      }
    },
    LatLngBounds: class MockLatLngBounds {
      constructor() {
        this.extend = () => {};
      }
    },
    marker: {
      AdvancedMarkerElement: class MockAdvancedMarker {
        constructor(options) {
          this.options = options;
          this.addListener = () => {};
          this.map = null;
        }
      }
    }
  }
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify current repository usage against mock coverage.
rg -n -C3 'new\s+google\.maps\.Map\s*\(' --type=js --type=ts
rg -n -C3 'google\.maps\.marker\.AdvancedMarkerElement' --type=js --type=ts
rg -n -C3 '\.setMap\s*\(' --type=js --type=ts
```

Repository: adamsilverstein/icemap
Length of output: 972
🏁 Script executed:
```shell
#!/bin/bash
# First, find and examine the test file
fd test-page.html
```

Repository: adamsilverstein/icemap
Length of output: 86
🏁 Script executed:
```shell
#!/bin/bash
# Examine the test file around the mock definitions
cat -n tests/test-page.html | sed -n '99,150p'
```

Repository: adamsilverstein/icemap
Length of output: 2830
🏁 Script executed:
```shell
#!/bin/bash
# Search for any Map instantiation pattern more broadly
rg -n 'maps\.Map|google\.maps\.Map' --type=js --type=ts -A2
```

Repository: adamsilverstein/icemap
Length of output: 48
🏁 Script executed:
```shell
#!/bin/bash
# Look for any test code that uses the Map mock
cat -n tests/test-page.html | sed -n '150,300p'
```

Repository: adamsilverstein/icemap
Length of output: 2093
🏁 Script executed:
```shell
#!/bin/bash
# Search for how MapComponent.js instantiates the Map
cat -n src/MapComponent.js | head -150
```

Repository: adamsilverstein/icemap
Length of output: 6259
🏁 Script executed:
```shell
#!/bin/bash
# Check if there are any tests that actually call the Map constructor with element
rg -n 'new.*Map|MockMap' --type=html
```

Repository: adamsilverstein/icemap
Length of output: 127
🏁 Script executed:
```shell
#!/bin/bash
# Check if GoogleMap is being used from the `@react-google-maps/api` library
rg -n 'GoogleMap|useJsApiLoader' src/MapComponent.js -A2 -B2
```

Repository: adamsilverstein/icemap
Length of output: 913
🌐 Web query:
Google Maps API Map constructor signature element options
💡 Result:
The Google Maps JavaScript API Map constructor signature is: new google.maps.Map(mapDiv: HTMLElement, opts?: google.maps.MapOptions) - mapDiv: HTMLElement (typically a
Citations:
- 1: https://developers.google.com/maps/documentation/javascript/reference/map
- 2: https://developers.google.com/maps/documentation/javascript/reference/3.57/map
- 3: https://developers.google.com/maps/documentation/javascript/add-google-map
🌐 Web query:
Google Maps AdvancedMarkerElement constructor API setMap
💡 Result:
The Google Maps JavaScript API AdvancedMarkerElement constructor is new google.maps.marker.AdvancedMarkerElement(options?), where options is an optional AdvancedMarkerElementOptions object. Key options include map (google.maps.Map, adds to map on construction), position (LatLng or LatLngLiteral, required for display), content (HTMLElement), title (string), and others like collisionBehavior, zIndex. There is no setMap(map) method like in legacy Marker. Instead, use the map property directly: marker.map = myMap; to add, or marker.map = null; to remove. Similarly, set marker.position directly. First import: const {AdvancedMarkerElement} = await google.maps.importLibrary("marker"); Example: const {AdvancedMarkerElement} = await google.maps.importLibrary("marker"); const marker = new AdvancedMarkerElement({ map: myMap, position: {lat: 37.423, lng: -122.084} }); // Later: marker.map = null; // remove // marker.map = myMap; // re-add // marker.position = newPosition; // update position Note: Requires mapId in map options. Legacy Marker.setMap is deprecated; migrate to AdvancedMarkerElement.
Citations:
- 1: https://developers.google.com/maps/documentation/javascript/reference/advanced-markers
- 2: https://developers.google.com/maps/documentation/javascript/advanced-markers/add-marker
- 3: https://stackoverflow.com/questions/74486201/remove-google-maps-marker-advancedmarkerview-from-map/74492809
Fix Map constructor to match Google Maps API signature.
The Map mock should accept an element parameter as the first argument, since the real Google Maps API uses new google.maps.Map(element, options). While the current codebase uses the @react-google-maps/api wrapper (which handles this internally), the mock should align with the actual API surface for compatibility and clarity.
However, remove the setMap() method from the proposed fix — the real AdvancedMarkerElement API does not have a setMap() method. Instead, it uses the map property directly (e.g., marker.map = myMap), which the current mock already supports via this.map = null.
Corrected fix

```diff
 Map: class MockMap {
-  constructor(options) {
+  constructor(element, options = {}) {
+    this.element = element;
     this.options = options;
     this.setCenter = () => {};
     this.setZoom = () => {};
     this.fitBounds = () => {};
   }
 },
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
window.google = {
  maps: {
    Map: class MockMap {
      constructor(element, options = {}) {
        this.element = element;
        this.options = options;
        this.setCenter = () => {};
        this.setZoom = () => {};
        this.fitBounds = () => {};
      }
    },
    LatLngBounds: class MockLatLngBounds {
      constructor() {
        this.extend = () => {};
      }
    },
    marker: {
      AdvancedMarkerElement: class MockAdvancedMarker {
        constructor(options) {
          this.options = options;
          this.addListener = () => {};
          this.map = null;
        }
      }
    }
  }
}
```
```js
function setLoadingState() {
  const root = document.getElementById('root');
  root.innerHTML = `
    <div style="display: flex; align-items: center; justify-content: center; height: 100%; background-color: #f0f0f0;">
      <div style="text-align: center;">
        <div style="width: 40px; height: 40px; border: 4px solid #f3f3f3; border-top: 4px solid #3498db; border-radius: 50%; animation: spin 1s linear infinite; margin: 0 auto 20px;"></div>
        <p style="margin: 0; color: #666;">Loading map...</p>
      </div>
    </div>
  `;
}
```
Disable spinner animation for deterministic snapshots.
Line 132 introduces infinite motion, which can cause flaky visual diffs depending on capture timing.
Proposed fix

- <div style="width: 40px; height: 40px; border: 4px solid #f3f3f3; border-top: 4px solid #3498db; border-radius: 50%; animation: spin 1s linear infinite; margin: 0 auto 20px;"></div>
+ <div style="width: 40px; height: 40px; border: 4px solid #f3f3f3; border-top: 4px solid #3498db; border-radius: 50%; animation: none; margin: 0 auto 20px;"></div>
...
- // Add CSS animation for loading spinner
- const style = document.createElement('style');
- style.textContent = `
-   @keyframes spin {
-     0% { transform: rotate(0deg); }
-     100% { transform: rotate(360deg); }
-   }
- `;
- document.head.appendChild(style);
+ // Keep test page static for stable visual regression snapshots.

Also applies to: 182-190
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test-page.html` around lines 127 - 137, The loading spinner in
setLoadingState uses a CSS animation ("animation: spin 1s linear infinite")
which causes non-deterministic visual snapshots; update the inline markup
generated in setLoadingState (and the similar block at lines ~182-190) to remove
or disable the animation (e.g., remove the animation property or set animation:
none / override with prefers-reduced-motion-safe static styling) so snapshots
are deterministic while keeping the visible spinner shape and text unchanged;
reference the spinner element created in setLoadingState and the corresponding
spinner markup in the other block when making the change.
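An alternative (or complement) to editing the test page is to let Playwright freeze animations at capture time. A sketch of the relevant config fragment — to be merged into the project's existing `playwright.config.js`, whose other contents are not shown here:

```javascript
// playwright.config.js (fragment — merge into the existing config)
module.exports = {
  expect: {
    toHaveScreenshot: {
      // Playwright fast-forwards finite CSS animations and cancels
      // infinite ones before capturing, so the spinner renders at a
      // fixed frame regardless of timing.
      animations: 'disabled',
    },
  },
};
```

The same option can also be passed per assertion, e.g. `await expect(page).toHaveScreenshot('loading-state.png', { animations: 'disabled' })`, if only the loading-state snapshot needs it.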
function setMobileView() {
  document.body.style.maxWidth = '375px';
  document.body.style.margin = '0 auto';
}

function setTabletView() {
  document.body.style.maxWidth = '768px';
  document.body.style.margin = '0 auto';
}

function setDesktopView() {
  document.body.style.maxWidth = 'none';
  document.body.style.margin = '0';
}
🧩 Analysis chain
🏁 Script executed:

#!/bin/bash
# Verify responsive tests use Playwright viewport APIs (expected) vs only page helpers.
rg -n -C3 '\bsetViewportSize\s*\(' --type=js --type=ts
rg -n -C3 '\bset(Mobile|Tablet|Desktop)View\s*\(' --type=js --type=ts

Repository: adamsilverstein/icemap
Length of output: 1678

🏁 Script executed:

# Search for any calls to the helper functions (check all file types)
rg -n 'setMobileView|setTabletView|setDesktopView' --type-add 'html:*.html'
# Also check the test-page.html file to see context and any function calls
cat tests/test-page.html | wc -l

Repository: adamsilverstein/icemap
Length of output: 591

🏁 Script executed:

cat -n tests/test-page.html

Repository: adamsilverstein/icemap
Length of output: 8708
These manual testing helpers don't affect media queries or responsive logic.
Lines 167–180 constrain document.body width only; they don't change the actual browser viewport (window.innerWidth). Media queries and responsive CSS still react to the real viewport, so clicking "Mobile View" may show a narrower layout visually, but media queries will still apply styles based on the full browser width. A developer relying on these buttons for manual responsive testing could miss media query bugs.
Consider either:
- Documenting this limitation clearly in the test page (e.g., "These buttons only simulate width constraints; use browser DevTools or `setViewportSize()` in automated tests for true responsive testing")
- Updating the helpers to actually call `window.resizeTo()` or a Playwright-integrated method if automated tests ever use this page
Note: The automated test suite (visual-regression.spec.js) correctly uses page.setViewportSize(), so this is isolated to manual testing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test-page.html` around lines 167 - 180, The three manual helpers
(setMobileView, setTabletView, setDesktopView) only change document.body styles
and do not affect window.innerWidth or media queries; update them to attempt a
real viewport change by calling window.resizeTo(width, height) when available
(e.g., call window.resizeTo(375, window.outerHeight) in setMobileView, 768 for
setTabletView, and use a large width or window.screen.width for setDesktopView),
and fall back to the existing body.style changes if resizeTo is blocked; also
add a visible note on the test page stating "These buttons only simulate width
constraints; use browser DevTools or automated tests (page.setViewportSize) for
true responsive testing" so developers understand the limitation.
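One way to implement the suggested fallback is to attempt the real resize and only constrain the body when the viewport did not actually change. The sketch below is a hypothetical refactor — `applyViewWidth` does not exist in the repo — with the window and document injected so the logic is testable outside a browser:

```javascript
// Hypothetical helper: try a real viewport resize, fall back to the
// existing body-width constraint when resizeTo is ignored (browsers
// generally honor resizeTo only for windows opened via window.open()).
function applyViewWidth(win, doc, width) {
  try {
    win.resizeTo(width, win.outerHeight);
  } catch (e) {
    // resizeTo blocked entirely; fall through to the CSS fallback
  }
  if (win.innerWidth !== width) {
    doc.body.style.maxWidth = width + 'px';
    doc.body.style.margin = '0 auto';
  }
}

// Stub environment standing in for a browser that ignores resizeTo:
const fakeWin = { innerWidth: 1280, outerHeight: 800, resizeTo() {} };
const fakeDoc = { body: { style: {} } };
applyViewWidth(fakeWin, fakeDoc, 375);
console.log(fakeDoc.body.style.maxWidth); // '375px'
```

The existing buttons would then call `applyViewWidth(window, document, 375)` and so on; automated tests should still prefer `page.setViewportSize()`, which changes the viewport that media queries actually see.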
test('@visual - Normal state layout', async ({ page }) => {
  // test-page.html auto-calls setNormalState() on load
  await page.waitForSelector('#map-container', { timeout: 5000 });
  await expect(page).toHaveScreenshot('normal-state.png');
});

test('@visual - Loading state', async ({ page }) => {
  // Trigger loading state via the test page control
  await page.click('button:has-text("Show Loading State")');
  await page.waitForSelector('text=Loading map...', { timeout: 5000 });
  await expect(page).toHaveScreenshot('loading-state.png');
});

test('@visual - Error state', async ({ page }) => {
  // Trigger error state via the test page control
  await page.click('button:has-text("Show Error State")');
  await page.waitForSelector('text=Map Error', { timeout: 5000 });
  await expect(page).toHaveScreenshot('error-state.png');
});

test('@visual - Mobile viewport', async ({ page }) => {
  await page.setViewportSize({ width: 375, height: 667 });
  await page.goto(TEST_PAGE);
  await page.waitForSelector('#map-container', { timeout: 5000 });
  await expect(page).toHaveScreenshot('mobile-viewport.png');
});

test('@visual - Tablet viewport', async ({ page }) => {
  await page.setViewportSize({ width: 768, height: 1024 });
  await page.goto(TEST_PAGE);
  await page.waitForSelector('#map-container', { timeout: 5000 });
  await expect(page).toHaveScreenshot('tablet-viewport.png');
});

test('@visual - Desktop viewport', async ({ page }) => {
  await page.setViewportSize({ width: 1920, height: 1080 });
  await page.goto(TEST_PAGE);
  await page.waitForSelector('#map-container', { timeout: 5000 });
  await expect(page).toHaveScreenshot('desktop-viewport.png');
});
Visual tests are failing because baseline snapshots are not committed.
CI logs show every toHaveScreenshot(...) assertion is missing its baseline image under tests/visual-regression.spec.js-snapshots/, so this PR cannot pass as-is. Generate and commit the baseline snapshots (preferably on Linux/CI-like env) before merging.
🧰 Tools
🪛 GitHub Actions: Icemap Plugin Tests
[error] 14-14: Playwright visual regression failed (toHaveScreenshot): snapshot doesn't exist at /home/runner/work/icemap/icemap/tests/visual-regression.spec.js-snapshots/normal-state-chromium-linux.png; writing actual. Error at await expect(page).toHaveScreenshot('normal-state.png');
[error] 21-21: Playwright visual regression failed (toHaveScreenshot): snapshot doesn't exist at /home/runner/work/icemap/icemap/tests/visual-regression.spec.js-snapshots/loading-state-chromium-linux.png; writing actual. Error at await expect(page).toHaveScreenshot('loading-state.png');
[error] 28-28: Playwright visual regression failed (toHaveScreenshot): snapshot doesn't exist at /home/runner/work/icemap/icemap/tests/visual-regression.spec.js-snapshots/error-state-chromium-linux.png; writing actual. Error at await expect(page).toHaveScreenshot('error-state.png');
[error] 35-35: Playwright visual regression failed (toHaveScreenshot): snapshot doesn't exist at /home/runner/work/icemap/icemap/tests/visual-regression.spec.js-snapshots/mobile-viewport-chromium-linux.png; writing actual. Error at await expect(page).toHaveScreenshot('mobile-viewport.png');
[error] 42-42: Playwright visual regression failed (toHaveScreenshot): snapshot doesn't exist at /home/runner/work/icemap/icemap/tests/visual-regression.spec.js-snapshots/tablet-viewport-chromium-linux.png; writing actual. Error at await expect(page).toHaveScreenshot('tablet-viewport.png');
[error] 49-49: Playwright visual regression failed (toHaveScreenshot): snapshot doesn't exist at /home/runner/work/icemap/icemap/tests/visual-regression.spec.js-snapshots/desktop-viewport-chromium-linux.png; writing actual. Error at await expect(page).toHaveScreenshot('desktop-viewport.png');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/visual-regression.spec.js` around lines 11 - 50, Tests are failing
because baseline screenshots referenced by the visual tests in
tests/visual-regression.spec.js (tests like "@visual - Normal state layout",
"@visual - Loading state", etc.) are missing; run the Playwright visual tests in
update mode on a Linux/CI-like environment to generate deterministic baselines
(e.g. HEADLESS=1 and consistent viewport) by executing the test runner with
snapshot update enabled (Playwright: PLAYWRIGHT_UPDATE_SNAPSHOTS=1 or playwright
test --update-snapshots), verify the generated PNGs appear under
tests/visual-regression.spec.js-snapshots/, and commit those snapshot files so
the toHaveScreenshot(...) assertions have their baseline images.
The previous Linux baselines did not match the actual font rendering produced by the GitHub Actions ubuntu-latest runner, causing all visual regression tests to fail. Regenerate baselines from the runner's own deterministic output (verified identical across all retry attempts).
Summary by CodeRabbit
Release Notes
Tests
Chores