Crawler Automations
Use crawler automations to interact with pages during Scan Mode crawls — navigate, click, fill forms, and evaluate JavaScript — and review step-by-step results with screenshots.
Crawler automations are part of Scan Mode, available exclusively on the Enterprise plan. Contact sales to get started.
Overview
Crawler automations let you define sequences of steps that the cside crawler executes on your pages during a scan. This is useful for reaching content that requires interaction — such as logging in, navigating through multi-step flows, dismissing cookie banners, or triggering client-side rendering.
Each automation is a series of actions the crawler performs in order. After a crawl completes, you can review every automation’s results, see which steps succeeded or failed, and view screenshots captured at each step.
When to use automations
Automations are helpful when the crawler needs to:
- Log in to access authenticated pages
- Navigate through single-page applications or multi-step flows
- Click elements like cookie consent banners, tab controls, or “load more” buttons
- Fill forms with test data to reach specific states
- Evaluate JavaScript to trigger client-side behavior before the page is analyzed
Viewing automation results
- Open the dashboard and navigate to your Domain Settings
- Open the Automations panel
- You will see the list of automations that ran during the most recent crawler session
- Click on an automation to expand it and view the step-by-step breakdown
Each automation displays a description of what it does and a status indicating whether it completed successfully.
Understanding the step breakdown
When you expand an automation, you see each step listed in execution order. Every step includes the following details:
- Action — the type of interaction performed:
  - Navigate — load a URL
  - Click — click a CSS-selected element
  - Fill — enter a value into a form field
  - Wait — pause for a condition or duration
  - Evaluate JS — run a JavaScript expression on the page
  - Stop — end the automation
- Selector — the CSS selector targeted by the step (when applicable)
- URL — the target URL (when applicable)
- Value — the input value provided (when applicable)
- Screenshot — a screenshot captured at that step, which you can click to open in a full-size modal
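To make the action types concrete, here is a sketch of what an automation built from them might look like. The type names, field names, and login flow below are purely illustrative assumptions for this example, not cside's actual configuration format:

```typescript
// Illustrative shape of an automation: an ordered list of steps.
// These names are hypothetical, not cside's actual schema.
type Step =
  | { action: "navigate"; url: string }
  | { action: "click"; selector: string }
  | { action: "fill"; selector: string; value: string }
  | { action: "wait"; ms: number }
  | { action: "evaluateJs"; expression: string }
  | { action: "stop" };

// Example: a login flow that fills credentials, submits,
// and waits for the authenticated page to render.
const loginAutomation: Step[] = [
  { action: "navigate", url: "https://example.com/login" },
  { action: "fill", selector: "#email", value: "test@example.com" },
  { action: "fill", selector: "#password", value: "s3cret" },
  { action: "click", selector: "button[type=submit]" },
  { action: "wait", ms: 2000 },
  { action: "evaluateJs", expression: "document.title" },
];
```

Each entry corresponds to one row in the step breakdown: its `action`, plus the selector, URL, or value shown for that step.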
Step statuses
Each step is marked with a status indicator:
- Success (green check) — the step completed without errors
- Failed (red X) — the step encountered an error and stopped the automation
- Not yet run (grayed out) — the step was not reached because a prior step failed
When an automation fails, the status shows “Error on step N” along with the error message, making it straightforward to identify which step caused the issue and why.
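The stop-on-failure behavior described above can be sketched as follows: steps run in order, the first failure halts execution, and every later step is left in the "not yet run" state. The function and result names here are illustrative, not part of cside's API:

```typescript
// Sketch of stop-on-failure semantics: run steps in order,
// halt on the first failure, mark the rest "not-yet-run".
type StepResult = "success" | "failed" | "not-yet-run";

function runSteps(steps: Array<() => boolean>): StepResult[] {
  const results: StepResult[] = steps.map(() => "not-yet-run");
  for (let i = 0; i < steps.length; i++) {
    if (steps[i]()) {
      results[i] = "success";
    } else {
      results[i] = "failed"; // reported as "Error on step i+1"
      break;                 // subsequent steps never execute
    }
  }
  return results;
}
```

For example, `runSteps([() => true, () => false, () => true])` yields one success, one failure, and one step that never ran, matching the three status indicators.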
Interpreting errors
Common reasons a step might fail:
- Selector not found — the CSS selector does not match any element on the page. Verify that the selector is correct and that the page has finished loading before the step runs.
- Navigation timeout — the target URL did not load within the allowed time. Check that the URL is reachable and that any required cookies or storage values are configured.
- JavaScript error — an Evaluate JS step threw an exception. Review the expression for syntax errors or references to elements that may not exist yet.
- Element not interactable — a Click or Fill step targeted an element that is hidden, disabled, or covered by another element.
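One way to avoid JavaScript errors in an Evaluate JS step is to write expressions that tolerate missing elements. The selector and expressions below are illustrative assumptions, not taken from a real automation:

```typescript
// An Evaluate JS expression that throws a TypeError when the
// element is absent, failing the step (selector is illustrative):
const fragile = 'document.querySelector(".cookie-banner").remove()';

// Optional chaining turns a missing element into a no-op,
// so the step succeeds whether or not the banner is present:
const defensive = 'document.querySelector(".cookie-banner")?.remove()';
```

The same pattern applies to any expression that dereferences a query result that may be null.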
Screenshots are captured at each step regardless of success or failure. When debugging a failed automation, check the screenshot on the failing step and the step immediately before it to understand the page state at the time of the error.
Next steps
- Set up Scan Mode — if you have not configured the crawler yet, see the Scan Mode setup guide
- Configure page-level settings — add cookies, localStorage, or sessionStorage values so the crawler can access authenticated content
- Review crawler sessions — use the automations panel alongside session history to monitor ongoing crawl results