Launch Audit Help

Practical guidance for running audits, understanding findings, and fixing launch blockers quickly.

Quick Start

  1. Enter Target: Paste a site URL or a direct sitemap.xml URL and click Start Audit.
  2. Tune Crawl Behavior: Scope is all discovered pages (up to the system max). Use run mode, concurrency, delays, and allow/deny patterns to control behavior.
  3. Triage + Export: Filter by severity, fix critical pages first, then export JSON or CSV for team handoff.

Understanding the Dashboard

  • Critical: HTTP errors, broken pages, or major technical blockers.
  • Warnings: SEO and indexing improvements needed before launch.
  • Passed: No detected issues in the audited checks.
  • Metric pills + search: Apply globally across the full result set, then paginate.
  • Grouping toggle: Switch between grouped severity view and plain list view.
  • Score info icon: Opens a modal with score formula and weighted factor breakdown.
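The grouped severity view and global search described above can be sketched as a small filter-then-bucket step. This is an illustrative sketch only; the field names ("severity", "url") and severity labels are assumptions, not the tool's actual result schema.

```python
from collections import defaultdict

SEVERITY_ORDER = ["critical", "warning", "passed"]  # assumed labels

def group_findings(findings, search=""):
    """Apply a global search filter, then bucket findings by severity."""
    groups = defaultdict(list)
    for f in findings:
        if search and search.lower() not in f["url"].lower():
            continue  # search applies across the full result set
        groups[f["severity"]].append(f)
    # Return non-empty buckets in dashboard order: critical first.
    return {sev: groups[sev] for sev in SEVERITY_ORDER if groups[sev]}

findings = [
    {"url": "https://example.com/checkout", "severity": "critical"},
    {"url": "https://example.com/blog/a", "severity": "warning"},
    {"url": "https://example.com/blog/b", "severity": "passed"},
]
print(group_findings(findings, search="/blog/"))
```

Toggling to the plain list view is then just skipping the bucketing step and paginating the filtered list directly.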

Advanced Options

Each option below lists its default, when to change it, and an example value.

  • page_scope (default: all discovered pages): Scope is fixed to the full discovered sitemap set, bounded by the server max. Example: max 650.
  • cache_mode (default: auto): auto reuses the latest matching run; fresh forces a new crawl. Example: fresh.
  • parallel (default: system default): Lower it if the target server is sensitive; raise it for faster internal staging audits. Example: 4.
  • ua (default: built-in UA): Set a custom user-agent if traffic filtering or bot rules require it. Example: LaunchAuditBot/1.0.
  • delay_min / delay_max (default: system defaults): Add crawl delay for fragile origins or strict WAF/rate limits. Example: 200 / 600.
  • allow (default: empty): Scan only a specific area of a site. Example: /blog/*.
  • deny (default: empty): Exclude internal or low-value paths. Example: /checkout/*.
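One way allow/deny patterns like /blog/* and /checkout/* could gate crawl scope is shell-style glob matching on the URL path, with deny winning over allow. The tool's exact matcher is not documented, so this is a sketch under that assumption.

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def in_scope(url, allow=(), deny=()):
    """Deny wins over allow; an empty allow list permits everything."""
    path = urlparse(url).path
    if any(fnmatch(path, pat) for pat in deny):
        return False  # explicitly excluded
    return not allow or any(fnmatch(path, pat) for pat in allow)

# /blog/* keeps blog pages in scope; /checkout/* is excluded.
print(in_scope("https://example.com/blog/post-1", allow=["/blog/*"]))
print(in_scope("https://example.com/checkout/cart", deny=["/checkout/*"]))
```

Note that glob semantics matter: with fnmatch, * also crosses slashes, so /blog/* matches nested paths like /blog/2024/post as well.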

Fix Priorities

  1. Fix Critical pages (4xx/5xx, hard failures).
  2. Fix indexability issues (canonical mismatches, noindex, robots constraints).
  3. Fix core on-page SEO (title, description, H1).
  4. Improve social metadata and analytics completeness.
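The priority list above can drive an automatic fix queue: rank each finding by its category and sort. The category names here are illustrative stand-ins, not the exporter's real field values.

```python
# Lower rank = fix first, matching the ordered list above.
PRIORITY = {
    "http_error": 0,       # Critical pages (4xx/5xx, hard failures)
    "indexability": 1,     # canonical mismatches, noindex, robots
    "on_page_seo": 2,      # title, description, H1
    "social_analytics": 3  # social metadata, analytics completeness
}

def fix_order(findings):
    """Sort findings into fix order; unknown categories sink to the end."""
    return sorted(findings, key=lambda f: PRIORITY.get(f["category"], len(PRIORITY)))

findings = [
    {"url": "/pricing", "category": "on_page_seo"},
    {"url": "/cart", "category": "http_error"},
    {"url": "/blog", "category": "indexability"},
]
print([f["url"] for f in fix_order(findings)])
```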

Exports

  • JSON: Best for automation, CI checks, and custom pipelines.
  • CSV: Best for spreadsheet triage, QA handoff, and owner assignment.
  • Recommended launch workflow: run daily, filter critical first, track deltas in CSV.
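For the CI use case, a gate can read the JSON export and fail the build while critical findings remain. The schema assumed here (a top-level "pages" list with a "severity" field) is a guess for illustration; adapt it to the actual export.

```python
import json

def critical_count(report_text):
    """Count pages flagged critical in a JSON export string."""
    report = json.loads(report_text)
    return sum(1 for page in report["pages"] if page["severity"] == "critical")

sample = json.dumps({"pages": [
    {"url": "/a", "severity": "critical"},
    {"url": "/b", "severity": "passed"},
]})
n = critical_count(sample)
print(f"critical pages: {n}")
# In a CI script: raise SystemExit(1) if n else None
```

Running this daily and diffing the counts gives the delta tracking the workflow bullet recommends.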

Troubleshooting

  • If scans are slow, lower parallel and increase delays.
  • If a crawl is interrupted, use Resume last run to continue pending pages.
  • If expected pages are missing, verify sitemap coverage and allow/deny rules.
  • If canonical checks look odd, verify redirect chains and final URL scheme.
  • If analytics shows "none detected", treat it as informational unless tracking is required.
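The first two knobs above (lower parallel, raise delays) amount to capping workers and sleeping a randomized interval per request. A minimal sketch, assuming the 200/600 ms example from the options table; fetch() here is a stand-in, not the auditor's real fetcher.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

DELAY_MIN, DELAY_MAX = 0.2, 0.6  # seconds (the 200 / 600 example above)

def polite_fetch(url):
    """Sleep a jittered crawl delay, then fetch (stubbed here)."""
    time.sleep(random.uniform(DELAY_MIN, DELAY_MAX))
    return url  # stand-in for the real HTTP request

def crawl(urls, parallel=2):
    """Cap concurrency with a worker pool; lower `parallel` for fragile origins."""
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        return list(pool.map(polite_fetch, urls))

print(crawl(["https://example.com/a", "https://example.com/b"]))
```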

Server Appendix

# Brotli/Gzip
<IfModule mod_brotli.c>
  AddOutputFilterByType BROTLI_COMPRESS text/html text/css application/javascript application/json image/svg+xml
</IfModule>
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript application/json image/svg+xml
</IfModule>

# Security headers
<IfModule mod_headers.c>
  Header always set X-Content-Type-Options "nosniff"
  Header always set Referrer-Policy "strict-origin-when-cross-origin"
</IfModule>

# Rewrites
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteRule ^help/?$ help/index.php [L,QSA]
</IfModule>
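After deploying the header rules above, it is worth verifying they actually reach the client. A small check over a response-headers mapping, using only the standard library; in practice you would pass in `dict(response.headers)` from an HTTP client of your choice.

```python
def missing_security_headers(headers):
    """Return the names of required security headers that are absent or wrong."""
    required = {
        "X-Content-Type-Options": "nosniff",
        "Referrer-Policy": "strict-origin-when-cross-origin",
    }
    # Header names are case-insensitive, so compare on lowercased keys.
    lower = {k.lower(): v for k, v in headers.items()}
    return [name for name, value in required.items()
            if lower.get(name.lower()) != value]

sample = {"content-type": "text/html", "X-Content-Type-Options": "nosniff"}
print(missing_security_headers(sample))
```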

FAQ

Why are expected pages missing from results?
Usually due to sitemap coverage, allow/deny filters, the server max page cap, redirects, or blocked fetches.

What does a gray analytics status mean?
Gray means "not detected" and is informational. It is not treated as a hard error by itself.

How do I audit only one section of a site?
Use allow patterns such as /blog/* and optional deny exclusions.