Acceptable Use Policy
Last updated: April 10, 2026
How Our Scanner Works
The SignalixIQ scanner identifies itself as SignalixIQ-Scanner/1.0 and includes a contact URL in its User-Agent string. It:
- Obeys robots.txt — Disallow directives are honored before any fetch
- Respects Crawl-Delay — minimum 500ms between requests, longer if specified
- Limits scope — maximum 20 pages per scan
- Caps concurrency — 2 simultaneous requests per domain
- Times out gracefully — 15-second per-page timeout
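The robots.txt and Crawl-Delay behavior described above can be sketched with Python's standard-library robots.txt parser. This is a minimal illustration, not our production code; the function names (can_fetch, crawl_delay_seconds) are hypothetical, and the 0.5-second floor comes from the 500ms minimum stated above.

```python
import urllib.robotparser

USER_AGENT = "SignalixIQ-Scanner"  # the agent name from the policy above

def can_fetch(robots_txt: str, url: str) -> bool:
    """Check the Disallow decision before any fetch is made."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(USER_AGENT, url)

def crawl_delay_seconds(robots_txt: str, floor: float = 0.5) -> float:
    """Honor Crawl-delay, never going below the 500ms policy minimum."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    delay = rp.crawl_delay(USER_AGENT)
    return max(floor, float(delay)) if delay is not None else floor

# Example robots.txt for a hypothetical store:
robots = "User-agent: SignalixIQ-Scanner\nDisallow: /private/\nCrawl-delay: 2\n"
print(can_fetch(robots, "https://example.com/products"))   # True
print(can_fetch(robots, "https://example.com/private/x"))  # False
print(crawl_delay_seconds(robots))                         # 2.0
```

Because the Disallow check happens before the fetch, a blocked path never generates a request at all.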
What You Can Scan
- Stores you own or operate
- Stores you have explicit written permission to scan
- Public e-commerce sites that allow crawler access via robots.txt
What You Cannot Scan
- Amazon — explicitly disabled. Amazon's terms of service prohibit scraping.
- Sites that block our user-agent in robots.txt
- Sites you do not own or have authorization to analyze
- Sites containing PII, health, financial, or regulated data
- Internal or private network URLs (we block these automatically)
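The automatic block on internal and private network URLs mentioned above can be approximated with the standard ipaddress module. This is a simplified sketch (the function name is_private_target is hypothetical, and a production check would also cover redirects and IPv6 edge cases):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_private_target(url: str) -> bool:
    """Refuse URLs that point at internal/private networks.

    Returns True (blocked) for private, loopback, and link-local
    addresses, and for anything that cannot be parsed or resolved.
    """
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable URL -> refuse
    try:
        addr = ipaddress.ip_address(host)  # literal IP in the URL
    except ValueError:
        try:
            addr = ipaddress.ip_address(socket.gethostbyname(host))
        except OSError:
            return True  # unresolvable hostname -> refuse
    return addr.is_private or addr.is_loopback or addr.is_link_local

print(is_private_target("http://10.0.0.5/admin"))  # True  (blocked)
print(is_private_target("http://8.8.8.8/"))        # False (public address)
```

Failing closed (refusing anything unparseable or unresolvable) is the safer default for a scanner that fetches attacker-supplied URLs.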
Rate Limits
Free tier: 2 scans per month per IP. Paid tiers: unlimited scans subject to fair use. Abuse (e.g., scanning thousands of unrelated stores) triggers automatic suspension.
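A per-IP monthly quota like the free-tier limit above can be sketched with a simple in-memory counter. This is purely illustrative (the ScanQuota class is hypothetical; a real service would use persistent, month-aware storage):

```python
from collections import defaultdict

FREE_TIER_SCANS_PER_MONTH = 2  # the free-tier limit stated in the policy

class ScanQuota:
    """Minimal in-memory model of a per-IP, per-month scan limit."""

    def __init__(self, limit: int = FREE_TIER_SCANS_PER_MONTH):
        self.limit = limit
        self.used = defaultdict(int)  # keyed by (ip, "YYYY-MM")

    def try_scan(self, ip: str, month: str) -> bool:
        """Consume one scan if quota remains; return whether it was allowed."""
        key = (ip, month)
        if self.used[key] >= self.limit:
            return False
        self.used[key] += 1
        return True

q = ScanQuota()
print(q.try_scan("203.0.113.7", "2026-04"))  # True
print(q.try_scan("203.0.113.7", "2026-04"))  # True
print(q.try_scan("203.0.113.7", "2026-04"))  # False (limit reached)
print(q.try_scan("203.0.113.7", "2026-05"))  # True  (new month, fresh quota)
```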
Reporting Abuse
If you believe SignalixIQ has scanned your site without authorization, email abuse@signalixiq.com. Include the IP address, timestamp, and URL paths from your access logs. We respond within 24 hours and will block future scans of your domain on request.
Blocking Our Crawler
To block SignalixIQ from your site, add this to your robots.txt:
User-agent: SignalixIQ-Scanner
Disallow: /
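You can confirm that this rule blocks our crawler (and only our crawler) with Python's standard-library robots.txt parser; the store URL below is a placeholder:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: SignalixIQ-Scanner",
    "Disallow: /",
])

# The rule blocks SignalixIQ-Scanner everywhere on the site...
print(rp.can_fetch("SignalixIQ-Scanner", "https://yourstore.example/any/page"))  # False
# ...while other crawlers remain unaffected by this entry.
print(rp.can_fetch("SomeOtherBot", "https://yourstore.example/any/page"))        # True
```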