Last updated: February 2026
Every article on this site starts with me. I bring the idea, the topic, the outline, and the opinions — drawn from 20+ years working in SEO, PPC, analytics, and marketing infrastructure. Before any draft is written, I answer a structured set of interview questions about the topic in my own words. My answers become the raw material. AI takes those answers and shapes them into a structured, publishable article.
This is a collaboration: I supply the thinking, the observations, and the experience. AI supplies the structure and the editorial discipline. Neither works without the other. The ideas are mine. The draft is ours.
The following tools on this site perform automated analysis. This section documents what each tool checks, how results are scored or classified, and what the criteria mean.
What it checks: HTTP status codes, redirect chains, redirect type (301/302), response time, canonical tags, and indexability signals.
How results are classified: Pass / Warning / Fail based on status code and redirect depth.
Results reflect the state of the URL at time of scan and are not cached.
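The Pass / Warning / Fail logic described above could be sketched roughly as follows. The threshold of three redirect hops and the function name are illustrative assumptions, not the tool's actual rules:

```python
# Hypothetical sketch of the Pass/Warning/Fail classification.
# The redirect-depth threshold is an assumption for illustration.

def classify_url_check(terminal_status: int, redirect_depth: int) -> str:
    """Classify a scanned URL by its terminal status code and redirect depth."""
    if terminal_status >= 400:
        return "Fail"        # broken or inaccessible URL
    if redirect_depth >= 3:
        return "Warning"     # long redirect chain slows crawling
    return "Pass"
```

A 200 response reached directly passes, while the same response behind a long chain is flagged for review.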
What it checks: Product feed structure, required field presence, field format validity, image URL accessibility, price format, and GTIN/MPN format.
How errors are classified: Error / Warning / Info.
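A feed check of this kind could be sketched as below. The required-field list, the price pattern, and the GTIN rule are illustrative assumptions, not the validator's exact criteria:

```python
# Hypothetical sketch of per-item feed validation with Error/Warning/Info
# severities. Field names and rules are assumptions for illustration.
import re

REQUIRED = ("id", "title", "price", "link")

def validate_feed_item(item: dict) -> list:
    """Return (severity, message) pairs for one product feed item."""
    issues = []
    for field in REQUIRED:
        if not item.get(field):
            issues.append(("Error", f"missing required field: {field}"))
    price = item.get("price", "")
    # Expect e.g. "19.99 USD": an amount plus an ISO 4217 currency code
    if price and not re.fullmatch(r"\d+(\.\d{1,2})? [A-Z]{3}", price):
        issues.append(("Warning", "price not in '<amount> <CUR>' format"))
    gtin = item.get("gtin", "")
    if gtin and not re.fullmatch(r"\d{8}|\d{12,14}", gtin):
        issues.append(("Warning", "GTIN length/format looks invalid"))
    if not item.get("gtin") and not item.get("mpn"):
        issues.append(("Info", "no GTIN or MPN supplied"))
    return issues
```

Errors block a listing outright, warnings degrade it, and info-level notes are advisory, which is why the three severities are kept separate.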
What it checks: Full redirect chains for a given URL — each hop's HTTP status code, destination URL, and latency per hop.
How results are classified: Pass / Warning / Fail based on chain length and terminal status.
Latency is measured as wall-clock time per hop using live HTTP requests at time of scan.
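Chain tracing with per-hop timing could look roughly like this. The `fetch` callable is injected here so the logic can be shown without live requests; the real tool issues live HTTP calls, and the hop-count threshold is an assumption:

```python
# Hypothetical sketch of redirect-chain tracing with per-hop latency.
import time

def trace_chain(url, fetch, max_hops=10):
    """Follow redirects, recording (url, status, latency_seconds) per hop.

    `fetch` returns (status_code, Location header or None) for a URL.
    """
    hops = []
    for _ in range(max_hops):
        start = time.perf_counter()
        status, location = fetch(url)
        hops.append((url, status, time.perf_counter() - start))
        if status in (301, 302, 307, 308) and location:
            url = location      # follow the redirect to the next hop
        else:
            break               # terminal response reached
    return hops

def classify_chain(hops):
    """Pass/Warning/Fail from chain length and terminal status."""
    terminal = hops[-1][1]
    if terminal >= 400:
        return "Fail"
    if len(hops) - 1 >= 3:      # three or more redirects before the final URL
        return "Warning"
    return "Pass"
```

Capping `max_hops` also guards against redirect loops, which would otherwise trace forever.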
What it checks: Image format, file size, presence and quality of alt attributes, and general optimization signals (e.g., use of next-gen formats).
How results are classified: Pass / Warning / Fail per image.
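A per-image classification could be sketched as follows. The 200 KB size threshold and the set of "next-gen" formats are assumptions for illustration:

```python
# Hypothetical per-image Pass/Warning/Fail check. Thresholds and the
# next-gen format list are assumptions, not the tool's exact criteria.

NEXT_GEN = {"webp", "avif"}

def classify_image(fmt, size_bytes, alt):
    """Classify one image by format, file size, and alt attribute."""
    if not alt or not alt.strip():
        return "Fail"            # missing or empty alt text
    if size_bytes > 200_000:
        return "Warning"         # oversized file hurts page speed
    if fmt.lower() not in NEXT_GEN:
        return "Warning"         # legacy format (e.g. jpeg/png)
    return "Pass"
```

Note the ordering: an accessibility failure (missing alt) outranks optimization warnings.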
What it checks: Font families referenced on a page, font weights loaded vs. used, CSS @font-face declarations, and rendering flags that affect legibility.
How results are classified: Pass / Warning / Notice.
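The "weights loaded vs. used" comparison could be sketched like this. The severity mapping is an assumption for illustration:

```python
# Hypothetical sketch of the loaded-vs-used font weight comparison.

def unused_weights(loaded, used):
    """Weights declared in @font-face but never applied in the CSS."""
    return loaded - used

def classify_font(loaded, used):
    missing = used - loaded      # used but not loaded: browser fakes the weight
    if missing:
        return "Warning"         # synthetic bold/italic can hurt legibility
    if unused_weights(loaded, used):
        return "Notice"          # wasted bytes, not a rendering problem
    return "Pass"
```

This mirrors the three-level classification: a weight that renders synthetically is a legibility issue, while an unused download is only a performance note.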
What it checks: Revenue dataset integrity across channels — normalization quality, field completeness, channel attribution consistency, and anomaly signals.
How results are classified: Confidence score (0–100) per channel row, aggregated to a dataset-level integrity score.
The audit does not access live payment processors or financial systems — it evaluates the structure and internal consistency of the data you provide.
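The roll-up from per-row confidence to a dataset-level score could be sketched as below. The field list, the anomaly penalty, and the plain-average aggregation are assumptions for illustration:

```python
# Hypothetical sketch: per-row confidence scores aggregated into a
# dataset-level integrity score. Weights and penalties are assumptions.

def row_confidence(row, expected_fields):
    """Score one channel row 0-100 on completeness and anomaly signals."""
    present = sum(1 for f in expected_fields if row.get(f) not in (None, ""))
    completeness = present / len(expected_fields)
    penalty = 0.2 if row.get("anomaly_flag") else 0.0
    return max(0.0, completeness - penalty) * 100

def dataset_integrity(rows, expected_fields):
    """Aggregate row confidences into a single dataset-level score."""
    scores = [row_confidence(r, expected_fields) for r in rows]
    return round(sum(scores) / len(scores), 1) if scores else 0.0
```

Because the audit only sees the supplied data, every score here is a statement about internal consistency, not about what actually happened in a payment system.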
What it checks: Signals that affect how AI crawlers and language model retrieval systems access and index a page — robots.txt directives, sitemap presence, structured data, canonical signals, and content clarity markers.
How results are classified: AI Visibility Score (0–100) composed of sub-scores per signal category.
This tool reflects the technical signals AI systems use — it does not predict whether any specific AI product will surface the content.
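Composing sub-scores into one 0-100 score could look roughly like this. The category names and weights are assumptions, not the tool's actual weighting:

```python
# Hypothetical sketch of combining per-category sub-scores into one
# AI Visibility Score. Category names and weights are assumptions.

WEIGHTS = {
    "robots":          0.25,   # robots.txt directives for AI crawlers
    "sitemap":         0.15,
    "structured_data": 0.25,
    "canonical":       0.15,
    "content_clarity": 0.20,
}

def ai_visibility_score(sub_scores):
    """Weighted mean of 0-100 sub-scores; a missing category scores 0."""
    total = sum(WEIGHTS[c] * sub_scores.get(c, 0) for c in WEIGHTS)
    return round(total, 1)
```

Weights summing to 1.0 keep the composite on the same 0-100 scale as the sub-scores, so a perfect page scores exactly 100.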
What it checks: Tone, sentiment polarity, and emotional register of page content — positive, negative, neutral, or mixed at both the page level and by content section.
How results are classified: Sentiment label + confidence score per segment.
Sentiment classification is based on model inference and should be treated as a directional signal, not a definitive editorial judgment.
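Rolling per-segment labels up to a page-level label could be sketched as follows. The 0.5 threshold for calling a page "mixed" is an assumption, not the model's actual rule:

```python
# Hypothetical sketch: aggregating per-segment (label, confidence) pairs
# into a page-level sentiment label. Thresholds are assumptions.

def page_sentiment(segments):
    """segments: list of (label, confidence) pairs. Returns the page label."""
    pos = sum(c for label, c in segments if label == "positive")
    neg = sum(c for label, c in segments if label == "negative")
    if pos > 0 and neg > 0 and min(pos, neg) / max(pos, neg) > 0.5:
        return "mixed"       # both polarities carry real weight
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

Weighting by confidence rather than counting segments means one high-confidence negative section can outweigh several weakly positive ones, which matches the "directional signal" caveat above.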
AI-assisted content reflects my knowledge at time of writing. Tool analysis reflects the state of the data at time of scan. Neither constitutes professional advice.
Results from automated tools should be verified before making infrastructure changes, publishing corrections, or drawing conclusions from aggregate data. The tools are diagnostic aids — final judgment belongs to the practitioner using them.
If you find an error in methodology, a factual inaccuracy in an article, or a tool producing unexpected results, use the contact page to let me know.
This methodology is updated when criteria change, tools are revised, or a documented error is confirmed. When that happens, the "Last updated" date at the top of this page will reflect the revision.