You built something with Lovable. It looks great. It works. You're proud of it, and rightly so. Then you Google your own brand name and get nothing. You ask ChatGPT about your product and it has no idea you exist.
Welcome to the Single Page Application problem, and if you've built anything with Lovable, you have it whether you know it or not.
The technical fix for Lovable sites specifically is something we've refined through a fair amount of trial and error, and I'm going to walk you through the whole thing. Every step, every command, every gotcha.
Why Lovable sites are invisible
Lovable builds what's called a Single Page Application, or SPA. When someone visits your site in a browser, the server sends back a mostly empty HTML file - literally just a <div id="root"></div> - and then JavaScript takes over, fetching content and rendering the page on the client side. Your browser runs the JavaScript, the page fills in, and everything looks fine.
The problem is that most bots don't run JavaScript. Googlebot has some capacity for it, but it's inconsistent and slow. Bingbot barely tries. And the A.I. crawlers - GPTBot, ClaudeBot, PerplexityBot - don't execute JavaScript at all. They fetch your page, see an empty div, and move on. As far as they're concerned, your site has no content.
This means your Lovable site is effectively a blank page to every search engine and every A.I. system that might otherwise cite, recommend, or surface your business.
How to fix it
The fix has two parts. First, we need to create fully pre-rendered copies of every page on your site - proper HTML with all the content baked in, the way a bot expects to see it. Second, we need something sitting in front of your site that can tell humans and bots apart, routing humans to the normal Lovable experience and serving bots the pre-rendered versions instead.
We're going to build this with Cloudflare. If you haven't used it before, Cloudflare is a network that sits in front of your website - every request hits their servers first, and you can run custom code there to decide what happens next. To set it up, you point your domain's nameservers at Cloudflare. This doesn't mean moving your domain away from GoDaddy or Namecheap or wherever you bought it - you still own and renew it there. You're just telling the internet to route traffic through Cloudflare's network. Cloudflare walks you through the process and imports all your existing DNS records, so nothing breaks.
Cloudflare has a feature called Workers - small pieces of code that run on their network and can intercept, inspect, and modify any request passing through. We're going to deploy two of them. The first is an edge worker that sits on your domain and handles the routing: it checks each visitor's User-Agent string, and if it matches a known bot, serves pre-rendered HTML from Cloudflare R2 storage (think of R2 as a simple file store). If it's a human, the worker passes them straight through to Lovable.
The second is a renderer worker. This one uses Cloudflare's Browser Rendering service - essentially a headless Chrome instance running on their network - to crawl your site, discover all your pages automatically from internal links, render each one, and save the fully-rendered HTML to R2. It runs on a daily schedule, so when you add new pages in Lovable, they get picked up on the next cycle without you having to do anything.
On top of the routing, the edge worker also serves a custom robots.txt that explicitly allows all A.I. crawlers (most sites accidentally block them), an llms.txt file that gives A.I. systems a structured summary of your site, and an auto-generated sitemap. The whole thing costs almost nothing to run - Cloudflare's free tier covers most small sites comfortably.
What you'll need before you start
From this point on, we're going to be working in a terminal - the command line interface on your computer. On a Mac, that's the Terminal app (or iTerm if you've installed it). On Windows, it's PowerShell or Command Prompt. If you've never opened a terminal before, this is where things get properly technical. Every step from here involves typing commands and reading their output. I've tried to make each one as copy-paste-friendly as possible, but you will need to be comfortable with this way of working.
You'll need a Cloudflare account (the free tier is fine to start with). You'll also need Node.js installed on your machine - this is a JavaScript runtime that lets you run code outside of a browser, and a lot of developer tools depend on it. If you don't have it, download it from nodejs.org and install the LTS version. You can check whether it's already installed by opening your terminal and typing node --version - if you get a version number back, you're good.
Once Node.js is sorted, you need wrangler, which is Cloudflare's command line tool for managing Workers. Install it by running:
npm install -g wrangler
Then log in to your Cloudflare account from the terminal:
wrangler login
This will open a browser window asking you to authorise the connection. Click through, and you're set.
You'll also need to gather a few specific details about your setup. Have these to hand before you start:
Your domain - the apex domain without www. For example, growthmode.co.
Your Lovable slug - this is the part before .lovable.app in your staging URL. You can find it in your Lovable project settings. Something like my-project-abc123.
Your Cloudflare account ID - visible in the Cloudflare dashboard under any zone's overview page, in the right-hand sidebar.
Your Workers subdomain - the *.workers.dev subdomain assigned to your account. You'll find this in the Workers & Pages section of the Cloudflare dashboard.
Step 1: Create the R2 bucket
This is where your pre-rendered HTML snapshots will live. Run:
npx wrangler r2 bucket create your-domain-prerender
Replace your-domain-prerender with something sensible - I usually use the domain name followed by -prerender. So for growthmode.co it would be growthmode-co-prerender.
Step 2: Deploy the edge worker
The edge worker does the heavy lifting. It sits on your domain's routes and decides what to do with each incoming request: pass humans through to Lovable, serve bots the pre-rendered HTML, handle www-to-apex redirects, and serve the special files like robots.txt and llms.txt.
Create a directory for it and set up two files:
mkdir -p your-domain-prerender-edge
cd your-domain-prerender-edge
First, create wrangler.toml. There's one absolutely critical thing here that will cost you hours of debugging if you get it wrong: the routes array must appear before the [[r2_buckets]] block. TOML treats everything after a table header as part of that table, which means if your routes come after [[r2_buckets]], they get silently swallowed into the R2 config and ignored. Your worker deploys fine, it just doesn't actually sit on your domain. I learned this one the hard way.
name = "your-domain-prerender-edge"
main = "worker.js"
compatibility_date = "2024-09-23"
account_id = "YOUR_ACCOUNT_ID"
workers_dev = false
# Zone routes — MUST be before [[r2_buckets]]
routes = [
{ pattern = "yourdomain.com/*", zone_name = "yourdomain.com" },
{ pattern = "www.yourdomain.com/*", zone_name = "yourdomain.com" }
]
# R2 bucket binding
[[r2_buckets]]
binding = "PRERENDER_BUCKET"
bucket_name = "your-domain-prerender"
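For contrast, this is the broken ordering described above - perfectly valid TOML, deploys without complaint, but the routes never register because they're parsed as part of the R2 table:

```toml
# WRONG — routes appears after the [[r2_buckets]] table header, so TOML
# treats it as a key of that table. The worker deploys but never attaches
# to your domain, with no error anywhere.
[[r2_buckets]]
binding = "PRERENDER_BUCKET"
bucket_name = "your-domain-prerender"

routes = [
  { pattern = "yourdomain.com/*", zone_name = "yourdomain.com" }
]
```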
Now create worker.js. This is the full edge worker code. Replace SLUG with your Lovable slug, DOMAIN with your apex domain, and customise the LLMS_TXT content to describe your site:
// Bot detection — search engines, social previews, AND LLM crawlers
const BOT_UA =
/Googlebot|Google-InspectionTool|AdsBot-Google|Google-Extended|Storebot-Google|GoogleOther|Bingbot|DuckDuckBot|DuckAssistBot|Slurp|YandexBot|Applebot|Applebot-Extended|PetalBot|Baiduspider/i;
const LLM_UA =
/GPTBot|ChatGPT-User|OAI-SearchBot|ClaudeBot|Claude-User|Claude-SearchBot|PerplexityBot|Perplexity-User|Meta-ExternalAgent|Meta-ExternalFetcher|Bytespider|CCBot|cohere-ai|Amazonbot|YouBot|DeepSeekBot|AI2Bot|Diffbot|ImagesiftBot|Omgilibot/i;
const SOCIAL_UA =
/facebookexternalhit|Twitterbot|LinkedInBot|Slackbot|WhatsApp|TelegramBot|Discordbot|Embedly/i;
function isBot(ua) {
return BOT_UA.test(ua) || LLM_UA.test(ua) || SOCIAL_UA.test(ua);
}
const ORIGIN_HOST = "YOUR-SLUG.lovable.app";
const APEX_DOMAIN = "yourdomain.com";
const LLMS_TXT = `# Your Site Name
> One-line description of what you do.
A paragraph expanding on the description. What you offer,
who you serve, what makes you different.
## Pages
- [Home](https://yourdomain.com/): Main landing page
- [About](https://yourdomain.com/about): About the company
- [Pricing](https://yourdomain.com/pricing): Plans and pricing
`;
function isAssetPath(pathname) {
return (
pathname.startsWith("/assets/") ||
pathname === "/favicon.ico" ||
pathname.startsWith("/~") ||
/\.(png|jpg|jpeg|webp|gif|svg|ico|css|js|map|woff2?|ttf|eot|json|xml)$/i.test(
pathname
)
);
}
function pathToKey(pathname) {
let p = pathname.split("?")[0].split("#")[0];
p = p.replace(/\/+$/, "");
if (p === "") return "index.html";
if (p.startsWith("/")) p = p.slice(1);
return p + ".html";
}
async function fetchFromOrigin(request) {
const url = new URL(request.url);
url.hostname = ORIGIN_HOST;
url.protocol = "https:";
const headers = new Headers(request.headers);
headers.set("Host", ORIGIN_HOST);
headers.set("X-Forwarded-Host", APEX_DOMAIN);
const originRequest = new Request(url.toString(), {
method: request.method,
headers: headers,
body: request.body,
redirect: "manual",
});
const response = await fetch(originRequest);
const newResponse = new Response(response.body, response);
newResponse.headers.set("x-edge-worker", "active");
return newResponse;
}
export default {
async fetch(request, env) {
const url = new URL(request.url);
// www → apex redirect
if (url.hostname === "www." + APEX_DOMAIN) {
const target = new URL(url);
target.hostname = APEX_DOMAIN;
return Response.redirect(target.toString(), 301);
}
// Sitemap from R2
if (url.pathname === "/sitemap.xml") {
const obj = await env.PRERENDER_BUCKET.get("sitemap.xml");
if (obj) {
return new Response(obj.body, {
headers: {
"content-type": "application/xml; charset=utf-8",
"cache-control": "public, max-age=86400",
},
});
}
}
// Robots.txt — explicitly allows all AI crawlers
if (url.pathname === "/robots.txt") {
return new Response(
[
"User-agent: *",
"Allow: /",
"",
"# AI/LLM crawlers — explicitly allowed",
"User-agent: GPTBot",
"Allow: /",
"",
"User-agent: ChatGPT-User",
"Allow: /",
"",
"User-agent: ClaudeBot",
"Allow: /",
"",
"User-agent: Claude-SearchBot",
"Allow: /",
"",
"User-agent: PerplexityBot",
"Allow: /",
"",
"User-agent: Google-Extended",
"Allow: /",
"",
"User-agent: Meta-ExternalAgent",
"Allow: /",
"",
"User-agent: CCBot",
"Allow: /",
"",
"User-agent: Amazonbot",
"Allow: /",
"",
"User-agent: DeepSeekBot",
"Allow: /",
"",
`Sitemap: https://${APEX_DOMAIN}/sitemap.xml`,
].join("\n"),
{
headers: {
"content-type": "text/plain; charset=utf-8",
"cache-control": "public, max-age=86400",
},
}
);
}
// llms.txt
if (url.pathname === "/llms.txt") {
return new Response(LLMS_TXT, {
headers: {
"content-type": "text/markdown; charset=utf-8",
"cache-control": "public, max-age=86400",
},
});
}
// Bot detection → serve pre-rendered HTML from R2
const ua = request.headers.get("user-agent") || "";
if (isBot(ua) && !isAssetPath(url.pathname)) {
const key = pathToKey(url.pathname);
try {
const obj = await env.PRERENDER_BUCKET.get(key);
if (obj) {
return new Response(obj.body, {
headers: {
"content-type": "text/html; charset=utf-8",
"x-prerendered": "true",
"cache-control": "public, max-age=3600, s-maxage=3600",
},
});
}
} catch (e) {
console.error('R2 error for key "' + key + '":', e.message);
}
}
// Default: reverse-proxy to Lovable
return fetchFromOrigin(request);
},
};
Deploy it:
npx wrangler deploy
Don't worry that your site won't change yet. The worker is deployed but Lovable's custom domain claim still takes priority. We'll handle that in a later step.
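If you want to sanity-check the worker's helpers before going live, here are pathToKey and an abbreviated version of the bot regex, extracted from worker.js so you can run them in Node and see how URL paths map to R2 keys:

```javascript
// Same pathToKey logic as worker.js: map a request path to an R2 object key.
function pathToKey(pathname) {
  let p = pathname.split("?")[0].split("#")[0]; // drop query string and fragment
  p = p.replace(/\/+$/, "");                    // strip trailing slashes
  if (p === "") return "index.html";            // the homepage
  if (p.startsWith("/")) p = p.slice(1);
  return p + ".html";
}

// Abbreviated bot regex for the demo — the real worker matches many more UAs.
const LLM_UA = /GPTBot|ClaudeBot|PerplexityBot/i;

console.log(pathToKey("/"));            // "index.html"
console.log(pathToKey("/about/"));      // "about.html"
console.log(pathToKey("/pricing?x=1")); // "pricing.html"
console.log(LLM_UA.test("Mozilla/5.0 (compatible; GPTBot/1.2)")); // true
```

So a bot requesting /about/ gets served the R2 object about.html, and the homepage lives at index.html - worth knowing when you inspect the bucket later.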
Step 3: Deploy the renderer worker
The renderer is what actually generates the HTML snapshots. It spins up a headless browser, visits each of your pages, waits for the JavaScript to finish rendering, captures the fully-rendered HTML, and writes it to R2. It also generates a sitemap.
Create a separate directory:
cd ..
mkdir -p your-domain-prerender-renderer
cd your-domain-prerender-renderer
Create package.json:
{
"name": "your-domain-prerender-renderer",
"private": true,
"devDependencies": {
"wrangler": "^4",
"@cloudflare/puppeteer": "^1.0.6"
}
}
Create wrangler.toml:
name = "your-domain-prerender-renderer"
main = "src/index.js"
compatibility_date = "2025-01-01"
compatibility_flags = ["nodejs_compat_v2"]
workers_dev = true
[browser]
binding = "MYBROWSER"
[[r2_buckets]]
binding = "PRERENDER_BUCKET"
bucket_name = "your-domain-prerender"
[triggers]
crons = ["15 3 * * *"]
That cron schedule means the renderer runs daily at 3:15 AM UTC. Adjust if you like, but daily is the sweet spot - frequent enough to keep snapshots fresh (which A.I. search engines care about), infrequent enough that you won't burn through your Browser Rendering quota.
Create src/index.js. Replace yourdomain.com with your actual domain. The renderer will automatically discover all the pages on your site by crawling internal links from the homepage, so you don't need to maintain a list of routes manually. When you add new pages in Lovable, they'll get picked up on the next render cycle:
import puppeteer from "@cloudflare/puppeteer";
const DOMAIN = "yourdomain.com";
const ORIGIN = `https://${DOMAIN}`;
const MIN_TEXT_LEN = 300;
const MAX_WAIT_MS = 25000;
// File extensions and paths to ignore when discovering routes
const ASSET_PATTERN =
/\.(png|jpg|jpeg|webp|gif|svg|ico|css|js|map|woff2?|ttf|eot|json|xml|pdf|zip)$/i;
const IGNORE_PREFIXES = ["/assets/", "/~"];
function isPageLink(href) {
try {
const url = new URL(href, ORIGIN);
if (url.origin !== ORIGIN) return false;
if (ASSET_PATTERN.test(url.pathname)) return false;
if (IGNORE_PREFIXES.some((p) => url.pathname.startsWith(p))) return false;
if (url.pathname === "/favicon.ico") return false;
return true;
} catch {
return false;
}
}
function pathToKey(pathname) {
let p = pathname.split("?")[0].split("#")[0].replace(/\/+$/, "");
if (p === "") return "index.html";
if (p.startsWith("/")) p = p.slice(1);
return p + ".html";
}
function requireToken(url, env) {
const token = url.searchParams.get("token");
return token && env.PRERENDER_TOKEN && token === env.PRERENDER_TOKEN;
}
// Discover all internal page links by crawling from the homepage
async function discoverRoutes(page) {
console.log("Discovering routes from homepage...");
await page.goto(ORIGIN + "/", { waitUntil: "domcontentloaded", timeout: 30000 });
try {
await page.waitForFunction(
(minLen) => (document.body?.innerText?.trim()?.length ?? 0) > minLen,
{ timeout: MAX_WAIT_MS },
MIN_TEXT_LEN
);
} catch {
console.warn("Homepage hydration timed out, discovering from current state.");
}
await new Promise((r) => setTimeout(r, 1500));
const hrefs = await page.evaluate(() =>
Array.from(document.querySelectorAll("a[href]"), (a) => a.href)
);
const routes = new Set(["/"]);
for (const href of hrefs) {
if (isPageLink(href)) {
const url = new URL(href, ORIGIN);
routes.add(url.pathname.replace(/\/+$/, "") || "/");
}
}
console.log(`Discovered ${routes.size} routes: ${[...routes].join(", ")}`);
return [...routes];
}
async function renderOnce(page, pathname, bucket) {
const url = ORIGIN + pathname;
const key = pathToKey(pathname);
console.log(`Rendering: ${url} → ${key}`);
await page.goto(url, { waitUntil: "domcontentloaded", timeout: 30000 });
try {
await page.waitForFunction(
(minLen) => (document.body?.innerText?.trim()?.length ?? 0) > minLen,
{ timeout: MAX_WAIT_MS },
MIN_TEXT_LEN
);
} catch {
console.warn(`Hydration wait timed out for ${url}. Capturing current state.`);
}
await new Promise((r) => setTimeout(r, 1500));
const html = await page.content();
const textLen = await page.evaluate(
() => document.body?.innerText?.trim()?.length ?? 0
);
if (textLen < 50) {
console.error(
`SKIP ${key}: page body is only ${textLen} chars — probably failed to render.`
);
return { key, status: "skipped", textLen };
}
await bucket.put(key, html, {
httpMetadata: { contentType: "text/html; charset=utf-8" },
customMetadata: { renderedAt: new Date().toISOString(), textLen: String(textLen) },
});
console.log(`Saved ${key} (${textLen} chars)`);
return { key, pathname, status: "ok", textLen };
}
async function generateSitemap(routes, bucket) {
const today = new Date().toISOString().split("T")[0];
const urls = routes
.map((pathname) => {
const priority = pathname === "/" ? "1.0" : "0.8";
return `  <url>\n    <loc>${ORIGIN}${pathname}</loc>\n    <lastmod>${today}</lastmod>\n    <changefreq>weekly</changefreq>\n    <priority>${priority}</priority>\n  </url>`;
})
.join("\n");
const xml = `<?xml version="1.0" encoding="UTF-8"?>\n<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>\n`;
await bucket.put("sitemap.xml", xml, {
httpMetadata: { contentType: "application/xml; charset=utf-8" },
customMetadata: { generatedAt: new Date().toISOString() },
});
console.log(`Sitemap written with ${routes.length} URLs`);
return xml;
}
async function renderAll(env) {
const results = [];
let browser;
try {
browser = await puppeteer.launch(env.MYBROWSER);
const page = await browser.newPage();
await page.setUserAgent(
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36"
);
await page.setViewport({ width: 1280, height: 800 });
// Discover routes automatically from internal links
const routes = await discoverRoutes(page);
for (const pathname of routes) {
try {
const result = await renderOnce(page, pathname, env.PRERENDER_BUCKET);
results.push(result);
} catch (e) {
console.error(`ERROR rendering ${pathname}:`, e.message);
results.push({ key: pathToKey(pathname), status: "error", error: e.message });
}
}
await page.close();
await generateSitemap(routes, env.PRERENDER_BUCKET);
} catch (e) {
console.error("Browser-level error:", e.message);
results.push({ key: "_browser", status: "error", error: e.message });
} finally {
if (browser) {
try { await browser.close(); } catch {}
}
}
return results;
}
export default {
async scheduled(controller, env, ctx) {
ctx.waitUntil(renderAll(env));
},
async fetch(request, env) {
const url = new URL(request.url);
if (url.pathname === "/__prerender") {
if (!requireToken(url, env)) {
return new Response("Forbidden", { status: 403 });
}
const results = await renderAll(env);
return new Response(JSON.stringify(results, null, 2), {
headers: { "content-type": "application/json" },
});
}
if (url.pathname === "/health") {
return new Response("ok");
}
return new Response("Not found", { status: 404 });
},
};
Install dependencies and deploy:
npm install
npx wrangler deploy
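If you're curious which links the crawler will actually follow, here is the renderer's isPageLink filter extracted into a standalone Node script (the domain is a placeholder - substitute your own):

```javascript
// The renderer's route filter, standalone, so you can check which hrefs it keeps.
const ORIGIN = "https://yourdomain.com"; // placeholder — use your apex domain
const ASSET_PATTERN =
  /\.(png|jpg|jpeg|webp|gif|svg|ico|css|js|map|woff2?|ttf|eot|json|xml|pdf|zip)$/i;
const IGNORE_PREFIXES = ["/assets/", "/~"];

function isPageLink(href) {
  try {
    const url = new URL(href, ORIGIN);
    if (url.origin !== ORIGIN) return false;            // external links
    if (ASSET_PATTERN.test(url.pathname)) return false; // static assets
    if (IGNORE_PREFIXES.some((p) => url.pathname.startsWith(p))) return false;
    if (url.pathname === "/favicon.ico") return false;
    return true;
  } catch {
    return false; // unparseable href
  }
}

console.log(isPageLink("/about"));                  // true — internal page
console.log(isPageLink("https://twitter.com/you")); // false — external
console.log(isPageLink("/assets/logo.png"));        // false — asset
```

Relative links resolve against ORIGIN, so nav links like /pricing count as internal pages while social links and images get skipped.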
Step 4: Set the pre-render token
The renderer has a manual trigger endpoint at /__prerender, and you don't want random people hitting it. Generate a secret token and store it as a Workers secret:
openssl rand -base64 32
Copy that output. Then:
npx wrangler secret put PRERENDER_TOKEN --name your-domain-prerender-renderer
Paste the token when prompted. Keep a copy somewhere safe - you'll need it to manually trigger renders.
Step 5: Verify your DNS
Both your apex domain and www need to be proxied through Cloudflare (the orange cloud icon in the DNS settings). If they're DNS-only (grey cloud), the Workers routes won't intercept traffic.
You need at minimum:
Apex domain - an A record pointing to any IP (e.g., 192.0.2.1) with proxying enabled. The IP doesn't matter because the Worker intercepts the request before it ever reaches an origin server.
www - a CNAME pointing to your apex domain, also proxied (or another A record to the same IP).
Check this in the Cloudflare dashboard under your zone's DNS settings. Both records must show the orange cloud.
Step 6: Check Cloudflare's A.I. bot settings
This one catches people out. Cloudflare has a setting buried in Security → Bots that can block A.I. crawlers at the edge, before they ever reach your Worker. If this is switched on, your beautiful pre-rendered HTML and carefully crafted robots.txt are completely irrelevant because the bots never get that far.
Go to your zone in the Cloudflare dashboard, navigate to Security → Bots, and look for anything that says "Block AI Bots" or "AI Crawlers." Make sure it's off. You want the crawlers reaching your Worker, where they'll get the pre-rendered content.
Step 7: Prime the cache
Time for the first render. Trigger it manually:
curl "https://your-domain-prerender-renderer.YOUR-SUBDOMAIN.workers.dev/__prerender?token=YOUR_TOKEN"
Replace YOUR-SUBDOMAIN with your Workers subdomain and YOUR_TOKEN with the secret from Step 4.
This request might time out. Workers have a ~30 second HTTP response limit, and rendering multiple pages with a headless browser can take longer than that. Don't panic - the rendering continues in the background even after the HTTP connection drops. You can watch it happen in real time:
npx wrangler tail your-domain-prerender-renderer --format pretty
You should see log lines for each page being rendered and saved. Once it finishes, verify that HTML actually landed in R2:
npx wrangler r2 object get your-domain-prerender/index.html --remote --pipe | head -20
If you see real HTML with your page content in it - not just an empty shell with <div id="root"></div> - the renderer is working.
Step 8: The big switch - removing Lovable's custom domain
This is the step that makes everything live, and the order here matters enormously. Deploy both workers first, prime the cache, confirm everything is working on the workers.dev URLs, and only then do this step. If you remove the custom domain from Lovable before your workers are ready, your site goes down.
Here's why this step is necessary at all: Lovable serves your custom domain through Cloudflare for SaaS - a feature called Custom Hostnames, designed for platforms that put thousands of customer sites on their own domains. That hostname claim always takes priority over zone-level Worker routes: both systems run on Cloudflare's network, and the SaaS claim wins every time. Your Worker is deployed and configured correctly; Lovable's system just intercepts the traffic before it ever gets there.
Lovable has an "Allow traffic through a CDN or proxy" option in their domain settings, but it doesn't help when both sides are on Cloudflare. That option is designed for non-Cloudflare CDNs. With Cloudflare-to-Cloudflare routing (what Cloudflare calls "O2O"), the SaaS claim still intercepts.
The fix is simple but irreversible in the short term: remove your custom domains from Lovable entirely.
Go to your Lovable project settings → Domains.
Remove all custom domains (both apex and www).
Keep the default your-slug.lovable.app - that's what your edge worker will proxy to.
Once you remove the domains, your edge worker takes over immediately. Human visitors get proxied through to Lovable's staging URL (they'll never see it - the worker handles the routing transparently). Bots get the pre-rendered HTML from R2.
Step 9: Verify everything works
Run through these checks. Every one of them should pass:
Check the edge worker is active:
curl -sI https://yourdomain.com/ | grep x-edge-worker
Expected: x-edge-worker: active
Check that bots get pre-rendered HTML:
curl -sI -H "User-Agent: Googlebot" https://yourdomain.com/ | grep x-prerendered
Expected: x-prerendered: true
Check the content is real, not an empty shell:
curl -s -H "User-Agent: Googlebot" https://yourdomain.com/ | grep -c "div id=\"root\""
If the pre-rendering is working, the HTML will contain your actual page content. Note that the root div is still present in the pre-rendered version - just with content inside it rather than empty - so the count alone proves little. Pipe the output to head -50 instead and visually confirm there's real text in there.
Check robots.txt allows A.I. crawlers:
curl -s https://yourdomain.com/robots.txt | head -20
You should see explicit Allow: / entries for GPTBot, ClaudeBot, PerplexityBot, and others.
Check llms.txt returns your site summary:
curl -s https://yourdomain.com/llms.txt
Check www redirects to apex:
curl -sI https://www.yourdomain.com/ | grep -i location
Expected: a 301 redirect to your apex domain.
Check sitemap:
curl -s https://yourdomain.com/sitemap.xml | head -10
You should see a valid XML sitemap with your page URLs and today's date as the lastmod.
Step 10: Submit to Google Search Console
The pre-rendering is live, but Google won't know about it until you tell it. If you haven't already, add your domain as a property in Google Search Console. Verify ownership via a DNS TXT record (the easiest method when you're already managing DNS in Cloudflare), then submit your sitemap URL: https://yourdomain.com/sitemap.xml.
Use the URL Inspection tool to fetch your homepage as Google would see it. You should see the fully rendered HTML, complete with all your content, meta tags, and structured data. If you still see an empty page, something in the chain is broken - go back through the verification steps above.
Adding new pages later
When you add new pages in Lovable, the renderer will pick them up automatically on its next daily run - as long as the new pages are linked from an existing page (which they almost certainly will be, since your site's navigation will include them). The sitemap regenerates from whatever the renderer discovers, so that stays in sync too.
The one thing you will need to update manually is the LLMS_TXT string in the edge worker's worker.js, if you want the new pages listed in your llms.txt file. That's a quick edit and a redeploy. If you don't bother, the pages still get pre-rendered and indexed - llms.txt is a nice-to-have, not a requirement.
A note on A.I. search visibility
Getting the pre-rendering in place removes the technical barrier - A.I. crawlers can now actually see your content. But whether they choose to cite you in their answers is a separate question. A.I. search engines typically cite somewhere between two and seven sources per response, and the competition for those citation slots is fierce.
A few things that move the needle, based on what we've seen working with clients: content freshness matters enormously. Pages updated within the last 30 days get cited at roughly three times the rate of stale content. The daily cron renderer updates your sitemap's lastmod dates, but the actual page content needs to change too - A.I. engines can tell the difference between a timestamp change and a genuine update.
Fact density helps. Specific numbers, named sources, and concrete data points give A.I. systems something to anchor a citation to. Vague marketing copy doesn't get cited.
FAQ schema markup on your pages - the FAQPage JSON-LD structured data - has a measurable impact. We've seen something in the region of a 25-30% increase in A.I. citations on pages that include it. High return for relatively little effort.
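If you want to add that markup, here's a minimal sketch of a FAQPage block - the question and answer text are placeholders to swap for your own. The serialised JSON goes inside a script tag of type application/ld+json in your page's head; how you inject it depends on your setup.

```javascript
// Minimal FAQPage structured data, built as a plain object and serialised.
// Question/answer content below is placeholder text — replace with your own.
const faqSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "What does your product do?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "A one-or-two sentence answer with specific facts in it.",
      },
    },
  ],
};

const jsonLd = JSON.stringify(faqSchema, null, 2);
console.log(`<script type="application/ld+json">${jsonLd}</script>`);
```

Keep the answers factual and specific - the same fact-density principle from above applies inside the schema, not just in the visible copy.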
And the llms.txt file, which the edge worker already serves, gives A.I. systems a cheat sheet for understanding your site without crawling every page. The spec is still early days, but there's no downside to including it.
When things go wrong
A few of the most common issues and what causes them:
No x-edge-worker header on responses. The Lovable custom domain is still active (go back to Step 8), or your DNS records aren't proxied (Step 5), or - the sneaky one - your routes are in the wrong position in wrangler.toml (after the [[r2_buckets]] block instead of before it).
Site breaks after removing from Lovable. The edge worker wasn't deployed yet, or it deployed with an error. Always deploy and verify the worker before removing the custom domain from Lovable. If you've already pulled the trigger and the site is down, check npx wrangler tail your-domain-prerender-edge --format pretty for errors.
A.I. crawlers aren't getting pre-rendered content. Check Cloudflare's A.I. Crawl Control (Step 6). Check that the User-Agent regex in the edge worker actually matches the bot you're testing with. Check that R2 actually has the HTML files in it.
Pre-rendered HTML is stale or empty. Trigger a manual re-render and watch the renderer's logs with npx wrangler tail. If pages are rendering with very low character counts (under 50), the headless browser probably can't reach your site properly - check that the edge worker is proxying non-bot traffic to Lovable correctly.
This is a lot. What if I don't want to do it myself?
Look, I've tried to make this as clear and complete as I can, but I'm also aware that what you've just read involves two Cloudflare Workers, an R2 bucket, DNS configuration, headless browsers, cron schedules, and a TOML ordering gotcha that will silently break everything without telling you why. If you're a founder whose core skill is building the product rather than wrangling infrastructure, there is no shame whatsoever in deciding this isn't how you want to spend your afternoon.
We do this at Growthmode. The whole setup, the ongoing maintenance, the A.I. search optimisation on top of it. If you'd rather hand this off and get back to building your product, drop us a line and we'll get it sorted.
