The marketing team ships a new campaign image. The engineering team updates og:image. The deploy succeeds. Someone pastes the link into Slack to share the launch. The old preview appears.
Everyone checks the page source — the new tag is there. Someone reloads the page with cache disabled — the new image serves correctly. Someone opens the URL directly — the new image renders. Slack, LinkedIn, and Facebook all keep showing the old preview anyway.
Stale Open Graph previews are almost never a metadata-syntax problem. They are a cache-layering problem. The HTML is correct, the image is correct, and the page response is correct — but the cache that actually determines what the user sees on the platform has not refreshed yet. This guide is a forensic walkthrough of why that happens and how to fix it on each platform.
TL;DR
- Stale social previews are usually caching, not broken metadata.
- Multiple cache layers can hold an old preview: the browser, your CDN, your origin, and the social platform’s scraper cache.
- LinkedIn, Slack, and Facebook each cache independently and for different durations; fixing one does not fix the others.
- Changing the HTML alone may not be enough if the og:image URL is unchanged — URL versioning is usually the fastest deterministic fix.
- Also confirm the image is reachable, absolute, HTTPS, in a safe format (JPEG or PNG), and at a size platforms expect (1200×630 is the broad default).
Before you guess which cache is stale, check what your page is actually broadcasting right now with the CodeAva Open Graph & Social Preview Inspector. It fetches the live HTML server-side, extracts every Open Graph and Twitter/X tag, probes the actual og:image URL for content type, dimensions, and file size, and renders platform-style simulations for Facebook, LinkedIn, Slack, and iMessage. That is your baseline — what a scraper would see on a fresh fetch.
The real problem: multiple cache layers, not one
Developers often assume the platform reads the live page every time someone shares a link. It does not. The preview a user sees is shaped by a stack of caches:
- Browser cache. Only affects the developer testing the page locally. No impact on what other users see on social.
- CDN / edge cache. Your CDN may serve a stale HTML response or a stale image response to a social scraper. If the CDN holds the old bytes, the new bytes never reach the platform — even though the origin has them.
- Origin cache. Reverse proxies, framework output caching, and ISR-style page caches can hold an old version of the page’s HTML long after the source has been updated.
- Platform scraper cache. LinkedIn, Slack, Facebook, and Twitter/X each maintain their own cache keyed by URL. They do not re-fetch on every share. They re-fetch when their cache expires or when a debugger workflow forces a refresh.
Fixing one layer does not purge the others. A perfectly updated HTML response at the origin is invisible if the CDN still serves the old HTML to LinkedIn’s scraper, and even a perfect scrape is invisible if the cached preview was attached to a post before the update. Debugging stale previews is fundamentally about finding which cache is holding the old version and invalidating that specific layer.
Platform cache behavior: LinkedIn, Slack, and Facebook are not the same
Each major platform handles metadata caching differently. Treating them as interchangeable is one of the biggest reasons teams waste time debugging the wrong layer.
LinkedIn
LinkedIn provides the Post Inspector (linkedin.com/post-inspector) for fetching a URL and inspecting the metadata it read. Running the URL through Post Inspector triggers a fresh scrape and updates LinkedIn’s cached preview for that URL.
The important caveat: LinkedIn has documented that refreshed previews apply to new shares of the URL, not to posts that already used the old preview. If your CEO shared the link yesterday, that post’s preview is baked in and a re-scrape will not update it. The fix is deterministic for new posts, not retroactive for old ones.
Slack
Slack caches link-expansion metadata at the workspace-network level for roughly 30 minutes (Slack’s documented behavior). When a link is re-shared inside that window, Slack serves the cached unfurl. After the cache expires, the next share triggers a fresh fetch.
Immediate workarounds:
- Edit the message containing the link so Slack re-processes the unfurl in that specific message.
- Delete and re-paste the link after the cache window has expired.
- Append a cache-busting query parameter (e.g. ?v=2) so Slack treats it as a new URL.
Slack does not offer a public debugger analogous to LinkedIn’s or Facebook’s for forcing a global re-scrape. The cache simply expires.
Facebook / Meta
Meta operates the Sharing Debugger (developers.facebook.com/tools/debug). Paste a URL, click Debug to see the currently cached OG data Facebook is holding, then click Scrape Again to force a fresh fetch. Subsequent shares of that exact URL on Facebook reflect the updated preview.
As with LinkedIn, the re-scrape updates the cached preview for future shares of the URL. Posts that already used the old preview are not updated retroactively.
Twitter / X and others
Twitter/X uses its own metadata cache keyed by URL. It previously offered a dedicated card validator; current access to that tool varies. The practical fallback is the same as Slack: append a cache-busting parameter to produce a new URL, or wait for Twitter/X’s cache to expire. iMessage and WhatsApp previews are generated on-device from the first fetch and are effectively cached for the lifetime of the message bubble.
The most dangerous misconception
The single most dangerous misconception is that platforms read the live page every time someone shares a link. They do not; each platform serves from its own cache, which is why every fix in the rest of this guide targets a specific cache layer.
Pillar 1: verify the raw metadata first
Before blaming any cache layer, confirm the live page actually exposes the correct tags in the server-rendered HTML — not just the hydrated DOM that appears after JavaScript runs. Social scrapers generally do not execute JavaScript, and those that do are unreliable about it.
At minimum, verify the following tags are present in the response body for an unauthenticated request:
- `<meta property="og:image" content="…">` — absolute HTTPS URL to the new image.
- `<meta property="og:title" content="…">`
- `<meta property="og:description" content="…">`
- `<meta property="og:url" content="…">` — should match the canonical URL for the page.
- `<meta property="og:type" content="website|article|…">`
- Optional but useful for large images: `og:image:width` and `og:image:height`.
- `<meta name="twitter:card" content="summary_large_image">` for the large-image Twitter/X card variant.
The fastest way to see exactly what a scraper sees is a server-side fetch — either `curl -sL "https://example.com/page" | grep -E 'og:|twitter:'`, or the CodeAva Open Graph Inspector, which fetches server-side and shows the extracted tag set with image probe and preview simulations in one view. If the tags are wrong or missing there, no amount of platform-cache work will help — fix the HTML first.
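The same extraction can be scripted for CI or pre-publish checks; a minimal sketch using only Python's standard library (the sample HTML is illustrative, not a real page):

```python
from html.parser import HTMLParser

class OGTagParser(HTMLParser):
    """Collects Open Graph and Twitter/X <meta> tags the way a
    non-JS scraper would: from the raw server-rendered HTML only."""
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("property") or a.get("name") or ""
        if key.startswith(("og:", "twitter:")):
            self.tags[key] = a.get("content", "")

def extract_social_tags(html: str) -> dict:
    parser = OGTagParser()
    parser.feed(html)
    return parser.tags

# Illustrative response body, standing in for a real server-side fetch.
html = '''<head>
<meta property="og:image" content="https://cdn.example.com/hero.jpg?v=2">
<meta property="og:title" content="Launch">
<meta name="twitter:card" content="summary_large_image">
</head>'''
tags = extract_social_tags(html)
```

Because `html.parser` never executes scripts, a tag that only appears in `tags` after client-side hydration simply will not show up here, which is exactly the failure you are checking for.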
If your pages are rendered by a framework that may hydrate metadata on the client, this is also the moment to confirm the server-rendered HTML already contains the final OG tags. For the broader metadata-delivery story on JavaScript-first and headless stacks, see Canonical Tags for JavaScript & Headless Websites — the same render-order rules apply to Open Graph tags.
Pillar 2: the OG image URL itself may be the problem
Even when the HTML tag is correct, the image URL it references can sabotage the preview. Common failure modes:
- The URL did not change but the image bytes at that URL did. The platform’s cached copy of the previous bytes is still valid as far as the platform is concerned.
- The CDN serves an old cached response to the platform scraper, even though the origin has the new bytes.
- The image URL redirects through one or more hops. Some scrapers handle redirects gracefully; others time out, follow the wrong hop, or silently fall back to no image.
- The image is slow or intermittent. Social scrapers use tight timeouts. A 5-second response may succeed for your browser and fail for a scraper.
- The image is blocked by a WAF, bot-management rule, or authentication layer that accepts your browser session but rejects the anonymous scraper.
- The image format is exotic. Modern formats like AVIF are not universally supported by social scrapers. Stick to JPEG or PNG for OG images unless you have confirmed support on every platform you target.
- The image is too small or wrong shape. Many platforms silently reject images below a minimum size or produce an unflattering crop from an oddly-proportioned source.
- The URL is relative. og:image must be an absolute HTTPS URL. A relative /images/hero.jpg will fail on every platform that does not guess the base URL correctly (which is most of them).
Always probe the image the way a scraper does: an anonymous HEAD or GET against the absolute URL, following redirects, from a network outside your office VPN. If the probe reveals a slow response, a redirect chain, or a wrong content type, fix that before worrying about platform caches.
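The raw probe numbers can be gathered with `curl -o /dev/null -sL -w '%{http_code} %{content_type} %{time_total} %{num_redirects}' <url>`; a small, illustrative Python helper can then turn them into verdicts (the thresholds below are assumptions in the spirit of the failure modes above, not documented platform limits):

```python
def evaluate_probe(status: int, content_type: str,
                   elapsed_s: float, redirect_hops: int) -> list:
    """Flag scraper-hostile probe results. Thresholds are
    illustrative rules of thumb, not platform specifications."""
    issues = []
    if status != 200:
        issues.append(f"non-200 status: {status}")
    # Strip any "; charset=..." suffix before comparing.
    if content_type.split(";")[0].strip() not in ("image/jpeg", "image/png"):
        issues.append(f"risky content type: {content_type}")
    if elapsed_s > 3.0:   # scrapers use tight timeouts
        issues.append(f"slow response: {elapsed_s:.1f}s")
    if redirect_hops > 1:  # some scrapers mishandle redirect chains
        issues.append(f"redirect chain: {redirect_hops} hops")
    return issues
```

Run the curl probe from outside your office network, feed the four values in, and treat any returned issue as a blocker before touching platform caches.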
The most reliable fix: URL versioning and cache busting
When an og:image URL has not changed but the underlying image has, cache-busting is almost always faster and more deterministic than chasing individual platform caches. Renaming the file or adding a version parameter produces a new URL, which social scrapers generally treat as a new resource.
Before — same URL, ambiguous cache state:

```html
<!-- Before: image bytes changed, URL did not.
     Platforms may keep serving the old cached preview. -->
<meta property="og:image" content="https://cdn.example.com/hero.jpg">
```

After — new URL, unambiguous:

```html
<!-- Option A: version query parameter -->
<meta property="og:image" content="https://cdn.example.com/hero.jpg?v=2">

<!-- Option B: content-hash or campaign tag in the filename -->
<meta property="og:image" content="https://cdn.example.com/hero-2026-launch.jpg">

<!-- Option C: content-hash path segment -->
<meta property="og:image" content="https://cdn.example.com/images/9f2a1c/hero.jpg">
```

Operational rules:
- Pick a single versioning strategy and stick with it. Content hashes are the cleanest for build pipelines; campaign tags are the cleanest for human authors.
- Do not churn versions on every deploy. If the image has not actually changed, do not change its URL — that invalidates caches unnecessarily and risks previews going dark during the re-scrape window.
- Combine versioning with platform debuggers where available. URL versioning gets the new URL into caches; LinkedIn Post Inspector and Facebook Sharing Debugger let you proactively populate their caches with the new preview before a real user shares the link.
- Keep the old URL alive for a while. Existing posts that already cached the old preview will continue to reference it. Serving 404 for the old URL after a rename can produce broken thumbnails on historical posts.
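Option A (the version query parameter) is easy to automate in a build step; a minimal sketch (the helper name is ours, and it is deliberately idempotent so repeated runs do not stack `v` parameters):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def version_og_image(url: str, version: str) -> str:
    """Return the URL with a v=<version> query parameter,
    replacing any existing v so reruns stay idempotent."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "v"]
    query.append(("v", version))
    return urlunsplit(parts._replace(query=urlencode(query)))
```

Feed it a content hash or campaign tag as the version; the URL only changes when you deliberately change that input, which satisfies the "do not churn versions on every deploy" rule above.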
Platform debugger workflow
Once the HTML and the image URL are correct, run each relevant platform’s debugger in the order that matches where your audience actually shares. A practical sequence:
- Confirm the live page with a server-side fetch (`curl`, the CodeAva Inspector, or both). Verify all OG tags are present, the `og:image` URL is absolute HTTPS, and the image probe returns 200 with the expected content type.
- Facebook / Meta. Open the Sharing Debugger, paste the URL, review the cached OG data, click Scrape Again. Confirm the fetched image URL, title, and description match what you just published.
- LinkedIn. Open the Post Inspector, paste the URL, run the inspection. Confirm the fetched preview matches the new metadata. Remember that past LinkedIn posts that already attached the old preview will not update.
- Slack. If a user in your workspace shared the link recently, edit the message (or delete and re-paste after the cache window). For new shares, either wait for the cache to expire or append a version parameter to the URL.
- Twitter / X. If you have access to Twitter/X’s validator, use it. Otherwise, version the URL and re-share.
- Spot-check in the actual apps. Do this from an account that has not recently shared the link — your own local cache can produce misleading results.
Image requirements and delivery pitfalls
A surprising portion of “stale preview” tickets are actually “broken preview” tickets. The platform did re-scrape, but the new image failed the platform’s validation and the platform fell back to a cached or default preview. Use broadly compatible defaults:
- Dimensions: 1200×630 pixels (1.91:1 aspect ratio). This is the default the major platforms expect. Going larger is fine; going much smaller frequently produces a smaller preview card or no image at all.
- File size: keep it under a few megabytes — under 5 MB is a safe rule of thumb. Very large files increase scraper-timeout risk.
- Format: JPEG or PNG. Both are universally supported. Modern formats (WebP, AVIF) are not uniformly supported across every social scraper; use them only when you have confirmed support for every target platform.
- Transport: HTTPS only. Many scrapers reject or downgrade insecure image URLs.
- URL shape: absolute URL, no authentication, no geofencing, no redirect chains. Scrapers fetch anonymously from unpredictable IP ranges.
- CDN cache headers: serve the image with a long, explicit `Cache-Control` (such as `public, max-age=31536000, immutable`) when using content-hashed filenames. With version parameters, ensure the CDN varies on the query string or uses a cache key that includes it.
- Hotlinking / CORS: some origins block cross-origin image requests or specific user-agents used by scrapers. If you have WAF or bot-management rules, make sure the major social scraper user-agents (e.g. `facebookexternalhit`, `LinkedInBot`, `Slackbot-LinkExpanding`, `Twitterbot`) are explicitly allowed.
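These defaults can be encoded as a pre-publish check; a sketch assuming you have already probed the image's URL, content type, pixel dimensions, and byte size (the limits are this article's rules of thumb, not hard platform specs):

```python
from urllib.parse import urlsplit

SAFE_TYPES = {"image/jpeg", "image/png"}  # universally supported formats

def check_og_image(url: str, content_type: str,
                   width: int, height: int, size_bytes: int) -> list:
    """Check an OG image against broadly-safe defaults:
    absolute HTTPS, JPEG/PNG, >= 1200x630, < 5 MB."""
    problems = []
    if urlsplit(url).scheme != "https":
        problems.append("not an absolute HTTPS URL")
    if content_type not in SAFE_TYPES:
        problems.append(f"format {content_type} not universally supported")
    if width < 1200 or height < 630:
        problems.append(f"{width}x{height} below the 1200x630 default")
    if size_bytes > 5 * 1024 * 1024:
        problems.append("over the 5 MB rule of thumb")
    return problems
```

An empty result does not guarantee every platform accepts the image, but any non-empty result is worth fixing before blaming a cache.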
The post-update checklist
Work through these in order whenever you change an og:image or any social metadata. Each step assumes the previous one is clean.
- Confirm the updated tags are in the server-rendered HTML via a non-browser fetch.
- Confirm the `og:image` URL is absolute, HTTPS, and returns 200 with a correct content type on an anonymous fetch.
- Probe the image dimensions and file size against the 1200×630 / <5 MB defaults.
- Decide whether to version the image URL. If the URL did not change but the image did, append a version parameter or rename the asset.
- Purge the CDN cache for both the HTML page and, if the image URL is unchanged, the image asset itself.
- Run Facebook Sharing Debugger → Scrape Again.
- Run LinkedIn Post Inspector.
- Force Slack to re-unfurl by editing the message, re-pasting after the cache window, or versioning the URL.
- Spot-check the preview on each target platform from an account that has not shared the link recently.
- Document the version in your release notes so the next person does not re-litigate the same debugging session.
Pre-publish previews beat post-publish firefighting
Every workflow above gets cheaper when it runs before the campaign goes live. A quick server-side inspection of the tags and the image at publish time catches most of the failures below before any platform has a chance to cache them.
Common mistakes teams make
- Assuming the platform reads the page live on every share. It does not. Each platform maintains its own cache.
- Changing the image bytes without changing the URL and then wondering why caches keep serving the old preview.
- Only clearing the browser cache when debugging. That has zero effect on what other users see on social.
- Using a relative og:image URL. Scrapers require absolute HTTPS URLs.
- Shipping an AVIF or WebP og:image and assuming every platform supports it.
- Letting a WAF or bot-management rule block social scraper user-agents without explicit allow-list entries.
- Expecting platform re-scrapes to update posts that already used the old preview. They do not.
- Running the debugger from the same account that just shared the link and drawing conclusions from the cached local view.
- Changing the image URL on every deploy even when the image has not changed, which churns caches and risks intermittent missing previews.
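The WAF mistake above is easy to guard against with a user-agent allow-list check; a sketch using the scraper names mentioned earlier (real user-agent strings vary across platform versions, so treat the markers as illustrative):

```python
# Substring markers for known social link-preview bots. Exact UA
# strings vary, so this list is illustrative, not exhaustive.
SCRAPER_UA_MARKERS = (
    "facebookexternalhit",
    "LinkedInBot",
    "Slackbot-LinkExpanding",
    "Twitterbot",
)

def is_social_scraper(user_agent: str) -> bool:
    """True if the request looks like a known social preview bot,
    e.g. for carving out a WAF or bot-management allow-list rule."""
    ua = user_agent.lower()
    return any(marker.lower() in ua for marker in SCRAPER_UA_MARKERS)
```

In practice the same allow-list lives in your WAF or bot-management console rather than application code; the point is that each scraper needs an explicit carve-out.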
Per-platform cheat sheet
Use this as a reviewer’s reference during a stale-preview incident. The left column tells you where to look; the right column tells you what you cannot fix.
| Platform | Cache behavior | Refresh path | Important caveat |
|---|---|---|---|
| LinkedIn | URL-keyed metadata cache; no universal TTL documented. | Post Inspector re-fetches the page and updates the cached preview. | Refreshed preview applies to new posts; existing posts that used the old preview are not updated. |
| Slack | Workspace-level cache, documented as roughly 30 minutes. | Edit the message to re-unfurl; or re-share after the cache window; or append a version parameter. | No public global debugger; cache expires passively. |
| Facebook / Meta | URL-keyed OG cache populated by facebookexternalhit. | Sharing Debugger → Scrape Again forces a fresh fetch and updates the cached preview. | Existing posts are not updated retroactively. |
| Twitter / X | URL-keyed card cache. | Use the platform card validator if accessible; otherwise version the URL and re-share. | Validator availability has varied over time. |
| iMessage / WhatsApp | Preview generated on the recipient device when the message is first delivered. | Send a fresh message containing the link after the change. | Previously delivered bubbles keep their original preview indefinitely. |
| Your CDN / origin | HTML response and image response caches sit between the origin and every platform scraper. | Explicit purge or soft-invalidate the HTML and, if unchanged, the image URL. | A stale CDN is invisible to every platform debugger — they will happily re-cache the stale response. |
Inspect what your page is actually broadcasting
Every debugging workflow in this guide starts from the same question: what does a fresh server-side fetch of the page return right now? That answer is the ground truth against which every platform’s cache is measured.
The CodeAva Open Graph & Social Preview Inspector fetches any public URL server-side, extracts every Open Graph and Twitter/X tag, reads the resolved og:image URL and probes its content type, pixel dimensions, aspect ratio, and file size, and renders platform-style simulations for Facebook, LinkedIn, Slack/Discord, and iMessage/WhatsApp. It does not and cannot re-scrape those platforms’ own caches — no third-party tool can — but it gives you the authoritative “what a scraper would see on a fresh fetch” view that every platform debugger works from.
For a broader technical-hygiene snapshot of the same page, the CodeAva Website Audit covers HTTP status, on-page metadata, Open Graph and Twitter Card presence, security headers, robots.txt and sitemap reachability in one pass. Use it alongside the OG Inspector during pre-publish reviews.
Stale previews are a cache problem, not a mystery
Open Graph debugging only feels difficult when you treat the preview as a single output of a single system. Once you map the cache layers — browser, CDN, origin, platform scraper — and which one each fix actually touches, the workflow becomes mechanical. Confirm the HTML is right. Confirm the image is reachable. Version the URL when the bytes changed but the URL did not. Hit each relevant platform’s debugger. Spot-check in the apps. Move on.
The teams that ship the fewest “my preview is wrong” tickets are the ones that treat Open Graph as a pre-publish checkpoint, not a post-publish incident. A 30-second inspection before the campaign goes live saves hours of cache-chasing after it.
When you are ready to audit a specific page, run it through the CodeAva Open Graph & Social Preview Inspector for the definitive view of what scrapers see, and pair it with the CodeAva Website Audit for broader metadata hygiene. For the adjacent story on why the title or description that search engines display can differ from the one you set, see Why Google Rewrites Your Title Tags and Meta Descriptions — the rewrite logic is different from the caching problem covered here, but the audit habits overlap heavily.