Executive Summary
This case study details how we optimized a high-performance static site built with Astro and headless WordPress. Facing build times exceeding 29 minutes due to an “N+1” data-fetching bottleneck, we engineered a custom caching layer that cut external API requests by over 99%. The result was a stable, lightning-fast build completing in about 3 minutes, a roughly 9.5x improvement.
1. The Challenge: The “N+1” Bottleneck
Our architecture relies on Astro for the frontend and WordPress as a headless CMS. While this offers great content management, the standard method of fetching data during a Static Site Generation (SSG) build introduced a critical flaw.
For every single page generated, the build process was making unique network requests:
- Blog Post Pages: 1 request for the post content + 1 request for related posts.
- Case Studies & Guides: Similar redundant fetching patterns.
- Pagination: Repeatedly hitting the API for page counts and slicing.
The Math of Inefficiency:
With a library of ~100 posts across various types, our build server was firing hundreds of separate HTTP requests to the WordPress backend.
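The back-of-envelope math looks roughly like this (the counts below are illustrative estimates, not exact telemetry, and grow further once archive pages re-fetch the same lists):

```typescript
// Back-of-envelope request math for the naive build (illustrative numbers).
const postCount = 100;         // ~100 posts across the various types
const requestsPerPost = 2;     // 1 for the post content + 1 for related posts
const paginationRequests = 20; // page-count and slicing queries, rough estimate

// Every one of these fires over the network, on every single build.
const totalRequests = postCount * requestsPerPost + paginationRequests; // 220
```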
- Consequence 1: The WordPress server (on shared hosting) would frequently time out or throttle connections (“ECONNRESET”).
- Consequence 2: Build times ballooned to 29 minutes and 7 seconds, making rapid deployment impossible.

2. The Solution: “Fetch Once, Reuse Everywhere”
We engineered a custom data layer in TypeScript (wordpress.ts) to implement a Singleton Promise Cache. The core philosophy was simple: Never ask the API for the same data twice.
A. Smart Promise Caching
Instead of caching the result (the JSON data), we cached the active Promise (the network request itself).
- Scenario: If 50 pages start building simultaneously and ask for “All Blog Posts,” they all check the cache.
- Outcome: The first page initiates the request. The other 49 pages see the pending request and “hook onto” it.
- Result: 1 network request serves 50 pages instantly.
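The mechanism can be sketched in isolation like this (a simplified, standalone version; names and data are illustrative, not the production `wordpress.ts`):

```typescript
// Hypothetical sketch: dedupe concurrent requests by caching the Promise itself.
const requestCache = new Map<string, Promise<string[]>>();
let networkCalls = 0;

function getAllPosts(key: string): Promise<string[]> {
  const pending = requestCache.get(key);
  if (pending) return pending; // reuse the in-flight (or settled) request

  const request = (async () => {
    networkCalls++; // stands in for the real HTTP round trip
    return ['post-a', 'post-b'];
  })();

  requestCache.set(key, request);
  return request;
}

// Two "pages" asking concurrently receive the exact same Promise object,
// so only one network call ever fires.
const p1 = getAllPosts('all-posts');
const p2 = getAllPosts('all-posts');
```

Because the Promise is stored before it resolves, even requests that arrive while the first fetch is still pending are deduplicated.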
B. Type Isolation Strategy
To prevent data collisions between different content types (e.g., showing a “Case Study” on a “Blog” list), we implemented a Map-based cache key system:
- 'all-posts' → Stores Blog Posts
- 'all-case-study' → Stores Case Studies
- 'all-guide' → Stores Guides
C. Zero-Latency Pagination & Relations
We moved logic from the server to the build process:
- Pagination: Instead of asking WordPress for “Page 2,” we fetch everything once and slice the array in memory using JavaScript.
- Related Posts: Instead of an expensive API query for every single post, we filter the in-memory array to find related content. Latency dropped from ~1.5s per post to 0ms.
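Both operations reduce to plain array work over the single in-memory dataset. A minimal sketch, with hypothetical post data and helper names:

```typescript
// Sketch: build-time pagination and related-post lookup over one in-memory array.
type Post = { slug: string; categories: string[] };

const allPosts: Post[] = [
  { slug: 'a', categories: ['astro'] },
  { slug: 'b', categories: ['astro'] },
  { slug: 'c', categories: ['wp'] },
  { slug: 'd', categories: ['astro'] },
];

// "Page 2" is just a slice — no extra request to WordPress.
function paginate<T>(items: T[], page: number, perPage: number): T[] {
  return items.slice((page - 1) * perPage, page * perPage);
}

// Related posts: filter the array we already hold instead of querying the API.
function relatedTo(post: Post, posts: Post[], limit = 3): Post[] {
  return posts
    .filter((p) => p.slug !== post.slug && p.categories.some((c) => post.categories.includes(c)))
    .slice(0, limit);
}

const page2 = paginate(allPosts, 2, 2);           // posts 'c' and 'd'
const related = relatedTo(allPosts[0], allPosts); // 'b' and 'd' share a category with 'a'
```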
D. Stability Engineering
To handle the flaky shared hosting environment, we added an exponential-backoff retry mechanism. If a request failed, the system would wait before retrying, doubling the delay on each attempt (starting at 1 second), for up to 3 attempts before failing the build.
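A sketch of that retry wrapper (the function name and signature are illustrative; delays are configurable so they can be shortened for testing, while the build used a 1-second base):

```typescript
// Sketch: retry a flaky async operation with exponentially growing delays.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Double the wait each time: 1s, 2s, 4s, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // all attempts failed: fail the build loudly
}

// A simulated flaky endpoint that succeeds on the third try:
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error('ECONNRESET');
  return 'ok';
};
```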
3. Technical Implementation
Here is the core logic that powered the transformation.
The Singleton Fetcher (src/lib/wordpress.ts):
```typescript
class WordPressAPI {
  // A Map to store active promises for different content types
  private _requestCache = new Map<string, Promise<WordPressPost[]>>();

  async getAllPosts(options: PostsQueryOptions = {}): Promise<WordPressPost[]> {
    const type = options.postType || 'posts';
    const cacheKey = `all-${type}`;

    // 1. Check memory: if a fetch is running or done, return that Promise immediately.
    const cached = this._requestCache.get(cacheKey);
    if (cached) {
      return cached;
    }

    // 2. Fetch network: start the single request chain.
    const requestPromise = (async () => {
      // ... complex chunking logic to fetch 50 items at a time ...
      return allData;
    })();

    // 3. Save the Promise so other pages can reuse it. Evict it on failure
    //    so one transient error doesn't poison the cache for the whole build.
    this._requestCache.set(cacheKey, requestPromise);
    requestPromise.catch(() => this._requestCache.delete(cacheKey));
    return requestPromise;
  }
}
```
4. The Results
The impact of this optimization was immediate and drastic.
| Metric | Before Optimization | After Optimization | Improvement |
|---|---|---|---|
| Total Build Time | 29 min 07 sec | 3 min 06 sec | ~9.5x Faster |
| API Requests | 1,000+ (Est.) | ~4 (1 per type) | 99% Reduction |
| Stability | Frequent Timeouts | 100% Success | Reliable |
| Cost | High Server Load | Minimal Load | Efficient |
Visual Proof:

Before: Long, stalled builds waiting on network I/O.
After: Rapid completion with logs confirming cache hits:
⚡ [Cache Hit] Reuse existing promise for 'all-malware-log'
5. Key Takeaways
- Don’t rely on APIs for SSG logic: Move operations like sorting, filtering, and pagination into your build process's memory whenever possible.
- Cache the Promise, not just the Data: This solves the “Race Condition” where parallel pages trigger duplicate requests before the first one finishes.
- Optimize for the Weakest Link: By adding chunking and retry logic, we made the build process resilient even against slow, shared hosting environments.
This architecture is now the standard for all our future high-performance Astro projects.
