December 24, 2025

Rate Limiting Strategies That Actually Work

Learn practical rate limiting strategies for APIs: IP vs user-based tracking, sliding windows, token buckets, Redis storage, and handling failures gracefully.


Ever had your API go down because someone decided to hammer it with a million requests? Or watched your server costs skyrocket because a bot was scraping your entire database? Yeah, that's where rate limiting comes in to save the day.

Think of rate limiting like a bouncer at a club. They control how many people get in and how fast, making sure the place doesn't get overwhelmed. Your API needs the same kind of protection.

Why You Actually Need This

Let's be real here. Without rate limiting, you're basically leaving your front door wide open with a "please abuse me" sign hanging on it.

Here's what can go wrong:

DDoS attacks can flood your system with so many requests that legitimate users can't get through. Rate limiting acts like a shield, blocking the flood before it drowns your servers.

Resource exhaustion happens when someone (intentionally or not) uses up all your memory, CPU, or database connections. Rate limiting enforces fair usage so no single client can starve everyone else.

Brute force attacks on login endpoints can be stopped dead in their tracks. If someone's trying thousands of password combinations, rate limiting will catch them after a few attempts.

Cost control is huge. If you're running on cloud infrastructure, every request costs money. Unlimited requests mean unlimited bills.

The Big Decision: IP-based vs User-based

When you're setting up rate limiting, your first big choice is what to track.

IP-based Rate Limiting

This is the simple approach. You count requests from each IP address. Someone exceeds 100 requests a minute from their IP? Block them.

Pros:

  • Super easy to implement
  • Works for anonymous users
  • Catches basic abuse quickly

Cons:

  • Shared IPs (like corporate networks) get punished together
  • VPNs and proxies make this less effective
  • Can't distinguish between different users on the same network

User-based Rate Limiting

Track limits per authenticated user instead of IP. Each user ID gets their own quota.

Pros:

  • More accurate for authenticated APIs
  • Fairer for users behind shared IPs
  • Can offer different tiers (free users get 100 requests/hour, paid users get 10,000)

Cons:

  • Doesn't work for public endpoints
  • More complex to implement
  • Users need to authenticate first

Real-world tip: use both. IP-based limiting for public endpoints and authentication attempts, user-based limiting for authenticated API calls.
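
A quick sketch of how that choice might look in code (assuming an Express-style request object where req.user is set after authentication and req.ip holds the client address):

// Hypothetical key selection: track authenticated users by ID,
// everyone else by IP address.
function rateLimitKey(req) {
  if (req.user && req.user.id) {
    return `user:${req.user.id}`;
  }
  return `ip:${req.ip}`;
}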

How Rate Limiting Algorithms Actually Work

Let's talk about the main strategies people use, and more importantly, which ones actually work in production.

Fixed Window Counter

The simplest approach. You count requests in fixed time windows (like "per minute" or "per hour"). When the window resets, the counter goes back to zero.

Example: Allow 100 requests per minute. At 2:00:00 PM, the counter starts. At 2:00:59 PM, someone has used 99 requests. At 2:01:00 PM, boom, the counter resets.

The problem: Edge case abuse. Someone could send 100 requests at 2:00:59 PM and another 100 at 2:01:00 PM. That's 200 requests in 2 seconds, even though your limit is 100 per minute.
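
For illustration, here's what a bare-bones fixed window counter might look like in memory (single process only; production setups usually store this in Redis, covered below):

// Illustrative fixed window counter, kept in process memory.
const counters = {}; // key -> { windowStart, count }

function allowFixedWindow(key, limit = 100, windowMs = 60000) {
  const now = Date.now();
  const entry = counters[key];

  // No entry yet, or the previous window expired: start a new one.
  if (!entry || now - entry.windowStart >= windowMs) {
    counters[key] = { windowStart: now, count: 1 };
    return true;
  }

  if (entry.count < limit) {
    entry.count += 1;
    return true;
  }
  return false; // over the limit for this window
}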

Sliding Window

This is where things get interesting. Instead of fixed windows, the window slides with each request.

The sliding window algorithm keeps track of timestamps for recent requests. When a new request comes in, you look back at the last N seconds and count how many requests happened in that timeframe.

# Simple sliding window concept (get_requests_since and record_request
# are placeholders for your storage layer)
import time

def is_allowed(user_id, limit=100, window=60):
    current_time = time.time()
    # Get all requests in the last 60 seconds
    recent_requests = get_requests_since(user_id, current_time - window)

    if len(recent_requests) < limit:
        record_request(user_id, current_time)
        return True
    return False

The upside: No edge case abuse. The window moves with every request, so you can't game the system.

The downside: More memory intensive because you're storing timestamps, not just counts.

Token Bucket

This algorithm is like having a bucket of tokens. Tokens are added to your bucket at a steady rate. Each request costs one token. No tokens? No request.

The cool part is it allows for bursts. If you haven't made requests in a while, your bucket fills up. Then you can make several requests quickly before running out again.

class TokenBucket {
  constructor(capacity, refillRate) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillRate = refillRate; // tokens per second
    this.lastRefill = Date.now();
  }
  
  allowRequest() {
    this.refill();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
  
  refill() {
    const now = Date.now();
    const timePassed = (now - this.lastRefill) / 1000;
    const tokensToAdd = timePassed * this.refillRate;
    this.tokens = Math.min(this.capacity, this.tokens + tokensToAdd);
    this.lastRefill = now;
  }
}

This is great for APIs where legitimate users might need to make several requests in quick succession occasionally.
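
For example, a bucket with capacity 10 refilled at 1 token per second lets a quiet client burst 10 requests at once, then settle into roughly one request per second:

const bucket = new TokenBucket(10, 1); // capacity 10, refill 1 token/second

if (bucket.allowRequest()) {
  // handle the request
} else {
  // reject, e.g. with a 429 response
}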

Leaky Bucket

Requests go into a bucket and leak out at a constant rate. If the bucket overflows, requests get rejected.

The difference from token bucket? Leaky bucket processes requests at a constant rate, no bursts allowed. It's like a queue with a maximum size.

Good for when you need predictable, steady traffic. Not great when legitimate use cases involve bursts.
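
A rough, in-memory sketch of the idea: requests join a queue up to a fixed capacity, and a timer drains them at a constant rate.

// Illustrative leaky bucket: queued requests "leak" out at a steady rate.
class LeakyBucket {
  constructor(capacity, leakRatePerSecond) {
    this.capacity = capacity;
    this.queue = [];
    // Process one queued request every 1000 / leakRatePerSecond milliseconds.
    setInterval(() => {
      const handler = this.queue.shift();
      if (handler) handler();
    }, 1000 / leakRatePerSecond);
  }

  offer(handler) {
    if (this.queue.length >= this.capacity) {
      return false; // bucket overflowed, reject the request
    }
    this.queue.push(handler);
    return true;
  }
}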

Storage Options: Where to Keep Your Counters

Your rate limiting algorithm is only as good as where you store the data.

In-Memory

Fast. Really fast. Store everything in your application's memory.

Good for: Single server setups, testing, development.

Bad for: Multiple servers (each server has its own memory), crashes wipe everything.

Redis

Redis is the go-to choice for rate limiting in production. It's an in-memory database that multiple servers can share.

// Example: Simple rate limit with Redis
async function checkRateLimit(userId, limit = 100, window = 60) {
  const key = `rate_limit:${userId}`;
  const current = await redis.incr(key);
  
  if (current === 1) {
    // First request in this window, set expiration
    await redis.expire(key, window);
  }
  
  return current <= limit;
}

Why Redis wins:

  • Crazy fast (millions of operations per second)
  • Built-in expiration (keys automatically delete themselves)
  • Works across multiple servers
  • Atomic operations (no race conditions)

Watch out for: Network failures between your app and Redis can cause issues. If Redis goes down, what happens to your rate limiting?

Distributed Caches

Memcached, Hazelcast, or other distributed caching solutions work similarly to Redis. They're designed to scale horizontally and handle high throughput.

The trade-off is usually complexity. Redis is simpler and more commonly used for rate limiting specifically.

When Things Go Wrong: Failure Modes

This is the part people don't talk about enough. What happens when your rate limiter fails?

The "Fail Open" Approach

When rate limiting fails (Redis is down, for example), you let all requests through.

Good because: Your API stays available. Users don't get blocked.

Bad because: You lose protection. If you're under attack when this happens, you're toast.

Use when: Availability is more important than protection (rare).

The "Fail Closed" Approach

When rate limiting fails, block everything.

Good because: You stay protected even when things break.

Bad because: Legitimate users get blocked. Your API becomes unavailable.

Use when: Security is more important than availability.

The Smart Approach: Graceful Degradation

Instead of all-or-nothing, degrade gracefully:

  • Keep a local, in-memory fallback cache
  • Use conservative limits when distributed storage is unavailable
  • Log failures but keep serving requests with reduced limits
  • Alert your team immediately

async function rateLimitWithFallback(userId, limit = 100) {
  try {
    // Try distributed rate limiting first
    return await checkRedisRateLimit(userId, limit);
  } catch (error) {
    console.error('Redis unavailable, using local fallback');
    // Fall back to local rate limiting (less accurate but works)
    return checkLocalRateLimit(userId, limit * 0.5); // More conservative
  }
}

The Race Condition Problem

When multiple servers check the rate limit simultaneously, you can get race conditions. Two requests check at the same time, both see "99 requests made," both think "I can let this through," and now you've exceeded your limit.

Solution: Use atomic operations. In Redis, use commands like INCR which increment and return the value in one atomic operation.
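
One common pattern (shown here as a sketch, assuming an ioredis-style eval call) is to wrap the increment and the expiration in a small Lua script so Redis runs them as a single atomic step:

// Atomically increment the counter and set its expiry in one operation.
const script = `
  local current = redis.call('INCR', KEYS[1])
  if current == 1 then
    redis.call('EXPIRE', KEYS[1], ARGV[1])
  end
  return current
`;

async function checkRateLimitAtomic(userId, limit = 100, window = 60) {
  const current = await redis.eval(script, 1, `rate_limit:${userId}`, window);
  return current <= limit;
}

This also closes the small gap in the earlier Redis example, where the INCR and EXPIRE calls were two separate round trips.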

The Synchronization Issue

Distributed systems are hard. What if your servers' clocks aren't perfectly synced? Fixed window counters can drift.

Solution: Use Redis timestamps, not local server times. Or use sliding windows which are less sensitive to clock drift.
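
If you want every server to agree on what "now" means, Redis itself can be the clock. A sketch (ioredis-style, where time() resolves to [seconds, microseconds]):

// Use Redis's clock instead of each app server's local clock.
async function redisNow() {
  const [seconds, microseconds] = await redis.time();
  return Number(seconds) * 1000 + Math.floor(Number(microseconds) / 1000); // milliseconds
}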

Best Practices That Actually Matter

Different Limits for Different Endpoints

Not all endpoints are equal. A read-only GET request is cheap. A complex POST that triggers database writes and sends emails? That's expensive.

const limits = {
  '/api/search': { requests: 1000, window: 60 },
  '/api/user/profile': { requests: 100, window: 60 },
  '/api/send-email': { requests: 10, window: 60 },
  '/api/login': { requests: 5, window: 300 } // 5 attempts per 5 minutes
};

Return Useful Headers

Tell users about their rate limit status in HTTP headers:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 75
X-RateLimit-Reset: 1640000000

When they hit the limit, return a 429 status code with a clear message and when they can try again.
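
As a sketch of the response side (Express is an assumption here, and getRateLimitStatus is a hypothetical helper wrapping whichever algorithm you chose):

// Hypothetical middleware: expose rate limit headers and return 429 when exceeded.
async function rateLimitMiddleware(req, res, next) {
  const { allowed, limit, remaining, resetAt } = await getRateLimitStatus(req);

  res.set('X-RateLimit-Limit', String(limit));
  res.set('X-RateLimit-Remaining', String(Math.max(0, remaining)));
  res.set('X-RateLimit-Reset', String(resetAt)); // Unix timestamp of the reset

  if (!allowed) {
    return res.status(429).json({
      error: 'Too many requests',
      retryAfter: resetAt - Math.floor(Date.now() / 1000) // seconds until reset
    });
  }
  next();
}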

Dynamic Rate Limiting

Adjust limits based on system load. If your servers are at 90% CPU, temporarily reduce limits. When things calm down, raise them back up.
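
One simple way to approximate this in Node (illustrative only; the signal and thresholds are up to you):

const os = require('os');

// Illustrative: shrink the base limit as the 1-minute load average climbs.
function dynamicLimit(baseLimit) {
  const loadPerCore = os.loadavg()[0] / os.cpus().length;
  if (loadPerCore > 0.9) return Math.floor(baseLimit * 0.5);
  if (loadPerCore > 0.7) return Math.floor(baseLimit * 0.75);
  return baseLimit;
}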

Whitelist Your Own Services

Don't rate limit your own internal services or health checks. Nothing worse than your monitoring failing because it got rate limited.
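
A simple allowlist check (the IPs and path here are placeholders):

// Skip rate limiting for internal callers and health checks.
const INTERNAL_IPS = new Set(['127.0.0.1', '10.0.0.5']);

function shouldRateLimit(req) {
  if (INTERNAL_IPS.has(req.ip)) return false;
  if (req.path === '/health') return false;
  return true;
}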

Test Your Limits

Load test to find your actual system limits. Don't just guess. What looks like a reasonable limit might be way too high or unnecessarily restrictive.

Putting It All Together

Here's a production-ready rate limiting setup:

  1. Use sliding window algorithm for accuracy without the fixed window edge cases
  2. Store counters in Redis for distributed systems
  3. Implement graceful degradation with local fallbacks
  4. Set different limits per endpoint based on resource cost
  5. Return clear headers so users know their status
  6. Monitor and alert on rate limit failures
  7. Test under load to find real limits

Rate limiting isn't about making your API harder to use. It's about making sure it's available and responsive for everyone. A well-configured rate limiter is invisible to legitimate users but essential for keeping your system healthy.

The key is finding the balance between protection and usability. Too strict, and you annoy real users. Too loose, and you're vulnerable. Start conservative, monitor actual usage patterns, and adjust based on real data.



