George Ongoro.

    Why I'm Skeptical of Serverless and Why I Use It Anyway

    March 17, 2026 · 8 min read
    devops-deployment

    A few months back I was reviewing the Vercel billing page for a client project and noticed the function invocations count had quietly crept up past what I expected. The app was not even live to the public yet. A few background jobs, some webhook handlers, and a cron-based sync routine had been running freely during staging. Nothing catastrophic, but it was a reminder that serverless is not the "set it and forget it" cost story the marketing makes it sound like.

    I have been building with serverless infrastructure for a few years now, mostly through Vercel's function runtime and some AWS Lambda work before that. My honest take is somewhere in the middle: it solves real problems, it creates new ones, and the people who swear by it completely or dismiss it completely are both missing something.

    The Case Against Serverless That Nobody Wants to Admit

    The standard criticisms are cold starts, vendor lock-in, and debugging complexity. All three are real, but they land differently depending on where you sit.

    Cold starts. Traditional Node.js serverless functions on AWS Lambda still take anywhere from 100ms to over a second to initialize after a period of inactivity. For an app serving users in Nairobi hitting a Lambda function sitting in us-east-1, that is not just a cold start penalty - it is a cold start penalty stacked on top of the baseline round-trip latency from East Africa to the US East Coast. I have seen first-load response times on serverless APIs push past two seconds in real testing, not synthetic benchmarks. That is not acceptable for anything user-facing.
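    To make that stacking concrete, here is a back-of-envelope latency budget. Every number below is an assumption chosen for illustration, not a measurement:

```typescript
// Rough first-request latency budget for a cold Lambda hit from Nairobi.
// All figures are illustrative assumptions, not benchmarks.
const BASELINE_RTT_MS = 250 // Nairobi <-> us-east-1 round trip, assumed
const COLD_START_MS = 800   // container init + handler load, assumed
const HANDLER_WORK_MS = 120 // actual business logic, assumed
const DB_ROUND_TRIP_MS = 40 // query to a co-located database, assumed

const firstRequestMs =
  BASELINE_RTT_MS + COLD_START_MS + HANDLER_WORK_MS + DB_ROUND_TRIP_MS
const warmRequestMs = BASELINE_RTT_MS + HANDLER_WORK_MS + DB_ROUND_TRIP_MS

// The cold-start penalty sits on top of a round trip the user already pays.
console.log({ firstRequestMs, warmRequestMs })
```

The point of the arithmetic: even with these conservative numbers, the cold path costs roughly three times the warm path, and the geography is the part no runtime can optimize away.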

    The edge runtimes (Cloudflare Workers, Vercel Edge Functions) have largely solved this specific problem. Cloudflare Workers run on V8 isolates rather than full containers or virtual machines, which cuts cold start times to sub-10ms. Vercel's "Fluid Compute" introduced bytecode caching and predictive warming, reducing cold start latency significantly compared to their earlier container-based setup. But - and this matters - edge runtimes come with their own restrictions. You get a trimmed-down API surface, no access to the full Node.js standard library, and execution time caps that make them unsuitable for anything compute-heavy.
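    The Workers programming model makes that restriction visible: a handler is just an object with a fetch method over Web-standard Request and Response, and those Web APIs are essentially all you get. A minimal sketch of the shape (not a deployable Worker, just an illustration):

```typescript
// Workers-style edge handler: only Web-standard APIs are in scope
// (Request, Response, URL, fetch) - no fs, no child_process, no full
// Node.js standard library.
const worker = {
  fetch(request: Request): Response {
    const url = new URL(request.url)
    // Everything here must fit the trimmed-down API surface and the
    // runtime's execution time caps.
    return new Response(JSON.stringify({ path: url.pathname }), {
      headers: { 'content-type': 'application/json' },
    })
  },
}

export default worker
```

If your handler reaches for the filesystem, a native module, or a long CPU-bound loop, this model pushes back immediately, which is exactly the trade the fast cold starts are buying.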

    Cost surprises at scale. The pay-per-invocation model looks great when traffic is light or unpredictable. It stops looking great the moment you have consistent, high-frequency traffic. A few real stories have circulated in developer communities about teams that migrated e-commerce backends to Lambda, added provisioned concurrency to address performance, and ended up with AWS bills that tripled compared to what a couple of modest VMs would have cost. Serverless pricing is optimized for sporadic workloads. If your app has steady traffic all day, a small Railway instance or a DigitalOcean droplet might just be cheaper, simpler, and easier to reason about.
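    A quick back-of-envelope shows where the crossover lives. The rates below are placeholder assumptions in the general shape of Lambda-style pricing, not anyone's current quote:

```typescript
// Break-even sketch: per-invocation pricing vs a fixed monthly VM.
// All rates are illustrative assumptions, not current pricing.
const PRICE_PER_MILLION_INVOCATIONS = 0.6 // USD, assumed
const PRICE_PER_GB_SECOND = 0.0000166667  // USD, assumed
const VM_MONTHLY_COST = 12                // USD, e.g. a small droplet, assumed

function monthlyServerlessCost(
  requestsPerDay: number,
  avgDurationMs: number,
  memoryGb: number
): number {
  const monthlyRequests = requestsPerDay * 30
  const invocationCost =
    (monthlyRequests / 1_000_000) * PRICE_PER_MILLION_INVOCATIONS
  const computeCost =
    monthlyRequests * (avgDurationMs / 1000) * memoryGb * PRICE_PER_GB_SECOND
  return invocationCost + computeCost
}

// Sparse traffic: far cheaper than the VM.
console.log(monthlyServerlessCost(2_000, 200, 0.5).toFixed(2))
// Steady high traffic: an order of magnitude above the VM.
console.log(monthlyServerlessCost(2_000_000, 200, 0.5).toFixed(2))
```

With these assumed rates, two thousand requests a day costs pennies, while two million a day costs a multiple of what the fixed box would; the exact crossover moves with the rates, but the shape of the curve does not.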

    Statelessness is not free. Functions are stateless by design. That is philosophically clean but operationally annoying. You cannot hold anything in memory between invocations. Caches need to be external (Upstash Redis is what I reach for). Sessions need external storage. Background jobs that need to pick up where they left off need orchestration. Each of those adds a new dependency, another thing to configure, another potential failure point. For a side project this is manageable. For a production app with a team, it adds real cognitive overhead.
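    The pattern that falls out of this is cache-aside against an external store. A sketch over a generic Redis-like interface - the KvStore shape here is my own simplification, though get/set-with-TTL is the same idea the Upstash client exposes:

```typescript
// Cache-aside across stateless invocations: nothing survives in memory
// between calls, so the cache lives behind an external get/set interface.
interface KvStore {
  get(key: string): Promise<string | null>
  set(key: string, value: string, ttlSeconds: number): Promise<void>
}

async function cached<T>(
  store: KvStore,
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>
): Promise<T> {
  // Hit: deserialize and return without recomputing.
  const hit = await store.get(key)
  if (hit !== null) return JSON.parse(hit) as T

  // Miss: compute, write back with a TTL, return.
  const fresh = await compute()
  await store.set(key, JSON.stringify(fresh), ttlSeconds)
  return fresh
}
```

Every call pays a network round trip to the store even on a hit, which is the tax statelessness charges for what an in-process Map would have given you for free.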

    Why I Still Deploy Serverless Anyway

    Despite all of that, most of my current projects run on serverless infrastructure to some degree, and I have good reasons for it.

    The honest answer is that for most of what I build, the problems above do not actually bite me. My typical workload is a Next.js app with API routes handling user requests, some webhook endpoints that fire when external services do something, and maybe a cron job or two. That is not a workload that demands a persistent server. Traffic is uneven, peaks are hard to predict, and the last thing I want to do is babysit a Linux box at 2am because memory crept up on a weekend.

    Vercel's developer experience for Next.js is still genuinely good. Deploying a new function is just creating a file. Preview deployments per branch mean the client can review changes on a real URL before anything goes to production. I do not think about capacity planning for the web layer. For early-stage projects where the requirements are still shifting, that speed of iteration matters.

    For event-driven work specifically, serverless fits naturally. Consider a simple webhook handler that processes payments from an M-Pesa integration:

    // app/api/webhooks/mpesa/route.ts
    import { NextRequest, NextResponse } from 'next/server'
    import { db } from '@/lib/db'
    
    export async function POST(req: NextRequest) {
      const payload = await req.json()
    
      // Validate the callback
      if (!payload.Body?.stkCallback) {
        return NextResponse.json({ error: 'Invalid payload' }, { status: 400 })
      }
    
      const { CheckoutRequestID, ResultCode, CallbackMetadata } = payload.Body.stkCallback
    
      if (ResultCode !== 0) {
        await db.payment.update({
          where: { checkoutRequestId: CheckoutRequestID },
          data: { status: 'FAILED' },
        })
        return NextResponse.json({ ResultCode: 0, ResultDesc: 'Accepted' })
      }
    
      // Metadata can be absent on malformed callbacks, so guard the lookup
      const amount = CallbackMetadata?.Item?.find(
        (item: { Name: string }) => item.Name === 'Amount'
      )?.Value
    
      await db.payment.update({
        where: { checkoutRequestId: CheckoutRequestID },
        data: { status: 'COMPLETED', amount },
      })
    
      return NextResponse.json({ ResultCode: 0, ResultDesc: 'Accepted' })
    }
    

    This handler runs maybe a few dozen times a day. Keeping a server alive around the clock to handle that traffic would be wasteful. Serverless is the right call here. The function wakes up, does its job, and disappears.

    When to Reach for Something Else

    The place where I have learned to stop defaulting to serverless is background processing.

    Long-running jobs, queue workers, and anything that holds state across time are awkward in serverless environments. Vercel's default function timeout is 10 seconds on the free plan, up to 60 seconds on Pro. That sounds like a lot until you are trying to process a CSV upload with a few thousand rows, kick off a PDF generation task, or sync data from a slow third-party API. You will hit the timeout and have to rethink the architecture entirely.
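    Before moving a job off the platform entirely, there is one lever worth knowing: Next.js exposes a route segment config that asks Vercel for a longer window. It raises the ceiling, it does not remove it. A sketch, assuming an App Router project on a plan that allows 60 seconds (the route path is hypothetical):

```typescript
// app/api/reports/route.ts (hypothetical route)
// Route segment config: request a longer execution window from Vercel.
export const maxDuration = 60 // seconds, capped by whatever the plan allows

export async function GET(): Promise<Response> {
  // ...work that still has to finish inside the window...
  return new Response('done')
}
```

For a CSV with a few thousand rows this can be enough; for anything whose runtime you cannot bound up front, it just moves the cliff further out.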

    What I do now for jobs like these is run them on Railway with a small persistent Node.js process. Railway gives me a simple deployment experience (not too different from Vercel) but with a proper long-running container. For anything that needs to sit and wait - a BullMQ worker processing a Redis queue, a scheduled sync job, a file processing pipeline - that setup has been rock solid.

    A rough mental model that has worked for me:

    • HTTP request handlers, webhook receivers, edge middleware: serverless, Vercel functions or edge runtime
    • Background jobs, queue workers, long-running tasks: Railway with a Node.js worker
    • Cron jobs with short execution time: Vercel cron or an external scheduler hitting a serverless function
    • Cron jobs with longer or uncertain execution time: Railway with node-cron or a simple loop
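    The short-cron row in that table is just a vercel.json entry pointing a schedule at a function route. The path and schedule below are illustrative:

```json
{
  "crons": [
    {
      "path": "/api/cron/sync",
      "schedule": "0 3 * * *"
    }
  ]
}
```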
    The worker side of that split looks something like this:

    // Example: BullMQ worker on Railway (not serverless)
    import { Worker } from 'bullmq'
    import { redis } from '@/lib/redis'
    import { sendEmail } from '@/lib/email'
    
    const emailWorker = new Worker(
      'email-queue',
      async (job) => {
        const { to, subject, html } = job.data
        await sendEmail({ to, subject, html })
      },
      {
        connection: redis,
        concurrency: 5,
      }
    )
    
    emailWorker.on('failed', (job, err) => {
      console.error(`Job ${job?.id} failed:`, err.message)
    })
    

    This worker stays alive, processes jobs as they arrive, and I pay a fixed monthly cost for it rather than being surprised by invocation counts.

    What I Got Wrong Early On

    The biggest mistake I made with serverless was treating it as a general compute platform rather than a tool with a specific sweet spot. I tried to run everything through Vercel functions because the DX was smooth and deployments were fast. That worked until it did not.

    I also underestimated the debugging experience. When a serverless function fails in production, tracing what happened is harder than reading a log file from a running server. The logs are fragmented across invocations, the execution context is gone by the time you are looking at it, and correlating a user complaint to a specific function invocation takes more tooling than most small projects have. I started adding Sentry to every project after the third time I had to reconstruct a failure from incomplete evidence.

    Vendor lock-in is also more real than I initially gave it credit for. Vercel-specific features like revalidatePath, ISR configuration through next.config.ts, and some middleware patterns are tightly coupled to their infrastructure. Moving a project off Vercel is not impossible, but it is not painless either. I try to keep the actual business logic in plain TypeScript modules that have no awareness of Vercel, and keep the platform-specific bits thin at the edges.
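    The shape I aim for: the business rule lives in a plain module with no framework imports, and only a thin route file knows about the platform. File and function names below are hypothetical:

```typescript
// lib/pricing.ts (hypothetical) - portable core, no framework imports.
export function applyDiscount(totalCents: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new Error('percent out of range')
  return Math.round(totalCents * (1 - percent / 100))
}

// A Vercel route handler stays thin: parse the request, call
// applyDiscount, serialize the result. Leaving the platform means
// rewriting only that thin file, not the logic it wraps.
```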

    What I Would Tell Myself Three Years Ago

    Use serverless for what it is good at: HTTP handlers, webhooks, event-driven work, and anything where traffic is unpredictable or sparse. Do not use it as a replacement for a real server just because deploys are easier. When you need a persistent process, run one - Railway and Render have made that cheap enough that the "no servers" argument is not compelling anymore.

    The one thing I would not give up is Vercel for Next.js deployments. The integration is tight enough that the trade-offs are worth it for me. But I pair it with Railway for the parts that need to stay alive, keep my logic portable, and check the billing dashboard more often than I probably should.
