Having Fun Being Broke!

Mar 8, 2026

The fun of optimising projects/deployments/code to just about squeeze in under the generous limits of free hosting/deployment/cloud.

This post uses Peek.

Three things are certain right now:

  • The economy is bad for everyone except those at the top (existing wealth)
  • AI has disrupted white-collar work and could fully automate it, especially hurting those outside it (those inside should enjoy it while it lasts)
  • AI tokens + compute + memory are cheaper than they have ever been

To capitalise on this I have been dedicating my time to creativity. Having a lot of fun building web apps/tools/creative projects in this AI-accelerated hypercreation period. More importantly I've been having a lot of fun doing this all for free!

The Brokestack

I'll briefly talk about the brokestack I've settled on:

  • Compute: Cloudflare Workers oooooooh baby, Cloudflare Workers is the best free compute available.

    • 100k requests per day, globally distributed, no cold starts, no idle billing
    • All thanks to the V8 isolate model
    • Tight tight (free) constraints!
      • Your entire webapp + API routes must fit in a 1MB compressed bundle
      • Max 10ms of CPU time per request
  • Storage: Since I'm already on Cloudflare workers I leverage 'Workers Assets':

    • Static files served alongside your Worker (your files, images, audio, fonts, etc.)
    • Max 25MB per individual file, max 20,000 files, 1GB total

    Or spin up KV storage for an easy, convenient key-value store:

    • 100k reads + 1k writes per day

    Or R2 object storage - S3-compatible blob storage:

    • 10GB free, 1M write + 10M read operations per month (both programmatic access)

    Supabase is the most generous free Postgres you'll find:

    • 500MB database, 1GB file storage, max 50k monthly active users, 5GB egress, 3 active projects (shared across everything you build)
    • The only catch: projects pause after a week of inactivity, but you can schedule a ping to keep them alive
  • Product analytics:

    • Posthog: 1M events/month free
    • Session recordings, funnels and feature flags all included - LFG
    • Love their product and UI; they need a better CLI though. I don't like MCP servers - not intuitive for Claudinho
  • Observability:

    • Cloudflare Worker analytics and logs are built in and free - no need to pay for a CloudWatch equivalent or a logging product like Datadog
    • For static sites, umami.is cloud gives you 10k page views/month and 1 site free. Or self-host it for free on any server.
    • For dynamic logging/tracing, Grafana Cloud: 10k metric series, 50GB logs, 50GB traces, 14-day retention — all free.
  • AI:

    • Gemini 2.5 Flash-lite: 15 req/min, 1000 requests/day, no credit card required (aka milk it)
    • Unfortunately companies are feeling the squeeze so they're cutting these limits pretty actively
    • Gemini 2.0 Flash was the cheapest model by a factor of 10 and is being discontinued in June :(
    • If you want to stay entirely within Cloudflare, Workers AI lets you run OSS models at the edge — Llama 3.2, DeepSeek, GPT OSS and others. 10k neurons/day free. Good for lightweight AI features without leaving the platform.
  • Domains: Cloudflare again for anything they support, as they only charge the at-cost renewal price; Namecheap for anything exotic (always use promocode COUPONFCNC). If I could get away with not paying for a domain at all I would - I predict that in the age of AI-accelerated hypercreation, domain names will become very expensive assets. Brother, I'm trying to pay for NOTHING.

  • Email: Resend: 3000 transactional emails/month free on one custom domain
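On the Supabase keep-alive trick from the storage section: a Worker cron trigger can do the weekly ping for you. A minimal sketch - the `SUPABASE_URL`/`SUPABASE_ANON_KEY` bindings, the `heartbeat` table, and the schedule are all assumptions for illustration, not Supabase-prescribed names:

```typescript
// Hedged sketch: a scheduled Worker that fires a trivial authenticated
// query so a free Supabase project never pauses for inactivity.
// The "heartbeat" table is hypothetical - any cheap select works.

interface Env {
  SUPABASE_URL: string;
  SUPABASE_ANON_KEY: string;
}

const handler = {
  // wrangler.toml:  [triggers]  crons = ["0 0 * * 1"]   (every Monday)
  async scheduled(_event: unknown, env: Env): Promise<void> {
    // Any authenticated REST call counts as activity on the project.
    await fetch(`${env.SUPABASE_URL}/rest/v1/heartbeat?select=id&limit=1`, {
      headers: {
        apikey: env.SUPABASE_ANON_KEY,
        Authorization: `Bearer ${env.SUPABASE_ANON_KEY}`,
      },
    });
  },
};

export default handler;
```

The scheduled handler costs you nothing extra: cron triggers ride on the same free Workers plan.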

Now onto the more interesting aspect of this... what can you learn by being broke?

My Budget? Zilch Mate

We have defined a specific set of constraints - hell I can even model this as an operations research problem!

Each decision xᵢ ∈ {0,1}: include this dependency/feature or not. Each carries a cost vector: bᵢ bytes of bundle, tᵢ ms of CPU per request, wᵢ KV writes consumed per day.

maximise   Σ vᵢxᵢ

subject to:
  Σ bᵢxᵢ ≤ 1,048,576   bytes   (bundle)
  Σ tᵢxᵢ ≤ 10           ms      (CPU per request)
  Σ wᵢxᵢ ≤ 1,000        /day    (KV writes)
  xᵢ ∈ {0,1}

A multi-dimensional 0/1 knapsack problem. We are maximising the value we ship subject to hard resource constraints across multiple dimensions simultaneously. Every wrong decision - a bloated dependency, a server-side operation that should be client-side, a userless feature - shrinks our feasible region.
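For the item counts involved here (you rarely weigh up more than ~20 dependencies at once) you can even brute-force the model above. A toy solver as a sketch - the limits are the real Workers free-tier numbers, but every item's cost vector is something you'd have to measure yourself:

```typescript
// Toy multi-dimensional 0/1 knapsack: pick the subset of dependencies/features
// with maximum value that fits every free-tier constraint simultaneously.
// Brute force over all 2^n subsets - fine for small n.

type Item = { name: string; value: number; bytes: number; cpuMs: number; kvWrites: number };

const LIMITS = { bytes: 1_048_576, cpuMs: 10, kvWrites: 1_000 };

function bestSubset(items: Item[]): { value: number; chosen: string[] } {
  let best = { value: 0, chosen: [] as string[] };
  for (let mask = 0; mask < 1 << items.length; mask++) {
    let value = 0, bytes = 0, cpuMs = 0, kvWrites = 0;
    const chosen: string[] = [];
    for (let i = 0; i < items.length; i++) {
      if (mask & (1 << i)) {
        value += items[i].value;
        bytes += items[i].bytes;
        cpuMs += items[i].cpuMs;
        kvWrites += items[i].kvWrites;
        chosen.push(items[i].name);
      }
    }
    // Keep the subset only if it fits inside every constraint at once.
    if (bytes <= LIMITS.bytes && cpuMs <= LIMITS.cpuMs &&
        kvWrites <= LIMITS.kvWrites && value > best.value) {
      best = { value, chosen };
    }
  }
  return best;
}
```

Feed it your candidate dependencies with measured costs and it tells you which combination ships the most value without blowing any single budget.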

Decisions

One wrong decision - architectural, dependency, design, feature, or scope - could completely blow our budget (£0) and have the CFO (also me) knocking on our door.

Language

Let's think about the first, often overlooked decision when designing anything: LANGUAGE.

Python is the obvious villain. Start with an already heavy runtime, add pydantic, type hints, FastAPI, asyncio and a few other ecosystem staples, and you've consumed significant compute before doing anything useful. On Cloudflare Workers this doesn't even enter the conversation - Python Workers are experimental with severe constraints.

But even outside Workers — Lambda/container/VM — it matters:

  • Python cold starts are slower, the runtime is heavier
  • Go and Rust compile to tiny static binaries with negligible startup
  • TypeScript compiles to efficient JS, runs natively in V8

So picking the wrong language costs you: CPU time, bundle size, memory.

In the age of AI, language is no longer a bottleneck. "Translating paradigms" is easier. Converting Python to Go, understanding a Rust crate well enough to use it, reading C++ to understand what a WASM module actually does - these used to require years of experience. Realistically now you can give it a skim (provided you're decent at programming and can translate concepts well) and get to prompting.

So we go right back to picking the right language for the job. Language is a tool! Everything else is a learning experience and you have a very patient tutor (Claudius).

Hosting

The first hosting decision: do you need a server at all?

Cloudflare offers two products:

  • Pages is for static sites — unlimited requests, unlimited bandwidth, auto-deploys on git push, zero configuration. This blog runs on it. If your project is purely frontend with no server-side logic, stop here, you're done.
  • Workers is for anything dynamic. If a site needs server-side rendering, Stripe webhooks, or KV storage, slap it on Workers.

Cloudflare is super generous. CF's 100k requests/day is lowkey insane.

What either product eliminates: no servers to SSH into, no SSL certificates to renew, no Nginx configs, no uptime monitoring for your own machine. So you're free of all this ops work!
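The split is simple in code too. A minimal sketch of a dynamic Worker - the platform serves your static files, and the Worker only wakes up for API routes (the `/api/health` route here is made up for illustration):

```typescript
// Smallest useful dynamic Worker: handle API routes, let everything else
// fall through to static assets. No framework, near-zero bundle cost.

const handler = {
  async fetch(request: Request): Promise<Response> {
    const { pathname } = new URL(request.url);

    if (pathname === "/api/health") {
      return new Response(JSON.stringify({ ok: true }), {
        headers: { "Content-Type": "application/json" },
      });
    }

    // In a real deployment this branch would defer to Workers Assets;
    // here it just 404s.
    return new Response("Not found", { status: 404 });
  },
};

export default handler;
```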

Compute

Once upon a time, in a non-K-shaped economy where the commoner had some disposable income, we could get away with a lot. Not so much anymore, so we must scrutinise and optimise.

For CF Workers the hard limit is 10ms of CPU per request. Crucially that's CPU time, not wall-clock time - waiting on I/O like fetch or KV doesn't count. The budget is on compute intensity only.

So what can you do to remain within this budget?

  • Make the user pay for it (computationally)!: The browser has no CPU budget. Anything the user can wait for that doesn't need server authority - heavy rendering, complex formatting, rich UI state - belongs there. The Worker does auth, data fetching and light transformation.

  • Profiling!!: wrangler dev (the Cloudflare CLI's local dev server) gives you CPU time per request. You may realise your 10ms is going somewhere completely unexpected. Measure and profile before pushing to prod to avoid any unwanted surprises.

  • Caching!!!: CF's Cache API lets you serve responses without touching your Worker at all. For anything that doesn't change per-user, this eliminates CPU cost entirely.
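The caching point in code, as a hedged sketch: this assumes the Workers runtime (where the `caches.default` global exists), and the `/api/stats` route and its payload are made up stand-ins for your own shared, non-per-user work:

```typescript
// Cache API pattern: on a hit, the response comes straight from the edge
// cache and almost none of the 10ms CPU budget is spent.

declare const caches: {
  default: {
    match(req: Request): Promise<Response | undefined>;
    put(req: Request, res: Response): Promise<void>;
  };
};

type Ctx = { waitUntil(p: Promise<unknown>): void };

async function computeStats(): Promise<Response> {
  // Stand-in for the expensive-but-shared computation.
  const body = JSON.stringify({ ok: true, generatedAt: Date.now() });
  return new Response(body, {
    headers: {
      "Content-Type": "application/json",
      // Tells the edge cache how long this response may be reused.
      "Cache-Control": "public, max-age=300",
    },
  });
}

const handler = {
  async fetch(request: Request, _env: unknown, ctx: Ctx): Promise<Response> {
    const hit = await caches.default.match(request);
    if (hit) return hit; // cache hit: skip the expensive work entirely

    const response = await computeStats();
    // Store after responding - the client never waits on the cache write.
    ctx.waitUntil(caches.default.put(request, response.clone()));
    return response;
  },
};

export default handler;
```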

Then there are the things that require serious hacking or simply aren't possible:

  • No native binaries: .node files and C extensions don't run in V8 isolates. If a package depends on them, find another - or better yet, write your own.

  • No runtime WASM compilation: you can use WASM, but not by compiling it at request time. It requires static imports and a custom injection pipeline at build. I've got a writeup on how I bypassed this (risky) coming soon.

Bundle and Dependency Management

Every package you install costs bundle size, potentially CPU time, potentially memory. On a generous cloud budget this is invisible, but broke, you have to think. You start asking "what does this actually do, and could I write the ten lines myself?"

Especially now with AI and Claude and agents: why the hell would I download a 2MB library when I can identify exactly what I need from it and code it myself?

An example from some user-input work I did recently: I wanted to filter profanity from a user leaderboard, i.e. no racism (bad) and no sex words (bad). When I posed this to Claude it suggested I install the 'bad-words' library - 365KB unpacked. I thought hmmm... I know all the bad words and some of the sex words, and I'm sure Claude knows them all too (he did). So why wouldn't I ask it to write a solution that bypasses the dependency entirely? And we did! I'm sure the filter misses some, but even that allows for creativity - and rewards users for finding inventive ways around it...
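The DIY version is genuinely tiny. A hedged sketch of the approach - the word list here is a placeholder (in practice you'd have Claude generate a fuller one), and the leetspeak folding covers only the obvious substitutions:

```typescript
// A minimal denylist profanity check: normalise common evasions, then
// substring-match against a blocked list. A few KB instead of 365KB.

const BLOCKED = ["badword", "anotherbadword"]; // placeholder entries

function normalise(s: string): string {
  // Fold common evasions: case, digits-for-letters, repeated characters.
  return s
    .toLowerCase()
    .replace(/0/g, "o").replace(/1/g, "i").replace(/3/g, "e")
    .replace(/4/g, "a").replace(/5/g, "s").replace(/7/g, "t")
    .replace(/(.)\1+/g, "$1"); // "baaadword" -> "badword"
}

function isClean(name: string): boolean {
  const n = normalise(name);
  return !BLOCKED.some((w) => n.includes(w));
}
```

Substring matching has the classic false-positive problem (the "Scunthorpe problem"), so for real use you'd tune the list with that in mind.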

Then there's tree shaking:

import moment from 'moment'        // CommonJS — entire 67KB bundled no matter what
import { format } from 'date-fns'  // ESM — only format() and its deps

Named imports from an ESM package aren't enough on their own either. The package's package.json must declare that it has no side effects, and your bundler must be configured to shake.
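Concretely, that declaration is the `sideEffects` field. A minimal sketch of what a shakeable package's package.json looks like (the package name and paths are illustrative):

```json
{
  "name": "some-esm-lib",
  "sideEffects": false,
  "module": "dist/index.js"
}
```

`"sideEffects": false` is the package author promising the bundler that importing any file does nothing observable on its own, so unused exports can be dropped. esbuild (which Wrangler uses under the hood) tree-shakes ESM by default.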

So we must analyse bundles. wrangler deploy --dry-run --outdir dist outputs the bundle without deploying. Feed it to esbuild --analyze or source-map-explorer and we're able to see exactly which module is taking up space. The compressed size Wrangler prints after deploy is useful; knowing which specific import caused a 200KB jump is more useful. You find things here that would never show up in code review.

Feature Selection

Careful feature selection: AI has made feature creation and extension unquenchable, but under a hard constraint we must exercise restraint and apply business logic. What are our users actually using? What must we sacrifice for another? We can't add everything. Bloat is bad.

Monitoring and Observability

Monitoring tools are part of the knapsack. They have bundle size, CPU cost and often latency on every request. We can't bolt on every observability tool and pretend it's free.

Two categories:

  • Infrastructure-level - zero cost. Cloudflare's built-in analytics and logs don't touch your code. Tail Workers sit in the same out-of-band category.

  • SDK-based — costs you. PostHog JS is ~47KB gzipped. Sentry wraps every request handler and adds ~20KB.

We do have a little hack I've been experimenting with on CF Workers: ctx.waitUntil(), which lets work run after the response has been sent, so the user never waits on your analytics calls.
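A sketch of the waitUntil pattern - the PostHog endpoint and event shape below follow their capture API, but treat the details (and the `POSTHOG_KEY` binding name) as illustrative rather than gospel:

```typescript
// Fire analytics after the response has gone out: the tracking call rides
// on ctx.waitUntil, so it adds zero latency for the user.

interface Env { POSTHOG_KEY: string }
type Ctx = { waitUntil(p: Promise<unknown>): void };

function track(env: Env, event: string, distinctId: string): Promise<unknown> {
  // Endpoint/shape per PostHog's capture API; adjust for your instance.
  return fetch("https://app.posthog.com/capture/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_key: env.POSTHOG_KEY, event, distinct_id: distinctId }),
  });
}

const handler = {
  async fetch(request: Request, env: Env, ctx: Ctx): Promise<Response> {
    const response = new Response("ok");
    // Queued here, executed after the response is returned to the client.
    ctx.waitUntil(track(env, "page_view", request.headers.get("CF-Connecting-IP") ?? "anon"));
    return response;
  },
};

export default handler;
```

Note the work still counts toward your CPU budget; waitUntil hides latency from the user, it doesn't make the compute free.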

Ultimately though it's the same as feature selection - we need to pick by value:

  • Cloudflare analytics and logs are a no-brainer - they're free and already there.
  • PostHog if you're selling something and need funnel data.
  • Sentry if you're in production and need to know when things break.
  • Langfuse only if you're running LLM features.

Every one of these is a bᵢ in the knapsack.

Takeaways

Being forced up against the hard constraint of "this literally will not work unless I crack it" beats the soft feedback loop of "I should optimise this". Can't say I don't enjoy it though. The rich folk wouldn't understand.

I reckon I'll continue this series, with the rest of the posts going deep on specific parts of the stack - but only as I find legit optimisations, not nonsense!

Currently a WIP! Will continue to iterate on this post and polish it up, thought I'd publish it early as I still think it's useful in the state it's in.

Oh you disagree? Email me here to argue -> ossamachaib.cs@gmail.com.

You can also email me for any reason other than SPAM!

Get notified on new posts

No spam. Ever. I will cherish and guard your email like a newborn baby. A beautiful doe.