Memoization: The Art of Teaching Your Code to Remember
The Core Insight
Memoization is elegantly simple: cache a function's results by its inputs. When the same input appears again, return the cached value instead of recomputing. You're trading memory for speed—replacing repeated work with instant lookups.
The Problem: Counting Paths Up Stairs
Imagine a staircase where you can climb 1, 3, or 5 steps at a time. How many distinct ways can you reach step n?
Let f(n) be the count of ways. The recurrence relation is:
f(n) = f(n-1) + f(n-3) + f(n-5)
With base cases:
- f(0) = 1 (one valid way: do nothing)
- f(n) = 0 when n < 0 (an overshoot is an invalid path)
Naive Recursion: Clean but Catastrophically Slow
This reads like the mathematical definition. Beautiful, right?
The problem: identical subproblems explode across branches. For n = 10, stepCount(5) gets recomputed five separate times, once for every distinct path from 10 down to 5. The call tree grows exponentially, and we keep solving the same problems over and over.
Try stepCount(35) and watch your CPU fan spin up. For n = 50, you might as well go make coffee.
Memoization: Add a Memory
Store results by input. On repeat calls, return the stored value instantly.
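One way to sketch it, threading a memo object through the recursion (the parameter name is illustrative):

```javascript
function stepCount(n, memo = {}) {
  if (n < 0) return 0;
  if (n === 0) return 1;
  if (n in memo) return memo[n]; // cache hit: the answer is already known
  memo[n] = stepCount(n - 1, memo)
          + stepCount(n - 3, memo)
          + stepCount(n - 5, memo);
  return memo[n];
}
```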
Now n = 35 returns instantly. Even n = 100 is trivial because each distinct value of n is computed exactly once. (At that size the count itself exceeds JavaScript's safe integer range, so reach for BigInt if you need the exact value.)
A Reusable Memoization Utility
Rather than manually threading caches, build a generic memoizer:
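One reasonable shape for this utility, keyed on JSON.stringify of the arguments (a sketch, not the only possible design):

```javascript
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    // Serialize the arguments into a cache key. Works well for
    // primitives and plain data; see the gotchas below for its limits.
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn.apply(this, args);
    cache.set(key, result);
    return result;
  };
}

// Usage
const fastStepCount = memoize(function (n) {
  if (n < 0) return 0;
  if (n === 0) return 1;
  // Recurse through the memoized wrapper so every level hits the cache.
  return fastStepCount(n - 1) + fastStepCount(n - 3) + fastStepCount(n - 5);
});

fastStepCount(100); // instant
```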
LRU Cache: Bounded Memory
For long-running applications, unbounded caches are dangerous. Implement a Least Recently Used cache:
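A compact sketch built on a Map, whose insertion order doubles as a recency list (the class and method names here are illustrative):

```javascript
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // iterates in insertion order: oldest first
  }

  has(key) {
    return this.map.has(key);
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    // Re-insert to mark the entry as most recently used.
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (first in iteration order).
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}
```

Because it exposes the same has/get/set surface as a Map, it can drop into the memoize utility above to put a hard ceiling on cache size.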
Why It Works
Memoization transforms a branching recursion tree into a directed acyclic graph. Each unique input becomes a single node. Compute once, reuse forever.
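You can check the compute-once claim directly. This instrumentation sketch (the counter is purely illustrative) tallies real computations while cache hits bypass it:

```javascript
let computations = 0;

function countedStepCount(n, memo = {}) {
  if (n < 0) return 0;
  if (n === 0) return 1;
  if (n in memo) return memo[n]; // cache hit: no new computation
  computations++;
  memo[n] = countedStepCount(n - 1, memo)
          + countedStepCount(n - 3, memo)
          + countedStepCount(n - 5, memo);
  return memo[n];
}

countedStepCount(100);
console.log(computations); // 100: one computation per distinct n
```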
It's like saving a phone number in your contacts. The first time someone gives you their number, you type it in. Every time after that, you just look it up instead of asking again. Memoization is your contact list for function results.
When to Reach for Memoization
It shines when you have:
- Pure functions — output depends only on input, no side effects
- Overlapping subproblems — the same inputs recur across different execution paths
- Manageable state space — unique inputs fit comfortably in memory
Common applications:
- Dynamic programming: Fibonacci, coin change, edit distance, longest common subsequence
- Graph algorithms: path counting, shortest paths with repeated states
- Game engines: transposition tables in chess cache board evaluations to skip re-analyzing identical positions
- API responses: cache expensive computations or external calls
The Bottom-Up Alternative
Sometimes you can flip the script—compute iteratively from base cases up:
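A sketch of the iterative version, filling a table from 0 up to n:

```javascript
function stepCountBottomUp(n) {
  const ways = new Array(n + 1).fill(0);
  ways[0] = 1; // base case: one way to be at the bottom (do nothing)
  for (let i = 1; i <= n; i++) {
    for (const step of [1, 3, 5]) {
      if (i - step >= 0) ways[i] += ways[i - step];
    }
  }
  return ways[n];
}
```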
- Time: O(n × k), where k is the number of allowed step sizes (3 here)
- Space: O(n)
- No recursion: avoids stack overflow for large n
Choosing Your Approach
| Approach | Best For |
|---|---|
| Memoization | Natural recursive problems, lazy evaluation, when not all subproblems are needed |
| Bottom-up DP | Clear state space, tight memory control, avoiding recursion limits |
| LRU caching | Long-running services, memory-constrained environments |
Practical Gotchas
- Memory growth: Unbounded caches can leak memory. Use LRU or clear caches periodically.
- Cache key design: Keys must be immutable and capture all inputs that affect the result. Use JSON.stringify for simplicity, but beware of object key ordering (a stable-key sketch follows this list).
- Break-even point: For tiny inputs, cache overhead can exceed computation cost. Profile before optimizing.
- Cache invalidation: If the underlying behavior changes (config updates, time-dependent logic), version your cache or clear it.
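For the key-ordering pitfall, one workable sketch is to serialize object keys in sorted order (illustrative; it handles plain nested data only, not Dates, Maps, or cyclic structures):

```javascript
function stableKey(value) {
  if (value === null || typeof value !== 'object') {
    return JSON.stringify(value);
  }
  if (Array.isArray(value)) {
    return '[' + value.map(stableKey).join(',') + ']';
  }
  // Sort keys so {a: 1, b: 2} and {b: 2, a: 1} produce the same key.
  return '{' + Object.keys(value).sort()
    .map(k => JSON.stringify(k) + ':' + stableKey(value[k]))
    .join(',') + '}';
}

stableKey({ a: 1, b: 2 }) === stableKey({ b: 2, a: 1 }); // true
```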
The Takeaway
Memoization is a small change with outsized impact. Cache results by input, reuse them on repeat calls. For problems with overlapping subproblems, it transforms exponential blowups into linear-time solutions.
Design your keys carefully, bound your memory when needed, and you'll have code that's both elegant and fast.