
Big-O Complexity Cheatsheet for Coding Interviews 2026

10 min read · April 24, 2026

A no-fluff Big-O reference card covering every complexity class, data structure, and algorithm pattern you'll face in coding interviews.


If you can't instantly recall that a binary search runs in O(log n) or that inserting into a hash map is O(1) amortized, you will bleed time in interviews explaining yourself when you should be coding. Big-O is the shared language of technical interviews — interviewers at Amazon, Google, Meta, and every mid-size tech company expect you to narrate complexity as fluently as you narrate logic. This guide is a dense, opinionated reference you can internalize in a weekend and use as a mental checklist in every coding round. We've structured it so you can skim the headings for a quick refresh or read deeply before your first interview sprint.

One important framing note before we dive in: Big-O describes the upper bound on growth rate, not raw speed. An O(n log n) sort can be faster in practice than an O(n) algorithm with enormous constant factors. Interviewers know this. You should too — and saying so out loud will make you sound like a senior engineer, not a textbook reciter.

The Seven Complexity Classes You Actually Need to Know

Forget the full academic taxonomy. In coding interviews, you will encounter exactly these complexity classes, roughly ordered from fastest to slowest:

  1. O(1) — Constant time. The operation doesn't grow with input size. Hash map lookup, array index access, stack push/pop. This is the holy grail — always ask yourself if an O(1) structure can replace a slower one.
  2. O(log n) — Logarithmic time. Input is halved at each step. Binary search, balanced BST operations, heap insert/extract. Any time you see a sorted structure, think log n.
  3. O(n) — Linear time. You touch every element once. Single-pass array scans, linked list traversal, BFS/DFS on a graph with n nodes and no backtracking.
  4. O(n log n) — Linearithmic time. The best you can do for comparison-based sorting. Merge sort, heapsort, Timsort (Python's default). If someone asks you to sort, this is your floor.
  5. O(n²) — Quadratic time. Nested loops over the same input. Bubble sort, insertion sort on unsorted data, naive string matching. Acceptable only for small n (say, n ≤ 1,000). Flag it and optimize if n can grow.
  6. O(2ⁿ) — Exponential time. Recursive brute force without memoization. Generating all subsets, naive Fibonacci. Almost always a signal that dynamic programming or pruning is needed.
  7. O(n!) — Factorial time. Generating all permutations. Only tolerable for n ≤ 10 or so. If you're here unintentionally, you've missed a key constraint.
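To make the gap between classes concrete, here's a minimal sketch contrasting an O(n) linear scan with an O(log n) binary search on the same sorted array — the function names are illustrative, not from any library:

```python
def linear_search(arr, target):
    """O(n): may touch every element before finding (or missing) the target."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(arr, target):
    """O(log n): halves the search space each step; arr must be sorted."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

On a million-element array, the linear scan does up to 10⁶ comparisons while binary search does about 20 — that's the log n difference in practice.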

The single most useful interview habit: before writing a single line of code, say out loud what complexity you're targeting and why. It reframes the problem and signals seniority.

Data Structure Complexity — The Card You Should Have Memorized

Every data structure question bottoms out in these numbers. Know them cold.

Arrays (dynamic, e.g., Python list, Java ArrayList)

  • Access by index: O(1)
  • Search (unsorted): O(n)
  • Search (sorted, binary): O(log n)
  • Insert/delete at end: O(1) amortized
  • Insert/delete at middle: O(n)

Hash Maps / Hash Sets

  • Insert, delete, lookup: O(1) average, O(n) worst case (hash collisions)
  • Iteration: O(n)
  • The worst case almost never matters in interviews — say "O(1) average" and move on unless you're asked about adversarial inputs.
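The classic illustration of why O(1) average lookup matters is the two-sum pattern: a hash map turns an O(n²) nested scan into a single O(n) pass. A minimal sketch:

```python
def two_sum(nums, target):
    """O(n) time, O(n) space: one pass, one O(1)-average lookup per element.
    Returns the pair of indices, or None if no pair sums to target."""
    seen = {}  # value -> index of where we saw it
    for i, x in enumerate(nums):
        if target - x in seen:
            return seen[target - x], i
        seen[x] = i
    return None
```

The trade-off — O(n) extra space for the map in exchange for O(n) time — is exactly the kind of thing to state out loud in the interview.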

Linked Lists (singly and doubly)

  • Access / search: O(n)
  • Insert/delete at head: O(1)
  • Insert at tail (with tail pointer): O(1). Deleting the tail is O(1) only for a doubly linked list — a singly linked list still needs an O(n) walk to reach the predecessor.
  • Insert/delete in middle (given a pointer to the node): O(1) for a doubly linked list; singly linked deletion again needs the predecessor.
  • Finding the middle: O(n) — this is why slow/fast pointer tricks exist.
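The slow/fast pointer trick finds the middle in one O(n) pass with O(1) space — no length counting required. A minimal sketch, with a throwaway Node class for illustration:

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def middle(head):
    """O(n) time, O(1) space: fast advances two steps per slow step,
    so slow sits at the middle when fast runs off the end."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow
```

For even-length lists this returns the second of the two middle nodes — mention which convention you're using before the interviewer asks.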

Binary Search Trees (balanced: AVL, Red-Black)

  • Search, insert, delete: O(log n)
  • In-order traversal: O(n)
  • Unbalanced BST degrades to O(n) — always specify "balanced" when you claim log n.

Heaps (min-heap / max-heap)

  • Insert: O(log n)
  • Extract min/max: O(log n)
  • Peek min/max: O(1)
  • Build from array (heapify): O(n) — this surprises people; it's not O(n log n).
  • Find kth largest/smallest: a heap is almost always your tool here.
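Here's the kth-largest pattern as a sketch using Python's `heapq`: keep a min-heap of the k largest values seen so far, so the root is always the current kth largest.

```python
import heapq

def kth_largest(nums, k):
    """O(n log k) time, O(k) space: min-heap of the k largest seen so far."""
    heap = nums[:k]
    heapq.heapify(heap)  # O(k) build -- heapify is linear, not O(k log k)
    for x in nums[k:]:
        if x > heap[0]:  # smaller than the current top-k? skip it
            heapq.heapreplace(heap, x)  # pop + push in one O(log k) step
    return heap[0]
```

Note the comment on `heapify`: building the heap is O(k), which is the linear-heapify fact from the bullet above.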

Graphs (adjacency list representation)

  • BFS / DFS: O(V + E) where V = vertices, E = edges
  • Dijkstra (with min-heap): O((V + E) log V)
  • Bellman-Ford: O(VE)
  • Always clarify whether the graph is sparse or dense — it changes whether adjacency list or matrix is better.
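BFS hitting O(V + E) falls straight out of the adjacency-list representation: each vertex is enqueued once, each edge examined once. A minimal sketch (the dict-of-lists graph shape is an assumption for illustration):

```python
from collections import deque

def bfs_distances(adj, source):
    """O(V + E): shortest hop-count from source in an unweighted graph.
    adj maps each vertex to a list of its neighbors."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:  # first visit = shortest path in unweighted graphs
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```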

Stacks and Queues

  • Push, pop, enqueue, dequeue: O(1)
  • These are almost always wrappers around arrays or linked lists — the complexity follows from the underlying structure.

Tries

  • Insert / search: O(m) where m = length of the key string
  • Space: O(ALPHABET_SIZE × m × n) — tries trade space for speed aggressively.
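A trie small enough to write from memory in an interview — this sketch uses nested dicts rather than fixed-size child arrays, trading the ALPHABET_SIZE factor for hash-map overhead:

```python
class Trie:
    """Insert and search in O(m), m = key length, regardless of key count."""
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})  # one O(1)-average step per char
        node["$"] = True  # end-of-word marker

    def search(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node
```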

Sorting Algorithm Complexity — Stop Guessing

Sorting comes up constantly, either directly or as a preprocessing step. Here's what you need:

  • Merge sort: O(n log n) time, O(n) space. Stable. The gold standard for comparison sorts.
  • Quick sort: O(n log n) average, O(n²) worst case, O(log n) average stack space (in-place partitioning). Unstable. Fastest in practice due to cache behavior.
  • Heap sort: O(n log n) time, O(1) space. Unstable. Good when memory is constrained.
  • Timsort (the default in Python, and in Java for object arrays): O(n log n) worst, O(n) best (nearly sorted data). Stable.
  • Counting sort: O(n + k) where k = range of values. Only works on integers in a known range. O(n) is achievable when k is small.
  • Radix sort: O(nk) where k = number of digits. Beats comparison sorts for integers with bounded digit count.
  • Bubble / insertion / selection sort: O(n²). Never suggest these for large inputs. Insertion sort is fine for nearly sorted small arrays — know when to say so.

The rule of thumb: if you're sorting and n can be large, your expected complexity is O(n log n). If someone asks you to do better, they're pushing you toward counting sort, radix sort, or a constraint you haven't exploited yet.
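When the constraint you can exploit is a bounded integer range, counting sort beats the O(n log n) floor. A minimal sketch, assuming non-negative integers strictly below k:

```python
def counting_sort(nums, k):
    """O(n + k) time, O(k) space: nums must be integers in range [0, k)."""
    counts = [0] * k
    for x in nums:           # O(n): tally each value
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):  # O(k): emit values in order
        out.extend([value] * c)
    return out
```

Notice there are no comparisons anywhere — that's how it sidesteps the comparison-sort lower bound.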

Space Complexity — The Dimension Most Candidates Ignore

Interviewers at senior levels specifically probe space complexity because it's where junior engineers get lazy. Every time you quote a time complexity, have a space complexity ready.

  • In-place algorithms use O(1) auxiliary space. Two-pointer techniques and most sliding window approaches qualify. Quicksort partitions in place but still spends O(log n) average stack space on recursion — worth flagging if you cite it.
  • Recursion depth counts as space. A recursive DFS on a tree of height h uses O(h) stack space — O(n) in the worst case (skewed tree), O(log n) for a balanced tree. If the interviewer asks you to optimize space, iterative with an explicit stack is your move.
  • Hash maps and sets used for memoization or visited tracking cost O(n) space. Don't forget to mention this.
  • Storing output doesn't count against auxiliary space in most interview conventions — but verify this assumption with your interviewer when the output is large.
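The "iterative with an explicit stack" move from the recursion bullet looks like this in practice — a sketch of depth-first traversal that swaps O(h) call-stack frames for an explicit list (which also dodges Python's recursion limit on deep inputs):

```python
def dfs_iterative(adj, source):
    """O(V + E) time; the explicit stack replaces the recursion stack.
    adj maps each vertex to a list of its neighbors."""
    visited = set()
    stack = [source]
    order = []
    while stack:
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        for v in reversed(adj[u]):  # reversed so visit order matches recursion
            if v not in visited:
                stack.append(v)
    return order
```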

Space is the dimension that reveals whether you've actually shipped production systems. A candidate who volunteers "this uses O(n) extra space — here's how we could reduce it" reads as someone who has cared about real resource constraints.

Recognizing Complexity Patterns in Interview Problems

The hardest skill isn't memorizing tables — it's looking at a problem and instantly knowing what complexity class is achievable. Here are the dominant patterns:

Sliding window → O(n). Any problem asking for the longest/shortest subarray or substring satisfying a condition. Two pointers expand and contract the window; each element enters and exits at most once.
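The canonical instance of this pattern is longest substring without repeating characters — a sketch showing the expand/contract mechanics:

```python
def longest_unique_substring(s):
    """O(n) time: each character enters and exits the window at most once."""
    last_seen = {}   # char -> index of its most recent occurrence
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1  # contract: jump past the duplicate
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```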

Binary search on the answer → O(log n × cost of check). When the answer is a monotone function of some parameter ("find the minimum capacity such that..."), binary search on the answer space. The check function runs once per binary search step.
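A sketch of the pattern using the classic "ship all packages within D days" setup — feasibility is monotone in the capacity, so we binary search on capacity and run an O(n) greedy check per step (the function names here are illustrative):

```python
def min_capacity(weights, days):
    """Binary search on the answer: O(n log(sum(weights))) overall."""
    def days_needed(cap):
        """O(n) greedy check: how many days does capacity `cap` require?"""
        total, d = 0, 1
        for w in weights:
            if total + w > cap:
                d += 1
                total = 0
            total += w
        return d

    lo, hi = max(weights), sum(weights)  # answer must lie in this range
    while lo < hi:
        mid = (lo + hi) // 2
        if days_needed(mid) <= days:
            hi = mid        # feasible -- try smaller capacity
        else:
            lo = mid + 1    # infeasible -- need more capacity
    return lo
```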

BFS for shortest path → O(V + E). Whenever you need minimum steps or minimum hops in an unweighted graph, BFS guarantees shortest path. DFS does not.

Dynamic programming → reduces exponential to polynomial. If brute force is O(2ⁿ) due to overlapping subproblems, DP with memoization typically brings it to O(n²) or O(n). Always identify the state space first — the number of unique states bounds your DP complexity.
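Naive Fibonacci versus its memoized version is the cleanest demonstration of this collapse — same recurrence, exponential versus linear, because memoization caps the work at one computation per unique state:

```python
from functools import lru_cache

def fib_naive(n):
    """O(2^n) time, O(n) stack space: overlapping subproblems recomputed."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """O(n) time, O(n) space: n + 1 unique states, each computed once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

The state space here is just {0, ..., n}, which is exactly why the memoized complexity is O(n) — count the states first, as the paragraph above says.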

Heap for streaming top-k → O(n log k). Maintaining a min-heap of size k while scanning n elements gives you the top k in O(n log k) — better than sorting all n when k ≪ n.

Union-Find for connectivity → O(α(n)) per operation. With path compression and union by rank, Union-Find operations are effectively O(1) for all practical purposes. α(n) is the inverse Ackermann function — it's never greater than 4 for any real-world n.
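The whole structure fits in a few lines — a sketch with path compression (here via path halving) and union by rank:

```python
class UnionFind:
    """Path compression + union by rank: effectively O(1) amortized per op."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # compress the path
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already connected
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra  # attach the shorter tree under the taller one
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```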

How to Communicate Complexity Without Sounding Robotic

Knowing the numbers is table stakes. Communicating them well is what separates candidates who get offers from candidates who get "strong no hire" with a note saying they lacked depth.

Here's a framework for narrating complexity live:

  1. State your target upfront. "I want to solve this in O(n) time and O(1) extra space."
  2. Justify each loop or recursive call. "This outer loop runs n times. The inner operation is a hash map lookup, so O(1). Total: O(n)."
  3. Flag trade-offs explicitly. "I'm trading O(n) space for the hash set to bring time from O(n²) to O(n). If memory were constrained, I'd sort first and use two pointers for O(n log n) time, O(1) space."
  4. Acknowledge amortization when it applies. "Dynamic array append is O(1) amortized — occasionally O(n) for a resize, but spread across operations it averages out."
  5. Connect to real constraints. "With n up to 10⁶, O(n²) would be 10¹² operations — non-starter. O(n log n) gives us about 2×10⁷, which is comfortably within a 1-second time limit."

That last point — connecting complexity to concrete operation counts — is a senior engineer move. Most candidates quote complexity in a vacuum. You should quote it in the context of the input constraints given in the problem.

The Complexity Traps That Kill Candidates in Interviews

These are the mistakes we see most often. Avoid every single one:

  • Forgetting that string concatenation is O(n). In Python and Java, s += char inside a loop is O(n²) total, not O(n). Use a list/array and join at the end.
  • Calling .sort() inside a loop. If you sort an n-element array inside an n-iteration loop, that's O(n² log n). Preprocess or use a heap.
  • Assuming hash map is always O(1). Under adversarial inputs (custom hash collision attacks), it degrades to O(n). For most interviews this doesn't matter, but if you're interviewing at a security-conscious company, know this.
  • Miscounting recursion depth for space. Recursive Fibonacci without memoization is O(2ⁿ) time and O(n) space (call stack depth). Both matter.
  • Claiming O(log n) for an unbalanced BST. Always say "balanced BST" or "we assume the tree is balanced." Interviewers will poke at this.
  • Ignoring the cost of building a data structure. Constructing a heap from scratch is O(n). Building a trie from n strings of average length m is O(nm). These costs belong in your overall analysis.
  • Mixing up O(n) and O(n²) for nested structures. Two nested loops over different arrays of size n and m is O(nm), not O(n²). O(n²) is specifically when both loops iterate over the same n.
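The string-concatenation trap from the first bullet, side by side with the fix — a sketch showing why you collect pieces and join once:

```python
def build_quadratic(chars):
    """O(n^2) total in the worst case: each += may copy the whole string,
    since strings are immutable."""
    s = ""
    for ch in chars:
        s += ch
    return s

def build_linear(chars):
    """O(n): accumulate pieces, pay the copy cost once in join()."""
    parts = []
    for ch in chars:
        parts.append(ch)  # amortized O(1) append
    return "".join(parts)
```

Same output, different complexity class — exactly the kind of hidden O(n²) to hunt for in your own solutions.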

Next Steps

You've got the reference card. Now make it stick before your next interview cycle:

  1. Build a physical flashcard deck this week. Write one data structure or algorithm per card — operation on the front, time and space complexity on the back. Quiz yourself for 10 minutes daily. Spaced repetition beats re-reading a cheat sheet every time.
  2. Solve 10 LeetCode problems and narrate complexity out loud for every single one. Don't just write the solution — record yourself explaining the time and space complexity as if you're in an interview. Listen back. Fix the hedging.
  3. Audit your last 5 coding solutions for hidden O(n²) patterns. Look for string concatenation in loops, sorting inside loops, and nested iterations where you assumed O(n). You will find at least one.
  4. Practice the trade-off conversation with a mock interview partner. Ask them to respond "can you do better?" after every solution. Your job is to have a ready answer: either a tighter algorithm or a clear argument for why the current complexity is optimal.
  5. Memorize the input size → complexity heuristic table: n ≤ 20 means exponential is okay; n ≤ 500 means O(n²) is fine; n ≤ 10⁵ means O(n log n) is your target; n ≤ 10⁶ means you probably need O(n); n > 10⁸ means you're likely doing math or O(log n) tricks. This alone will save you from proposing obviously-too-slow solutions in time-pressured rounds.