Big-O & How to Think About Complexity
What O(1), O(n), and O(log n) really mean — and why getting them right is half of CS.
Why we measure complexity
Two algorithms can solve the same problem and produce identical answers, but one runs in 1 millisecond and the other takes 11 days. The difference is complexity: how the runtime (or memory use) grows as the input grows.
We don't measure in seconds — that depends on hardware, programming language, and what else the computer is doing. Instead, we count the number of basic operations as a function of input size n, and describe how that function grows. The notation we use is Big-O.
Definition of Big-O
We say an algorithm is O(f(n)) if, for large enough n, its runtime is bounded above by some constant times f(n). In practice, this means we throw away two things:
- Constant factors — 5n and 1000n are both O(n). At scale, the constant matters less than the SHAPE of the curve.
- Lower-order terms — n² + n + 7 is just O(n²). The largest term dominates as n grows.
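Stated a bit more precisely: an algorithm's running time T(n) is O(f(n)) if there are constants c > 0 and n₀ such that T(n) ≤ c · f(n) for every n ≥ n₀. For example, n² + n + 7 ≤ 2n² once n ≥ 4, so n² + n + 7 is O(n²) with c = 2.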
[Interactive chart: each complexity curve plotted against n, normalized to its own max so you can see the SHAPE, with a slider for n showing actual operation counts.] Already at n = 40, O(2ⁿ) reaches about a trillion operations. That's why brute-force subset enumeration becomes impossible past ~30 elements.
The complexity classes you must know
- O(1) — constant. Doesn't depend on n. Hash map lookup, array indexing.
- O(log n) — logarithmic. Halves (or otherwise reduces by a constant factor) each step. Binary search.
- O(n) — linear. One pass through the data. A single for loop.
- O(n log n) — log-linear. Comparison-based sorts (mergesort, heapsort, Python's Timsort).
- O(n²) — quadratic. Nested loops over n. Acceptable for n in the thousands; painful past that.
- O(n³) — cubic. Triple-nested loops. Avoid unless n is tiny.
- O(2ⁿ) — exponential. All subsets, naive recursive Fibonacci (sketched below). Practically intractable past n ~30.
- O(n!) — factorial. All permutations. Tractable only for n ~10.
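To make the exponential case concrete, here's a minimal sketch of the naive recursive Fibonacci from the list above, next to a linear version that just remembers the previous two values. (The naive version's exact growth base is the golden ratio, not 2, but it's exponential either way.)

# exponential — every call spawns two more, so the work roughly
# doubles with each extra unit of n
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# O(n) — keep the last two values instead of recomputing them
def fib_linear(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

On a typical machine, fib_naive(35) already takes seconds; fib_linear handles n in the thousands instantly.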
Concrete intuition
If n is 1,000,000:
- O(log n) — about 20 operations.
- O(n) — 1 million operations. Milliseconds.
- O(n log n) — 20 million. Still fast.
- O(n²) — 1 trillion. Hours.
- O(2ⁿ) — heat death of the universe.
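Back-of-envelope: at a rough 100 million simple operations per second, 1 trillion operations is about 10,000 seconds, i.e. a few hours. The exact rate depends on your machine, which is exactly why Big-O ignores it and keeps only the growth rate.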
Reading complexity from code
# O(1) — does the same thing regardless of input size
def first(arr):
    return arr[0]

# O(n) — touches every element once
def contains(nums, target):
    for n in nums:
        if n == target:
            return True
    return False

# O(n²) — nested loop, both proportional to n
def has_duplicate(nums):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j]:
                return True
    return False

# O(n) — same problem, smarter algorithm
def has_duplicate_fast(nums):
    seen = set()
    for n in nums:
        if n in seen:
            return True
        seen.add(n)
    return False

Notice how `has_duplicate_fast` uses a set to trade SPACE for TIME — we use O(n) extra memory to drop runtime from O(n²) to O(n). This is the single most common optimization pattern in DSA.
O(log n) in motion: binary search
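In code, the idea is short. A minimal sketch (the function name is illustrative): every comparison discards half of the remaining range, so a sorted array of a million elements needs only about 20 probes.

# O(log n) — each iteration halves the remaining search range
def binary_search(sorted_nums, target):
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid          # found it
        elif sorted_nums[mid] < target:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1                   # not present

The catch: the input must already be sorted. O(log n) lookups are the payoff for paying an O(n log n) sort up front, or for maintaining sorted order as you insert.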
O(n²) vs O(n log n) in motion: sorting
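The same contrast as a minimal sketch (function names are illustrative): insertion sort makes nested passes over the input, while mergesort splits the input in half about log n times and does O(n) merging work per level.

# O(n²) — insertion sort: shift each element left past every
# larger element before it
def insertion_sort(nums):
    nums = list(nums)
    for i in range(1, len(nums)):
        j = i
        while j > 0 and nums[j - 1] > nums[j]:
            nums[j - 1], nums[j] = nums[j], nums[j - 1]
            j -= 1
    return nums

# O(n log n) — mergesort: split in half, sort each half, merge
def merge_sort(nums):
    if len(nums) <= 1:
        return list(nums)
    mid = len(nums) // 2
    left, right = merge_sort(nums[:mid]), merge_sort(nums[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

At n in the thousands both finish quickly; at n = 1,000,000 the quadratic sort is the trillion-operation case from the intuition list above, while mergesort stays around 20 million operations.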
Space complexity
Same idea, but for memory: how much extra space does the algorithm use, beyond its input? Use it to compare two solutions when their time complexity is the same.
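A small illustration (function names are illustrative): both functions below reverse a list, but the first allocates a whole new list (O(n) extra space) while the second reuses the input and needs only two indices (O(1) extra space).

# O(n) extra space — builds a full reversed copy
def reversed_copy(nums):
    return nums[::-1]

# O(1) extra space — swaps elements in place from both ends
def reverse_in_place(nums):
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        nums[lo], nums[hi] = nums[hi], nums[lo]
        lo, hi = lo + 1, hi - 1

The earlier `has_duplicate_fast` is the same trade in the other direction: spend O(n) extra space to buy a faster runtime.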
Best, average, worst case
Big-O describes the WORST case unless stated otherwise. "Quicksort is O(n log n)" really means it's O(n log n) on average; its worst case is O(n²), which a naive first-or-last-element pivot hits on already-sorted input. Most textbook claims are average-case for hash maps, worst-case for everything else, so pay attention to which one is being quoted.