Data Structures & Algorithms · 22 min read · Beginner

Big-O & How to Think About Complexity

What O(1), O(n), and O(log n) really mean — and why getting them right is half of CS.

Why we measure complexity

Two algorithms can solve the same problem and produce identical answers, but one runs in 1 millisecond and the other takes 11 days. The difference is complexity: how the runtime (or memory use) grows as the input grows.

🚗
Real-life analogy — Choosing a car for a road trip
For a 5-mile trip, ANY car works. For a 5,000-mile trip across continents, the choice between a city car and a long-haul truck makes a HUGE difference. Algorithms are the same — at small n, everything is fast; at large n, the wrong shape is unusable.

We don't measure in seconds — that depends on hardware, programming language, and what else the computer is doing. Instead, we count the number of basic operations as a function of input size n, and describe how that function grows. The notation we use is Big-O.
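To make "counting basic operations" concrete, here is a minimal sketch: a linear scan instrumented to count comparisons instead of measuring wall-clock time (the function name is mine, for illustration).

```python
def count_ops_linear_scan(nums, target):
    """Scan nums for target, returning the number of comparisons made."""
    ops = 0
    for x in nums:
        ops += 1          # one comparison per element examined
        if x == target:
            break
    return ops

# The count grows in direct proportion to n, regardless of hardware:
# 10 elements -> up to 10 comparisons, 1000 elements -> up to 1000.
```

Run it on inputs of different sizes and the count scales linearly with n, which is exactly what "O(n)" captures.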

Definition of Big-O

We say an algorithm is O(f(n)) if, for large enough n, its runtime is bounded above by some constant times f(n). In practice, this means we throw away two things:

  • Constant factors — 5n and 1000n are both O(n). At scale, the constant matters less than the SHAPE of the curve.
  • Lower-order terms — n² + n + 7 is just O(n²). The largest term dominates as n grows.
Big-O at a glance

Each curve's shape matters more than its absolute height. At n = 20, the operation counts for each class are roughly:

  • O(1) — 1
  • O(log n) — 4
  • O(n) — 20
  • O(n log n) — 86
  • O(n²) — 400
  • O(2ⁿ) — 1,048,576

Push n toward 40 and O(2ⁿ) blows past a trillion. That's why brute-force subset enumeration becomes impossible past ~30 elements.
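The counts in the table above can be reproduced in a few lines of Python, rounding the logarithmic terms to the nearest integer (the function name is mine):

```python
import math

def ops(n):
    """Approximate operation counts for each complexity class at size n."""
    log_n = math.log2(n)
    return {
        "O(1)": 1,
        "O(log n)": round(log_n),
        "O(n)": n,
        "O(n log n)": round(n * log_n),
        "O(n^2)": n ** 2,
        "O(2^n)": 2 ** n,
    }
```

At n = 20 this reproduces the table; at n = 40, `ops(40)["O(2^n)"]` is 1,099,511,627,776 — over a trillion.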

The complexity classes you must know

  • O(1) — constant. Doesn't depend on n. Hash map lookup, array indexing.
  • O(log n) — logarithmic. Halves (or otherwise reduces by a constant factor) each step. Binary search.
  • O(n) — linear. One pass through the data. A single for loop.
  • O(n log n) — log-linear. Comparison-based sorts (mergesort, heapsort, Python's TimSort).
  • O(n²) — quadratic. Nested loops over n. Acceptable for n in the thousands; painful past that.
  • O(n³) — cubic. Triple-nested loops. Avoid unless n is tiny.
  • O(2ⁿ) — exponential. Enumerating all subsets, naive recursive Fibonacci. Practically intractable past n ~30.
  • O(n!) — factorial. All permutations. Tractable only for n ~10.
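The Fibonacci entry in the list above makes a vivid exponential-vs-linear contrast. A minimal sketch (function names are mine): the naive recursion redoes the same subproblems exponentially many times, while caching each result once brings it down to O(n).

```python
from functools import lru_cache

def fib_naive(n):
    """O(2^n): each call spawns two more, recomputing subproblems."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """O(n): each subproblem is computed once and cached."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

`fib_naive(40)` takes noticeable seconds; `fib_memo(40)` is instant — the same answer from a different complexity class.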

Concrete intuition

If n is 1,000,000:

  • O(log n) — about 20 operations.
  • O(n) — 1 million operations. Milliseconds.
  • O(n log n) — 20 million. Still fast.
  • O(n²) — 1 trillion. Hours.
  • O(2ⁿ) — heat death of the universe.

Reading complexity from code

# O(1) — does the same thing regardless of input size
def first(arr):
    return arr[0]

# O(n) — touches every element once
def contains(nums, target):
    for x in nums:
        if x == target:
            return True
    return False

# O(n²) — nested loop, both proportional to n
def has_duplicate(nums):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j]:
                return True
    return False

# O(n) — same problem, smarter algorithm
def has_duplicate_fast(nums):
    seen = set()
    for x in nums:
        if x in seen:
            return True
        seen.add(x)
    return False

Notice how `has_duplicate_fast` uses a set to trade SPACE for TIME — we use O(n) extra memory to drop runtime from O(n²) to O(n). This is the single most common optimization pattern in DSA.

O(log n) in motion: binary search

Binary search — O(log n)
Each step halves the search space: with 1 million items, you find the answer in about 20 steps. Picture searching for 23 in the sorted array [2, 5, 8, 12, 16, 23, 38, 56, 72, 91] — compare against the middle element, discard the half that can't contain the target, and repeat.
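In code, the halving looks like this — a standard iterative binary search:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent. O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1     # target is in the right half
        else:
            hi = mid - 1     # target is in the left half
    return -1

binary_search([2, 5, 8, 12, 16, 23, 38, 56, 72, 91], 23)  # → 5
```

Each loop iteration shrinks `hi - lo` by at least half, so the loop runs at most ⌈log₂ n⌉ + 1 times.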

O(n²) vs O(n log n) in motion: sorting

  • Bubble sort — O(n²). Repeatedly swap adjacent out-of-order pairs; after pass i, the i largest elements have bubbled into place.
  • Merge sort — O(n log n). Recursively split the array in half, sort each half, then merge the two sorted halves.

Try tracing both on [5, 2, 8, 1, 9, 3, 7, 4]: bubble sort makes up to n passes of up to n comparisons each, while merge sort does only log n levels of splitting, with n work per level.
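Both sorts can be sketched in a few lines each (these are textbook versions, not optimized implementations):

```python
def bubble_sort(nums):
    """O(n²): nested passes of adjacent swaps."""
    a = list(nums)                      # sort a copy, leave input intact
    for i in range(len(a)):
        for j in range(len(a) - 1 - i): # last i elements are already placed
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def merge_sort(nums):
    """O(n log n): split in half, sort each half, merge."""
    if len(nums) <= 1:
        return list(nums)
    mid = len(nums) // 2
    left, right = merge_sort(nums[:mid]), merge_sort(nums[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever half remains
```

On [5, 2, 8, 1, 9, 3, 7, 4] both return [1, 2, 3, 4, 5, 7, 8, 9]; the difference only shows at scale.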

Space complexity

Same idea, but for memory: how much extra space does the algorithm use, beyond its input? Use it to compare two solutions when their time complexity is the same.
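A quick illustration of the same task at two space costs (function names are mine): reversing a list by building a copy uses O(n) extra memory, while swapping in place uses O(1).

```python
def reverse_copy(nums):
    """O(n) extra space: allocates a whole new list."""
    return nums[::-1]

def reverse_in_place(nums):
    """O(1) extra space: swaps elements within the input list."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        nums[lo], nums[hi] = nums[hi], nums[lo]
        lo += 1
        hi -= 1
    return nums
```

Both are O(n) time, so space is the tiebreaker — exactly the comparison this section describes.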

Best, average, worst case

Big-O describes the WORST case unless stated otherwise. "Quicksort is O(n log n)" really means it's O(n log n) on average; its worst case is O(n²), which a naive pivot choice (e.g., always picking the first or last element) hits on already-sorted input. Most textbook claims are average-case for hash maps and worst-case for everything else — pay attention.

💡 Tip
In any coding interview, BEFORE writing code, state your approach's time and space complexity. Then write the code. Interviewers care more about that habit than about a one-liner solution.
Quick check — which best describes the complexity of looking up a key in a Python dict? O(1) on average; O(n) in the rare worst case when many keys hash to the same bucket.