Operating Systems · 22 min read · Intermediate

Processes & Threads

What's the difference, when to use each, and the cost of context switches.

A process

A process is a running program with its OWN memory space — its own heap, stack, and code. The OS gives each process the illusion that it owns the entire machine. Two processes cannot read each other's memory unless they explicitly arrange to (via shared memory, pipes, sockets, or files).
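A minimal sketch of that isolation, using Python's `multiprocessing` module (the variable and function names are just for illustration): a child process increments a global, but the write lands in the child's copy of memory and never reaches the parent.

```python
from multiprocessing import Process

counter = 0  # lives in THIS process's memory

def bump():
    global counter
    counter += 1  # modifies the child's own copy only

if __name__ == "__main__":
    p = Process(target=bump)
    p.start()
    p.join()
    print(counter)  # still 0 — the child's write stayed in its address space
```

To actually get the child's result back, the parent would have to use one of the explicit channels mentioned above, e.g. a `multiprocessing.Queue` (a pipe under the hood).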

A thread

A thread is a unit of execution INSIDE a process. Threads in the same process share heap memory, file descriptors, and code, but each has its own stack and CPU registers. Multiple threads in one process can run on different CPU cores in parallel — fast, but with the danger that two threads might write the same memory at the same time.
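The flip side of that sharing, sketched in Python (names are illustrative): every thread sees the same heap objects directly, with no copying or inter-process plumbing. The `sorted` call is only there to make the output order deterministic, since thread scheduling is not.

```python
import threading

shared = {"items": []}  # one heap object, visible to every thread in this process

def worker(name):
    shared["items"].append(name)  # writes go straight into the dict main sees

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared["items"]))  # ['t0', 't1', 't2', 't3']
```

A single `append` is safe here, but any read-modify-write sequence (check then update, `x += 1` on shared state) would need a `threading.Lock` to avoid the two-threads-one-write problem described above.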

🏠
Real-life analogy — Process = house, thread = roommate
Processes are different houses — locked doors, separate kitchens. Threads are roommates in one house — they share the kitchen and fridge. Sharing is faster (no need to ship food across town), but two roommates grabbing the last egg at the same instant causes problems.

Context switch

When the OS switches the CPU from running thread A to thread B, it must save A's registers, swap address spaces (if a different process), and load B's registers. This costs roughly 1-10 microseconds. Multiply by thousands of context switches per second on a busy server and you get a real slice of CPU time.
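You can get a rough feel for this cost by forcing switches deliberately: two threads ping-pong via events, so each round trip includes at least two switches. This is a sketch, not a rigorous benchmark — the measured time also includes `Event` wake-up overhead, so treat the number as an upper bound.

```python
import threading
import time

N = 10_000
ping, pong = threading.Event(), threading.Event()

def player(my_turn, their_turn):
    for _ in range(N):
        my_turn.wait()     # sleep until it's our turn (forces a switch)
        my_turn.clear()
        their_turn.set()   # wake the other thread

a = threading.Thread(target=player, args=(ping, pong))
b = threading.Thread(target=player, args=(pong, ping))

start = time.perf_counter()
a.start(); b.start()
ping.set()  # serve the first ball
a.join(); b.join()
elapsed = time.perf_counter() - start

print(f"~{elapsed / (2 * N) * 1e6:.1f} µs per switch (rough upper bound)")
```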

When to use threads vs processes

  • Threads — share state, fast to create, single-machine parallelism. Risky: race conditions.
  • Processes — isolated, more memory overhead, can crash without taking down siblings, scale across machines via the network.
  • In Python: threads are limited by the Global Interpreter Lock (GIL) for CPU-bound work — use multiprocessing for CPU parallelism. Threads (and async) still help when the work is I/O-bound, since the GIL is released while waiting.
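The GIL point above can be seen directly with a small (illustrative, not rigorous) comparison: the same CPU-bound function run on a thread pool versus a process pool. On a multi-core machine the process pool typically finishes several times faster, because the threads take turns holding the GIL.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    """Pure-CPU busywork: sum of squares below n."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, jobs=4, n=2_000_000):
    with executor_cls(max_workers=jobs) as ex:
        start = time.perf_counter()
        list(ex.map(burn, [n] * jobs))  # run the same job on each worker
        return time.perf_counter() - start

if __name__ == "__main__":
    print(f"threads:   {timed(ThreadPoolExecutor):.2f}s")   # serialized by the GIL
    print(f"processes: {timed(ProcessPoolExecutor):.2f}s")  # true parallelism
```

Exact timings depend on core count and interpreter version, but the shape of the result — threads no faster than sequential for CPU work, processes scaling with cores — is the practical takeaway.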