Operating Systems • 18 min read • Intermediate
CPU Scheduling
Round-robin, priority, fairness — how the OS picks who runs next.
Why scheduling exists
Your laptop has maybe 8-16 CPU cores, but it's running thousands of threads. The scheduler is the part of the kernel that decides, every few milliseconds, which thread runs next on each core. It must balance several competing goals: keep CPUs busy (throughput), keep latency low for interactive apps, be fair to background work, and respect priorities.
[Figure: CPU scheduling — round-robin with one core, three threads. The kernel preempts running threads on a timer, or when they block on I/O. Thread A is currently running on the CPU core; threads B and C wait in the ready queue.]
Common scheduling strategies
- FCFS (First-Come-First-Served) — simple, but a long job blocks everyone.
- Round-robin — each thread gets a fixed time slice (often around 10 ms), then rotates to the back of the queue. Simple and fair; the building block of most general-purpose schedulers.
- Priority — higher-priority threads run first. Risk: 'starvation' of low-priority work.
- Multilevel feedback queue — multiple priority queues; threads that burn through their full slice get demoted, while those that block early (I/O-bound, interactive) get promoted. Variants of this design power the Windows and macOS schedulers; Linux's CFS (now EEVDF) reaches similar goals by tracking each thread's virtual runtime instead. Either way, it captures the intuition that interactive apps should feel snappy.
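The round-robin rotation above can be sketched in a few lines. This is a toy simulator, not how a kernel is written: the thread names and burst times are made up, and "running" is just subtracting milliseconds from a counter.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin: bursts maps thread name -> CPU ms it still needs.

    Returns the timeline as (name, ms_run) slices, in execution order.
    """
    queue = deque(bursts.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)        # run for one slice, or less if done
        timeline.append((name, ran))
        if remaining > ran:                  # slice used up: back of the queue
            queue.append((name, remaining - ran))
    return timeline

print(round_robin({"A": 25, "B": 10, "C": 15}, quantum=10))
# [('A', 10), ('B', 10), ('C', 10), ('A', 10), ('C', 5), ('A', 5)]
```

Note how B, the shortest job, finishes after one slice instead of waiting behind A, which is exactly the latency win over FCFS.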
💡 Tip
On Linux, `nice` and `renice` adjust priority. `chrt` switches scheduling class entirely (real-time vs default).
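The same mechanism the `nice` command uses is available programmatically: on Unix, Python's `os.nice()` adjusts the calling process's own nice value. A sketch, assuming an unprivileged process (lowering priority is always allowed; raising it, i.e. a negative increment, needs root):

```python
import os

# os.nice(increment) adds to the process's nice value and returns the new one.
before = os.nice(0)   # increment of 0 just reads the current niceness
after = os.nice(5)    # be "nicer": deprioritize ourselves by 5
print(before, after)
```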
Preemptive vs cooperative
- Preemptive — the OS can interrupt a running thread at any moment. Modern OSes are all preemptive.
- Cooperative — threads voluntarily yield. Simpler but a misbehaving thread can hang the system. (Old Mac OS, Windows 3.x worked this way.)
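Cooperative scheduling is easy to model with generators, since a generator runs only until it chooses to `yield`. This miniature run loop (all names are illustrative) shows both the mechanism and the failure mode: a task that never yields would hang the whole loop.

```python
from collections import deque

def task(name, steps, log):
    """A cooperative 'thread': does one unit of work, then yields the CPU."""
    for i in range(steps):
        log.append(f"{name}{i}")   # one unit of work
        yield                      # voluntarily give up the CPU

def run(tasks):
    """The 'OS': runs each task until it yields, then requeues it."""
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)            # run until the task yields...
            queue.append(t)    # ...then send it to the back of the line
        except StopIteration:
            pass               # task finished; drop it

log = []
run([task("A", 2, log), task("B", 3, log)])
print(log)   # ['A0', 'B0', 'A1', 'B1', 'B2']
```

This is essentially how `async`/`await` event loops work today: cooperative scheduling survives inside a single process, where the OS's preemptive scheduler still protects the rest of the system from a stuck task.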