Scheduling

"I know of no way of judging the future but by the past." — Patrick Henry, 1775

Overview

CPU scheduling forms the basis of multi-programmed operating systems.

Process scheduling

Scheduling Criteria

Scheduling algorithms have different properties depending on which criteria they try to minimize or maximize, so the best choice depends on the kind of workload (e.g. interactive vs. non-interactive processes).

Criteria         Min/Max   Description
Throughput       Maximize  How many jobs (processes) are completed in a given time.
Waiting time     Minimize  How much time a process spends in the ready queue. (Time spent running or blocked doesn't count.)
CPU utilization  Maximize  How busy the CPU is. The goal is to keep the CPU working.
Response time    Minimize  How long it takes from the time a process is created until it produces some response (output).
Turnaround time  Minimize  How much time has elapsed from the time a process is created until it finishes.
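
To make these definitions concrete, here is a small Python sketch (the process names, arrival times, burst lengths, and completion times are all made up) that computes turnaround time, waiting time, and throughput for a schedule that has already run:

    # Toy illustration of the criteria above; all values are hypothetical.
    # Each job is described by (arrival time, CPU burst length, completion time).
    jobs = {
        "P1": (0, 5, 5),
        "P2": (1, 3, 8),
        "P3": (2, 8, 16),
    }

    for name, (arrival, burst, completion) in jobs.items():
        turnaround = completion - arrival   # creation until finish
        waiting = turnaround - burst        # time spent sitting in the ready queue
        print(f"{name}: turnaround={turnaround}, waiting={waiting}")

    # Throughput: jobs completed per unit of time over the whole run.
    makespan = max(completion for (_, _, completion) in jobs.values())
    print(f"throughput = {len(jobs) / makespan:.2f} jobs per time unit")
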
Simple Examples

Non-preemptive Scheduling Details

FCFS scheduling and waiting time
Priority scheduling and waiting time
Example priority values
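
As a rough sketch of the difference between these two non-preemptive policies, the following Python fragment (burst lengths and priority values are invented, and all jobs are assumed to arrive at time 0) runs the same batch of jobs in FCFS order and in priority order and compares the average waiting time:

    # Non-preemptive scheduling sketch: FCFS vs. priority.
    # Bursts and priorities are made up; a lower priority number = more important.
    jobs = [
        ("P1", 24, 3),   # (name, cpu_burst, priority)
        ("P2", 3, 1),
        ("P3", 3, 2),
    ]

    def average_waiting_time(order):
        # Jobs run back to back; each job's waiting time is how long it sat
        # in the ready queue before first getting the CPU.
        clock, total_wait = 0, 0
        for _name, burst, _priority in order:
            total_wait += clock
            clock += burst        # runs to completion (non-preemptive)
        return total_wait / len(order)

    fcfs = jobs                                     # served in arrival (listed) order
    by_priority = sorted(jobs, key=lambda j: j[2])  # most important job first

    print("FCFS average wait:    ", average_waiting_time(fcfs))         # (0+24+27)/3 = 17
    print("Priority average wait:", average_waiting_time(by_priority))  # (0+3+6)/3  = 3

Notice how the long job at the front of the line drags up everyone else's waiting time under FCFS; this is the so-called convoy effect.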

Preemptive Scheduling Details

Round robin scheduling
Preemptive SJF scheduling
Multilevel queue scheduling
Multilevel feedback queue scheduling
Completely fair scheduling (CFS)
Why so many?
Links: ksysguard, gnome-system-monitor
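
Round robin is the simplest of these to sketch in code. The fragment below (the time quantum and burst lengths are assumptions chosen only for illustration) cycles through the ready queue, letting each process run for at most one quantum before it is preempted and sent to the back of the queue:

    from collections import deque

    # Round-robin sketch: each process runs for at most `quantum` time units,
    # then is preempted and moved to the back of the ready queue.
    # The quantum and the burst lengths are made-up values.
    quantum = 4
    ready = deque([("P1", 10), ("P2", 4), ("P3", 7)])   # (name, remaining burst)

    clock = 0
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        remaining -= run
        if remaining > 0:
            ready.append((name, remaining))   # preempted: back of the line
            print(f"t={clock:2}: {name} preempted, {remaining} time units left")
        else:
            print(f"t={clock:2}: {name} finished")

The choice of quantum is the main design decision: a very large quantum degenerates into FCFS, while a very small one wastes time on context switches.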

Multicore Scheduling

Up until now, everything that was discussed ignored the possibility of multiple CPUs or cores. With multicore processors we can achieve true parallelism: the scheduler must decide not only when a process runs, but also on which core it runs.
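
One common way to organize multicore scheduling is to give each core its own ready queue and place new work on the least-loaded core. The sketch below is only an illustration of that idea (the core count and job sizes are arbitrary assumptions), not how any particular operating system implements it:

    # Multicore sketch: one ready queue per core; each new job is placed on the
    # core with the least pending work (a very simplified load balancer).
    # The number of cores and the job sizes are made up.
    NUM_CORES = 2
    core_queue = [[] for _ in range(NUM_CORES)]   # per-core ready queues
    core_load = [0] * NUM_CORES                   # pending burst time per core

    jobs = [("P1", 8), ("P2", 4), ("P3", 6), ("P4", 2)]

    for name, burst in jobs:
        target = core_load.index(min(core_load))  # pick the least-loaded core
        core_queue[target].append(name)
        core_load[target] += burst

    for core in range(NUM_CORES):
        print(f"core {core}: queue={core_queue[core]}, pending work={core_load[core]}")

    # With true parallelism the cores run at the same time, so the batch
    # finishes when the busiest core finishes.
    print("estimated completion time:", max(core_load))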

Real-world examples:

In my opinion, the most approachable introduction to and explanation of scheduling is a book called P.S. To Operating Systems. The explanations use ordinary daily activities to describe the different scheduling approaches (e.g. a fast-food restaurant, a video rental store). Be warned that some of the math gets pretty involved and tedious; however, you don't need to follow all of the math to get the ideas. It covers single-core and multi-core processor scheduling, disk scheduling, process synchronization, and memory management. Well worth the read for those who want to understand how the OS works behind the scenes.

The book's first model uses a sno-cone stand to show how much profit is made. (Employees are CPUs, customers are jobs, and the line of customers is the ready queue.)

Example:

The sno-cone model describes how a single-core CPU behaves in terms of efficiency.

After a somewhat detailed statistical analysis, it can be shown that Employee A makes a profit of $2.73 per minute, while Employee B makes $2.70 per minute.

Chapter1-sample.pdf

Consider how customers are served at the local Enormo-Burger.

Each of the line-up methods yields different throughput, waiting time, and utilization. In operating-system terms, this models a computer with two CPUs or two cores. Again, after a detailed statistical analysis, the results are:

For throughput (how many customers are served per time unit), this is how the methods fare (a larger number is better):

Twice as fast - 0.466
Sequential    - 0.423
Random line   - 0.423
Shortest line - 0.450
Common line   - 0.454
For wait time (a smaller number is better):
Twice as fast - 0.571
Sequential    - 0.909
Random line   - 0.909
Shortest line - 0.333
Common line   - 0.202
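
The figures above come from the book's own statistical analysis. As a much cruder illustration of why the line discipline matters, the following Python sketch (arrival and service times are invented) simulates two of the schemes for a two-server stand: "shortest line", where each server has its own queue and customers join the shorter one, and "common line", where a single shared queue feeds both servers:

    # Toy two-server queue comparison; all arrival and service times are made up.
    customers = [
        # (arrival_time, service_time)
        (0, 9), (1, 2), (2, 2), (3, 2), (4, 2),
    ]

    def common_line(customers):
        # One shared line: the next customer goes to whichever server frees up first.
        free_at = [0, 0]                          # when each server becomes idle
        waits = []
        for arrival, service in customers:
            server = free_at.index(min(free_at))  # earliest-available server
            start = max(arrival, free_at[server])
            waits.append(start - arrival)
            free_at[server] = start + service
        return sum(waits) / len(waits)

    def shortest_line(customers):
        # Separate line per server: customers join whichever line currently has
        # fewer people in it, without knowing how long the jobs ahead will take.
        departures = [[], []]                     # departure times per line
        line_free_at = [0, 0]                     # when each line's last customer finishes
        waits = []
        for arrival, service in customers:
            people = [sum(1 for d in deps if d > arrival) for deps in departures]
            server = people.index(min(people))    # join the shorter line
            start = max(arrival, line_free_at[server])
            waits.append(start - arrival)
            departures[server].append(start + service)
            line_free_at[server] = start + service
        return sum(waits) / len(waits)

    print("common line average wait:  ", common_line(customers))
    print("shortest line average wait:", shortest_line(customers))

With these made-up times the common line comes out ahead: no customer gets stuck behind one long order while the other server sits idle.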

sample2.pdf

Other things to take into consideration:

Applications to scheduling: