CPU Scheduling
CPU Scheduling: Basic Concepts, Scheduling Criteria, Scheduling Algorithms, Thread Scheduling, Multiple-Processor Scheduling
Basic Concepts. Maximum CPU utilization is obtained with multiprogramming: while a process waits for I/O the CPU would sit idle without multiprogramming, so the OS can instead give the CPU to another process. Process execution alternates between CPU bursts and I/O waits, so the CPU-burst distribution matters for scheduling.
CPU Scheduler (short-term scheduler): selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. CPU scheduling decisions may take place when a process: (1) switches from running to waiting state, (2) switches from running to ready state, (3) switches from waiting to ready state, or (4) terminates. Scheduling under 1 and 4 only is nonpreemptive (cooperative); all other scheduling is preemptive.
CPU Scheduler. Nonpreemptive: once the CPU has been allocated to a process, the process keeps it until it terminates or enters a waiting state; no special hardware (such as a timer) is needed. Preemptive: the running process can be removed so that another can run. Preemption raises issues: shared data must stay consistent, so synchronization is required, and we typically cannot simply disable interrupts.
Dispatcher: the dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.
Scheduling Criteria. CPU utilization – keep the CPU as busy as possible. Throughput – number of processes that complete their execution per time unit. Turnaround time – amount of time to execute a particular process, from submission to completion. Waiting time – amount of time a process has spent waiting in the ready queue. Response time – amount of time from when a request was submitted until the first response is produced, not until the output is finished (important for time-sharing environments).
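As a concrete illustration (not part of the original slides), a minimal Python sketch of the per-process metrics, assuming each process's arrival, burst, and completion times are already known:

```python
# Minimal sketch (illustrative, not from the slides): per-process metrics,
# assuming arrival, burst, and completion times are already known.
def metrics(arrival, burst, completion):
    turnaround = completion - arrival   # time from submission to completion
    waiting = turnaround - burst        # time spent in the ready queue
    return turnaround, waiting

# Example: a process arriving at t=2 with an 8-unit burst that finishes at t=24
print(metrics(2, 8, 24))  # (22, 14)
```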
Scheduling Algorithm Optimization Criteria: maximize CPU utilization, maximize throughput, minimize turnaround time, minimize waiting time, minimize response time.
Scheduling Algorithms: First-Come, First-Served Scheduling; Shortest-Job-First Scheduling; Priority Scheduling; Round-Robin Scheduling; Multilevel Queue Scheduling; Multilevel Feedback Queue Scheduling.
First-Come, First-Served (FCFS) Scheduling. Burst times: P1 = 24, P2 = 3, P3 = 3. Suppose the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 (0–24), P2 (24–27), P3 (27–30). Waiting time for P1 = 0, P2 = 24, P3 = 27; average waiting time = (0 + 24 + 27)/3 = 17.
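A minimal FCFS sketch (not from the slides), assuming all processes are available at time 0 as in this example; it reproduces the waiting times above and, reordered, the case on the next slide:

```python
# Minimal FCFS sketch (illustrative): processes run in arrival order, to completion.
def fcfs(processes):
    """processes: list of (name, burst) in arrival order; returns waiting times."""
    waiting, clock = {}, 0
    for name, burst in processes:
        waiting[name] = clock      # time spent waiting before this burst starts
        clock += burst
    return waiting

w = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(w, sum(w.values()) / len(w))   # {'P1': 0, 'P2': 24, 'P3': 27} 17.0

w = fcfs([("P2", 3), ("P3", 3), ("P1", 24)])
print(sum(w.values()) / len(w))      # 3.0 – arrival order changes the average dramatically
```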
FCFS Scheduling (cont.). Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 (0–3), P3 (3–6), P1 (6–30). Waiting time for P1 = 6, P2 = 0, P3 = 3; average waiting time = (6 + 0 + 3)/3 = 3 – much better than the previous case.
Shortest-Job-First (SJF) Scheduling. Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest next burst. If next-burst times are equal, break the tie using FCFS. SJF is provably optimal: it gives the minimum average waiting time for a given set of processes.
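The slides treat the length of the next CPU burst as known; a common way to estimate it in practice (an addition here, not stated in the deck) is exponential averaging of the measured bursts:

```python
# Exponential averaging of CPU-burst lengths (a common estimator; the alpha value,
# initial guess, and burst history below are assumptions for illustration).
def predict_next_burst(measured_bursts, tau0=10.0, alpha=0.5):
    tau = tau0                                 # initial guess for the first burst
    for t in measured_bursts:
        tau = alpha * t + (1 - alpha) * tau    # tau_{n+1} = a*t_n + (1 - a)*tau_n
    return tau

print(predict_next_burst([6, 4, 6, 4]))   # estimate used as the "length of next burst"
```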
Example of SJF. Process (arrival time, burst time): P1 (0.0, 6), P2 (2.0, 8), P3 (4.0, 7), P4 (5.0, 3). SJF scheduling chart: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24) – the chart and the waiting times below treat all four processes as available at time 0. Average waiting time = (3 + 16 + 9 + 0)/4 = 7.
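A minimal nonpreemptive SJF sketch (not from the slides); to reproduce the chart and the average of 7 it treats all four processes as available at time 0, as the slide's numbers do:

```python
# Nonpreemptive SJF sketch (illustrative): run the shortest available burst each time.
# Assumes every process is ready at time 0, matching the slide's waiting times.
def sjf(processes):
    """processes: list of (name, burst); returns dict of waiting times."""
    waiting, clock = {}, 0
    # sorted() is stable, so equal bursts fall back to the input (FCFS) order
    for name, burst in sorted(processes, key=lambda p: p[1]):
        waiting[name] = clock
        clock += burst
    return waiting

w = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(w)                               # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / len(w))        # 7.0
```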
Priority Scheduling. A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). Priority scheduling can be preemptive or nonpreemptive. SJF is priority scheduling in which the priority is the predicted length of the next CPU burst.
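A minimal nonpreemptive priority sketch (not from the slides; the process names, priorities, and bursts are invented for illustration), using the slide's convention that the smallest integer is the highest priority:

```python
import heapq

# Nonpreemptive priority scheduling sketch: always dispatch the ready process with
# the smallest priority number; each process runs its burst to completion.
def priority_schedule(processes):
    """processes: list of (priority, name, burst); returns execution order."""
    heap = list(processes)
    heapq.heapify(heap)            # min-heap, so smallest integer = highest priority
    order = []
    while heap:
        prio, name, burst = heapq.heappop(heap)
        order.append((name, burst))
    return order

print(priority_schedule([(3, "P1", 10), (1, "P2", 1), (4, "P3", 2), (2, "P4", 5)]))
# [('P2', 1), ('P4', 5), ('P1', 10), ('P3', 2)]
```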
Multilevel Queue. The ready queue is partitioned into separate queues, e.g. foreground (interactive) and background (batch). Each queue has its own scheduling algorithm: foreground – RR, background – FCFS. Scheduling must also be done between the queues. Fixed-priority scheduling (serve everything in the foreground queue, then the background queue) risks starvation of the lower queue. Time slicing – each queue gets a certain amount of CPU time which it can schedule among its own processes, e.g. 80% to the foreground in RR and 20% to the background in FCFS.
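A minimal sketch (not from the slides; the process names and RR quantum are assumptions) of the fixed-priority variant: the foreground RR queue is served whenever it has work, the background FCFS queue only when the foreground is empty:

```python
from collections import deque

# Fixed-priority multilevel queue sketch: foreground is round-robin, background is FCFS,
# and the foreground always wins – which is exactly how background starvation can arise.
def multilevel_queue(foreground, background, quantum=4):
    fg = deque(foreground)          # (name, remaining burst), scheduled round-robin
    bg = deque(background)          # (name, remaining burst), scheduled FCFS
    timeline = []
    while fg or bg:
        if fg:                      # strict priority for the foreground queue
            name, left = fg.popleft()
            run = min(quantum, left)
            timeline.append((name, run))
            if left - run > 0:
                fg.append((name, left - run))   # back of the RR queue
        else:
            name, left = bg.popleft()
            timeline.append((name, left))       # FCFS: run background job to completion
    return timeline

print(multilevel_queue([("I1", 6), ("I2", 3)], [("B1", 10)]))
# [('I1', 4), ('I2', 3), ('I1', 2), ('B1', 10)]
```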
Multilevel Queue Scheduling
Thread Scheduling. Distinguish user-level and kernel-level threads: the OS schedules only kernel-level threads, while user-level threads are scheduled through a direct or indirect (LWP) mapping. In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP; this is known as process-contention scope (PCS), since the scheduling competition is within the process. Scheduling a kernel thread onto an available CPU is system-contention scope (SCS) – competition among all threads in the system. Typically PCS is priority based, and the programmer can set user-level thread priorities.
Multiple-Processor Scheduling. CPU scheduling is more complex when multiple CPUs are available. Assumption: homogeneous processors within the multiprocessor. Asymmetric multiprocessing – only one processor accesses the system data structures. Symmetric multiprocessing (SMP) – each processor is self-scheduling; either all processes are in a common ready queue or each processor has its own private queue of ready processes. SMP is the most common approach – used by Windows XP, Windows 2000, Linux, and Mac OS X.
Multiprocessor Scheduling. Processor affinity may be dictated by the architecture of main memory. NUMA – Non-Uniform Memory Access: a CPU has faster access to some parts of memory, as in multiprocessor systems where each CPU has its own memory board; a CPU can also access memory attached to other CPUs, but with a delay. OS design is influenced by the architecture and optimized for its performance.
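A minimal affinity sketch (not from the slides; Linux-only, since these calls are not available on every platform): pinning a process to particular CPUs so the scheduler keeps it close to the memory it uses.

```python
import os

# Processor-affinity sketch (Linux-only, illustrative): restrict the calling process
# to CPUs 0 and 1 so the scheduler keeps it near "its" memory node.
pid = 0  # 0 means "the calling process"
print("allowed CPUs before:", os.sched_getaffinity(pid))
os.sched_setaffinity(pid, {0, 1})      # pin the process to CPUs 0 and 1
print("allowed CPUs after: ", os.sched_getaffinity(pid))
```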
NUMA and CPU Scheduling
Thank you
