Scheduling algorithm (chammu)
- 4. CPU Scheduling: Basic Concepts, CPU Scheduler, Scheduling Criteria, Scheduling Algorithms, Multiple-Processor Scheduling. 09/16/10
- 5. Basic Concepts. Maximum CPU utilization is obtained with multiprogramming. CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait. CPU burst distribution.
- 7. CPU Scheduler. Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. CPU scheduling decisions may take place when a process: 1. switches from running to waiting state; 2. switches from running to ready state; 3. switches from waiting to ready state; 4. terminates. Scheduling under 1 and 4 only is non-preemptive; all other scheduling is preemptive.
- 8. Scheduling Criteria. CPU utilization – keep the CPU as busy as possible. Throughput – number of processes that complete their execution per time unit. Turnaround time – amount of time to execute a particular process. Waiting time – amount of time a process has been waiting in the ready queue. Response time – amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments).
- 10. First-Come, First-Served (FCFS) Scheduling; Shortest-Job-First (SJF) Scheduling; Example of Non-Preemptive SJF; Example of Preemptive SJF; Determining Length of Next CPU Burst; Examples of Exponential Averaging; Priority Scheduling; Round Robin (RR); Example of RR with Time Quantum = 20.
- 12. First-Come, First-Served (FCFS) Scheduling. Process burst times: P1 = 24, P2 = 3, P3 = 3. Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 [0–24], P2 [24–27], P3 [27–30]. Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17.
- 13. FCFS Scheduling (Cont.). Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 [0–3], P3 [3–6], P1 [6–30]. Waiting time for P1 = 6, P2 = 0, P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3 – much better than the previous case. Convoy effect: short processes stuck waiting behind a long process.
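The two FCFS orderings above can be checked with a short sketch. The function name is illustrative; it assumes all processes arrive at time 0, as in the slides:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at t = 0.

    Each process waits for the total burst time of everything ahead of it.
    """
    waits, t = [], 0
    for burst in bursts:
        waits.append(t)  # process starts (and stops waiting) at time t
        t += burst       # CPU is busy for this burst before the next starts
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3):
print(fcfs_waiting_times([24, 3, 3]))  # → [0, 24, 27], average 17
# Order P2, P3, P1:
print(fcfs_waiting_times([3, 3, 24]))  # → [0, 3, 6], average 3
```

Running both orderings side by side makes the convoy effect concrete: moving the long burst to the back cuts the average wait from 17 to 3.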
- 14. Shortest-Job-First (SJF) Scheduling. Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time. Two schemes: non-preemptive – once the CPU is given to a process it cannot be preempted until it completes its CPU burst; preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. The preemptive scheme is known as Shortest-Remaining-Time-First (SRTF). SJF is optimal – it gives the minimum average waiting time for a given set of processes.
- 15. Example of Non-Preemptive SJF. Processes (arrival time, burst time): P1 (0.0, 7), P2 (2.0, 4), P3 (4.0, 1), P4 (5.0, 4). SJF (non-preemptive) schedule: P1 [0–7], P3 [7–8], P2 [8–12], P4 [12–16]. Average waiting time = (0 + 6 + 3 + 7)/4 = 4.
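A minimal sketch of non-preemptive SJF, reproducing the slide's example. The function name and tuple layout are illustrative; ties on burst length are broken by arrival time, which matches the schedule shown above:

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    t, waits, pending = 0, {}, sorted(procs, key=lambda p: p[1])
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                       # CPU idle until the next arrival
            t = min(p[1] for p in pending)
            continue
        # pick the shortest burst among ready processes; break ties by arrival
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        waits[name] = t - arrival           # time spent in the ready queue
        t += burst                          # runs to completion (no preemption)
        pending.remove((name, arrival, burst))
    return waits

procs = [('P1', 0.0, 7), ('P2', 2.0, 4), ('P3', 4.0, 1), ('P4', 5.0, 4)]
print(sjf_nonpreemptive(procs))  # → {'P1': 0.0, 'P3': 3.0, 'P2': 6.0, 'P4': 7.0}
```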
- 16. Example of Preemptive SJF. Processes (arrival time, burst time): P1 (0.0, 7), P2 (2.0, 4), P3 (4.0, 1), P4 (5.0, 4). SJF (preemptive) schedule: P1 [0–2], P2 [2–4], P3 [4–5], P2 [5–7], P4 [7–11], P1 [11–16]. Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
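The preemptive variant (SRTF) can be sketched with a unit-time simulation: at every tick, run the ready process with the least remaining time, so a shorter new arrival preempts immediately. Names and structure are illustrative:

```python
def srtf(procs):
    """Shortest-Remaining-Time-First. procs: list of (name, arrival, burst).

    Returns {name: waiting_time}, where waiting = finish - arrival - burst.
    """
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:          # nothing has arrived yet; let the clock tick
            t += 1
            continue
        # run the ready process with the least remaining time for one tick
        n = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    return {n: finish[n] - arrival[n] - burst for n, _, burst in procs}

procs = [('P1', 0, 7), ('P2', 2, 4), ('P3', 4, 1), ('P4', 5, 4)]
print(srtf(procs))  # → {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2}
```

The average waiting time drops from 4 (non-preemptive) to 3, illustrating why preemption helps when short bursts arrive late.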
- 17. Determining Length of Next CPU Burst. We can only estimate the length. This can be done by using the lengths of previous CPU bursts, via exponential averaging.
- 19. Examples of Exponential Averaging, where the prediction is τ(n+1) = α·t(n) + (1 − α)·τ(n). α = 0: τ(n+1) = τ(n) – recent history does not count. α = 1: τ(n+1) = t(n) – only the actual last CPU burst counts. If we expand the formula, we get: τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0). Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
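The recurrence above is a few lines of code. This sketch uses the common textbook starting values α = 1/2 and τ(0) = 10 as defaults; both are assumptions, not part of these slides:

```python
def predict_bursts(actuals, alpha=0.5, tau0=10):
    """Exponential averaging: tau(n+1) = alpha*t(n) + (1 - alpha)*tau(n).

    actuals: measured CPU burst lengths t(0), t(1), ...
    Returns the sequence of predictions tau(0), tau(1), ...
    """
    tau = tau0
    predictions = [tau]
    for t in actuals:
        tau = alpha * t + (1 - alpha) * tau  # blend new burst with history
        predictions.append(tau)
    return predictions

print(predict_bursts([6, 4, 6, 4]))  # → [10, 8.0, 6.0, 6.0, 5.0]
```

Setting `alpha=0` makes every prediction equal to `tau0` (history only), and `alpha=1` makes each prediction equal to the last actual burst, matching the two special cases on the slide.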
- 20. Priority Scheduling. A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). It may be preemptive or non-preemptive. SJF is priority scheduling where the priority is the predicted next CPU burst time. Problem: starvation – low-priority processes may never execute. Solution: aging – as time progresses, increase the priority of the process.
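One way to sketch aging, under assumptions not spelled out on the slide: non-preemptive selection, and a waiting process's effective priority improving (numerically decreasing) by `aging_rate` per time unit waited. All names are illustrative:

```python
def priority_with_aging(procs, aging_rate=1):
    """Non-preemptive priority scheduling with aging (a sketch).

    procs: list of (name, arrival, priority, burst); smaller = higher priority.
    Returns the order in which processes run.
    """
    t, order, pending = 0, [], sorted(procs, key=lambda p: p[1])
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                       # idle until the next arrival
            t = min(p[1] for p in pending)
            continue
        # effective priority = base priority minus credit for time waited
        name, arrival, prio, burst = min(
            ready, key=lambda p: p[2] - aging_rate * (t - p[1]))
        order.append(name)
        t += burst                          # runs to completion
        pending.remove((name, arrival, prio, burst))
    return order

# A low-priority job vs a stream of high-priority jobs:
procs = [('low', 0, 10, 2), ('hi1', 0, 1, 5), ('hi2', 3, 1, 5), ('hi3', 6, 1, 5)]
print(priority_with_aging(procs, aging_rate=0))  # → starves: low runs last
print(priority_with_aging(procs, aging_rate=4))  # → aging pulls low forward
```

With `aging_rate=0` the low-priority job is repeatedly passed over; with a sufficiently large rate its accumulated wait overtakes the newcomers' base priority.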
- 21. Round Robin (RR). Each process gets a small unit of CPU time (time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n − 1)q time units. Performance: if q is large, RR degenerates to FIFO; if q is small, context-switch overhead dominates – q must be large with respect to the context-switch time, otherwise overhead is too high.
- 22. Example of RR with Time Quantum = 20. Process burst times: P1 = 53, P2 = 17, P3 = 68, P4 = 24. The Gantt chart is: P1 [0–20], P2 [20–37], P3 [37–57], P4 [57–77], P1 [77–97], P3 [97–117], P4 [117–121], P1 [121–134], P3 [134–154], P3 [154–162]. Typically, RR gives higher average turnaround time than SJF, but better response time.
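The RR schedule above can be reproduced with a FIFO queue of remaining bursts. The function name is illustrative; all processes are assumed to arrive at t = 0:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst), all arriving at t = 0.

    Returns the Gantt chart as a list of (name, start, end) segments.
    """
    remaining = dict(procs)
    queue = deque(name for name, _ in procs)
    t, gantt = 0, []
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # a full quantum, or whatever is left
        gantt.append((name, t, t + run))
        t += run
        remaining[name] -= run
        if remaining[name] > 0:               # unfinished: back of the queue
            queue.append(name)
    return gantt

chart = round_robin([('P1', 53), ('P2', 17), ('P3', 68), ('P4', 24)], quantum=20)
for name, start, end in chart:
    print(f'{name}: {start}-{end}')
```

The printed segments match the slide's chart, ending at t = 162.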
- 25. Multilevel Queue. The ready queue is partitioned into separate queues, e.g., foreground (interactive) and background (batch). Each queue has its own scheduling algorithm: foreground – RR; background – FCFS. Scheduling must also be done between the queues. Fixed-priority scheduling (i.e., serve all from foreground, then from background) – possibility of starvation. Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g., 80% to foreground in RR and 20% to background in FCFS.
- 27. Multilevel Feedback Queue. A process can move between the various queues; aging can be implemented this way. A multilevel-feedback-queue scheduler is defined by the following parameters: number of queues; scheduling algorithm for each queue; method used to determine when to upgrade a process; method used to determine when to demote a process; method used to determine which queue a process will enter when it needs service.
- 28. Example of Multilevel Feedback Queue. Three queues: Q0 – RR with time quantum 8 milliseconds; Q1 – RR with time quantum 16 milliseconds; Q2 – FCFS. Scheduling: a new job enters queue Q0, which is served in FCFS order. When it gains the CPU, the job receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1. At Q1 the job is again served in FCFS order and receives 16 additional milliseconds; if it still does not complete, it is preempted and moved to queue Q2.
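A deliberately simplified sketch of the three-queue example. It assumes all jobs arrive at t = 0 and drains each level in turn, so it ignores the preemption of lower queues by new Q0 arrivals that a real MLFQ would perform; names and structure are illustrative:

```python
def mlfq(procs, quanta=(8, 16)):
    """Three-level feedback queue: RR q=8, then RR q=16, then FCFS.

    procs: list of (name, burst), all arriving at t = 0.
    Returns a log of (name, queue_level, start, end) segments.
    """
    t, log = 0, []
    demoted_q1, demoted_q2 = [], []
    # Q0: every new job gets up to 8 ms
    for name, burst in procs:
        run = min(quanta[0], burst)
        log.append((name, 0, t, t + run)); t += run
        if burst > run:
            demoted_q1.append((name, burst - run))
    # Q1: survivors get up to 16 additional ms
    for name, burst in demoted_q1:
        run = min(quanta[1], burst)
        log.append((name, 1, t, t + run)); t += run
        if burst > run:
            demoted_q2.append((name, burst - run))
    # Q2: whatever is left runs FCFS to completion
    for name, burst in demoted_q2:
        log.append((name, 2, t, t + burst)); t += burst
    return log

# A 5 ms job finishes in Q0; a 30 ms job is demoted twice:
for seg in mlfq([('A', 5), ('B', 30)]):
    print(seg)
```

Short bursts finish in Q0 and get fast response, while long jobs sink to Q2 – the aging/demotion behavior the slide describes.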
- 32. Multiple-Processor Scheduling. So far, we have only dealt with a single processor. CPU scheduling is more complex when multiple CPUs are available, due to load sharing. No single best solution – no great surprise.
- 33. Multiple-Processor Scheduling. CPU scheduling is more complex when multiple CPUs are available. Homogeneous processors within a multiprocessor. Load sharing. Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.
- 34. Approaches to Multiple-Processor Scheduling. Asymmetric multiprocessing (master): one processor makes all the decisions for scheduling, I/O processing, and other system activities; the other processor(s) execute only user code. This is a simple approach because only one processor accesses the system data structures, so sharing is not an issue. Symmetric multiprocessing (SMP): each processor is self-scheduling. The processors may share a common ready queue, or each processor may have its own private queue of ready processes. Either way, a scheduler on each processor examines its ready queue and dispatches the CPU to a specific process for execution. Clearly there are sharing issues here, since each processor may update the common data structure or try to access a specific PCB in a queue. Almost all modern operating systems support SMP, including Windows XP, Solaris, Linux, and Mac OS X. Most of our discussion from here on applies to SMPs.
- 35. Processor Affinity in Multiprocessors. Big question: once a process starts to execute on one processor, does it continue subsequent executions on the same processor? While a process runs on one processor, its recent memory accesses are normally cached there. If the process is migrated to a different processor, the first cache must be invalidated and a new cache must be populated – a high cost, and some loss in efficiency. So what is processor affinity? The tendency to favor one processor over another. Because of this cache cost, most SMP systems try to avoid migrating a process between processors and instead attempt to keep it running on the same processor. If the policy is to try to keep a process on the same processor with no guarantee, it is called soft affinity. Some systems (e.g., Linux) provide system calls that support hard affinity, where a process may specify that it is not to migrate to a different processor.
- 36. Summary. Basic Concepts. Scheduling Criteria: the CPU scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them; CPU utilization – keep the CPU as busy as possible. Scheduling Algorithms: First-Come, First-Served (FCFS) Scheduling; Shortest-Job-First (SJF) Scheduling; Example of Non-Preemptive SJF; Example of Preemptive SJF; Determining Length of Next CPU Burst; Prediction of the Length of the Next CPU Burst; Examples of Exponential Averaging.
- 37. Priority Scheduling; Round Robin (RR); Example of RR with Time Quantum = 20; Time Quantum and Context Switch Time; Turnaround Time Varies with the Time Quantum; Multilevel Queue Scheduling; Multilevel Feedback Queue; Example of Multilevel Feedback Queue; Multiple-Processor Scheduling; Approaches to Multiple-Processor Scheduling; Processor Affinity in Multiprocessors; Summary.