Operating Systems – Process Scheduling
October 22, 2009
We saw in the previous OS post that “Shortest-Job-First”, or SJF, can sometimes produce a better result (a lower average waiting time). It can be proved that SJF is optimal for a given set of processes that become available simultaneously.
Assume – for example – that we have 4 processes A, B, C and D that arrive at the same time with CPU bursts of 7, 4, 9 and 5 respectively. The process with the shortest CPU burst (B, in this case) will be chosen first.
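Using the four processes above, a quick sketch shows the SJF order and the waiting time each process ends up with:

```python
# SJF for processes that arrive simultaneously: run them in order of
# increasing CPU burst and record each process's waiting time.
bursts = {"A": 7, "B": 4, "C": 9, "D": 5}

order = sorted(bursts, key=bursts.get)   # shortest burst first
waiting = {}
elapsed = 0
for name in order:
    waiting[name] = elapsed              # time spent waiting before starting
    elapsed += bursts[name]

print(order)     # ['B', 'D', 'A', 'C']
print(waiting)   # {'B': 0, 'D': 4, 'A': 9, 'C': 16}
avg = sum(waiting.values()) / len(waiting)
print(avg)       # 7.25
```

Any other ordering of these four processes gives a higher average waiting time, which is exactly the sense in which SJF is optimal here.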
Shortest Remaining Time First/Next
This is a pre-emptive version of SJF. This version says that every time a new process arrives in the ready state, check the CPU burst requirement of this process. If it is less than the time needed to complete the current process on the CPU, then move the process currently on the CPU to the ready state and schedule the newly arrived process.
The problem is that a process with a long CPU burst may have to wait too long if processes with shorter CPU bursts keep arriving. On top of this, it is almost impossible to predict how long a process will need the CPU – because we can't predict the future!
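A minimal time-step simulation of SRTF, using an assumed three-process workload (the arrival times and bursts here are made up for illustration):

```python
# SRTF sketch: at every time unit, run the process (among those that
# have arrived) with the least remaining CPU time.
procs = {"P1": {"arrival": 0, "burst": 7},
         "P2": {"arrival": 2, "burst": 4},
         "P3": {"arrival": 4, "burst": 1}}

remaining = {p: info["burst"] for p, info in procs.items()}
finish = {}
t = 0
while remaining:
    ready = [p for p in remaining if procs[p]["arrival"] <= t]
    if not ready:                               # CPU idle until next arrival
        t += 1
        continue
    current = min(ready, key=remaining.get)     # shortest remaining time wins
    remaining[current] -= 1                     # run it for one time unit
    t += 1
    if remaining[current] == 0:
        del remaining[current]
        finish[current] = t

print(finish)   # {'P3': 5, 'P2': 7, 'P1': 12}
```

Note how P1 is pre-empted twice and finishes last even though it arrived first – exactly the behaviour (and the fairness risk) described above.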
Round Robin scheduling implicitly assumes that all processes are equally important, but this can be bad: what if one process really is more important than all the others?
In order to solve this, we need to introduce a notion of priority for each process – for example, giving a longer time slice to higher-priority processes.
SJF already uses a form of process priority: the priority is the predicted CPU time needed, with shorter bursts meaning higher priority.
Many options for priority scheduling are possible. There is one major problem, however, that we really need to avoid: starvation. This is when low-priority processes wait indefinitely for the CPU's attention. A common remedy is aging: gradually raising the priority of processes that have been waiting a long time.
Static vs. Dynamic Priorities
Static priorities are predetermined priorities for each process.
Dynamic priorities are assigned by the system to achieve certain goals, such as boosting the priority of I/O-bound processes.
Both kinds of priority may be defined internally (by the OS itself) or externally (by users or administrators).
Still… we haven’t seen how to map priorities onto the actual scheduling decisions!
One simple way of mapping priorities onto actual scheduling decisions would be to give to each process a time slice that is related to its priority.
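As a toy sketch of that idea (the base slice length and the linear mapping are assumptions for illustration, not from the post):

```python
# Hypothetical mapping: time slice grows linearly with priority.
BASE_SLICE_MS = 10

def time_slice(priority):
    """Higher priority (a positive integer) gets a longer time slice."""
    return BASE_SLICE_MS * priority

print(time_slice(1))   # 10 ms for a low-priority process
print(time_slice(4))   # 40 ms for a high-priority one
```

The exact mapping matters less than the principle: priority translates into a bigger share of the CPU.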
However, a more convenient approach is to view the ready state as not only one queue of processes, but multiple queues, each with its own priority!
Again – several options may exist:
- Processes in higher-priority queues may have to complete before processes in lower-priority queues get a chance to start running.
- Higher-priority queues may get a larger share of CPU time than lower-priority queues.
- Processes may move between queues (this is a type of dynamically adjusted priority).
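The first of those options can be sketched with a few queues (the three fixed priority levels and the process names are assumptions for illustration):

```python
from collections import deque

# Multilevel queues: one FIFO queue per priority level (0 = highest).
queues = {0: deque(), 1: deque(), 2: deque()}

def enqueue(process, priority):
    queues[priority].append(process)

def pick_next():
    for level in sorted(queues):        # scan from highest to lowest priority
        if queues[level]:
            return queues[level].popleft()
    return None                         # nothing is ready to run

enqueue("editor", 1)
enqueue("batch-job", 2)
enqueue("interrupt-handler", 0)
print(pick_next())   # interrupt-handler (highest-priority queue wins)
print(pick_next())   # editor
```

Letting `pick_next` also move long-running processes down a level would turn this into the multilevel *feedback* queue mentioned in the final note.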
More Process/Thread Scheduling
The examples in these two posts are only a few of the many possible scheduling algorithms! UNIX and Windows each have their own priority/multiqueue-based process scheduling algorithms.
Real-time systems may require other algorithms, such as “Earliest Deadline First”, which runs the process whose deadline comes first, or “Least Slack First”, which runs the process with the smallest slack (the time until its deadline minus its remaining processing time).
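The difference between the two rules is easy to see on a small, made-up task set (the task names, deadlines, and work amounts are assumptions):

```python
# Hypothetical real-time task set: deadline and remaining work per task.
tasks = {"sensor":  {"deadline": 10, "work": 3},
         "logger":  {"deadline": 20, "work": 2},
         "control": {"deadline": 12, "work": 8}}

# Earliest Deadline First: pick the task whose deadline comes first.
edf = min(tasks, key=lambda t: tasks[t]["deadline"])

# Least Slack First: pick the task with the least slack,
# i.e. time until its deadline minus the work it still needs.
lsf = min(tasks, key=lambda t: tasks[t]["deadline"] - tasks[t]["work"])

print(edf)   # sensor  (earliest deadline, 10)
print(lsf)   # control (smallest slack, 12 - 8 = 4)
```

Note that the two rules can pick different tasks: `control` has a later deadline than `sensor` but far less room to spare.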
Process scheduling is far from settled: many open problems remain with multiprocessors and hyper-threading.
A Final Note
From all of this we can conclude that multilevel feedback queue scheduling might be the most generic mechanism, since it can be configured to deal with different policies. However, it is also the most complex scheduling algorithm!
Policies decide what needs to be done, and mechanisms determine how it will be done.