D.2.1 The Task Dispatching Model
1   The task dispatching model specifies preemptive scheduling, based on conceptual priority-ordered ready queues.
Dynamic Semantics
2   A task runs (that is, it becomes a running task) only when it is ready (see 9.2) and the execution resources required by that task are available. Processors are allocated to tasks based on each task's active priority.
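
For illustration only (not part of this clause), the following minimal sketch shows one way a task's base priority can be set with pragma Priority (see D.1); the active priority derived from it determines which conceptual ready queues the task occupies. The task name and the chosen priority value are illustrative assumptions.

   with System;

   procedure Priority_Demo is
      --  High_Worker's base priority is one level above the default; the
      --  active priority derived from it determines which ready queues
      --  the task occupies.  Names and values are illustrative.
      task High_Worker is
         pragma Priority (System.Default_Priority + 1);
      end High_Worker;

      task body High_Worker is
      begin
         null;  --  runs only while it is ready and a processor is available
      end High_Worker;
   begin
      null;  --  the environment task continues at the default priority
   end Priority_Demo;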
3   It is implementation defined whether, on a multiprocessor, a task that is waiting for access to a protected object keeps its processor busy.
4
Task
dispatching is the process by which one ready task is selected for
execution on a processor. This selection is done at certain points during
the execution of a task called
task dispatching points. A task
reaches a task dispatching point whenever it becomes blocked, and whenever
it becomes ready. In addition, the completion of an
accept_statement
(see
9.5.2), and task termination are task
dispatching points for the executing task. Other task dispatching points
are defined throughout this Annex.
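
The following sketch, provided only as an illustration of these dispatching points, shows a simple rendezvous: the entry call blocks the caller (a dispatching point), and the completion of the accept_statement and the later termination of Server are dispatching points for Server. All names are illustrative.

   procedure Dispatch_Points_Demo is
      task Server is
         entry Request;
      end Server;

      task body Server is
      begin
         accept Request do
            null;              --  the rendezvous itself
         end Request;          --  completion of the accept_statement: a dispatching point
         null;
      end Server;              --  task termination: another dispatching point
   begin
      Server.Request;          --  the caller blocks here until accepted: a dispatching point
   end Dispatch_Points_Demo;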
5   Task dispatching policies are specified in terms of conceptual ready queues, task states, and task preemption. A ready queue is an ordered list of ready tasks. The first position in a queue is called the head of the queue, and the last position is called the tail of the queue. A task is ready if it is in a ready queue, or if it is running. Each processor has one ready queue for each priority value. At any instant, each ready queue of a processor contains exactly the set of tasks of that priority that are ready for execution on that processor, but are not running on any processor; that is, those tasks that are ready, are not running on any processor, and can be executed using that processor and other available resources. A task can be on the ready queues of more than one processor.
6   Each processor also has one running task, which is the task currently being executed by that processor. Whenever a task running on a processor reaches a task dispatching point, one task is selected to run on that processor. The task selected is the one at the head of the highest priority nonempty ready queue; this task is then removed from all ready queues to which it belongs.
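
The selection rule just described can be pictured, for a single processor, by the purely conceptual sketch below. It is not a required implementation; the type Task_Id, the use of a standard container as the queue, and all other names are illustrative assumptions.

   with Ada.Containers.Doubly_Linked_Lists;
   with System;

   procedure Ready_Queue_Sketch is
      --  Conceptual model only: one ready queue per priority value,
      --  for one processor.
      type Task_Id is new Natural;
      No_Task : constant Task_Id := 0;

      package Task_Lists is new Ada.Containers.Doubly_Linked_Lists (Task_Id);

      Queues : array (System.Any_Priority) of Task_Lists.List;

      function Select_Next return Task_Id is
      begin
         for P in reverse Queues'Range loop     --  highest priority first
            if not Queues (P).Is_Empty then
               declare
                  Chosen : constant Task_Id := Queues (P).First_Element;
               begin
                  Queues (P).Delete_First;      --  removed from its ready queue
                  return Chosen;                --  becomes the running task
               end;
            end if;
         end loop;
         return No_Task;                        --  nothing ready: the processor idles
      end Select_Next;

   begin
      Queues (System.Default_Priority).Append (1);
      Queues (System.Priority'Last).Append (2);
      pragma Assert (Select_Next = 2);          --  head of the highest priority nonempty queue
   end Ready_Queue_Sketch;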
7   A preemptible resource is a resource that, while allocated to one task, can be allocated (temporarily) to another instead. Processors are preemptible resources. Access to a protected object (see 9.5.1) is a nonpreemptible resource. When a higher-priority task is dispatched to the processor, and the previously running task is placed on the appropriate ready queue, the latter task is said to be preempted.
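
As an illustration of the last point, consider the sketch below (all names are illustrative): once a task is executing Counter.Increment, its access to the protected object Counter is not reallocated to another task, even though the processor it is running on remains a preemptible resource.

   procedure Protected_Demo is
      protected Counter is
         procedure Increment;
         function Value return Natural;
      private
         Count : Natural := 0;
      end Counter;

      protected body Counter is
         procedure Increment is
         begin
            Count := Count + 1;   --  access to Counter is held, nonpreemptibly, until this completes
         end Increment;

         function Value return Natural is
         begin
            return Count;
         end Value;
      end Counter;

      task Worker;
      task body Worker is
      begin
         Counter.Increment;
      end Worker;
   begin
      null;
   end Protected_Demo;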
8   A new running task is also selected whenever there is a nonempty ready queue with a higher priority than the priority of the running task, or when the task dispatching policy requires a running task to go back to a ready queue. These are also task dispatching points.
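
For illustration, in the sketch below (on a single processor, with priority values chosen purely as an example), High's delay statement blocks it, so Low can run; when the delay expires, High becomes ready, a nonempty ready queue of higher priority than Low's exists, and Low is preempted.

   with System;

   procedure Preemption_Demo is
      --  Priority values are illustrative assumptions.
      task Low is
         pragma Priority (System.Default_Priority);
      end Low;

      task High is
         pragma Priority (System.Default_Priority + 1);
      end High;

      task body Low is
         Count : Natural := 0;
      begin
         for I in 1 .. 10_000_000 loop
            Count := Count + 1;   --  busy work at the lower priority
         end loop;
      end Low;

      task body High is
      begin
         delay 0.01;   --  High blocks: a dispatching point, so Low can run
         null;         --  when the delay expires, High becomes ready and preempts Low
      end High;
   begin
      null;
   end Preemption_Demo;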
Implementation Permissions
9   An implementation is allowed to define additional resources as execution resources, and to define the corresponding allocation policies for them. Such resources may have an implementation-defined effect on task dispatching (see D.2.2).
10   An implementation may place implementation-defined restrictions on tasks whose active priority is in the Interrupt_Priority range.
NOTES
11   7  Section 9 specifies under which circumstances a task becomes ready. The ready state is affected by the rules for task activation and termination, delay statements, and entry calls. When a task is not ready, it is said to be blocked.
12   8  An example of a possible implementation-defined execution resource is a page of physical memory, which needs to be loaded with a particular page of virtual memory before a task can continue execution.
13   9  The ready queues are purely conceptual; there is no requirement that such lists physically exist in an implementation.
14   10  While a task is running, it is not on any ready queue. Any time the task that is running on a processor is added to a ready queue, a new running task is selected for that processor.
15   11  In a multiprocessor system, a task can be on the ready queues of more than one processor. At the extreme, if several processors share the same set of ready tasks, the contents of their ready queues are identical, and so they can be viewed as sharing one ready queue, and can be implemented that way. Thus, the dispatching model covers multiprocessors where dispatching is implemented using a single ready queue, as well as those with separate dispatching domains.