Lecture #5: Mutual Exclusion
These topics are from Chapter 2 (Synchronization) and Chapter 6
(Distributed Mutual Exclusion) in Advanced Concepts in OS.
Topics for Today
- quiz
- Review:
- Give a high-level description of the Birman-Schiper-Stephenson protocol for causal ordering of messages.
- describe in English: consistent global state, inconsistent global state,
strongly consistent global state
- Chandy-Lamport GS recording algorithm, how does it work?
- Mutual exclusion
- the need for it
- how it is achieved in a local context
- mutual exclusion in distributed context
- the centralized approach
- first view of Lamport's mutual exclusion algorithm
Review of Mutual Exclusion
- What is mutual exclusion?
- How would you implement a semaphore?
(Students should all be able to answer these, from
the prerequisite course in operating systems.)
The need for mutual exclusion comes with concurrency.
There are several kinds of concurrent execution:
- Interrupt handlers
- Interleaved preemptively scheduled processes/threads
- Multiprocessor clusters, with shared memory
- Distributed systems
The above executions are all possible,
depending on the scheduling policy of the operating system.
(What are the three scheduling policies shown?)
- processors are generally assumed to be preemptable
- some other resources are not preemptable
i.e., mutual exclusion must be enforced between users
Example: two tasks, A and B, update a shared variable M.
procedure A is
begin ...
M := M + 1; ...
end A;
procedure B is
begin ...
M := M - 1; ...
end B;
With strictly sequential execution, everything works fine.
Which task executes first does not matter.
Races
With parallel or interleaved execution, what happens?
We have a race between tasks A and B.
The effect of the execution depends on who "wins" the race.
Even supposing we have interleaved execution
and the primitive memory fetch and store operations are atomic,
the outcome depends on the particular interleaving of the operations.
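For concreteness, here is a minimal sketch of such a race in C with
POSIX threads (the variable name and loop count are illustrative):
tasks A and B update a shared variable M with no protection, so the
final value depends on how their load/store sequences interleave.

#include <pthread.h>
#include <stdio.h>

static int M = 0;                   /* shared variable, unprotected */

static void *task_A(void *arg) {    /* repeatedly M := M + 1 */
    for (int i = 0; i < 1000000; i++)
        M = M + 1;                  /* load, add, store: not atomic */
    return NULL;
}

static void *task_B(void *arg) {    /* repeatedly M := M - 1 */
    for (int i = 0; i < 1000000; i++)
        M = M - 1;
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, task_A, NULL);
    pthread_create(&b, NULL, task_B, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("M = %d\n", M);          /* often nonzero, and varies per run */
    return 0;
}

The classic local remedy is a lock, with two operations: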
- Lock operation
- "blocks" the caller until no other task is holding the lock
- the caller proceeds when it is able to get the lock
- Unlock operation
- releases the lock held by the caller
- this may allow another task that is blocked on a Lock operation
to proceed
This is the general idea. There are LOTS of variations on the
details, which I hope you have seen in a prior course.
The earlier example, with locking added:
procedure A is
begin ...
Lock;
M := M + 1;
Unlock; ...
end A;
procedure B is
begin ...
Lock;
M := M - 1;
Unlock; ...
end B;
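The same idea sketched in C, with a POSIX mutex standing in for the
Lock/Unlock pair (the names A, B, and M_lock are illustrative):

#include <pthread.h>

static int M = 0;
static pthread_mutex_t M_lock = PTHREAD_MUTEX_INITIALIZER;

void A(void) {
    pthread_mutex_lock(&M_lock);    /* Lock: block until no one holds it */
    M = M + 1;                      /* critical section */
    pthread_mutex_unlock(&M_lock);  /* Unlock: a blocked task may proceed */
}

void B(void) {
    pthread_mutex_lock(&M_lock);
    M = M - 1;
    pthread_mutex_unlock(&M_lock);
}

With each update inside the lock, the outcome no longer depends on
the interleaving.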
Examples of resources that require mutually exclusive use:
- tape drive
- printer
- modem
- region of RAM to hold program
- Counting semaphores are used for keeping track of multi-unit
resources.
- Semaphore object has a (non-negative) integer Value
- Wait operation:
wait until Semaphore.Value is positive, then decrement it
- Post operation:
increment Semaphore.Value
- These operations are atomic, meaning they are
implemented in a way that prevents them from being interleaved.
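One way to see what that atomicity requires is to sketch a counting
semaphore on top of a mutex and a condition variable (a common
textbook construction, not the POSIX semaphore API; the names below
are illustrative). The mutex makes the test and decrement in Wait a
single indivisible step.

#include <pthread.h>

typedef struct {
    unsigned value;               /* non-negative count of free units */
    pthread_mutex_t lock;
    pthread_cond_t nonzero;       /* signaled when value becomes positive */
} counting_semaphore;

void csem_init(counting_semaphore *s, unsigned initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void csem_wait(counting_semaphore *s) {      /* Wait operation */
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)                    /* wait until positive ... */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;                              /* ... then decrement */
    pthread_mutex_unlock(&s->lock);
}

void csem_post(counting_semaphore *s) {      /* Post operation */
    pthread_mutex_lock(&s->lock);
    s->value++;                              /* increment */
    pthread_cond_signal(&s->nonzero);        /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}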
Non-Distributed Mutual Exclusion Mechanisms
- single processor: mask all interrupts
- shared-memory multiprocessor: spin-lock
Why does each of these work?
Why doesn't it work in more general environments?
One of the topics that is often skipped in the first course
on operating systems is the details of how mutual exclusion is
implemented. It is useful to understand these details.
One benefit is better perspective on the comparative costs of
local versus distributed mutual exclusion.
The code that implements spinlocks in version 2.2 of the Linux kernel
is mostly contained in the file spinlock.h,
from /usr/src/linux-2.2.14/include/asm-i386.
This code is not easy to read without some more explanation. A
detailed commentary on the more subtle parts of the Linux spinlock
implementation code is given in a separate file.
Look through the explanation in detail. Observe the
following:
- C language systems programming subtleties, such as the use of
macros, conditional compilation, volatile variables (why are they
needed?), in-line machine code, use of multiple code-segments to
control code positioning
- interrupts must be disabled over critical sections protected
by spinlocks, generally including the "spinning" loops. Why?
- with interrupts disabled and processors possibly
spinning, it is important to keep such critical sections
very short
- the hierarchy of techniques for mutual exclusion:
- interrupt masking -- mutual exclusion on the CPU resource, implemented
in hardware
- memory bus locking -- mutual exclusion on the memory bus
resource, implemented in hardware
Here we can view the memory as playing the role of the processor:
once the bus is locked we are, in effect, back to a single-processor
system and are masking interrupts on it (the memory).
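Stripped of the Linux-specific details, the essential mechanism can
be sketched with C11 atomics (a simplified illustration, not the
kernel's code; it ignores the interrupt masking and memory-ordering
issues discussed above):

#include <stdatomic.h>

typedef struct { atomic_flag locked; } my_spinlock;
/* initialize with:  my_spinlock lock = { ATOMIC_FLAG_INIT }; */

static void my_spin_lock(my_spinlock *l) {
    /* atomically set the flag and get its old value; keep trying
       until the old value was clear (i.e., no one held the lock) */
    while (atomic_flag_test_and_set(&l->locked))
        ;                           /* spin, burning CPU cycles */
}

static void my_spin_unlock(my_spinlock *l) {
    atomic_flag_clear(&l->locked);  /* release: one spinner can now enter */
}

The atomic test-and-set is exactly where the hardware's memory bus
(or cache line) locking comes in.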
Observe the comparative cost of disabling/enabling interrupts (2 instructions,
no idle CPU time) versus spinlocks (a few instructions, but possibly
idling CPUs) versus distributed mutual exclusion. For example,
consider a permission-based distributed algorithm such as Lamport's:
roughly three messages are sent, times the number of systems,
and we have to wait for replies from all systems.
and we have to wait for replies from all systems. Given a spinlock
takes a few microseconds to execute and local area network delays
are a few milliseconds, the difference in scale of the overhead
of mutual exclusion is at least 1000. This is a good reason to
try to avoid designing systems that require distributed mutual
exclusion.
Applications of Distributed Mutual Exclusion
- Because mutual exclusion is a central need in local
operating systems, one tends to assume a distributed
form is required in distributed systems.
- This is not necessarily true.
- A well-designed distributed system may be able to avoid
creating situations that require distributed mutual exclusion.
- For example, each resource may be assigned to a server, generally
co-located with the actual resource.
The server has exclusive access to the resource, and handles
mutual exclusion locally.
- A few examples where distributed mutual
exclusion is needed may remain.
- Are updates of a distributed directory a good example?
In general, one should "think outside the box". In
particular, do not assume that a good approach to solving a
small/local problem will scale up, or vice versa.
Classification of Mutual Exclusion Algorithms
- nontoken-based
- require multiple rounds of message exchanges
for local states to stabilize
- token-based
- permission passes around from one site to another
Requirements for Mutual Exclusion
- freedom from deadlocks
- freedom from starvation
- fairness
- fault tolerance
What does each of these mean?
Requirements for Mutual Exclusion
- deadlock* = endless waiting due to circular wait relationships
- starvation = unbounded waiting due to order of service policy
- unfairness = requests are not served in order they are made
- fault intolerance = algorithm breaks if processes die or messages
are lost or garbled
Why might fairness not be appropriate in some systems?
*Livelock will come up later, under transaction
systems that support cancellation and rollback. It is an infinite
cycle of cancellations and rollbacks.
Performance Metrics
- number of messages per CS invocation
- synchronization delay (sd)
A leaves → B enters
- response time
A requests → A leaves
- throughput
CS requests handled per time unit = 1 / (sd + E)
where E is average execution time of CS
Performance may depend on whether load is low or high.
Best case, worst case, and average cases are all of interest.
Centralized Control
- request and release messages are sent to control site
- good: control site sequences granting of access
- bad:
- 3 messages per CS execution
- single point of failure
- bottleneck at control site
- sd = 2T (where T is the average message delay)
What is the throughput?
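Before answering the throughput question, here is a heavily
simplified, single-process sketch of the control site's logic in C
(REQUEST, GRANT, and RELEASE are reduced to plain function calls and
printfs; a real control site would send and receive them as network
messages):

#include <stdio.h>

#define MAX_SITES 16

static int queue[MAX_SITES];            /* FIFO of waiting sites       */
static int head = 0, tail = 0, waiting = 0;
static int holder = -1;                 /* site now in its CS, or -1   */

static void grant(int site) {
    printf("GRANT -> site %d\n", site); /* the coordinator's reply     */
    holder = site;
}

void handle_request(int site) {         /* REQUEST arrives from a site */
    if (holder == -1)
        grant(site);                    /* CS free: grant at once      */
    else {
        queue[tail] = site;             /* otherwise queue the site    */
        tail = (tail + 1) % MAX_SITES;
        waiting++;
    }
}

void handle_release(void) {             /* RELEASE arrives from holder */
    holder = -1;
    if (waiting > 0) {                  /* grant to next in FIFO order */
        int next = queue[head];
        head = (head + 1) % MAX_SITES;
        waiting--;
        grant(next);
    }
}

int main(void) {                        /* tiny illustrative run       */
    handle_request(1);                  /* site 1 granted at once      */
    handle_request(2);                  /* site 2 queued               */
    handle_release();                   /* site 1 done; grant site 2   */
    handle_release();                   /* site 2 done; queue empty    */
    return 0;
}

The three messages per critical-section execution are the REQUEST,
the GRANT reply, and the RELEASE.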
Throughput and Synchronization Delay
- sd = 2T seconds/CS
- response time = sd + E = 2T + E seconds/CS
- throughput = 1/(sd + E) = 1/(2T + E) CS/second
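For example, with illustrative numbers T = 5 ms (average message delay)
and E = 2 ms: sd = 2T = 10 ms, response time = 12 ms, and
throughput = 1/(0.012 s), about 83 CS per second.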
Why is the execution time not sd + E + sd?
Next class: continue with Advanced Concepts in OS,
Chapter 6, Distributed Mutual Exclusion.