This is a closed-book examination. You are not allowed to
use any notes or other references during the exam. You will have
2 hours to answer all the questions. Answer the true/false and
short-answer questions on this paper. For other questions, write
as much of your answer as will fit in the space provided below the
question. If your answer does not fit in the space below the
question, use the blank paper provided to you to complete your
answer.
Each of the numbered questions is
worth 10 points. It is
possible to achieve a score of up to 170,
but the exam will be graded on a scale of 150. (In other words,
there are two bonus questions.)
You will probably find
some questions are easier for you than others. Therefore, it is
important to budget your time. Do not miss the chance to answer
some easier questions by puzzling too long over a question that is
giving you difficulty. Remember to write your name on every page,
and turn in all pages of the exam.
- Answer each question by marking a "T" or "F" next to the statement, inside the parentheses on the left.
- ( ) External fragmentation cannot occur with a
paged memory system.
- ( ) Internal fragmentation occurs when a larger
block of storage is allocated than was requested.
- ( ) The Buddy System fights external fragmentation
without memory compaction.
- ( ) Dynamic relocation is necessary in a
system that uses memory compaction.
- ( ) Best-Fit generally causes less fragmentation
than First-Fit.
- ( ) A function of a linker is to replace
symbolic references between object modules by
absolute references to locations in a single load image.
- ( ) Hashing is used by the Unix block
buffering system to find the buffer associated with a given
disk block.
- ( ) If the swapping device is busy more than 50% of the time,
the system should consider reducing the resident set size of one or more
processes.
- ( ) The TLB contains the page frame numbers
of recently accessed virtual pages.
- ( ) If a reference is made to
a page whose corresponding frame number is not found in the TLB, a page
fault occurs.
- Answer each question by marking a "T" or "F" next to the statement, inside the parentheses on the left.
- ( ) Paging can be combined with a segmented memory system
to eliminate external fragmentation.
- ( ) If a process
performs a signal operation on a counting semaphore whose
value is positive, it will wake up one or more of the processes that
are waiting on the semaphore.
- ( ) If a thread
has a signal blocked and the signal is generated for the thread,
the handler will execute at least once.
- ( ) Two Unix
processes may simultaneously have open file descriptors that refer to the same
open file description.
- ( ) In Unix a single file may be referred to by
several different names.
- ( ) Threads that are implemented at the kernel
level are likely to be more able to take advantage of an SMP
system for parallel execution than threads that are implemented
only at the library/user level.
- ( ) All the threads in a process share access to
the same set of open files (i.e., the same mapping of open file descriptors to
open file descriptions).
- ( ) All the threads in a process share access to
the same virtual memory space.
- ( ) After returning from a call to
pthread_cond_wait (&C, &M), the calling thread can rely on the mutex M being locked.
- ( ) A procedure is reentrant if it is safe for
interleaved execution with other calls to the same procedure from
different threads.
- Answer each question by marking a "T" or "F" next to the statement, inside the parentheses on the left.
- ( ) The earliest-deadline-first scheduling
policy minimizes average response time.
- ( ) The shortest-remaining-processing-time-first scheduling
policy maximizes worst-case response time.
- ( ) With a round-robin preemptive time-sliced
scheduler, decreasing the quantum size reduces context-switching overhead
and thereby increases overall system throughput.
- ( ) With a round-robin preemptive time-sliced
scheduler, decreasing the quantum size reduces the amount of time
a short-execution-time job needs to wait for a long-execution-time job,
and so decreases average response time.
- ( ) One of the largest components of disk access time
is the transfer time.
- ( ) Seek time is the time it takes the disk to rotate
around until the desired block is under the head.
- ( ) Blocks that are in the same cylinder can be
accessed without any delay for seek time.
- ( ) RAID 1 consists of mirroring only.
- ( ) RAID 0 can tolerate failure of one disk.
- ( ) RAID 3 makes use of a parity disk and the XOR operation
to allow reconstruction of data after failure of any one out of N disks.
- Answer each question by marking a "T" or "F" next to the statement, inside the parentheses on the left.
- ( ) Malicious code embedded in an
otherwise legitimate program that is set to cause
a catastrophic failure when certain conditions are met is called a Trojan Horse.
- ( ) The term "worm" is used for a malicious program that
reproduces itself by modifying other programs so that they
they will then reproduce the worm in turn.
- ( ) In Unix, the term "zombie" is used for a process
that has terminated but whose process table entry cannot be deallocated yet
because its parent has not yet collected the zombie's termination status information.
- ( ) In computer security, the term "zombie" is used
for a program that secretly takes over a computer that is attached to the
Internet, and can later be remotely activated to launch attacks that are
hard to trace back to the zombie's creator.
- ( ) A Reference Monitor can defend against a
Trojan Horse attack by detecting and blocking attempts to pass
data from a higher security level to a lower security level.
- ( ) The most commonly encountered file protection
mechanism in modern operating systems is the access matrix.
- ( ) The term "stealth virus" is used for
a type of virus that changes form as it reproduces, so as to make identification
by virus-scanning softare more difficult.
- ( ) Public-key data encryption mechanisms can be used
for authentication as well as for privacy.
- ( ) In a public-key cryptosystem the public key
is divided into two parts: the private key and the secret key.
- ( ) In both interrupt-driven and DMA I/O, completion
of an I/O operation is signaled from the I/O controller to the operating
system via a hardware interrupt.
- Explain and contrast
short-term
versus medium-term versus long-term scheduling.
What is scheduled? What are the objectives? Which component(s)
of the operating system are responsible?
Consider the following program.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/types.h>
int main () {
    int n, status;
    pid_t child;
    n = 0;
    if (! (child = fork () ) ) {
        n = 2;
        waitpid (child, &status, 0);
    } else {
        n = 4;
    }
    fprintf (stderr, "n = %d\n", n);
    return 0;
}
What is the output?
Explain how a lockfile may be used to implement mutual
exclusion between two processes. Your answer should include
pseudo-code for the lock and unlock operations.
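One common approach is to treat the existence of an agreed-upon file as the lock, relying on the atomicity of file creation. The sketch below assumes POSIX open() with O_CREAT | O_EXCL and unlink(); the names lockfile_acquire, lockfile_release, and LOCKFILE_PATH are illustrative only, not part of the question.

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

#define LOCKFILE_PATH "/tmp/example.lock"   /* illustrative name */

/* Lock: atomically create the lockfile.  Failure with EEXIST means
   another process already holds the lock, so wait and retry. */
void lockfile_acquire (void)
{
    for (;;) {
        int fd = open (LOCKFILE_PATH, O_CREAT | O_EXCL | O_WRONLY, 0600);
        if (fd >= 0) {          /* creation succeeded: we now hold the lock */
            close (fd);
            return;
        }
        if (errno != EEXIST)    /* unexpected error: give up in this sketch */
            return;
        sleep (1);              /* lock held by someone else: retry later */
    }
}

/* Unlock: remove the lockfile so another process's open() can succeed. */
void lockfile_release (void)
{
    unlink (LOCKFILE_PATH);
}

The atomicity of O_CREAT | O_EXCL is what makes this work: if several processes call open() at the same time, at most one of them can create the file.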
Consider a system with one producer and several consumer threads,
and a bounded pool of buffers. The code of each thread is
outlined below. Assume the procedures produce_data and
use_data may take a long time to execute, so you don't
want the producer or consumer to hold mutex M locked while these
are executing. Also assume the variable free has been
initialized to point to a list of empty buffers, and head
and tail have been initialized to NULL.
pthread_t Producer, Consumer[N];
pthread_mutex_t M;
pthread_cond_t P,C;
typedef struct buffer {
    char *data;
    struct buffer *next;
} buffer_t;
buffer_t *free;  /* head of null-terminated list of empty buffers */
buffer_t *head;  /* head of list of full buffers */
buffer_t *tail;  /* tail of list of full buffers */
void Producer ()
{ buffer_t *temp;
    pthread_mutex_lock (&M);
    while (TRUE) {
        while (free == NULL) {
            ________________________________________________________________________
        }
        temp = free; free = free->next;
        ___________________________________________________________________________
        produce_data (temp);
        ___________________________________________________________________________
        ___________________________________________________________________________
        if (head == NULL) head = temp;
        else tail->next = temp;
        tail = temp;
    }
    pthread_mutex_unlock (&M);
}

void Consumer ()
{ buffer_t *temp;
    pthread_mutex_lock (&M);
    while (TRUE) {
        while (head == NULL) {
            ________________________________________________________________________
        }
        temp = head; head = head->next;
        ___________________________________________________________________________
        use_data (temp);
        ___________________________________________________________________________
        ___________________________________________________________________________
        temp->next = free; free = temp;
    }
    pthread_mutex_unlock (&M);
}
Fill in POSIX mutex and condition variable API calls in
the blank lines above to provide the appropriate mutual exclusion.
Explain why such a solution, using a POSIX mutex for mutual
exclusion, would not be appropriate in the case where the producer
is the handler procedure for a hardware interrupt. Explain how
mutual exclusion between the handler and the consumer could be
implemented in the latter case.
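For reference, the conventional POSIX pattern behind this kind of fill-in (a condition variable waited on inside a mutex-protected loop, and signaled after the shared state changes) is sketched below in generic form; the names ready, wait_until_ready, and make_ready are illustrative, and this is not presented as the answer to the blanks above.

#include <pthread.h>
#include <stdbool.h>

/* Generic mutex + condition variable pattern (sketch only). */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;          /* stands in for "a buffer is available" */

void wait_until_ready (void)
{
    pthread_mutex_lock (&lock);
    while (!ready)                        /* re-test the predicate after every wakeup */
        pthread_cond_wait (&cond, &lock); /* atomically releases the mutex while blocked */
    /* ... use the shared state while still holding the mutex ... */
    pthread_mutex_unlock (&lock);
}

void make_ready (void)
{
    pthread_mutex_lock (&lock);
    ready = true;                         /* change the shared state under the mutex */
    pthread_cond_signal (&cond);          /* or pthread_cond_broadcast for many waiters */
    pthread_mutex_unlock (&lock);
}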
Consider the following example of a
multi-unit resource allocation problem, for which the Banker's
Algorithm was designed to avoid deadlock.
Claim Matrix (Maximum Amount Each Process May Hold at One Time)

Process | R1 | R2 | R3 | R4 |
A       |  4 |  1 |  1 |  1 |
B       |  0 |  2 |  1 |  2 |
C       |  4 |  2 |  1 |  0 |
D       |  1 |  1 |  1 |  1 |
E       |  2 |  1 |  1 |  0 |

Allocation Matrix (Resources Currently Held by Processes)

Process | R1 | R2 | R3 | R4 |
A       |  3 |  0 |  1 |  1 |
B       |  0 |  1 |  0 |  0 |
C       |  1 |  1 |  1 |  0 |
D       |  1 |  1 |  0 |  1 |
E       |  0 |  0 |  1 |  0 |

Resource Vector (Total Resources Existing)

Available Vector (Resources Currently Available)
- Give a feasible sequence of process completions, to demonstrate
that the current state is safe, showing the updated Available Vector
after each process completion. (A sketch of the underlying safety check
appears after the next part.)
- Give an example of a request that is consistent
with the remaining unallocated claims and available resources
but would, if granted, take the
system into an unsafe state.
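For reference, a minimal sketch of the safety check that the first part exercises by hand, under the assumption that need[i][j] = claim[i][j] - alloc[i][j] and that the Available Vector is supplied in avail[]; the array and function names are illustrative, not taken from the tables above.

#include <stdbool.h>

#define NPROC 5   /* processes A..E in the tables above */
#define NRES  4   /* resource types R1..R4 */

/* Returns true if some order of process completions exists in which every
   process can obtain its remaining claim (need = claim - alloc) and then
   release everything it currently holds. */
bool state_is_safe (int need[NPROC][NRES], int alloc[NPROC][NRES], int avail[NRES])
{
    int  work[NRES];
    bool done[NPROC] = { false };

    for (int j = 0; j < NRES; j++)
        work[j] = avail[j];                  /* start from the Available Vector */

    for (int finished = 0; finished < NPROC; ) {
        int i;
        for (i = 0; i < NPROC; i++) {
            if (done[i])
                continue;
            bool fits = true;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                      /* process i can run to completion */
                for (int j = 0; j < NRES; j++)
                    work[j] += alloc[i][j];  /* it then releases what it holds */
                done[i] = true;
                finished++;
                break;
            }
        }
        if (i == NPROC)                      /* no runnable process: state is unsafe */
            return false;
    }
    return true;                             /* every process could complete */
}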
- Consider a paged virtual memory system.
Suppose the page table for the process currently executing on
the processor looks like the following. All numbers are decimal,
everything is numbered starting from zero, and all addresses are
memory byte addresses. The page size is 1024 bytes.
Virtual page number | Valid bit | Reference bit | Modify bit | Page frame number |
0 | 1 | 1 | 0 | 3 |
1 | 0 | 0 | 0 | - |
2 | 1 | 0 | 1 | 7 |
3 | 1 | 1 | 1 | 0 |
4 | 1 | 1 | 0 | 2 |
5 | 0 | 0 | 0 | - |
What physical address, if any, would each of the following
virtual addresses correspond to? If there would be a page
fault, first indicate that one would take place and then compute
the physical address after the page fault,
assuming the Reference and Modify bits are used to
approximate an LRU replacement policy. (A sketch of the
translation arithmetic follows the two addresses.)
- 1500
- 4000
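A minimal sketch of the translation arithmetic for the 1024-byte page size given above; the page table itself still has to be consulted for the valid bit and the frame number, and the function names are illustrative.

#define PAGE_SIZE 1024

/* Split a virtual address into page number and offset, and rebuild a
   physical address from a frame number and that offset. */
unsigned long page_of (unsigned long vaddr)   { return vaddr / PAGE_SIZE; }
unsigned long offset_of (unsigned long vaddr) { return vaddr % PAGE_SIZE; }

unsigned long phys_addr (unsigned long frame, unsigned long vaddr)
{
    return frame * PAGE_SIZE + offset_of (vaddr);
}

For example, virtual address 1500 is offset 476 within virtual page 1, so the row for page 1 in the table above determines whether that reference faults.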
- Explain what the working set of a process is, and how the
concept is used in operating system design. How can the OS detect
when the working set of a process changes? Suppose a local page
replacement scheme is used, in which each process is allocated a
number of frames that can be adjusted up or down as necessary.
How can the OS detect when a process needs to be allocated more
frames, to hold its (growing) working set? What should the OS do
in this case, if it cannot give the process another frame without
stealing a page from the working set of some other process?
- Explain the relationship of inodes, data blocks,
and the three degrees of indirect blocks in the Unix filesystem.
Then explain how this structure allows for there to be holes in
a file.
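For concreteness, a small sketch of the index computation this layout implies, under the assumption of 12 direct pointers, 1024-byte blocks, and 4-byte block pointers (so 256 pointers per indirect block); these numbers are illustrative assumptions, not values given in the question.

#include <stdio.h>

/* Illustrative Unix-style inode geometry (assumed, not from the question). */
#define NDIRECT   12                      /* direct block pointers in the inode */
#define BLOCKSIZE 1024
#define PTRSIZE   4
#define NINDIRECT (BLOCKSIZE / PTRSIZE)   /* pointers per indirect block */

/* Report which level of indirection holds logical block 'b' of a file.
   A block that was never written needs no pointer at all: that is a hole. */
const char *level_for_block (long b)
{
    if (b < NDIRECT)
        return "direct pointer in the inode";
    b -= NDIRECT;
    if (b < NINDIRECT)
        return "single indirect block";
    b -= NINDIRECT;
    if (b < (long) NINDIRECT * NINDIRECT)
        return "double indirect block";
    return "triple indirect block";
}

int main (void)
{
    /* e.g. seeking far past end-of-file and writing one byte leaves the
       intermediate pointers zero: those unwritten blocks are the holes. */
    printf ("block 5:    %s\n", level_for_block (5));
    printf ("block 5000: %s\n", level_for_block (5000));
    return 0;
}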
- What is the difference between an access control list and a
capability list? Explain how these concepts apply to the Unix
operating system's file protection bits, and to the set
of open file descriptors of a process.
- Explain the difference between a Trojan Horse, a Worm, and a
Virus.
- Assume I/O requests arrive at the disk driver subsystem as
shown in the table below. Those with negative times are requests
that arrived in the past. Assume a seek takes 1 ms (1/1000
second) of start and stop time, plus 1 ms for every 10 cylinders
traveled. Thus, for example, to move from cylinder 70 to cylinder
30 takes 1 ms + 4 ms = 5 ms. Assume the time is now zero (0), the disk
head is at cylinder 40, the last direction of head movement was
upward, and the requests that arrived at times -5 and -2 still
need to be served.
time of request | -5 | -2 |  4 |  8 |  14 | 18 |
cylinder        | 80 | 20 |  9 | 40 | 100 | 10 |
Compute how much total seek time is needed to process these six
requests according to each of the following algorithms. Show how you
computed the answer in each case. (A sketch of the seek-time formula
appears after the list of algorithms.)
- FIFO
- SCAN
- CSCAN
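A minimal sketch of the seek-cost rule stated in the question (1 ms of start/stop time plus 1 ms per 10 cylinders traveled); how to charge a partial group of fewer than 10 cylinders is not specified, so the rounding below is an assumption.

#include <stdlib.h>

/* Seek cost in ms: 1 ms start/stop overhead plus 1 ms per 10 cylinders.
   Rounding up for a partial group of cylinders is an assumption; the
   question's example only uses an exact multiple of 10. */
int seek_ms (int from_cyl, int to_cyl)
{
    int distance = abs (from_cyl - to_cyl);
    if (distance == 0)
        return 0;                        /* assumed: no movement, no charge */
    return 1 + (distance + 9) / 10;      /* e.g. 70 -> 30: 1 + 40/10 = 5 ms */
}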
- Explain the UNIX filesystem's block buffering scheme. How is it
implemented? What is the benefit? How does it affect the ability
of the system to recover from system crashes? To what type
of I/O devices does it apply? Is this part of the
device-dependent or device-independent part of the I/O subsystem?