Homework #1 Grading, Solutions, and General Comments
Index
Statistics
General Comments
Question 1
Question 2
Question 3
Question 4
Question 5
Question 6
Question 7
"Lies, Damn Lies, and Statistics"
Overall (100 points):
Mean: 75.80
Median: 81
Std. Dev: 20.37
Question 1 (5 points):
Mean: 4.08
Median: 4
Std. Dev: 0.96
Question 2 (5 points)
Mean: 4.52
Median: 5
Std. Dev: 0.90
Question 3 (20 points)
Mean: 16.80
Median: 18
Std. Dev: 3.89
Question 4 (15 points)
Mean: 13.18
Median: 18
Std. Dev: 3.89
Question 5 (15 points)
Mean: 8.05
Median: 9
Std. Dev: 3.96
Question 6 (20 points)
Mean: 19.03
Median: 20
Std. Dev: 3.20
Question 7 (20 points)
Mean: 13.34
Median: 15
Std. Dev: 6.74
General Comments
- Scores were high on questions 1 & 2 -- definitions.
- Scores on #3 were mostly high (median of 18).
Common errors included making uncommon assumptions without documenting
them or failing to overlap CPU and I/O devices, where appropriate.
- Question 4 (PCB allocation/deallocation and scheduling queues)
was high scoring, but almost everyone who lost points lost them
because they deallocated the PCB in exit() and did not discuss
how wait_() would get the status information. Another common
error was failure to discuss moving wait_() to a wait queue
until the child signaled.
- Question 5 was low-scoring for many people (sorry!). For the most part,
folks didn't consider enough of the relevant characteristics of processes,
user-level threads, and kernel-supported threads. Unfortunately,
some people also chose to restate the goal as the solution to the
goal -- this got very little credit. A frightening number of people
(very small, but still too many) seemed to believe that
kernel-supported threads made user processes run within the kernel's
address space (ouch!). Some people also lost points for other
mistruths, big and small.
- Question 6 was high-scoring. No surprise. As two of the students
noted, "Simple math."
- Question 7 was another rough question. The biggest problem was that
people's arguments weren't complete. Some people observed some A and B
and then argued that A AND B implies mutual exclusion, when in fact the two
didn't, or didn't by themselves. Other common wrong answers started
off well and then impeached themselves with mistruths along the way.
Some people got full credit for transforming the solution into
Peterson's Algorithm and then citing the class discussion. Short and
correct.
Question 1
Answers varied, but the following were important characteristics to mention.
In most cases, incorrect or missing characteristics cost 1 point each.
Traditional Interrupt-based I/O
- Device interrupts the CPU when it needs attention
- CPU is required to perform the I/O operation
- CPU stops processing other jobs while servicing the interrupt
DMA-based I/O
- CPU is required to initialize memory, registers, counters, &c
- DMA controller (not CPU) handles I/O operations
- Interrupt generated each time an entire block is transferred
Question 2
The three mechanisms include interrupts, traps, and exceptions. Full
credit was given for answers that described these mechanisms and specifically
mentioned at least two of the three. A few words also needed to be said about
privileged instructions, and "kernel vs. system mode" (by any name).
Any errors or omissions cost 1-2 points each.
Question 3
Answers varied with assumptions. Plenty of variety was accepted. If
required or unusual assumptions weren't stated and we could figure them out,
we typically took off only a point or two. Other errors typically cost 2 points
each, up to a total of 6 points for each part.
Question 4
Each part was weighted equally at three points.
DMA-based network transfers do not allocate or free any PCBs. The default
behavior is typically not to block, but we accepted answers either way.
Execve() does not allocate or free any PCBs. The old PCB is recycled. It
typically will move to a wait queue while the process is read from disk and
is moved back once this is done.
An exit call removes a process from the runnable queue. It typically frees
the resources, except the PCB, and places the PCB into a terminated queue
until a wait_() reads the status and frees the PCB. Other answers were
accepted, as long as the answer here or under wait mentioned handling process
exit status. Many modern OSs free the PCB, but keep the smaller amount of
process status information around until it is collected by wait_().
fork() creates a PCB. It places one or both processes into the runnable queue.
One process might continue running (but not both).
By default, waitpid() blocks until the child information can be collected.
It can be used in a non-blocking mode, so if this was explicitly assumed,
a non-blocking answer was accepted. Either here or in exit() the PCB needed
to be freed. If it was freed in exit, a few words needed to be said about
the exit status information of the child.
Question 5
A complete answer to this question addressed 4 issues:
- Overhead: In general, processes have more overhead than kernel
threads, which have more overhead than user-threads. User threads
do not require a system call to context switch. Kernel threads do
not incur TLB flushes. Threads have easier and more-efficient
synchronization options (such as shared memory) than do processes.
- Blocking: If a user-level thread blocks on I/O, it blocks all
other threads in that process.
- Robustness: In general, processes are more robust than kernel
threads, which are more robust than user-threads. Threads which
share a common address space can scribble on each other's memory.
Also, if a user-level thread dies due to an exception, all other
threads in that process die. Similarly, a kernel thread which
shares a common PCB with other kernel threads can take them
down if it fails.
- Scheduling: It is not commonly possible to schedule user-level
threads of the same process on multiple processors.
Grading information:
- -4 pts. for not discussing overhead. -2 pts. for not comparing
either process and kernel threads, or kernel-threads and user-threads.
- -3 pts for not discussing user-level threads blocking other threads.
- -4 pts. for not discussing robustness issues.
- -2 pts. for not discussing scheduling user-level threads on multiple
processors.
- -5 pts. for not discussing one of the three options.
- -1 pt. for each random blatantly incorrect statement.
- 0/15 for an answer which showed no understanding of the problem.
Question 6
The time to service an interrupt is 10 us (for saving and restoring
the CPU) + 80 us (for copying the packet from the interface) = 90 us.
(100 us is considered a valid interpretation and gets full credit).
The number of packets received per second is:
(5 * 2^20 b/s) / ((1024 B/packet) * (8 b/B)) = 640 packets/sec.
(either a binary or decimal interpretation of MB and KB was OK here)
So, the amount of time spent servicing interrupts in 1 second is:
(90 us/packet * 640 packets/s) / (10^6 us/s) = 0.0576 s,
which comes out to a 5.76% overhead.
Grading information:
- -1 for silly math errors.
- -3 for neglecting to handle bit/byte conversion.
- -5 for not thinking that network packets wait for interrupts before being
transmitted by the sending host.
- 5/20 if interrupt calculation was OK.
Question 7
This code trivially reduces to Peterson's Algorithm. Correct solutions either
stated this fact and cited the classroom discussion, or provided a proof
similar to the proof of Peterson's Algorithm.
- Full credit was given for a brief reduction to Peterson's Algorithm and a
citation of this proof. Full credit was also given for reproving this
solution.
- -3 for lack of rigor
- -5 for the right idea, but more of an intuition than an argument
- -10 for something that was weak, even on the intuition side of things