


A set of questions from the CS 162, Spring 1996 midterm exam, focusing on threading and synchronization. The questions cover identifying the items stored in the thread control block, tracing a sequence of context switches, a Thread::Join() implementation, atomic transfer between queues, and countermeasures for CPU scheduling policies.
Of the following items, circle those that are stored in the thread control block.
(a) CPU registers
(b) page table pointer
(c) stack pointer
(d) ready list
(e) segment table
(f) thread priority
(g) program counter
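For orientation, here is one common way a simple kernel might lay out the per-thread state kept in a thread control block. The type and field names below are illustrative assumptions, not the actual Nachos declarations.

const int NumRegs = 32;                   // assumed machine register count

enum ThreadStatus { READY, RUNNING, BLOCKED };

struct ThreadControlBlock {               // sketch only, not Nachos source
    int          machineState[NumRegs];   // saved CPU registers (including the
                                          // program counter) from the last switch
    int         *stackPointer;            // this thread's private execution stack
    int          priority;                // scheduling priority of this thread
    ThreadStatus status;                  // READY, RUNNING, or BLOCKED
    const char  *name;                    // human-readable name, for debugging
};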
Write down the sequence of context switches that would occur in Nachos if the "main" thread were to call the following code. Assume that the CPU scheduler runs threads in FIFO order, with no time-slicing and all threads having the same priority. The WillJoin flag signifies that the thread will be joined by its parent. For example, "child2 => child1" would signify that child2 switches to child1.
void Thread::SelfTest2() {
    Thread *t1 = new Thread("child 1", WillJoin);
    Thread *t2 = new Thread("child 2", WillJoin);

    t1->Fork((VoidFunctionPtr) &Thread::Yield, t1);
    t2->Fork((VoidFunctionPtr) &Thread::Yield, t2);
    t2->Join();
    t1->Join();
}
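For tracing purposes, the semantics assumed for the three primitives can be summarized as follows. This is a paraphrase under the stated FIFO, no-time-slicing assumptions, not the Nachos source; the typedef is only there to make the sketch self-contained.

typedef void (*VoidFunctionPtr)(void *arg);

class Thread {
public:
    // Fork: arrange for this thread to run (*func)(arg) and append it to the
    // TAIL of the ready list.  The calling thread keeps the CPU; Fork itself
    // causes no context switch.
    void Fork(VoidFunctionPtr func, void *arg);

    // Yield: move the calling thread to the TAIL of the ready list, then
    // switch to the thread at the HEAD of the ready list (if any).
    void Yield();

    // Join: block the calling (parent) thread until this (child) thread has
    // finished; the scheduler then runs the next thread on the ready list.
    void Join();
};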
For the following implementation of Thread::Join(), say whether it (i) works, (ii) doesn't work, or (iii) is dangerous -- that is, sometimes works and sometimes doesn't. If the implementation does not work or is dangerous, explain why and show how to fix it so that it does work. You may assume parents always call Thread::Join() on their child threads; Thread::Join() is a method on the child thread, not the parent.
class Thread {
    Semaphore *finished;                // synch parent and child

    Thread::Thread() {
        finished = new Semaphore(0);    // initial value = 0
        // plus other standard stuff from the Nachos code
    }
    Thread::~Thread() {
        delete finished;
        // plus other standard stuff from the Nachos code
    }
};

void                    // called by parent to wait for child thread
Thread::Join()
{
    finished->P();      // wait for thread to finish
}

void                    // called by child thread when it is done
Thread::Finish()
{
    Thread *oldThread = kernel->currentThread;
    Thread *nextThread;

    (void) kernel->interrupt->SetLevel(IntOff);         // first turn interrupts off
    finished->V();                                      // then wake up parent
    delete this;                                        // deallocate current thread
    nextThread = kernel->scheduler->FindNextToRun();    // find next thread to run
    kernel->currentThread = nextThread;
    SWITCH(oldThread, nextThread);                      // context switch to it
}
For the following implementation of atomic transfer, say whether it (i) works, (ii) doesn't work, or (iii) is dangerous -- that is, sometimes works and sometimes doesn't. If the implementation does not work or is dangerous, explain why and show how to fix it so it does work. The problem statement is as follows: The atomic transfer routine dequeues an item from one queue and enqueues it on another. The transfer must appear to occur atomically: there should be no interval of time during which an external thread can determine that an item has been removed from one queue but not yet placed on another. In addition, the implementation must be highly concurrent -- it must allow multiple transfers between unrelated queues to happen in parallel. You may assume that queue1 and queue2 never refer to the same queue.
void AtomicTransfer(Queue *queue1, Queue *queue2) {
    Item thing;                     // thing being transferred

    queue1->lock.Acquire();
    thing = queue1->Dequeue();
    if (thing != NULL) {
        queue2->lock.Acquire();
        queue2->Enqueue(thing);
        queue2->lock.Release();
    }
    queue1->lock.Release();
}
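As an aside, the high-concurrency requirement can be exercised with a minimal self-contained harness like the one below. The Lock and Queue classes here are stand-in assumptions built on std::mutex and std::deque, not the Nachos classes; the transfer routine is the one excerpted above. The two transfers touch unrelated queue pairs, so nothing in the locking forces them to serialize.

#include <deque>
#include <mutex>
#include <thread>
#include <cstdio>

typedef int *Item;                        // assumed: NULL means "queue empty"

class Lock {
public:
    void Acquire() { m.lock(); }
    void Release() { m.unlock(); }
private:
    std::mutex m;
};

class Queue {
public:
    Lock lock;                            // caller-managed, as in the exam code
    void Enqueue(Item x) { items.push_back(x); }
    Item Dequeue() {
        if (items.empty()) return NULL;
        Item x = items.front();
        items.pop_front();
        return x;
    }
private:
    std::deque<Item> items;
};

// The transfer routine from the problem, unchanged.
void AtomicTransfer(Queue *queue1, Queue *queue2) {
    queue1->lock.Acquire();
    Item thing = queue1->Dequeue();
    if (thing != NULL) {
        queue2->lock.Acquire();
        queue2->Enqueue(thing);
        queue2->lock.Release();
    }
    queue1->lock.Release();
}

int main() {
    int a = 1, b = 2;
    Queue q1, q2, q3, q4;
    q1.Enqueue(&a);
    q3.Enqueue(&b);

    // Two transfers on unrelated queue pairs: they hold disjoint locks, so a
    // correct implementation lets them proceed in parallel.
    std::thread t1(AtomicTransfer, &q1, &q2);
    std::thread t2(AtomicTransfer, &q3, &q4);
    t1.join();
    t2.join();

    printf("q2 got %d, q4 got %d\n", *q2.Dequeue(), *q4.Dequeue());
    return 0;
}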
    oxygen->P();
}

void oReady() {
    pairOfHydrogen->P();
    makeWater();
    oxygen->V();
    oxygen->V();
}
(b) Another proposed solution to the "H2O" problem:
Semaphore hPresent(0);          // initially 0
Semaphore waitForWater(0);      // initially 0

void hReady() {
    hPresent->V();
    waitForWater->P();
}

void oReady() {
    hPresent->P();
    hPresent->P();
    makeWater();
    waitForWater->V();
    waitForWater->V();
}
Which provides the best average response time when there are multiple servers (bank tellers, supermarket cash registers, airline ticket takers, etc.): a single FIFO queue or a FIFO queue per server? Why? Assume that you can't predict how long any customer is going to take at the server, and that once you pick a queue to wait in, you are stuck and can't change queues.
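One way to build intuition for this question is a small Monte Carlo sketch like the one below. The arrival and service distributions, the roughly 90% utilization, and the join-the-shortest-queue rule for the per-server variant are all illustrative assumptions, not part of the question.

#include <algorithm>
#include <cstdio>
#include <deque>
#include <random>
#include <vector>

int main() {
    const int kServers = 4;
    const int kCustomers = 200000;
    std::mt19937 rng(1);
    std::exponential_distribution<double> gap(0.9 * kServers);  // ~90% utilization
    std::exponential_distribution<double> work(1.0);            // unpredictable service times

    // Use the same arrival times and service demands for both policies.
    std::vector<double> arrive(kCustomers), demand(kCustomers);
    double t = 0;
    for (int i = 0; i < kCustomers; i++) {
        t += gap(rng);
        arrive[i] = t;
        demand[i] = work(rng);
    }

    // Policy 1: one shared FIFO queue; the head customer goes to whichever
    // server frees up first.
    double totalShared = 0;
    {
        std::vector<double> freeAt(kServers, 0.0);
        for (int i = 0; i < kCustomers; i++) {
            int s = std::min_element(freeAt.begin(), freeAt.end()) - freeAt.begin();
            double done = std::max(arrive[i], freeAt[s]) + demand[i];
            freeAt[s] = done;
            totalShared += done - arrive[i];        // response time
        }
    }

    // Policy 2: one FIFO queue per server; each customer joins the queue with
    // the fewest customers still present and never switches.
    double totalSplit = 0;
    {
        std::vector<std::deque<double>> departures(kServers);  // finish times per queue
        std::vector<double> freeAt(kServers, 0.0);
        for (int i = 0; i < kCustomers; i++) {
            int best = 0;
            for (int s = 0; s < kServers; s++) {
                // Drop customers who have already left this queue.
                while (!departures[s].empty() && departures[s].front() <= arrive[i])
                    departures[s].pop_front();
                if (departures[s].size() < departures[best].size())
                    best = s;
            }
            double done = std::max(arrive[i], freeAt[best]) + demand[i];
            freeAt[best] = done;
            departures[best].push_back(done);
            totalSplit += done - arrive[i];         // response time
        }
    }

    printf("single shared queue : avg response %.3f\n", totalShared / kCustomers);
    printf("queue per server    : avg response %.3f\n", totalSplit / kCustomers);
    return 0;
}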