CS330 Spring 2007: CPU-I/O Overhead, Memory Hierarchy, Multiprogramming vs Time-sharing, S, Exams of Operating Systems

A solution to Test 1 for the CS330 Spring 2007 course. It covers various topics including CPU–I/O overhead, the memory hierarchy, multiprogramming vs. time-sharing, system calls, PCBs, threading, and scheduling. It includes explanations and formulas for calculating hit ratios and effective access times, comparisons between multiprogramming and time-sharing, and discussions of the roles of system calls, PCBs, and threading in a computer system.

Typology: Exams

Pre 2010

Uploaded on 08/09/2009

koofers-user-bgz

Solution: Test 1, CS330, Spring 2007

  1. The terminals cannot access the bus without going through the CPU. The device drivers suffer from the same problem. Accordingly, CPU-bound tasks (computations) would take considerably longer (roughly, the amount of time to complete an I/O cycle) to finish. This is not acceptable.
  2. The memory hierarchy, with four of its principal characteristics (not all independent), is outlined below.

  3. Assume 1 Mbyte = 8 × 10^6 bits. The total RAM memory cost = 10^-3 × 8 × 10^6 cents = $80. The total cache memory cost = 10^-2 × 8 × 10^6 cents = $800.

Assume, for the problem, the hit ratio to be H. The effective access time = 110 ns. This implies 110 ns = H × 100 ns + (1 − H) × 1300 ns, or 1200H = 1190. This yields H = 1190/1200 ≈ 0.9917.
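As a quick sanity check, the arithmetic in problem 3 can be reproduced in a few lines of Python. The per-bit prices of 10^-3 and 10^-2 cents are inferred from the worked numbers above; they are not restated in this excerpt.

```python
# Problem 3 sanity check: memory costs and hit ratio.
BITS_PER_MBYTE = 8 * 10**6

# Per-bit prices (in cents) inferred from the worked numbers above.
ram_cost_dollars = 1e-3 * BITS_PER_MBYTE / 100      # 8000 cents -> 80.0
cache_cost_dollars = 1e-2 * BITS_PER_MBYTE / 100    # 80000 cents -> 800.0

# Effective access time: EAT = H * t_hit + (1 - H) * t_miss.
# Solving 110 = 100*H + (1 - H)*1300 for H:
t_hit, t_miss, eat = 100, 1300, 110
H = (t_miss - eat) / (t_miss - t_hit)               # 1190/1200 ≈ 0.9917
```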

  4. Both Multiprogramming (MP) and Time-sharing (TS) systems harbor multiple programs in their own CPU queue (or ready queue); however, programs in MP each execute to completion unless (a) blocked for I/O, or (b) interrupted. Compare this with TS, where each program receives a CPU burst lasting the time slice of τ units of time, after which it either exits or goes back to the ready queue to receive more CPU bursts unless blocked for I/O or interrupted. A typical example of TS is an RR (round-robin) system. Note that as τ → ∞, a TS becomes an MP.

The objective of MP is maximization of the processor's utilization (minimization of its idle time).

[Figure for item 2: memory-hierarchy pyramid, fastest to slowest — CPU registers; L1, L2, L3 caches; core memory; hard disks, CD-ROM, DVD-ROM; web storage; archival storage. Toward the top of the pyramid: more expensive per byte, faster access time, smaller volume, higher access rate, volatile; toward the bottom: non-volatile.]

  5. The purpose of a system call is to get things done via the kernel that are not accessible from the user level. The user makes a system call; the kernel picks it up at its user–kernel interface and decides how best to respond to it without endangering system integrity and system resources. A computer system runs in user mode when a user process executes on the CPU without invoking any system call. When a user makes a system call, control is transferred to the kernel, which then responds to the user's call; in that milieu, the system is considered to be running in kernel mode. Which entity actually controls the CPU at any given time resolves the user–kernel dichotomy in favor of one over the other.
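The user-mode/kernel-mode crossing can be observed from any language; here is a minimal Python sketch using `os.write`, which issues the underlying `write` system call directly instead of buffering in user space. The string written is arbitrary.

```python
import os

# Plain arithmetic like this runs entirely in user mode.
x = 2 + 2

# os.write(fd, bytes) traps into the kernel: control transfers to the
# kernel's user-kernel interface, the kernel performs the I/O in kernel
# mode, then returns the byte count to the user process.
n = os.write(1, b"hello via write()\n")   # fd 1 is standard output
```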
  6. A PCB is a data structure associated with a process. It succinctly describes the entire profile and scope of the associated process and is prepared by the kernel when a job (or an execution request) becomes a process. It contains: PID, register values, address space, PC, priority, list of open files and sockets, child processes it has spawned, parent process, and accounting information (e.g., time it last ran, CPU time consumed, page-fault events, ...). If the PCB of a process is accidentally erased (e.g., by overwriting its registers), it cannot be reconstituted unless it was copied and stored just before the process ran, or the process is checkpointed periodically during its sojourn.
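A PCB with the fields listed above might be sketched as a simple record type. This is an illustrative Python dataclass, not the layout of any real kernel; the field names and types are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block with the fields listed above."""
    pid: int
    registers: dict = field(default_factory=dict)   # saved register values
    pc: int = 0                                     # program counter
    priority: int = 0
    address_space: tuple = (0, 0)                   # e.g. (base, limit)
    open_files: list = field(default_factory=list)  # open files and sockets
    children: list = field(default_factory=list)    # spawned child PIDs
    parent: int = 0                                 # parent PID
    cpu_time_used: float = 0.0                      # accounting information

# The kernel would build one of these when a job becomes a process:
p = PCB(pid=42, priority=5)
```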
  7. A single-threaded process is a usual process with a single locus of control that moves from task to task sequentially. A multi-threaded process is one manifesting multiple asynchronous threads, or loci of control, capable of executing multiple sets of tasks concurrently. In a system with multiple CPUs, a single-threaded process can engage at most one CPU at a time; in a multi-threaded situation, more than one CPU may be used at the same time. If the kernel itself is configured as a multithreaded kernel, a number of kernel threads may concurrently attend to multiple user threads. Kernel threads are scheduled by the kernel itself, whereas user threads are handled exclusively by the user process. The kernel always treats the user process as a single process requiring its own address space, stack, program counter, etc., even though the user may view it as logically comprising multiple threads.
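The "multiple loci of control sharing one address space" point can be sketched in a few lines of Python; the thread names and the shared dictionary are made up for illustration. (In CPython the GIL keeps these threads off two CPUs simultaneously, but the shared-address-space behavior holds.)

```python
import threading

results = {}  # one address space: both threads see the same dict

def worker(name, n):
    # Each thread is an independent locus of control within the process.
    results[name] = sum(range(n))

t1 = threading.Thread(target=worker, args=("a", 10))
t2 = threading.Thread(target=worker, args=("b", 5))
t1.start(); t2.start()
t1.join(); t2.join()
# results now holds both threads' work: {"a": 45, "b": 10}
```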
  8. A non-preemptive schedule is one in which a process (ready or running) cannot be preempted by another process by virtue of the latter's higher priority. A preemptive schedule is just the opposite. When preemption is allowed, only process priorities are considered in deciding its outcome. An interrupt event temporarily stops the currently running process; after the interrupt is serviced, the paused process can resume. This is in contrast to preemption, where the preempted process is usually sent to the back of its ready queue.
  9. In either case, the schedule time is 19 units. The average wait time in FCFS is 3.6 units, and the average wait time in RR (with time slice τ = 3) is 4.8 units. The Gantt chart for each case is shown below:
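The original process list and Gantt charts are not reproduced in this excerpt, so the stated averages (3.6 and 4.8 units) cannot be rechecked here. The following sketch shows how such wait times are computed for any burst list under the two policies from item 4; the example bursts are hypothetical, and all processes are assumed to arrive at t = 0.

```python
from collections import deque

def fcfs_waits(bursts):
    """Wait time of each process under FCFS (all arrive at t = 0)."""
    waits, t = [], 0
    for b in bursts:
        waits.append(t)   # a process waits until every earlier one finishes
        t += b
    return waits

def rr_waits(bursts, quantum):
    """Wait time of each process under round-robin (all arrive at t = 0)."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    finish = [0] * len(bursts)
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # one CPU burst of at most τ units
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # back to the ready queue
        else:
            finish[i] = t
    # wait = completion time - burst time (arrival is 0)
    return [finish[i] - bursts[i] for i in range(len(bursts))]

bursts = [3, 5, 2]                    # hypothetical CPU bursts
fcfs = fcfs_waits(bursts)             # [0, 3, 8]
rr = rr_waits(bursts, quantum=3)      # [0, 5, 6]
```

Averaging each list (`sum(w) / len(w)`) gives the per-policy figure quoted in the answer.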