CS 152: Computer Architecture and Engineering
Midterm II, Fall 1995
D. Patterson & R. Yung
You may use two pages of notes. You have 180 minutes. Please write your name on this cover sheet and also at the top left of each page. The point value of each question is indicated in brackets after it. Please show your work. Write neatly and be well organized. Good luck!
Your Name:
SID Number:
Discussion TA(s):
Problem Score
1 / 27
2 / 13
3 / 30
4 / 20
Total / 90
Question 1: Virtual Memory
An eight-entry direct-mapped TLB is implemented in the current design. Both the virtual and physical addresses are 32 bits wide, and the page size is 4 kB. Address translation is performed by this TLB for every memory access. NOTE: all addresses are given in hexadecimal.
a) Label the virtual and physical address fields used in address translation. [4 pt]
Virtual Address:   bits [31:12] = Virtual Page Number (VPN),   bits [11:0] = Offset
Physical Address:  bits [31:12] = Physical Frame Number (PFN),  bits [11:0] = Offset
Grading: -1 for answering VPN = [31:13], PFN = [12:0]; -2 for all other mistakes.
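As an illustration (not part of the original exam), here is a minimal Python sketch of the field split above; the 12-bit offset follows from the 4 kB page size, and the example address is hypothetical:

```python
PAGE_SIZE = 4 * 1024                        # 4 kB pages -> 12-bit offset
OFFSET_BITS = PAGE_SIZE.bit_length() - 1    # 12

def split_virtual_address(va):
    """Split a 32-bit virtual address into (VPN, page offset)."""
    vpn = va >> OFFSET_BITS                 # bits [31:12]
    offset = va & (PAGE_SIZE - 1)           # bits [11:0]
    return vpn, offset

# Hypothetical example: 0x00104ABC -> VPN 0x104, offset 0xABC
vpn, offset = split_virtual_address(0x00104ABC)
print(hex(vpn), hex(offset))
```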
c) Fill in the contents of the TLB after the four memory accesses. [7 pt]
Grading: 1 pt for each correct row.
Table 3: Final TLB state
Index   Virtual Page Number   Physical Frame Number   Read   Write   Valid
0       0x0                   0x3                     1      1       1
1       0x1                   0x0                     1      0       1
2       -                     -                       -      -       0
3       -                     -                       -      -       0
4       0x104                 0x1                     1      1       1
5       0x4005                0x4                     1      1       1
6       -                     -                       -      -       0
7       -                     -                       -      -       0
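For reference (not part of the original exam), a minimal sketch of how a direct-mapped, eight-entry TLB picks the entry for a given virtual page number; the indices it produces match the valid rows of Table 3:

```python
TLB_ENTRIES = 8                 # eight-entry, direct-mapped TLB

def tlb_index(vpn):
    """Direct-mapped placement: the low-order VPN bits select the entry."""
    return vpn % TLB_ENTRIES

for vpn in (0x0, 0x1, 0x104, 0x4005):
    print(f"VPN {vpn:#x} -> TLB index {tlb_index(vpn)}")
# Prints indices 0, 1, 4, 5, matching the valid entries in Table 3.
```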
Question 2: I/O
A 1993 3.5-inch IBM disk rotates at 4318 revolutions per minute (RPM), has a random seek time of 11 ms, transfers at 4 MB/s, has a capacity of 1 GB, and has a mean time to failure (MTTF) of 400,000 hours. The SCSI controller overhead is 2 ms. A 1995 3.5-inch IBM disk rotates at 7200 RPM, has a random seek time of 8 ms, transfers at 12 MB/s, has a capacity of 4.2 GB, and has an MTTF of 1,000,000 hours. The SCSI controller overhead today is 1 ms.
a) On average, how much faster is the new disk than the old disk for a read of 4 kB, assuming random seeks? Assume the disks are idle so that there is no waiting time. [4 pt]
b) If the actual seek time is 25% of the random seek time, how much faster is the new disk now? [4 pt]
c) Now assume the read size is 1 megabyte, with seeks again taking 25% of the random seek time. How much faster is the new disk? [4 pt]
d) What does this performance change in just two years suggest about the design of computer systems? [1 pt]
Grading for a), b), c): points were deducted for consistently getting the wrong speedup, leaving off the adapter overhead, leaving off the rotation time, or being off by a factor of 100.
Possible answer for d): Since disks have become 1.69 times faster for small transfers and 2.89 times faster for large transfers in the last two years, larger disk transfers will be encouraged in the future (e.g., larger page sizes).
Solutions for a), b), and c) use the access-time formula

    t = t_seek + (1/2) t_rotation + t_overhead + (transfer size) / (transfer rate)

a) 4 kB read, full random seek:

    t_OLD = 11 ms + (60 s/min × 1000 ms/s) / (2 × 4318 rev/min) + 2 ms
            + (4 kB × 1000 ms/s) / (4 MB/s × 1000 kB/MB)
          = 11 ms + 6.95 ms + 2 ms + 1.00 ms = 20.95 ms

    t_NEW = 8 ms + (60 s/min × 1000 ms/s) / (2 × 7200 rev/min) + 1 ms
            + (4 kB × 1000 ms/s) / (12 MB/s × 1000 kB/MB)
          = 8 ms + 4.17 ms + 1 ms + 0.33 ms = 13.50 ms

    Speedup = t_OLD / t_NEW = 20.95 ms / 13.50 ms = 1.55 times faster

b) 4 kB read, seek time 25% of the random seek time:

    t_OLD = 0.25 × 11 ms + 6.95 ms + 2 ms + 1.00 ms = 12.70 ms
    t_NEW = 0.25 × 8 ms + 4.17 ms + 1 ms + 0.33 ms = 7.50 ms
    Speedup = 12.70 ms / 7.50 ms = 1.69 times faster

c) 1 MB read, seek time 25% of the random seek time:

    t_OLD = 0.25 × 11 ms + 6.95 ms + 2 ms + (1 MB × 1000 ms/s) / (4 MB/s) = 261.70 ms
    t_NEW = 0.25 × 8 ms + 4.17 ms + 1 ms + (1 MB × 1000 ms/s) / (12 MB/s) = 90.50 ms
    Speedup = 261.70 ms / 90.50 ms = 2.89 times faster
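As a cross-check (not part of the original exam), a small Python sketch of the same formula; it reproduces the 20.95 / 13.50 ms, 12.70 / 7.50 ms, and 261.70 / 90.50 ms figures and the 1.55x, 1.69x, and 2.89x speedups (to rounding):

```python
def disk_time_ms(seek_ms, rpm, overhead_ms, xfer_kb, bw_mb_s, seek_fraction=1.0):
    """Average access time: seek + half a rotation + controller overhead + transfer."""
    seek = seek_fraction * seek_ms
    half_rotation = 60_000.0 / (2 * rpm)   # ms for half a revolution
    transfer = xfer_kb / bw_mb_s           # kB / (MB/s) gives ms, taking 1 MB = 1000 kB
    return seek + half_rotation + overhead_ms + transfer

cases = [("a) 4 kB, random seek", 4, 1.0),
         ("b) 4 kB, 25% seek",    4, 0.25),
         ("c) 1 MB, 25% seek", 1000, 0.25)]

for label, xfer_kb, frac in cases:
    t_old = disk_time_ms(11, 4318, 2, xfer_kb, 4, frac)    # 1993 disk
    t_new = disk_time_ms(8, 7200, 1, xfer_kb, 12, frac)    # 1995 disk
    print(f"{label}: old {t_old:.2f} ms, new {t_new:.2f} ms, "
          f"speedup {t_old / t_new:.2f}x")
```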
Question 3: Caches
b) Here is a sequence of ten one-byte memory references (in hex) to the caches: R 0x0, W 0x4, R 0x6, R 0x20, R 0x25, W 0x27, W 0x1, R 0x23, R 0x3, R 0x4. NOTE: R = read access, W = write access.
Fill in hit or miss for each memory reference for the four cache configurations, and compute the final cache miss rates. NOTE: each cache starts out empty. [22 pt]
Results for the above example, cache E, are shown in the last column.
Grading: -1/2 pt for every wrong answer.
Table 6: Results of the cache references

Reference   Cache A   Cache B   Cache C   Cache D   Cache E
0x00 R      miss      miss      miss      miss      miss
0x04 W      miss      hit       hit       miss      miss
0x06 R      miss      hit       hit       hit       miss
0x20 R      miss      miss      miss      miss      miss
0x25 R      miss      hit       hit       miss      miss
0x27 W      hit       hit       hit       hit       miss
0x01 W      miss      hit       miss      hit       miss
0x23 R      hit       hit       miss      hit       miss
0x03 R      miss      hit       miss      hit       miss
0x04 R      miss      hit       hit       hit       miss
Miss rate   80%       20%       50%       40%       100%
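Since the definitions of cache configurations A through E do not appear in this copy, here is a generic Python sketch of the bookkeeping behind Table 6; the capacity, block size, associativity, and write-allocate settings below are placeholder assumptions, so its output is not claimed to match any particular column:

```python
from collections import OrderedDict

def simulate(refs, capacity=32, block_size=4, assoc=1, write_allocate=True):
    """Run a reference stream through a small LRU cache; return outcomes and miss rate."""
    n_sets = capacity // (block_size * assoc)
    sets = [OrderedDict() for _ in range(n_sets)]      # per set: tag -> None, kept in LRU order
    outcomes = []
    for op, addr in refs:
        block = addr // block_size
        index, tag = block % n_sets, block // n_sets
        s = sets[index]
        if tag in s:
            s.move_to_end(tag)                          # refresh LRU position
            outcomes.append("hit")
        else:
            outcomes.append("miss")
            if op == "R" or write_allocate:             # allocate on read (and on write if write-allocate)
                if len(s) >= assoc:
                    s.popitem(last=False)               # evict the least recently used block
                s[tag] = None
    return outcomes, outcomes.count("miss") / len(outcomes)

refs = [("R", 0x0), ("W", 0x4), ("R", 0x6), ("R", 0x20), ("R", 0x25),
        ("W", 0x27), ("W", 0x1), ("R", 0x23), ("R", 0x3), ("R", 0x4)]
print(simulate(refs))
```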
Question 4: Pipelining
The designers are concerned about stalls in the pipeline. (Un)fortunately, the chief architect is out teaching. You are asked to examine the pipeline and recommend necessary bypasses to make the pipeline functional and to minimize pipeline stalls.
Assume a five-stage pipeline (IF, ID, EX, MEM, WB), with pipeline registers between adjacent stages. Pipeline registers are labeled T1 (IF/ID), T2 (ID/EX), T3 (EX/MEM), T4 (MEM/WB).
Fill in your bypass recommendations in Table 7. The table has a <pipeline register, opcode> pair in each column heading, and a <pipeline stage, opcode> pair for each row heading. The columns represent the source of bypasses. For example, column <T3, SUBU> is the output of EX stage latched in pipeline register T3 after executing SUBU. The rows represent the sink of the bypasses. For example, row <EX, LW> is the input to the ALU in EX stage for LW. [20 pt]
You may only forward to the components listed below, each in its pipeline stage (IF, ID, EX, MEM, WB): the PC, the Instruction Memory, the Sign Extend unit, the Comparator, the ALU, and the Data Memory. Only add bypasses when they are needed for correctness or to prevent stalls. When no forwarding is needed, say so. You will lose points for adding unnecessary bypasses or leaving entries blank!
A few entries are filled in for you as examples. Grading: -1 for every wrong box.
Table 7: Pipeline bypass
Rows are the bypass sinks, <pipeline stage, opcode>; the entries in each row are the recommended bypasses (or "No forwarding") for the corresponding <pipeline register, opcode> source columns.

<IF, BEQ>:   No forwarding; No forwarding; No forwarding; No forwarding; No forwarding
<ID, JR>:    No forwarding; T3 -> PC; No forwarding
<ID, BNEZ>:  No forwarding; T3 -> Comp.; No forwarding; T4 -> Comp.; T4 -> Comp.
<EX, JR>:    Not applicable; No forwarding; No forwarding; No forwarding; No forwarding
<EX, LW>:    Not applicable; T3 -> ALU; No forwarding
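To make the pattern behind entries like "T3 -> ALU" and "T4 -> Comp." concrete, here is a minimal Python sketch (not part of the original exam) of the usual reasoning: forward from whichever pipeline register holds the producer's result at the moment the consumer stage needs it. The "distance" framing (how many instructions ahead the producer is) and the stall case are illustrative assumptions, not the exam's notation:

```python
def bypass_source(consumer_stage, distance):
    """Pick the pipeline register to forward from, given how far ahead the producer is."""
    # (This sketch ignores load-use stalls and other opcode-specific cases.)
    if consumer_stage == "EX":                      # value consumed at the ALU inputs
        if distance == 1:
            return "T3 -> ALU"                      # producer is in MEM; result latched in T3
        if distance == 2:
            return "T4 -> ALU"                      # producer is in WB; result latched in T4
    elif consumer_stage == "ID":                    # value consumed by the Comparator or PC logic
        if distance == 1:
            return "Stall: result not yet in any pipeline register"
        if distance == 2:
            return "T3 -> Comp. (or T3 -> PC for JR)"
        if distance == 3:
            return "T4 -> Comp."
    return "No forwarding"                          # value is already back in the register file

# Example: an LW in EX needs a result produced by the instruction just ahead of it,
# matching row <EX, LW> of Table 7.
print(bypass_source("EX", 1))                       # -> "T3 -> ALU"
```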