Advanced Computer Architecture 10CS74

Question Papers Solutions

UNIT 1

1. Define Computer Architecture. Illustrate the seven dimensions of an ISA? (10 marks) (June 2012) (Dec 2012) (June 2011) (June 2013) (Dec 2013) (Jan 2014)

The computer designer has to ascertain the attributes that are important for a new computer and design the system to maximize performance while staying within cost, power, and availability constraints. The task has a few important aspects, such as instruction set design, functional organization, logic design, and implementation.

Instruction Set Architecture (ISA)
ISA refers to the actual programmer-visible instruction set. The ISA serves as the boundary between software and hardware. The seven dimensions of the ISA are:

i) Class of ISA: Nearly all ISAs today are classified as general-purpose register architectures. The operands are either registers or memory locations. The two popular versions of this class are:
Register-Memory ISAs, e.g. the ISA of the 80x86, which can access memory as part of many instructions.
Load-Store ISAs, e.g. the ISA of MIPS, which can access memory only with load or store instructions.

ii) Memory addressing: Byte addressing is the scheme most widely used in all desktop and server computers. Both the 80x86 and MIPS use byte addressing. In the case of MIPS the object must be aligned: an access to an object of s bytes at byte address A is aligned if A mod s = 0. The 80x86 does not require alignment, but accesses are faster if operands are aligned.

iii) Addressing modes: Specify the address of a memory object apart from register and constant operands.
MIPS addressing modes:

  • Register mode addressing
  • Immediate mode addressing
  • Displacement mode addressing
The 80x86, in addition to the above, supports the following additional addressing modes:
  i. Register indirect
  ii. Indexed
  iii. Based with scaled index

iv) Types and sizes of operands:
MIPS and x86 support:
  • 8-bit (ASCII character), 16-bit (Unicode character)


  • 32-bit (integer/word)
  • 64-bit (long integer/double word)
  • 32-bit (IEEE 754 floating point)
  • 64-bit (double-precision floating point)
  • The 80x86 also supports an 80-bit floating-point operand (extended double precision).

v) Operations: The general categories of operations are:
  o Data transfer
  o Arithmetic operations
  o Logic operations
  o Control operations
The MIPS ISA is simple and easy to implement; the x86 ISA has a richer and larger set of operations.

vi) Control flow instructions: All ISAs support conditional and unconditional branches, and procedure calls and returns.
                        MIPS                              80x86
  Conditional branches  test the contents of a register   test condition code bits
  Procedure call        JAL                               CALLF
  Return address        kept in a register                kept on a stack in memory

vii) Encoding an ISA:
                        Fixed-length ISA (MIPS)           Variable-length ISA (80x86)
  Instruction length    32 bits                           1-18 bytes
  Advantage             simplifies decoding               takes less space

The number of registers and the number of addressing modes have a significant impact on the length of an instruction, since the register fields and the addressing-mode field can appear many times in a single instruction.
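As a small illustration of the alignment rule from dimension (ii), the check A mod s = 0 can be coded directly; the addresses and operand sizes below are made-up values for demonstration only:

```python
def is_aligned(address: int, size: int) -> bool:
    """An s-byte object at byte address A is aligned if A mod s == 0 (the MIPS rule)."""
    return address % size == 0

# Made-up addresses and operand sizes, purely for illustration.
for address, size in [(0x1000, 4), (0x1002, 4), (0x1004, 8), (0x1008, 8)]:
    status = "aligned" if is_aligned(address, size) else "misaligned"
    print(f"{size}-byte access at 0x{address:X}: {status}")
```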

2. What is dependability? Explain the two measures of dependability? (06 marks) (May/June 2012) (Dec 2012)

Infrastructure providers offer Service Level Agreements (SLAs) or Service Level Objectives (SLOs) to guarantee that their networking or power services will be dependable.

  • Systems alternate between two states of service with respect to an SLA:
  1. Service accomplishment, where the service is delivered as specified in the SLA.
  2. Service interruption, where the delivered service is different from the SLA.
  • Failure = transition from state 1 to state 2.
  • Restoration = transition from state 2 to state 1.
The two main measures of dependability are module reliability and module availability. Module reliability is a measure of continuous service accomplishment (or, equivalently, of the time to failure) from a reference initial instant.
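Module reliability is typically quantified by the mean time to failure (MTTF), and module availability by the ratio MTTF/(MTTF + MTTR), where MTTR is the mean time to repair; this is the standard textbook relation rather than text taken from this solution. A small illustration with made-up figures:

```python
# Module availability = MTTF / (MTTF + MTTR)  -- standard dependability measure.
# MTTF = mean time to failure, MTTR = mean time to repair.
# The figures below are illustrative, not taken from any real system.
mttf_hours = 1_000_000   # assumed mean time to failure
mttr_hours = 24          # assumed mean time to repair

availability = mttf_hours / (mttf_hours + mttr_hours)
print(f"Availability = {availability:.6f}")   # about 0.999976, i.e. ~99.998% uptime
```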

A computer user says that a computer is faster when a program runs in less time. The routinely executed programs are the best candidates for evaluating the performance of a new computer. To evaluate a new system, the user would simply compare the execution time of their workloads.

Benchmarks

Real applications are the best choice of benchmarks to evaluate performance. However, in many cases the workloads will not be known at the time of evaluation. Hence, benchmark programs that resemble real applications are chosen. The three types of benchmarks are:

  • Kernels, which are small, key pieces of real applications.
  • Toy programs, which are 100-line programs from beginning programming assignments, such as Quicksort.
  • Synthetic benchmarks, which are fake programs invented to try to match the profile and behavior of real applications, such as Dhrystone.
To make the evaluation process fair, one of the following conditions on source code modification is imposed:
  • Source code modifications are not allowed.
  • Source code modifications are allowed, but are essentially impossible.
  • Source code modifications are allowed, as long as the modified version produces the same output.
  • To increase predictability, collections of benchmark applications, called benchmark suites, are popular.
  • SPEC CPU: a popular desktop benchmark suite given by the Standard Performance Evaluation Corporation (SPEC).
  • CPU only, split between integer and floating-point programs.
  • SPECint2000 has 12 integer programs; SPECfp2000 has 14 floating-point programs.
  • SPEC CPU2006 was announced in spring 2006. SPECSFS (NFS file server) and SPECWeb (web server) were added as server benchmarks.
  • The Transaction Processing Council (TPC) measures server performance and cost-performance for databases:
  • TPC-C: complex queries for online transaction processing.
  • TPC-H: models ad hoc decision support.
  • TPC-W: a transactional web benchmark.
  • TPC-App: an application server and web services benchmark.
SPEC ratio: normalize execution times to a reference computer, yielding a ratio proportional to performance:
  SPECRatio = time on reference computer / time on computer being rated
  • If the SPECRatio of a program on Computer A is 1.25 times that on Computer B, then Computer A is 1.25 times faster than Computer B on that program (equivalently, the execution time on B is 1.25 times the execution time on A).
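A short sketch of the SPECRatio relation just described; the execution times are made-up values, not published SPEC results:

```python
# SPECRatio = reference execution time / measured execution time,
# so the ratio of two machines' SPECRatios equals their performance ratio.
reference_time = 3000.0        # assumed reference-machine time (seconds)
time_a, time_b = 120.0, 150.0  # assumed times on computers A and B (seconds)

spec_ratio_a = reference_time / time_a
spec_ratio_b = reference_time / time_b

print(f"SPECRatio A = {spec_ratio_a:.1f}, SPECRatio B = {spec_ratio_b:.1f}")
print(f"A is {spec_ratio_a / spec_ratio_b:.2f}x faster than B")  # 150/120 = 1.25
```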
Quantitative Principles of Computer Design
While designing a computer, the following principles can be exploited to enhance performance.
  • Parallelism is one of the most important methods for improving performance. One of the simplest ways to exploit it is through pipelining, i.e., overlapping instruction execution to reduce the total time to complete an instruction sequence.
  • Parallelism can also be exploited at the level of detailed digital design. Set-associative caches use multiple banks of memory that are typically searched in parallel, and carry-lookahead adders use parallelism to speed up the process of computing sums.
  • Principle of locality: programs tend to reuse data and instructions they have used recently. The rule of thumb is that a program spends 90% of its execution time in only 10% of its code. With reasonably good accuracy, a prediction can be made of what instructions and data a program will use in the near future based on its accesses in the recent past.
  • Focus on the common case: when making a design trade-off, favor the frequent case over the infrequent case. This principle applies when determining how to spend resources, since the impact of an improvement is higher if the occurrence is frequent.
Amdahl's Law: Amdahl's law is used to find the performance gain that can be obtained by improving some portion or functional unit of a computer. It defines the speedup that can be gained by using a particular feature (see the formula below).
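The law itself, written in the standard textbook notation (the document describes it only in words here):

\[
\text{Speedup}_{\text{overall}}
  = \frac{\text{ExecutionTime}_{\text{old}}}{\text{ExecutionTime}_{\text{new}}}
  = \frac{1}{(1 - \text{Fraction}_{\text{enhanced}})
          + \dfrac{\text{Fraction}_{\text{enhanced}}}{\text{Speedup}_{\text{enhanced}}}}
\]

Here Fraction enhanced is the fraction of the original execution time that can use the enhancement, and Speedup enhanced is how much faster that portion runs in the enhanced mode.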

5. Explain with a learning curve how the cost of a processor varies with time, along with the factors influencing the cost. (6 marks) (June/July 2013)

1960: Large mainframes (millions of dollars); applications: business data processing, large scientific computing.
1970: Minicomputers (scientific laboratories, time-sharing concepts).
1980: Desktop computers (microprocessors) in the form of personal computers and workstations (larger memory, more computing power; replaced time-sharing systems).
1990: Emergence of the Internet and WWW, PDAs, and high-performance digital consumer electronics.
2000: Cell phones.
These changes in computer use have led to three different computing classes, each characterized by different applications, requirements, and computing technologies.
(Figure: growth in processor performance since the 1980s.)

6. Find the number of dies per 200 cm wafer of circular shape that is used to cut dies that are 1.5 cm on a side, and compare with the number of dies produced on the same wafer if the die is 1.25 cm on a side. (6 marks) (June/July 2012)

\[ \text{Cost of IC} = \frac{\text{Cost of die} + \text{Cost of testing die} + \text{Cost of packaging and final test}}{\text{Final test yield}} \]

\[ \text{Cost of die} = \frac{\text{Cost of wafer}}{\text{Dies per wafer} \times \text{Die yield}} \]

\[ \text{Dies per wafer} = \frac{\pi \times (\text{Wafer diameter}/2)^2}{\text{Die area}} \,-\, \frac{\pi \times \text{Wafer diameter}}{\sqrt{2 \times \text{Die area}}} \]
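A sketch of the dies-per-wafer calculation for question 6. The wafer size in the question reads "200 cm"; the code assumes a 20 cm (200 mm) diameter wafer, so the diameter, and hence the printed counts, should be treated as illustrative rather than as the official solution:

```python
import math

def dies_per_wafer(wafer_diameter_cm: float, die_side_cm: float) -> int:
    """Dies per wafer = pi*(d/2)^2 / die_area - pi*d / sqrt(2*die_area)."""
    die_area = die_side_cm ** 2
    gross = math.pi * (wafer_diameter_cm / 2) ** 2 / die_area
    edge_loss = math.pi * wafer_diameter_cm / math.sqrt(2 * die_area)
    return int(gross - edge_loss)

wafer_diameter = 20.0  # assumed 20 cm (200 mm) wafer diameter
for side in (1.5, 1.25):
    print(f"{side} cm die: {dies_per_wafer(wafer_diameter, side)} dies per wafer")
```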

7. Define Amdahl's law. Derive an expression for CPU time as a function of instruction count, clocks per instruction, and clock cycle time. (8 marks) (June/July 2012) (Dec 2013)

Amdahl's Law: Amdahl's law is used to find the performance gain that can be obtained by improving some portion or a functional unit of a computer. Amdahl's law defines the speedup that can be gained by using a particular feature. Speedup is the ratio of the performance for the entire task using the enhancement when possible to the performance for the entire task without using the enhancement. Execution time is the reciprocal of performance, so, alternatively, speedup is the ratio of the execution time for the entire task without using the enhancement to the execution time for the entire task using the enhancement when possible.

Speedup from some enhancement depends on two factors:
i. The fraction of the computation time in the original computer that can be converted to take advantage of the enhancement. Fraction enhanced is always less than or equal to 1. Example: if 15 seconds of the execution time of a program that takes 50 seconds in total can use an enhancement, the fraction is 15/50, or 0.3.
ii. The improvement gained by the enhanced execution mode, i.e., how much faster the task would run if the enhanced mode were used for the entire program. Speedup enhanced is the time of the original mode over the time of the enhanced mode and is always greater than 1.
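A worked sketch combining the two halves of this question: Amdahl's law applied to the 15-seconds-out-of-50 example above, and the standard CPU-time equation, CPU time = Instruction count x CPI x Clock cycle time. The enhanced-mode speedup and the instruction-count/CPI/clock-cycle figures are assumed values for illustration:

```python
# Amdahl's law applied to the example above: 15 s of a 50 s program can be enhanced.
fraction_enhanced = 15 / 50          # = 0.3, from the example in the text
speedup_enhanced = 10.0              # assumed speedup of the enhanced mode (made up)
overall_speedup = 1 / ((1 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)
print(f"Overall speedup = {overall_speedup:.3f}")    # about 1.37

# Standard CPU performance equation: CPU time = IC x CPI x clock cycle time.
instruction_count = 2_000_000        # assumed instruction count
cpi = 1.5                            # assumed average clocks per instruction
clock_cycle_time = 1e-9              # assumed 1 ns cycle (1 GHz clock)
cpu_time = instruction_count * cpi * clock_cycle_time
print(f"CPU time = {cpu_time * 1e3:.3f} ms")         # 3.000 ms
```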

8. List and explain four important technologies which have led to improvements in computer systems. (Dec 2012)

Desktop computing: The first and still the largest market in dollar terms is desktop computing. Desktop computing systems range in cost from $500 (low end) to $5000 (high-end configuration). Throughout this range in price, the desktop market tends to be driven to optimize price-performance. The performance of concern is compute performance and graphics performance. The combination of performance and price are the driving factors for customers and for the computer designer. Hence, the newest, highest-performance and most cost-effective processors often appear first in desktop computers.

Servers:

i> Given that 40% of the first application is parallelizable, how much speedup would you achieve with that application if run in isolation?
ii> Given that 99% of the second application is parallelizable, how much speedup would this application observe if run in isolation?
iii> Given that 40% of the first application is parallelizable, how much overall speedup would you observe if you parallelized it?
iv> Given that 99% of the second application is parallelizable, how much overall speedup would you get? (June 2013)
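The reproduced question does not state the number of processors, so the sketch below treats it as a parameter; the value 6 used in the demonstration is an assumption, and parts i and ii follow directly from Amdahl's law:

```python
def parallel_speedup(parallel_fraction: float, num_processors: int) -> float:
    """Amdahl's law for parallelization: the serial part runs as before,
    while the parallel part is divided among num_processors."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / num_processors)

n = 6  # assumed processor count; the original question's value is not shown here
print(f"i)  40% parallelizable: speedup = {parallel_speedup(0.40, n):.2f}")
print(f"ii) 99% parallelizable: speedup = {parallel_speedup(0.99, n):.2f}")
```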

UNIT 2

1. With a neat diagram explain the classic five-stage pipeline for a RISC processor. (10 marks) (June 2012) (Dec 2012) (June 2013)

The implementation of the RISC instruction set takes at most 5 clock cycles per instruction without pipelining. The 5 clock cycles are:
1. Instruction fetch (IF) cycle: Send the content of the program counter (PC) to memory and fetch the current instruction from memory; then update the PC.
2. Instruction decode / register fetch (ID) cycle: Decode the instruction and access the register file. Decoding is done in parallel with reading the registers, which is possible because the register specifiers are at fixed locations in a RISC architecture. This corresponds to fixed-field decoding. In addition, this cycle involves:

  • Perform an equality test on the registers as they are read, for a possible branch.
  • Sign-extend the offset field of the instruction in case it is needed.
  • Compute the possible branch target address.
3. Execution / effective address (EX) cycle: The ALU operates on the operands prepared in the previous cycle and performs one of the following functions depending on the instruction type:
  • Register-Register ALU instruction: the ALU performs the operation specified in the instruction using the values read from the register file.
  • Register-Immediate ALU instruction: the ALU performs the operation specified in the instruction using the first value read from the register file and the sign-extended immediate.
4. Memory access (MEM) cycle: For a load instruction, the memory is read using the effective address. For a store instruction, the memory writes the data from the second register read, using the effective address.
5. Write-back (WB) cycle: Write the result into the register file, whether it comes from the memory system (for a load instruction) or from the ALU.

Each instruction thus takes at most 5 clock cycles for its execution:
  • Instruction fetch cycle (IF)
  • Instruction decode / register fetch cycle (ID)
  • Execution / effective address cycle (EX)
  • Memory access (MEM)
  • Write-back cycle (WB)
The execution of an instruction comprising the above subtasks can be pipelined. Each of the clock cycles from the previous section becomes a pipe stage, a cycle in the pipeline. A new instruction can be started on each clock cycle, which results in the execution pattern shown in figure 2.1. Though each instruction takes 5 clock cycles to complete, during each clock cycle the hardware will initiate a new instruction and will be executing some part of five different instructions, as illustrated in figure 2.1.
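Figure 2.1 is not reproduced here; the short sketch below prints the kind of stage-overlap chart that figure illustrates, assuming an ideal pipeline with no stalls and one new instruction issued per cycle:

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def print_pipeline_chart(num_instructions: int) -> None:
    """Print which pipe stage each instruction occupies on each clock cycle,
    assuming an ideal pipeline: no stalls, one new instruction per cycle."""
    total_cycles = num_instructions + len(STAGES) - 1
    print("instr " + " ".join(f"C{c + 1:<3}" for c in range(total_cycles)))
    for i in range(num_instructions):
        cells = []
        for cycle in range(total_cycles):
            stage = cycle - i  # instruction i enters the pipeline on cycle i
            cells.append(f"{STAGES[stage]:<4}" if 0 <= stage < len(STAGES) else "    ")
        print(f"i{i + 1:<5}" + " ".join(cells))

print_pipeline_chart(5)  # five instructions flowing through the five-stage pipeline
```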

2. What are the major hurdles of pipelining? Illustrate the branch hazard in detail? (10 marks) (Dec 2012) (June 2012) (July 2013) (Jan 2014)

The major hurdles of pipelining are hazards: structural hazards, data hazards, and control (branch) hazards. A hazard is a situation that prevents the next instruction in the instruction stream from executing during its designated clock cycle. Hazards may cause the pipeline to stall and thus reduce pipeline performance. When an instruction is stalled, all the instructions issued later than the stalled instruction are also stalled; instructions issued earlier than the stalled instruction continue in the normal way. No new instructions are fetched during the stall.

Performance with pipeline stalls
A stall causes the pipeline performance to degrade from the ideal performance. Assume that (i) the cycle-time overhead of pipelining is ignored and (ii) the stages are balanced. With these assumptions, if all instructions take the same number of cycles, equal to the number of pipeline stages (the depth of the pipeline), then with no pipeline stalls the pipeline stall cycles per instruction is zero, and therefore Speedup = depth of the pipeline (see the sketch after the numbered list below).

When a branch is executed, it may or may not change the content of the PC. If the branch is taken, the content of the PC is changed to the target address; if the branch is not taken, the PC is not changed. The simple way of dealing with branches is to redo the fetch of the instruction following the branch. The first IF cycle is essentially a stall, because it never performs useful work. One stall cycle for every branch will yield a performance loss of 10% to 30%, depending on the branch frequency.

Reducing the branch penalties
There are many methods for dealing with the pipeline stalls caused by branch delay:

  1. Freeze or flush the pipeline, holding or deleting any instructions after the branch until the branch destination is known. It is a simple scheme, and the branch penalty is fixed and cannot be reduced by software.
  2. Treat every branch as not taken, simply allowing the hardware to continue as if the branch were not executed. Care must be taken not to change the processor state until the branch outcome is known. Instructions are fetched as if the branch were a normal instruction. If the branch is taken, it is necessary to turn the fetched instruction into a no-op instruction and restart the fetch at the target address. Figure 2.8 shows the timing diagram of both situations.
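As a rough numerical sketch of the claims above, the relation Speedup = pipeline depth / (1 + stall cycles per instruction) can be evaluated with an assumed branch frequency; the 30% frequency and one-cycle penalty below are illustrative values, not figures from the solution:

```python
# Ideal pipeline speedup equals the pipeline depth; stalls reduce it to
# speedup = depth / (1 + stall cycles per instruction).
pipeline_depth = 5
branch_frequency = 0.30   # assumed fraction of instructions that are branches
branch_penalty = 1        # one stall cycle per branch, as in the simple scheme above

stalls_per_instruction = branch_frequency * branch_penalty
speedup = pipeline_depth / (1 + stalls_per_instruction)
print(f"Ideal speedup = {pipeline_depth}, with branch stalls = {speedup:.2f}")
# With a 30% branch frequency the speedup drops from 5 to about 3.85 (~23% loss),
# consistent with the 10%-30% performance-loss range mentioned above.
```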

3. With a neat diagram explain the classic five-stage pipeline for a RISC processor. (10 marks) (Dec 2012) (June 2013)

Refer to the answer to Question 1 of this unit: the classic five-stage RISC pipeline with the IF, ID, EX, MEM, and WB cycles.


4. Explain how pipelining is implemented in MIPS. (6 marks) (June 2012)

The MIPS instruction set implementation takes at most 5 clock cycles per instruction without pipelining, using the same five cycles described in the answer to Question 1 of this unit:
  • Instruction fetch cycle (IF)
  • Instruction decode / register fetch cycle (ID)
  • Execution / effective address cycle (EX)
  • Memory access (MEM)
  • Write-back cycle (WB)
The execution of an instruction comprising these subtasks can be pipelined: each of the clock cycles becomes a pipe stage, a cycle in the pipeline. A new instruction can be started on each clock cycle, which results in the execution pattern shown in figure 2.1. Though each instruction takes 5 clock cycles to complete, during each clock cycle the hardware initiates a new instruction and executes some part of five different instructions, as illustrated in figure 2.1.