
Chapter 2
Program and Network Properties

This chapter covers fundamental properties of program behavior and introduces major classes of interconnection networks. We begin with a study of computational granularity, conditions for program partitioning, matching software with hardware, program flow mechanisms, and compilation support for parallelism. Interconnection architectures introduced include static and dynamic networks. Network complexity, communication bandwidth, and data-routing capabilities are discussed.

2.1 Conditions of Parallelism

The exploitation of parallelism has created a new dimension in computer science. In order to move parallel processing into the mainstream of computing, H.T. Kung (1991) identified the need for significant progress in three key areas: computation models for parallel computing, interprocessor communication in parallel architectures, and system integration for incorporating parallel systems into general computing environments.

A theoretical treatment of parallelism is thus needed to build a basis for the above challenges. In practice, parallelism appears in various forms in a computing environment. All forms can be attributed to levels of parallelism, computational granularity, time and space complexities, communication latencies, scheduling policies, and load balancing. Very often, tradeoffs exist among time, space, performance, and cost factors.

2.1.1 Data and Resource Dependences

The ability to execute several program segments in parallel requires each segment to be independent of the other segments. The independence comes in various forms, as defined separately below. For simplicity, we illustrate the idea by considering the dependence relations among instructions in a program; in general, each code segment may contain one or more statements. We use a dependence graph to describe these relations. The nodes of a dependence graph correspond to the program statements (instructions), and the directed edges represent the dependence relations among them.
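
The dependence-graph idea can be made concrete with a small program. What follows is a minimal Python sketch, not from the text: each statement is modeled only by the sets of variables it defines and uses, and directed edges are added between earlier and later statements when they touch the same variable (a read-after-write, write-after-read, or write-after-write conflict). The function name and representation are illustrative assumptions.

```python
def build_dependence_graph(statements):
    """statements: list of (defs, uses) set pairs, in program order.
    Returns edges (i, j, kind) with i < j, where kind labels the
    conflict: 'flow' (RAW), 'anti' (WAR), or 'output' (WAW)."""
    edges = []
    for i, (defs_i, uses_i) in enumerate(statements):
        for j in range(i + 1, len(statements)):
            defs_j, uses_j = statements[j]
            if defs_i & uses_j:        # later statement reads what i wrote
                edges.append((i, j, "flow"))
            if uses_i & defs_j:        # later statement overwrites what i read
                edges.append((i, j, "anti"))
            if defs_i & defs_j:        # both statements write the same variable
                edges.append((i, j, "output"))
    return edges

# Example program:
#   S1: a = b + c
#   S2: d = a * 2
#   S3: a = e - 1
prog = [({"a"}, {"b", "c"}),
        ({"d"}, {"a"}),
        ({"a"}, {"e"})]
print(build_dependence_graph(prog))
# → [(0, 1, 'flow'), (0, 2, 'output'), (1, 2, 'anti')]
```

Here S2 must follow S1 (it reads a), S3 must follow S2 (it overwrites the a that S2 reads), and S3 must follow S1 (both write a), so no pair of these statements can execute in parallel.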