























Material Type: Notes; Professor: Bosworth; Class: Intro to Information Tech; Subject: Computer Science; University: Columbus State University; Term: Unknown 2007;
Computing machines are very common in a modern industrialized society. The number of functions performed by these devices is almost endless. Here is a partial list.
This course will focus on general-purpose computers, also called “Stored Program Computers” or “von Neumann Machines”. In a stored program computer, a program and its starting data are read into the primary memory of the computer and then executed. Early computers had no memory in which programs could be stored. The first stored program computer designed was the EDVAC (“Electronic Discrete Variable Automatic Computer”), designed by John von Neumann (hence the name), John Mauchly, and J. Presper Eckert. It was described in a paper published on June 30, 1945, with von Neumann as the sole author. The first stored program computer to become operational was the EDSAC (“Electronic Delay Storage Automatic Computer”), completed May 6, 1949. This was developed by Maurice Wilkes of Cambridge University in England. The first stored program computer that contained all of the components of a modern computer was the MIT Whirlwind, first demonstrated on April 20, 1951.
The system memory (of which this computer has 512 MB) is used for transient storage of programs and data. This is accessed much like an array, with the memory address serving the function of an array index. The Input / Output system (I/O System) is used for the computer to save data and programs and for it to accept input data and communicate output data. Technically the hard drive is an I/O device. The Central Processing Unit (CPU) handles execution of the program. It has four main components:
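The idea that memory is accessed like an array, with the address playing the role of the index, can be sketched in a few lines of code. This is a minimal illustration, not a model of any real machine; the sizes and function names are invented here.

```python
# A minimal sketch of main memory as an array indexed by address.
# The size and the helper names are illustrative only.
MEM_SIZE = 1024          # a tiny 1 KB memory for illustration
memory = [0] * MEM_SIZE  # each "box" holds one value

def write(address, value):
    memory[address] = value   # place data into memory at the address

def read(address):
    return memory[address]    # read the data stored at the address

write(100, 42)
print(read(100))  # 42
```

The CPU does exactly this, except that the "index" travels over the memory bus as a binary address.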
The design on the previous slide is logically correct, but IT WON’T WORK. IT IS TOO SLOW. Problem: A single system level bus cannot handle the load. Modern gamers demand fast video; this requires a fast bus to the video chip. The memory system is always a performance bottleneck. We need a dedicated memory bus in order to allow acceptable performance. Here is a refinement of the above diagram. This design is getting closer to reality. At least, it acknowledges two of the devices requiring high data rates in access to the CPU.
The memory stores the instructions and data for an executing program. The memory can be imagined as a collection of “boxes”, each referenced by an address. The address allows the CPU to read data from memory and place data into memory. The CPU has two registers dedicated to handling memory. The MAR (Memory Address Register) holds the address being accessed. The MBR (Memory Buffer Register) holds the data being written to the memory or being read from the memory. This is sometimes called the Memory Data Register. READ sequence: 1. Place the address into the MAR and command a memory read. 2. The memory places the contents of the addressed location into the MBR.
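The READ and WRITE sequences above can be modeled as a small class in which the MAR and MBR are ordinary attributes. This is a toy sketch; the class name and memory size are invented for illustration.

```python
# A toy model of the MAR/MBR protocol described above.
class Memory:
    def __init__(self, size=256):
        self.cells = [0] * size
        self.MAR = 0   # Memory Address Register: holds the address
        self.MBR = 0   # Memory Buffer (Data) Register: holds the data

    def read(self):
        # The address is already in the MAR; the memory responds
        # by placing the addressed value into the MBR.
        self.MBR = self.cells[self.MAR]

    def write(self):
        # The MAR holds the address; the MBR holds the data to store.
        self.cells[self.MAR] = self.MBR

mem = Memory()
mem.MAR, mem.MBR = 10, 99
mem.write()        # store 99 at address 10
mem.MAR = 10
mem.read()
print(mem.MBR)     # 99
```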
What we want is a very large memory, in which each memory element is fabricated from very fast components. But fast means expensive. What we can afford is a very large memory, in which each memory element is fabricated from moderately fast, but inexpensive, components. Modern computers achieve good performance from a large, moderately fast, main memory by using two levels of cache memories , called L1 and L2. These work due to an observed property of programs, called the locality principle. A typical arrangement would have a large L2 cache and a split L1 cache. The L1 cache has an Instruction Cache and a Data Cache. Note that the Instruction Cache (I Cache) does not write back to the L2 cache.
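The locality principle can be seen in a simple simulation: programs tend to reuse the same addresses (temporal locality) and nearby addresses (spatial locality), so a small cache of recently used blocks catches most accesses. The simulation below is a rough sketch with an invented block size and cache size, not a model of a real L1 or L2 cache.

```python
# A sketch of the locality principle: cache recently used blocks
# and count how often a memory access hits the cache.
from collections import OrderedDict

def simulate(addresses, cache_size=8):
    cache, hits = OrderedDict(), 0
    for addr in addresses:
        block = addr // 4             # spatial locality: 4 cells per block
        if block in cache:
            hits += 1
            cache.move_to_end(block)  # mark block as recently used
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used
    return hits / len(addresses)

# A loop sweeping a small array over and over shows high locality:
trace = [i for _ in range(10) for i in range(16)]
print(simulate(trace))  # 0.975 -- nearly every access is a hit
```

Only the first sweep misses; every later sweep finds its blocks already cached, which is exactly why a small, fast cache in front of a large, slower memory pays off.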
Also called “core memory”, “store”, or “storage”. Beginning with the MIT Whirlwind and continuing for about 30 years, the basic technology for primary memory involved “cores” of magnetic material.
All modern computer systems use virtual memory. At various times in the course, we shall give a precise definition, but here is the common setup. In MS–Windows, the area of the system disk that handles virtual memory is called the paging file. My system has a 768 MB paging file.
Modern computers, such as the P4, have placed both L1 caches and the L2 cache on the CPU chip itself. Here is a picture of the P4 chip, annotated by Intel. In older computers, the main difference between CPU registers and memory was that the registers were on the chip and memory was not. This no longer holds.
The ALU performs all of the arithmetic and logical operations for the CPU. These include the following: Arithmetic: addition, subtraction, negation, etc. Logical: AND, OR, NOT, Exclusive OR, etc. This symbol has been used for the ALU since the mid-1950s. It shows two inputs and one output. The reason for two inputs is the fact that many operations, such as addition and logical AND, are dyadic; that is, they take two input arguments.
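A dyadic ALU can be pictured as a function of two inputs whose behavior is selected by an operation code. The sketch below is illustrative; the operation names are invented here, not taken from any instruction set.

```python
# A sketch of a dyadic ALU: two inputs, one output, with the
# operation selected by a code. Operation names are illustrative.
def alu(op, a, b):
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    if op == "AND":
        return a & b       # bitwise logical AND
    if op == "OR":
        return a | b       # bitwise logical OR
    if op == "XOR":
        return a ^ b       # bitwise exclusive OR
    raise ValueError("unknown operation: " + op)

print(alu("ADD", 5, 3))              # 8
print(alu("AND", 0b1100, 0b1010))    # 8, i.e., binary 1000
```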
This cycle is the logical basis of all stored program computers. Instructions are stored in memory as machine language. Instructions are fetched from memory and then executed. The common fetch cycle can be expressed in the following control sequence. MAR ← PC. // The PC contains the address of the instruction. READ. // Read memory at that address into the MBR. IR ← MBR. // Place the instruction into the IR. This cycle is described in many different ways, most of which serve to highlight additional steps required to execute the instruction. Examples of additional steps are: Decode the Instruction, Fetch the Arguments, Store the Result, etc. A stored program computer is often called a “von Neumann Machine” after one of the originators of the EDVAC. This Fetch–Execute cycle is often called the “von Neumann bottleneck”, as the necessity for fetching every instruction from memory slows the computer.
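The control sequence above (MAR ← PC; READ; IR ← MBR; then decode and execute) can be traced in a toy interpreter. The two-operation instruction format here is invented purely for illustration.

```python
# A toy fetch-execute loop following the control sequence above.
# The instruction format ("ADD", value) is invented for illustration:
# each instruction adds to or subtracts from an accumulator.
def run(program):
    memory = list(program)   # instructions stored in memory
    PC, ACC = 0, 0
    while PC < len(memory):
        MAR = PC             # MAR <- PC
        MBR = memory[MAR]    # READ: memory -> MBR
        IR = MBR             # IR <- MBR
        PC += 1              # PC now points at the next instruction
        op, arg = IR         # decode the instruction, then execute
        if op == "ADD":
            ACC += arg
        elif op == "SUB":
            ACC -= arg
    return ACC

print(run([("ADD", 5), ("ADD", 3), ("SUB", 2)]))  # 6
```

Note that every iteration goes back to memory for the next instruction; that round trip is the von Neumann bottleneck.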
The most convenient definition of the term “byte” is that it is the smallest unit of memory that can store a character: letter, digit, punctuation mark, etc. In sizing internal memory, the multiples of bytes represent powers of two. This is due to the use of binary numbers in addressing memory. 1 KB = 2^10 bytes = 1,024 bytes. 1 MB = 2^20 bytes = 1,048,576 bytes. 1 GB = 2^30 bytes = 1,073,741,824 bytes. Commercial use adopts powers of ten for disk sizes: 1 GB = 10^9 bytes = 1,000,000,000 bytes; 1 TB = 10^12 bytes = 1,000,000,000,000 bytes. Experiment: 1. Go to Explorer and right click on the C: drive.
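The gap between the binary and decimal units is what the Experiment reveals: a drive marketed in decimal gigabytes reports a smaller number when the operating system measures it in binary gigabytes. A quick check of the arithmetic (the 500 GB figure is just an example):

```python
# Binary vs. decimal size units.
KB, MB, GB = 2**10, 2**20, 2**30
print(KB, MB, GB)                 # 1024 1048576 1073741824

# A drive sold as "500 GB" (powers of ten) holds:
marketed = 500 * 10**9
# Measured in binary gigabytes, as Explorer reports it:
print(round(marketed / GB, 1))    # 465.7
```

So roughly 7% of the advertised capacity "disappears" purely because of the change of units.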
Here are two pictures. The one on the left shows the internal workings of a drive. The picture on the right shows an external disk drive that is connected to the computer via a USB port. I bought a 512 GB external drive for $150.
The typical large-capacity (and physically small) disk drive has a number of glass platters with a magnetic coating. These spin at a high rate (7,200 rpm, or 120 revolutions per second). This drawing shows a disk with three platters and six surfaces. In general, a disk drive with N platters will have 2N surfaces, the top and bottom of each platter. On early disk drives, before the introduction of sealed drives, the top and bottom surfaces would not be used because they would become dirty.
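The figures above lead to some simple disk arithmetic. At 7,200 rpm one revolution takes 1/120 of a second, and on average a request must wait half a revolution for its data to come under the head (the average rotational latency). A quick sketch of the numbers:

```python
# Simple disk-geometry arithmetic from the figures above.
rpm = 7200
revs_per_second = rpm / 60
print(revs_per_second)                  # 120.0

rotation_ms = 1000 / revs_per_second    # one full revolution, in ms
avg_latency_ms = rotation_ms / 2        # average wait: half a revolution
print(round(avg_latency_ms, 2))         # 4.17 ms

platters = 3
print(2 * platters)                     # 6 surfaces: the 2N rule
```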