Fundamentals of Electrical Engineering I
Rice University, Houston, Texas
©2016 Don Johnson
This selection and arrangement of content is licensed under the Creative Commons Attribution License: http://creativecommons.org/licenses/by/1.
1.1 Themes
From its beginnings in the late nineteenth century, electrical engineering has blossomed from focusing on electrical circuits for power, telegraphy, and telephony to focusing on a much broader range of disciplines. The underlying themes, however, are as relevant today as ever: power creation and transmission, and information, have been the twin themes of electrical engineering for a century and a half. This course concentrates on the latter theme: the representation, manipulation, transmission, and reception of information by electrical means. This course describes what information is, how engineers quantify information, and how electrical signals represent information.

Information can take a variety of forms. When you speak to a friend, your thoughts are translated by your brain into motor commands that cause various vocal tract components—the jaw, the tongue, the lips—to move in a coordinated fashion. Information arises in your thoughts and is represented by speech, which must have a well-defined, broadly known structure so that someone else can understand what you say. Utterances convey information in sound pressure waves, which propagate to your friend’s ear. There, sound energy is converted back to neural activity, and, if what you say makes sense, she understands what you say. Your words could have been recorded on a compact disc (CD), mailed to your friend, and listened to by her on her stereo. Information can also take the form of a text file you type into your word processor. You might send the file via e-mail to a friend, who reads it and understands it. From an information-theoretic viewpoint, all of these scenarios are equivalent, although the forms of the information representation—sound waves, plastic, and computer files—are very different.

Engineers, who don’t care about information content, categorize information into two different forms: analog and digital. Analog information is continuous-valued; examples are audio and video. Digital information is discrete-valued; examples are text (like what you are reading now) and DNA sequences.

The conversion of information-bearing signals from one energy form into another is known as energy conversion or transduction. All conversion systems are inefficient since some input energy is lost as heat, but this loss does not necessarily mean that the conveyed information is lost. Conceptually we could use any form of energy to represent information, but electric signals are uniquely well-suited for information representation, transmission (signals can be broadcast from antennas or sent through wires), and manipulation (circuits can be built to reduce noise and computers can be used to modify information). Thus, we will be concerned with how to represent all forms of information with electrical signals, and with how to manipulate, transmit, and receive such information-bearing signals.
Telegraphy represents the earliest electrical information system, and it dates from 1837. At that time, electrical science was largely empirical, and only those with experience and intuition could develop telegraph systems. Electrical science came of age when James Clerk Maxwell^2 proclaimed in 1864 a set of equations that he claimed governed all electrical phenomena. These equations predicted that light was an electromagnetic wave, and that electromagnetic energy could propagate. Because of the complexity of Maxwell’s presentation, however, the development of the telephone in 1876 was due largely to empirical work. Once Heinrich Hertz confirmed Maxwell’s prediction of what we now call radio waves in the late 1880s, Maxwell’s equations were simplified by Oliver Heaviside and others, and were widely read. This understanding of fundamentals led to a quick succession of inventions—the wireless telegraph (1899), the vacuum tube (1905), and radio broadcasting—that marked the true emergence of the communications age.

During the first part of the twentieth century, circuit theory and electromagnetic theory were all an electrical engineer needed to know to be qualified and to produce first-rate designs. Consequently, circuit theory served as the foundation and the framework of all of electrical engineering education. At mid-century, three “inventions” changed the ground rules: the first public demonstration of the first electronic computer (1946), the invention of the transistor (1947), and the publication of A Mathematical Theory of Communication by Claude Shannon (1948). Although conceived separately, these creations gave birth to the information age, in which digital and analog communication systems interact and compete for design preferences. About twenty years later, the laser was invented, which opened even more design possibilities.

Thus, the primary focus shifted from how to build communication systems (the circuit-theory era) to what communication systems were intended to accomplish. Only once the intended system is specified can an implementation be selected. Today’s electrical engineer must be mindful of the system’s ultimate goal, and must understand the tradeoffs between digital and analog alternatives, and between hardware and software configurations, in designing information systems.
note: Thanks to the translation efforts of Rice University’s Disability Support Services^3 , this collection is now available in a Braille-printable version. Please click here^4 to download a .zip file containing all the necessary .dxb and image files.
1.2 Signals Represent Information^5
Whether analog or digital, information is represented by the fundamental quantity in electrical engineering: the signal. Stated in mathematical terms, a signal is merely a function. Analog signals are continuous-valued; digital signals are discrete-valued. The independent variable of the signal could be time (speech, for example), space (images), or the integers (denoting the sequencing of letters and numbers in the football score).
Analog signals are usually signals defined over continuous independent variable(s). Speech, as described in Section 4.10, is produced by your vocal cords exciting acoustic resonances in your vocal tract. The result is pressure waves propagating in the air, and the speech signal thus corresponds to a function having independent variables of space and time and a value corresponding to air pressure: s(x, t) (here we use vector notation x to denote spatial coordinates). When you record someone talking, you are evaluating the speech signal at a particular spatial location, x0 say. An example of the resulting waveform s(x0, t) is shown in Figure 1.1. Photographs are static, and are continuous-valued signals defined over space. Black-and-white images have only one value at each point in space, which amounts to the image’s optical reflection properties. Figure 1.2 shows an image, demonstrating that it (and all other images as well) is a function of two independent spatial variables.
(^2) http://www-groups.dcs.st-andrews.ac.uk/~history/Biographies/Maxwell.html (^3) http://www.dss.rice.edu/ (^4) http://cnx.org/content/m0000/latest/FundElecEngBraille.zip (^5) This content is available online at http://cnx.org/content/m0001/2.27/.
00 nul  01 soh  02 stx  03 etx  04 eot  05 enq  06 ack  07 bel
08 bs   09 ht   0A nl   0B vt   0C np   0D cr   0E so   0F si
10 dle  11 dc1  12 dc2  13 dc3  14 dc4  15 nak  16 syn  17 etb
18 can  19 em   1A sub  1B esc  1C fs   1D gs   1E rs   1F us
20 sp   21 !    22 "    23 #    24 $    25 %    26 &    27 ’
28 (    29 )    2A *    2B +    2C ,    2D -    2E .    2F /
30 0    31 1    32 2    33 3    34 4    35 5    36 6    37 7
38 8    39 9    3A :    3B ;    3C <    3D =    3E >    3F ?
40 @    41 A    42 B    43 C    44 D    45 E    46 F    47 G
48 H    49 I    4A J    4B K    4C L    4D M    4E N    4F O
50 P    51 Q    52 R    53 S    54 T    55 U    56 V    57 W
58 X    59 Y    5A Z    5B [    5C \    5D ]    5E ^    5F _
60 `    61 a    62 b    63 c    64 d    65 e    66 f    67 g
68 h    69 i    6A j    6B k    6C l    6D m    6E n    6F o
70 p    71 q    72 r    73 s    74 t    75 u    76 v    77 w
78 x    79 y    7A z    7B {    7C |    7D }    7E ~    7F del
Table 1.1: The ASCII translation table shows how standard keyboard characters are represented by integers. In pairs of columns, this table displays first the so-called 7-bit code (how many characters in a seven-bit code?), then the character the number represents. The numeric codes are represented in hexadecimal (base-16) notation. Mnemonic characters correspond to control characters, some of which may be familiar (like cr for carriage return) and some not (bel means a “bell”).
Color images have three values at every point in space, but a different set of colors is used: how much red, green, and blue is present. Mathematically, color pictures are multivalued—vector-valued—signals: s(x) = (r(x), g(x), b(x))^T. Interesting cases abound where the analog signal depends not on a continuous variable, such as time, but on a discrete variable. For example, temperature readings taken every hour have continuous—analog—values, but the signal’s independent variable is (essentially) the integers.
The word “digital” means discrete-valued and implies the signal depends on the integers rather than a continuous variable. Digital information includes numbers and symbols (characters typed on the keyboard, for example). Computers rely on the digital representation of information to manipulate and transform information. Symbols do not have a numeric value; each is typically represented by a unique number, but performing arithmetic with these representations makes no sense. The ASCII character code shown in Table 1.1 has the upper- and lowercase characters, the numbers, punctuation marks, and various other symbols represented by a seven-bit integer. For example, the ASCII code represents the letter a as the number 97 and the letter A as 65.
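To make the distinction concrete, a few lines of Python (not part of the original text) print the codes a keyboard character maps to; the seven-bit patterns match Table 1.1.

    # Print each character's ASCII code in decimal, hexadecimal,
    # and as the 7-bit pattern listed in Table 1.1.
    for ch in "aA0?":
        code = ord(ch)                      # the integer representing the character
        print(ch, code, format(code, "02X"), format(code, "07b"))
    # 'a' -> 97 (0x61) and 'A' -> 65 (0x41): these integers merely label
    # the symbols, so arithmetic on them carries no meaning.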
1.3 Structure of Communication Systems^6
The fundamental model of communications is portrayed in Figure 1.3 (Fundamental model of communication). In this fundamental model, each message-bearing signal, exemplified by s(t), is analog and is a function of time. A system operates on zero, one, or several signals to produce more signals or to simply absorb them (Figure 1.4). In electrical engineering, we represent a system as a box, receiving input signals (usually coming from the left) and producing from them new output signals. This graphical representation is known as a block diagram. We denote input signals by lines having arrows pointing into the box, and output signals by arrows pointing away. As typified by the communications model, how information flows, how it is corrupted and manipulated, and how it is ultimately received is summarized by interconnecting block diagrams: the outputs of one or more systems serve as the inputs to others.
(^6) This content is available online at http://cnx.org/content/m0002/2.17/.
Figure 1.3: The Fundamental Model of Communication.
Figure 1.4: A system operates on its input signal x (t) to produce an output y (t).
In the communications model, the source produces a signal that will be absorbed by the sink. Examples of time-domain signals produced by a source are music, speech, and characters typed on a keyboard. Signals can also be functions of two variables—an image is a signal that depends on two spatial variables—or more—television pictures (video signals) are functions of two spatial variables and time. Thus, information sources produce signals. In physical systems, each signal corresponds to an electrical voltage or current. To be able to design systems, we must understand electrical science and technology. However, we first need to understand the big picture to appreciate the context in which the electrical engineer works.

In communication systems, messages—signals produced by sources—must be recast for transmission. The block diagram has the message s(t) passing through a block labeled transmitter that produces the signal x(t). In the case of a radio transmitter, it accepts an input audio signal and produces a signal that physically is an electromagnetic wave radiated by an antenna and propagating as Maxwell’s equations predict. In the case of a computer network, typed characters are encapsulated in packets, attached with a destination address, and launched into the Internet. From the communication systems “big picture” perspective, the same block diagram applies although the systems can be very different. In any case, the transmitter should not operate in such a way that the message s(t) cannot be recovered from x(t). In the mathematical sense, the inverse system must exist, else the communication system cannot be considered reliable. (It is ridiculous to transmit a signal in such a way that no one can recover the original. However, clever systems exist that transmit signals so that only the “in crowd” can recover them. Such cryptographic systems underlie secret communications.)

Transmitted signals next pass through the evil channel. Nothing good happens to a signal in a channel: it can become corrupted by noise, distorted, and attenuated, among many possibilities. The channel cannot be escaped (the real world is cruel), and transmitter design and receiver design focus on how best to jointly fend off the channel’s effects on signals. The channel is another system in our block diagram, and produces r(t), the signal received by the receiver. If the channel were benign (good luck finding such a channel in the real world), the receiver would serve as the inverse system to the transmitter, and yield the message with no distortion. However, because of the channel, the receiver must do its best to produce a received message ŝ(t) that resembles s(t) as much as possible. Shannon^7 showed in his 1948 paper that reliable—for the moment, take this word to mean error-free—digital communication was possible over arbitrarily noisy channels. It is this result that modern communications systems exploit, and why many communications systems are going “digital.” Chapter 6, Information Communication, details Shannon’s theory of information, and there we learn of Shannon’s result and how to use it. Finally, the received message is passed to the information sink that somehow makes use of the message.
(^7) http://www-gap.dcs.st-and.ac.uk/~history/Biographies/Shannon.html
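The block diagram translates directly into a few lines of code. The sketch below (Python with NumPy; the gain, attenuation, and noise values are illustrative assumptions, not taken from the text) sends a message through an additive-noise channel and applies the transmitter’s inverse at the receiver.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1000)              # one second of "time"
    s = np.sin(2 * np.pi * 5 * t)                # message s(t): a 5 Hz tone
    x = 2.0 * s                                  # transmitter: a simple gain
    r = 0.5 * x + 0.05 * rng.standard_normal(t.size)  # channel: attenuation + noise
    s_hat = r / (2.0 * 0.5)                      # receiver: invert the known gains
    print("rms error:", np.sqrt(np.mean((s_hat - s) ** 2)))

Because of the noise, ŝ(t) only resembles s(t); no receiver can undo the channel exactly.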
To send the daily temperature, for example, we would keep the frequency constant (so the receiver would know what to expect) and change the amplitude at midnight. We could relate temperature to amplitude by the formula A = A0 (1 + kT), where A0 and k are constants that the transmitter and receiver must both know. If we had two numbers we wanted to send at the same time, we could modulate the sinusoid’s frequency as well as its amplitude. This modulation scheme assumes we can estimate the sinusoid’s amplitude and frequency; we shall learn that this is indeed possible. Now suppose we have a sequence of parameters to send. We have exploited both of the sinusoid’s two parameters. What we can do is modulate them for a limited time (say T seconds), and send two parameters every T seconds. This simple notion corresponds to how a modem works. Here, typed characters are encoded into eight bits, and the individual bits are encoded into a sinusoid’s amplitude and frequency. We’ll learn how this is done in subsequent modules, and more importantly, we’ll learn what the limits are on such digital communication schemes.
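As a sketch of this idea (Python/NumPy; the constants A0, k, f0 and the temperature readings are hypothetical values chosen for illustration), the snippet below builds a signal whose amplitude in each interval encodes one reading via A = A0 (1 + kT).

    import numpy as np

    A0, k = 1.0, 0.01               # amplitude constants known to both ends (assumed)
    f0, T, fs = 10.0, 1.0, 1000.0   # carrier frequency, interval length, sample rate
    temps = [18.0, 21.5, 25.0]      # one temperature sent per T-second interval
    t = np.arange(0.0, T, 1.0 / fs)
    x = np.concatenate(
        [A0 * (1 + k * temp) * np.sin(2 * np.pi * f0 * t) for temp in temps]
    )
    # A receiver that estimates the amplitude A of each interval recovers
    # the reading as temp = (A / A0 - 1) / k.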
1.5 Problems^9
Problem 1.1: RMS Values The rms (root-mean-square) value of a periodic signal is defined to be
rms[s] = √( (1/T) ∫₀^T s²(t) dt )
where T is defined to be the signal’s period: the smallest positive number such that s (t) = s (t + T ).
(a) What is the period of s(t) = A sin(2πf0 t + φ)? (b) What is the rms value of this signal? How is it related to the peak value? (c) What are the period and rms value of the depicted (Figure 1.5) square wave, generically denoted by sq(t)? (d) By inspecting any device you plug into a wall socket, you’ll see that it is labeled “110 volts AC.” What is the expression for the voltage provided by a wall socket? What is its rms value?
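A numerical check of the rms definition can guide your answer to part (b). In this Python sketch, the amplitude, frequency, and phase are arbitrary choices; averaging s²(t) over exactly one period gives the same answer regardless of them.

    import numpy as np

    A, f0, phi = 2.0, 50.0, 0.3
    T = 1.0 / f0                                     # the period of the sinusoid
    t = np.linspace(0.0, T, 100000, endpoint=False)  # samples spanning one period
    s = A * np.sin(2 * np.pi * f0 * t + phi)
    print(np.sqrt(np.mean(s ** 2)))                  # compare against A / sqrt(2)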
Problem 1.2: Modems
The word “modem” is short for “modulator-demodulator.” Modems are used not only for connecting computers to telephone lines, but also for connecting digital (discrete-valued) sources to generic channels. In this problem, we explore a simple kind of modem, in which binary information is represented by the presence or absence of a sinusoid (presence representing a “1” and absence a “0”). Consequently, the modem’s transmitted signal that represents a single bit has the form
x(t) = A sin(2πf0 t), 0 ≤ t ≤ T
Within each bit interval T, the amplitude is either A or zero.
(^9) This content is available online at http://cnx.org/content/m10353/2.17/.
(a) What is the smallest transmission interval that makes sense with the frequency f0? (b) Assuming that ten cycles of the sinusoid comprise a single bit’s transmission interval, what is the datarate of this transmission scheme? (c) Now suppose that instead of using “on-off” signaling, we allow one of several different values for the amplitude during any transmission interval. If N amplitude values are used, what is the resulting datarate? (d) The classic communications block diagram applies to the modem. Discuss how the transmitter must interface with the message source, since the source is producing letters of the alphabet, not bits.
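To experiment with this on-off scheme, here is a short sketch (Python/NumPy; the bit pattern, carrier frequency, and sample rate are arbitrary choices) that uses ten carrier cycles per bit, as in part (b).

    import numpy as np

    bits = [1, 0, 1, 1, 0]
    f0, fs = 1000.0, 100000.0          # carrier frequency and sample rate (assumed)
    T = 10.0 / f0                      # bit interval: ten cycles of the carrier
    t = np.arange(0.0, T, 1.0 / fs)
    x = np.concatenate([b * np.sin(2 * np.pi * f0 * t) for b in bits])
    print("datarate:", 1.0 / T, "bits/s")   # 100 bits/s with these numbers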
Problem 1.3: Advanced Modems To transmit symbols, such as letters of the alphabet, RU computer modems use two frequencies ( and 1800 Hz) and several amplitude levels. A transmission is sent for a period of time T (known as the transmission or baud interval) and equals the sum of two amplitude-weighted carriers.
x(t) = A1 sin(2πf1 t) + A2 sin(2πf2 t), 0 ≤ t ≤ T
We send successive symbols by choosing an appropriate frequency and amplitude combination, and sending them one after another.
(a) What is the smallest transmission interval that makes sense to use with the frequencies given above? In other words, what should T be so that an integer number of cycles of the carrier occurs? (b) Sketch (using Matlab) the signal that the modem produces over several transmission intervals. Make sure your axes are labeled. (c) Using your signal transmission interval, how many amplitude levels are needed to transmit ASCII characters at a datarate of 3,200 bits/s? Assume use of the extended (8-bit) ASCII code.
note: We use a discrete set of values for A1 and A2. If we have N1 values for amplitude A1, and N2 values for A2, we have N1 N2 possible symbols that can be sent during each T-second interval. To convert this number into bits (the fundamental unit of information engineers use to quantify things), compute log2(N1 N2).
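The note’s bit count translates directly into code (Python; the counts N1, N2 and the baud interval are hypothetical values, not given in the problem).

    import math

    N1, N2 = 4, 4            # assumed numbers of amplitude values for A1 and A2
    T = 1.0 / 400.0          # assumed baud interval in seconds
    bits_per_baud = math.log2(N1 * N2)     # 4 bits per symbol with these counts
    print(bits_per_baud, "bits/baud ->", bits_per_baud / T, "bits/s")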
2.1 Complex Numbers^1
While the fundamental signal used in electrical engineering is the sinusoid, it can be expressed mathematically in terms of an even more fundamental signal: the complex exponential. Representing sinusoids in terms of complex exponentials is not a mathematical oddity. Fluency with complex numbers and rational functions of complex variables is a critical skill all engineers master. Understanding information and power system designs and developing new systems all hinge on using complex numbers. In short, they are critical to modern electrical engineering, a realization made over a century ago.
The notion of the square root of −1 originated with the quadratic formula: the solution of certain quadratic equations mathematically exists only if the so-called imaginary quantity √−1 could be defined. Euler^2 first used i for the imaginary unit, but that notation did not take hold until roughly Ampère’s time. Ampère^3 used the symbol i to denote current (intensité de courant). It wasn’t until the twentieth century that the importance of complex numbers to circuit theory became evident. By then, using i for current was entrenched, and electrical engineers chose j for writing complex numbers.

An imaginary number has the form jb = √(−b²). A complex number, z, consists of the ordered pair (a, b), where a is the real component and b is the imaginary component (the j is suppressed because the imaginary component of the pair is always in the second position). The imaginary number jb equals (0, b). Note that a and b are real-valued numbers. Figure 2.1 shows that we can locate a complex number in what we call the complex plane. Here, a, the real part, is the x-coordinate and b, the imaginary part, is the y-coordinate. From analytic geometry, we know that locations in the plane can be expressed as the sum of vectors, with the vectors corresponding to the x and y directions. Consequently, a complex number z can be expressed as the (vector) sum z = a + jb, where j indicates the y-coordinate. This representation is known as the Cartesian form of z. An imaginary number can’t be numerically added to a real number; rather, this notation for a complex number represents vector addition, but it provides a convenient notation when we perform arithmetic manipulations.

Some obvious terminology. The real part of the complex number z = a + jb, written as Re [z], equals a. We consider the real part as a function that works by selecting that component of a complex number not multiplied by j. The imaginary part of z, Im [z], equals b: that part of a complex number that is multiplied by j. Again, both the real and imaginary parts of a complex number are real-valued. The complex conjugate of z, written as z∗, has the same real part as z but an imaginary part of the opposite sign: z∗ = a − jb.
(^1) This content is available online at http://cnx.org/content/m0081/2.27/. (^2) http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Euler.html (^3) http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Ampere.html
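Python’s built-in complex type makes these definitions concrete (a minimal sketch; note that Python, like electrical engineers, writes the imaginary unit as j, though it places it after the number).

    z = 3 + 4j                # Cartesian form with a = 3, b = 4
    print(z.real, z.imag)     # Re[z] = 3.0, Im[z] = 4.0
    print(z.conjugate())      # z* = (3-4j): same real part, opposite imaginary part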
Surprisingly, the polar form of a complex number z can be expressed mathematically as
z = r e^(jθ) (2.2)
To show this result, we use Euler’s relations that express exponentials with imaginary arguments in terms of trigonometric functions.

e^(jθ) = cos θ + j sin θ (2.3)
cos θ = (e^(jθ) + e^(−jθ)) / 2

sin θ = (e^(jθ) − e^(−jθ)) / (2j)
The first of these is easily derived from the Taylor’s series for the exponential.
e^x = 1 + x/1! + x²/2! + x³/3! + ···
Substituting jθ for x, we find that
e^(jθ) = 1 + j(θ/1!) − θ²/2! − j(θ³/3!) + ···
because j^2 = − 1 , j^3 = −j, and j^4 = 1. Grouping separately the real-valued terms and the imaginary-valued ones,
e^(jθ) = (1 − θ²/2! + ···) + j(θ/1! − θ³/3! + ···)
The real-valued terms correspond to the Taylor’s series for cos (θ), the imaginary ones to sin (θ), and Euler’s first relation results. The remaining relations are easily derived from the first. Because of the relationship r = √(a² + b²), we see that multiplying the exponential in (2.3) by a real constant corresponds to setting the radius of the complex number by the constant.
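These relations are easy to check numerically. A short sketch using Python’s cmath module converts between Cartesian and polar forms and verifies Euler’s first relation.

    import cmath, math

    z = 3 + 4j
    r, theta = cmath.polar(z)       # r = |z| = 5.0, theta = arctan(4/3)
    print(r, theta)
    print(cmath.rect(r, theta))     # back to Cartesian: (3+4j), up to rounding
    # Euler: e^(j theta) = cos(theta) + j sin(theta)
    print(cmath.exp(1j * theta), complex(math.cos(theta), math.sin(theta)))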
Adding and subtracting complex numbers expressed in Cartesian form is quite easy: You add (subtract) the real parts and imaginary parts separately.
z1 ± z2 = (a1 ± a2) + j(b1 ± b2) (2.5)
To multiply two complex numbers in Cartesian form is not quite as easy, but it follows directly from the usual rules of arithmetic.

z1 z2 = (a1 + jb1)(a2 + jb2) = a1 a2 − b1 b2 + j(a1 b2 + a2 b1)
Note that we are, in a sense, multiplying two vectors to obtain another vector. Complex arithmetic provides a unique way of defining vector multiplication.
Exercise 2.3 (Solution on p. 30.) What is the product of a complex number and its conjugate?
Division requires mathematical manipulation. We convert the division problem into a multiplication problem by multiplying both the numerator and denominator by the conjugate of the denominator.
z1/z2 = (a1 + jb1) / (a2 + jb2)
      = ((a1 + jb1)(a2 − jb2)) / (a2² + b2²)
      = (a1 a2 + b1 b2 + j(a2 b1 − a1 b2)) / (a2² + b2²)
Because the final result is so complicated, it’s best to remember how to perform division—multiplying numerator and denominator by the complex conjugate of the denominator—rather than trying to remember the final result. The properties of the exponential make calculating the product and ratio of two complex numbers much simpler when the numbers are expressed in polar form.
z1 z2 = r1 e^(jθ1) r2 e^(jθ2) = r1 r2 e^(j(θ1 + θ2))

z1/z2 = (r1 e^(jθ1)) / (r2 e^(jθ2)) = (r1/r2) e^(j(θ1 − θ2))
To multiply, the radius equals the product of the radii and the angle the sum of the angles. To divide, the radius equals the ratio of the radii and the angle the difference of the angles. When the original complex numbers are in Cartesian form, it’s usually worth translating into polar form, then performing the multiplication or division (especially in the case of the latter). Addition and subtraction of polar forms amounts to converting to Cartesian form, performing the arithmetic operation, and converting back to polar form.

Example 2.
When we solve circuit problems, the crucial quantity, known as a transfer function, will always be expressed as the ratio of polynomials in the variable s = j2πf. What we’ll need to understand the circuit’s effect is the transfer function in polar form. For instance, suppose the transfer function equals

(s + 2) / (s² + s + 1), s = j2πf (2.10)

Performing the required division is most easily accomplished by first expressing the numerator and denominator each in polar form, then calculating the ratio. Thus,

(s + 2) / (s² + s + 1) = (j2πf + 2) / (1 − 4π²f² + j2πf)
  = (√(4 + 4π²f²) e^(j arctan(πf))) / (√((1 − 4π²f²)² + 4π²f²) e^(j arctan(2πf/(1 − 4π²f²))))
  = √((4 + 4π²f²) / (1 − 4π²f² + 16π⁴f⁴)) e^(j(arctan(πf) − arctan(2πf/(1 − 4π²f²)))) (2.13)
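Complex arithmetic lets a computer check the closed form. The sketch below (Python; the test frequency f = 1 Hz is an arbitrary choice) evaluates the transfer function directly and compares its magnitude against (2.13).

    import cmath, math

    f = 1.0                                    # arbitrary test frequency in Hz
    s = 1j * 2 * math.pi * f
    H = (s + 2) / (s ** 2 + s + 1)             # direct evaluation of the ratio
    mag = math.sqrt((4 + 4 * math.pi ** 2 * f ** 2)
                    / (1 - 4 * math.pi ** 2 * f ** 2 + 16 * math.pi ** 4 * f ** 4))
    print(abs(H), mag)                         # the two magnitudes agree
    print(cmath.phase(H))                      # the angle of H in radians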
2.2 Elemental Signals^4
Elemental signals are the building blocks with which we build complicated signals. By definition, elemental signals have a simple structure. Exactly what we mean by the “structure of a signal” will unfold in this section of the course. Signals are nothing more than functions defined with respect to some independent variable, which we take to be time for the most part. Very interesting signals are not functions solely of time; one great example is an image. For it, the independent variables are x and y (two-dimensional space). Video signals are functions of three variables: two spatial dimensions and time. Fortunately, most of the ideas underlying modern signal theory can be exemplified with one-dimensional signals.
(^4) This content is available online at http://cnx.org/content/m0004/2.29/.