The Logarithm Function

E. L. Lady

(November 5, 2005)

A logarithm function is characterized by the equation L(xy) = L(x) + L(y). If L(x) is any differentiable function defined on the positive reals and satisfying this equation, then its derivative is given by L′(x) = C/x, where C = L′(1). Furthermore, L(x) = log_B x, where B = e^C.

The main purpose of these notes is to give a modern definition of the logarithm function, which is the definition found in several contemporary calculus books, and to motivate the definition by examining the properties of the logarithm function as classically defined. This will involve going through the fairly standard calculation of the derivative of the logarithm, and thinking rather carefully about what is really going on in this calculation. (A somewhat condensed version of this treatment is given in the latest (i.e. 7th) edition of Salas & Hille, the book I've been teaching from for the past 20 years.)

1. The Classical Definition

I think that many students find it difficult to become comfortable with the logarithm function. I know that I did. And I have an idea as to why this is so.

The definitions of some functions tell you directly how to compute the value of that function at any given point. An example of such a definition would be f(x) = (x^4 − 9x^3 + 27)/(x^5 + 8x^3 − 7x^2 + 9). These functions are easy for the mind to accept.

Even the trig functions fall in this category, although the method for computing them described by the usual definitions involves constructing a geometric figure, and thus is not very helpful in practice. (To determine sin θ, for instance, one can construct a right triangle having θ as one of the acute angles and a hypotenuse of length 1. The value of sin θ can then be measured as the length of the side opposite the angle θ.)

The definitions of other functions, though, don’t tell you how to compute the value. Instead, they tell you how to recognize whether a particular answer is the correct one or not.

An example of such a function is f(x) = ∛x. One can check that ∛125 is 5 by verifying that 5^3 = 125. But when required to compute, say, ∛216, one winds up resorting to trial and error to discover the answer 6. If one is asked for ∛50, then trial and error does not work, since the answer is not an integer (and is in fact not even a rational number). If one doesn't have a calculator or set of tables handy, then one will be able to come up with at best only a very crude approximation (unless one knows a sophisticated trick such as Newton's Method, which can produce a fairly good approximation). The answer is clearly between 3 and 4, and a fairly quick calculation shows that 3.5 is too small and 3.7 is slightly too big, since 3.7^3 = 50.653.

It's not really the practical difficulty that matters, though. In practice, a function such as f(x) = (x^4 − 9x^3 + 27)/(x^5 + 8x^3 − 7x^2 + 9) is also not very easy to compute by hand. And one usually has a calculator handy in any case.

But conceptually, I believe that a function that's described by a specific algorithm is easier for the mind to accept than one that's essentially described as the solution of an equation. (For instance, the definition of the cube root function basically says that ∛a is defined to be the solution to the equation x^3 = a.)

Classically, log_B a is defined to be the solution to the equation B^x = a. For instance, one verifies that log_2 32 = 5 by noting that 2^5 = 32.

If one wants to calculate by hand log_2 12, then one is in real trouble. One sees quickly that the answer is between 3 and 4, since 2^3 = 8 and 2^4 = 16. It seems reasonable to try 3.5. This involves seeing whether 2^3.5 is larger or smaller than 12.

But what is 2^3.5? Since 3.5 = 7/2,

2^3.5 = 2^(7/2) = √(2^7) = √128.

Now √128 is not something one can compute instantaneously. However, since 11^2 = 121, one can say that 2^3.5 = √128 is slightly bigger than 11, and certainly smaller than 12, so that log_2 12 is definitely larger than 3.5.

One might try 3.6. This would involve comparing 2^3.6 with 12. To do this, note that 3.6 = 36/10 = 18/5, so that 2^3.6 = 2^(18/5) = (2^18)^(1/5). At this point, any reasonable person is going to go find a good calculator, but if one resorts to a little cleverness one can see that

12^5 = (2^2 · 3)^5 = 2^10 · 3^5 = 2^10 · 243,

which is smaller than

2^18 = 2^10 · 2^8 = 2^10 · 256.

Since 12^5 < 2^18, we see that 12 < (2^18)^(1/5) = 2^3.6, so that the exponent x for which 2^x = 12 must be smaller than 3.6. In other words, log_2 12 < 3.6.
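The bounds just derived are easy to confirm numerically. The following is a minimal Python sketch, added here purely for illustration (it is not part of the original notes): it repeats the exact integer comparison 12^5 < 2^18 and then runs the same kind of trial-and-error search as a bisection.

    import math

    # The exact integer arithmetic used in the text:
    # 12^5 = (2^2 * 3)^5 = 2^10 * 3^5, while 2^18 = 2^10 * 2^8.
    assert 12**5 == 2**10 * 3**5      # both equal 248832
    assert 12**5 < 2**18              # 248832 < 262144, hence log_2 12 < 3.6

    # Bisection for the exponent x with 2^x = 12, starting from the
    # observation in the text that the answer lies between 3 and 4.
    lo, hi = 3.0, 4.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if 2.0**mid < 12.0:
            lo = mid
        else:
            hi = mid

    print(lo)              # about 3.5849625
    print(math.log2(12))   # the same value, for comparison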

Now at this point, almost any mathematician will object and say that all the above is irrelevant and has almost nothing to do with calculus. It’s the concept of the logarithm that’s important. One doesn’t have to know how to actually calculate logarithms in order to understand how the logarithm function behaves for purposes of calculus.

This is quite correct. But I think that accepting this point of view is a big step in sophistication for many students, and one should not simply gloss over it. If the student finds the logarithm concept somehow very alien, then I think that going carefully through the preceding calculations may help.

logarithms of the two factors, plus a few other tricks.

lim_{h→0} [log_B(x + h) − log_B x]/h
  = lim_{h→0} [log_B(x(1 + h/x)) − log_B x]/h
  = lim_{h→0} [log_B x + log_B(1 + h/x) − log_B x]/h
  = lim_{h→0} [log_B(1 + h/x) − 0]/h
  = lim_{h→0} [log_B(1 + h/x) − log_B 1]/h
  = lim_{h/x→0} [log_B(1 + h/x) − log_B 1]/(x · (h/x))
  = (1/x) lim_{h/x→0} [log_B(1 + h/x) − log_B 1]/(h/x).

We have used here the fact that log_B 1 = 0 and that as we let h approach 0 in order to take the limit, x does not change, so that saying that h approaches 0 is the same as saying that h/x approaches 0. For the same reason, the factor 1/x can be brought outside the limit sign.

We have been able to do the whole calculation above without giving any thought to what the logarithm function really means. On the other hand, the final result seems, if anything, even more complicated than what we started with.

In fact, though, we have achieved a substantial simplification. This is because

lim_{h/x→0} [log_B(1 + h/x) − log_B 1]/(h/x)

does not actually depend on x at all. The quantity h/x is simply something that is approaching 0, and if we write k = h/x, then the calculation so far yields

d/dx (log_B x) = (1/x) lim_{k→0} [log_B(1 + k) − log_B 1]/k = C/x,

where C is a constant independent of x.

Computing C is the most difficult part of the whole calculation. It turns out that C = log_B e, where e is a special irrational number which is as important in calculus as that other more famous irrational number π. But it's more interesting to notice that

C = lim_{k→0} [log_B(1 + k) − log_B 1]/k

is simply the derivative of the logarithm function evaluated at x = 1. In other words, if we let L(x) denote log_B x, then the calculation above shows that

L′(x) = (1/x) L′(1).

And we derived this simply by using the algebraic rules obeyed by the logarithm.
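For readers who like numerical evidence, here is a small Python sketch illustrating this conclusion (the base B = 10 and the step h are arbitrary choices made for the illustration): the difference quotient of log_B at x closely matches L′(1)/x.

    import math

    B = 10.0
    h = 1e-6

    def L(x):
        return math.log(x, B)      # the logarithm to the base B

    C = (L(1 + h) - L(1)) / h      # difference-quotient estimate of L'(1)

    for x in [0.5, 2.0, 7.0]:
        slope = (L(x + h) - L(x)) / h     # estimate of L'(x)
        print(x, slope, C / x)            # last two columns agree closely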

3. Some Problems with the Classical Definition

As stated in the beginning, the real purpose of these notes is not to simply repeat the standard calculation of the derivative of the logarithm function, but to replace the classical definition of the logarithm by a more subtle one, which is in some ways superior.

There are actually a number of problems with the classical definition. Although these problems are by no means insurmountable, they do require quite a bit of effort to get around.

We have defined log_B x as the solution to an equation. To wit, we have said that ℓ = log_B x means B^ℓ = x.

There's a very important principle that should cause one to object to this definition. Namely, before one can define a function as the solution to an equation, one needs to find a reason for believing that such a solution exists.

In other words, how do we know that there is an exponent ℓ such that B^ℓ = x (for positive x)?

(We might also ask how we know that there’s only one. But this turns out to be an easier question.)

This is not a frivolous objection, when one considers the fact that one is almost never able to calculate the value for the logarithm precisely, and in fact, as seen in the beginning of these notes, it's not even all that easy to see whether a particular value for ℓ is the correct choice or not, since computing B^ℓ is itself not at all easy if ℓ is not an integer.

Getting around these difficulties leads one into some rather deep mathematical waters, involving the use of something called the Intermediate Value Theorem for continuous functions.

But even without getting into all this, one can note that the function B^ℓ is not at all as straightforward as it might at first seem. For instance, it is not even completely obvious that, for fixed B, the function f(ℓ) = B^ℓ is a continuous function of ℓ. Take two numbers, for instance, such as 5/8 and 7/11. These are fairly close to each other. In fact, 5/8 = .625 and 7/11 ≈ .636. Now choose a fixed B, say B = 10. Then

10^(5/8) = (10^5)^(1/8)

10^(7/11) = (10^7)^(1/11)

It doesn't seem at all obvious that these two values will be at all close to each other. (They are, however. To see why, put both fractions over a common denominator.

10^(5/8) = 10^(55/88) = (10^55)^(1/88)

10^(7/11) = 10^(56/88) = (10^56)^(1/88)

It's not too hard to prove that these are fairly close.)
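A quick numerical check of that closeness, in the same illustrative Python style used earlier in these edits:

    a = 10 ** (5 / 8)     # the 88th root of 10^55
    b = 10 ** (7 / 11)    # the 88th root of 10^56
    print(a, b)           # about 4.217 and 4.329: indeed fairly close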

The most serious objection to the use of the function B^ℓ is the question of what one does when ℓ is irrational. If ℓ = √3, then it's impossible to write ℓ as a fraction p/q with p and q both integers, and so, technically at least, B^ℓ isn't even defined. An engineer or a physicist might not have a big

Using the modern approach, we don’t worry in the beginning about how the logarithm function is concretely defined. Instead, one simply considers a differentiable function L(x) defined for all strictly positive real numbers x and satisfying the axiom

L(xy) = L(x) + L(y).

As already shown above, this axiom is all we need to derive the formula L′(x) = C/x where C = L′(1).

But it turns out that we could also have started out with this second formula.

Theorem. Let L(x) be a differentiable function defined on the (strictly) positive real numbers. Then

L(xy) = L(x) + L(y) for all x and y

if and only if L(1) = 0 and L′(x) = C/x for some constant C.

Furthermore, in this case C = L′(1), and also L(x^r) = rL(x) for all r.

proof: If L(xy) = L(x) + L(y), then in particular L(1) = L(1 · 1) = L(1) + L(1). Subtracting L(1) from both sides, we see that L(1) = 0. Also, we get L(x) = L(y · x/y) = L(y) + L(x/y). Subtracting L(y) from both sides of this equation yields L(x/y) = L(x) − L(y). Furthermore, the calculation given earlier can be adapted to show that L′(x) = L′(1)/x. Namely (where this time we skip a few steps),

L′(x) = lim_{h→0} [L(x + h) − L(x)]/h
  = lim_{h→0} [L(x) + L(1 + h/x) − L(x)]/h
  = lim_{h→0} [L(1 + h/x) − L(1)]/h
  = (1/x) lim_{h/x→0} [L(1 + h/x) − L(1)]/(h/x)
  = (1/x) L′(1).

On the other hand, start with the principles L(1) = 0 and L′(x) = C/x for some constant C. Now let a be an arbitrary but fixed positive real number and compute the derivative of the function L(xa). By the chain rule,

d/dx L(xa) = (C/(xa)) · d(xa)/dx = (C/(xa)) · a = C/x = L′(x).

Therefore the two functions L(xa) and L(x) have the same derivative, so they must differ by a constant: L(xa) = L(x) + K.

In particular, L(a) = L(1 · a) = L(1) + K = 0 + K = K, so that K = L(a). Therefore we have L(xa) = L(x) + L(a). Writing y instead of a , we have established that

L(xy) = L(x) + L(y).

Now compute the derivative of L(x^r) by the Chain Rule.

d/dx L(x^r) = (C/x^r) · d/dx (x^r) = (C/x^r) · r x^(r−1) = rC/x = r L′(x).

From this we see that the two functions rL(x) and L(x^r) have the same derivative and therefore differ by a constant. But substituting x = 1 shows that this constant must be 0. Therefore

L(x^r) = rL(x). □

The above theorem is at first glance amazing, because it shows that the formula L′(x) = L′(1)/x is true not only for logarithm functions, but for any function satisfying the axiom L(xy) = L(x) + L(y).

It turns out, though, that the only functions satisfying this axiom are logarithms. In fact,

Theorem. If L(x) satisfies the axiom L(xy) = L(x) + L(y) and if B is a number such that L(B) = 1, then ℓ = L(x) if and only if B^ℓ = x.

(Thus L(x) is the logarithm of x with respect to the base B in the traditional sense.)

proof: On the one hand, if x = B^ℓ then L(x) = L(B^ℓ) = ℓL(B) = ℓ · 1 = ℓ. Conversely, if ℓ = L(x), then the above shows that L(x) = L(B^ℓ). But since L′(x) = L′(1)/x and we are considering only positive x, then L′(x) > 0 for all x if L′(1) > 0, and therefore L is a strictly increasing function; likewise L is strictly decreasing if L′(1) < 0. In either case, L(x) = L(B^ℓ) is only possible if x = B^ℓ, as claimed.

(Note: L′(1) = 0 is not possible. Otherwise L′(x) = L′(1)/x = 0 for all x, so that L(x) is a constant function. Thus L(B) = L(1) = 0, contrary to the assumption that L(B) = 1.) □

note: If one is being ultra-careful, one should ask, how do we know that there exists a number B such that L(B) = 1? This will in fact be the case if and only if L′(1) ≠ 0. As pointed out in the above proof, if L′(1) = 0 then L must be a constant function, and in fact L(x) = 0 for all x. On the other hand, if L′(x) > 0 or L′(x) < 0, then one can see from the Mean Value Theorem, for instance, that there must exist numbers a such that L(a) ≠ 0. Now solve for r such that rL(a) = 1 and set B = a^r. Then L(B) = L(a^r) = rL(a) = 1. (There's actually still a small glitch here if one is being totally rigorous. But this is pretty convincing, and a more rigorous proof is indeed possible.)
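The construction in this note can be watched in action numerically. The Python sketch below is an illustration only: it takes L to be the natural logarithm (so the axiom L(xy) = L(x) + L(y) certainly holds) and chooses a = 7.3 arbitrarily.

    import math

    L = math.log           # natural log; satisfies L(xy) = L(x) + L(y)

    a = 7.3                # any positive number with L(a) != 0 will do
    r = 1 / L(a)           # solve r * L(a) = 1
    B = a ** r             # then L(B) = L(a^r) = r * L(a) = 1

    print(L(B))            # 1.0, up to rounding
    print(B, math.e)       # and B is of course just e, since ln e = 1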

number like 5 or −3.)

If one uses the classical definition of the logarithm at this stage, one winds up doing basically all the same work involved in developing the logarithm function in the traditional way. The only difference in the two approaches is whether one does the really hard work first or last.

5. The Bottom Line

The advantage of doing the theoretical work first, though, is that knowing the theory of the logarithm can suggest easier methods for actually constructing it.

In particular, the method used here will be based on the fact that d/dx (ln x) = 1/x. Since the function 1/x is continuous for x positive, and since the Fundamental Theorem of Calculus guarantees that every continuous function has an anti-derivative, we can define the natural logarithm to be an anti-derivative of the function 1/x. To make this specific, we will adjust the "constant of integration," as it were, to make ln 1 = 0.

The way we actually do this is as follows. We choose a (theoretical) anti-derivative F(x) for 1/x. And then we define ln x = F(x) − F(1). It follows immediately that

d/dx (ln x) = dF/dx = 1/x and that ln 1 = F(1) − F(1) = 0.

To a mathematician, this is everything that’s needed to show the existence of the natural logarithm function.

To most non-mathematicians, though, this definition seems somehow too mystical. This anti-derivative F(x), although guaranteed by the Fundamental Theorem of Calculus, seems to live somewhere in Never Never Land. We have no idea what it actually looks like.

There is a way of making this definition seem more concrete. If F(x) is an anti-derivative for 1/x, then

∫_a^b dt/t = F(b) − F(a).

(Changing the variable of integration to t doesn't affect the answer, but avoids confusion in the next step.) In particular, we now get

ln x = F(x) − F(1) = ∫_1^x dt/t.

This definition,

ln x = ∫_1^x dt/t,

is the one given in several modern calculus books, for instance Salas & Hille. And while it may at first seem a bit difficult to think of a function as being defined by an integral which one doesn't know how to calculate (except by using the function ln x itself), the definition is actually quite concrete, and one can visualize ln x as being the area under the appropriate part of the curve y = 1/x. And one can compute it approximately (to as much accuracy as desired) by approximately calculating that area. (This is not the best way of calculating the logarithm function, though.)
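To make the last remark concrete, here is a short Python sketch that approximates the area under y = 1/t by a midpoint Riemann sum and compares the result with the library logarithm. (The helper name ln_by_area and the number of strips are choices made purely for this illustration.)

    import math

    def ln_by_area(x, n=10_000):
        # Midpoint-rule approximation of the integral of 1/t from 1 to x.
        dt = (x - 1) / n
        total = 0.0
        for i in range(n):
            t = 1 + (i + 0.5) * dt     # midpoint of the i-th strip
            total += dt / t            # area of one thin strip under y = 1/t
        return total

    print(ln_by_area(2.0), math.log(2.0))     # both about 0.693147
    print(ln_by_area(10.0), math.log(10.0))   # both about 2.302585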

In some ways, I think that once one gets used to it, this definition is actually more tangible than the traditional one.

6. The Exponential Function

By definition, e is the unique number such that ln e = 1, i.e.

∫_1^e dt/t = 1.

It turns out that e is irrational, but an approximate value is 2.718.
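In fact, e can be located directly from this characterization. The Python sketch below (again an illustration; it reuses the midpoint-rule idea from the previous sketch) bisects for the number whose integral is exactly 1.

    import math

    def ln_approx(x, n=20_000):
        # midpoint-rule area under y = 1/t from 1 to x
        dt = (x - 1) / n
        return sum(dt / (1 + (i + 0.5) * dt) for i in range(n))

    lo, hi = 2.0, 3.0                # since ln 2 < 1 < ln 3
    for _ in range(40):
        mid = (lo + hi) / 2
        if ln_approx(mid) < 1.0:
            lo = mid
        else:
            hi = mid

    print(lo, math.e)                # both about 2.718281828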

From the preceding, it can be seen that ln e^x = x ln e = x.

As discussed earlier, for certain values of x, for instance x = √3 (as well as x = π and x = ∛2), it is not really clear how to raise a number to the x power, because x cannot be written precisely as a fraction p/q with p and q being integers. So what we can really say is that ln e^x = x whenever e^x is defined.

This actually gives us a cheap solution to the problem of deciding how to define e^x when x is irrational. Namely, we can specify, for instance, that e^√3 is the number y such that ln y = √3. (This works because ln x is an increasing function for positive x, since its derivative 1/x is positive. Therefore there can't be two different values of y such that ln y = √3. Furthermore, one can prove that there does exist a value of y that works by using something called the Intermediate Value Theorem.)

Since this new definition of e^x agrees with the usual definition whenever the usual one applies, in this way we get e^x defined as a differentiable function of x for all real numbers x. And it is also easy to show that the usual rule for exponents applies:

e^(x+y) = e^x e^y.

(To see that this is true, just take the logarithm of both sides:

ln(e^(x+y)) = x + y = ln e^x + ln e^y = ln(e^x e^y),

using the principle that ln a + ln b = ln(ab).)

It is easy to compute the derivative of e^x. In fact, if y = e^x then x = ln y and so

1 = dx/dx = d/dx (ln y) = y′/y

(using the chain rule), so that y′ = y.

In other words, d/dx (e^x) = e^x. It is also interesting to note the following more general principle, analogous to the way we computed the derivative of the logarithm function.
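Before moving on, here is a difference-quotient check of d/dx (e^x) = e^x, in the same illustrative Python style as before:

    import math

    h = 1e-7
    for x in [0.0, 1.0, 2.5]:
        slope = (math.exp(x + h) - math.exp(x)) / h
        print(x, slope, math.exp(x))   # slope matches e^x to about 6 digits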

7. Complex Variables

Looking at the exponential function as characterized by the fact that it takes addition to multiplication leads in a natural and fairly simple way to Euler's formula:

e^(iθ) = cos θ + i sin θ,

where i is the square root of −1. This is pointed out in the very interesting book Where Mathematics Comes From by George Lakoff and Rafael E. Núñez.

Euler's formula can be derived quite easily by looking at the Taylor series expansions for e^x, sin x, and cos x. But for some reason, undoubtedly personal prejudice on my part, I have always found describing e^x in terms of its Taylor series unsatisfactory on an intuitive level.

It will be useful to begin by reviewing the most basic facts of the algebra of complex numbers. The complex number system is obtained from the set of real numbers by adjoining the number i = √−1. All complex numbers have the form a + bi, where a and b are real numbers. One thinks of the number a + bi as represented by the point (or vector) (a, b) in the Euclidean plane. Addition of complex numbers then corresponds to elementary vector addition. This fact is completely non-profound.

What is deeper is that multiplication of complex numbers can also be described geometrically if one thinks in terms of polar coordinates. The usual rules for change of coordinates show that a point with polar coordinates r and θ will have cartesian coordinates (x, y) where x = r cos θ and y = r sin θ. Thus the complex number corresponding to polar coordinates r and θ will be

r cos θ + ir sin θ = r(cos θ + i sin θ).

And here, in the absolutely most basic conceptual framework of the complex number system, we suddenly encounter the very expression that constitutes the right-hand side of Euler's formula.

But this in itself is not very deep. What is more significant is the fact that when one multiplies complex numbers, what happens is that the absolute values multiply and the corresponding angles add. (Here, by the absolute value (or modulus) of a complex number a + bi is meant √(a^2 + b^2), which is the value for r when we write (a, b) in polar coordinates. There are of course many numerical values corresponding to the angle θ (called the argument) for the polar coordinates of (a, b), but one choice that works is tan⁻¹(b/a) in case a + bi is in the right half-plane, i.e. when a ≥ 0.)

The proof that multiplication of complex numbers corresponds to addition of the angles and multiplication of the moduli follows from the addition formulas for the sine and cosine:

r₁(cos θ₁ + i sin θ₁) · r₂(cos θ₂ + i sin θ₂) = r₁r₂((cos θ₁ cos θ₂ − sin θ₁ sin θ₂) + i(cos θ₁ sin θ₂ + sin θ₁ cos θ₂))

= r₁r₂(cos(θ₁ + θ₂) + i sin(θ₁ + θ₂)).
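One can watch the moduli multiply and the angles add using Python's built-in complex numbers (a sketch for illustration; cmath.rect and cmath.polar convert between Cartesian and polar form):

    import cmath

    z1 = cmath.rect(2.0, 0.5)    # modulus 2, argument 0.5
    z2 = cmath.rect(3.0, 0.9)    # modulus 3, argument 0.9

    r, theta = cmath.polar(z1 * z2)
    print(r)        # 6.0: the moduli 2 and 3 multiply
    print(theta)    # 1.4: the arguments 0.5 and 0.9 add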

Thus algebra for complex numbers works just fine as long as one is adding, subtracting, and multiplying. Division is also not a problem. But when it comes to exponentiation, things are not so clear. Expressions such as i^i, or even 2^i, don't have any obvious interpretation. And one can't hope to prove Euler's formula until one defines what e^(iθ) means.

Now having just looked at the exponential function as being characterized by the fact that it takes addition to multiplication, the fact that multiplication of complex numbers corresponds to addition of the corresponding angles should certainly attract our interest. If we set F(θ) = cos θ + i sin θ, then as just seen, F(θ₁ + θ₂) = F(θ₁)F(θ₂), which suggests that the function F is something like an exponential function.

We also notice that F′(θ) = d/dθ (cos θ + i sin θ) = −sin θ + i cos θ = i(cos θ + i sin θ) = iF(θ).

Now if the function e^(iθ) is meaningful and behaves according to the familiar rules of algebra and calculus, then we must have

e^(i(θ₁+θ₂)) = e^(iθ₁) e^(iθ₂)

and

d/dθ (e^(iθ)) = i e^(iθ).

We have seen that the function F(θ) = cos θ + i sin θ (defined when θ is a real number) has these properties. Furthermore, it is the only function which has them. Because if we look at the quotient e^(iθ)/F(θ) and apply the quotient rule, using the properties we have established, we get

d/dθ (e^(iθ)/F(θ)) = (F(θ) · i e^(iθ) − iF(θ) · e^(iθ))/F(θ)² = 0.

So we see that e^(iθ)/F(θ) is constant. Since it equals 1 when θ = 0, it follows that e^(iθ) = F(θ) = cos θ + i sin θ.

So here we get Euler's formula almost as a matter of definition. Alternatively, we could plausibly define e^(iθ) by its Taylor series expansion. Or we could define it as the solution to the differential equation F′ = iF with F(0) = 1. Ultimately it doesn't matter, because all these characterizations can be shown to be equivalent.
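As a numerical illustration of that equivalence (a sketch only): the library exponential of iθ, the right-hand side of Euler's formula, and a crude step-by-step solution of F′ = iF all land on the same complex number.

    import cmath, math

    theta = 1.2

    print(cmath.exp(1j * theta))                         # e^(i theta)
    print(complex(math.cos(theta), math.sin(theta)))     # cos + i sin

    # Solving F' = iF, F(0) = 1, in n small steps: F -> F * (1 + i*theta/n).
    # As n grows, (1 + i*theta/n)^n approaches e^(i*theta).
    n = 100_000
    F = 1 + 0j
    for _ in range(n):
        F *= 1 + 1j * (theta / n)
    print(F)                                             # nearly the same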

At this point, we have e^z defined when z is a real number and when z is pure imaginary: z = iy. But if z = x + iy, we get e^z = e^(x+iy) = e^x e^(iy) = e^x(cos y + i sin y).

Thus e^z is meaningful for all complex numbers.

As for logarithms, clearly we should require that ln(cos y + i sin y) = ln e^(iy) = iy. Then if we write a complex number x + iy in terms of polar coordinates, i.e. x + iy = re^(iθ), we get

ln(x + iy) = ln r + ln(e^(iθ)) = ln r + iθ.

Since r is a positive real number (as long as z ≠ 0), ln r is defined, and thus ln z is defined for all non-zero complex numbers z.

There is a slight catch, though. The value θ as given above is not uniquely determined by x + iy. We can replace θ by θ + 2π, θ + 4π, θ − 2π, etc., and sin θ and cos θ will not change, and thus x + iy will not change. Thus ln z for a complex number z is only defined up to a multiple of 2πi.
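One can see a library forced to make this choice. Python's cmath.log returns one particular value (the "principal" one), and shifting it by any multiple of 2πi exponentiates back to the same z (a sketch for illustration):

    import cmath, math

    z = complex(-1.0, 1.0)
    w = cmath.log(z)          # the principal value: about 0.3466 + 2.3562j

    for k in (-1, 0, 1, 2):
        print(cmath.exp(w + 2j * math.pi * k))   # always about -1+1j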

This is extremely unfortunate. But that’s the way it is. Those who work in complex analysis use the oxymoron “multiple-valued function” (multivalent function) to describe this situation.