Linear and Nonlinear Equations: Analytical and Numerical Solutions with Mathematica, Exercises of Differential Equations

This document explains the differences between linear and nonlinear equations and their analytical and numerical solution with Mathematica, with examples of algebraic and transcendental equations. It covers the use of Solve, NSolve, Reduce, and Newton's method for solving equations.

What you will learn

  • How to find the roots of a quadratic equation?
  • What are algebraic equations and how to solve them?
  • What are transcendental equations and how to solve them numerically?
  • How to solve linear equations analytically and numerically using Mathematica?
  • What is the difference between linear and nonlinear equations?

Typology: Exercises

2021/2022

Uploaded on 09/27/2022

heathl 🇺🇸



Equations and Systems of Equations

Linear and nonlinear equations

There are two groups of equations and systems of equations: linear and nonlinear. This division holds both for equations for unknown quantities (numbers) that are considered in this chapter and for equations for unknown functions, such as differential equations, that will be considered later. Linear equations contain only zero and first combined powers of unknowns, such as the equation

ax == b

and the system of equations

ax + by == c
dx + ey == f.

Here x and y are unknowns and the other symbols are parameters or numbers. Equations containing higher combined powers of unknowns, such as x^2 or xy, or functions of unknowns, such as Sin[x], are nonlinear.

Linear equations and systems of equations are analytically solvable, and the solution, if it exists, is unique. Nonlinear equations can have several solutions, and analytical solutions are available only in a limited number of cases, such as the quadratic, cubic, and quartic equations

ax^2 + bx + c == 0
ax^3 + bx^2 + cx + d == 0
ax^4 + bx^3 + cx^2 + dx + e == 0,

as well as some irrational equations

Sqrt[x - a] + Sqrt[x - b] == c,

some trigonometric equations

Sin[x] == Cos[2 x]

and other equations. All these equations are algebraic because they can be simplified to polynomial equations.

If equations contain different functions of unknowns at the same time, such as

x == Cos[x]

they are called transcendental equations because they are non-algebraic and cannot, in general, be solved analytically. In many cases the solution of nonlinear equations must be done numerically, and special care should be taken in the case of many solutions.

Solution of equations with Mathematica

Linear equations

Both analytical and numerical solutions of equations are output by Mathematica in the form of rules

In[25]:= Solve[a x + b == 0, x]

Out[25]= {{x -> -(b/a)}}

that with the help of a replacement can be further used to find the resulting expression for x

x /. Solve[a x + b == 0, x]

{-(b/a)}

or better

x /. Solve[a x + b == 0, x][[1]]

-(b/a)

or to define x as a function of the parameters

xab[a_, b_] = x /. Solve[a x + b == 0, x][[1]]; xab[a, b]

-(b/a)

Nonlinear algebraic equations

Note that the solution of an equation is a list of rules corresponding to all its roots. A quadratic equation has two roots,

sol = Solve[a x^2 + b x + c == 0, x]

{{x -> (-b - Sqrt[b^2 - 4 a c])/(2 a)}, {x -> (-b + Sqrt[b^2 - 4 a c])/(2 a)}}

so that sol is a list of two elements. The roots of this equation form the list

x /. sol

{(-b - Sqrt[b^2 - 4 a c])/(2 a), (-b + Sqrt[b^2 - 4 a c])/(2 a)}

and the individual roots can be addressed as

x /. sol[[1]]
x /. sol[[2]]

(-b - Sqrt[b^2 - 4 a c])/(2 a)

(-b + Sqrt[b^2 - 4 a c])/(2 a)
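As a quick numeric cross-check of the two root rules above, the quadratic formula can be evaluated in any language; here is a minimal Python sketch (the helper name quadratic_roots is ours, not part of the notebook):

```python
import cmath  # complex sqrt also handles a negative discriminant

def quadratic_roots(a, b, c):
    """Both roots of a x^2 + b x + c == 0, in the same order as sol above."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b - d) / (2 * a), (-b + d) / (2 * a)

# x^2 - 3 x + 2 == 0 factors as (x - 1)(x - 2)
r1, r2 = quadratic_roots(1, -3, 2)
```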

The following irrational equation can be reduced to a quadratic equation by squaring it two times. The resulting quadratic equation has two roots, only one of which is a solution of the initial equation, while the other solution is parasitic. This is why this equation is still algebraic.

sol = Solve[Sqrt[x - a] + Sqrt[x - b] == c, x]

{{x -> (a^2 - 2 a b + b^2 + 2 a c^2 + 2 b c^2 + c^4)/(4 c^2)}}

Plot[{Sqrt[x - 1] + Sqrt[x] + Sqrt[x + 1], 3}, {x, 1, 1.5}, PlotRange -> All]

[plot of Sqrt[x - 1] + Sqrt[x] + Sqrt[x + 1] and the constant 3 for 1 <= x <= 1.5]

Indeed, the solution above is OK. It is more practical to force Mathematica to solve this equation numerically from the beginning

Solve[Sqrt[x - 1] + Sqrt[x] + Sqrt[x + 1] == 3.]

{{x -> 1.1866}}
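The numerical root can be confirmed without Solve at all; a small Python sketch (our own, with an arbitrarily chosen tolerance) using plain bisection on the same equation:

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Bisection search; f must change sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:   # root is in the left half
            b = m
        else:                # root is in the right half
            a, fa = m, f(m)
    return 0.5 * (a + b)

f = lambda x: math.sqrt(x - 1) + math.sqrt(x) + math.sqrt(x + 1) - 3
root = bisect(f, 1.0, 1.5)  # the left side is increasing on this interval
```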

For more complicated equations there can be an output saying that Mathematica has reduced the equation to a polynomial equation

Solve[Sqrt[x - 1] + Sqrt[x] + Sqrt[x + 1] + Sqrt[x + 2] == 5]

{{x -> Root[-82534969521 + 124036315200 #1 - 62668165600 #1^2 + 14394809600 #1^3 - 1552800000 #1^4 + 64000000 #1^5 &, 1]}}

that can be solved only numerically

N[%]

{{x -> 1.28929}}

Again, the numerical solution can be done from the beginning

Solve[Sqrt[x - 1] + Sqrt[x] + Sqrt[x + 1] + Sqrt[x + 2] == 5.]

{{x -> 1.28929}}

In this case one can use the NSolve command that works on polynomial equations, according to Mathematica's help

NSolve[Sqrt[x - 1] + Sqrt[x] + Sqrt[x + 1] + Sqrt[x + 2] == 5]

{{x -> 1.28929}}

Although this equation is not polynomial, Mathematica can reduce it to a polynomial equation; this is why NSolve works here.

Many trigonometric equations can be solved by their reduction to polynomial equations

Solve[Sin[x] == Cos[2 x], x]
N[%]

Solve::ifun : Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information.

{{x -> -π/2}, {x -> π/6}, {x -> 5 π/6}}

{{x -> -1.5708}, {x -> 0.523599}, {x -> 2.61799}}

Indeed, this equation is equivalent to a quadratic equation for Sin[x] because

Cos[2 x] = Cos[x]^2 - Sin[x]^2 = 1 - 2 Sin[x]^2

This formula can be obtained with Mathematica by

TrigExpand[Cos[2 x]] /. Cos[x]^2 -> 1 - Sin[x]^2

1 - 2 Sin[x]^2

The solution of the resulting quadratic equation with respect to Sin[x] has the form

Solve[Sin[x] == 1 - 2 Sin[x]^2, Sin[x]]

{{Sin[x] -> -1}, {Sin[x] -> 1/2}}

This output constitutes two simple trigonometric equations for x that can be solved to give the result above, x = -π/2, x = π/6, and x = 5π/6. Since the original equation is algebraic, NSolve (applicable to polynomials) also does the job

NSolve[Sin[x] == Cos[2 x], x]

Solve::ifun : Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information.

{{x -> -1.5708}, {x -> 0.523599}, {x -> 2.61799}}

Both Solve and NSolve do not take into account the periodicity of the trigonometric functions and thus lose some solutions. Complete solutions can be obtained by Reduce

Reduce[Sin[x] == Cos[2 x], x]
N[%]

C[1] ∈ Integers && (x == -π/2 + 2 π C[1] || x == π/6 + 2 π C[1] || x == 5 π/6 + 2 π C[1])

C[1] ∈ Integers && (x == -1.5708 + 6.28319 C[1] || x == 0.523599 + 6.28319 C[1] || x == 2.61799 + 6.28319 C[1])
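Reduce's family of solutions is easy to spot-check numerically; a small Python sketch (our own, not from the text) verifying that each base root shifted by 2πk still satisfies the equation:

```python
import math

def residual(x):
    """Left minus right side of Sin[x] == Cos[2 x]."""
    return math.sin(x) - math.cos(2 * x)

base_roots = [-math.pi / 2, math.pi / 6, 5 * math.pi / 6]
# every base root shifted by a multiple of the period 2*pi is also a root
shifted = [x + 2 * math.pi * k for x in base_roots for k in (-2, -1, 0, 1, 2)]
max_residual = max(abs(residual(x)) for x in shifted)
```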

Another example of an algebraic equation is

Solve[E^(2 x) - E^x - 1 == 0]

Solve::ifun : Inverse functions are being used by Solve, so some solutions may not be found; use Reduce for complete solution information.

{{x -> ⅈ π + Log[(1/2) (-1 + Sqrt[5])]}, {x -> Log[(1/2) (1 + Sqrt[5])]}}

Plot[{Cos[x], x}, {x, -3, 3}]

[plot of Cos[x] and x for -3 <= x <= 3; the curves cross at the single root of x == Cos[x]]

the choice of the starting point is irrelevant. If the equation has several solutions, usually (but not always!) FindRoot finds the solution closest to the starting point. For instance, the equation

Cos[x] == x/5

has three solutions:

Plot[{Cos[x], x/5}, {x, -7, 7}]

[plot of Cos[x] and x/5 for -7 <= x <= 7]

FindRoot[Cos[x] == x/5, {x, 0}]

FindRoot::lstol : The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances.

{x -> 6.08183}

Here FindRoot got stuck in the vicinity of the near-solution x ≈ 6.

FindRoot[Cos[x] == x/5, {x, 2}]

{x -> 1.30644}

This is the correct positive solution.

FindRoot[Cos[x] == x/5, {x, -1}]

{x -> -1.97738}

Also a correct solution.

FindRoot[Cos[x] == x/5, {x, -5}]

{x -> -3.83747}

But

FindRoot[Cos[x] == x/5, {x, -6}]

{x -> -1.97738}

gives a solution that is not the closest to the starting point. One can plot the dependence of the solution on the starting point

xx0[x0_] := x /. FindRoot[Cos[x] == x/5, {x, x0}]
Plot[xx0[x0], {x0, -5, 2}]

[plot of the found root as a function of the starting point x0 for -5 <= x0 <= 2]

Here one can see all three roots plus the wrong root x ≈ 6. One can eliminate the wrong root by bracketing the search interval to, say, {-5, 2} in the FindRoot command

xx0[x0_] := x /. FindRoot[Cos[x] == x/5, {x, x0, -5, 2}]
Plot[xx0[x0], {x0, -5, 2}]

[plot of the bracketed root as a function of the starting point x0; the spurious root near 6 no longer appears]
There are functions that cause problems in the standard version of FindRoot if the starting point is not sufficiently close to the root:

{x -> 1.30644}

Steps = 4 Evaluations = 5

or

In[41]:= s^ =^ 0; e^ =^ 0; FindRoot@f@xD  0, 8 x, − 2, 200<, StepMonitor  s ++, EvaluationMonitor  8 e ++, Print@e, " ", xD<D Print@"Steps" → s, " Evaluations" → eD 1 − 2. 2 200. 3 197.

4 97. 5 47.

6 22. 7 10. 8 4.

9 1. 10 − 0.

11 0. 12 − 0. 13 0.

14 − 5.32008 × 10 −^6 15 3.22404 × 10 −^11 16 − 9.12497 × 10 −^22

Out[42]= 9 x^ → −^ 9.12497^ ×^10 −^22 = Steps → 14 Evaluations → 16

Numerical method of solving transcendental equations

The basic method for solving transcendental equations of the type F[x] == 0 is Newton's method, in which the function is linearized around the starting point x0,

Flin[x] = F[x0] + F'[x0] (x - x0)

and the root of Flin[x] is found as

x1 = x0 - F[x0]/F'[x0]

Taking x1 as the next starting point, one repeats this procedure iteratively until |x1 - x0| < δ. Below is the procedure code. The first argument of the Module statement is the list of local variables. As we need xList as an output, we keep it global. Also the maximal number of iterations kMax is kept global.

In[12]:= FindRootNewton[Func_, xStart_] := Module[{del, k, x0, x1, FDer},
   FDer[x_] = D[Func[x], x];
   x1 = xStart; x0 = xStart + 1; k = 1; del = 10^-16;
   xList = {x1};
   While[Abs[x1 - x0] > del && k < kMax,
     k++; x0 = x1;
     x1 = x0 - Func[x0]/FDer[x0];
     xList = Append[xList, x1]];
   x1];

FindRootNewton is programmed as a function; the output is the value x1, not a rule. Its application is the following:

In[133]:= F[y_] = y + y^2; kMax = 30;
FindRootNewton[F, 1.]

Out[134]= 0.

For a comparison,

In[23]:= FindRoot[F[x], {x, 1.}]

Out[23]= {x -> 5.42101×10^-20}

Number of iterations and iteration results in FindRootNewton:

In[60]:= Length[xList]
xList

Out[60]= 8

Out[61]= {1., 0.333333, 0.0666667, 0.00392157, 0.000015259, 2.32831×10^-10, 5.42101×10^-20, 0.}

One can see that the convergence becomes very fast when x becomes close to the root.

In[62]:= ListLogPlot[xList]

Out[62]= [log-scale plot of the iterates xList, falling steeply below 10^-15]
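For readers outside Mathematica, the same algorithm is easy to transcribe; a minimal Python sketch of the iteration above (with the derivative supplied analytically instead of computed symbolically, and names of our own choosing):

```python
def find_root_newton(func, dfunc, x_start, k_max=30, delta=1e-16):
    """Newton iteration x -> x - F(x)/F'(x), mirroring FindRootNewton above."""
    x_list = [x_start]
    x1, x0, k = x_start, x_start + 1.0, 1
    while abs(x1 - x0) > delta and k < k_max:
        k += 1
        x0 = x1
        x1 = x0 - func(x0) / dfunc(x0)
        x_list.append(x1)
    return x1, x_list

# F(y) = y + y^2 with roots 0 and -1, as in the example above
root, iterates = find_root_newton(lambda y: y + y * y, lambda y: 1.0 + 2.0 * y, 1.0)
```

Starting from 1.0 the iterates reproduce the xList shown above (1., 0.333333, 0.0666667, ...) and collapse onto the root 0 once the quadratic convergence sets in.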

Here is the plotting procedure visualizing how Newton's method works. kShow is the maximal number of iterations to show.

In[144]:= F[y_] = y + y^2; kMax = 30;
FindRootNewton[F, -0.4]
kShow = 4; ShowHowNewtonWorks[kShow, -1.5, 1]

Out[145]= 0.

Out[147]= [plot of F and the first kShow Newton steps on -1.5 <= x <= 1]

But for the starting value -0.6 and less the procedure converges to another root.

In[148]:= F[y_] = y + y^2; kMax = 30;
FindRootNewton[F, -0.6]
kShow = 4; ShowHowNewtonWorks[kShow, -2, 0.5]

Out[149]= -1.

Out[151]= [plot of F and the Newton steps converging to the root x = -1]

For the function considered above, x/(1 + x^2) and similar, the Newton method diverges for most starting points

In[152]:= F[x_] = 20 x/(1 + x^2); kMax = 3;
FindRootNewton[F, 2.]
kShow = 2; ShowHowNewtonWorks[kShow, -5, 15]

Out[153]= 11.0553

Out[155]= [plot of F and the diverging Newton steps on -5 <= x <= 15]

or even

In[156]:= F[x_] = 4 x/(1 + x^2); kMax = 3;
FindRootNewton[F, 0.6]
kShow = 2; ShowHowNewtonWorks[kShow, -2, 2]

Out[157]= 1.12991

Out[159]= [plot of F and the oscillating Newton steps on -2 <= x <= 2]

x + y + z == 0
x + y + 2 z == 1
2 x + 2 y + 3 z == 2

Here, adding the first and second equations, one obtains 2 x + 2 y + 3 z == 1, which contradicts the third equation. In all these cases the determinant of the system of equations is zero (see below).

Consider a rigid rod of mass M and length L lying on two supports. Support 1 is located at the distance x1 from the left end of the rod and support 2 is located at the distance x2 from the left end of the rod. Let us find the upward reaction forces F1 and F2 acting on the rod from each support. There are two linear equations

F1 + F2 - M g == 0  (* sum of all forces is zero *)
F1 x1 + F2 x2 - M g L/2 == 0  (* sum of all torques (here with respect to the left end) is zero *)

This system of equations can be solved with Mathematica

In[104]:= Solve[{F1 + F2 - M g == 0, F1 x1 + F2 x2 - M g L/2 == 0}, {F1, F2}] // Simplify

Out[104]= {{F1 -> (g M (L - 2 x2))/(2 (x1 - x2)), F2 -> -((g M (L - 2 x1))/(2 (x1 - x2)))}}

or by hand. The human form of this result is

F1 = (L/2 - x2)/(x1 - x2) M g,   F2 = (L/2 - x1)/(x2 - x1) M g.
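These closed-form reactions are easy to sanity-check numerically; a small Python sketch (our own helper, with arbitrary sample numbers) verifying that they restore the force and torque balance:

```python
def rod_reactions(M, g, L, x1, x2):
    """Reaction forces from the human-form result above."""
    F1 = (L / 2 - x2) / (x1 - x2) * M * g
    F2 = (L / 2 - x1) / (x2 - x1) * M * g
    return F1, F2

M, g, L, x1, x2 = 3.0, 9.81, 2.0, 0.3, 1.2
F1, F2 = rod_reactions(M, g, L, x1, x2)
force_balance = F1 + F2 - M * g                      # should vanish
torque_balance = F1 * x1 + F2 * x2 - M * g * L / 2   # should vanish
```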

In the case of three or more supports, the number of unknowns is the number of supports, whereas there are still only two equations. Thus the problem becomes underdetermined. It can be made well determined if one renounces the model of the rigid rod and takes into account its elasticity. In this case, however, the problem becomes much more complicated.

The problem with two supports is very simple, and it does not require Mathematica. Let us find the effective resistance of the so-called bridge consisting of five resistors.

[circuit diagram: a bridge of five resistors between two terminals at voltage V; currents I1 through R1 and I2 through R2 in the upper arm, I3 through R3 and I4 through R4 in the lower arm, I5 through the middle resistor R5, and total current I]

The currents in the nodes are related by the obvious Kirchhoff's equations

I == I1 + I3
I1 == I2 + I5
I3 + I5 == I4

while the equation for the rightmost node, I2 + I4 == I, should be discarded because it follows from the above three equations. Ohm's law for the three paths from left to right has the form

R1 I1 + R2 I2 == V
R3 I3 + R4 I4 == V
R1 I1 + R5 I5 + R4 I4 == V

One can write down more equations of this type, but they follow from the above equations. For the given total voltage V, one finds the 6 currents from the 6 equations and then defines the effective resistance as

R = V/I

Solution of this problem by hand is cumbersome. One can eliminate some currents from the first three equations, but then one has to solve the remaining system of three equations with three unknowns. Mathematica solves this problem as

R = ((R1 + R2) (R3 + R4))/((R1 + R2) + (R3 + R4))

that also can be obtained elementarily.
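The six-equation system can also be solved numerically without any CAS; a hedged Python sketch (the helper solve_linear is ours) using plain Gaussian elimination. For a balanced bridge, R1 R4 == R2 R3, the middle resistor carries no current, and the parallel-combination formula above is recovered:

```python
def solve_linear(A, b):
    """Tiny Gaussian elimination with partial pivoting (no NumPy needed)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Balanced bridge (R1 R4 == R2 R3); unknowns ordered (I, I1, I2, I3, I4, I5)
R1, R2, R3, R4, R5, V = 2.0, 1.0, 2.0, 1.0, 5.0, 1.0
A = [
    [1, -1, 0, -1, 0, 0],   # I == I1 + I3
    [0, 1, -1, 0, 0, -1],   # I1 == I2 + I5
    [0, 0, 0, 1, -1, 1],    # I3 + I5 == I4
    [0, R1, R2, 0, 0, 0],   # R1 I1 + R2 I2 == V
    [0, 0, 0, R3, R4, 0],   # R3 I3 + R4 I4 == V
    [0, R1, 0, 0, R4, R5],  # R1 I1 + R5 I5 + R4 I4 == V
]
b = [0, 0, 0, V, V, V]
I, I1, I2, I3, I4, I5 = solve_linear(A, b)
R_eff = V / I  # equals (R1+R2)(R3+R4)/((R1+R2)+(R3+R4)) = 1.5 here
```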

Systems of linear equations in the matrix form

Systems of linear equations can be written in the matrix form

A.X == B,

where A is the square matrix of the coefficients of the system of equations, X is the vector (List) of unknowns, and B is the vector (List) of right-hand sides. In particular, for the problem of a rod on two supports described by the system of equations

F1 + F2 - M g == 0  (* sum of all forces is zero *)
F1 x1 + F2 x2 - M g L/2 == 0  (* sum of all torques (here with respect to the left end) is zero *)

one has

In[36]:= A = {{1, 1}, {x1, x2}}
B = {M g, M g L/2}
X = {F1, F2}

Out[36]= {{1, 1}, {x1, x2}}

Out[37]= {g M, (g L M)/2}

Out[38]= {F1, F2}

It looks natural to define

In[20]:= B = {{M g}, {M g L/2}}
X = {{F1}, {F2}}

Out[20]= {{g M}, {(g L M)/2}}

Out[21]= {{F1}, {F2}}

but, unfortunately, the current version of Mathematica understands this as Lists with two indices (Lists of Lists) that are inappropriate for us. Please rerun now the previous definitions!

One can see that technically a matrix is a List with two indices, whereas a vector is a List with one index. The equations for the rod are

In[25]:= A.X == B

Out[25]= {F1 + F2, F1 x1 + F2 x2} == {g M, (g L M)/2}

which is the same as the equations written above.

The advantage of the matrix form is that the properties of matrices are very well investigated and many operations can be performed on matrices as a whole rather than on their components. There are functions of matrices, etc. The same pertains to vectors. If the problem is put into the matrix-vector form (that is, in the form of Lists), one says that the problem is vectorized. Vectorized problems are solved faster because there are special, highly efficient algorithms for Lists.

The solution of the rod problem with Mathematica can be obtained as

In[39]:= Solve[A.X == B, X] // Simplify

Out[39]= {{F1 -> (g M (L - 2 x2))/(2 (x1 - x2)), F2 -> -((g M (L - 2 x1))/(2 (x1 - x2)))}}

Also one can solve the matrix equation simply by left-multiplying both sides by the inverse of the matrix A

In[28]:= Inverse[A]

Out[28]= {{x2/(-x1 + x2), -(1/(-x1 + x2))}, {-(x1/(-x1 + x2)), 1/(-x1 + x2)}}

and using

In[31]:= Inverse[A].A == {{1, 0}, {0, 1}} // Simplify

Out[31]= True

or, in other words

In[32]:= Inverse[A].A == IdentityMatrix[2] // Simplify

Out[32]= True

This yields the solution

In[34]:= X = Inverse[A].B // Simplify

Out[34]= {(g M (L - 2 x2))/(2 (x1 - x2)), -((g M (L - 2 x1))/(2 (x1 - x2)))}
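The same Inverse[A].B route can be checked numerically in any language; a minimal Python sketch (the helper inverse2 and the sample numbers are ours) for the 2x2 rod system:

```python
def inverse2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the cofactor formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Rod on two supports in matrix form A.X == B, with sample numbers
M, g, L, x1, x2 = 3.0, 9.81, 2.0, 0.5, 1.5
A = [[1.0, 1.0], [x1, x2]]
B = [M * g, M * g * L / 2]
Ainv = inverse2(A)
# X = Inverse[A].B, computed component by component
F1, F2 = (sum(Ainv[i][j] * B[j] for j in range(2)) for i in range(2))
```

The resulting F1, F2 satisfy both the force and the torque balance, matching the symbolic Out[34].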

Note that the command

In[27]:= A^-1

Out[27]= {{1, 1}, {1/x1, 1/x2}}

does not yield the inverse matrix; it simply inverts each element. All operations on matrices have special names. Similarly, the exponential of a matrix is given by MatrixExp[A] and not by Exp[A] (the latter exponentiates each element).