
Solving quadratic equations with continued fractions


In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is

    ax² + bx + c = 0,

where a ≠ 0.

The quadratic equation in a single unknown x can be solved using the well-known quadratic formula, which can be derived by completing the square. That formula always gives the roots of the quadratic equation, but the solutions are expressed in a form that often involves a quadratic irrational number, an irrational algebraic number that can be evaluated as a decimal fraction only by applying an additional root extraction algorithm.
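
As a point of comparison with the continued fraction technique developed below, the quadratic formula itself is easy to state in code. The following Python sketch is purely illustrative (the function name solve_quadratic is an arbitrary choice, and complex arithmetic via cmath is used so that negative discriminants are handled as well):

import cmath  # complex square root, so negative discriminants are handled too

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 by the quadratic formula (illustrative helper)."""
    if a == 0:
        raise ValueError("a must be nonzero for a quadratic equation")
    d = cmath.sqrt(b * b - 4 * a * c)   # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# Example: x**2 - 2 = 0 has the irrational roots ±sqrt(2) = ±1.41421356...
print(solve_quadratic(1, 0, -2))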

If the roots are real, there is an alternative technique that obtains a rational approximation to one of the roots by manipulating the equation directly. The method works in many cases, and long ago it stimulated further development of the analytical theory of continued fractions.

Simple example


Here is a simple example to illustrate the solution of a quadratic equation using continued fractions. We begin with the equation

    x² = 2

and manipulate it directly. Subtracting one from both sides we obtain

    x² − 1 = 1.

This is easily factored into

    (x + 1)(x − 1) = 1,

from which we obtain

    x − 1 = 1/(x + 1)

and finally

    x = 1 + 1/(1 + x).

Now comes the crucial step. We substitute this expression for x back into itself, recursively, to obtain

    x = 1 + 1/(2 + 1/(1 + x)).

But now we can make the same recursive substitution again, and again, and again, pushing the unknown quantity x as far down and to the right as we please, and obtaining in the limit the infinite simple continued fraction

    x = 1 + 1/(2 + 1/(2 + 1/(2 + ⋯))) = [1; 2, 2, 2, …].
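
The effect of repeating this substitution can be imitated numerically by iterating the map x ↦ 1 + 1/(1 + x), starting from any positive value. A minimal Python sketch (the starting value and the number of iterations are arbitrary choices):

# Iterate the recursive substitution x -> 1 + 1/(1 + x) from the text.
x = 1.0                    # any positive starting value will do
for n in range(1, 11):
    x = 1 + 1 / (1 + x)
    print(n, x)            # the values approach sqrt(2) = 1.41421356...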

By applying the fundamental recurrence formulas we may easily compute the successive convergents of this continued fraction to be 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, ..., where each new denominator is the sum of the numerator and denominator of the preceding convergent, and each new numerator is that new denominator plus the preceding denominator. This sequence of denominators is a particular Lucas sequence known as the Pell numbers.
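
The same convergents can be produced directly from the fundamental recurrence formulas, writing the n-th convergent as hₙ/kₙ and using the partial quotients 1, 2, 2, 2, … of the fraction above. A short Python sketch (the variable names are illustrative, not part of the article's notation):

from fractions import Fraction

# Convergents h_n/k_n of [1; 2, 2, 2, ...] via the fundamental recurrence formulas
#   h_n = a_n*h_{n-1} + h_{n-2},   k_n = a_n*k_{n-1} + k_{n-2}
h_prev, h = 1, 1     # h_{-1} = 1, h_0 = a_0 = 1
k_prev, k = 0, 1     # k_{-1} = 0, k_0 = 1
print(Fraction(h, k))            # 1
for _ in range(6):
    a = 2                        # every later partial quotient equals 2
    h_prev, h = h, a * h + h_prev
    k_prev, k = k, a * k + k_prev
    print(Fraction(h, k))        # 3/2, 7/5, 17/12, 41/29, 99/70, 239/169
# The denominators 1, 2, 5, 12, 29, 70, 169 are the Pell numbers.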

Algebraic explanation


We can gain further insight into this simple example by considering the successive powers of

    ω = √2 − 1.

That sequence of successive powers is given by

    ω² = 3 − 2√2,    ω³ = 5√2 − 7,    ω⁴ = 17 − 12√2,    ω⁵ = 29√2 − 41,

and so forth. Notice how the fractions derived as successive approximants to √2 appear in this geometric progression.
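
These powers can be computed exactly by representing each element a + b√2 as the integer pair (a, b). The following Python sketch (the helper name mul_sqrt2 is a hypothetical choice made for this illustration) prints the pairs, in which the numerators and denominators of the convergents reappear with alternating signs:

def mul_sqrt2(p, q):
    """Product of two numbers a + b*sqrt(2), each given as an integer pair (a, b)."""
    (a, b), (c, d) = p, q
    return (a * c + 2 * b * d, a * d + b * c)

omega = (-1, 1)          # omega = sqrt(2) - 1, i.e. -1 + 1*sqrt(2)
power = (1, 0)           # omega**0 = 1
for n in range(1, 7):
    power = mul_sqrt2(power, omega)
    print(n, power)      # (-1, 1), (3, -2), (-7, 5), (17, -12), (-41, 29), (99, -70)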

Since 0 < ω < 1, the sequence {ωⁿ} clearly tends toward zero, by well-known properties of the positive real numbers. This fact can be used to prove, rigorously, that the convergents discussed in the simple example above do in fact converge to √2, in the limit.
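
Concretely, the n-th convergent hₙ/kₙ listed above satisfies hₙ − kₙ√2 = ±ωⁿ, so its distance from √2 is ωⁿ/kₙ. A brief numerical check in Python (purely illustrative):

import math

omega = math.sqrt(2) - 1
convergents = [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29), (99, 70), (239, 169)]
for n, (h, k) in enumerate(convergents, start=1):
    # |h/k - sqrt(2)| equals omega**n / k, so it shrinks at least as fast as omega**n.
    print(n, omega ** n, abs(h / k - math.sqrt(2)))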

We can also find these numerators and denominators appearing in the successive powers of

    ω⁻¹ = √2 + 1.

The sequence of successive powers {ω⁻ⁿ} does not approach zero; it grows without limit instead. But it can still be used to obtain the convergents in our simple example.

Notice also that the set obtained by forming all the combinations a + b√2, where a and b are integers, is an example of an object known in abstract algebra as a ring, and more specifically as an integral domain. The number ω is a unit in that integral domain. See also algebraic number field.
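
A minimal sketch of this ring in Python (the class name ZSqrt2 is an illustrative choice) confirms that ω is a unit, since multiplying it by √2 + 1 gives the identity:

from dataclasses import dataclass

@dataclass(frozen=True)
class ZSqrt2:
    """An element a + b*sqrt(2) of the integral domain Z[sqrt(2)]."""
    a: int
    b: int

    def __mul__(self, other):
        # (a + b*sqrt(2)) * (c + d*sqrt(2)) = (a*c + 2*b*d) + (a*d + b*c)*sqrt(2)
        return ZSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

omega = ZSqrt2(-1, 1)       # sqrt(2) - 1
omega_inv = ZSqrt2(1, 1)    # sqrt(2) + 1
print(omega * omega_inv)    # ZSqrt2(a=1, b=0), that is, the multiplicative identity 1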

General quadratic equation


Continued fractions are most conveniently applied to solve the general quadratic equation expressed in the form of a monic polynomial

    x² + bx + c = 0,

which can always be obtained by dividing the original equation by its leading coefficient. Starting from this monic equation we see that

    x² + bx = −c
    x + b = −c/x
    x = −b − c/x.

But now we can apply the last equation to itself recursively to obtain

    x = −b − c/(−b − c/(−b − c/(−b − ⋯))).

If this infinite continued fraction converges at all, it must converge to one of the roots of the monic polynomial x² + bx + c = 0. Unfortunately, this particular continued fraction does not converge to a finite number in every case. We can easily see that this is so by considering the quadratic formula and a monic polynomial with real coefficients. If the discriminant of such a polynomial is negative, then both roots of the quadratic equation have nonzero imaginary parts. In particular, if b and c are real numbers and b² − 4c < 0, all the convergents of this continued fraction "solution" will be real numbers, and they cannot possibly converge to a root of the form u + iv (where v ≠ 0), which does not lie on the real number line.
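
This behaviour is easy to observe by computing successive truncations of the fraction, that is, by iterating x ↦ −b − c/x starting from the first convergent −b. The Python sketch below is illustrative only (the function name and the two example polynomials are arbitrary choices); the truncations settle down for x² − 3x + 2 = 0, whose discriminant is positive, but wander for x² + x + 2 = 0, whose discriminant is negative:

def cf_iterates(b, c, n):
    """First n truncations of x = -b - c/(-b - c/(-b - ...)) for x**2 + b*x + c = 0."""
    x = -b                        # the first convergent is just -b
    values = [x]
    for _ in range(n - 1):
        x = -b - c / x            # append one more level to the fraction
        values.append(x)
    return values

# Positive discriminant: roots 1 and 2; the truncations approach the larger root 2.
print(cf_iterates(-3, 2, 8))
# Negative discriminant: no real roots; the truncations oscillate without settling.
print(cf_iterates(1, 2, 8))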

General theorem


By applying a result obtained by Euler in 1748 it can be shown that the continued fraction solution to the general monic quadratic equation with real coefficients

    x² + bx + c = 0

given by

    x = −b − c/(−b − c/(−b − c/(−b − ⋯)))

either converges or diverges depending on both the coefficient b and the value of the discriminant, b² − 4c.

If b = 0 the general continued fraction solution is totally divergent; the convergents alternate between 0 and ∞. If b ≠ 0 we distinguish three cases.

  1. If the discriminant is negative, the fraction diverges by oscillation, which means that its convergents wander around in a regular or even chaotic fashion, never approaching a finite limit.
  2. If the discriminant is zero the fraction converges to the single root of multiplicity two.
  3. If the discriminant is positive the equation has two real roots, and the continued fraction converges to the larger (in absolute value) of these. The rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges.
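
The third case can be illustrated numerically; in the Python sketch below (the polynomials are arbitrary examples chosen for this illustration) both equations have 10 as their larger root, but the one whose roots are 10 and 9, a ratio close to unity, converges much more slowly than the one whose roots are 10 and 1:

def cf_value(b, c, n):
    """Value of the n-level truncation of x = -b - c/(-b - c/(-b - ...))."""
    x = -b
    for _ in range(n - 1):
        x = -b - c / x
    return x

# Roots 10 and 1 (x**2 - 11x + 10 = 0): the ratio is 10, convergence is rapid.
print([round(cf_value(-11, 10, n), 6) for n in (2, 4, 8)])
# Roots 10 and 9 (x**2 - 19x + 90 = 0): the ratio is 10/9, convergence is slow.
print([round(cf_value(-19, 90, n), 6) for n in (2, 4, 8)])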

When the monic quadratic equation with real coefficients is of the form x² = c, the general solution described above is useless because division by zero is not well defined. As long as c is positive, though, it is always possible to transform the equation by subtracting a perfect square from both sides and proceeding along the lines illustrated with √2 above. In symbols, if

    x² = c    (c > 0)

just choose some positive real number p such that

    p² < c.

Then by direct manipulation we obtain

    x = p + (c − p²)/(p + x)
      = p + (c − p²)/(2p + (c − p²)/(2p + (c − p²)/(2p + ⋯)))

and this transformed continued fraction must converge because all the partial numerators and partial denominators are positive real numbers.
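
For example, to approximate √13 one might take p = 3, so that c − p² = 4. The Python sketch below is illustrative only (the function name sqrt_cf and the chosen numbers are not part of the article):

def sqrt_cf(c, p, n):
    """Truncate x = p + (c - p*p)/(2p + (c - p*p)/(2p + ...)) after n levels."""
    r = c - p * p              # the repeated partial numerator, positive when p*p < c
    x = 2 * p                  # innermost denominator of the truncated fraction
    for _ in range(n):
        x = 2 * p + r / x
    return p + r / x

print(sqrt_cf(13, 3, 20))      # approximately 3.6055512754..., an approximation to sqrt(13)
print(13 ** 0.5)               # for comparison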

Complex coefficients


By the fundamental theorem of algebra, if the monic polynomial equation x² + bx + c = 0 has complex coefficients, it must have two (not necessarily distinct) complex roots. Unfortunately, the discriminant b² − 4c is not as useful in this situation, because it may be a complex number. Still, a modified version of the general theorem can be proved.

The continued fraction solution to the general monic quadratic equation with complex coefficients

    x² + bx + c = 0

given by

    x = −b − c/(−b − c/(−b − c/(−b − ⋯)))

converges or not depending on the value of the discriminant, b² − 4c, and on the relative magnitude of its two roots.

Denoting the two roots by r₁ and r₂ we distinguish three cases.

  1. If the discriminant is zero the fraction converges to the single root of multiplicity two.
  2. If the discriminant is not zero, and |r₁| ≠ |r₂|, the continued fraction converges to the root of maximum modulus (i.e., to the root with the greater absolute value).
  3. If the discriminant is not zero, and |r₁| = |r₂|, the continued fraction diverges by oscillation.

In case 2, the rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges.
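
Case 2 can be checked numerically with Python's built-in complex arithmetic. In the illustrative sketch below the roots are chosen to be 2i (modulus 2) and 1 (modulus 1), so that b = −(1 + 2i) and c = 2i; the truncations approach 2i, the root of larger modulus:

def cf_value(b, c, n):
    """Value of the n-level truncation of x = -b - c/(-b - c/(-b - ...)), complex allowed."""
    x = -b
    for _ in range(n - 1):
        x = -b - c / x
    return x

b, c = -(1 + 2j), 2j               # x**2 + b*x + c = 0 has roots 2j and 1
for n in (5, 10, 20, 40):
    print(n, cf_value(b, c, n))    # tends to 2j as n grows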

This general solution of monic quadratic equations with complex coefficients is usually not very useful for obtaining rational approximations to the roots, because the criteria are circular (that is, the relative magnitudes of the two roots must be known before we can conclude that the fraction converges, in most cases). But this solution does find useful applications in the further analysis of the convergence problem for continued fractions with complex elements.


References

  • H. S. Wall, Analytic Theory of Continued Fractions, D. Van Nostrand Company, Inc., 1948. ISBN 0-8284-0207-8.