The Top Ten Algorithms of the 20th Century

Algos is the Greek word for pain. Algor is Latin, to be cold. Neither is
the root for algorithm, which stems instead from al-
Khwarizmi, the name of the ninth-century Arab scholar whose book al-jabr
wa’l muqabalah devolved into today’s high school
algebra textbooks. Al-Khwarizmi stressed the importance of methodical
procedures for solving problems. Were he around today,
he’d no doubt be impressed by the advances in his eponymous approach.
Some of the very best algorithms of the computer age are highlighted
in the January/February 2000 issue of Computing in Science &
Engineering, a joint publication of the American Institute of Physics
and the IEEE Computer Society. Guest editors Jack Dongarra of the
University of Tennessee and Oak Ridge National Laboratory and Francis
Sullivan of the Center for Computing Sciences at the Institute for
Defense Analyses put together a list they call the “Top Ten Algorithms
of the Century.”
“We tried to assemble the 10 algorithms with the greatest influence on
the development and practice of science and engineering in the 20th
century,” Dongarra and Sullivan write. As with any top-10 list, their
selections—and non-selections—are bound to be controversial, they
acknowledge. When it comes to picking the algorithmic best, there
seems to be no best algorithm.
Without further ado, here’s the CiSE top-10 list, in chronological
order. (Dates and names associated with the algorithms should be read as
first-order approximations. Most algorithms take shape over time,
with many contributors.)
1.
1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los
Alamos Scientific Laboratory, cook up the Metropolis
algorithm, also known as the Monte Carlo method.
The Metropolis algorithm aims to obtain approximate solutions to
numerical problems with unmanageably many degrees of freedom
and to combinatorial problems of factorial size, by mimicking a random
process. Given the digital computer’s reputation for
deterministic calculation, it’s fitting that one of its earliest
applications was the generation of random numbers.
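To make the idea concrete, here is a minimal Python sketch (NumPy
assumed) of a random-walk Metropolis sampler for a one-dimensional
target. The function name and the standard-normal target are
illustrative choices only; the 1953 algorithm sampled Boltzmann
distributions of interacting particles.

```python
import numpy as np

def metropolis(log_p, x0, steps=10000, scale=1.0, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log-density log_p.
    (Illustrative sketch, not the original 1953 formulation.)"""
    rng = np.random.default_rng(seed)
    x = x0
    samples = []
    for _ in range(steps):
        proposal = x + scale * rng.normal()   # symmetric random-walk proposal
        # Accept with probability min(1, p(proposal) / p(x)).
        if np.log(rng.uniform()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

# Sample a standard normal: sample mean should be near 0, variance near 1.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0)
print(samples.mean(), samples.var())
```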
2.
1947: George Dantzig, at the RAND Corporation, creates the simplex
method for linear programming.
In terms of widespread application, Dantzig’s algorithm is one of the
most successful of all time: Linear
programming dominates the world of industry, where economic survival
depends on the ability to optimize
within budgetary and other constraints. (Of course, the “real”
problems of industry are often nonlinear; the use
of linear programming is sometimes dictated by the computational
budget.) The simplex method is an elegant
way of arriving at optimal answers. Although theoretically susceptible
to exponential delays, the algorithm
in practice is highly efficient—which in itself says something
interesting about the nature of computation.
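For illustration, here is a compact Python sketch (NumPy assumed) of
the tableau form of the simplex method for a small problem of the form
maximize c^T x subject to Ax <= b, x >= 0, with b >= 0. The function
and the toy data are made up for illustration; a real solver adds
anti-cycling rules, sparse data structures, and presolve.

```python
import numpy as np

def simplex(c, A, b):
    """Maximize c @ x subject to A @ x <= b, x >= 0 (assumes b >= 0).
    Toy tableau implementation; no anti-cycling or sparsity."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))       # tableau with slack variables
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c                         # objective row (maximization)
    basis = list(range(n, n + m))          # slacks form the initial basis
    while True:
        col = int(np.argmin(T[-1, :-1]))
        if T[-1, col] >= -1e-12:           # no negative reduced cost: optimal
            break
        ratios = np.full(m, np.inf)
        positive = T[:m, col] > 1e-12
        ratios[positive] = T[:m, -1][positive] / T[:m, col][positive]
        row = int(np.argmin(ratios))       # ratio test picks the leaving row
        if not np.isfinite(ratios[row]):
            raise ValueError("problem is unbounded")
        T[row] /= T[row, col]              # pivot
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
        basis[row] = col
    x = np.zeros(n + m)
    for i, var in enumerate(basis):
        x[var] = T[i, -1]
    return x[:n], T[-1, -1]

# Toy problem: maximize 3x + 5y with x <= 4, 2y <= 12, 3x + 2y <= 18.
x, value = simplex(np.array([3.0, 5.0]),
                   np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                   np.array([4.0, 12.0, 18.0]))
print(x, value)    # expected optimum: x = [2, 6], value = 36
```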
3.
1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from
the Institute for Numerical Analysis
at the National Bureau of Standards, initiate the development of
Krylov subspace iteration methods.
These algorithms address the seemingly simple task of solving
equations of the form Ax = b. The catch,
of course, is that A is a huge n x n matrix, so that the algebraic
answer x = b/A is not so easy to compute.
(Indeed, matrix “division” is not a particularly useful concept.)
Iterative methods—such as solving equations of
the form Kx_{i+1} = Kx_i + b – Ax_i with a simpler matrix K that’s
ideally “close” to A—lead to the study of Krylov subspaces. Named for
the Russian mathematician Nikolai Krylov, Krylov subspaces are
spanned by powers of a matrix applied to an initial “remainder”
vector r_0 = b – Ax_0. Lanczos found a nifty way to generate an
orthogonal basis for such a subspace when the matrix is symmetric.
Hestenes and Stiefel proposed an even niftier method, known as the
conjugate gradient method, for systems that are both symmetric and
positive definite. Over the last 50 years, numerous researchers have
improved and extended these algorithms.
The current suite includes techniques for non-symmetric systems, with
acronyms like GMRES and Bi-CGSTAB. (GMRES and
Bi-CGSTAB premiered in SIAM Journal on Scientific and Statistical
Computing, in 1986 and 1992,
respectively.)
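For the symmetric positive definite case treated by Hestenes and
Stiefel, the conjugate gradient iteration can be sketched in a few
lines of Python (NumPy assumed). This is an illustrative textbook
version without preconditioning, not a production solver.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive definite A (conjugate gradients)."""
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    r = b - A @ x          # residual r_0 = b - A x_0 generates the Krylov subspace
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)          # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p      # new direction, A-conjugate to the old ones
        rs_old = rs_new
    return x

# Small SPD test system; exact solution is [1/11, 7/11].
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))
```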
4.
1951: Alston Householder of Oak Ridge National Laboratory formalizes the
decompositional approach
to matrix computations.
The ability to factor matrices into triangular, diagonal, orthogonal,
and other special forms has turned
out to be extremely useful. The decompositional approach has enabled
software developers to produce
flexible and efficient matrix packages. It also facilitates the analysis
of rounding errors, one of the big
bugbears of numerical linear algebra. (In 1961, James Wilkinson of the
National Physical Laboratory in
London published a seminal paper in the Journal of the ACM, titled “
Error Analysis of Direct Methods
of Matrix Inversion,” based on the LU decomposition of a matrix as a
product of lower and upper
triangular factors.)
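A short Python example (SciPy assumed) illustrates the practical
payoff of the decompositional approach: factor the matrix once, then
reuse the factors cheaply for each new right-hand side. The matrix and
vectors below are made up for illustration.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor once (A = P L U with partial pivoting), then reuse the factorization
# for several right-hand sides -- the essence of the decompositional approach.
A = np.array([[4.0, 3.0, 2.0],
              [2.0, 5.0, 1.0],
              [1.0, 2.0, 6.0]])
lu, piv = lu_factor(A)            # O(n^3) work, done once

for b in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])):
    x = lu_solve((lu, piv), b)    # O(n^2) per right-hand side
    print(x, np.allclose(A @ x, b))
```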
5.
1957: John Backus leads a team at IBM in developing the Fortran
optimizing compiler.
The creation of Fortran may rank as the single most important event in
the history of computer programming: Finally, scientists (and others)
could tell the computer what they wanted it to do, without having to
descend into the netherworld of machine code.
Although modest by modern compiler standards—Fortran I consisted of a
mere 23,500 assembly-language instructions—the early
compiler was nonetheless capable of surprisingly sophisticated
computations. As Backus himself recalls in a recent history of
Fortran I, II, and III, published in 1998 in the IEEE Annals of the
History of Computing, the compiler “produced code of such efficiency
that its output would startle the programmers who studied it.”
6.
1959–61: J.G.F. Francis of Ferranti Ltd., London, finds a stable method
for computing eigenvalues, known as the QR algorithm.
Eigenvalues are arguably the most important numbers associated with
matrices—and they can be the trickiest to compute. It’s
relatively easy to transform a square matrix into a matrix that’s “
almost” upper triangular, meaning one with a single extra set of
nonzero entries just below the main diagonal. But chipping away those
final nonzeros, without launching an avalanche of error, is nontrivial.
The QR algorithm is just the ticket. Based on the QR decomposition,
which writes A as the product of an orthogonal matrix Q and an upper
triangular matrix R, this approach iteratively changes A_i = QR into
A_{i+1} = RQ, with a few bells and whistles for accelerating convergence to
upper triangular form. By the mid-1960s, the QR algorithm had turned
once-formidable eigenvalue problems into routine calculations.
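A bare-bones Python sketch of the basic iteration (NumPy assumed) is
shown below. It omits the Hessenberg reduction, shifts, and deflation
that practical implementations rely on, and it uses a small symmetric
matrix so the unshifted iteration converges.

```python
import numpy as np

def qr_eigenvalues(A, iterations=500):
    """Unshifted QR iteration: repeatedly factor A_i = QR and form A_{i+1} = RQ.
    (Production codes first reduce A to Hessenberg form and use shifts.)"""
    Ai = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ai)
        Ai = R @ Q                # similar to A_i, so eigenvalues are preserved
    return np.sort(np.diag(Ai))   # diagonal of the (nearly) triangular limit

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
print(qr_eigenvalues(A))          # compare with np.sort(np.linalg.eigvals(A))
```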
7.
1962: Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort.
Putting N things in numerical or alphabetical order is mind-numbingly
mundane. The intellectual challenge lies in devising ways of doing so
quickly. Hoare’s algorithm uses the age-old recursive strategy of
divide and conquer to solve the problem: Pick one element as a “pivot,”
separate the rest into piles of “big” and “small” elements (as
compared with the pivot), and then repeat this procedure on each pile.
Although it’s possible to get stuck doing all N(N – 1)/2 comparisons
(especially if you use as your pivot the first item on a list that’s
already sorted!), Quicksort runs on average with O(N log N) efficiency.
Its elegant simplicity has made Quicksort the poster child of
computational complexity.
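The divide-and-conquer idea reads almost like the prose description in
Python. The sketch below builds new lists rather than partitioning in
place, unlike Hoare’s original formulation, and picks a random pivot to
sidestep the sorted-input worst case.

```python
import random

def quicksort(items):
    """Divide and conquer: pick a pivot, partition, recurse on each pile."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)          # random pivot avoids the sorted-input trap
    small = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    big   = [x for x in items if x > pivot]
    return quicksort(small) + equal + quicksort(big)

print(quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]
```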
8.
1965: James Cooley of the IBM T.J. Watson Research Center and John Tukey
of Princeton University and AT&T Bell Laboratories unveil the fast Fourier
transform.
Easily the most far-reaching algorithm in applied mathematics, the FFT
revolutionized signal processing. The underlying idea goes back to
Gauss (who needed to calculate orbits of asteroids), but it was the
Cooley–Tukey paper that made it clear how easily Fourier transforms
can be computed. Like Quicksort, the FFT relies on a divide-and-conquer
strategy to reduce an ostensibly O(N^2) chore to an O(N log N) frolic.
But unlike Quicksort, the implementation is (at first sight)
nonintuitive and less than straightforward. This in itself gave
computer science an impetus to investigate the inherent complexity of
computational problems and algorithms.
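A recursive radix-2 Cooley–Tukey sketch in Python makes the
divide-and-conquer structure visible. It assumes the input length is a
power of two and is meant only to illustrate the idea; library FFTs
are far more elaborate.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                    # divide: transform the even-indexed half
    odd = fft(x[1::2])                     # ...and the odd-indexed half
    out = [0j] * n
    for k in range(n // 2):                # conquer: combine with twiddle factors
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

print(fft([1, 1, 1, 1, 0, 0, 0, 0]))       # compare with a library FFT
```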
9.
1977: Helaman Ferguson and Rodney Forcade of Brigham Young University
advance an integer relation detection algorithm.
The problem is an old one: Given a bunch of real numbers, say x_1, x_2,
. . . , x_n, are there integers a_1, a_2, . . . , a_n (not all 0) for
which a_1x_1 + a_2x_2 + . . . + a_nx_n = 0? For n = 2, the venerable
Euclidean algorithm does the job, computing terms in the
continued-fraction expansion of x_1/x_2. If x_1/x_2 is rational, the
expansion terminates and, with proper unraveling, gives the “smallest”
integers a_1 and a_2.
If the Euclidean algorithm doesn’t terminate—or if you simply get
tired of computing it—then the unraveling procedure at least provides
lower bounds on the size of the smallest integer relation. Ferguson
and Forcade’s generalization, although much more difficult to implement
(and to understand), is also more powerful. Their detection algorithm,
for example, has been used to find the precise coefficients of the
polynomials satisfied by the third and fourth bifurcation points,
B_3 = 3.544090 and B_4 = 3.564407, of the logistic map. (The latter
polynomial is of degree 120; its largest coefficient is 257^30.) It has
also proved useful in simplifying calculations with Feynman diagrams
in quantum field theory.
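The n = 2 case described above is easy to sketch in Python using exact
rational arithmetic. The function below only mirrors the
Euclidean/continued-fraction argument; it does not attempt the
Ferguson–Forcade (or later PSLQ) generalization to larger n.

```python
from fractions import Fraction

def continued_fraction(x, terms=12):
    """Continued-fraction expansion of x > 0 (the Euclidean algorithm in disguise)."""
    cf = []
    for _ in range(terms):
        a = int(x)            # integer part
        cf.append(a)
        frac = x - a
        if frac == 0:         # terminates exactly only when x is rational
            break
        x = 1 / frac
    return cf

# For a rational ratio x_1/x_2 the expansion terminates, revealing the relation
# a_1*x_1 + a_2*x_2 = 0 with small integers; here x_1 = 355, x_2 = 113.
x1, x2 = Fraction(355), Fraction(113)
print(continued_fraction(x1 / x2))   # [3, 7, 16]: 355/113 is exact, so 113*x1 - 355*x2 = 0
```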
10.
1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent
the fast multipole algorithm.
This algorithm overcomes one of the biggest headaches of N-body
simulations: the fact that accurate calculations of the motions of N
particles interacting via gravitational or electrostatic forces (think
stars in a galaxy, or atoms in a protein) would seem to require O(N^2)
computations—one for each pair of particles. The fast multipole
algorithm gets by with O(N) computations. It does so by using
multipole expansions (net charge or mass, dipole moment, quadrupole, and
so forth) to approximate the effects of a distant group of particles on
a local group. A hierarchical decomposition of space is used to
define ever-larger groups as distances increase.
One of the distinct advantages of the fast multipole algorithm is that
it comes equipped with rigorous error estimates, a feature that many
methods lack.
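The full hierarchical algorithm is well beyond a short sketch, but the
central trick, summarizing a distant cluster of particles by a few
moments of its distribution, can be illustrated in one dimension with
Python (NumPy assumed). Every name and number below is invented for
illustration; this is the expansion idea only, not the fast multipole
algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# A cluster of sources far from the evaluation points.
y = rng.uniform(0.0, 1.0, size=1000)       # source positions
m = rng.uniform(0.5, 1.5, size=1000)       # source "masses"
x = np.linspace(10.0, 12.0, 5)             # distant targets (x >> y)

# Direct sum: one term per source-target pair.
direct = np.array([np.sum(m / (xi - y)) for xi in x])

# Multipole-style expansion about the cluster center c:
#   1/(x - y_j) ~ 1/(x - c) + (y_j - c)/(x - c)^2 + (y_j - c)^2/(x - c)^3 + ...
c = y.mean()
M0 = m.sum()                  # monopole: total mass
M1 = np.sum(m * (y - c))      # dipole moment about c
M2 = np.sum(m * (y - c)**2)   # quadrupole-like term
approx = M0 / (x - c) + M1 / (x - c)**2 + M2 / (x - c)**3

# Relative error is tiny because neglected terms fall off like (size/distance)^3.
print(np.max(np.abs(approx - direct) / np.abs(direct)))
```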
What new insights and algorithms will the 21st century bring? The
complete answer obviously won’t be known for another
hundred years. One thing seems certain, however. As Sullivan writes in
the introduction to the top-10 list, “The new century is not going to
be very restful for us, but it is not going to be dull either!”