Genetic Algorithms and Evolutionary Computation
by Adam Marczyk
Copyright © 2004
[Posted: April 23, 2004]

Introduction
What is a genetic algorithm?
A brief history of genetic algorithms
What are the strengths of genetic algorithms?
What are the limitations of genetic algorithms?
Some specific examples of genetic algorithms
Creationist arguments
Conclusion
References and resources

Introduction
Creationists occasionally charge that evolution is useless as a scientific theory because it produces no practical benefits and has no relevance to daily life. However, the evidence of biology alone shows that this claim is untrue. There are numerous natural phenomena for which evolution gives us a sound theoretical underpinning. To name just one, the observed development of resistance - to insecticides in crop pests, to antibiotics in bacteria, to chemotherapy in cancer cells, and to anti-retroviral drugs in viruses such as HIV - is a straightforward consequence of the laws of mutation and selection, and understanding these principles has helped us to craft strategies for dealing with these harmful organisms. The evolutionary postulate of common descent has aided the development of new medical drugs and techniques by giving researchers a good idea of which organisms they should experiment on to obtain results that are most likely to be relevant to humans. Finally, the principle of selective breeding has been used to great effect by humans to create customized organisms unlike anything found in nature for their own benefit. The canonical example, of course, is the many varieties of domesticated dogs (breeds as diverse as bulldogs, chihuahuas and dachshunds have been produced from wolves in only a few thousand years), but less well-known examples include cultivated maize (very different from its wild relatives, none of which have the familiar "ears" of human-grown corn), goldfish (like dogs, we have bred varieties that look dramatically different from the wild type), and dairy cows (with immense udders far larger than would be required just for nourishing offspring).
Critics might charge that creationists can explain these things without recourse to evolution. For example, creationists often explain the development of resistance to antibiotic agents in bacteria, or the changes wrought in domesticated animals by artificial selection, by presuming that God decided to create organisms in fixed groups, called "kinds" or baramin. Though natural microevolution or human-guided artificial selection can bring about different varieties within the originally created "dog-kind," or "cow-kind," or "bacteria-kind" (!), no amount of time or genetic change can transform one "kind" into another. However, exactly how the creationists determine what a "kind" is, or what mechanism prevents living things from evolving beyond its boundaries, is never explained.
But in the last few decades, the continuing advance of modern technology has brought about something new. Evolution is now producing practical benefits in a very different field, and this time, the creationists cannot claim that their explanation fits the facts just as well. This field is computer science, and the benefits come from a programming strategy called genetic algorithms. This essay will explain what genetic algorithms are and will show how they are relevant to the evolution/creationism debate.

What is a genetic algorithm?
Methods of representation
Methods of selection
Methods of change
Other problem-solving techniques
Concisely stated, a genetic algorithm (or GA for short) is a programming technique that mimics biological evolution as a problem-solving strategy. Given a specific problem to solve, the input to the GA is a set of potential solutions to that problem, encoded in some fashion, and a metric called a fitness function that allows each candidate to be quantitatively evaluated. These candidates may be solutions already known to work, with the aim of the GA being to improve them, but more often they are generated at random.
The GA then evaluates each candidate according to the fitness function. In a pool of randomly generated candidates, of course, most will not work at all, and these will be deleted. However, purely by chance, a few may hold promise - they may show activity, even if only weak and imperfect activity, toward solving the problem.
These promising candidates are kept and allowed to reproduce. Multiple copies are made of them, but the copies are not perfect; random changes are introduced during the copying process. These digital offspring then go on to the next generation, forming a new pool of candidate solutions, and are subjected to a second round of fitness evaluation. Those candidate solutions which were worsened, or made no better, by the changes to their code are again deleted; but again, purely by chance, the random variations introduced into the population may have improved some individuals, making them into better, more complete or more efficient solutions to the problem at hand. Again these winning individuals are selected and copied over into the next generation with random changes, and the process repeats. The expectation is that the average fitness of the population will increase each round, and so by repeating this process for hundreds or thousands of rounds, very good solutions to the problem can be discovered.
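To make this cycle concrete, here is a minimal sketch of the process in Python. Everything about it is an illustrative assumption rather than part of any published GA: candidates are fixed-length bit strings, the fitness function simply counts 1s, and selection keeps the better half of each generation.

```python
import random

GENOME_LENGTH = 20    # bits per candidate solution
POP_SIZE = 100        # candidates per generation
MUTATION_RATE = 0.01  # chance of flipping each bit during copying
GENERATIONS = 200

def fitness(candidate):
    # Placeholder problem: count the 1s. A real GA would plug in a
    # domain-specific evaluation here.
    return sum(candidate)

def mutate(candidate):
    # Imperfect copying: each bit may flip with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in candidate]

# Start from a pool of randomly generated candidates.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Evaluate every candidate and delete the less promising half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Each survivor leaves two imperfect copies in the next generation.
    population = [mutate(parent) for parent in survivors for _ in (0, 1)]

print(max(map(fitness, population)))
```

Even this stripped-down version exhibits the behavior described above: average fitness climbs generation by generation as selection retains the lucky improvements that mutation happens to produce.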
As astonishing and counterintuitive as it may seem to some, genetic algorithms have proven to be an enormously powerful and successful problem-solving strategy, dramatically demonstrating the power of evolutionary principles. Genetic algorithms have been used in a wide variety of fields to evolve solutions to problems as difficult as or more difficult than those faced by human designers. Moreover, the solutions they come up with are often more efficient, more elegant, or more complex than anything comparable a human engineer would produce. In some cases, genetic algorithms have come up with solutions that baffle the programmers who wrote the algorithms in the first place!
Methods of representation
Before a genetic algorithm can be put to work on any problem, a method is needed to encode potential solutions to that problem in a form that a computer can process. One common approach is to encode solutions as binary strings: sequences of 1's and 0's, where the digit at each position represents the value of some aspect of the solution. Another, similar approach is to encode solutions as arrays of integers or decimal numbers, with each position again representing some particular aspect of the solution. This approach allows for greater precision and complexity than the comparatively restricted method of using binary numbers only and often "is intuitively closer to the problem space" (Fleming and Purshouse 2002, p. 1228).
This technique was used, for example, in the work of Steffen Schulze-Kremer, who wrote a genetic algorithm to predict the three-dimensional structure of a protein based on the sequence of amino acids that go into it (Mitchell 1996, p. 62). Schulze-Kremer's GA used real-valued numbers to represent the so-called "torsion angles" between the peptide bonds that connect amino acids. (A protein is made up of a sequence of basic building blocks called amino acids, which are joined together like the links in a chain. Once all the amino acids are linked, the protein folds up into a complex three-dimensional shape based on which amino acids attract each other and which ones repel each other. The shape of a protein determines its function.) Genetic algorithms for training neural networks often use this method of encoding also.
A third approach is to represent individuals in a GA as strings of letters, where each letter again stands for a specific aspect of the solution. One example of this technique is Hiroaki Kitano's "grammatical encoding" approach, where a GA was put to the task of evolving a simple set of rules called a context-free grammar that was in turn used to generate neural networks for a variety of problems (Mitchell 1996, p. 74).
The virtue of all three of these methods is that they make it easy to define operators that cause the random changes in the selected candidates: flip a 0 to a 1 or vice versa, add or subtract from the value of a number by a randomly chosen amount, or change one letter to another. (See the section on Methods of change for more detail about the genetic operators.) Another strategy, developed principally by John Koza of Stanford University and called genetic programming, represents programs as branching data structures called trees (Koza et al. 2003, p. 35). In this approach, random changes can be brought about by changing the operator or altering the value at a given node in the tree, or replacing one subtree with another.
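As an illustration of how simple these operators can be - a sketch, not code from any of the cited systems - here is one plausible form of each for the three encodings just described:

```python
import random
import string

def mutate_bits(bits, rate=0.05):
    # Flip a 0 to a 1 or vice versa at randomly chosen positions.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def mutate_reals(values, rate=0.05, step=0.1):
    # Perturb a number up or down by a small random amount.
    return [v + random.uniform(-step, step) if random.random() < rate else v
            for v in values]

def mutate_letters(letters, rate=0.05):
    # Replace a letter with another chosen at random.
    return [random.choice(string.ascii_lowercase) if random.random() < rate
            else c for c in letters]
```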

Figure 1: Three simple program trees of the kind normally used in genetic programming. The mathematical expression that each one represents is given underneath.
It is important to note that evolutionary algorithms do not need to represent candidate solutions as data strings of fixed length. Some do represent them in this way, but others do not; for example, Kitano's grammatical encoding discussed above can be efficiently scaled to create large and complex neural networks, and Koza's genetic programming trees can grow arbitrarily large as necessary to solve whatever problem they are applied to.
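For tree-based genetic programming, a toy sketch of the idea might represent expressions as nested lists and mutate them by swapping in a freshly generated subtree. The node types, depths and probabilities here are invented for illustration and are far simpler than Koza's actual systems:

```python
import random

def random_tree(depth=3):
    # A program tree is either a constant (leaf) or an operator node
    # with two child subtrees.
    if depth <= 0 or random.random() < 0.3:
        return random.uniform(-1, 1)
    return [random.choice(['+', '-', '*']),
            random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree):
    # Recursively compute the value the tree represents.
    if not isinstance(tree, list):
        return tree
    op, left, right = tree
    a, b = evaluate(left), evaluate(right)
    return a + b if op == '+' else a - b if op == '-' else a * b

def mutate_subtree(tree, depth=3):
    # Replace a randomly chosen subtree with a freshly generated one.
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return [op, mutate_subtree(left, depth - 1), right]
    return [op, left, mutate_subtree(right, depth - 1)]

program = random_tree()
print(evaluate(program), evaluate(mutate_subtree(program)))
```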
Methods of selection
There are many different techniques which a genetic algorithm can use to select the individuals to be copied over into the next generation, but listed below are some of the most common methods. Some of these methods are mutually exclusive, but others can be and often are used in combination; code sketches of two of them follow the list.
Elitist selection: The most fit members of each generation are guaranteed to be selected. (Most GAs do not use pure elitism, but instead use a modified form where the single best, or a few of the best, individuals from each generation are copied into the next generation just in case nothing better turns up.)
Fitness-proportionate selection: More fit individuals are more likely, but not certain, to be selected.
Roulette-wheel selection: A form of fitness-proportionate selection in which the chance of an individual's being selected is proportional to its share of the population's total fitness. (Conceptually, this can be represented as a game of roulette - each individual gets a slice of the wheel, but more fit ones get larger slices than less fit ones. The wheel is then spun, and whichever individual "owns" the section on which it lands each time is chosen.)
Scaling selection: As the average fitness of the population increases, the strength of the selective pressure also increases and the fitness function becomes more discriminating. This method can be helpful in making the best selection later on when all individuals have relatively high fitness and only small differences in fitness distinguish one from another.
Tournament selection: Subgroups of individuals are chosen from the larger population, and members of each subgroup compete against each other. Only one individual from each subgroup is chosen to reproduce.
Rank selection: Each individual in the population is assigned a numerical rank based on fitness, and selection is based on this ranking rather than absolute differences in fitness. The advantage of this method is that it can prevent very fit individuals from gaining dominance early at the expense of less fit ones, which would reduce the population's genetic diversity and might hinder attempts to find an acceptable solution.
Generational selection: The offspring of the individuals selected from each generation become the entire next generation. No individuals are retained between generations.
Steady-state selection: The offspring of the individuals selected from each generation go back into the pre-existing gene pool, replacing some of the less fit members of the previous generation. Some individuals are retained between generations.
Hierarchical selection: Individuals go through multiple rounds of selection each generation. Lower-level evaluations are faster and less discriminating, while those that survive to higher levels are evaluated more rigorously. The advantage of this method is that it reduces overall computation time by using faster, less selective evaluation to weed out the majority of individuals that show little or no promise, and only subjecting those who survive this initial test to more rigorous and more computationally expensive fitness evaluation.
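Here are the promised sketches of two of these schemes, roulette-wheel and tournament selection. Both assume non-negative fitness values and omit the refinements a production GA would include:

```python
import random

def roulette_select(population, fitness):
    # Each individual gets a slice of the wheel proportional to its
    # fitness; the wheel is spun and the owner of the landing spot wins.
    scores = [fitness(ind) for ind in population]
    spin = random.uniform(0, sum(scores))
    running = 0.0
    for ind, score in zip(population, scores):
        running += score
        if running >= spin:
            return ind
    return population[-1]  # guard against floating-point round-off

def tournament_select(population, fitness, size=3):
    # A random subgroup competes; only its fittest member reproduces.
    contestants = random.sample(population, size)
    return max(contestants, key=fitness)
```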
Methods of change
Once selection has chosen fit individuals, they must be randomly altered in hopes of improving their fitness for the next generation. There are two basic strategies to accomplish this. The first and simplest is called mutation. Just as mutation in living things changes one gene to another, so mutation in a genetic algorithm causes small alterations at single points in an individual's code.
The second method is called crossover, and entails choosing two individuals to swap segments of their code, producing artificial "offspring" that are combinations of their parents. This process is intended to simulate the analogous process of recombination that occurs to chromosomes during sexual reproduction. Common forms of crossover include single-point crossover, in which a point of exchange is set at a random location in the two individuals' genomes, and one individual contributes all its code from before that point and the other contributes all its code from after that point to produce an offspring, and uniform crossover, in which the value at any given location in the offspring's genome is either the value of one parent's genome at that location or the value of the other parent's genome at that location, chosen with 50/50 probability.
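Assuming bit-string individuals of equal length, minimal sketches of these two forms of crossover might look like this:

```python
import random

def single_point_crossover(a, b):
    # One parent contributes all its code before a random cut point,
    # the other contributes everything after it.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def uniform_crossover(a, b):
    # Each position is inherited from one parent or the other with
    # 50/50 probability.
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

print(single_point_crossover([0, 0, 0, 0, 0, 0, 0, 0],
                             [1, 1, 1, 1, 1, 1, 1, 1]))
```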


Figure 2: Crossover and mutation. The above diagrams illustrate the effect of each of these genetic operators on individuals in a population of 8-bit strings. The upper diagram shows two individuals undergoing single-point crossover; the point of exchange is set between the fifth and sixth positions in the genome, producing a new individual that is a hybrid of its progenitors. The second diagram shows an individual undergoing mutation at position 4, changing the 0 at that position in its genome to a 1.
Other problem-solving techniques
With the rise of artificial life computing and the development of heuristic methods, other computerized problem-solving techniques have emerged that are in some ways similar to genetic algorithms. This section explains some of these techniques, in what ways they resemble GAs and in what ways they differ.
Neural networks
A neural network, or neural net for short, is a problem-solving method based on a computer model of how neurons are connected in the brain. A neural network consists of layers of processing units called nodes joined by directional links: one input layer, one output layer, and zero or more hidden layers in between. An initial pattern of input is presented to the input layer of the neural network, and nodes that are stimulated then transmit a signal to the nodes of the next layer to which they are connected. If the sum of all the inputs entering one of these virtual neurons is higher than that neuron's so-called activation threshold, that neuron itself activates, and passes on its own signal to neurons in the next layer. The pattern of activation therefore spreads forward until it reaches the output layer and is there returned as a solution to the presented input. Just as in the nervous system of biological organisms, neural networks learn and fine-tune their performance over time via repeated rounds of adjusting their thresholds until the actual output matches the desired output for any given input. This process can be supervised by a human experimenter or may run automatically using a learning algorithm (Mitchell 1996, p. 52). Genetic algorithms have been used both to build and to train neural networks.

Figure 3: A simple feedforward neural network, with one input layer consisting of four neurons, one hidden layer consisting of three neurons, and one output layer consisting of four neurons. The number on each neuron represents its activation threshold: it will only fire if it receives at least that many inputs. The diagram shows the neural network being presented with an input string and shows how activation spreads forward through the network to produce an output.
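The following sketch implements the kind of threshold network Figure 3 describes, with binary signals and a neuron firing once it receives at least as many active inputs as its threshold. The specific 4-3-4 wiring and threshold values below are invented for illustration, not read off the diagram:

```python
def layer_output(inputs, connections, thresholds):
    # inputs: 0/1 activations of the previous layer.
    # connections[j]: indices of previous-layer neurons feeding neuron j.
    # Neuron j fires when its count of active inputs reaches its threshold.
    return [1 if sum(inputs[i] for i in connections[j]) >= thresholds[j] else 0
            for j in range(len(thresholds))]

# Hypothetical 4-3-4 topology in the spirit of Figure 3.
hidden = layer_output([1, 0, 1, 1], [[0, 1], [1, 2], [2, 3]], [1, 2, 1])
output = layer_output(hidden, [[0], [0, 1], [1, 2], [2]], [1, 1, 2, 1])
print(output)
```

Training such a network means adjusting its thresholds (and, in standard neural nets, its connection weights) until the printed output matches the desired output for each input pattern.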
Hill-climbing
Similar to genetic algorithms, though more systematic and less random, a hill-climbing algorithm begins with one initial solution to the problem at hand, usually chosen at random. This candidate is then mutated, and if the mutation results in higher fitness for the new solution than for the previous one, the new solution is kept; otherwise, the current solution is retained. The algorithm is then repeated until no mutation can be found that causes an increase in the current solution's fitness, and this solution is returned as the result (Koza et al. 2003, p. 59). (To understand where the name of this technique comes from, imagine that the space of all possible solutions to a given problem is represented as a three-dimensional contour landscape. A given set of coordinates on that landscape represents one particular solution. Those solutions that are better are higher in altitude, forming hills and peaks; those that are worse are lower in altitude, forming valleys. A "hill-climber" is then an algorithm that starts out at a given point on the landscape and moves inexorably uphill.) Hill-climbing is what is known as a greedy algorithm, meaning it always makes the best choice available at each step in the hope that the overall best result can be achieved this way. By contrast, methods such as genetic algorithms and simulated annealing, discussed below, are not greedy; these methods sometimes make suboptimal choices in the hopes that they will lead to better solutions later on.
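A sketch of the basic loop, assuming a neighbors function that generates every single-mutation variant of the current solution:

```python
import random

def hill_climb(initial, fitness, neighbors):
    # Greedy ascent: move to the best neighboring solution, and stop
    # when no mutation improves on the current one.
    current = initial
    while True:
        improved = [n for n in neighbors(current)
                    if fitness(n) > fitness(current)]
        if not improved:
            return current
        current = max(improved, key=fitness)

def bit_neighbors(s):
    # All variants of s that differ by a single flipped bit.
    return [s[:i] + [s[i] ^ 1] + s[i + 1:] for i in range(len(s))]

start = [random.randint(0, 1) for _ in range(10)]
print(hill_climb(start, fitness=sum, neighbors=bit_neighbors))
```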
Simulated annealing
Another optimization technique similar to evolutionary algorithms is known as simulated annealing. The idea borrows its name from the industrial process of annealing in which a material is heated to above a critical point to soften it, then gradually cooled in order to erase defects in its crystalline structure, producing a more stable and regular lattice arrangement of atoms (Haupt and Haupt 1998, p. 16). In simulated annealing, as in genetic algorithms, there is a fitness function that defines a fitness landscape; however, rather than a population of candidates as in GAs, there is only one candidate solution. Simulated annealing also adds the concept of "temperature", a global numerical quantity which gradually decreases over time. At each step of the algorithm, the solution mutates (which is equivalent to moving to an adjacent point of the fitness landscape). The fitness of the new solution is then compared to the fitness of the previous solution; if it is higher, the new solution is kept. Otherwise, the algorithm makes a decision whether to keep or discard it based on temperature. If the temperature is high, as it is initially, even changes that cause significant decreases in fitness may be kept and used as the basis for the next round of the algorithm, but as temperature decreases, the algorithm becomes more and more inclined to only accept fitness-increasing changes. Finally, the temperature reaches zero and the system "freezes"; whatever configuration it is in at that point becomes the solution. Simulated annealing is often used for engineering design applications such as determining the physical layout of components on a computer chip (Kirkpatrick, Gelatt and Vecchi 1983).
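A minimal sketch of this acceptance rule, with the starting temperature, cooling schedule and step count chosen arbitrarily for illustration:

```python
import math
import random

def simulated_anneal(initial, fitness, neighbor,
                     temp=10.0, cooling=0.995, steps=5000):
    current = initial
    for _ in range(steps):
        candidate = neighbor(current)
        delta = fitness(candidate) - fitness(current)
        # Improvements are always kept; a worsening move is kept with a
        # probability that shrinks as the temperature falls.
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp *= cooling  # gradual cooling toward the "frozen" state
    return current

def flip_one(s):
    # Move to an adjacent point: flip one randomly chosen bit.
    i = random.randrange(len(s))
    return s[:i] + [s[i] ^ 1] + s[i + 1:]

print(simulated_anneal([0] * 12, fitness=sum, neighbor=flip_one))
```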

A brief history of GAs
The earliest instances of what might today be called genetic algorithms appeared in the late 1950s and early 1960s, programmed on computers by evolutionary biologists who were explicitly seeking to model aspects of natural evolution. It did not occur to any of them that this strategy might be more generally applicable to artificial problems, but that recognition was not long in coming: "Evolutionary computation was definitely in the air in the formative days of the electronic computer" (Mitchell 1996, p.2). By 1962, researchers such as G.E.P. Box, G.J. Friedman, W.W. Bledsoe and H.J. Bremermann had all independently developed evolution-inspired algorithms for function optimization and machine learning, but their work attracted little followup. A more successful development in this area came in 1965, when Ingo Rechenberg, then of the Technical University of Berlin, introduced a technique he called evolution strategy, though it was more similar to hill-climbers than to genetic algorithms. In this technique, there was no population or crossover; one parent was mutated to produce one offspring, and the better of the two was kept and became the parent for the next round of mutation (Haupt and Haupt 1998, p.146). Later versions introduced the idea of a population. Evolution strategies are still employed today by engineers and scientists, especially in Germany.
The next important development in the field came in 1966, when L.J. Fogel, A.J. Owens and M.J. Walsh introduced in America a technique they called evolutionary programming. In this method, candidate solutions to problems were represented as simple finite-state machines; like Rechenberg's evolution strategy, their algorithm worked by randomly mutating one of these simulated machines and keeping the better of the two (Mitchell 1996, p.2; Goldberg 1989, p.105). Also like evolution strategies, a broader formulation of the evolutionary programming technique is still an area of ongoing research today. However, what was still lacking in both these methodologies was recognition of the importance of crossover.
As early as 1962, John Holland's work on adaptive systems laid the foundation for later developments; most notably, Holland was also the first to explicitly propose crossover and other recombination operators. However, the seminal work in the field of genetic algorithms came in 1975, with the publication of the book Adaptation in Natural and Artificial Systems. Building on earlier research and papers both by Holland himself and by colleagues at the University of Michigan, this book was the first to systematically and rigorously present the concept of adaptive digital systems using mutation, selection and crossover, simulating processes of biological evolution, as a problem-solving strategy. The book also attempted to put genetic algorithms on a firm theoretical footing by introducing the notion of schemata (Mitchell 1996, p.3; Haupt and Haupt 1998, p.147). That same year, Kenneth De Jong's important dissertation established the potential of GAs by showing that they could perform well on a wide variety of test functions, including noisy, discontinuous, and multimodal search landscapes (Goldberg 1989, p.107).
These foundational works established more widespread interest in evolutionary computation. By the early to mid-1980s, genetic algorithms were being applied to a broad range of subjects, from abstract mathematical problems like bin-packing and graph coloring to tangible engineering issues such as pipeline flow control, pattern recognition and classification, and structural optimization (Goldberg 1989, p. 128).
At first, these applications were mainly theoretical. However, as research continued to proliferate, genetic algorithms migrated into the commercial sector, their rise fueled by the exponential growth of computing power and the development of the Internet. Today, evolutionary computation is a thriving field, and genetic algorithms are "solving problems of everyday interest" (Haupt and Haupt 1998, p.147) in areas of study as diverse as stock market prediction and portfolio planning, aerospace engineering, microchip design, biochemistry and molecular biology, and scheduling at airports and assembly lines. The power of evolution has touched virtually any field one cares to name, shaping the world around us invisibly in countless ways, and new uses continue to be discovered as research is ongoing. And at the heart of it all lies nothing more than Charles Darwin's simple, powerful insight: that the random chance of variation, coupled with the law of selection, is a problem-solving technique of immense power and nearly unlimited application.

What are the strengths of GAs?
The first and most important point is that genetic algorithms are intrinsically parallel. Most other algorithms are serial and can only explore the solution space to a problem in one direction at a time, and if the solution they discover turns out to be suboptimal, there is nothing to do but abandon all work previously completed and start over. However, since GAs have multiple offspring, they can explore the solution space in multiple directions at once. If one path turns out to be a dead end, they can easily eliminate it and continue work on more promising avenues, giving them a greater chance each run of finding the optimal solution.

However, the advantage of parallelism goes beyond this. Consider the following: All the 8-digit binary strings (strings of 0's and 1's) form a search space, which can be represented as ******** (where the * stands for "either 0 or 1"). The string 01101010 is one member of this space. However, it is also a member of the space 0*******, the space 01******, the space 0******0, the space 0*1*1*1*, the space 01*01**0, and so on. By evaluating the fitness of this one particular string, a genetic algorithm would be sampling each of these many spaces to which it belongs. Over many such evaluations, it would build up an increasingly accurate value for the average fitness of each of these spaces, each of which has many members. Therefore, a GA that explicitly evaluates a small number of individuals is implicitly evaluating a much larger group of individuals - just as a pollster who asks questions of a certain member of an ethnic, religious or social group hopes to learn something about the opinions of all members of that group, and therefore can reliably predict national opinion while sampling only a small percentage of the population. In the same way, the GA can "home in" on the space with the highest-fitness individuals and find the overall best one from that group. In the context of evolutionary algorithms, this is known as the Schema Theorem, and is the "central advantage" of a GA over other problem-solving methods (Holland 1992, p. 68; Mitchell 1996, pp. 28-29; Goldberg 1989, p. 20).
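The counting behind this claim is easy to verify in code (a toy illustration of schema membership, not an implementation of the Schema Theorem itself):

```python
def matches(schema, candidate):
    # A schema matches a string if every fixed position agrees;
    # '*' positions match either digit.
    return all(s == '*' or s == c for s, c in zip(schema, candidate))

s = "01101010"
for schema in ["********", "0*******", "01******", "0******0", "0*1*1*1*"]:
    print(schema, matches(schema, s))

# Each of the 8 positions can be either fixed to its value or wildcarded,
# so this one string is a member of 2**8 = 256 schemata.
print(2 ** len(s))
```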
Due to the parallelism that allows them to implicitly evaluate many schemata at once, genetic algorithms are particularly well-suited to solving problems where the space of all potential solutions is truly huge - too vast to search exhaustively in any reasonable amount of time. Most problems that fall into this category are known as "nonlinear". In a linear problem, the fitness of each component is independent, so any improvement to any one part will result in an improvement of the system as a whole. Needless to say, few real-world problems are like this. Nonlinearity is the norm, where changing one component may have ripple effects on the entire system, and where multiple changes that individually are detrimental may lead to much greater improvements in fitness when combined. Nonlinearity results in a combinatorial explosion: the space of 1,000-digit binary strings can be exhaustively searched by evaluating only 2,000 possibilities if the problem is linear, whereas if it is nonlinear, an exhaustive search requires evaluating 2^1000 possibilities - a number that would take over 300 digits to write out in full.

Fortunately, the implicit parallelism of a GA allows it to surmount even this enormous number of possibilities, successfully finding optimal or very good results in a short period of time after directly sampling only small regions of the vast fitness landscape (Forrest 1993, p. 877). For example, a genetic algorithm developed jointly by engineers from General Electric and Rensselaer Polytechnic Institute produced a high-performance jet engine turbine design that was three times better than a human-designed configuration and 50% better than a configuration designed by an expert system by successfully navigating a solution space containing more than 10^387 possibilities. Conventional methods for designing such turbines are a central part of engineering projects that can take up to five years and cost over $2 billion; the genetic algorithm discovered this solution after two days on a typical engineering desktop workstation (Holland 1992, p.72).
Another notable strength of genetic algorithms is that they perform well in problems for which the fitness landscape is complex - ones where the fitness function is discontinuous, noisy, changes over time, or has many local optima. Most practical problems have a vast solution space, impossible to search exhaustively; the challenge then becomes how to avoid the local optima - solutions that are better than all the others that are similar to them, but that are not as good as different ones elsewhere in the solution space. Many search algorithms can become trapped by local optima: if they reach the top of a hill on the fitness landscape, they will discover that no better solutions exist nearby and conclude that they have reached the best one, even though higher peaks exist elsewhere on the map.

Evolutionary algorithms, on the other hand, have proven to be effective at escaping local optima and discovering the global optimum in even a very rugged and complex fitness landscape. (It should be noted that, in reality, there is usually no way to tell whether a given solution to a problem is the one global optimum or just a very high local optimum. However, even if a GA does not always deliver a provably perfect solution to a problem, it can almost always deliver at least a very good solution.) All four of a GA's major components - parallelism, selection, mutation, and crossover - work together to accomplish this. In the beginning, the GA generates a diverse initial population, casting a "net" over the fitness landscape. (Koza (2003, p. 506) compares this to an army of parachutists dropping onto the landscape of a problem's search space, with each one being given orders to find the highest peak.) Small mutations enable each individual to explore its immediate neighborhood, while selection focuses progress, guiding the algorithm's offspring uphill to more promising parts of the solution space (Holland 1992, p. 68).

However, crossover is the key element that distinguishes genetic algorithms from other methods such as hill-climbers and simulated annealing. Without crossover, each individual solution is on its own, exploring the search space in its immediate vicinity without reference to what other individuals may have discovered. However, with crossover in place, there is a transfer of information between successful candidates - individuals can benefit from what others have learned, and schemata can be mixed and combined, with the potential to produce an offspring that has the strengths of both its parents and the weaknesses of neither. This point is illustrated in Koza et al. 1999, p.486, where the authors discuss a problem of synthesizing a lowpass filter using genetic programming. In one generation, two parent circuits were selected to undergo crossover; one parent had good topology (components such as inductors and capacitors in the right places) but bad sizing (values of inductance and capacitance for its components that were far too low). The other parent had bad topology, but good sizing. The result of mating the two through crossover was an offspring with the good topology of one parent and the good sizing of the other, resulting in a substantial improvement in fitness over both its parents.

The problem of finding the global optimum in a space with many local optima is also known as the dilemma of exploration vs. exploitation, "a classic problem for all systems that can adapt and learn" (Holland 1992, p. 69). Once an algorithm (or a human designer) has found a problem-solving strategy that seems to work satisfactorily, should it concentrate on making the best use of that strategy, or should it search for others? Abandoning a proven strategy to look for new ones is almost guaranteed to involve losses and degradation of performance, at least in the short term. But if one sticks with a particular strategy to the exclusion of all others, one runs the risk of not discovering better strategies that exist but have not yet been found. Again, genetic algorithms have shown themselves to be very good at striking this balance and discovering good solutions with a reasonable amount of time and computational effort.
Another area in which genetic algorithms excel is their ability to manipulate many parameters simultaneously (Forrest 1993, p. 874). Many real-world problems cannot be stated in terms of a single value to be minimized or maximized, but must be expressed in terms of multiple objectives, usually with tradeoffs involved: one can only be improved at the expense of another. GAs are very good at solving such problems: in particular, their use of parallelism enables them to produce multiple equally good solutions to the same problem, possibly with one candidate solution optimizing one parameter and another candidate optimizing a different one (Haupt and Haupt 1998, p.17), and a human overseer can then select one of these candidates to use. If a particular solution to a multiobjective problem optimizes one parameter to a degree such that that parameter cannot be further improved without causing a corresponding decrease in the quality of some other parameter, that solution is called Pareto optimal or non-dominated (Coello 2000, p. 112).
Finally, one of the qualities of genetic algorithms which might at first appear to be a liability turns out to be one of their strengths: namely, GAs know nothing about the problems they are deployed to solve. Instead of using previously known domain-specific information to guide each step and making changes with a specific eye towards improvement, as human designers do, they are "blind watchmakers" (Dawkins 1996); they make random changes to their candidate solutions and then use the fitness function to determine whether those changes produce an improvement.

The virtue of this technique is that it allows genetic algorithms to start out with an open mind, so to speak. Since its decisions are based on randomness, all possible search pathways are theoretically open to a GA; by contrast, any problem-solving strategy that relies on prior knowledge must inevitably begin by ruling out many pathways a priori, therefore missing any novel solutions that may exist there (Koza et al. 1999, p. 547). Lacking preconceptions based on established beliefs of "how things should be done" or what "couldn't possibly work", GAs do not have this problem. Similarly, any technique that relies on prior knowledge will break down when such knowledge is not available, but again, GAs are not adversely affected by ignorance (Goldberg 1989, p. 23). Through their components of parallelism, crossover and mutation, they can range widely over the fitness landscape, exploring regions which intelligently produced algorithms might have overlooked, and potentially uncovering solutions of startling and unexpected creativity that might never have occurred to human designers. One vivid illustration of this is the rediscovery, by genetic programming, of the concept of negative feedback - a principle crucial to many important electronic components today, but one that, when it was first discovered, was denied a patent for nine years because the concept was so contrary to established beliefs (Koza et al. 2003, p. 413). Evolutionary algorithms, of course, are neither aware nor concerned whether a solution runs counter to established beliefs - only whether it works.

What are the limitations of GAs?
Although genetic algorithms have proven to be an efficient and powerful problem-solving strategy, they are not a panacea. GAs do have certain limitations; however, it will be shown that all of these can be overcome and none of them bear on the validity of biological evolution.
The first, and most important, consideration in creating a genetic algorithm is defining a representation for the problem. The language used to specify candidate solutions must be robust; i.e., it must be able to tolerate random changes such that fatal errors or nonsense do not consistently result.

There are two main ways of achieving this. The first, which is used by most genetic algorithms, is to define individuals as lists of numbers - binary-valued, integer-valued, or real-valued - where each number represents some aspect of a candidate solution. If the individuals are binary strings, 0 or 1 could stand for the absence or presence of a given feature. If they are lists of numbers, these numbers could represent many different things: the weights of the links in a neural network, the order of the cities visited in a given tour, the spatial placement of electronic components, the values fed into a controller, the torsion angles of peptide bonds in a protein, and so on. Mutation then entails changing these numbers, flipping bits or adding or subtracting random values. In this case, the actual program code does not change; the code is what manages the simulation and keeps track of the individuals, evaluating their fitness and perhaps ensuring that only values realistic and possible for the given problem result.

In another method, genetic programming, the actual program code does change. As discussed in the section Methods of representation, GP represents individuals as executable trees of code that can be mutated by changing or swapping subtrees. Both of these methods produce representations that are robust against mutation and can represent many different kinds of problems, and as discussed in the section Some specific examples, both have had considerable success.

This issue of representing candidate solutions in a robust way does not arise in nature, because the method of representation used by evolution, namely the genetic code, is inherently robust: with only a very few exceptions, such as a string of stop codons, there is no such thing as a sequence of DNA bases that cannot be translated into a protein. Therefore, virtually any change to an individual's genes will still produce an intelligible result, and so mutations in evolution have a higher chance of producing an improvement. This is in contrast to human-created languages such as English, where the number of meaningful words is small compared to the total number of ways one can combine letters of the alphabet, and therefore random changes to an English sentence are likely to produce nonsense.
The problem of how to write the fitness function must be carefully considered so that higher fitness is attainable and actually does equate to a better solution for the given problem. If the fitness function is chosen poorly or defined imprecisely, the genetic algorithm may be unable to find a solution to the problem, or may end up solving the wrong problem. (This latter situation is sometimes described as the tendency of a GA to "cheat", although in reality all that is happening is that the GA is doing what it was told to do, not what its creators intended it to do.) An example of this can be found in Graham-Rowe 2002, in which researchers used an evolutionary algorithm in conjunction with a reprogrammable hardware array, setting up the fitness function to reward the evolving circuit for outputting an oscillating signal. At the end of the experiment, an oscillating signal was indeed being produced - but instead of the circuit itself acting as an oscillator, as the researchers had intended, they discovered that it had become a radio receiver that was picking up and relaying an oscillating signal from a nearby piece of electronic equipment!

This is not a problem in nature, however. In the laboratory of biological evolution there is only one fitness function, which is the same for all living things - the drive to survive and reproduce, no matter what adaptations make this possible. Those organisms which reproduce more abundantly compared to their competitors are more fit; those which fail to reproduce are unfit.
In addition to making a good choice of fitness function, the other parameters of a GA - the size of the population, the rate of mutation and crossover, the type and strength of selection - must also be chosen with care. If the population size is too small, the genetic algorithm may not explore enough of the solution space to consistently find good solutions. If the rate of genetic change is too high or the selection scheme is chosen poorly, beneficial schemata may be disrupted and the population may enter error catastrophe, changing too fast for selection to ever bring about convergence.

Living things do face similar difficulties, and evolution has dealt with them. It is true that if a population size falls too low, mutation rates are too high, or the selection pressure is too strong (such a situation might be caused by drastic environmental change), then the species may go extinct. The solution has been "the evolution of evolvability" - adaptations that alter a species' ability to adapt. For example, most living things have evolved elaborate molecular machinery that checks for and corrects errors during the process of DNA replication, keeping their mutation rate down to acceptably low levels; conversely, in times of severe environmental stress, some bacterial species enter a state of hypermutation where the rate of DNA replication errors rises sharply, increasing the chance that a compensating mutation will be discovered. Of course, not all catastrophes can be evaded, but the enormous diversity and highly complex adaptations of living things today show that, in general, evolution is a successful strategy. Likewise, the diverse applications of and impressive results produced by genetic algorithms show them to be a powerful and worthwhile field of study.
One type of problem that genetic algorithms have difficulty dealing with is the class of problems with "deceptive" fitness functions (Mitchell 1996, p.125), those where the locations of improved points give misleading information about where the global optimum is likely to be found. For example, imagine a problem where the search space consisted of all eight-character binary strings, and the fitness of an individual was directly proportional to the number of 1s in it - i.e., 00000001 would be less fit than 00000011, which would be less fit than 00000111, and so on - with two exceptions: the string 11111111 turned out to have very low fitness, and the string 00000000 turned out to have very high fitness. In such a problem, a GA (as well as most other algorithms) would be no more likely to find the global optimum than random search.
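In code, that contrived landscape might look like the following; note how every local improvement points the search toward the trap at 11111111:

```python
def deceptive_fitness(bits):
    # Fitness normally climbs with the number of 1s, so every local
    # improvement points toward 11111111 - but the two extremes are
    # swapped, and the true optimum sits where the gradient leads away.
    if bits == "11111111":
        return 0   # the apparent peak is a trap
    if bits == "00000000":
        return 9   # the global optimum
    return bits.count("1")
```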

The resolution to this problem is the same for both genetic algorithms and biological evolution: evolution is not a process that has to find the single global optimum every time. It can do almost as well by reaching the top of a high local optimum, and for most situations, this will suffice, even if the global optimum cannot easily be reached from that point. Evolution is very much a "satisficer" - an algorithm that delivers a "good enough" solution, though not necessarily the best possible solution, given a reasonable amount of time and effort invested in the search. The Evidence for Jury-Rigged Design in Nature FAQ gives examples of this very outcome appearing in nature. (It is also worth noting that few, if any, real-world problems are as fully deceptive as the somewhat contrived example given above. Usually, the location of local improvements gives at least some information about the location of the global optimum.)
One well-known problem that can occur with a GA is known as premature convergence. If an individual that is more fit than most of its competitors emerges early on in the course of the run, it may reproduce so abundantly that it drives down the population's diversity too soon, leading the algorithm to converge on the local optimum that that individual represents rather than searching the fitness landscape thoroughly enough to find the global optimum (Forrest 1993, p. 876; Mitchell 1996, p. 167). This is an especially common problem in small populations, where even chance variations in reproduction rate may cause one genotype to become dominant over others.

The most common methods implemented by GA researchers to deal with this problem all involve controlling the strength of selection, so as not to give excessively fit individuals too great of an advantage. Rank, scaling and tournament selection, discussed earlier, are three major means for accomplishing this; some methods of scaling selection include sigma scaling, in which reproduction is based on a statistical comparison to the population's average fitness, and Boltzmann selection, in which the strength of selection increases over the course of a run in a manner similar to the "temperature" variable in simulated annealing (Mitchell 1996, p. 168).
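As one concrete example, sigma scaling (following Mitchell's description) gives an individual at the population's mean fitness an expectation of one offspring and shifts that expectation by half an offspring per standard deviation of fitness above or below the mean; the floor value in this sketch is an illustrative assumption:

```python
import statistics

def sigma_scaled_expectation(fitnesses):
    # Expected offspring counts under sigma scaling: an individual at
    # the population mean expects one offspring; each standard deviation
    # above or below shifts that expectation by half an offspring.
    mean = statistics.mean(fitnesses)
    sigma = statistics.pstdev(fitnesses)
    if sigma == 0:
        return [1.0] * len(fitnesses)
    # Floor at a small positive value (an illustrative choice) so weak
    # individuals retain some chance of reproducing.
    return [max(0.1, 1.0 + (f - mean) / (2 * sigma)) for f in fitnesses]

print(sigma_scaled_expectation([2, 4, 4, 6, 9]))
```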

Premature convergence does occur in nature (where it is called genetic drift by biologists). This should not be surprising; as discussed above, evolution as a problem-solving strategy is under no obligation to find the single best solution, merely one that is good enough. However, premature convergence in nature is less common since most beneficial mutations in living things produce only small, incremental fitness improvements; mutations that produce such a large fitness gain as to give their possessors dramatic reproductive advantage are rare.
Finally, several researchers (Holland 1992, p.72; Forrest 1993, p.875; Haupt and Haupt 1998, p.18) advise against using genetic algorithms on analytically solvable problems. It is not that genetic algorithms cannot find good solutions to such problems; it is merely that traditional analytic methods take much less time and computational effort than GAs and, unlike GAs, are usually mathematically guaranteed to deliver the one exact solution. Of course, since there is no such thing as a mathematically perfect solution to any problem of biological adaptation, this issue does not arise in nature.

Some specific examples of GAs
As the power of evolution gains increasingly widespread recognition, genetic algorithms have been used to tackle a broad variety of problems in an extremely diverse array of fields, clearly showing their power and their potential. This section will discuss some of the more noteworthy uses to which they have been put.
Acoustics
Aerospace engineering
Astronomy and astrophysics
Chemistry
Electrical engineering
Financial markets
Game playing
Geophysics
Materials engineering
Mathematics and algorithmics
Military and law enforcement
Molecular biology
Pattern recognition and data mining
Robotics
Routing and scheduling
Systems engineering
Acoustics
Sato et al. 2002 used genetic algorithms to design a concert hall with optimal acoustic properties, maximizing the sound quality for the audience, for the conductor, and for the musicians on stage. This task involves the simultaneous optimization of multiple variables. Beginning with a shoebox-shaped hall, the authors' GA produced two non-dominated solutions, both of which were described as "leaf-shaped" (p.526). The authors state that these solutions have proportions similar to Vienna's Grosser Musikvereinsaal, which is widely agreed to be one of the best concert halls in the world - if not the best - in terms of acoustic properties.

Porto, Fogel and Fogel 1995 used evolutionary programming to train neural networks to distinguish between sonar reflections from different types of objects: man-made metal spheres, sea mounts, fish and plant life, and random background noise. After 500 generations, the best evolved neural network had a probability of correct classification ranging between 94% and 98% and a probability of misclassification between 7.4% and 1.5%, which are "reasonable probabilities of detection and false alarm" (p.21). The evolved network matched the performance of another network developed by simulated annealing and consistently outperformed networks trained by back propagation, which "repeatedly stalled at suboptimal weight sets that did not yield satisfactory results" (p.21). By contrast, both stochastic methods showed themselves able to overcome these local optima and produce smaller, effective and more robust networks; but the authors suggest that the evolutionary algorithm, unlike simulated annealing, operates on a population and so takes advantage of global information about the search space, potentially leading to better performance in the long run.

Tang et al. 1996 survey the uses of genetic algorithms within the field of acoustics and signal processing. One area of particular interest involves the use of GAs to design Active Noise Control (ANC) systems, which cancel out undesired sound by producing sound waves that destructively interfere with the unwanted noise. This is a multiple-objective problem requiring the precise placement and control of multiple loudspeakers; GAs have been used both to design the controllers and find the optimal placement of the loudspeakers for such systems, resulting in the "effective attenuation of noise" (p.33) in experimental tests.
Aerospace engineering
Obayashi et al. 2000 used a multiple-objective genetic algorithm to design the wing shape for a supersonic aircraft. Three major considerations govern the wing's configuration - minimizing aerodynamic drag at supersonic cruising speeds, minimizing drag at subsonic speeds, and minimizing aerodynamic load (the bending force on the wing). These objectives are mutually exclusive, and optimizing them all simultaneously requires tradeoffs to be made.

The chromosome in this problem is a string of 66 real-valued numbers, each of which corresponds to a specific aspect of the wing: its shape, its thickness, its twist, and so on. Evolution with elitist rank selection was simulated for 70 generations, with a population size of 64 individuals. At the termination of this process, there were several Pareto-optimal individuals, each one representing a single non-dominated solution to the problem. The paper notes that these best-of-run individuals have "physically reasonable" characteristics, indicating the validity of the optimization technique (p.186). To further evaluate the quality of the solutions, six of the best were compared to a supersonic wing design produced by the SST Design Team of Japan's National Aerospace Laboratory. All six were competitive, having drag and load values approximately equal to or less than the human-designed wing; one of the evolved solutions in particular outperformed the NAL's design in all three objectives. The authors note that the GA's solutions are similar to a design called the "arrow wing" which was first suggested in the late 1950s, but ultimately abandoned in favor of the more conventional delta-wing design.

In a follow-up paper (Sasaki et al. 2001), the authors repeat their experiment while adding a fourth objective, namely minimizing the twisting moment of the wing (a known potential problem for arrow-wing SST designs). Additional control points for thickness are also added to the array of design variables. After 75 generations of evolution, two of the best Pareto-optimal solutions were again compared to the Japanese National Aerospace Laboratory's wing design for the NEXST-1 experimental supersonic airplane. It was found that both of these designs (as well as one optimal design from the previous simulation, discussed above) were physically reasonable and superior to the NAL's design in all four objectives.

Williams, Crossley and Lang 2001 applied genetic algorithms to the task of spacing satellite orbits to minimize coverage blackouts. As telecommunications technology continues to improve, humans are increasingly dependent on Earth-orbiting satellites to perform many vital functions, and one of the problems engineers face is designing their orbital trajectories. Satellites in high Earth orbit, around 22,000 miles up, can see large sections of the planet at once and be in constant contact with ground stations, but these are far more expensive to launch and more vulnerable to cosmic radiation. It is more economical to put satellites in low orbits, as low as a few hundred miles in some cases, but because of the curvature of the Earth it is inevitable that these satellites will at times lose line-of-sight access to surface receivers and thus be useless. Even constellations of several satellites experience unavoidable blackouts and losses of coverage for this reason. The challenge is to arrange the satellites' orbits to minimize this downtime. This is a multi-objective problem, involving the minimization of both the average blackout time for all locations and the maximum blackout time for any one location; in practice, these goals turn out to be mutually exclusive.

When the GA was applied to this problem, the evolved results for three, four and five-satellite constellations were unusual, highly asymmetric orbit configurations, with the satellites spaced by alternating large and small gaps rather than equal-sized gaps as conventional techniques would produce. However, this solution significantly reduced both average and maximum revisit times, in some cases by up to 90 minutes. In a news article about the results, Dr. William Crossley noted that "engineers with years of aerospace experience were surprised by the higher performance offered by the unconventional design".
Keane and Brown 1996 used a GA to evolve a new design for a load-bearing truss or boom that could be assembled in orbit and used for satellites, space stations and other aerospace construction projects. The result, a twisted, organic-looking structure that has been compared to a human leg bone, uses no more material than the standard truss design but is lightweight, strong and far superior at damping out damaging vibrations, as was confirmed by real-world tests of the final product. And yet "No intelligence made the designs. They just evolved" (Petit 1998). The authors of the paper further note that their GA only ran for 10 generations due to the computationally intensive nature of the simulation, and the population had not become stagnant. Continuing the run for more generations would undoubtedly have produced further improvements in performance.

Figure 4: A genetically optimized three-dimensional truss with improved frequency response. (Adapted from [1].)


Finally, as reported in Gibbs 1996, Lockheed Martin has used a genetic algorithm to evolve a series of maneuvers to shift a spacecraft from one orientation to another within 2% of the theoretical minimum time for such maneuvers. The evolved solution was 10% faster than a solution hand-crafted by an expert for the same problem.
Astronomy and astrophysics
Charbonneau 1995 suggests the usefulness of GAs for problems in astrophysics by applying them to three example problems: fitting the rotation curve of a galaxy based on observed rotational velocities of its components, determining the pulsation period of a variable star based on time-series data, and solving for the critical parameters in a magnetohydrodynamic model of the solar wind. All three of these are hard multi-dimensional nonlinear problems.

Charbonneau's genetic algorithm, PIKAIA, uses generational, fitness-proportionate ranking selection coupled with elitism, ensuring that the single best individual is copied over once into the next generation without modification. PIKAIA has a crossover rate of 0.65 and a variable mutation rate that is set to 0.003 initially and gradually increases later on, as the population approaches convergence, to maintain variability in the gene pool.

In the galactic rotation-curve problem, the GA produced two curves, both of which were very good fits to the data (a common result in this type of problem, in which there is little contrast between neighboring hilltops); further observations can then distinguish which one is to be preferred. In the time-series problem, the GA was impressively successful in autonomously generating a high-quality fit for the data, but harder problems were not fitted as well (although, Charbonneau points out, these problems are equally difficult to solve with conventional techniques). The paper suggests that a hybrid GA employing both artificial evolution and standard analytic techniques might perform better. Finally, in solving for the six critical parameters of the solar wind, the GA successfully determined the value of three of them to an accuracy of within 0.1% and the remaining three to accuracies of within 1 to 10%. (Though lower experimental error for these three would always be preferable, Charbonneau notes that there are no other robust, efficient methods for experimentally solving a six-dimensional nonlinear problem of this type; a conjugate gradient method works "as long as a very good starting guess can be provided" (p.323). By contrast, GAs do not require such finely tuned domain-specific knowledge.)

Based on the results obtained so far, Charbonneau suggests that GAs can and should find use in other difficult problems in astrophysics, in particular inverse problems such as Doppler imaging and helioseismic inversions. In closing, Charbonneau argues that GAs are a "strong and promising contender" (p.324) in this field, one that can be expected to complement rather than replace traditional optimization techniques, and concludes that "the bottom line, if there is to be one, is that genetic algorithms work, and often frightfully well" (p.325).
Chemistry
High-powered, ultrashort pulses of laser energy can split apart complex molecules into simpler molecules, a process with important applications to organic chemistry and microelectronics. The specific end products of such a reaction can be controlled by modulating the phase of the laser pulse. However, for large molecules, solving for the desired pulse shape analytically is too difficult: the calculations are too complex and the relevant characteristics (the potential energy surfaces of the molecules) are not known precisely enough.

Assion et al. 1998 solved this problem by using an evolutionary algorithm to design the pulse shape. Instead of inputting complex, problem-specific knowledge about the quantum characteristics of the input molecules to design the pulse to specifications, the EA fires a pulse, measures the proportions of the resulting product molecules, randomly mutates the beam characteristics with the hope of getting these proportions closer to the desired output, and the process repeats. (Rather than fine-tune any characteristics of the laser beam directly, the authors' GA represents individuals as a set of 128 numbers, each of which is a voltage value that controls the refractive index of one of the pixels in the laser light modulator. Again, no problem-specific knowledge about the properties of either the laser or the reaction products is needed.) The authors state that their algorithm, when applied to two sample substances, "automatically finds the best configuration... no matter how complicated the molecular response may be" (p.920), demonstrating "automated coherent control on products that are chemically different from each other and from the parent molecule" (p.921).
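The closed-loop structure of this experiment is easy to express in outline. The sketch below uses hypothetical names and treats the measurement apparatus as a black box: each individual is a list of 128 voltage values, and measure_product_ratio stands in for the real fire-and-measure step.

```python
import random

N_PIXELS = 128   # one voltage value per pixel of the pulse shaper

def measure_product_ratio(voltages):
    # Hypothetical stand-in for "fire a pulse and measure the product
    # proportions"; the EA treats it as a black box, which is the point.
    return -sum((v - 0.5) ** 2 for v in voltages)

pop = [[random.random() for _ in range(N_PIXELS)] for _ in range(20)]
for gen in range(200):
    pop.sort(key=measure_product_ratio, reverse=True)
    parents = pop[:10]                          # keep the better half
    children = [[min(1.0, max(0.0, v + random.gauss(0, 0.05))) for v in p]
                for p in parents]               # mutate each parent once
    pop = parents + children
print(max(pop, key=measure_product_ratio)[:4])  # best pulse shape found
```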

In the early to mid-1990s, the widespread adoption of a novel drug design technique called combinatorial chemistry revolutionized the pharmaceutical industry. In this method, rather than synthesizing a single compound at a time with painstaking precision, biochemists deliberately mix a wide variety of reactants to produce an even wider variety of products - hundreds, thousands or millions of different compounds per batch - which can then be rapidly screened for biochemical activity. In designing libraries of reactants for this technique, there are two main approaches: reactant-based design, which chooses optimized groups of reactants without considering what products will result, and product-based design, which selects the reactants most likely to produce products with the desired properties. Product-based design is more difficult and complex, but has been shown to result in better and more diverse combinatorial libraries and a greater likelihood of getting a usable result.

In a paper funded by GlaxoSmithKline's research and development department, Gillet 2002 discusses the use of a multiobjective genetic algorithm for the product-based design of combinatorial libraries. In choosing the compounds that go into a particular library, qualities such as molecular diversity and weight, cost of supplies, toxicity, absorption, distribution, and metabolism must all be considered. If the aim is to find molecules similar to an existing molecule of known function (a common method of new drug design), structural similarity can also be taken into account. This paper presents a multiobjective approach in which the GA develops a set of Pareto-optimal results - solutions representing the best available trade-offs among all these competing objectives. The author concludes that the GA was able to simultaneously satisfy the criteria of molecular diversity and maximum synthetic efficiency, and was able to find molecules that were drug-like as well as "very similar to given target molecules after exploring a very small fraction of the total search space" (p.378).
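The notion of Pareto optimality underlying this approach can be stated in a few lines of code. The sketch below is a generic dominance test, not Gillet's implementation; the objective vectors are assumed to be oriented so that larger is always better.

```python
def dominates(a, b):
    """a dominates b: at least as good everywhere, strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the solutions not dominated by any other solution."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# e.g. hypothetical objective vectors (diversity, -cost, similarity)
print(pareto_front([(3, 1, 2), (2, 2, 2), (3, 1, 1)]))  # [(3, 1, 2), (2, 2, 2)]
```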

In a related paper, Glen and Payne 1995 discuss the use of genetic algorithms to automatically design new molecules from scratch to fit a given set of specifications. Given an initial population either generated randomly or using the simple molecule ethane as a seed, the GA randomly adds, removes and alters atoms and molecular fragments with the aim of generating molecules that fit the given constraints. The GA can simultaneously optimize a large number of objectives, including molecular weight, molecular volume, number of bonds, number of chiral centers, number of atoms, number of rotatable bonds, polarizability, dipole moment, and more in order to produce candidate molecules with the desired properties. Based on experimental tests, including one difficult optimization problem that involved generating molecules with properties similar to ribose (a sugar compound frequently mimicked in antiviral drugs), the authors conclude that the GA is an "excellent idea generator" (p.199) that offers "fast and powerful optimisation properties" and can generate "a diverse set of possible structures" (p.182). They go on to state, "Of particular note is the powerful optimising ability of the genetic algorithm, even with relatively small population sizes" (p.200). In a sign that these results are not merely theoretical, Lemley 2001 reports that the Unilever corporation has used genetic algorithms to design new antimicrobial compounds for use in cleansers, which it has patented.
Electrical engineering
A field-programmable gate array, or FPGA for short, is a special type of circuit board containing an array of logic cells, each of which can act as any type of logic gate, joined by flexible interlinks that determine how the cells are connected. Both the cells' behavior and their connections are controlled by software, so merely by loading a special program into the board, it can be altered on the fly to perform the functions of any one of a vast variety of hardware devices.

Dr. Adrian Thompson has exploited this device, in conjunction with the principles of evolution, to produce a prototype voice-recognition circuit that can distinguish between and respond to spoken commands using only 37 logic gates - a task that would have been considered impossible for any human engineer. He generated random bit strings of 0s and 1s and used them as configurations for the FPGA, selecting the fittest individuals from each generation, reproducing and randomly mutating them, swapping sections of their code and passing them on to another round of selection. His goal was to evolve a device that could at first discriminate between tones of different frequencies (1 and 10 kilohertz), then distinguish between the spoken words "go" and "stop".
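In outline, the procedure resembles the generic hardware-in-the-loop sketch below (hypothetical names throughout); evaluate_on_fpga stands in for loading a configuration onto the physical chip and scoring its response, which is the step that made Thompson's experiment remarkable.

```python
import random

BITS, POP = 1800, 50              # configuration length is illustrative

def evaluate_on_fpga(bits):
    # Hypothetical stand-in: the real fitness came from loading the bit
    # string onto the physical chip and scoring its response to the test
    # tones (and later the spoken words).
    return sum(bits)

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for gen in range(100):            # Thompson's run took about 3000 generations
    pop.sort(key=evaluate_on_fpga, reverse=True)
    survivors = pop[:POP // 2]
    offspring = []
    while len(survivors) + len(offspring) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, BITS)                    # swap code sections
        child = [1 - g if random.random() < 0.001 else g   # point mutation
                 for g in a[:cut] + b[cut:]]
        offspring.append(child)
    pop = survivors + offspring
```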

This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems - a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way - yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery (Davidson 1997).

Altshuler and Linden 1997 used a genetic algorithm to evolve wire antennas with pre-specified properties. The authors note that the design of such antennas is an imprecise process, starting with the desired properties and then determining the antenna's shape through "guesses.... intuition, experience, approximate equations or empirical studies" (p.50). This technique is time-consuming, often does not produce optimal results, and tends to work well only for relatively simple, symmetric designs. By contrast, in the genetic algorithm approach, the engineer specifies the antenna's electromagnetic properties, and the GA automatically synthesizes a matching configuration.

Figure 5: A crooked-wire genetic antenna
(after Altshuler and Linden 1997, figure 1).
Altshuler and Linden used their GA to design a circularly polarized seven-segment antenna with hemispherical coverage; the result is shown in Figure 5. Each individual in the GA consisted of a binary chromosome specifying the three-dimensional coordinates of each end of each wire. Fitness was evaluated by simulating each candidate antenna with an electromagnetic wire-modeling code, and the best-of-run individual was then built and tested. The authors describe the shape of this antenna, which does not resemble traditional antennas and has no obvious symmetry, as "unusually weird" and "counter-intuitive" (p.52), yet it had a nearly uniform radiation pattern with high bandwidth both in simulation and in experimental testing, excellently matching the prior specification. The authors conclude that a genetic algorithm-based method for antenna design shows "remarkable promise": "...this new design procedure is capable of finding genetic antennas able to effectively solve difficult antenna problems, and it will be particularly useful in situations where existing designs are not adequate" (p.52).
Financial markets
Mahfoud and Mani 1996 used a genetic algorithm to predict the future performance of 1600 publicly traded stocks. Specifically, the GA was tasked with forecasting the relative return of each stock, defined as that stock's return minus the average return of all 1600 stocks over the time period in question, 12 weeks (one calendar quarter) into the future. As input, the GA was given historical data about each stock in the form of a list of 15 attributes, such as price-to-earnings ratio and growth rate, measured at various past points in time; the GA was asked to evolve a set of if/then rules to classify each stock and to provide, as output, both a recommendation on what to do with regard to that stock (buy, sell, or no prediction) and a numerical forecast of the relative return. The GA's results were compared to those of an established neural net-based system which the authors had been using to forecast stock prices and manage portfolios for the previous three years. Of course, the stock market is an extremely noisy and nonlinear system, and no predictive mechanism can be correct 100% of the time; the challenge is to find a predictor that is accurate more often than not.
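The quantity being forecast, and the general shape of the evolved rules, can be sketched as follows; the example rule is purely hypothetical and is not one of Mahfoud and Mani's.

```python
def relative_returns(returns):
    """Each stock's return minus the average return of the whole universe."""
    avg = sum(returns.values()) / len(returns)
    return {sym: r - avg for sym, r in returns.items()}

def example_rule(stock):
    # Hypothetical illustration of the evolved if/then form; the GA's output
    # was a buy/sell/no-prediction recommendation plus a numeric forecast.
    if stock["pe_ratio"] < 12 and stock["growth"] > 0.10:
        return "buy"
    if stock["pe_ratio"] > 30:
        return "sell"
    return "no prediction"

print(relative_returns({"A": 0.05, "B": 0.01, "C": -0.03}))
print(example_rule({"pe_ratio": 10, "growth": 0.15}))
```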

In the experiment, the GA and the neural net each made forecasts at the end of the week for each one of the 1600 stocks, for twelve consecutive weeks. Twelve weeks after each prediction, the actual performance was compared with the predicted relative return. Overall, the GA significantly outperformed the neural network: in one trial run, the GA correctly predicted the direction of one stock 47.6% of the time, made no prediction 45.8% of the time, and made an incorrect prediction only 6.6% of the time, for an accuracy of 87.8% on the predictions it did make. Although the neural network made definite predictions more often, it was also wrong in its predictions more often (in fact, the authors speculate that the GA's greater ability to make no prediction when the data were uncertain was a factor in its success; the neural net always produces a prediction unless explicitly restricted by the programmer). In the 1600-stock experiment, the GA produced a relative return of +5.47%, versus +4.40% for the neural net - a statistically significant difference. In fact, the GA also significantly outperformed three major stock market indices - the S&P 500, the S&P 400, and the Russell 2000 - over this period; chance was excluded as the cause of this result at the 95% confidence level. The authors attribute this compelling success to the ability of the genetic algorithm to learn nonlinear relationships not readily apparent to human observers, as well as to the fact that it lacks a human expert's "a priori bias against counterintuitive or contrarian rules" (p.562).

Similar success was achieved by Andreou, Georgopoulos and Likothanassis 2002, who used hybrid genetic algorithms to evolve neural networks that predicted the exchange rates of foreign currencies up to one month ahead. As opposed to the last example, where GAs and neural nets were in competition, here the two worked in concert, with the GA evolving the architecture (number of input units, number of hidden units, and the arrangement of the links between them) of the network which was then trained by a filter algorithm.
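A rough sketch of this division of labor, with illustrative names and a stubbed-out training step standing in for the real fitting procedure:

```python
import random

def random_architecture():
    n_in = random.randint(1, 20)       # how many lagged exchange-rate inputs
    n_hidden = random.randint(1, 15)   # hidden-layer size
    links = [[random.random() < 0.8 for _ in range(n_in)]
             for _ in range(n_hidden)] # which input-to-hidden links exist
    return {"n_in": n_in, "n_hidden": n_hidden, "links": links}

def train_and_score(arch):
    # Hypothetical stub: really this would train the described network on
    # currency history (the authors used a filter-based training step) and
    # return its negative validation error.
    return -abs(arch["n_hidden"] - 8) - abs(arch["n_in"] - 10)

def mutate(arch):
    n_in = max(1, arch["n_in"] + random.choice((-1, 0, 1)))
    n_hidden = max(1, arch["n_hidden"] + random.choice((-1, 0, 1)))
    links = [[random.random() < 0.8 for _ in range(n_in)]
             for _ in range(n_hidden)]
    return {"n_in": n_in, "n_hidden": n_hidden, "links": links}

pop = [random_architecture() for _ in range(30)]
for gen in range(50):
    pop.sort(key=train_and_score, reverse=True)
    pop = pop[:15] + [mutate(p) for p in pop[:15]]   # breed the best half
```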

As historical information, the algorithm was given 1300 previous raw daily values of five currencies - the American dollar, the German deutsche mark, the French franc, the British pound, and the Greek drachma - and asked to predict their future values 1, 2, 5, and 20 days ahead. The hybrid GA's performance, in general, showed a "remarkable level of accuracy" (p.200) in all cases tested, outperforming several other methods including neural networks alone. Correlations for the one-day case ranged from 92 to 99%, and though accuracy decreased over increasingly greater time lags, the GA continued to be "quite successful" (p.206) and clearly outperformed the other methods. The authors conclude that "remarkable prediction success has been achieved in both a one-step ahead and a multistep predicting horizon" (p.208) - in fact, they state that their results are better by far than any related predictive strategies attempted on this data series or other currencies.

The uses of GAs on the financial markets have begun to spread into real-world brokerage firms. Naik 1996 reports that LBS Capital Management, an American firm headquartered in Florida, uses genetic algorithms to pick stocks for a pension fund it manages. Coale 1997 and Begley and Beals 1995 report that First Quadrant, an investment firm in California that manages over $2.2 billion, uses GAs to make investment decisions for all of its financial services. Its evolved model earns, on average, $255 for every $100 invested over six years, as opposed to $205 for other types of modeling systems.
Game playing
One of the most novel and compelling demonstrations of the power of genetic algorithms was presented by Chellapilla and Fogel 2001, who used a GA to evolve neural networks that could play the game of checkers. The authors state that one of the major difficulties in these sorts of strategy-related problems is the credit assignment problem - in other words, how does one write a fitness function? It has been widely believed that the mere criterion of win, lose or draw does not provide sufficient information for an evolutionary algorithm to figure out what constitutes good play.

In this paper, Chellapilla and Fogel overturn that assumption. Given only the spatial positions of pieces on the checkerboard and the total number of pieces possessed by each side, they were able to evolve a checkers program that plays at a level competitive with human experts, without any intelligent input as to what constitutes good play - indeed, the individuals in the evolutionary algorithm were not even told what the criteria for a win were, nor were they told the result of any one game.

In Chellapilla and Fogel's representation, the game state was described by a numeric list of 32 elements, each position in the list corresponding to an available position on the board. The value at each position was 0 for an unoccupied square, -1 if that square was occupied by an enemy checker, +1 if it was occupied by one of the program's own checkers, and -K or +K for a square occupied by an enemy or friendly king. (The value of K was not pre-specified, but was determined by evolution over the course of the algorithm.) Accompanying this was a neural network with multiple processing layers and one input layer with a node for each of the possible 4x4, 5x5, 6x6, 7x7 and 8x8 subsquares of the board. The output of the neural net for any given arrangement of pieces was a value between -1 and +1 indicating how favorable it judged that position to be for itself. For each move, the neural network was presented with a game tree listing all possible moves up to four turns into the future, and a move decision was made based on which branch of the tree produced the best results.
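This encoding is simple enough to state directly; the sketch below assumes a 32-element list of labeled squares and is illustrative rather than the authors' code.

```python
def encode_board(squares, K):
    """squares: 32 labels, one per playable square, from
    {'', 'own', 'enemy', 'own_king', 'enemy_king'}."""
    base = {"": 0.0, "own": 1.0, "enemy": -1.0, "own_king": K, "enemy_king": -K}
    return [base[s] for s in squares]

# opening position: 12 checkers per side, 8 empty squares between them
board = ["enemy"] * 12 + [""] * 8 + ["own"] * 12
print(encode_board(board, K=1.4))   # K itself was an evolved parameter
```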

The evolutionary algorithm began with a population of 15 neural networks with randomly generated weights and biases assigned to each node and link; each individual then reproduced once, generating an offspring with variations in the values of the network. These 30 individuals then competed for survival by playing against each other, with each individual playing against 5 randomly chosen opponents in each generation. One point was awarded for each win, and two points were deducted for each loss. The 15 best performers, based on total score, were selected to produce offspring for the next generation, and the process repeated. Evolution was continued for 840 generations (approximately six months of computer time).
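The selection scheme can be summarized in a short sketch; play_game is a placeholder for an actual game between two evolved networks, and the loop below mirrors the 15-parent, 5-opponent, +1/-2 scoring scheme just described.

```python
import random

def play_game(a, b):
    # Hypothetical stand-in: +1 if a wins, -1 if b wins, 0 for a draw.
    return random.choice((1, -1, 0))

def evolve(pop, mutate, generations):
    for gen in range(generations):                 # 840 in the original run
        everyone = pop + [mutate(p) for p in pop]  # each parent spawns one child
        score = {id(n): 0 for n in everyone}
        for n in everyone:
            # five randomly chosen opponents (self-pairing is not excluded
            # in this simplified sketch)
            for opp in random.sample(everyone, 5):
                result = play_game(n, opp)
                if result > 0:
                    score[id(n)] += 1              # +1 per win
                elif result < 0:
                    score[id(n)] -= 2              # -2 per loss
        everyone.sort(key=lambda n: score[id(n)], reverse=True)
        pop = everyone[:15]                        # top 15 parent the next round
    return pop

nets = [[random.random() for _ in range(10)] for _ in range(15)]  # stand-in "networks"
best = evolve(nets, lambda n: [w + random.gauss(0, 0.05) for w in n], generations=10)
```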
Class           Rating
Senior Master   2400+
Master          2200-2399
Expert          2000-2199
Class A         1800-1999
Class B         1600-1799
Class C         1400-1599
Class J         < 200

The single best individual that emerged from this selection was entered as a competitor on the gaming website www.zone.com. Over a period of two months, it played against 165 human opponents comprising a range of high skill levels, from class C to master, according to the ranking system of the United States Chess Federation (shown above, some ranks omitted for clarity). Of these games, the neural net won 94, lost 39 and drew 32; based on the rankings of the opponents in these games, the evolved neural net was equivalent to a player with a mean rating of 2045.85, placing it at the expert level - a higher ranking than 99.61% of over 80,000 players registered at the website. One of the neural net's most significant victories came when it defeated a player ranked 98th out of all registered players, whose rating was just 27 points below master level.

Tests conducted with a simple piece-differential program (which bases moves solely on the difference between the number of checkers remaining to each side) with an eight-move look-ahead showed the neural net to be significantly superior, with a rating over 400 points higher. "A program that relies only on the piece count and an eight-ply search will defeat a lot of people, but it is not an expert. The best evolved neural network is" (p.425). Even when it was searching positions two further moves ahead than the neural net, the piece-differential program lost decisively in eight out of ten games. This conclusively demonstrates that the evolved neural net is not merely counting pieces, but is somehow processing spatial characteristics of the board to decide its moves. The authors point out that opponents on zone.com often commented that the neural net's moves were "strange", but its overall level of play was described as "very tough" or with similar complimentary terms.

To further test the evolved neural network (which the authors named "Anaconda", since it often won by restricting its opponents' mobility), it was played against a commercial checkers program, Hoyle's Classic Games, distributed by Sierra Online (Chellapilla and Fogel 2000). This program comes with a variety of built-in characters, each of whom plays at a different skill level. Anaconda was tested against three characters ("Beatrice", "Natasha" and "Leopold") designated as expert-level players, playing one game as red and one game as white against each of them with a six-ply look-ahead. Though the authors doubted that this depth of look-ahead would give Anaconda the ability to play at the expert skill level it had previously shown, it won six straight victories out of all six games played. Based on this outcome, the authors expressed skepticism over whether the Hoyle software played at the skill level advertised, though it should be noted that they reached this conclusion based solely on the ease with which Anaconda defeated it!

The ultimate test of Anaconda came in Chellapilla and Fogel 2002, where the evolved neural net was matched against the best checkers player in the world: Chinook, a program designed principally by Dr. Jonathan Schaeffer of the University of Alberta. Rated at 2814 in 1996 (with its closest human competitors rated in the 2600s), Chinook incorporates a book of opening moves provided by human grandmasters, a sophisticated set of middle-game algorithms, and a complete database of all possible moves with ten or fewer pieces on the board, so it never makes a mistake in the endgame.
An enormous amount of human intelligence and expertise went into the design of this program. Chellapilla and Fogel pitted Anaconda against Chinook in a 10-game tournament, with Chinook playing at a 5-ply skill level, making it roughly equivalent to master level. Chinook won this contest, four wins to two, with four draws. (Interestingly, the authors note, in two of the games that ended as draws, Anaconda held the lead with four kings to Chinook's three. Furthermore, one of Chinook's wins came from a 10-ply series of moves drawn from its endgame database, which Anaconda with an 8-ply look-ahead could not have anticipated. If Anaconda had had access to an endgame database of the same quality as Chinook's, the outcome of the tournament might well have been a victory for Anaconda, four wins to three.) These results "provide good support for the expert-level rating that Anaconda earned on www.zone.com" (p.76), with an overall rating of 2030-2055, comparable to the 2045 rating it earned by playing against humans. While Anaconda is not an invulnerable player, it is able to play competitively at the expert level and hold its own against a variety of extremely skilled human checkers players. When one considers the very simple fitness criterion under which these results were obtained, the emergence of Anaconda provides dramatic corroboration of the power of evolution.
Geophysics
Sambridge and Gallagher 1993 used a genetic algorithm to locate earthquake hypocenters based on seismological data. (The hypocenter is the point beneath the Earth's surface at which an earthquake begins; the epicenter is the point on the surface directly above the hypocenter.) This is an exceedingly complex task, since the properties of seismic waves depend on the properties of the rock layers through which they travel. The traditional method for locating the hypocenter relies upon what is known as a seismic inversion algorithm, which starts with a best guess of the location, calculates the derivatives of wave travel time with respect to source position, and performs a matrix operation to provide an updated location. This process is repeated until an acceptable solution is reached. (This Post of the Month, from November 2003, provides more information.)

However, this method requires derivative information and is prone to becoming trapped on local optima. A location algorithm that does not depend on derivative information or velocity models can avoid these shortfalls by calculating only the forward problem - the difference between observed and predicted wave arrival times for different hypocenter locations. However, an exhaustive search based on this method would be far too computationally expensive. This, of course, is precisely the type of optimization problem at which genetic algorithms excel. Like all GAs, the one proposed by the cited paper is parallel in nature - rather than progressively perturbing a single hypocenter closer and closer to the solution, it begins with a cloud of potential hypocenters which shrinks over time to converge on a single solution. The authors state that their approach "can rapidly locate near optimal solutions without an exhaustive search of the parameter space" (p.1467), displays "highly organized behavior resulting in efficient search" and is "a compromise between the efficiency of derivative based methods and the robustness of a fully nonlinear exhaustive search" (p.1469).
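The forward-problem fitness at the heart of this approach can be sketched with a toy uniform-velocity earth model (real implementations use layered velocity models); the shrinking "cloud" is simply the population itself.

```python
import math, random

def travel_time(src, station, v=6.0):   # straight-ray, uniform-velocity toy model
    return math.dist(src, station) / v

def misfit(src, stations, observed):    # forward problem: predicted vs. observed
    return sum((travel_time(src, s) - t) ** 2 for s, t in zip(stations, observed))

stations = [(0, 0, 0), (40, 0, 0), (0, 40, 0), (40, 40, 0)]
true_src = (22.0, 15.0, 8.0)
observed = [travel_time(true_src, s) for s in stations]

# a cloud of candidate hypocenters that shrinks toward the best fit
cloud = [(random.uniform(0, 50), random.uniform(0, 50), random.uniform(0, 30))
         for _ in range(100)]
for gen in range(100):
    cloud.sort(key=lambda p: misfit(p, stations, observed))
    parents = cloud[:50]
    cloud = parents + [tuple(c + random.gauss(0, 2.0) for c in random.choice(parents))
                       for _ in range(50)]
cloud.sort(key=lambda p: misfit(p, stations, observed))
print(cloud[0])   # converges near the true source
```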
The authors conclude that their genetic algorithm is "efficient for truly global optimization" (p.1488) and "a powerful new tool for performing robust hypocenter location" (p.1489).
Materials engineering
Giro, Cyrillo and Galvão 2002 used genetic algorithms to design electrically conductive carbon-based polymers known as polyanilines. These polymers, a recently invented class of synthetic materials, have "large technological potential applications" and may open up windows onto "new fundamental physical phenomena" (p.170). However, due to their high reactivity, carbon atoms can form a virtually infinite number of structures, making a systematic search for new molecules with interesting properties all but impossible. In this paper, the authors apply a GA-based approach to the task of designing new molecules with pre-specified properties, starting with a randomly generated population of initial candidates. They conclude that their methodology can be a "very effective tool" (p.174) to guide experimentalists in the search for new compounds and is general enough to be extended to the design of novel materials belonging to virtually any class of molecules.

Weismann, Hammel and Bäck 1998 applied evolutionary algorithms to a "nontrivial" (p.162) industrial problem: the design of multilayer optical coatings used for filters that reflect, transmit or absorb light of specified frequencies. These coatings are used in the manufacture of sunglasses, for example, or compact discs. Their manufacture is a precise task: the layers must be laid down in a particular sequence and in particular thicknesses to produce the desired result, and uncontrollable variations in the manufacturing environment such as temperature, pollution and humidity may affect the performance of the finished product. Many local optima are not robust against such variations, meaning that maximum product quality must be paid for with higher rates of undesirable deviation. The particular problem considered in this paper also had multiple criteria: in addition to the reflectance, the spectral composition (color) of the reflected light was also considered. The EA operated by varying the number of coating layers and the thickness of each, and produced designs that were "substantially more robust to parameter variation" (p.166) and had higher average performance than traditional methods. The authors conclude that "evolutionary algorithms can compete with or even outperform traditional methods" (p.167) of multilayer optical coating design, without having to incorporate domain-specific knowledge into the search function and without having to seed the population with good initial designs.

One more use of GAs in the field of materials engineering merits mention: Robin et al. 2003 used GAs to design exposure patterns for an electron lithography beam, used to etch submicrometer-scale structures onto integrated circuits. Designing these patterns is a highly difficult task; it is cumbersome and wasteful to determine them experimentally, but the high dimensionality of the search space defeats most search algorithms. As many as 100 parameters must be optimized simultaneously to control the electron beam and prevent the scattering and proximity effects that would otherwise ruin the fine structures being sculpted.
The forward problem - determining the resulting structure as a function of the electron dose - is straightforward and easy to simulate, but the inverse problem of determining the electron dose that produces a given structure, which is what is being solved here, is far harder, and no deterministic solution exists. Genetic algorithms, which are "known to be able to find good solutions to very complex problems of high dimensionality" (p.75) without needing to be supplied with domain-specific information on the topology of the search landscape, were applied successfully to this problem. The paper's authors employed a steady-state GA with roulette-wheel selection in a computer simulation, which yielded "very good optimized" (p.77) exposure patterns. By contrast, a type of hill-climber known as a simplex-downhill algorithm was applied to the same problem, without success; the SD method quickly became trapped in local optima which it could not escape, yielding solutions of poor quality. A hybrid approach combining the GA and SD methods also could not improve on the results delivered by the GA alone.
Mathematics and algorithmics
Although some of the most promising applications and compelling demonstrations of GAs' power are in the field of engineering design, they are also relevant to "pure" mathematical problems. Haupt and Haupt 1998 (p.140) discuss the use of GAs to solve high-order nonlinear partial differential equations, typically by finding the values for which the equations equal zero, and give as an example a near-perfect GA solution for the coefficients of the fifth-order Super Korteweg-de Vries equation.

Sorting a list of items into order is an important task in computer science, and a sorting network is an efficient way to accomplish it. A sorting network is a fixed list of comparisons performed on a set of a given size; in each comparison, two elements are compared and exchanged if they are out of order. Koza et al. 1999, p.952, used genetic programming to evolve minimal sorting networks for 7-item sets (16 comparisons), 8-item sets (19 comparisons), and 9-item sets (25 comparisons). Mitchell 1996, p.21, discusses the use of genetic algorithms by W. Daniel Hillis to find a 61-comparison sorting network for a 16-item set, only one step more than the smallest known. This latter example is particularly interesting for two innovations it used: diploid chromosomes and, more notably, host-parasite coevolution. Both the sorting networks and the test cases evolved alongside each other; sorting networks were given higher fitness based on how many test cases they sorted correctly, while test cases were given higher fitness based on how many sorting networks they could "trick" into sorting incorrectly. The GA with coevolution performed significantly better than the same GA without it.
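Evaluating a candidate sorting network as a fitness function is straightforward; the sketch below uses the zero-one principle (a network that sorts every binary input sorts every input) and a known 5-comparison network for 4 items, not one of the evolved networks.

```python
from itertools import product

def apply_network(network, items):
    items = list(items)
    for i, j in network:             # compare-exchange each wired pair in order
        if items[i] > items[j]:
            items[i], items[j] = items[j], items[i]
    return items

def fitness(network, n):
    # fraction of all binary inputs sorted correctly (zero-one principle)
    cases = list(product((0, 1), repeat=n))
    return sum(apply_network(network, c) == sorted(c) for c in cases) / len(cases)

net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]  # known optimal 4-input network
print(fitness(net4, 4))                          # 1.0: sorts every input
```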
One final, noteworthy example of GAs in the field of algorithmics can be found in Koza et al. 1999, who used genetic programming to discover a rule for the majority classification problem in one-dimensional cellular automata that is better than all known rules written by humans. A one-dimensional cellular automaton can be thought of as a finite tape with a given number of positions (cells) on it, each of which can hold either the state 0 or the state 1. The automaton runs for a given number of time steps; at each step, every cell acquires a new value based on its previous value and the values of its nearest neighbors. (The Game of Life is a two-dimensional cellular automaton.) The majority classification problem entails finding a table of rules such that, if more than half the cells on the tape are 1 initially, all the cells go to 1; otherwise all the cells go to 0. The challenge lies in the fact that any individual cell can only access information about its nearest neighbors; therefore, good rule sets must somehow find a way to transmit information about distant regions of the tape.

It is known that a perfect solution to this problem does not exist - no rule set can accurately classify all possible initial configurations - but over the past twenty years, there has been a long succession of increasingly better solutions. In 1978, three researchers developed the so-called GKL rule, which correctly classifies 81.6% of the possible initial states. In 1993, a better rule with an accuracy of 81.8% was discovered; in 1995, another rule with an accuracy of 82.178% was found. All of these rules required significant work by intelligent, creative humans to develop. By contrast, the best rule discovered by a run of genetic programming, given in Koza et al. 1999, p.973, has an overall accuracy of 82.326% - better than any of the human-created solutions developed over the last two decades. The authors note that their new rules are qualitatively different from previously published rules, employing fine-grained internal representations of state density and intricate sets of signals for communicating information over long distances.
Military and law enforcement
Kewley and Embrechts 2002 used genetic algorithms to evolve tactical plans for military battles. The authors note that "[p]lanning for a tactical military battle is a complex, high-dimensional task which often bedevils experienced professionals" (p.163), not only because such decisions are usually made under high-stress conditions, but also because even simple plans require a great number of conflicting variables and outcomes to be taken into account: minimizing friendly casualties, maximizing enemy casualties, controlling desired terrain, conserving resources, and so on. Human planners have difficulty dealing with the complexities of this task and often must resort to "quick and dirty" approaches, such as doing whatever worked last time.

To overcome these difficulties, the authors of the cited paper developed a genetic algorithm to automate the creation of battle plans, in conjunction with a graphical battle simulator program. The commander enters the preferred outcome, and the GA automatically evolves a battle plan; in the simulation used, factors such as the topography of the land, vegetative cover, troop movement speed, and firing accuracy were taken into account. In this experiment, coevolution was also used to improve the quality of the solutions: battle plans for the enemy forces evolved concurrently with friendly plans, forcing the GA to correct any weaknesses in its own plan that an enemy could exploit. To measure the quality of the solutions produced by the GA, they were compared to battle plans for the same scenario produced by a group of "experienced military experts... considered to be very capable of developing tactical courses of action for the size forces used in this experiment" (p.166). These seasoned experts both developed their own plan and, when the GA's solution was complete, were given a chance to examine it and modify it as they saw fit. Finally, all the sets of plans were run multiple times on the simulator to determine their average quality.
The results speak for themselves: the evolved solution outperformed both the military experts' own plan and the plan produced by their modifications to the GA's solution. "...[T]he plans produced by automated algorithms had a significantly higher mean performance than those generated by experienced military experts" (p.161). Furthermore, the authors note that the GA's plan made good tactical sense. (It involved a two-pronged attack on the enemy position by mechanized infantry platoons supported by attack helicopters and ground scouts, in conjunction with unmanned aerial vehicles conducting reconnaissance to direct artillery fire.) In addition, the evolved plan included individual friendly units performing doctrinal missions - an emergent property that appeared during the course of the run rather than being specified by the experimenters. On increasingly networked modern battlefields, the appeal of an evolutionary algorithm that can automate the production of high-quality tactical plans should be obvious.

An interesting use of GAs in law enforcement was reported in Naik 1996, which described the "FacePrints" software, a project to help witnesses identify and describe criminal suspects. The clichéd approach of the police sketch artist drawing a picture of the suspect's face in response to witnesses' promptings is difficult and inefficient: most people are not good at describing individual aspects of a person's face, such as the size of the nose or the shape of the jaw, but are much better at recognizing whole faces. FacePrints takes advantage of this by using a genetic algorithm that evolves pictures of faces based on databases of hundreds of individual features that can be combined in a vast number of ways. The program shows randomly generated face images to witnesses, who pick the ones that most resemble the person they saw; the selected faces are then mutated and bred together to generate new combinations of features, and the process repeats until an accurate portrait of the suspect's face emerges. In one real-life robbery case, the final portraits created by the three witnesses were strikingly similar, and the resulting picture was printed in the local paper.
Molecular biology
In living things, transmembrane proteins are proteins that protrude through a cellular membrane. Transmembrane proteins often perform important functions such as sensing the presence of certain substances outside the cell or transporting them into the cell. Understanding the behavior of a transmembrane protein requires identifying the segment of that protein that is actually embedded within the membrane, known as the transmembrane domain. Over the last two decades, molecular biologists have published a succession of increasingly accurate algorithms for this purpose.

All proteins used by living things are made up of the same 20 amino acids. Some of these amino acids are hydrophobic, meaning they are repelled by water, and some are hydrophilic, meaning they are attracted to water. Amino acid sequences that are part of a transmembrane domain are more likely to be hydrophobic. However, hydrophobicity is not a precisely defined characteristic, and there is no single agreed-upon scale for measuring it. Koza et al. 1999, chapter 16, used genetic programming to design an algorithm to identify the transmembrane domains of a protein.
Genetic programming was given a set of standard mathematical operators to work with, as well as a set of boolean amino-acid-detecting functions that return +1 if the amino acid at a given position is the amino acid they detect and otherwise return -1. (For example, the A? function takes as an argument one number corresponding to a position within the protein, and returns +1 if the amino acid at that position is alanine, which is denoted by the letter A; otherwise it returns -1.) A single shared memory variable kept a running count of the overall sum, and when the algorithm completed, the protein segment was classified as a transmembrane domain if that variable's value was positive. Given only these tools, would a human designer who produced an efficient solution to this problem be creating new information?

The solutions produced by genetic programming were evaluated for fitness by testing them on 246 protein segments whose transmembrane status was known. The best-of-run individual was then evaluated on 250 additional, out-of-sample test cases and compared to the performance of the four best known human-written algorithms for the same purpose. The result: genetic programming produced a transmembrane-segment-identifying algorithm with an overall error rate of 1.6% - significantly lower than all four human-written algorithms, the best of which had an error rate of 2.5%. The genetically designed algorithm, which the authors dubbed the 0-2-4 rule, operates as follows:

Increment the running sum by 4 for each instance of glutamic acid (an electrically charged and highly hydrophilic amino acid) in the protein segment.
Increment the running sum by 0 for each instance of alanine, phenylalanine, isoleucine, leucine, methionine, or valine (all highly hydrophobic amino acids) in the protein segment.
Increment the running sum by 2 for each instance of all other amino acids.
If [(SUM - 3.1544)/0.9357] is less than the length of the protein segment, classify that segment as a transmembrane domain; otherwise, classify it as a nontransmembrane domain.
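Since the rule is stated explicitly, it can be transcribed directly into code; the sketch below uses standard one-letter amino acid abbreviations, and the example segments are invented for illustration.

```python
def is_transmembrane(segment):
    total = 0
    for aa in segment:
        if aa == "E":            # glutamic acid: charged, highly hydrophilic
            total += 4
        elif aa in "AFILMV":     # the highly hydrophobic residues
            total += 0
        else:                    # all other amino acids
            total += 2
    return (total - 3.1544) / 0.9357 < len(segment)

print(is_transmembrane("LLLIVAVMFLAFILVVALI"))  # hydrophobic-rich: True
print(is_transmembrane("EEKRDEQENSTEEKRDEQ"))   # charged/polar: False
```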
Pattern recognition and data mining
Competition in the telecommunications industry today is fierce, and a new term - "churn" - has been coined to describe the rapid rate at which subscribers switch from one service provider to another. Churn costs telecom carriers a large amount of money each year, and reducing churn is an important factor in increasing profitability. If carriers can contact customers who are likely to switch and offer them special incentives to stay, churn rates can be reduced; but no carrier has the resources to contact more than a small percentage of its customers. The problem is therefore how to identify the customers who are most likely to churn. All carriers have extensive databases of customer information that can theoretically be used for this purpose; but what method works best for sifting through this vast amount of data to identify the subtle patterns and trends that signify a customer's likelihood of churning?

Au, Chan and Yao 2003 applied genetic algorithms to this problem to generate a set of if-then rules that predict the churning probability of different groups of customers. In their GA, the first generation of rules, all of which had one condition, was generated using a probabilistic induction technique. Subsequent generations then refined these rules, combining simple, single-condition rules into more complex, multi-condition rules. The fitness measure used an objective "interestingness" measure of correlation which requires no subjective input. The evolutionary data-mining algorithm was tested on a real-world database of 100,000 subscribers provided by a Malaysian carrier, and its performance was compared against two alternative methods: a multilayer neural network and a widely used decision-tree-based algorithm, C4.5. The authors state that their EA was able to discover hidden regularities in the database and was "able to make accurate churn prediction under different churn rates" (p.542), outperforming C4.5 under all circumstances, outperforming the neural network under low monthly churn rates and matching it under higher churn rates, and reaching conclusions more quickly in both cases. Some further advantages of the evolutionary approach are that it can operate efficiently even when some data fields are missing and that it can express its findings in easily understood rule sets, unlike the neural net.

Among the more interesting rules discovered by the EA are the following: subscribers are more likely to churn if they are personally subscribed to the service plan and have not been admitted to any bonus scheme (a potential solution is to admit all such subscribers to bonus schemes); subscribers are more likely to churn if they live in Kuala Lumpur, are between 36 and 44 years of age, and pay their bills with cash (presumably because it is easier for subscribers who pay cash, rather than those whose accounts are automatically debited, to switch providers); and subscribers living in Penang who signed up through a certain dealer are more likely to churn (this dealer may be providing poor customer service and should be investigated).

Rizki, Zmuda and Tamburino 2002 used evolutionary algorithms to evolve a complex pattern recognition system with a wide variety of potential uses. The authors note that the task of pattern recognition is increasingly being performed by machine learning algorithms, evolutionary algorithms in particular. Most such approaches begin with a pool of predefined features, from which an EA can select appropriate combinations for the task at hand; by contrast, this approach began from the ground up, first evolving individual feature detectors in the form of expression trees, then evolving cooperative combinations of those detectors to produce a complete pattern recognition system. The evolutionary process automatically selects the number of feature detectors, the complexity of the detectors, and the specific aspects of the data each detector responds to.

To test their system, the authors gave it the task of classifying aircraft based on their radar reflections. The same kind of aircraft can return quite different signals depending on the angle and elevation at which it is viewed, and different kinds of aircraft can return very similar signals, so this is a non-trivial task. The evolved pattern recognition system correctly classified 97.2% of the targets, a higher net percentage than any of the three other techniques - a perceptron neural network, a nearest-neighbor classifier algorithm, and a radial basis algorithm - against which it was tested. (The radial basis network's accuracy was only 0.5% less than that of the evolved classifier, not a statistically significant difference, but the radial basis network required 256 feature detectors while the evolved recognition system used only 17.)
As the authors state, "The recognition systems that evolve use fewer features than systems formed using conventional techniques, yet achieve comparable or superior recognition accuracy" (p.607). Various aspects of their system have also been applied to problems including optical character recognition, industrial inspection and medical image analysis.

Hughes and Leyland 2000 also applied multiple-objective GAs to the task of classifying targets based on their radar reflections. High-resolution radar cross-section data requires massive amounts of disk storage space, and producing an actual model of the source from the data is very computationally intensive. By contrast, the authors' GA-based approach proved very successful, producing a model as good as the traditional iterative approach while reducing the computational overhead and storage requirements to the point where good models could feasibly be generated on a desktop computer. The traditional iterative approach, by comparison, requires ten times the resolution and 560,000 times as many accesses of image data to produce models of similar quality. The authors conclude that their results "clearly demonstrate" (p.160) the ability of the GA to process both two- and three-dimensional radar data of any level of resolution with far fewer calculations than traditional methods, while retaining acceptably high accuracy.
Robotics
The international RoboCup tournament is a project intended to promote advances in robotics, artificial intelligence, and related fields by providing a standard problem where new technologies can be tried out - specifically, an annual soccer tournament between teams of autonomous robots. (The stated goal is to develop a team of humanoid robots that can win against the world-champion human soccer team by 2050; at present, most of the competing robot teams are wheeled.) The programs controlling the robotic team members must display complex behavior, deciding when to block, when to kick, how to move, when to pass the ball to teammates, how to coordinate defense and offense, and so on. In the simulator league of the 1997 competition, David Andre and Astro Teller entered a team named Darwin United whose control programs had been developed automatically, from the ground up, by genetic programming - a challenge to the conventional wisdom that "this problem is just too difficult for such a technique" (Andre and Teller 1999, p.346).

To solve this difficult problem, Andre and Teller provided the genetic programming algorithm with a set of primitive control functions such as turning, moving, kicking, and so on. (These functions were themselves subject to change and refinement during the course of evolution.) Their fitness function, written to reward good play in general rather than scoring specifically, provided a list of increasingly important objectives: getting near the ball, kicking the ball, keeping the ball on the opponent's side of the field, moving in the correct direction, scoring goals, and winning the game. It should be noted that no code was provided to teach the team specifically how to achieve these complex objectives. The evolved programs were then evaluated using a hierarchical selection model: first, candidate teams were tested on an empty field and rejected if they did not score a goal within 30 seconds. Next, they were evaluated against a team of stationary "kicking posts" that kick the ball toward the opposite side of the field.
Thirdly, the team played a game against the winning team from the RoboCup 1997 competition. Finally, teams that scored at least one goal against this team were played off against each other to determine which was best. Out of 34 teams in its division, Darwin United ultimately came in 17th, placing squarely in the middle of the field and outranking half of the human-written entries. While a tournament victory would undoubtedly have been more impressive, this result is competitive and significant in its own right, and appears even more so in the light of history. About 25 years ago, chess-playing computer programs were in their infancy; a computer had only recently entered even a regional competition for the first time, although it did not win (Sagan 1979, p.286). But "[a] machine that plays chess in the middle range of human expertise is a very capable machine" (ibid.), and it might be said that the same is true of robot soccer. Just as chess-playing machines compete at world grandmaster levels today, what kinds of systems will genetic programming be producing 20 or 30 years from now?
Routing and scheduling
Burke and Newall 1999 used genetic algorithms to schedule exams among university students. The timetable problem in general is known to be NP-complete, meaning that no method is known to find a guaranteed-optimal solution in a reasonable amount of time. In such a problem there are both hard constraints - two exams may not be assigned to the same room at the same time - and soft constraints - students should not, if possible, be assigned to multiple exams in succession, to minimize fatigue. Hard constraints must be satisfied, while soft constraints should be satisfied as far as possible. The authors dub their hybrid approach to solving this problem a "memetic algorithm": an evolutionary algorithm with rank-based, fitness-proportionate selection, combined with a local hill-climber to optimize solutions found by the EA. The EA was applied to data sets from four real universities (the smallest of which had an enrollment of 25,000 students), and its results were compared to those produced by a heuristic backtracking method, a well-established algorithm that is among the best known for this problem and that is used at several real universities. Compared to this method, the EA produced a quite uniform 40% reduction in penalty.

He and Mort 2000 applied genetic algorithms to the problem of finding optimal routing paths in telecommunications networks (such as phone networks and the Internet) which are used to relay data from senders to recipients. This is an NP-hard optimization problem, a type of problem for which GAs are "extremely well suited... and have found an enormous range of successful applications in such areas" (p.42). It is also a multiobjective problem, balancing conflicting objectives such as maximizing data throughput, minimizing transmission delay and data loss, finding low-cost paths, and distributing the load evenly among the routers or switches in the network. Any successful real-world algorithm must also be able to re-route around primary paths that fail or become congested. In the authors' hybrid GA, a shortest-path-first algorithm, which minimizes the number of "hops" a given data packet must pass through, is used to generate the seed for the initial population. However, this solution does not take into account link congestion or failure, which are inevitable conditions in real networks, and so the GA takes over, swapping and exchanging sections of paths.
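One way to "swap and exchange sections of paths" while keeping every offspring a valid route is to splice two parent paths at a node they share. The sketch below is a generic illustration of that idea, not He and Mort's actual operator, and a real implementation would still need to check the spliced path against the actual network topology.

```python
import random

def path_crossover(p1, p2):
    """Splice two routes at a shared intermediate node, then remove any loop."""
    shared = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not shared:
        return p1[:]                         # no common node: keep the parent
    node = random.choice(shared)
    child = p1[:p1.index(node)] + p2[p2.index(node):]
    out = []
    for n in child:                          # cut back to the first occurrence
        if n in out:
            out = out[:out.index(n) + 1]
        else:
            out.append(n)
    return out

a = ["src", "r1", "r2", "r5", "dst"]
b = ["src", "r3", "r2", "r4", "dst"]
print(path_crossover(a, b))   # e.g. ['src', 'r1', 'r2', 'r4', 'dst']
```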
When tested on a data set derived from a real Oracle network database, the GA was found to be able to efficiently route around broken or congested links, balancing traffic load and maximizing total network throughput. The authors state that these results demonstrate the "effectiveness and scalability" of the GA and show that "optimal or near-optimal solutions can be achieved" (p.49). This technique has found real-world applications for similar purposes, as reported in Begley and Beals 1995. The telecommunications company U.S. West (now merged with Qwest) was faced with the task of laying a network of fiber-optic cable. Until recently, the problem of designing the network to minimize the total length of cable laid was solved by an experienced engineer; now the company uses a genetic algorithm to perform the task automatically. The results: "Design time for new networks has fallen from two months to two days and saves US West $1 million to $10 million each" (p.70).

Jensen 2003 and Chryssolouris and Subramaniam 2001 applied genetic algorithms to the task of generating schedules for job shops. This is an NP-hard optimization problem with multiple criteria: factors such as cost, tardiness and throughput must all be taken into account, and job schedules may have to be rearranged on the fly due to machine breakdowns, employee absences, delays in the delivery of parts, and other complications, making robustness an important property of a schedule. Both papers concluded that GAs are significantly superior to commonly used dispatching rules, producing efficient schedules that can more easily handle delays and breakdowns. These results are not merely theoretical, but have been applied to real-world situations:

As reported in Naik 1996, organizers of the 1992 Paralympic Games used a GA to schedule events.

As reported in Petzinger 1995, John Deere & Co. has used GAs to generate schedules for a Moline, Illinois plant that manufactures planters and other heavy agricultural equipment. Like luxury cars, these can be built in a wide variety of configurations with many different parts and options, and the vast number of possible ways to build them made efficient scheduling a seemingly intractable problem. Productivity was hampered by scheduling bottlenecks, worker teams were bickering, and money was being lost. Finally, in 1993, Deere turned to Bill Fulkerson, a staff analyst and engineer who conceived of using a genetic algorithm to produce schedules for the plant. Overcoming initial skepticism, the GA quickly proved itself: monthly output has risen by 50 percent, overtime has nearly vanished, and other Deere plants are incorporating GAs into their own scheduling.

As reported in Rao 1998, Volvo has used an evolutionary program called OptiFlex to schedule its million-square-foot factory in Dublin, Virginia, a task that requires handling hundreds of constraints and millions of possible permutations for each vehicle. Like all genetic algorithms, OptiFlex works by randomly combining different scheduling possibilities and variables, determining their fitness by ranking them according to costs, benefits and constraints, then causing the best solutions to swap genes before sending them back into the population for another trial. Until recently, this daunting task was handled by a human engineer who took up to four days to produce the schedule for each week; now, thanks to GAs, the task can be completed in one day with minimal human intervention.
As reported in Lemley 2001, United Distillers and Vintners, a Scottish company that is the largest and most profitable spirits distributor in the world and accounts for over one-third of global grain whiskey production, uses a genetic algorithm to manage its inventory and supply. This is a daunting task, requiring the efficient storage and distribution of over 7 million barrels containing 60 distinct recipes among a vast system of warehouses and distilleries, depending on a multitude of factors such as age, malt number, wood type and market conditions. Previously, coordinating this complex flow of supply and demand required five full-time employees. Today, a few keystrokes on a computer instruct a genetic algorithm to generate a new schedule each week, and warehouse efficiency has nearly doubled.

Beasley, Sonander and Havelock 2001 used a GA to schedule airport landings at London Heathrow, the United Kingdom's busiest airport. This is a multiobjective problem that involves, among other things, minimizing delays and maximizing the number of flights while maintaining adequate separation distances between planes (the air vortices that form in a plane's wake can be dangerous to another flying too closely behind). When compared to actual schedules from a busy period at the airport, the GA was able to reduce average waiting times by 2-5%, equating to one to three extra flights taking off and landing per hour - a significant improvement. However, even greater improvements have been achieved: as reported in Wired 2002, major international airports and airlines such as Heathrow, Toronto, Sydney, Las Vegas, San Francisco, America West Airlines, AeroMexico, and Delta Airlines are using genetic algorithms to schedule takeoffs, landings, maintenance and other tasks, in the form of Ascent Technology's SmartAirport Operations Center software (see http://www.ascent.com/faq.html). Breeding and mutating solutions in the form of schedules that incorporate thousands of variables, "Ascent beats humans hands-down, raising productivity by up to 30 percent at every airport where it's been implemented."
Systems engineering
Benini and Toffolo 2002 applied a genetic algorithm to the multi-objective task of designing wind turbines used to generate electric power. This design "is a complex procedure characterized by several trade-off decisions... The decision-making process is very difficult and the design trends are not uniquely established" (p.357); as a result, there are a number of different turbine types in existence today and no agreement on which, if any, is optimal. Mutually exclusive objectives, such as maximum annual energy production and minimal cost of energy, must be taken into account. In this paper, a multi-objective evolutionary algorithm was used to find the best trade-offs between these goals, constructing turbine blades with the optimal configuration of characteristics such as tip speed, hub/tip ratio, and chord and twist distribution. In the end, the GA was able to find solutions competitive with commercial designs, as well as to more clearly elucidate the margins by which annual energy production can be improved without producing overly expensive designs.

Haas, Burnham and Mills 1997 used a multiobjective genetic algorithm to optimize the beam shape, orientation and intensity of X-ray emitters used in targeted radiotherapy to destroy cancerous tumors while sparing healthy tissue. (X-ray photons aimed at a tumor tend to be partially scattered by structures within the body, unintentionally damaging internal organs.
The challenge is to minimize this effect while maximizing the radiation dose delivered to the tumor.) Using a rank-based fitness model, the researchers began with the solution produced by the conventional method, an iterative least-squares approach, and then used the GA to modify and improve it. By constructing a model of a human body and exposing it to the beam configuration evolved by the GA, they found good agreement between the predicted and actual distributions of radiation. The authors conclude that their results "show a sparing of [healthy organs] that could not be achieved using conventional techniques" (p.1745).

Lee and Zak 2002 used a genetic algorithm to evolve a set of rules to control an automotive anti-lock braking system. While the ability of antilock brake systems to reduce stopping distance and improve maneuverability has saved many lives, the performance of an ABS is dependent on road surface conditions: for example, an ABS controller that is optimized for dry asphalt will not work as well on wet or icy roads, and vice versa. In this paper, the authors propose a GA to fine-tune an ABS controller that can identify the road surface properties (by monitoring wheel slip and acceleration) and respond accordingly, delivering the appropriate amount of braking force to maximize the wheels' traction. In testing, the genetically tuned ABS "exhibits excellent tracking properties" (p.206) and was "far superior" (p.209) to two other braking methods, quickly finding new optimal values for wheel slip when the type of terrain changes beneath a moving car and reducing total stopping distance. "The lesson we learned from our experiment... is that a GA can help to fine-tune even a well-designed controller. In our case, we already had a good solution to the problem; yet, with the help of a GA, we were able to improve the control strategy significantly. In summary, it seems that it is worthwhile to try to apply a GA, even to a well-designed controller, because there is a good chance that one can find a better set of the controller settings using GAs" (p.211).

As cited in Schechter 2000, Dr. Peter Senecal of the University of Wisconsin used small-population genetic algorithms to improve the efficiency of diesel engines. These engines work by injecting fuel into a combustion chamber filled with extremely compressed, and therefore extremely hot, air - hot enough to cause the fuel to explode and drive a piston that produces the vehicle's motive force. This basic design has changed little since it was invented by Rudolf Diesel in 1893; although vast amounts of effort have been put into making improvements, this is a very difficult task to perform analytically because it requires precise knowledge of the turbulent behavior displayed by the fuel-air mixture and simultaneous variation of many interdependent parameters. Senecal's approach, however, eschewed the use of such problem-specific knowledge and instead worked by evolving parameters such as the pressure of the combustion chamber, the timing of the fuel injections and the amount of fuel in each injection. The result: the simulation produced an improved engine that consumed 15% less fuel than a normal diesel engine and produced one-third as much nitric oxide exhaust and half as much soot. Senecal's team then built a real diesel engine according to the specifications of the evolved solution and got the same results.
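To make the shape of such a parameter-evolution run concrete, here is a minimal sketch in Python. Everything in it - the parameter names, the ranges, and especially the stand-in simulate() function - is a hypothetical illustration, not Senecal's actual model, which relied on a detailed computational simulation of engine combustion.

```python
import random

# Hypothetical parameter ranges; the real work optimized against a full
# combustion simulator, not the toy quadratic "simulate" below.
PARAM_RANGES = [(50.0, 200.0),   # injection pressure (illustrative units)
                (-20.0, 20.0),   # injection timing
                (0.1, 1.0)]      # fuel per injection (normalized)

def simulate(params):
    """Stand-in for the engine simulation: lower scores are better,
    loosely representing fuel consumption plus an emissions penalty."""
    p, t, f = params
    return (p - 120.0)**2 / 100.0 + (t - 5.0)**2 + 10.0 * (f - 0.4)**2

def random_candidate():
    return [random.uniform(lo, hi) for lo, hi in PARAM_RANGES]

def mutate(params, rate=0.3, scale=0.05):
    """Gaussian perturbation of each parameter, clipped to its range."""
    out = []
    for (lo, hi), v in zip(PARAM_RANGES, params):
        if random.random() < rate:
            v += random.gauss(0.0, (hi - lo) * scale)
        out.append(min(hi, max(lo, v)))
    return out

# A small population, in the spirit of the small-population GAs mentioned above.
population = [random_candidate() for _ in range(10)]
for generation in range(100):
    parents = sorted(population, key=simulate)[:3]  # truncation selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(len(population) - len(parents))]

print(min(population, key=simulate))
```

The structure is the point: the loop knows nothing about combustion chemistry, only how to vary parameters and keep the variants the simulator scores best.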
Senecal is now moving on to evolving the geometry of the engine itself, in the hope of producing even greater improvements.

As cited in Begley and Beals 1995, Texas Instruments used a genetic algorithm to optimize the layout of components on a computer chip, placing structures so as to minimize the overall area and create the smallest chip possible. Using a connection strategy that no human had thought of, the GA came up with a design that took 18% less space.

Finally, as cited in Ashley 1992, a proprietary software system known as Engineous that employs genetic algorithms is being used by companies in the aerospace, automotive, manufacturing, turbomachinery and electronics industries to design and improve engines, motors, turbines and other industrial devices. In the words of its creator, Dr. Siu Shing Tong, Engineous is "a master 'tweaker,' tirelessly trying out scores of 'what-if' scenarios until the best possible design emerges" (p.49). In one trial of the system, Engineous was able to produce a 0.92 percent increase in the efficiency of an experimental turbine in only one week, while ten weeks of work by a human designer produced only a 0.5 percent improvement. Granted, Engineous does not rely solely on genetic algorithms; it also employs numerical optimization techniques and expert systems which use logical if-then rules to mimic the decision-making process of a human engineer. However, these techniques are heavily dependent on domain-specific knowledge, lack general applicability, and are prone to becoming trapped on local optima. By contrast, the use of genetic algorithms allows Engineous to explore regions of the search space that other methods miss. Engineous has found widespread use in a variety of industries and problems. Most famously, it was used to improve the turbine power plant of the Boeing 777 airliner; as reported in Begley and Beals 1995, the genetically optimized design was almost 1% more fuel-efficient than previous engines, which in a field such as this is "a windfall". Engineous has also been used to optimize the configuration of industrial DC motors, hydroelectric generators and steam turbines, to plan out power grids, and to design superconducting generators and nuclear power plants for orbiting satellites. Rao 1998 also reports that NASA has used Engineous to optimize the design of a high-altitude airplane for sampling ozone depletion, which must be both light and efficient.

Creationist arguments

As one might expect, the real-world demonstration of the power of evolution that GAs represent has proven surprising and disconcerting for creationists, who have always claimed that only intelligent design, not random variation and selection, could have produced the information content and complexity of living things. They have therefore argued that the success of genetic algorithms does not allow us to infer anything about biological evolution. The criticisms of two anti-evolutionists, representing two different viewpoints, will be addressed: young-earth creationist Dr. Don Batten of Answers in Genesis, who has written an article entitled "Genetic algorithms -- do they show that evolution works?", and old-earth creationist and intelligent-design advocate Dr. William Dembski, whose recent book No Free Lunch (Dembski 2002) discusses this topic.

Don Batten

Some traits in living things are qualitative, whereas GAs are always quantitative

Batten states that GAs must be quantitative, so that any improvement can be selected for. This is true.
He then goes on to say, "Many biological traits are qualitative--it either works or it does not, so there is no step-wise means of getting from no function to the function." This assertion has not been demonstrated, however, and is not supported by evidence. Batten does not even attempt to give an example of a biological trait that either "works or does not" and thus cannot be built up in a stepwise fashion. But even if he did offer such a trait, how could he possibly prove that there is no stepwise path to it? Even if we do not know of such a path, does it follow that there is none? Of course not. Batten is effectively claiming that if we do not understand how certain traits evolved, then it is impossible for those traits to have evolved - a classic example of the elementary logical fallacy of argument from ignorance. The search space of all possible variants of any given biological trait is enormous, and in most cases our knowledge encompasses only an infinitesimal fraction of the possibilities. There may well be numerous paths to a structure which we do not yet know about; there is no reason whatsoever to believe that our current ignorance sets limits on our future progress. Indeed, history gives us reason to be confident: scientists have made enormous progress in explaining the evolution of many complex biological structures and systems, both macroscopic and microscopic (for example, see these pages on the evolution of complex molecular systems, "clock" genes, the woodpecker's tongue or the bombardier beetle). We are justified in believing it likely that the ones that have so far eluded us will also be made clear in the future.

In fact, GAs themselves give us an excellent reason to believe this. Many of the problems to which they have been applied are complex engineering and design issues where the solution was not known ahead of time and therefore the problem could not be "rigged" to aid the algorithm's success. If the creationists were correct, it would have been entirely reasonable to expect genetic algorithms to fail dismally time after time when applied to these problems, but instead, just the opposite has occurred: GAs have discovered powerful, high-quality solutions to difficult problems in a diverse variety of fields. This calls into serious question whether there even are any problems such as Batten describes, whose solutions are inaccessible to an evolutionary process.

GAs select for one trait at a time, whereas living things are multidimensional

Batten states that in GAs, "A single trait is selected for, whereas any living thing is multidimensional", and asserts that in living things with hundreds of traits, "selection has to operate on all traits that affect survival", whereas "[a] GA will not work with three or four different objectives, or I dare say even just two." This argument reveals Batten's profound ignorance of the relevant literature. Even a cursory survey of the work done on evolutionary algorithms (or a look at an earlier section of this essay) would have revealed that multiobjective genetic algorithms are a major, thriving area of research within the broader field of evolutionary computation, and would have prevented him from making such an embarrassingly incorrect claim. There are journal articles, entire issues of prominent journals on evolutionary computation, entire conferences, and entire books on the topic of multiobjective GAs.
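Since multiobjective GAs come up repeatedly below, a minimal sketch of the Pareto-dominance test at their core may be helpful. The objective vectors here are made-up numbers for a hypothetical two-objective minimization problem; real systems such as those cited below combine this comparison with diversity-preserving selection schemes.

```python
def dominates(a, b):
    """True if objective vector a is at least as good as b on every
    objective and strictly better on at least one (all minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated candidates: the best available trade-offs."""
    return [a for a in candidates
            if not any(dominates(b, a) for b in candidates if b != a)]

# Hypothetical (cost, negated energy output) pairs, both to be minimized,
# in the spirit of the wind-turbine trade-off described earlier.
designs = [(3.0, -10.0), (2.0, -8.0), (4.0, -12.0), (3.5, -9.0)]
print(pareto_front(designs))  # (3.5, -9.0) drops out: dominated by (3.0, -10.0)
```

No single "best" design is assumed; the GA simply refuses to discard any candidate unless some other candidate beats it on every objective at once.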
Coello 2000 provides a very extensive survey, with five pages of references to papers on the use of multiobjective genetic algorithms in a broad range of fields; see also Fleming and Purshouse 2002; Hanne 2000; Zitzler and Thiele 1999; Fonseca and Fleming 1995; Srinivas and Deb 1994; Goldberg 1989, p.197. For some books and papers discussing the use of multiobjective GAs to solve specific problems, see: Obayashi et al. 2000; Sasaki et al. 2001; Benini and Toffolo 2002; Haas, Burnham and Mills 1997; Chryssolouris and Subramaniam 2001; Hughes and Leyland 2000; He and Mort 2000; Kewley and Embrechts 2002; Beasley, Sonander and Havelock 2001; Sato et al. 2002; Tang et al. 1996; Williams, Crossley and Lang 2001; Koza et al. 1999; Koza et al. 2003. For a comprehensive repository of citations on multiobjective GAs, see http://www.lania.mx/~ccoello/EMOO/.

GAs do not allow the possibility of extinction or error catastrophe

Batten claims that, in GAs, "Something always survives to carry on the process", while this is not necessarily true in the real world - in short, that GAs do not allow the possibility of extinction. However, this is not true; extinction can occur. For example, some GAs use a model of selection called thresholding, in which individuals must have a fitness higher than some predetermined level to survive and reproduce (Haupt and Haupt 1998, p. 37). If no individual meets this standard in such a GA, the population can indeed go extinct. But even in GAs that do not use thresholding, states analogous to extinction can occur. If mutation rates are too high or selective pressures too strong, then a GA will never find a feasible solution. The population may become hopelessly scrambled as deleterious mutations accumulate faster than selection can remove them, disrupting fitter candidates (error catastrophe), or it may thrash around helplessly, unable to achieve any gain in fitness large enough to be selected for. Just as in nature, there must be a balance or a solution will never be reached. The one advantage a programmer has in this respect is that, if this does happen, he can rerun the program with different values - for population size, for mutation rate, for selection pressure - and start over again. Obviously this is not an option for living things. Batten says, "There is no rule in evolution that says that some organism(s) in the evolving population will remain viable no matter what mutations occur," but there is no such rule in genetic algorithms either. Batten also states that "the GAs that I have looked at artificially preserve the best of the previous generation and protect it from mutations or recombination in case nothing better is produced in the next iteration". This criticism will be addressed in the next point.

GAs ignore the cost of substitution

Batten's next claim is that GAs neglect "Haldane's Dilemma", which states that an allele which contributes less to an organism's fitness will take a correspondingly longer time to become fixed in a population. Evidently, what he is referring to is the elitist selection technique, which automatically preserves the best candidate at each generation, no matter how small its advantage over its competitors. He is right to suggest that, in nature, very slight competitive advantages might take much longer to propagate. Genetic algorithms are not an exact model of biological evolution in this respect. However, this is beside the point.
Elitist selection is an idealization of biological evolution - a model of what would happen in nature if chance did not intervene from time to time. As Batten acknowledges, Haldane's dilemma does not state that a slightly advantageous mutation will never become fixed in a population; it states only that it will take more time to do so. However, when computation time is at a premium or a GA researcher wishes to obtain a solution more quickly, it may be desirable to skip this process by implementing elitism. An important point is that elitism does not affect which mutations arise; it merely ensures the selection of the best ones that do arise. It would not matter what the strength of selection was if information-increasing mutations did not occur. In other words, elitism speeds up convergence once a good solution has been discovered - it does not bring about an outcome which would not otherwise have occurred. Therefore, if genetic algorithms with elitism can produce new information, then so can evolution in the wild.

Furthermore, not all GAs use elitist selection. Many do not, instead relying only on roulette-wheel selection and other stochastic sampling techniques, and yet these have been no less successful. For instance, Koza et al. 2003, p.8-9, gives examples of 36 instances where genetic programming has produced human-competitive results, including the automated recreation of 21 previously patented inventions (six of which were patented during or after 2000), 10 of which duplicate the functionality of the patent in a new way, and also including two patentable new inventions and five new algorithms that outperform any human-written algorithms for the same purpose. As Dr. Koza states in an earlier reference to the same work (1999, p.1070): "The elitist strategy is not used." Some other papers cited in this essay in which elitism is not used include: Robin et al. 2003; Rizki, Zmuda and Tamburino 2002; Chryssolouris and Subramaniam 2001; Burke and Newall 1999; Glen and Payne 1995; Au, Chan and Yao 2003; Jensen 2003; Kewley and Embrechts 2002; Williams, Crossley and Lang 2001; Mahfoud and Mani 1996. In each of these cases, without any mechanism to ensure that the best individuals were selected at each generation, and without exempting those individuals from potentially detrimental random change, genetic algorithms still produce powerful, efficient, human-competitive results. This fact may be surprising to creationists such as Batten, but it is wholly expected by advocates of evolution.

GAs ignore generation time constraints

This criticism is puzzling. Batten claims that a single generation in a GA can take microseconds, whereas a single generation in any living organism can take anywhere from minutes to years. This is true, but how it is supposed to bear on the validity of GAs as evidence for evolution is not explained. If a GA can generate new information, regardless of how long it takes, then surely evolution in the wild can do so as well; that GAs can indeed do so is all this essay intends to demonstrate. The only remaining issue would then be whether biological evolution has actually had enough time to cause significant change, and the answer to this question would be one for biologists, geologists and physicists, not computer programmers. The answer these scientists have provided is fully in accord with evolutionary timescales, however.
Numerous lines of independent evidence, including radiometric isochron dating, the cooling rates of white dwarfs, the nonexistence of isotopes with short half-lives in nature, the recession rates of distant galaxies, and analysis of the cosmic microwave background radiation, all converge on the same conclusion: an Earth and a universe many billions of years old - by all reasonable estimates, easily long enough for evolution to produce all the diversity of life we see today.

GAs employ unrealistically high rates of mutation and reproduction

Batten asserts, without providing any supporting evidence or citations, that GAs "commonly produce 100s or 1000s of 'offspring' per generation", a rate even bacteria, the fastest-reproducing biological organisms, cannot match. This criticism misses the mark in several ways. First of all, if the metric being used is (as it should be) number of offspring per generation, rather than number of offspring per unit of absolute time, then there clearly are biological organisms that can reproduce at rates faster than that of bacteria and roughly equal to the rates Batten claims are unrealistic. For example, a single frog can lay thousands of eggs at a time, each of which has the potential to develop into an adult. Granted, most of these usually will not survive due to resource limitations and predation, but then most of the "offspring" in each generation of a GA will not go on either. Secondly, and more importantly, a genetic algorithm working on a problem is not meant to represent a single organism. Instead, a genetic algorithm is more analogous to an entire population of organisms - after all, it is populations, not individuals, that evolve. It is, of course, eminently plausible for a whole population to collectively have hundreds or thousands of offspring per generation. (Creationist Walter ReMine makes this same mistake with regard to Dr. Richard Dawkins' "weasel" program. See this Post of the Month for more.)

Additionally, Batten says, the mutation rate is artificially high in GAs, whereas living organisms have error-checking machinery designed to limit the mutation rate to about 1 in 10 billion base pairs (though this figure is too low - the actual rate is closer to 1 in 1 billion; see Dawkins 1996, p.124). This is true: if GAs mutated at this rate, they would take far too long to solve real-world problems. But clearly, what should be considered relevant is the rate of mutation relative to the size of the genome. The mutation rate should be high enough to promote a sufficient amount of diversity in the population without overwhelming the individuals. An average human will possess between one and five mutations; this is not at all unrealistic for the offspring in a GA.

GAs have artificially small genomes

Batten's argument that the genome of a genetic algorithm "is artificially small and only does one thing" is badly misguided. In the first place, as we have seen, it is not true that a GA only does one thing; there are many examples of genetic algorithms specifically designed to optimize many parameters simultaneously, often far more parameters than a human designer could ever handle at once. And how exactly does Batten quantify "artificially small"? Many evolutionary algorithms, such as John Koza's genetic programming, use variable-length encodings in which the size of candidate solutions can grow arbitrarily large.
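As a toy illustration of the variable-length idea (the alphabet, rates and operators here are arbitrary assumptions, far simpler than Koza's tree-structured programs), a mutation operator can be allowed to lengthen and shorten genomes as well as rewrite them:

```python
import random

ALPHABET = "abcdefgh"  # arbitrary gene symbols, for illustration only

def mutate(genome, rate=0.05):
    """Point-mutate, duplicate or delete each gene with small probability,
    so the genome's length is free to grow or shrink over the generations."""
    out = []
    for gene in genome:
        r = random.random()
        if r < rate:                   # point mutation: rewrite the gene
            out.append(random.choice(ALPHABET))
        elif r < 2 * rate:             # duplication: the genome grows
            out.extend([gene, gene])
        elif r < 3 * rate:             # deletion: the genome shrinks
            continue
        else:                          # most genes are copied unchanged
            out.append(gene)
    return out

genome = [random.choice(ALPHABET) for _ in range(10)]
for _ in range(200):
    genome = mutate(genome)
print(len(genome))  # typically no longer 10: length itself has been free to drift
```

Under a scheme like this, genome size is itself subject to variation and selection, so "artificially small" is not a fixed property of the technique.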
Batten claims that even the simplest living organism has far more information in its genome than a GA has ever produced, but while organisms living today may have relatively large genomes, that is because much complexity has been gained over the course of billions of years of evolution. As the Probability of Abiogenesis article points out, there is good reason to believe that the earliest living organisms were very much simpler than any species currently extant - self-replicating molecules probably no longer than 30 or 40 subunits, which could easily be specified by the 1800 bits of information that Batten apparently concedes at least one GA has generated. Genetic algorithms are similarly a very new technique whose full potential has not yet been tapped; digital computers themselves are only a few decades old, and as Koza (2003, p. 25) points out, evolutionary computing techniques have been generating increasingly more substantial and complex results over the last 15 years, in synchrony with the ongoing rapid increase in computing power often referred to as "Moore's Law". Just as early life was very simple compared to what came after, today's genetic algorithms, despite the impressive results they have already produced, are likely to give rise to far greater things in the future.

GAs ignore the possibility of mutation occurring throughout the genome

Batten apparently does not understand how genetic algorithms work, and he shows it by making this argument. He states that in real life, "mutations occur throughout the genome, not just in a gene or section that specifies a given trait". This is true, but when he says that the same is not true of GAs, he is wrong. Exactly as in living organisms, GAs permit mutation and recombination to occur anywhere in the genomes of their candidate solutions; exactly as in living organisms, GAs must weed out the deleterious changes while simultaneously selecting for the beneficial ones. Batten goes on to claim that "the program itself is protected from mutations; only target sequences are mutated", and that if the program itself were mutated it would soon crash. This criticism, however, is irrelevant. There is no reason why the governing program of a GA should be mutated. The program is not part of the genetic algorithm; the program is what supervises the genetic algorithm and mutates the candidate solutions, which are what the programmer is seeking to improve. The program running the GA is not analogous to the reproductive machinery of an organism, a comparison which Batten tries to make. Rather, it is analogous to the invariant natural laws that govern the environments in which living organisms live and reproduce, and these are neither expected to change nor in need of "protection" from change.

GAs ignore problems of irreducible complexity

Using old-earth creationist Michael Behe's argument of "irreducible complexity", Batten argues, "Many biological traits require many different components to be present, functioning together, for the trait to exist at all," whereas this does not happen in GAs. However, it is trivial to show that such a claim is false, as genetic algorithms have produced irreducibly complex systems. For example, the voice-recognition circuit Dr. Adrian Thompson evolved (Davidson 1997) is composed of 37 core logic gates. Five of them are not even connected to the rest of the circuit, yet all 37 are required for the circuit to work; if any of them are disconnected from their power supply, the entire system ceases to function.
This fits Behe's definition of an irreducibly complex system and shows that an evolutionary process can produce such things. It should be noted that this is the same argument as the first one, merely presented in different language, and thus the refutation is the same. Irreducible complexity is not a problem for evolution, whether that evolution is occurring in living beings in the wild or in silicon on a computer's processor chip.

GAs ignore polygeny, pleiotropy, and other genetic complexity

Batten argues that GAs ignore issues of polygeny (the determination of one trait by multiple genes), pleiotropy (one gene affecting multiple traits), and dominant and recessive genes. However, none of these claims is true. GAs do not ignore polygeny and pleiotropy: these properties are merely allowed to arise naturally rather than being deliberately coded in. It is obvious that in any complex interdependent system (i.e., a nonlinear system), the alteration or removal of one part will cause a ripple effect of changes throughout; thus GAs naturally incorporate polygeny and pleiotropy. "In the genetic algorithm literature, parameter interaction is called epistasis (a biological term for gene interaction). When there is little to no epistasis, minimum seeking algorithms [i.e., hill-climbers --A.M.] perform best. Genetic algorithms shine when the epistasis is medium to high..." (Haupt and Haupt 1998, p. 31, original emphasis). Likewise, there are some genetic algorithm implementations that do have diploid chromosomes and dominant and recessive genes (Goldberg 1989, p.150; Mitchell 1996, p.22). However, those that do not are simply more like haploid organisms, such as bacteria, than they are like diploid organisms, such as human beings. Since (by certain measures) bacteria are among the most successful organisms on this planet, such GAs remain a good model of evolution.

GAs do not have multiple reading frames

Batten discusses the existence of multiple reading frames in the genomes of some living things, in which the DNA sequences code for different functional proteins when read in different directions or with different starting offsets. He asserts that "Creating a GA to generate such information-dense coding would seem to be out of the question". Such a challenge begs for an answer, and here it is: Soule and Ball 2001. In this paper, the authors present a genetic algorithm with multiple reading frames and dense coding, enabling it to store more information than the total length of its genome. Like the three-nucleotide codons that specify amino acids in the genomes of living organisms, this GA's codons were five-digit binary strings. Since the codons were five digits long, there were five different possible reading frames. The sequence 11111 serves as a "start" codon and 00000 as a "stop" codon; because the start and stop codons could occur anywhere in the genome, the length of each individual was variable. Regions of the chromosome which did not fall between start-stop pairs were ignored. The GA was tested on four classic function maximization problems. "Initially, the majority of the bits do not participate in any gene, i.e., most of a chromosome is non-coding. Again this is because in the initial random individuals there are relatively few start-stop codon pairs. However, the number of bits that do not participate decreases extremely rapidly." During the course of the run, the GA can increase the effective length of its genome by introducing new start codons in different reading frames.
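The decoding scheme is easy to sketch. The following toy decoder assumes only the conventions just described - 5-bit codons, 11111/00000 start and stop, five frame offsets - and is an illustration, not Soule and Ball's actual implementation, which surely differs in detail:

```python
START, STOP = "11111", "00000"

def decode(chromosome):
    """Extract coding regions from a binary string using 5-bit codons.
    All five frame offsets are scanned, so a given bit can belong to
    genes in several frames at once - the "dense coding" described above."""
    genes = []
    for frame in range(5):  # the five possible reading frames
        reading, current = False, []
        for i in range(frame, len(chromosome) - 4, 5):
            codon = chromosome[i:i + 5]
            if codon == START:
                reading, current = True, []      # open a new gene
            elif codon == STOP and reading:
                genes.append("".join(current))   # close and record the gene
                reading = False
            elif reading:
                current.append(codon)            # interior codons are coding
    return genes

# A 20-bit chromosome: start codon, two coding codons, stop codon (frame 0).
chrom = "11111" + "10101" + "01100" + "00000"
print(decode(chrom))  # ['1010101100'] - the region between start and stop
```

Note that a start codon with no matching stop in its frame (which happens in frame 1 of this very example) simply yields no gene, matching the paper's rule that regions not bracketed by start-stop pairs are ignored.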
By the end of the run, "the amount of overlap is quite high. Many bits are participating in several (and often all five) genes." On all test problems, the GA started, on average, with 5 variables specified; by the end of the run, that number had increased to an average of around 25. The GA with multiple reading frames produced significantly better solutions than a standard GA on two of the four test problems and better average solutions on the remaining two. In one problem, the GA successfully compressed 625 total bits of information into a chromosome only 250 bits long by using alternative reading frames. The authors label this behavior "extremely sophisticated" and conclude that "These data show that a GA can successfully use reading frames despite the added complexity" and "It is clear that a GA can introduce new 'genes' as necessary to solve a given problem, even with the difficulties imposed by using start and stop codons and overlapping genes".

GAs have preordained goals

Like several others, this objection shows that Batten does not fully understand what a genetic algorithm is and how it works. He argues that GAs, unlike evolution, have goals predetermined and specified at the outset, and as an example of this, offers Dr. Richard Dawkins' "weasel" program. However, the weasel program is not a true genetic algorithm, and is not typical of genetic algorithms, for precisely that reason. It was not intended to demonstrate the problem-solving power of evolution. Instead, its only intent was to show the difference between single-step selection (the infamous "tornado blowing through a junkyard producing a 747") and cumulative, multi-step selection. It did have a specific goal predetermined at the outset. True genetic algorithms, however, do not. In a broadly general sense, GAs do have a goal: namely, to find an acceptable solution to a given problem. In this same sense, evolution also has a goal: to produce organisms that are better adapted to their environment and thus experience greater reproductive success. But just as evolution is a process without specific goals, GAs do not specify at the outset how a given problem should be solved. The fitness function is merely set up to evaluate how well a candidate solution performs, without specifying any particular way it should work and without passing judgment on whatever way it does invent. The solution itself then emerges through a process of mutation and selection.

Batten's next statement shows clearly that he does not understand what a genetic algorithm is. He asserts that "Perhaps if the programmer could come up with a program that allowed anything to happen and then measured the survivability of the 'organisms', it might be getting closer to what evolution is supposed to do" - but that is exactly how genetic algorithms work. They randomly generate candidate solutions and randomly mutate them over many generations. No configuration is specified in advance; as Batten puts it, anything is allowed to happen. As John Koza (2003, p. 37) writes, uncannily echoing Batten's words: "An important feature... is that the selection [in genetic programming] is not greedy. Individuals that are known to be inferior will be selected to a certain degree. The best individual in the population is not guaranteed to be selected. Moreover, the worst individual in the population will not necessarily be excluded. Anything can happen and nothing is guaranteed." (An earlier section discussed this very point as one of a GA's strengths.)
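For readers unfamiliar with it, here is a minimal sketch of the fitness-proportionate ("roulette-wheel") sampling mentioned above; the population and scores are made up. Fitter candidates are favored, but even the worst retains a nonzero chance of selection - exactly the non-greedy behavior Koza describes.

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness
    (fitnesses must be non-negative). Nothing is guaranteed in or out."""
    pick = random.uniform(0.0, sum(fitnesses))
    running = 0.0
    for individual, fitness in zip(population, fitnesses):
        running += fitness
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

pop = ["A", "B", "C"]
fits = [5.0, 3.0, 1.0]  # hypothetical fitness scores
picks = [roulette_select(pop, fits) for _ in range(9000)]
print({x: picks.count(x) for x in pop})  # roughly 5000/3000/1000: even "C" is picked
```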
And yet, by applying a selective filter to these randomly mutating candidates, efficient, complex and powerful solutions to difficult problems arise - solutions that were not designed by any intelligence and that can often equal or outperform solutions that were designed by humans. Batten's blithe assertion that "Of course that is impossible" is squarely contradicted by reality.

GAs do not actually generate new information

Batten's final criticism runs: "With a particular GA, we need to ask how much of the 'information' generated by the program is actually specified in the program, rather than being generated de novo." He charges that GAs often do nothing more than find the best way for certain modules to interact when both the modules themselves and the ways they can interact are specified ahead of time. It is difficult to know what to make of this argument. Any imaginable problem - terms in a calculus equation, molecules in a cell, components of an engine, stocks on a financial market - can be expressed in terms of modules that interact in given ways. If all one has is unspecified modules that interact in unspecified ways, there is no problem to be solved. Does this mean that the solution to no problem requires the generation of new information?

In regard to Batten's criticism about information contained in the solution being prespecified in the problem, the best way to assuage his concerns is to point out the many examples in which GAs begin with randomly generated initial populations that are not in any way designed to help the GA solve the problem. Some such examples include: Graham-Rowe 2004; Davidson 1997; Assion et al. 1998; Giro, Cyrillo and Galvão 2002; Glen and Payne 1995; Chryssolouris and Subramaniam 2001; Williams, Crossley and Lang 2001; Robin et al. 2003; Andreou, Georgopoulos and Likothanassis 2002; Kewley and Embrechts 2002; Rizki, Zmuda and Tamburino 2002; and especially Koza et al. 1999 and Koza et al. 2003, which discuss the use of genetic programming to generate 36 human-competitive inventions in analog circuit design, molecular biology, algorithmics, industrial controller design, and other fields, all starting from populations of randomly generated initial candidates. Granted, some GAs do begin with intelligently generated solutions which they then seek to improve, but this is irrelevant: in such cases the aim is not just to return the initially input solution, but to improve it by the production of new information. In any case, even if the initial situation is as Batten describes, finding the most efficient way a number of modules can interact under a given set of constraints can be a far from trivial task, and one whose solution involves a considerable amount of new information: scheduling at international airports, for example, or factory assembly lines, or distributing casks among warehouses and distilleries. Again, GAs have proven themselves effective at solving problems whose complexity would swamp any human. In light of the multiple innovations and unexpectedly effective solutions arising from GAs in many fields, Batten's claim that "The amount of new information generated (by a GA) is usually quite trivial" rings hollow indeed.

William Dembski

Old-earth creationist Dr. William Dembski's recent book, No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence, is largely devoted to the topic of evolutionary algorithms and how they relate to biological evolution.
In particular, Dembski's book is concerned with an elusive quality he calls "specified complexity," which he asserts is contained in abundance in living things, and which he further asserts evolutionary processes are incapable of generating, leaving "design" through unspecified mechanisms by an unidentified "intelligent designer" the only alternative. To bolster his case, Dembski appeals to a class of mathematical theorems known as the No Free Lunch theorems, which he claims prove that evolutionary algorithms, on the average, do no better than blind search. Richard Wein has written an excellent and comprehensive rebuttal to Dembski, entitled Not a Free Lunch But a Box of Chocolates, and its points will not be reproduced here. I will instead focus on chapter 4 of Dembski's book, which deals in detail with genetic algorithms.

Dembski has one main argument against GAs, which is developed at length throughout this chapter. While he does not deny that they can produce impressive results - indeed, he says that there is something "oddly compelling and almost magical" (p.221) about the way GAs can find solutions that are unlike anything designed by human beings - he argues that their success is due to the specified complexity that is "smuggled into" them by their human designers and subsequently embodied in the solutions they produce. "In other words, all the specified complexity we get out of an evolutionary algorithm has first to be put into its construction and into the information that guides the algorithm. Evolutionary algorithms therefore do not generate or create specified complexity, but merely harness already existing specified complexity" (p.207).

The first problem evident in Dembski's argument is this. Although his chapter on evolutionary algorithms runs for approximately 50 pages, the first 30 of those pages discuss nothing but Dr. Richard Dawkins' "weasel" algorithm, which, as already discussed, is not a true genetic algorithm and is not representative of genetic algorithms. Dembski's other two examples - the crooked wire genetic antennas of Edward Altshuler and Derek Linden and the checkers-playing neural nets of Kumar Chellapilla and David Fogel - are only introduced within the last 10 pages of the chapter and are discussed for three pages, combined. This is a serious deficiency, considering that the "weasel" program is not representative of most work being done in the field of evolutionary computation; nevertheless, Dembski's arguments relating to it will be analyzed.

In regard to the weasel program, Dembski states that "Dawkins and fellow Darwinists use this example to illustrate the power of evolutionary algorithms" (p.182), and, again, "Darwinists... are quite taken with the METHINKS IT IS LIKE A WEASEL example and see it as illustrating the power of evolutionary algorithms to generate specified complexity" (p.183). This is a straw man of Dembski's creation (not least because Dawkins' book was written long before Dembski ever coined that term!). Here is what Dawkins really says about the purpose of his program: "What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years."
(Dawkins 1996, p.49, emphasis original)

In other words, the weasel program was intended to demonstrate the difference between two different kinds of selection: single-step selection, where a complex result is produced by pure chance in a single leap, and cumulative selection, where a complex result is built up bit by bit via a filtering process that preferentially preserves improvements. It was never intended to be a simulation of evolution as a whole. Single-step selection is the absurdly improbable process frequently attacked in creationist literature by comparing it to a tornado blowing through a junkyard producing a 747 airliner, or an explosion in a print shop producing a dictionary. Cumulative selection is what evolution actually uses. Using single-step selection to achieve a functional result of any significant complexity, one would have to wait, on average, many times the current age of the universe. Using cumulative selection, however, that same result can be reached in a comparatively very short length of time. Demonstrating this difference was the point of Dawkins' weasel program, and that was the only point of that program. In a footnote to this chapter, Dembski writes, "It is remarkable how Dawkins' example gets recycled without any indication of the fundamental difficulties that attend it" (p.230), but it is only misconceptions in the minds of creationists such as Dembski and Batten, who attack the weasel program for not demonstrating something it was never intended to demonstrate, that give rise to these "difficulties".

Unlike every example of evolutionary algorithms discussed in this essay, the weasel program does indeed have a single, prespecified outcome, and the quality of the solutions it generates is judged by explicitly comparing them to that prespecified outcome. Therefore, Dembski is quite correct when he says that the weasel program does not generate new information. However, he then makes a gigantic and completely unjustified leap when he extrapolates this conclusion to all evolutionary algorithms: "As the sole possibility that Dawkins' evolutionary algorithm can attain, the target sequence in fact has minimal complexity.... Evolutionary algorithms are therefore incapable of generating true complexity" (p.182). Even Dembski seems to recognize that the weasel program is unrepresentative when he writes: "...most evolutionary algorithms in the literature are programmed to search a space of possible solutions to a problem until they find an answer - not, as Dawkins does here, by explicitly programming the answer into them in advance" (p.182). But then, having given a perfectly good reason why the weasel program is not representative of GAs as a whole, he inexplicably goes on to make precisely that fallacious generalization!

In reality, the weasel program is significantly different from most genetic algorithms, and therefore Dembski's argument from analogy does not hold up. True evolutionary algorithms, such as the examples discussed in this essay, do not simply find their way back to solutions already discovered by other methods - instead, they are presented with problems where the optimal solution is not known in advance, and are asked to discover that solution on their own. Indeed, if genetic algorithms could do nothing more than rediscover solutions already programmed into them, what would be the point of using them? It would be an exercise in redundancy to do so.
However, the widespread scientific (and commercial) interest in GAs shows that there is far more substance to them than the rather trivial example Dembski tries to reduce this entire field to. Having set up and then knocked down this straw man, Dembski moves on to his next line of argument: that the specified complexity exhibited by the outcomes of more representative evolutionary algorithms has, like the weasel program, been "smuggled in" by the designers of the algorithm. "But invariably we find that when specified complexity seems to be generated for free, it has in fact been front-loaded, smuggled in, or hidden from view" (p.204). Dembski suggests that the most common "hiding place" of specified complexity is in the GA's fitness function. "What the [evolutionary algorithm] has done is take advantage of the specified complexity inherent in the fitness function and used it in searching for and then locating the target..." (p.194).

Dembski goes on to argue that, before an EA can search a given fitness landscape for a solution, some mechanism must first be employed to select that fitness landscape from what he calls a phase space of all the possible fitness landscapes, and if that mechanism is likewise an evolutionary one, some other mechanism must first be employed to select its fitness function from an even larger phase space, and so on. Dembski concludes that the only way to stop this infinite regress is through intelligence, which he holds to have some irreducible, mysterious ability to select a fitness function from a given phase space without recourse to higher-order phase spaces. "There is only one known generator of specified complexity, and that is intelligence" (p.207).

Dembski is correct when he writes that the fitness function "guid[es] an evolutionary algorithm into the target" (p.192). However, he is incorrect in his claim that selecting the right fitness function is a process that requires the generation of even more specified complexity than the EA itself produces. As Koza (1999, p. 39) writes, the fitness function tells an evolutionary algorithm "what needs to be done", not "how to do it". Unlike the unrepresentative weasel program example, the fitness function of an EA typically does not specify any particular form that the solution should take, and therefore it cannot be said to contribute "specified complexity" to the evolved solution in any meaningful sense.

An example will illustrate the point in greater detail. Dembski claims that in Chellapilla and Fogel's checkers experiment, their choice to hold the winning criterion constant from game to game "inserted an enormous amount of specified complexity" (p.223). It is certainly true that the final product of this process displayed a great deal of specified complexity (however one chooses to define that term). But is it true that the chosen fitness measure contained just as much specified complexity? Here is what Chellapilla and Fogel actually say: "To appreciate the level of play that has been achieved, it may be useful to consider the following thought experiment. Suppose you are asked to play a game on an eight-by-eight board of squares with alternating colors. There are 12 pieces on each side arranged in a specific manner to begin play. You are told the rules of how the pieces move (i.e., diagonally, forced jumps, kings) and that the piece differential is available as a feature.
You are not, however, told whether or not this differential is favorable or unfavorable (there is a version of checkers termed 'suicide checkers,' where the object is to 'lose' as fast as possible) or if it is even valuable information. Most importantly, you are not told the object of the game. You simply make moves and at some point an external observer declares the game over. They do not, however, provide feedback on whether or not you won, lost, or drew. The only data you receive comes after a minimum of five such games and is offered in the form of an overall point score. Thus, you cannot know with certainty which games contributed to the overall result or to what degree. Your challenge is to induce the appropriate moves in each game based only on this coarse level of feedback." (Chellapilla and Fogel 2001, p.427)

It exceeds the bounds of the absurd for Dembski to claim that this fitness measure inserted an "enormous" amount of specified complexity. If a human being who had never heard of checkers was given the same information, and we returned several months later to discover that he had become an internationally ranked checkers expert, should we conclude that specified complexity has been generated?

Dembski states that to overturn his argument, "one must show that finding the information that guides an evolutionary algorithm to a target is substantially easier than finding the target directly through a blind search" (p.204). I contend that this is precisely the case. Intuitively, it should not be surprising that the fitness function contains less information than the evolved solution. This is precisely the reason why GAs have found such widespread use: it is easier (requires less information) to write a fitness function that measures how good a solution is than to design a good solution from scratch.

In more informal terms, consider Dembski's two examples, the crooked-wire genetic antenna and the evolved checkers-playing neural network named Anaconda. It requires a great deal of detailed information about the game of checkers to come up with a winning strategy (consider Chinook and its enormous library of endgames). However, it does not require equally detailed information to recognize such a strategy when one sees it: all we need observe is that that strategy consistently defeats its opponents. Similarly, a person who knew nothing about how to design an antenna that radiates evenly over a hemispherical region in a given frequency range could still test such an antenna and verify that it works as intended. In both cases, determining what constitutes high fitness is far easier (requires less information) than figuring out how to achieve high fitness.

Granted, even though choosing a fitness function for a given problem requires less information than actually solving the problem defined by that fitness function, it does take some information to specify the fitness function in the first place, and it is a legitimate question to ask where this initial information comes from. Dembski may still ask about the origin of human intelligence that enables us to decide to solve one problem rather than another, or about the origin of the natural laws of the cosmos that make it possible for life to exist and flourish and for evolution to occur. These are valid questions, and Dembski is entitled to wonder about them. However, by this point - seemingly unnoticed by Dembski himself - he has now moved away from his initial argument.
He is no longer claiming that evolution cannot happen; instead, he is essentially asking why we live in a universe where evolution can happen. In other words, what Dembski does not seem to realize is that the logical conclusion of his argument is theistic evolution. It is fully compatible with a God who (as many Christians, including evolutionary biologist Kenneth Miller, believe) used evolution as his creative tool, and set up the universe in such a way as to make it not just likely, but certain.

I will conclude by clearing up some additional, minor misconceptions in chapter 4 of No Free Lunch. For starters, although Dembski, unlike Batten, is clearly aware of the field of multiobjective optimization, he erroneously states that "until some form of univalence is achieved, optimization cannot begin" (p.186). This essay's discussion of multiple-objective genetic algorithms shows the error of this claim. Perhaps other design techniques have this restriction, but one of the virtues of GAs is precisely that they can make trade-offs and optimize several mutually exclusive objectives simultaneously, and the human overseers can then pick whichever solution best achieves their goals from the final group of Pareto-optimal solutions. No method of combining multiple criteria into one is necessary.

Dembski also states that GAs "seem less adept at constructing integrated systems that require multiple parts to achieve novel functions" (p.237). The many examples detailed in this essay (particularly John Koza's use of genetic programming to engineer complex analog circuits) show this claim to be false as well.

Finally, Dembski mentions that INFORMS, the professional organization of the operations research community, pays very little attention to GAs, and says that this "is reason to be skeptical of the technique's general scope and power" (p.237). However, just because a particular scientific society is not making widespread use of GAs does not mean that such uses are not widespread elsewhere, and this essay has endeavored to show that this is in fact the case. Evolutionary techniques have found a wide variety of uses in virtually any field of science one would care to name, as well as among many companies in the commercial sector. Here is a partial list:

Lockheed Martin (Gibbs 1996)
GlaxoSmithKline (Gillet 2002)
LBS Capital Management (Naik 1996)
First Quadrant (Begley and Beals 1995)
Texas Instruments (Begley and Beals 1995)
U.S. West (Begley and Beals 1995)
John Deere & Co. (Petzinger 1995)
Volvo (Rao 1998)
Ascent Technology (Wired 2002)
Boeing (Ashley 1992)
British Petroleum (Lemley 2001)
Ford Motor Company (Lemley 2001)
Unilever (Lemley 2001)
United Distillers and Vintners (Lemley 2001)

By contrast, given the dearth of scientific discoveries and research stimulated by intelligent design, Dembski is in a poor position to complain about lack of practical application. Intelligent design is a vacuous hypothesis, telling us nothing more than "Some designer did something, somehow, at some time, to cause this result." This essay, by contrast, has hopefully demonstrated that evolution is a problem-solving strategy rich with practical applications.

Conclusion

Even creationists find it impossible to deny that the combination of mutation and natural selection can produce adaptation.
Nevertheless, they still attempt to justify their rejection of evolution by dividing the evolutionary process into two categories - "microevolution" and "macroevolution" - arguing that only the second is controversial and that any evolutionary change we observe is only an example of the first.

Now, microevolution and macroevolution are terms that have meaning to biologists; they are defined, respectively, as evolution below the species level and evolution at or above the species level. But the crucial difference between the way creationists use these terms and the way scientists use them is that scientists recognize that the two are fundamentally the same process with the same mechanisms, merely operating at different scales. Creationists, however, are forced to postulate some type of unbridgeable gap separating the two, in order to deny that the processes of change and adaptation we see operating in the present can be extrapolated to produce all the diversity observed in the living world.

However, genetic algorithms make this view untenable by demonstrating the fundamental seamlessness of the evolutionary process. Take, for example, a problem that consists of programming a circuit to discriminate between a 1-kilohertz and a 10-kilohertz tone, and to respond with steady outputs of 0 and 5 volts respectively. Say we have a candidate solution that can accurately discriminate between the two tones, but whose outputs are not quite steady as required; they produce small waveforms rather than the requisite unchanging voltage. Presumably, according to the creationist view, to change this circuit from its present state to the perfect solution would be "microevolution", a small change within the ability of mutation and selection to produce. But surely, a creationist would argue, to arrive at this same final state from a completely random initial arrangement of components would be "macroevolution" and beyond the reach of an evolutionary process. However, genetic algorithms were able to accomplish both, evolving the system from a random arrangement to the near-perfect solution and finally to the perfect, optimal solution. At no step of the way did an insoluble difficulty or an unbridgeable gap turn up. At no point whatsoever was human intervention required to assemble an irreducibly complex core of components (despite the fact that the finished product does contain such a thing) or to "guide" the evolving system over a difficult peak. The circuit evolved, without any intelligent guidance, from a completely random and non-functional state to a tightly complex, efficient and optimal state. How can this not be a compelling experimental demonstration of the power of evolution?

It has been said that human cultural evolution has superseded the biological kind - that we as a species have reached a point where we are able to consciously control our society, our environment and even our genes to a sufficient degree to make the evolutionary process irrelevant. It has been said that the cultural whims of our rapidly changing society, rather than the comparatively glacial pace of genetic mutation and natural selection, are what determine fitness today. In a sense, this may well be true. But in another sense, nothing could be further from the truth.
Evolution is a problem-solving process whose power we are only beginning to understand and exploit; despite this, it is already at work all around us, shaping our technology and improving our lives, and in the future, these uses will only multiply. Without a detailed understanding of the evolutionary process, none of the countless advances we owe to genetic algorithms would have been possible. There is a lesson here for those who deny the power of evolution, as well as those who deny that knowledge of it has any practical benefit. As incredible as it may seem, evolution works. As the poet Lord Byron put it: "'Tis strange but true; for truth is always strange, stranger than fiction."

References and resources

"Adaptive Learning: Fly the Brainy Skies." Wired, vol.10, no.3 (March 2002). Available online at http://www.wired.com/wired/archive/10.03/everywhere.html?pg=2.

Altshuler, Edward and Derek Linden. "Design of a wire antenna using a genetic algorithm." Journal of Electronic Defense, vol.20, no.7, p.50-52 (July 1997).

Andre, David and Astro Teller. "Evolving team Darwin United." In RoboCup-98: Robot Soccer World Cup II, Minoru Asada and Hiroaki Kitano (eds). Lecture Notes in Computer Science, vol.1604, p.346-352. Springer-Verlag, 1999. See also: Willihnganz, Alexis. "Software that writes software." Salon, August 10, 1999. Available online at http://www.salon.com/tech/feature/1999/08/10/genetic_programming/.

Andreou, Andreas, Efstratios Georgopoulos and Spiridon Likothanassis. "Exchange-rates forecasting: A hybrid algorithm based on genetically optimized adaptive neural networks." Computational Economics, vol.20, no.3, p.191-210 (December 2002).

Ashley, Steven. "Engineous explores the design space." Mechanical Engineering, February 1992, p.49-52.

Assion, A., T. Baumert, M. Bergt, T. Brixner, B. Kiefer, V. Seyfried, M. Strehle and G. Gerber. "Control of chemical reactions by feedback-optimized phase-shaped femtosecond laser pulses." Science, vol.282, p.919-922 (30 October 1998).

Au, Wai-Ho, Keith Chan, and Xin Yao. "A novel evolutionary data mining algorithm with applications to churn prediction." IEEE Transactions on Evolutionary Computation, vol.7, no.6, p.532-545 (December 2003).

Beasley, J.E., J. Sonander and P. Havelock. "Scheduling aircraft landings at London Heathrow using a population heuristic." Journal of the Operational Research Society, vol.52, no.5, p.483-493 (May 2001).

Begley, Sharon and Gregory Beals. "Software au naturel." Newsweek, May 8, 1995, p.70.

Benini, Ernesto and Andrea Toffolo. "Optimal design of horizontal-axis wind turbines using blade-element theory and evolutionary computation." Journal of Solar Energy Engineering, vol.124, no.4, p.357-363 (November 2002).

Burke, E.K. and J.P. Newall. "A multistage evolutionary algorithm for the timetable problem." IEEE Transactions on Evolutionary Computation, vol.3, no.1, p.63-74 (April 1999).

Charbonneau, Paul. "Genetic algorithms in astronomy and astrophysics." The Astrophysical Journal Supplement Series, vol.101, p.309-334 (December 1995).

Chellapilla, Kumar and David Fogel. "Evolving an expert checkers playing program without using human expertise." IEEE Transactions on Evolutionary Computation, vol.5, no.4, p.422-428 (August 2001). Available online at http://www.natural-selection.com/NSIPublicationsOnline.htm.

Chellapilla, Kumar and David Fogel. "Anaconda defeats Hoyle 6-0: a case study competing an evolved checkers program against commercially available software."
Chellapilla, Kumar and David Fogel. "Anaconda defeats Hoyle 6-0: a case study competing an evolved checkers program against commercially available software." In Proceedings of the 2000 Congress on Evolutionary Computation, p.857-863. IEEE Press, 2000. Available online at http://www.natural-selection.com/NSIPublicationsOnline.htm.
Chellapilla, Kumar and David Fogel. "Verifying Anaconda's expert rating by competing against Chinook: experiments in co-evolving a neural checkers player." Neurocomputing, vol.42, no.1-4, p.69-86 (January 2002).
Chryssolouris, George and Velusamy Subramaniam. "Dynamic scheduling of manufacturing job shops using genetic algorithms." Journal of Intelligent Manufacturing, vol.12, no.3, p.281-293 (June 2001).
Coale, Kristi. "Darwin in a box." Wired News, July 14, 1997. Available online at http://www.wired.com/news/technology/0,1282,5152,00.html.
Coello, Carlos. "An updated survey of GA-based multiobjective optimization techniques." ACM Computing Surveys, vol.32, no.2, p.109-143 (June 2000).
Davidson, Clive. "Creatures from primordial silicon." New Scientist, vol.156, no.2108, p.30-35 (November 15, 1997). Available online at http://www.newscientist.com/hottopics/ai/primordial.jsp.
Dawkins, Richard. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design. W.W. Norton, 1996.
Dembski, William. No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence. Rowman & Littlefield, 2002.
Fleming, Peter and R.C. Purshouse. "Evolutionary algorithms in control systems engineering: a survey." Control Engineering Practice, vol.10, p.1223-1241 (2002).
Fonseca, Carlos and Peter Fleming. "An overview of evolutionary algorithms in multiobjective optimization." Evolutionary Computation, vol.3, no.1, p.1-16 (1995).
Forrest, Stephanie. "Genetic algorithms: principles of natural selection applied to computation." Science, vol.261, p.872-878 (1993).
Gibbs, W. Wayt. "Programming with primordial ooze." Scientific American, October 1996, p.48-50.
Gillet, Valerie. "Reactant- and product-based approaches to the design of combinatorial libraries." Journal of Computer-Aided Molecular Design, vol.16, p.371-380 (2002).
Giro, R., M. Cyrillo and D.S. Galvão. "Designing conducting polymers using genetic algorithms." Chemical Physics Letters, vol.366, no.1-2, p.170-175 (November 25, 2002).
Glen, R.C. and A.W.R. Payne. "A genetic algorithm for the automated generation of molecules within constraints." Journal of Computer-Aided Molecular Design, vol.9, p.181-202 (1995).
Goldberg, David. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989.
Graham-Rowe, Duncan. "Radio emerges from the electronic soup." New Scientist, vol.175, no.2358, p.19 (August 31, 2002). Available online at http://www.newscientist.com/news/news.jsp?id=ns99992732. See also: Bird, Jon and Paul Layzell. "The evolved radio and its implications for modelling the evolution of novel sensors." In Proceedings of the 2002 Congress on Evolutionary Computation, p.1836-1841.
Graham-Rowe, Duncan. "Electronic circuit 'evolves' from liquid crystals." New Scientist, vol.181, no.2440, p.21 (March 27, 2004).
Haas, O.C.L., K.J. Burnham and J.A. Mills. "On improving physical selectivity in the treatment of cancer: A systems modelling and optimisation approach." Control Engineering Practice, vol.5, no.12, p.1739-1745 (December 1997).
Hanne, Thomas. "Global multiobjective optimization using evolutionary algorithms." Journal of Heuristics, vol.6, no.3, p.347-360 (August 2000).
Haupt, Randy and Sue Ellen Haupt. Practical Genetic Algorithms. John Wiley & Sons, 1998.
He, L. and N. Mort. "Hybrid genetic algorithms for telecommunications network back-up routeing." BT Technology Journal, vol.18, no.4, p.42-50 (October 2000).
Holland, John. "Genetic algorithms." Scientific American, July 1992, p.66-72.
Hughes, Evan and Maurice Leyland. "Using multiple genetic algorithms to generate radar point-scatterer models." IEEE Transactions on Evolutionary Computation, vol.4, no.2, p.147-163 (July 2000).
Jensen, Mikkel. "Generating robust and flexible job shop schedules using genetic algorithms." IEEE Transactions on Evolutionary Computation, vol.7, no.3, p.275-288 (June 2003).
Kewley, Robert and Mark Embrechts. "Computational military tactical planning system." IEEE Transactions on Systems, Man and Cybernetics, Part C - Applications and Reviews, vol.32, no.2, p.161-171 (May 2002).
Kirkpatrick, S., C.D. Gelatt and M.P. Vecchi. "Optimization by simulated annealing." Science, vol.220, p.671-678 (1983).
Koza, John, Forest Bennett, David Andre and Martin Keane. Genetic Programming III: Darwinian Invention and Problem Solving. Morgan Kaufmann Publishers, 1999.
Koza, John, Martin Keane, Matthew Streeter, William Mydlowec, Jessen Yu and Guido Lanza. Genetic Programming IV: Routine Human-Competitive Machine Intelligence. Kluwer Academic Publishers, 2003. See also: Koza, John, Martin Keane and Matthew Streeter. "Evolving inventions." Scientific American, February 2003, p.52-59.
Keane, A.J. and S.M. Brown. "The design of a satellite boom with enhanced vibration performance using genetic algorithm techniques." In Adaptive Computing in Engineering Design and Control '96 - Proceedings of the Second International Conference, I.C. Parmee (ed), p.107-113. University of Plymouth, 1996. See also: Petit, Charles. "Touched by nature: Putting evolution to work on the assembly line." U.S. News and World Report, vol.125, no.4, p.43-45 (July 27, 1998). Available online at http://www.genetic-programming.com/published/usnwr072798.html.
Lee, Yonggon and Stanislaw H. Zak. "Designing a genetic neural fuzzy antilock-brake-system controller." IEEE Transactions on Evolutionary Computation, vol.6, no.2, p.198-211 (April 2002).
Lemley, Brad. "Machines that think." Discover, January 2001, p.75-79.
Mahfoud, Sam and Ganesh Mani. "Financial forecasting using genetic algorithms." Applied Artificial Intelligence, vol.10, no.6, p.543-565 (1996).
Mitchell, Melanie. An Introduction to Genetic Algorithms. MIT Press, 1996.
Naik, Gautam. "Back to Darwin: In sunlight and cells, science seeks answers to high-tech puzzles." The Wall Street Journal, January 16, 1996, p.A1.
Obayashi, Shigeru, Daisuke Sasaki, Yukihiro Takeguchi, and Naoki Hirose. "Multiobjective evolutionary computation for supersonic wing-shape optimization." IEEE Transactions on Evolutionary Computation, vol.4, no.2, p.182-187 (July 2000).
Petzinger, Thomas. "At Deere they know a mad scientist may be a firm's biggest asset." The Wall Street Journal, July 14, 1995, p.B1. See also: "Evolving business, with a Santa Fe Institute twist." SFI Bulletin, Winter 1998. Available online at http://www.santafe.edu/sfi/publications/Bulletins/bulletin-winter98/feature.html.
Porto, Vincent, David Fogel and Lawrence Fogel. "Alternative neural network training methods." IEEE Expert, vol.10, no.3, p.16-22 (June 1995).
Rao, Srikumar. "Evolution at warp speed." Forbes, vol.161, no.1, p.82-83 (January 12, 1998).
Rizki, Mateen, Michael Zmuda and Louis Tamburino. "Evolving pattern recognition systems." IEEE Transactions on Evolutionary Computation, vol.6, no.6, p.594-609 (December 2002).
Robin, Franck, Andrea Orzati, Esteban Moreno, Otte Homan, and Werner Bachtold. "Simulation and evolutionary optimization of electron-beam lithography with genetic and simplex-downhill algorithms." IEEE Transactions on Evolutionary Computation, vol.7, no.1, p.69-82 (February 2003).
Sagan, Carl. Broca's Brain: Reflections on the Romance of Science. Ballantine, 1979.
Sambridge, Malcolm and Kerry Gallagher. "Earthquake hypocenter location using genetic algorithms." Bulletin of the Seismological Society of America, vol.83, no.5, p.1467-1491 (October 1993).
Sasaki, Daisuke, Masashi Morikawa, Shigeru Obayashi and Kazuhiro Nakahashi. "Aerodynamic shape optimization of supersonic wings by adaptive range multiobjective genetic algorithms." In Evolutionary Multi-Criterion Optimization: First International Conference, EMO 2001, Zurich, Switzerland, March 2001: Proceedings, K. Deb, L. Thiele, C. Coello, D. Corne and E. Zitzler (eds). Lecture Notes in Computer Science, vol.1993, p.639-652. Springer-Verlag, 2001.
Sato, S., K. Otori, A. Takizawa, H. Sakai, Y. Ando and H. Kawamura. "Applying genetic algorithms to the optimum design of a concert hall." Journal of Sound and Vibration, vol.258, no.3, p.517-526 (2002).
Schechter, Bruce. "Putting a Darwinian spin on the diesel engine." The New York Times, September 19, 2000, p.F3. See also: Patch, Kimberly. "Algorithm evolves more efficient engine." Technology Research News, June/July 2000. Available online at http://www.trnmag.com/Stories/062800/Genetically_Enhanced_Engine_062800.html.
Srinivas, N. and Kalyanmoy Deb. "Multiobjective optimization using nondominated sorting in genetic algorithms." Evolutionary Computation, vol.2, no.3, p.221-248 (Fall 1994).
Soule, Terrence and Amy Ball. "A genetic algorithm with multiple reading frames." In GECCO-2001: Proceedings of the Genetic and Evolutionary Computation Conference, Lee Spector and Eric Goodman (eds). Morgan Kaufmann, 2001. Available online at http://www.cs.uidaho.edu/~tsoule/research/papers.html.
Tang, K.S., K.F. Man, S. Kwong and Q. He. "Genetic algorithms and their applications." IEEE Signal Processing Magazine, vol.13, no.6, p.22-37 (November 1996).
Weismann, Dirk, Ulrich Hammel, and Thomas Bäck. "Robust design of multilayer optical coatings by means of evolutionary algorithms." IEEE Transactions on Evolutionary Computation, vol.2, no.4, p.162-167 (November 1998).
Williams, Edwin, William Crossley and Thomas Lang. "Average and maximum revisit time trade studies for satellite constellations using a multiobjective genetic algorithm." Journal of the Astronautical Sciences, vol.49, no.3, p.385-400 (July-September 2001). See also: "Selecting better orbits for satellite constellations." Spaceflight Now, 18 October 2001. Available online at http://spaceflightnow.com/news/n0110/18orbits/. "Darwinian selection of satellite orbits for military use." Space.com, 16 October 2001. Available online at http://www.space.com/news/darwin_satellites_011016.html.
Zitzler, Eckart and Lothar Thiele. "Multiobjective evolutionary algorithms: a comparative case study and the Strength Pareto approach." IEEE Transactions on Evolutionary Computation, vol.3, no.4, p.257-271 (November 1999).

Crash Introduction to Artificial Neural Networks
by Ivan Galkin, U. MASS Lowell
(Materials for UML 91.531 Data Mining course)
1. Neurobiological Background

Neural Doctrine: the nervous system of living organisms is a structure consisting of many elements working in parallel and in connection with one another.

1836: discovery of the neural cell of the brain, the neuron.

The Structure of the Neuron
[Diagram of the neuron. Source: NIBS Pte Ltd.]

This is a result worthy of the Nobel Prize [1906]. The neuron is a many-inputs / one-output unit. The output can be excited or not excited, just two possible choices (like a flip-flop). The signals from other neurons are summed together and compared against a threshold to determine if the neuron shall excite ("fire"). The input signals are subject to attenuation in the synapses, the junction parts of the neuron.

1897: synapse concept introduced.

The next important step was the discovery that the resistance of a synapse to the incoming signal can be changed during a "learning" process [1949]:

Hebb's Learning Rule: if an input of a neuron is repeatedly and persistently causing the neuron to fire, a metabolic change happens in the synapse of that particular input to reduce its resistance.

This discovery became the basis for the concept of associative memory (see below).

2. Artificial Neuron

1943: first mathematical representation of the neuron.

The artificial neuron is actually quite simple. All signals can be 1 or -1 (the "binary" case, often called classic spin for its similarity to the problem of disordered magnetic systems). The neuron calculates a weighted sum of its inputs and compares it to a threshold: if the sum is higher than the threshold, the output is set to 1, otherwise to -1.

3. Artificial Neural Networks (ANN)

The power of the neuron comes from its collective behavior in a network where all neurons are interconnected. The network starts evolving: neurons continuously evaluate their outputs by looking at their inputs, calculating the weighted sum and comparing it to a threshold to decide if they should fire. This is a highly complex parallel process whose features cannot be reduced to phenomena taking place with individual neurons. One observation is that the evolving of an ANN eventually brings it to a state where all neurons continue working but no further changes in their states happen. A network may have more than one stable state, and which one is reached is (somehow) determined by the choice of synaptic weights and thresholds for the neurons.

3.1. Perceptron

3.1.1. Feed-forward Configuration

Another step in the understanding of ANN dynamics was made with the introduction and analysis of the perceptron [Rosenblatt, 1958], a simple neural network consisting of an input layer, an output layer, and possibly one or more intermediate layers of neurons. Once the input-layer neurons are clamped to their values, the evolving starts: layer by layer, the neurons determine their output. This ANN configuration is often called feed-forward because of this feature. The dependence of output values on input values is quite complex, involving all the synaptic weights and thresholds, and usually does not have a meaningful analytic expression. However, this is not necessary: there are learning algorithms that, given the inputs, adjust the weights to produce the required output. Simply put, we have created a thing which learns to recognize certain input patterns.
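To make the layer-by-layer evolving concrete, here is a minimal Python sketch of a classic-spin feed-forward pass. The layer sizes, weights and threshold values are invented for illustration and are not from the original tutorial:

```python
import numpy as np

def layer_output(S_in, W, thresholds):
    """One feed-forward step: each neuron takes the weighted sum of the
    previous layer's outputs and fires (+1) if it exceeds the neuron's
    threshold, otherwise outputs -1."""
    return np.where(W @ S_in > thresholds, 1, -1)

# A tiny 3-input, 2-hidden, 1-output perceptron with made-up weights.
x = np.array([1, -1, 1])                      # input layer, clamped
W_hidden = np.array([[0.5, -0.2, 0.1],
                     [0.3, 0.8, -0.6]])
W_output = np.array([[1.0, -1.0]])
hidden = layer_output(x, W_hidden, thresholds=np.zeros(2))
output = layer_output(hidden, W_output, thresholds=np.zeros(1))
print(output)   # the network's response to the input pattern
```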
There is a training phase, when known patterns are presented to the network and its weights are adjusted to produce the required outputs. Then there is a recognition phase, when the weights are fixed, patterns are again presented to the network, and it recalls the outputs.

3.1.2. Classification/Prediction ANN

Among the many applications of feed-forward ANNs, the classification or prediction scenario is perhaps the most interesting for data mining. In this mode, the network is trained to classify certain patterns into certain groups, and is then used to classify novel patterns which were never presented to the net before. (The correct term for this scenario is schemata-completion.)

Example (courtesy of Trajecta): A company plans to offer a product to potential customers. A database of 1,000,000 customer records exists, and 20,000 purchases (a 2% response) is the goal. Instead of contacting all 1,000,000 customers, only 100,000 are contacted. Their response to the sales offer is known, so this subset is used to train a neural network to predict which customers will decide to buy the product. The remaining 900,000 customers are then presented to the network, which classifies 32,000 of them as buyers. Those 32,000 customers are contacted, and the 2% response is achieved. Total savings: $2,500,000.

3.1.3. Application of ANN to Memory Designs

Content-addressable and associative memory: the information is stored, retrieved, and modified based on the data itself, not on its arbitrary storage location.
Error-correction and partial-contents memory: a memory capable of retrieving the stored information when presented with a noisy or incomplete version of the original sample.

3.1.4. Training Algorithms for Feed-forward ANNs

The most popular algorithm for adjusting weights during the training phase is called back-propagation of error. The initial configuration of the ANN is arbitrary, so presenting a pattern to the ANN is likely to produce an incorrect output. The errors for all input patterns are propagated backwards, from the output layer towards the input layer, and the corrections to the weights are selected to minimize the residual error between actual and desired outputs. The algorithm can be viewed as a generalized least-squares technique applied to the multilayer perceptron. The usual approach of minimizing the errors leads to a system of linear equations; it is important to realize that its solution is not unique. The time spent calculating the weights is much longer than the time required to run the finalized neural network in recognition mode, so the resulting weights are arguably eligible for copyright protection.

3.2. Energy Function Analysis

3.2.1. Energy Function

The introduction of the energy function concept, due to Hopfield [1982-1986], led to an explosion of work on neural networks. The energy function brought an elegant answer to the tough problem of ANN convergence to a stable state. For a network of N neurons with weights Wij and output states Si, the energy function is

$$ E = -\frac{1}{2} \sum_{i,j} W_{ij} S_i S_j $$

The specifics of ANN evolving allowed the introduction of this function, which always decreases (or remains unchanged) with each iteration step. The evolving of ANNs, and even their specifications, came to be described in terms of the energy function.

3.2.2. Global and Local Minima of the Energy Function

Letting an ANN evolve will eventually lead it to a stable state, where the energy function is at a minimum and cannot change further. The information stored in the synaptic weights Wij therefore corresponds to minima of the energy function, where the neural network ends up after its evolving.
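As a rough illustration of section 3.2, here is a small Python sketch of the Hopfield-style energy computation and the asynchronous update rule that never increases it. The weights and states are made-up toy values:

```python
import numpy as np

def network_energy(W, S):
    """Hopfield-style energy E = -1/2 * sum_ij W[i,j] * S[i] * S[j].
    With symmetric weights and zero self-connections, asynchronous
    updates never increase this value."""
    return -0.5 * S @ W @ S

def update_neuron(W, S, i):
    """Asynchronous update: neuron i fires (+1) if its local field is
    positive, otherwise takes state -1."""
    S[i] = 1 if W[i] @ S > 0 else -1
    return S

# Tiny 3-neuron network: symmetric weights, zero diagonal.
W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.3],
              [-0.5, 0.3, 0.0]])
S = np.array([1.0, -1.0, 1.0])
print(network_energy(W, S))
for i in range(3):
    S = update_neuron(W, S, i)
print(network_energy(W, S))   # never higher than the starting energy
```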
[Figure: minima of the energy function, pictured as a ball rolling on a curved energy landscape.]

When the input neurons are clamped to their initial values, the network acquires a certain energy, indicated by the ball in the figure above. Further evolving will cause the ball to slide into the nearest local minimum.

3.2.3. Training of ANN in Terms of the Energy Function

Training a network is the adjustment of weights to match the actual outputs of the network to the desired outputs. The expression for the energy function contains the weights, so we can think of the training process as a "shaping" of the energy function.

3.3. Mean-Field Theory

The Mean Field Theory (MFT) [Hopfield, 1982] treats neurons as objects with continuous input and output states, not just +1 or -1 as in the "classic spin" approach. In a network of neurons, MFT introduces the statistical notion of a "thermal average": the "mean" field V of all the input signals of a neuron. To provide continuity of the neuron outputs, a sigmoid function g(u) is used in place of the simple step-function threshold: where the classic-spin neuron has states Si = -1 or +1 and evolves by the deterministic rule Si = sign(Ui), the mean-field neuron's state is a continuous value given by the sigmoid of its input field. The most popular sigmoid function for ANNs is tanh. The sigmoid makes the processing in neurons nonlinear.

3.4. Feed-back ANN

This configuration of neural networks does not assume propagation of the input signal through hidden layers to the output layers; rather, the output signals of the neurons are fed back to the inputs, so that evolving continues until stopped externally. Feed-back ANNs are best suited for optimization problems, where the neural network looks for the best arrangement of interconnected factors; the arrangement corresponds to the global minimum of the energy function (see section 3.2.2). Feed-back ANNs are also used for error-correction and partial-contents memories (see section 3.1.3), where the stored patterns correspond to local minima of the energy function.

3.5. Search for the Global Minimum

Optimization problems require a search for the global minimum of the energy function. If we simply let an ANN evolve, it will reach the nearest local minimum and stop there. To help the search, one approach is to allow for energy fluctuations in the ANN by making firing probabilistic: a neuron is likely, but no longer certain, to fire when its local field is high, and likely to stay calm when the field is low. In other words, a certain amount of chaos is introduced into the ANN evolving process, which can also be imagined as "shaking" the system so that the ball slides along the energy function curve.

3.5.1. Simulated Annealing Schemes

If we allow "shaking" of the evolving process, it is important not to shake the system out of the global minimum; in other words, the shaking should be made lighter as the network evolves. By analogy with the annealing technique used in the metal industry, such temperature scenarios are collectively called simulated annealing. There are a few popular stochastic simulated annealing schemes (Heat Bath, Metropolis, Langevin); a toy sketch follows at the end of this section. Training of a neural network is an optimization problem itself, and simulated annealing schemes are applied there too (see section 4.2).

3.5.2. MFT Tunneling

Treatment of ANN evolving within the MFT approach allows the energy to "tunnel" into a neighboring (lower) minimum of the energy function.
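The "shaking" of section 3.5 can be sketched as follows. This uses one common heat-bath-style firing probability; the particular formula and the cooling schedule are illustrative assumptions, not prescriptions from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_update(W, S, T):
    """One noisy ("shaken") update sweep: each neuron fires with a
    probability that depends on its local field and a temperature T.
    High T means strong shaking; as T -> 0 this approaches the
    deterministic threshold rule."""
    for i in range(len(S)):
        field = W[i] @ S
        p_fire = 1.0 / (1.0 + np.exp(-2.0 * field / T))
        S[i] = 1 if rng.random() < p_fire else -1
    return S

# A simple annealing schedule: shake hard at first, then cool down.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
S = np.array([1.0, -1.0])
for T in [2.0, 1.0, 0.5, 0.1]:
    S = stochastic_update(W, S, T)
print(S)
```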
4. Advanced Topics

4.1. Underfitting and Overfitting of ANN

4.1.1. Fitting the Data

The application of neural networks to data mining uses the ability of an ANN to build models of the data by capturing its most important features during the training period. What is called "statistical inference" in statistics is here called "data fitting". The critical issue in developing a neural network is generalization: how well will the network classify patterns that are not in the training set? Neural networks, like other flexible nonlinear estimation methods such as kernel regression and smoothing splines, can suffer from either underfitting or overfitting.

4.1.2. Dealing with Noisy Data

The training set of data may well be quite "noisy" or "imprecise". The training process usually relies on some version of the least squares technique, which ideally should abstract away from the noise in the data. However, this ability depends on how optimal the configuration of the net is in terms of the number of layers, neurons and, ultimately, weights.

Underfitting: an ANN that is not sufficiently complex to correctly detect the pattern in a noisy data set.
Overfitting: an ANN that is so complex that it reacts to the noise in the data.

A network that is not sufficiently complex can fail to fully detect the signal in a complicated data set, leading to underfitting. A network that is too complex may fit the noise, not just the signal, leading to overfitting. Overfitting is especially dangerous because, with many of the common types of neural networks, it can easily lead to predictions that are far beyond the range of the training data. But underfitting can also produce wild predictions in multilayer perceptrons, even with noise-free data.

4.1.3. Avoiding Underfitting and Overfitting

The best way to avoid overfitting is to use lots of training data. If you have at least 30 times as many training cases as there are weights in the network, you are unlikely to suffer from overfitting. But you cannot arbitrarily reduce the number of weights for fear of underfitting. Given a fixed amount of training data, there are at least five effective approaches to avoiding underfitting and overfitting, and hence getting good generalization:

Model selection
Jittering
Weight decay
Early stopping (a minimal sketch appears at the end of this section)
Bayesian estimation

4.2. Training ANN as an Optimization Task

Training a neural network is, in most cases, an exercise in the numerical optimization of a usually nonlinear function. Methods of nonlinear optimization have been studied for hundreds of years, and there is a huge literature on the subject in fields such as numerical analysis, operations research, and statistical computing (e.g., Bertsekas 1995; Gill, Murray, and Wright 1981). There is no single best method for nonlinear optimization; you need to choose a method based on the characteristics of the problem to be solved. For functions with continuous second derivatives (which include feed-forward nets with the most popular differentiable activation functions and error functions), three general types of algorithms have been found to be effective for most practical purposes:

For a small number of weights, stabilized Newton and Gauss-Newton algorithms, including various Levenberg-Marquardt and trust-region algorithms, are efficient.
For a moderate number of weights, various quasi-Newton algorithms are efficient.
For a large number of weights, various conjugate-gradient algorithms are efficient.
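The following self-contained Python sketch ties sections 4.1.3 and 4.2 together: training is posed as iterative numerical minimization of the squared error (plain gradient descent, the simplest local method), and early stopping halts it when the error on held-out validation data stops improving. The data, model and hyperparameters are all invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy 1-D data split into a training set and a validation set.
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(0, 0.2, 200)     # signal + noise
X = np.vander(x, 12)                            # degree-11 polynomial features
X_tr, y_tr, X_val, y_val = X[:100], y[:100], X[100:], y[100:]

w = np.zeros(X.shape[1])
best_w, best_err, stale, patience = w.copy(), np.inf, 0, 20
for epoch in range(5000):
    grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)   # least-squares gradient
    w -= 0.1 * grad                                 # one gradient-descent step
    val_err = np.mean((X_val @ w - y_val) ** 2)
    if val_err < best_err:
        best_w, best_err, stale = w.copy(), val_err, 0
    else:
        stale += 1
        if stale >= patience:   # validation error stopped improving: stop early
            break
print(f"stopped at epoch {epoch}, best validation MSE {best_err:.3f}")
```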
All of the above methods find local optima. For global optimization, there are a variety of approaches. You can simply run any of the local optimization methods from numerous random starting points. Or you can try more complicated methods designed for global optimization, such as simulated annealing or genetic algorithms (see Reeves 1993 and "What about Genetic Algorithms and Evolutionary Computation?").

Introduction to quantum mechanics
From Wikipedia, the free encyclopedia

This article is an accessible, non-technical introduction to the subject. For the main encyclopedia article, see Quantum mechanics.

[Photograph, left to right: Max Planck, Albert Einstein, Niels Bohr, Louis de Broglie, Max Born, Paul Dirac, Werner Heisenberg, Wolfgang Pauli, Erwin Schrödinger, Richard Feynman.]

Quantum mechanics is the body of scientific principles which attempts to explain the behavior of matter and its interactions with energy on the scale of atoms and atomic particles. Just before 1900, it became clear that classical physics was unable to model certain phenomena. Coming to terms with these limitations led to the development of quantum mechanics, a major revolution in physics. This article describes how the limitations of classical physics were discovered, and the main concepts of the quantum theories which replaced them in the early decades of the 20th century.[note 1] These concepts are described in roughly the order in which they were first discovered; for a more complete history of the subject, see History of quantum mechanics.

Some aspects of quantum mechanics can seem counter-intuitive, because they describe behavior quite different from that seen at larger length scales, where classical physics is an excellent approximation. In the words of Richard Feynman, quantum mechanics deals with "nature as she is — absurd."[1]

Many types of energy, such as photons (discrete units of light), behave in some respects like particles and in other respects like waves. Radiators of photons (such as neon lights) have emission spectra which are discontinuous, in that only certain frequencies of light are present. Quantum mechanics predicts the energies, the colors, and the spectral intensities of all forms of electromagnetic radiation. But quantum theory also ordains that the more closely one pins down one measure (such as the position of a particle), the less precise another measurement pertaining to the same particle (such as its momentum) must become. Put another way, measuring position first and then measuring momentum does not have the same outcome as measuring momentum first and then measuring position; the act of measuring the first property necessarily introduces additional energy into the micro-system being studied, thereby perturbing that system.

Even more disconcerting, pairs of particles can be created as entangled twins, which means that a measurement which pins down one property of one of the particles will instantaneously pin down the same or another property of its entangled twin, regardless of the distance separating them, though this may be regarded as merely a mathematical anomaly rather than a real one.

Contents
1 The first quantum theory: Max Planck and black body radiation
2 Photons: the quantisation of light
2.1 The photoelectric effect
3 The quantisation of matter: the Bohr model of the atom
3.1 Bohr's model
4 Wave-particle duality
4.1 The double-slit experiment
4.2 Application to the Bohr model
5 Development of modern quantum mechanics
6 Copenhagen interpretation
6.1 Uncertainty principle
6.2 Wave function collapse
6.3 Eigenstates and eigenvalues
6.4 The Pauli exclusion principle
6.5 Application to the hydrogen atom
7 Dirac wave equation
8 Quantum entanglement
9 Quantum electrodynamics
10 Interpretations
11 See also
12 Notes
13 References
14 Further reading
15 External links

The first quantum theory: Max Planck and black body radiation

[Photograph: hot metalwork from a blacksmith. The yellow-orange glow is the visible part of the thermal radiation emitted due to the high temperature. Everything else in the picture is glowing with thermal radiation as well, but less brightly and at longer wavelengths than the human eye can detect; a far-infrared camera will show this radiation.]

Thermal radiation is electromagnetic radiation emitted from the surface of an object due to the object's temperature. If an object is heated sufficiently, it starts to emit light at the red end of the spectrum — it is "red hot". Heating it further causes the colour to change, as light at shorter wavelengths (higher frequencies) begins to be emitted. It turns out that a perfect emitter is also a perfect absorber. When it is cold, such an object looks perfectly black, as it emits practically no visible light, because it absorbs all the light that falls on it. Consequently, an ideal thermal emitter is known as a black body, and the radiation it emits is called black body radiation.

In the late 19th century, thermal radiation had been fairly well characterized experimentally. The wavelength at which the radiation is strongest is given by Wien's displacement law, and the overall power emitted per unit area is given by the Stefan–Boltzmann law. So, as temperature increases, the glow colour changes from red to yellow to white to blue. Even as the peak wavelength moves into the ultra-violet, enough radiation continues to be emitted in the blue wavelengths that the body continues to appear blue. It never becomes invisible — indeed, the radiation of visible light increases monotonically with temperature.[2] Physicists were searching for a theoretical explanation for these experimental results.

The peak wavelength and total power radiated by a black body vary with temperature. Classical electromagnetism drastically overestimates these intensities, particularly at short wavelengths. The "answer" found using classical physics is the Rayleigh–Jeans law. This law agrees with experimental results at long wavelengths. At short wavelengths, however, classical physics predicts that energy will be emitted by a hot body at an infinite rate. This result, which is clearly wrong, is known as the ultraviolet catastrophe.

The first model which was able to explain the full spectrum of thermal radiation was put forward by Max Planck in 1900.[3] He modelled the thermal radiation as being in equilibrium, using a set of harmonic oscillators. To reproduce the experimental results, he had to assume that each oscillator produced an integral number of units of energy at its one characteristic frequency, rather than being able to emit any arbitrary amount of energy.
In other words, the energy of each oscillator was "quantized".[note 2] The quantum of energy for each oscillator, according to Planck, was proportional to the frequency of the oscillator; the constant of proportionality is now known as the Planck constant. The Planck constant, usually written as h, has the value 6.63×10−34 J s, and so the energy, E, of an oscillator of frequency f is given by E = hf.[4]

Planck's law was the first quantum theory in physics, and Planck won the Nobel Prize in 1918 "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".[5] At the time, however, Planck's view was that quantization was purely a mathematical trick, rather than (as we now know) a fundamental change in our understanding of the world.[6]

Photons: the quantisation of light

[Photograph: Einstein's portrait by Harm Kamerlingh Onnes at the University of Leiden, 1920.]

In 1905, Albert Einstein took an extra step. He suggested that quantisation wasn't just a mathematical trick: the energy in a beam of light occurs in individual packets, which are now called photons.[7] The energy of a single photon is given by its frequency multiplied by Planck's constant: E = hf.

For centuries, scientists had debated between two possible theories of light: was it a wave, or did it instead consist of a stream of tiny particles? By the 19th century, the debate was generally considered to have been settled in favour of the wave theory, as it was able to explain observed effects such as refraction, diffraction and polarization. James Clerk Maxwell had shown that electricity, magnetism and light are all manifestations of the same phenomenon: the electromagnetic field. Maxwell's equations, which are the complete set of laws of classical electromagnetism, describe light as waves: a combination of oscillating electric and magnetic fields. Because of the preponderance of evidence in favour of the wave theory, Einstein's ideas were met initially by great scepticism. Eventually, however, the photon model became favoured; one of the most significant pieces of evidence in its favour was its ability to explain several puzzling properties of the photoelectric effect, described in the following section. Nonetheless, the wave analogy remained indispensable for helping to understand other characteristics of light, such as diffraction.

The photoelectric effect
Main article: Photoelectric effect

[Diagram: light (red arrows, left) is shone upon a metal. If the light is of sufficient frequency (i.e. sufficient energy), electrons are ejected (blue arrows, right).]

In 1887 Heinrich Hertz observed that light can eject electrons from metal.[8] In 1902 Philipp Lenard discovered that the maximum possible energy of an ejected electron is related to the frequency of the light, not to its intensity; if the frequency is too low, no electrons are ejected regardless of the intensity. The lowest frequency of light which causes electrons to be emitted, called the threshold frequency, is different for every metal. This is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the radiation.
Einstein explained the effect by postulating that a beam of light is a stream of particles (photons), and that if the beam is of frequency f, each photon has an energy equal to hf (i.e. an integer multiple of Planck's constant).[8] An electron is likely to be struck only by a single photon, which imparts at most an energy hf to the electron[8] (in point of fact, it logically cannot be struck by more than one photon, since the first one it absorbs will cause it to be ejected). Therefore, the intensity of the beam has no effect;[note 3] only its frequency determines the maximum energy that can be imparted to the electron.[8]

To explain the threshold effect, Einstein argued that it takes a certain amount of energy, called the work function, denoted by φ, to remove an electron from the metal.[8] This amount of energy is different for each metal. If the energy of the photon is less than the work function, then it does not carry sufficient energy to remove the electron from the metal. The threshold frequency, f0, is the frequency of a photon whose energy is equal to the work function: hf0 = φ. If f is greater than f0, the energy hf is enough to remove an electron. The ejected electron has a kinetic energy EK which is, at most, equal to the photon's energy minus the energy needed to dislodge the electron from the metal: EK = hf − φ.

Einstein's description of light as being composed of particles extended Planck's notion of quantised energy: a single photon of a given frequency f delivers an invariant amount of energy hf. In other words, individual photons can deliver more or less energy, but only depending on their frequencies. However, although the photon is a particle, it was still being described as having the wave-like property of frequency. Once again, the particle account of light was being "compromised".[9][note 4]

The relationship between the frequency of electromagnetic radiation and the energy of each individual photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light will deliver a high amount of energy — enough to contribute to cellular damage such as a sunburn. A photon of infrared light will deliver a lower amount of energy — only enough to warm one's skin. So an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn.

If each individual photon had identical energy, it would not be correct to talk of a "high energy" photon. Light of high frequency would carry more energy only because of a wave effect, i.e. because there were more photons arriving per second. If you doubled the frequency, you would double the number of energy units arriving each second: an argument based on intensity (i.e. on the number of photons per second). Einstein rejected that wave-dependent classical approach in favour of a particle-based analysis, where the energy of the particle must be absolute (since it is measured from a single impact only) and varies with frequency in discrete steps (i.e. is quantised). Hence he arrived at the concept of quantised energy levels.
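A small worked example makes the threshold behaviour concrete. The work function value below is an assumed, roughly caesium-like number chosen purely for illustration:

```python
# Photoelectric effect: a photon must exceed the threshold frequency
# before any electron is ejected, regardless of beam intensity.
h = 6.63e-34          # Planck constant, J*s
eV = 1.602e-19        # joules per electron-volt
phi = 2.1 * eV        # assumed work function (energy to free one electron)

f0 = phi / h          # threshold frequency: h * f0 = phi
print(f"threshold frequency: {f0:.2e} Hz")   # ~5.1e14 Hz (visible light)

f = 1.0e15            # an ultraviolet photon
E_K = h * f - phi     # maximum kinetic energy of the ejected electron
print(f"max kinetic energy: {E_K / eV:.2f} eV")   # ~2.0 eV
```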
In nature, single photons are not encountered. The sun emits photons continuously at all electromagnetic frequencies, so they appear to propagate as a continuous wave, not as discrete units. The emission sources available to Hertz and Lenard in the 19th century also had that characteristic. And the traditional mechanisms for generating photons are classical devices, in which the energy output is regulated by varying the frequency.

But Einstein proposed that although a particular frequency is tied to a specific energy level, the frequency is dependent on the energy level, not vice versa (contrary to the tenets of classical physics). This formed his solution to the photoelectric effect, even though it was counter-intuitive. And although the energy imparted by the photon is invariant at any given frequency, the initial energy state of the electron prior to absorption is not. Therefore anomalous results may occur for individual electrons, but statistically a process of averaging will smooth out the results if a large enough number of electrons are emitted. This point is helpful in comprehending the distinction between the study of individual particles in quantum dynamics and the study of massed particles in classical physics.

The quantisation of matter: the Bohr model of the atom

By the dawn of the 20th century, it was known that atoms consist of a diffuse cloud of negatively-charged electrons surrounding a small, dense, positively-charged nucleus. This suggested a model in which the electrons circle around the nucleus like planets orbiting a sun.[note 5] However, it was also known that the atom in this model would be unstable: according to classical theory, orbiting electrons are undergoing centripetal acceleration and should therefore give off electromagnetic radiation, the loss of energy causing them to spiral toward the nucleus and collide with it in a fraction of a second.

A second, related puzzle was the emission spectrum of atoms. When a gas is heated, it gives off light only at discrete frequencies. For example, the visible light given off by hydrogen consists of four different colours, as shown in the picture below. By contrast, white light consists of a continuous emission across the whole range of visible frequencies.

[Figure: emission spectrum of hydrogen. When excited, hydrogen gas gives off light in four distinct colours (spectral lines) in the visible spectrum, as well as a number of lines in the infra-red and ultra-violet.]

In 1885 the Swiss mathematician Johann Balmer discovered that each wavelength λ (lambda) in the visible spectrum of hydrogen is related to some integer n by the equation

$$ \lambda = B \left( \frac{n^2}{n^2 - 4} \right), $$

where B is a constant which Balmer determined to be equal to 364.56 nm. Thus Balmer's constant was the basis of a system of discrete, i.e. quantised, integers.

In 1888 Johannes Rydberg generalized and greatly increased the explanatory utility of Balmer's formula. He predicted that hydrogen will emit light of wavelength λ, where λ is related to two integers n and m according to what is now known as the Rydberg formula:[11]

$$ \frac{1}{\lambda} = R \left( \frac{1}{m^2} - \frac{1}{n^2} \right), $$

where R is the Rydberg constant, equal to 0.0110 nm−1, and n must be greater than m.

Rydberg's formula accounts for the four visible wavelengths of hydrogen by setting m = 2 and n = 3, 4, 5, 6. It also predicts additional wavelengths in the emission spectrum: for m = 1 and n > 1, the emission spectrum should contain certain ultraviolet wavelengths, and for m = 3 and n > 3, it should also contain certain infrared wavelengths. Experimental observation of these wavelengths came several decades later: in 1908 Friedrich Paschen found some of the predicted infrared wavelengths, and in 1914 Theodore Lyman found some of the predicted ultraviolet wavelengths.[11]
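Plugging m = 2 and n = 3, 4, 5, 6 into the Rydberg formula recovers the four visible hydrogen lines; a few lines of Python confirm this:

```python
# Rydberg formula: 1/lambda = R * (1/m^2 - 1/n^2), with R = 0.0110 nm^-1.
R = 0.0110  # Rydberg constant in nm^-1

def wavelength_nm(m, n):
    return 1.0 / (R * (1.0 / m**2 - 1.0 / n**2))

for n in (3, 4, 5, 6):
    print(f"m=2, n={n}: {wavelength_nm(2, n):.0f} nm")
# -> roughly 655, 485, 433 and 409 nm: the red, blue-green and
#    two violet lines of the visible hydrogen spectrum
```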
Bohr's model
Main article: Bohr model
[Diagram: the Bohr model of the atom, showing an electron quantum jumping to ground state n = 1.]
In 1913 Niels Bohr proposed a new model of the atom that included quantized electron orbits.[12] In Bohr's model, electrons could inhabit only certain orbits around the atomic nucleus. When an atom emitted (or absorbed) energy, the electron did not move in a continuous trajectory from one orbit around the nucleus to another, as might be expected classically. Instead, the electron would jump instantaneously from one orbit to another, giving off the emitted light in the form of a photon.[13] The possible energies of photons given off by each element were determined by the differences in energy between the orbits, and so the emission spectrum for each element would contain a number of lines.[14]
Bohr theorised that the angular momentum, L, of an electron is quantised:
$$ L = n \frac{h}{2\pi}, $$
where n is an integer and h is the Planck constant. Starting from this assumption, Coulomb's law and the equations of circular motion show that an electron with n units of angular momentum will orbit a proton at a distance r given by
$$ r = \frac{n^2 h^2}{4 \pi^2 k_e m e^2}, $$
where ke is the Coulomb constant, m is the mass of an electron, and e is the charge on an electron. For simplicity this is written as
$$ r = n^2 a_0, $$
where a0, called the Bohr radius, is equal to 0.0529 nm. The Bohr radius is the radius of the smallest allowed orbit.
The energy of the electron[note 6] can also be calculated, and is given by
$$ E = -\frac{k_e e^2}{2 a_0} \, \frac{1}{n^2}. $$
Thus Bohr's assumption that angular momentum is quantised means that an electron can only inhabit certain orbits around the nucleus, and that it can have only certain energies. A consequence of these constraints is that the electron will not crash into the nucleus: it cannot continuously emit energy, and it cannot come closer to the nucleus than a0 (the Bohr radius).
An electron loses energy by jumping instantaneously from its original orbit to a lower orbit; the extra energy is emitted in the form of a photon. Conversely, an electron that absorbs a photon gains energy, hence it jumps to an orbit that is farther from the nucleus.
Each photon from glowing atomic hydrogen is due to an electron moving from a higher orbit, with radius rn, to a lower orbit, rm. The energy Eγ of this photon is the difference in the energies En and Em of the electron:

$$ E_\gamma = E_n - E_m = \frac{k_e e^2}{2 a_0} \left( \frac{1}{m^2} - \frac{1}{n^2} \right) $$
Since Planck's equation shows that the photon's energy is related to its wavelength by Eγ = hc/λ, the wavelengths of light which can be emitted are given by
$$ \frac{1}{\lambda} = \frac{k_e e^2}{2 a_0 h c} \left( \frac{1}{m^2} - \frac{1}{n^2} \right) $$
This equation has the same form as the Rydberg formula, and predicts that the constant R should be given by
$$ R = \frac{k_e e^2}{2 a_0 h c} $$
Therefore the Bohr model of the atom can predict the emission spectrum of hydrogen in terms of fundamental constants.[note 7] However, it was not able to make accurate predictions for multi-electron atoms, or to explain why some spectral lines are brighter than others.
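This prediction is easy to check numerically. Using standard (rounded) values for the constants, the Bohr expression for R indeed reproduces the empirical Rydberg constant quoted earlier:

```python
# Checking the Bohr model's prediction R = k_e * e^2 / (2 * a0 * h * c)
# against the empirical Rydberg constant of 0.0110 nm^-1.
k_e = 8.988e9        # Coulomb constant, N*m^2/C^2
e = 1.602e-19        # electron charge, C
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
a0 = 5.29e-11        # Bohr radius, m

R = k_e * e**2 / (2 * a0 * h * c)   # in m^-1
print(f"R = {R * 1e-9:.4f} nm^-1")  # -> ~0.0110 nm^-1, matching experiment
```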
Wave-particle duality
Main article: Wave-particle duality
In 1924, Louis de Broglie proposed the idea that just as light has both wave-like and particle-like properties, matter also has wave-like properties.[15] The wavelength, λ, associated with a particle is related to its momentum, p:[16][17]
$$ \lambda = \frac{h}{p} $$
The relationship, called the de Broglie hypothesis, holds for all types of matter. Thus all matter exhibits properties of both particles and waves.
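As a worked example (the 100 V accelerating voltage is an arbitrary illustrative choice), the de Broglie wavelength of a slow electron comes out at roughly atomic dimensions, which is why electron diffraction off crystals is observable:

```python
import math

# De Broglie wavelength lambda = h / p for an electron accelerated
# through 100 V (kinetic energy 100 eV), treated non-relativistically.
h = 6.626e-34         # Planck constant, J*s
m = 9.109e-31         # electron mass, kg
E = 100 * 1.602e-19   # kinetic energy in joules

p = math.sqrt(2 * m * E)                  # momentum from E = p^2 / (2m)
print(f"lambda = {h / p * 1e9:.3f} nm")   # ~0.12 nm, about the atomic scale
```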
Three years later, the wave-like nature of electrons was demonstrated by showing that a beam of electrons could exhibit diffraction, just like a beam of light. At the University of Aberdeen, George Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs, Davisson and Germer guided their beam through a crystalline grid. Similar wave-like phenomena were later shown for atoms and even small molecules. De Broglie was awarded the Nobel Prize for Physics in 1929 for his hypothesis; Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.
This is the concept of wave-particle duality: neither the classical concept of "particle" nor that of "wave" can fully describe the behavior of quantum-scale objects, whether photons or matter. Wave-particle duality is an example of the principle of complementarity in quantum physics. An elegant example of wave-particle duality, the double-slit experiment, is discussed in the section below.
De Broglie's treatment of quantum events served as a jumping-off point for Schrödinger when he set about constructing a wave equation to describe quantum-theoretical events.
The double-slit experiment
Main article: Double-slit experiment
[Figure: light from one slit interferes with light from the other, producing an interference pattern (the three fringes shown at the right).]
In the double-slit experiment as originally performed by Thomas Young and Augustin Fresnel in 1827, a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern of light and dark bands on a screen. If one of the slits is covered up, one might naively expect that the intensity of the fringes due to interference would be halved everywhere. In fact, a much simpler pattern is seen: a simple diffraction pattern, diametrically opposite the open slit. Exactly the same behaviour can be demonstrated in water waves, and so the double-slit experiment was seen as a demonstration of the wave nature of light.
[Figure: the diffraction pattern produced when light is shone through one slit (top) and the interference pattern produced by two slits (bottom). The two-slit pattern is much more complex, demonstrating the wave-like propagation of light.]
The double-slit experiment has also been performed using electrons, atoms, and even molecules, and the same type of interference pattern is seen. Thus all matter possesses both particle and wave characteristics.
Even if the source intensity is turned down so that only one particle (e.g. photon or electron) is passing through the apparatus at a time, the same interference pattern develops over time. The quantum particle acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum particle will act as a wave when we do an experiment to measure its wave-like properties, and like a particle when we do an experiment to measure its particle-like properties. Where on the detector screen any individual particle shows up will be the result of an entirely random process.
Application to the Bohr model
De Broglie expanded the Bohr model of the atom by showing that an electron in orbit around a nucleus could be thought of as having wave-like properties. In particular, an electron will be observed only in situations that permit a standing wave around a nucleus. An example of a standing wave is a violin string, which is fixed at both ends and can be made to vibrate. The waves created by a stringed instrument appear to oscillate in place, moving from crest to trough in an up-and-down motion. The wavelength of a standing wave is related to the length of the vibrating object and the boundary conditions. For example, because the violin string is fixed at both ends, it can carry standing waves of wavelengths 2l/n, where l is the length and n is a positive integer. De Broglie suggested that the allowed electron orbits were those for which the circumference of the orbit would be an integer number of wavelengths.
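Stated as a short derivation (standard textbook reasoning, added here for concreteness): fitting a whole number of de Broglie wavelengths around a circular orbit of radius r gives

$$ n \lambda = 2 \pi r, \qquad \lambda = \frac{h}{p} \quad \Rightarrow \quad L = p r = n \frac{h}{2\pi}, $$

which is exactly Bohr's quantisation condition for the angular momentum.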
Development of modern quantum mechanics
[Photograph: Erwin Schrödinger, about 1933, age 46.]
In 1925, building on de Broglie's hypothesis, Erwin Schrödinger developed the equation that describes the behaviour of a quantum mechanical wave. The equation, called the Schrödinger equation after its creator, is central to quantum mechanics: it defines the permitted stationary states of a quantum system and describes how the quantum state of a physical system changes in time.
Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a classical wave, moving in a well of electrical potential created by the proton. This calculation accurately reproduced the energy levels of the Bohr model.
[Photograph: Werner Heisenberg at the age of 26. Heisenberg won the Nobel Prize in Physics in 1932 for the work that he did at around this time.[18]]
At around the same time, Werner Heisenberg was trying to find an explanation for the intensities of the different lines in the hydrogen emission spectrum. By means of a series of mathematical analogies, Heisenberg wrote out the quantum mechanical analogue for the classical computation of intensities. Shortly afterwards, Heisenberg's colleague Max Born realised that Heisenberg's method of calculating the probabilities for transitions between the different energy levels could best be expressed by using the mathematical concept of matrices.[note 8]
In May 1926, Schrödinger proved that Heisenberg's matrix mechanics and his own wave mechanics made the same predictions about the properties and behaviour of the electron; mathematically, the two theories were identical. Yet the two men disagreed on the interpretation of their mutual theory. Heisenberg saw no problem in the existence of discontinuous quantum jumps, but Schrödinger hoped that a theory based on continuous wave-like properties could avoid what he called (in the words of Wilhelm Wien[19]) "this nonsense about quantum jumps."
Copenhagen interpretation
Main article: Copenhagen interpretation
Bohr, Heisenberg and others tried to explain what these experimental results and mathematical models really mean. Their description, known as the Copenhagen interpretation of quantum mechanics, aimed to describe the nature of reality that was being probed by the measurements and described by the mathematical formulations of quantum mechanics.
The main principles of the Copenhagen interpretation are:
1. A system is completely described by a wave function, ψ. (Heisenberg)
2. How ψ changes over time is given by the Schrödinger equation.
3. The description of nature is essentially probabilistic. The probability of an event — for example, where on the screen a particle will show up in the two slit experiment — is related to the square of the amplitude of its wave function. (Born rule, due to Max Born, which gives a physical meaning to the wavefunction in the Copenhagen interpretation: the probability amplitude)
4. It is not possible to know the values of all of the properties of the system at the same time; those properties that are not known with precision must be described by probabilities. (Heisenberg's uncertainty principle)
5. Matter, like energy, exhibits a wave-particle duality. An experiment can demonstrate the particle-like properties of matter, or its wave-like properties; but not both at the same time. (Complementarity principle due to Bohr)
6. Measuring devices are essentially classical devices, and measure classical properties such as position and momentum.
7. The quantum mechanical description of large systems should closely approximate the classical description. (Correspondence principle of Bohr and Heisenberg)
Various consequences of these principles are discussed in more detail in the following subsections.
Uncertainty principle
Main article: Uncertainty principle
Suppose that we want to measure the position and speed of an object — for example a car going through a radar speed trap. Naively, we assume that the car has a definite position and speed at a particular moment in time, and how accurately we can measure these values depends on the quality of our measuring equipment — if we improve the precision of our measuring equipment, we will get a result that is closer to the true value. In particular, we would assume that how precisely we measure the speed of the car does not affect its position, and vice versa.
In 1927, Heisenberg proved that these assumptions are not correct.[20] Quantum mechanics shows that certain pairs of physical properties, like position and speed, cannot both be known to arbitrary precision: the more precisely one property is known, the less precisely the other can be known. This statement is known as the uncertainty principle. The uncertainty principle isn't a statement about the accuracy of our measuring equipment, but about the nature of the system itself — our naive assumption that the car had a definite position and speed was incorrect. On a scale of cars and people, these uncertainties are too small to notice, but when dealing with atoms and electrons they become critical.[21]
The uncertainty principle shows mathematically that the product of the uncertainty in the position andmomentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to Planck's constant.
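In symbols, the standard modern statement of this bound (added here for concreteness; the article itself stops at the verbal description) is

$$ \Delta x \, \Delta p \ \ge\ \frac{\hbar}{2}, \qquad \hbar = \frac{h}{2\pi}. $$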
Wave function collapse
Main article: Wave function collapse
Wavefunction collapse is a forced term for whatever happens when it becomes appropriate to replace the description of an uncertain state of a system with a description of the system in a definite state. Explanations for the nature of the process of becoming certain are controversial. At any time before a photon "shows up" on a detection screen, it can only be described by a set of probabilities for where it might show up. When it does show up, for instance in the CCD of an electronic camera, the time and the place where it interacted with the device are known within very tight limits. However, the photon has disappeared, and the wave function has disappeared with it. In its place some physical change in the detection screen has appeared, e.g. an exposed spot in a sheet of photographic film.
Eigenstates and eigenvalues
For a more detailed introduction to this subject, see: Introduction to eigenstates
Because of the uncertainty principle, statements about both the position and momentum of particles can only assign a probability that the position or momentum will have some numerical value. Therefore it is necessary to formulate clearly the difference between the state of something that is indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned-down" in some respect, it is said to possess an eigenstate.
The Pauli exclusion principle
Main article: Pauli exclusion principle
Wolfgang Pauli proposed the following concise statement of his 1925 principle: "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers."[22]
He developed the exclusion principle from what he called a "two-valued quantum degree of freedom" to account for the observation of a doublet, meaning a pair of lines differing by a small amount (e.g. on the order of 0.15 Å), in the spectrum of atomic hydrogen. The existence of these closely spaced lines in the bright-line spectrum meant that there was more energy in the electron orbital from magnetic moments than had previously been described.
In early 1925, Uhlenbeck and Goudsmit proposed that electrons rotate about an axis in the same way that the earth rotates on its axis. They proposed to call this property spin. Spin would account for the missing magnetic moment, and allow two electrons in the same orbital to occupy distinct quantum states if they "spun" in opposite directions, thus satisfying the Exclusion Principle. A new quantum number was then needed, one to represent the momentum embodied in the rotation of each electron.
Application to the hydrogen atom
Main article: Atomic orbital model
Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a wave, represented by the "wave function" Ψ, in an electric potential well, V, created by the proton. In Bohr's theory, electrons orbited the nucleus much as planets orbit the sun. Schrödinger's model gives the probability of the electron being located at any particular location – following the uncertainty principle, the electron cannot be viewed as having an exact location at any given time. The solutions to Schrödinger's equation are called atomic orbitals, "clouds" of possible locations in which an electron might be found, a distribution of probabilities rather than a precise location. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately reproduce the energy levels of the Bohr model.
Within Schrödinger's picture, each electron has four properties:
1. An "orbital" designation, indicating whether the particle wave is one that is closer to the nucleus with less energy or one that is farther from the nucleus with more energy;
2. The "shape" of the orbital, spherical or otherwise;
3. The "inclination" of the orbital, determining the magnetic moment of the orbital around the z-axis.
4. The "spin" of the electron.
The collective name for these properties is the quantum state of the electron. The quantum state can be described by giving a number to each of these properties; these are known as the electron's quantum numbers. The quantum state of the electron is described by its wavefunction. The Pauli exclusion principle demands that no two electrons within an atom may have the same values of all four numbers.


The shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colors show the phase of the wavefunction.
The first property describing the orbital is the principal quantum number, n, which is the same as in Bohr's model. n denotes the energy level of each orbital. The possible values for n are integers: n = 1, 2, 3, ...
The next quantum number, the azimuthal quantum number, denoted l, describes the shape of the orbital. The shape is a consequence of the angular momentum of the orbital. The angular momentum represents the resistance of a spinning object to speeding up or slowing down under the influence of external force. The azimuthal quantum number represents the orbital angular momentum of an electron around its nucleus. The possible values for l are integers from 0 to n − 1: l = 0, 1, ..., n − 1. The shape of each orbital has its own letter as well. The first shape is denoted by the letter s (a mnemonic being "sphere"). The next shape is denoted by the letter p and has the form of a dumbbell. The other orbitals have more complicated shapes (see atomic orbital), and are denoted by the letters d, f, and g.
The third quantum number, the magnetic quantum number, describes the magnetic moment of the electron, and is denoted by ml (or simply m). The possible values for ml are integers from −l to l: ml = −l, −l + 1, ..., l − 1, l. The magnetic quantum number measures the component of the angular momentum in a particular direction. The choice of direction is arbitrary; conventionally the z-direction is chosen.
The fourth quantum number, the spin quantum number (pertaining to the "orientation" of the electron's spin) is denoted ms, with values +1⁄2 or -1⁄2.
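The counting rules for these four numbers are simple enough to spell out in code. This sketch (Python; the function is written for this article as an illustration, not a standard routine) enumerates every allowed (n, l, ml, ms) combination and confirms the shell capacity 2n²:

```python
def allowed_states(n):
    """Yield every allowed (n, l, ml, ms) combination for a given n."""
    for l in range(n):                 # azimuthal number: 0 .. n-1
        for ml in range(-l, l + 1):    # magnetic number: -l .. +l
            for ms in (+0.5, -0.5):    # spin: up or down
                yield (n, l, ml, ms)

for n in (1, 2, 3):
    count = sum(1 for _ in allowed_states(n))
    print(f"n = {n}: {count} allowed states (2n^2 = {2 * n * n})")
```

Since the Pauli exclusion principle allows at most one electron per combination, this reproduces the familiar shell capacities 2, 8, 18, ...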
The chemist Linus Pauling wrote, by way of example:
In the case of a helium atom with two electrons in the 1s orbital, the Pauli Exclusion Principle requires that the two electrons differ in the value of one quantum number. Their values of n, l, and ml are the same; moreover, they have the same spin, s = 1⁄2. Accordingly they must differ in the value of ms, which can have the value of +1⁄2 for one electron and −1⁄2 for the other.[22]
It is the underlying structure and symmetry of atomic orbitals, and the way that electrons fill them, that determines the organisation of the periodic table and the structure and strength of chemical bonds between atoms.
Dirac wave equation
Main article: Dirac equation
In 1928, Paul Dirac extended the Pauli equation, which described spinning electrons, to account for special relativity. The result was a theory that dealt properly with events, such as the speed at which an electron orbits the nucleus, occurring at a substantial fraction of the speed of light. By using the simplest electromagnetic interaction, Dirac was able to predict the value of the magnetic moment associated with the electron's spin, and found the experimentally observed value, which was too large to be that of a spinning charged sphere governed by classical physics. He was able to solve for the spectral lines of the hydrogen atom, and to reproduce from physical first principles Sommerfeld's successful formula for the fine structure of the hydrogen spectrum.
Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of an antielectron and of a dynamical vacuum. This led to many-particle quantum field theory.
Quantum entanglement
Main article: Quantum entanglement


Superposition of two quantum characteristics, and two resolution possibilities.
The Pauli exclusion principle says that two electrons in one system cannot be in the same state. Nature leaves open the possibility, however, that two electrons can have both states "superimposed" over them. Recall that the wave functions that emerge simultaneously from the double slits arrive at the detection screen in a state of superposition. Nothing is certain until the superimposed waveforms "collapse". At that instant an electron shows up somewhere in accordance with the probabilities that are the squares of the amplitudes of the two superimposed waveforms. The situation there is already very abstract. A concrete way of thinking about entangled photons, photons in which two contrary states are superimposed on each of them in the same event, is as follows:
Imagine that the superposition of a state that can be mentally labeled as blue and another state that can be mentally labeled as red will then appear (in imagination, of course) as a purple state. Two photons are produced as the result of the same atomic event. Perhaps they are produced by the excitation of a crystal that characteristically absorbs a photon of a certain frequency and emits two photons of half the original frequency. So the two photons come out "purple." If the experimenter now performs some experiment that will determine whether one of the photons is either blue or red, then that experiment changes the photon involved from one having a superposition of "blue" and "red" characteristics to a photon that has only one of those characteristics. The problem that Einstein had with such an imagined situation was that if one of these photons had been kept bouncing between mirrors in a laboratory on earth, and the other one had traveled halfway to the nearest star, when its twin was made to reveal itself as either blue or red, that meant that the distant photon now had to lose its "purple" status too. So whenever it might be investigated, it would necessarily show up, instantaneously, in the opposite state to whatever its twin had revealed.
In trying to show that quantum mechanics was not a complete theory, Einstein started with the theory's prediction that two or more particles that have interacted in the past can appear strongly correlated when their various properties are later measured. He sought to explain this seeming interaction in a classical way, through their common past, and preferably not by some "spooky action at a distance." The argument is worked out in a famous paper, Einstein, Podolsky, and Rosen (1935; abbreviated EPR), setting out what is now called the EPR paradox. Assuming what is now usually called local realism, EPR attempted to show from quantum theory that a particle has both position and momentum simultaneously, while according to the Copenhagen interpretation, only one of those two properties actually exists and only at the moment that it is being measured. EPR concluded that quantum theory is incomplete in that it refuses to consider physical properties which objectively exist in nature. (Einstein, Podolsky, & Rosen 1935 is currently Einstein's most cited publication in physics journals.) In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics." [23] The question of whether entanglement is a real condition is still in dispute.[24] The Bell inequalities are the most powerful challenge to Einstein's claims.
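The force of the Bell-inequality challenge can be seen in a few lines of arithmetic. For two spin-1/2 particles in the singlet state, quantum mechanics predicts the correlation E(a, b) = −cos(a − b) between measurements along directions a and b, while any local realistic theory keeps the CHSH combination |S| ≤ 2. The sketch below (Python; the measurement angles are the standard textbook choices, assumed here rather than taken from the text) shows the quantum prediction exceeding that bound:

```python
import math

# Singlet-state correlation between measurement directions a and b.
def E(a: float, b: float) -> float:
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two detector settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two detector settings

# CHSH combination: local realism requires |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.4f}")  # 2.8284... = 2*sqrt(2) > 2
```

The output, 2√2 ≈ 2.83, is exactly the violation that Aspect-type experiments later measured, which is why local realistic alternatives to quantum mechanics are so tightly constrained.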
Quantum electrodynamics
Main article: Quantum electrodynamics
Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called "electrodynamics" because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge.
Electric charges are the sources of, and create, electric fields. An electric field is a field which exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and even quarks, among others. As a force is exerted, electric charges move, a current flows, and a magnetic field is produced. The changing magnetic field, in turn, causes electric current (moving electrons). The interacting electric and magnetic field is called an electromagnetic field.
The physical description of interacting charged particles, electrical currents, electrical fields, and magnetic fields is called electromagnetism.
In 1928 Paul Dirac produced a relativistic quantum theory of electromagnetism. This was the progenitor to modern quantum electrodynamics, in that it had essential ingredients of the modern theory. However, the problem of unsolvable infinities developed in this relativistic quantum theory. Years later, renormalization solved this problem. Initially viewed as a suspect, provisional procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in QED and other fields of physics. Also, in the late 1940s Feynman's diagrams showed all possible interactions pertaining to a given event. The diagrams showed that the electromagnetic force arises from the exchange of photons between interacting particles.
An example of a prediction of quantum electrodynamics which has been verified experimentally is the Lamb shift. This refers to an effect whereby the quantum nature of the electromagnetic field causes the energy levels in an atom or ion to deviate slightly from what they would otherwise be. As a result, spectral lines may shift or split.
In the 1960s physicists realized that QED broke down at extremely high energies. From this inconsistency the Standard Model of particle physics was developed, which remedied the breakdown of the theory at higher energies. The Standard Model unifies the electromagnetic and weak interactions into one theory. This is called the electroweak theory.
Interpretations
Main article: Interpretation of quantum mechanics
The physical measurements, equations, and predictions pertinent to quantum mechanics are all consistent and hold a very high level of confirmation. However, the question of what these abstract models say about the underlying nature of the real world has received competing answers.
See also
Coherence
Experiments:
Popper's
Stern-Gerlach
Glossary of elementary quantum mechanics
History of quantum mechanics
Heisenberg's entryway to matrix mechanics
Interpretations:
Heisenberg picture
Interaction picture
Many worlds
Schrödinger picture
Sum over histories
Measurement in quantum mechanics
Measurement problem
Orbital:
Atomic
Molecular
Mathematical formulation of quantum mechanics
Philosophy of information
Philosophy of physics
Probability amplitude
Quantum:
Chaos
Computer
Decoherence
Information
Pseudo-telepathy
State
Tunneling
Zeno effect
Reduction criterion
Schrödinger's cat
Standard Model
Virtual particle
Wave packet
Persons important for discovering and elaborating quantum theory:
John Stewart Bell
David Bohm
Satyendra Nath Bose
Hugh Everett
Markus Fierz
Pascual Jordan
Eugene Wigner
Notes
1. ^ Much of the universe on the largest scale also does not conform to classical physics, because of general relativity.
2. ^ The word "quantum" comes from the Latin word for "how much" (as does "quantity"). Something which is "quantized", like the energy of Planck's harmonic oscillators, can only take specific values. For example, in most countries money is effectively quantized, with the "quantum of money" being the lowest-value coin in circulation. "Mechanics" is the branch of science that deals with the action of forces on objects, and so "quantum mechanics" is the part of mechanics that deals with objects for which particular properties are quantized.
3. ^ Actually there can be intensity-dependent effects, but at intensities achievable with non-laser sources these effects are unobservable.
4. ^ Einstein's photoelectric effect equation can be derived and explained without requiring the concept of "photons". That is, the electromagnetic radiation can be treated as a classical electromagnetic wave, as long as the electrons in the material are treated by the laws of quantum mechanics. The results are quantitatively correct for thermal light sources (the sun, incandescent lamps, etc.) both for the rate of electron emission as well as their angular distribution. For more on this point, see [10].
5. ^ The classical model of the atom is called the planetary model, or sometimes the Rutherford model after Ernest Rutherford who proposed it in 1911, based on the Geiger-Marsden gold foil experiment which first demonstrated the existence of the nucleus.
6. ^ In this case, the energy of the electron is the sum of its kinetic and potential energies. The electron has kinetic energy by virtue of its actual motion around the nucleus, and potential energy because of its electromagnetic interaction with the nucleus.
7. ^ The model can be easily modified to account for the emission spectrum of any system consisting of a nucleus and a single electron (that is, ions such as He+ or O7+ which contain only one electron).
8. ^ For a somewhat more sophisticated look at how Heisenberg transitioned from the old quantum theory and classical physics to the new quantum mechanics, see Heisenberg's entryway to matrix mechanics.
References
Bernstein, Jeremy (2005). "Max Born and the quantum theory". American Journal of Physics 73(11).
Beller, Mara (2001). Quantum Dialogue: The Making of a Revolution. University of Chicago Press.
Bohr, Niels (1958). Atomic Physics and Human Knowledge. John Wiley & Sons. ASIN B00005VGVF. ISBN 0486479285. OCLC 530611.
de Broglie, Louis (1953). The Revolution in Physics. Noonday Press. LCCN 53010401.
Einstein, Albert (1934). Essays in Science. Philosophical Library. ISBN 0486470113. LCCN 55003947.
Feigl, Herbert; Brodbeck, May (1953). Readings in the Philosophy of Science. Appleton-Century-Crofts. ISBN 0390304883. LCCN 53006438.
Feynman, Richard P. (1949). "Space-Time Approach to Quantum Electrodynamics". Physical Review 76 (6): 769–789. Bibcode 1949PhRv...76..769F. doi:10.1103/PhysRev.76.769.
Fowler, Michael (1999). The Bohr Atom. University of Virginia.
Heisenberg, Werner (1958). Physics and Philosophy. Harper and Brothers. ISBN 0061305499. LCCN 99010404.
Lakshmibala, S. (2004). "Heisenberg, Matrix Mechanics and the Uncertainty Principle". Resonance, Journal of Science Education 9 (8).
Liboff, Richard L. (1992). Introductory Quantum Mechanics (2nd ed.).
Lindsay, Robert Bruce; Margenau, Henry (1957). Foundations of Physics. Dover. ISBN 0918024188. LCCN 57014416.
McEvoy, J. P.; Zarate, Oscar. Introducing Quantum Theory. ISBN 1-874166-37-4.
Nave, Carl Rod (2005). "Quantum Physics". HyperPhysics. Georgia State University.
Peat, F. David (2002). From Certainty to Uncertainty: The Story of Science and Ideas in the Twenty-First Century. Joseph Henry Press.
Reichenbach, Hans (1944). Philosophic Foundations of Quantum Mechanics. University of California Press. ISBN 0486404595. LCCN a44004471.
Schlipp, Paul Arthur (1949). Albert Einstein: Philosopher-Scientist. Tudor Publishing Company. LCCN 50005340.
Scientific American Reader, 1953.
Sears, Francis Weston (1949). Optics (3rd ed.). Addison-Wesley. ISBN 0195046013. LCCN 51001018.
Shimony, A. (1983). "(title not given in citation)". Foundations of Quantum Mechanics in the Light of New Technology (S. Kamefuchi et al., eds.). Tokyo: Japan Physical Society. p. 225; cited in: Popescu, Sandu; Rohrlich, Daniel (1996). "Action and Passion at a Distance: An Essay in Honor of Professor Abner Shimony". arXiv:quant-ph/9605004 [quant-ph].
Tavel, Morton; Tavel, Judith (illustrations) (2002). Contemporary physics and the limits of knowledge. Rutgers University Press. ISBN 9780813530772.
Van Vleck, J. H. (1928). "The Correspondence Principle in the Statistical Interpretation of Quantum Mechanics". Proc. Nat. Acad. Sci. 14: 179.
Wheeler, John Archibald; Feynman, Richard P. (1949). "Classical Electrodynamics in Terms of Direct Interparticle Action". Reviews of Modern Physics 21 (3): 425–433. Bibcode 1949RvMP...21..425W. doi:10.1103/RevModPhys.21.425.
Wieman, Carl; Perkins, Katherine (2005). "Transforming Physics Education". Physics Today.
Westmoreland; Schumacher, Benjamin (1998). "Quantum Entanglement and the Nonexistence of Superluminal Signals". arXiv:quant-ph/9801014 [quant-ph].
Bronner, Patrick; Strunz, Andreas; Silberhorn, Christine; Meyn, Jan-Peter (2009). "Demonstrating quantum random with single photons". European Journal of Physics 30 (5): 1189–1200. Bibcode 2009EJPh...30.1189B. doi:10.1088/0143-0807/30/5/026.
1. ^ Richard P. Feynman, QED, p. 10
2. ^ Landau, L. D.; E. M. Lifshitz (1996). Statistical Physics (3rd Edition Part 1 ed.). Oxford: Butterworth-Heinemann. ISBN 0521653142.
3. ^ This was published (in German) as Planck, Max (1901). "Ueber das Gesetz der Energieverteilung im Normalspectrum". Ann. Phys. 309 (3): 553–63. Bibcode 1901AnP...309..553P. doi:10.1002/andp.19013090310. English translation: "On the Law of Distribution of Energy in the Normal Spectrum".
4. ^ Francis Weston Sears (1958). Mechanics, Wave Motion, and Heat. Addison-Wesley. p. 537.
5. ^ "The Nobel Prize in Physics 1918". The Nobel Foundation. Retrieved 2009-08-01.
6. ^ Kragh, Helge (1 December 2000). "Max Planck: the reluctant revolutionary". PhysicsWorld.com
7. ^ Einstein, Albert (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt". Annalen der Physik 17: 132–148. Bibcode 1905AnP...322..132E. doi:10.1002/andp.19053220607. Translated into English as On a Heuristic Viewpoint Concerning the Production and Transformation of Light. The term "photon" was introduced in 1926.
8. ^ a b c d e Taylor, J. R.; Zafiratos, C. D.; Dubson, M. A. (2004). Modern Physics for Scientists and Engineers. Prentice Hall. pp. 127–9. ISBN 0135897890.
9. ^ Dicke and Wittke, Introduction to Quantum Mechanics, p. 12
10. ^ NTRS.NASA.gov
11. ^ a b Taylor, J. R.; Zafiratos, C. D.; Dubson, M. A. (2004). Modern Physics for Scientists and Engineers. Prentice Hall. pp. 147–8. ISBN 0135897890.
12. ^ McEvoy, J. P.; Zarate, O. (2004). Introducing Quantum Theory. Totem Books. pp. 70–89, especially p. 89. ISBN 1840465778.
13. ^ World Book Encyclopedia, page 6, 2007.
14. ^ Dicke and Wittke, Introduction to Quantum Mechanics, p. 10f.
15. ^ J. P. McEvoy and Oscar Zarate (2004). Introducing Quantum Theory. Totem Books. p. 110f. ISBN 1-84046-577-8.
16. ^ Aczel, Amir D., Entanglement, p. 51f. (Penguin, 2003) ISBN 0-452-28457
17. ^ J. P. McEvoy and Oscar Zarate (2004). Introducing Quantum Theory. Totem Books. p. 114. ISBN 1-84046-577-8.
18. ^ Heisenberg's Nobel Prize citation
19. ^ W. Moore, Schrödinger: Life and Thought, Cambridge University Press (1989), p. 222.
20. ^ Heisenberg first published his work on the uncertainty principle in the leading German physics journal Zeitschrift für Physik: Heisenberg, W. (1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Z. Phys. 43 (3–4): 172–198. Bibcode 1927ZPhy...43..172H. doi:10.1007/BF01397280.
21. ^ Nobel Prize in Physics presentation speech, 1932
22. ^ a b Linus Pauling, The Nature of the Chemical Bond, p. 47
23. ^ E. Schrödinger, Proceedings of the Cambridge Philosophical Society, 31 (1935), p. 555, says: "When two systems, of which we know the states by their respective representation, enter into a temporary physical interaction due to known forces between them and when after a time of mutual influence the systems separate again, then they can no longer be described as before, viz., by endowing each of them with a representative of its own. I would not call that one but rather the characteristic trait of quantum mechanics."
24. ^ "Quantum Nonlocality and the Possibility of Superluminal Effects", John G. Cramer,npl.washington.edu
Further reading
The following titles, all by working physicists, attempt to communicate quantum theory to lay people, using a minimum of technical apparatus.
Jim Al-Khalili (2003) Quantum: A Guide for the Perplexed. Weidenfeld & Nicolson.
Richard Feynman (1985) QED: The Strange Theory of Light and Matter. Princeton University Press. ISBN 0-691-08388-6
Ford, Kenneth (2005) The Quantum World. Harvard Univ. Press. Includes elementary particle physics.
Ghirardi, GianCarlo (2004) Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra-ket notation can be passed over on a first reading.
Tony Hey and Walters, Patrick (2003) The New Quantum Universe. Cambridge Univ. Press. Includes much about the technologies quantum theory has made possible.
N. David Mermin (1990) “Spooky actions at a distance: mysteries of the QT” in his Boojums all the way through. Cambridge Univ. Press: 110-176. The author is a rare physicist who tries to communicate to philosophers and humanists.
Roland Omnes (1999) Understanding Quantum Mechanics. Princeton Univ. Press.
Victor Stenger (2000) Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpts. 5-8.
Martinus Veltman (2003) Facts and Mysteries in Elementary Particle Physics. World Scientific Publishing Company.
External links
Takada, Kenjiro, Emeritus professor at Kyushu University, "Microscopic World -- Introduction to Quantum Mechanics."
Quantum Theory.
Quantum Mechanics.
The spooky quantum
Planck's original paper on Planck's constant.
Everything you wanted to know about the quantum world. From the New Scientist.
This Quantum World.
The Quantum Exchange (tutorials and open source learning software).
Theoretical Physics wiki
"Uncertainty Principle," a recording of Werner Heisenberg's voice.
Single and double slit interference
Time-Evolution of a Wavepacket in a Square Well An animated demonstration of a wave packet dispersion over time.


Quantum mechanics


For a generally accessible and less technical introduction to the topic, see Introduction to quantum mechanics.


Some trajectories of a harmonic oscillator (a ball attached to a spring) in classical mechanics (A-B) and quantum mechanics (C-H). In quantum mechanics, the position of the ball is represented by a wave (called the wavefunction), with the real part shown in blue and the imaginary part in red. Some of the trajectories, such as C, D, E, and F, are standing waves (or "stationary states"). Each standing-wave frequency is proportional to a possible energy level of the oscillator. This "energy quantization" does not occur in classical physics, where the oscillator can have any energy.
Quantum mechanics, also known as quantum physics or quantum theory, is a branch of physics providing a mathematical description of the dual particle-like and wave-like behavior and interaction of matter and energy. Quantum mechanics describes the time evolution of physical systems via a mathematical structure called the wave function. The wave function encapsulates the probability that the system will be found in a given state at a given time. Quantum mechanics also allows one to calculate the effect on the system of making measurements of properties of the system by defining the effect of those measurements on the wave function. This leads to the well-known uncertainty principle as well as the enduring debate over the role of the experimenter, epitomised in the Schrödinger's Cat thought experiment.
Quantum mechanics differs significantly from classical mechanics in its predictions when the scale of observations becomes comparable to the atomic and sub-atomic scale, the so-called quantum realm. However, many macroscopic properties of systems can only be fully understood and explained with the use of quantum mechanics. Phenomena such as superconductivity, the properties of materials such as semiconductors, and nuclear and chemical reaction mechanisms observed as macroscopic behaviour cannot be explained using classical mechanics.
The term was coined by Max Planck, and derives from the observation that some physical quantities can be changed only by discrete amounts, or quanta, as multiples of the Planck constant, rather than being capable of varying continuously or by any arbitrary amount. For example, the angular momentum, or more generally the action, of an electron bound into an atom or molecule is quantized. Although an unbound electron does not exhibit quantized energy levels, one which is bound in an atomic orbital has quantized values of angular momentum. In the context of quantum mechanics, the wave–particle duality of energy and matter and the uncertainty principle provide a unified view of the behavior of photons, electrons and other atomic-scale objects.
The mathematical formulations of quantum mechanics are abstract. Similarly, the implications are often counter-intuitive in terms of classical physics. The centerpiece of the mathematical formulation is the wavefunction (defined by Schrödinger's wave equation), which describes the probability amplitude of the position and momentum of a particle. Mathematical manipulations of the wavefunction usually involve the bra-ket notation, which requires an understanding of complex numbers and linear functionals. The wavefunction treats the object as a quantum harmonic oscillator and the mathematics is akin to that of acoustic resonance.
Many of the results of quantum mechanics do not have models that are easily visualized in terms of classical mechanics; for instance, the ground state in the quantum mechanical model is a non-zero energy state that is the lowest permitted energy state of a system, rather than a traditional classical system that is thought of as simply being at rest with zero kinetic energy.
Fundamentally, it attempts to explain the peculiar behaviour of matter and energy at the subatomic level—an attempt which has produced more accurate results than classical physics in predicting how individual particles behave. But many unexplained anomalies remain.
Historically, the earliest versions of quantum mechanics were formulated in the first decade of the 20th Century, around the time that atomic theory and the corpuscular theory of light as interpreted by Einstein first came to be widely accepted as scientific fact; these later theories can be viewed as quantum theories of matter and electromagnetic radiation.
Following Schrödinger's breakthrough in deriving his wave equation in the mid-1920s, quantum theory was significantly reformulated away from the old quantum theory, towards the quantum mechanics of Werner Heisenberg, Max Born, Wolfgang Pauli and their associates, becoming a science of probabilities based upon the Copenhagen interpretation of Niels Bohr. By 1930, the reformulated theory had been further unified and formalized by the work of Paul Dirac and John von Neumann, with a greater emphasis placed on measurement, the statistical nature of our knowledge of reality, and philosophical speculations about the role of the observer.
The Copenhagen interpretation quickly became (and remains) the orthodox interpretation. However, due to the absence of conclusive experimental evidence there are also many competing interpretations.
Quantum mechanics has since branched out into almost every aspect of physics, and into other disciplines such as quantum chemistry, quantum electronics, quantum optics and quantum information science. Much 19th century physics has been re-evaluated as the classical limit of quantum mechanics and its more advanced developments in terms of quantum field theory, string theory, and speculative quantum gravity theories.

Contents
1 History
2 Mathematical formulations
3 Interactions with other scientific theories
3.1 Quantum mechanics and classical physics
3.2 Relativity and quantum mechanics
3.3 Attempts at a unified field theory
4 Philosophical implications
5 Applications
6 Examples
6.1 Free particle
6.2 Step potential
6.3 Rectangular potential barrier
6.4 Particle in a box
6.5 Finite potential well
6.6 Harmonic oscillator
7 Notes
8 References
9 Further reading
10 External links
History
Main article: History of quantum mechanics
The history of quantum mechanics dates back to the 1838 discovery of cathode rays by Michael Faraday. This was followed by the 1859 statement of the black body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[1] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta", or "energy elements", precisely matched the observed patterns of black body radiation. According to Planck, each energy element E is proportional to its frequency ν:
E = hν,
where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[2] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material.
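As a rough numerical companion to Planck's relation (Python; the SI constant values and the example frequency are assumptions of this sketch, not given in the text):

```python
# Planck's relation E = h*nu: the energy carried by one quantum of radiation.
H = 6.62607015e-34    # Planck constant in joule-seconds (SI value)
EV = 1.602176634e-19  # joules per electron-volt (SI value)

def photon_energy_ev(frequency_hz: float) -> float:
    """Energy of a single photon of the given frequency, in eV."""
    return H * frequency_hz / EV

# Green light at ~5.5e14 Hz carries about 2.3 eV per photon -- in the range
# needed to eject electrons from some metals, whereas no intensity of much
# lower-frequency light can do so photon by photon.
print(f"{photon_energy_ev(5.5e14):.2f} eV")
```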
The foundations of quantum mechanics were established during the first half of the twentieth century by Niels Bohr, Werner Heisenberg, Max Planck, Louis de Broglie, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Wolfgang Pauli, David Hilbert, and others. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the "Old Quantum Theory". Out of deference to their dual state as particles, light quanta came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing and testing. Thus the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.
The other exemplar that led to quantum mechanics was the study of electromagnetic waves such as light. When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or quanta, Albert Einstein further developed this idea to show that an electromagnetic wave such as light could be described as a particle - later called the photon - with a discrete quantum of energy that was dependent on its frequency.[3] This led to a theory of unity between subatomic particles and electromagnetic waves called wave–particle duality in which particles and waves were neither one nor the other, but had certain properties of both.
While quantum mechanics traditionally described the world of the very small, it is also needed to explain certain recently investigated macroscopic systems such as superconductors and superfluids.
The word quantum derives from Latin, meaning "how great" or "how much".[4] In quantum mechanics, it refers to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and sub-atomic systems which is today called quantum mechanics. It is the underlying mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[5] Some fundamental aspects of the theory are still actively studied.[6]
Quantum mechanics is essential to understand the behavior of systems at atomic length scales and smaller. For example, if classical mechanics governed the workings of an atom, electrons would rapidly travel towards and collide with the nucleus, making stable atoms impossible. However, in the natural world the electrons normally remain in an uncertain, non-deterministic "smeared" (wave–particle wave function) orbital path around or through the nucleus, defying classical electromagnetism.[7]
Quantum mechanics was initially developed to provide a better explanation of the atom, especially the differences in the spectra of light emitted by different isotopes of the same element. The quantum theory of the atom was developed as an explanation for the electron remaining in its orbit, which could not be explained by Newton's laws of motion and Maxwell's laws of classical electromagnetism.
Broadly speaking, quantum mechanics incorporates four classes of phenomena for which classical physics cannot account:
The quantization of certain physical properties
Wave–particle duality
The uncertainty principle
Quantum entanglement
Mathematical formulations
Main article: Mathematical formulations of quantum mechanics
See also: Quantum logic
In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac[8] and John von Neumann,[9] the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors"). Formally, these reside in a complex separable Hilbert space (variously called the "state space" or the "associated Hilbert space" of the system) well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a maximally Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can only attain those discrete eigenvalues.
In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function, also referred to as a state vector in a complex vector space.[10] This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, with arbitrary accuracy. For instance, electrons may be considered to be located somewhere within a region of space, but with their exact positions being unknown. Contours of constant probability, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle given its conjugate momentum.[11]
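For the hydrogen ground state this kind of probability can be computed explicitly. The sketch below (Python with NumPy; it assumes the standard textbook 1s wavefunction ψ = e^(−r/a0)/√(πa0³), in units where the Bohr radius a0 = 1) integrates |ψ|² to estimate the chance of finding the electron within one Bohr radius:

```python
import numpy as np

# Radial probability density for the hydrogen 1s state: 4*pi*r^2*|psi|^2
# reduces to 4*r^2*exp(-2r) in units where a0 = 1.
r = np.linspace(0.0, 1.0, 10_001)
density = 4.0 * r**2 * np.exp(-2.0 * r)

dr = r[1] - r[0]
prob = np.sum((density[:-1] + density[1:]) / 2.0) * dr  # trapezoid rule
exact = 1.0 - 5.0 * np.exp(-2.0)                        # analytic result
print(f"P(r < a0) = {prob:.4f}  (exact {exact:.4f})")   # ~0.3233
```

So even at the most probable distances, the electron is only "in" any given region with some probability, never with certainty.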
According to one interpretation, as the result of a measurement the wave function containing the probability information for a system collapses from a given initial state to a particular eigenstate. The possible results of a measurement are the eigenvalues of the operator representing the observable — which explains the choice of Hermitian operators, for which all the eigenvalues are real. We can find the probability distribution of an observable in a given state by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute.
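A two-level spin system makes this recipe concrete. The sketch below (Python with NumPy; the choice of observable and state is purely illustrative) diagonalizes the Hermitian operator σx, applies the Born rule to a spin-up state, and checks that σx and σz do not commute:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)   # Hermitian observable
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 0], dtype=complex)                 # "spin up" along z

# Spectral decomposition: outcomes are eigenvalues, probabilities are
# |<eigenvector|psi>|^2 (the Born rule).
eigenvalues, eigenvectors = np.linalg.eigh(sigma_x)
for value, vector in zip(eigenvalues, eigenvectors.T):
    p = abs(np.vdot(vector, psi))**2
    print(f"outcome {value:+.0f} with probability {p:.2f}")  # 0.50 each

# Non-commuting operators, the algebraic core of the uncertainty principle:
print(np.allclose(sigma_x @ sigma_z, sigma_z @ sigma_x))     # False
```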
The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wavefunction collapse"; see, for example, the relative state interpretation. The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wavefunctions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.[12] Generally, quantum mechanics does not assign definite values. Instead, it makes predictions using probability distributions; that is, it describes the probability of obtaining possible outcomes from measuring an observable. Often these results are skewed by many causes, such as dense probability clouds[13] or quantum state nuclear attraction.[14][15] Naturally, these probabilities will depend on the quantum state at the "instant" of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable ("eigen" can be translated from German as meaning inherent or characteristic).[16]
In the everyday world, it is natural and intuitive to think of everything (every observable) as being in an eigenstate. Everything appears to have a definite position, a definite momentum, a definite energy, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of a particle's position and momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate pairs); rather, it only provides a probability distribution for where that particle might be found. Therefore, it is helpful to use different words to describe states having uncertain values and states having definite values (eigenstates). Usually, a system will not be in an eigenstate of the observable we are interested in. However, if one measures the observable, the wavefunction will instantaneously be an eigenstate (or generalised eigenstate) of that observable. This process is known as wavefunction collapse, a controversial and much debated process.[17] It involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of collapsing into each of the possible eigenstates. For example, a free particle will usually have a wavefunction that is a wave packet centered around some mean position x0, neither an eigenstate of position nor of momentum. When one measures the position of the particle, it is impossible to predict with certainty the result.[12] It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x.[18]
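One way to picture the collapse postulate numerically is as sampling from |ψ|². The sketch below (Python with NumPy; the grid, packet width, and random seed are arbitrary choices of this illustration) draws a single measurement outcome from a discretized Gaussian wave packet:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10.0, 10.0, 2001)
x0, sigma = 0.0, 1.0
psi = np.exp(-(x - x0)**2 / (4.0 * sigma**2))  # Gaussian amplitude

prob = np.abs(psi)**2
prob /= prob.sum()                             # discretized |psi|^2

result = rng.choice(x, p=prob)                 # one position measurement
print(f"measured x = {result:.3f} (most likely near x0 = {x0})")
```

Repeating the draw many times reproduces the predicted distribution, but any single outcome is irreducibly random; after it, the state is idealized as a sharp packet at the measured position.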
The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates time evolution. The time evolution of wave functions is deterministic in the sense that, given a wavefunction at an initial time, it makes a definite prediction of what the wavefunction will be at any later time.[19]
During a measurement, on the other hand, the change of the wavefunction into another one is not deterministic; it is unpredictable, i.e. random.[20][21] Wave functions can change as time progresses. An equation known as the Schrödinger equation describes how wavefunctions change in time, playing a role similar to that of Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity, like a classical particle with no forces acting on it. However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain. This also has the effect of turning position eigenstates (which can be thought of as infinitely sharp wave packets) into broadened wave packets that are no longer position eigenstates.[22]
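The spreading has a simple closed form for a free Gaussian packet, sketched below (Python; natural units with ħ = m = 1 are an assumption of the illustration, and the formula is the standard textbook result for a free Gaussian wave packet):

```python
import math

def width(t: float, sigma0: float = 1.0, hbar: float = 1.0, m: float = 1.0) -> float:
    """Position uncertainty of a free Gaussian packet after time t:
    sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2)."""
    return sigma0 * math.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2))**2)

for t in (0, 1, 5, 10):
    print(f"t = {t:2d}: sigma = {width(t):.3f}")
```

The width grows without bound, so a state that starts sharply localized does not stay that way.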


Fig. 1: Probability densities corresponding to the wavefunctions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momentum (increasing across from left to right: s, p, d, ...). Brighter areas correspond to higher probability density in a position measurement. Wavefunctions like these are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics and are indeed modes of oscillation as well: they possess a sharp energy and thus a keen frequency. The angular momentum and energy are quantized, and only take on discrete values like those shown (as is the case for resonant frequencies in acoustics).
Some wave functions produce probability distributions that are constant, or independent of time; for example, in a stationary state of constant energy, time drops out of the absolute square of the wave function. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wavefunction surrounding the nucleus (Fig. 1). (Note that only the lowest angular momentum states, labeled s, are spherically symmetric.)[23]
The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the wave-like behavior of quantum states. It turns out that analytic solutions of Schrödinger's equation are only available for a small number of model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the hydrogen molecular ion and the hydrogen atom are the most important representatives. Even the helium atom, which contains just one more electron than hydrogen, defies all attempts at a fully analytic treatment. There exist several techniques for generating approximate solutions. For instance, in the method known as perturbation theory one uses the analytic results for a simple quantum mechanical model to generate results for a more complicated model related to the simple model by, for example, the addition of a weak potential energy. Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces weak deviations from classical behavior. The deviations can be calculated based on the classical motion. This approach is important for the field of quantum chaos.
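A small worked example of the perturbation-theory idea just described (Python with NumPy; the quartic perturbation V = λx⁴, the basis truncation, and the coupling strength are all illustrative assumptions) compares the first-order estimate E₀ + ⟨ψ₀|V|ψ₀⟩ with a full matrix diagonalization:

```python
import numpy as np

N, lam = 40, 0.01                        # basis size and coupling (assumed)
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)         # annihilation operator: a|n> = sqrt(n)|n-1>
x = (a + a.T) / np.sqrt(2.0)             # position operator (hbar = m = omega = 1)

H0 = np.diag(n + 0.5)                    # unperturbed oscillator energies
V = lam * np.linalg.matrix_power(x, 4)   # weak quartic perturbation

E0_first = H0[0, 0] + V[0, 0]            # first order: 0.5 + 3*lam/4
E0_full = np.linalg.eigvalsh(H0 + V)[0]  # "exact" within the truncated basis
print(f"first order: {E0_first:.5f}   full diagonalization: {E0_full:.5f}")
```

For this weak coupling the two answers agree to about four decimal places, which is the sense in which perturbation theory generates results for a more complicated model from a simple one.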
There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the transformation theory proposed by Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics, matrix mechanics (invented by Werner Heisenberg)[24][25] and wave mechanics (invented by Erwin Schrödinger).[26] In this formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[27] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over histories between initial and final states; this is the quantum-mechanical counterpart of action principles in classical mechanics.
Interactions with other scientific theories
The rules of quantum mechanics are fundamental; they assert that the state space of a system is a Hilbert space and that observables of that system are Hermitian operators acting on that space; they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical physics when a system moves to higher energies or, equivalently, larger quantum numbers (i.e. whereas a single particle exhibits a degree of randomness, in systems incorporating millions of particles averaging takes over and, at the high energy limit, the statistical probability of random behaviour approaches zero). In other words, classical mechanics is simply a quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can even start from an established classical model of a particular system, and attempt to guess the underlying quantum model that would give rise to the classical model in the correspondence limit.
Unsolved problems in physics
In the correspondence limit of quantum mechanics: Is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the "superposition of states" and "wavefunction collapse", give rise to the reality we perceive?

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.
Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field rather than a fixed set of particles. The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.
Quantum field theories for the strong nuclear force and the weak nuclear force have been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles: quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory known as electroweak theory, by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. These three men shared the Nobel Prize in Physics in 1979 for this work.[28]
It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity, the most accurate theory of gravity currently known, and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.
Classical mechanics has been extended into the complex domain, and complex classical mechanics exhibits behaviours similar to quantum mechanics.[29]
Quantum mechanics and classical physics
Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems (or a statistical quantum mechanics of a large collection of particles). The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[30] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.
Quantum coherence is an essential difference between classical and quantum theories, and is illustrated by the Einstein-Podolsky-Rosen paradox. Quantum interference involves adding together probability amplitudes, whereas when classical waves interfere there is an adding together of intensities. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[31] Quantum coherence is not typically evident at macroscopic scales, although an exception to this rule can occur at extremely low temperatures, when quantum behavior can manifest itself on more macroscopic scales (see Bose-Einstein condensate and Quantum machine). This is in accordance with the following observations:
Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (which consists of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[32]
While the seemingly exotic behavior of matter posited by quantum mechanics and relativity theory becomes more apparent when dealing with extremely fast-moving or extremely tiny particles, the laws of classical Newtonian physics remain accurate in predicting the behavior of the vast majority of large objects—of the order of the size of large molecules and bigger—at velocities much smaller than the velocity of light.[33]
Relativity and quantum mechanics
Main articles: Quantum gravity and Theory of everything
Even with the defining postulates of both Einstein's theory of general relativity and quantum theory being indisputably supported by rigorous and repeated empirical evidence and while they do not directly contradict each other theoretically (at least with regard to primary claims), they are resistant to being incorporated within one cohesive model.[34]
Einstein himself is well known for rejecting some of the claims of quantum mechanics. While clearly contributing to the field, he did not accept the more philosophical consequences and interpretations of quantum mechanics, such as the lack of deterministic causality and the assertion that a single subatomic particle can occupy numerous areas of space at one time. He also was the first to notice some of the apparently exotic consequences of entanglement and used them to formulate the Einstein-Podolsky-Rosen paradox, in the hope of showing that quantum mechanics had unacceptable implications. This was in 1935, but in 1964 it was shown by John Bell (see Bell inequality) that, although Einstein was correct in identifying seemingly paradoxical implications of quantum mechanical nonlocality, these implications could be experimentally tested. Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have verified quantum entanglement.
According to the paper of J. Bell and the Copenhagen interpretation (the common interpretation of quantum mechanics by physicists since 1927), and contrary to Einstein's ideas, quantum mechanics cannot be at the same time
a "realistic" theory
and a local theory.
The Einstein-Podolsky-Rosen paradox shows in any case that there exist experiments by which one can measure the state of one particle and instantaneously change the state of its entangled partner, although the two particles can be an arbitrary distance apart; however, this effect does not violate causality, since no transfer of information happens. Quantum entanglement is at the basis of quantum cryptography, with high-security commercial applications in banking and government.
Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those applications. However, the lack of a correct theory of quantum gravity is an important issue in cosmology and physicists' search for an elegant "theory of everything". Thus, resolving the inconsistencies between both theories has been a major goal of twentieth- and twenty-first-century physics. Many prominent physicists, including Stephen Hawking, have labored in the attempt to discover a theory underlying everything, combining not only different models of subatomic physics, but also deriving the universe's four forces—the strong force, electromagnetism, the weak force, and gravity—from a single force or phenomenon. While Stephen Hawking was initially a believer in the Theory of Everything, after considering Gödel's incompleteness theorem he concluded that one was not obtainable, and said so publicly in his 2002 lecture "Gödel and the end of physics".[35] One of the leaders in this field is Edward Witten, a theoretical physicist who formulated the groundbreaking M-theory, which is an attempt at describing supersymmetric string theory.
Attempts at a unified field theory
Main article: Grand unified theory
As of 2011, the quest for unifying the fundamental forces through quantum mechanics is still ongoing. There has been a potential breakthrough by Antony Garrett Lisi using the E8 Lie group, though his hypothesis has drawn both criticism and praise from other physicists. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory,[36] has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10^14 GeV the three aforementioned forces are fused into a single unified field.[37] Beyond this "grand unification," it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10^19 GeV. However — and while special relativity is parsimoniously incorporated into quantum electrodynamics — general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory.
Philosophical implications
Main article: Interpretations of quantum mechanics
Since its inception, the many counter-intuitive results of quantum mechanics have provoked strong philosophical debate and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated by society and by leading scientists.
Richard Feynman said, "I think I can safely say that nobody understands quantum mechanics."[38]
The Copenhagen interpretation, due largely to the Danish theoretical physicist Niels Bohr, is the interpretation of the quantum mechanical formalism most widely accepted amongst physicists. According to it, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but must instead be considered a final renunciation of the classical ideal of causality. In this interpretation, it is believed that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations.
Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement (a view paraphrased as "God does not play dice with the universe"). Einstein held that there should be a local hidden variable theory underlying quantum mechanics and that, consequently, the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the Einstein-Podolsky-Rosen paradox. John Bell showed that the EPR paradox led to experimentally testable differences between quantum mechanics and local realistic theories. Experiments have been performed confirming the accuracy of quantum mechanics, thus demonstrating that the physical world cannot be described by local realistic theories.[39] The Bohr-Einstein debates provide a vibrant critique of the Copenhagen interpretation from an epistemological point of view.
The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[40] This is not accomplished by introducing some new axiom to quantum mechanics, but on the contrary by removing the axiom of the collapse of the wave packet: all the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical (not just formally mathematical, as in other interpretations) quantum superposition. Such a superposition of consistent state combinations of different systems is called an entangled state. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can observe only the universe we inhabit, i.e., the consistent state contribution to the superposition mentioned above. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, the parallel universes will never be accessible to us. This inaccessibility can be understood as follows: once a measurement is made, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away towards the other end of the universe; to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was measured originally. This is completely impractical, but even if one could theoretically do so, it would destroy any evidence that the original measurement took place (including the physicist's memory).[citation needed]
Applications
Quantum mechanics has had enormous success in explaining many of the features of our world. The individual behaviour of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others) can often only be satisfactorily described using quantum mechanics. Quantum mechanics has strongly influenced string theory, a candidate for a theory of everything (see reductionism), and the multiverse hypothesis.
Quantum mechanics is important for understanding how individual atoms combine covalently to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. (Relativistic) quantum mechanics can in principle mathematically describe most of chemistry. Quantum mechanics can provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and by approximately how much.[41] Most of the calculations performed in computational chemistry rely on quantum mechanics.[42]
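As a toy illustration of the kind of calculation quantum chemistry automates, the following Python sketch diagonalizes a two-orbital LCAO (Hückel-type) Hamiltonian for a diatomic bond. The parameter values alpha and beta are hypothetical placeholders chosen for illustration, not fitted chemical data:

    import numpy as np

    # Two-orbital LCAO (Hueckel-type) model of a diatomic covalent bond.
    # alpha: on-site orbital energy; beta: resonance (hopping) integral.
    # Both values are illustrative placeholders, not fitted chemical data.
    alpha = -11.0  # eV (assumed)
    beta = -2.5    # eV (assumed)

    H = np.array([[alpha, beta],
                  [beta,  alpha]])

    energies, orbitals = np.linalg.eigh(H)
    print(energies)  # [alpha + beta, alpha - beta]: bonding vs. antibonding
    # Two electrons in the lower (bonding) orbital sit 2*|beta| below two
    # isolated atoms, which is the energetic signature of a covalent bond.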
Figure: A working mechanism of a resonant tunneling diode device, based on the phenomenon of quantum tunneling through potential barriers.
Much of modern technology operates at a scale where quantum effects are significant. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics.
Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to develop quantum cryptography, which will allow guaranteed secure transmission of information. A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances.
Quantum tunneling is vital in many devices, even in the simple light switch, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells.
Quantum mechanics primarily applies to the atomic regimes of matter and energy, but some systems exhibit quantum mechanical effects on a large scale; superfluidity (the frictionless flow of a liquid at temperatures near absolute zero) is one well-known example. Quantum theory also provides accurate descriptions for many previously unexplained phenomena such as black body radiation and the stability of electron orbitals. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[43] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this most fundamental process of the plant kingdom.[44] Even so, classical physics often can be a good approximation to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. (However, some open questions remain in the field of quantum chaos.)
Examples
Free particle
For example, consider a free particle. In quantum mechanics, there is wave-particle duality, so the properties of the particle can be described as the properties of a wave. Therefore, its quantum state can be represented as a wave of arbitrary shape extending over space as a wave function. The position and momentum of the particle are observables. The uncertainty principle states that the position and the momentum cannot both be measured simultaneously with full precision. However, one can measure the position alone of a moving free particle, creating an eigenstate of position with a wavefunction that is very large (a Dirac delta) at a particular position x and zero everywhere else. If one performs a position measurement on such a wavefunction, the result x will be obtained with 100% probability (full certainty). This is called an eigenstate of position (mathematically more precise: a generalized position eigenstate (eigendistribution)). If the particle is in an eigenstate of position, then its momentum is completely unknown. On the other hand, if the particle is in an eigenstate of momentum, then its position is completely unknown.[45] In an eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate.[46]
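As a worked example of the relation wavelength = h/p, the following short calculation gives the de Broglie wavelength of an electron; the 100 V accelerating potential is an arbitrary illustrative choice:

    # de Broglie wavelength lambda = h / p for an electron accelerated
    # through 100 V (an arbitrary illustrative choice), treated
    # non-relativistically, which is safe at this energy.
    h = 6.626e-34    # Planck's constant, J*s
    m_e = 9.109e-31  # electron mass, kg
    e = 1.602e-19    # elementary charge, C

    E_kin = 100 * e                # kinetic energy in joules
    p = (2 * m_e * E_kin) ** 0.5   # momentum from E = p^2 / (2m)
    print(h / p)                   # ~1.2e-10 m, roughly the size of an atom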
Figure: 3D confined electron wave functions for each eigenstate in a quantum dot. Rectangular and triangular quantum dots are shown; energy states in rectangular dots are more 's-type' and 'p-type', while in a triangular dot the wave functions are mixed due to confinement symmetry.
Step potential
Main article: Solution of Schrödinger equation for a step potential
Figure: The step potential with incident and exiting waves shown.
The potential in this case is given by:

$$ V(x) = \begin{cases} 0, & x < 0 \\ V_0, & x \ge 0 \end{cases} $$

The solutions are superpositions of left- and right-moving waves:

$$ \psi_1(x) = \frac{1}{\sqrt{k_1}} \left( A e^{i k_1 x} + B e^{-i k_1 x} \right), \quad x < 0 $$

$$ \psi_2(x) = \frac{1}{\sqrt{k_2}} \left( C e^{i k_2 x} + D e^{-i k_2 x} \right), \quad x > 0, $$

where the wave vectors are related to the energy via

$$ k_1 = \sqrt{2mE}/\hbar \quad \text{and} \quad k_2 = \sqrt{2m(E - V_0)}/\hbar, $$

and the coefficients A, B, C and D are determined from the boundary conditions and by imposing a continuous derivative on the solution.
Each term of the solution can be interpreted as an incident, reflected, or transmitted component of the wave, allowing the calculation of transmission and reflection coefficients. In contrast to classical mechanics, incident particles with energies higher than the potential step are still partially reflected.
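As a sketch of such a calculation, the following snippet evaluates the reflection and transmission coefficients for the case E > V0, using the standard results R = ((k1 - k2)/(k1 + k2))^2 and T = 4*k1*k2/(k1 + k2)^2 that follow from the matching conditions above; units with hbar = m = 1 are an assumption made for simplicity:

    import numpy as np

    # Reflection and transmission at a potential step for E > V0, from the
    # wave vectors k1 and k2 defined above (units with hbar = m = 1, assumed).
    def step_coefficients(E, V0):
        k1 = np.sqrt(2 * E)
        k2 = np.sqrt(2 * (E - V0))
        R = ((k1 - k2) / (k1 + k2)) ** 2
        T = 4 * k1 * k2 / (k1 + k2) ** 2
        return R, T

    R, T = step_coefficients(E=2.0, V0=1.0)
    print(R, T, R + T)  # R > 0 even though E > V0, and R + T = 1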
Rectangular potential barrier
Main article: Rectangular potential barrier
This is a model for the quantum tunneling effect, which has important applications to modern devices such as flash memory and the scanning tunneling microscope.
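For a barrier of height V0 and width L, the textbook transmission probability for E < V0 is T = 1 / (1 + V0^2 sinh^2(kappa*L) / (4E(V0 - E))), with kappa = sqrt(2m(V0 - E))/hbar. The following sketch (again assuming units with hbar = m = 1) shows the roughly exponential suppression of tunneling with barrier width that devices like flash memory exploit:

    import numpy as np

    # Transmission through a rectangular barrier of height V0 and width L
    # for E < V0 (units with hbar = m = 1, assumed).
    def barrier_transmission(E, V0, L):
        kappa = np.sqrt(2 * (V0 - E))  # decay constant inside the barrier
        s = np.sinh(kappa * L)
        return 1.0 / (1.0 + V0**2 * s**2 / (4 * E * (V0 - E)))

    # The tunneling probability is finite but falls off roughly
    # exponentially as the barrier widens.
    for L in (0.5, 1.0, 2.0):
        print(L, barrier_transmission(E=0.5, V0=1.0, L=L))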
Particle in a box
Figure: 1-dimensional potential energy box (or infinite potential well).
Main article: Particle in a box
The particle in a 1-dimensional potential energy box is the simplest example in which restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain region and infinite potential energy everywhere outside that region. For the 1-dimensional case in the x direction, the time-independent Schrödinger equation can be written as:[47]

$$ -\frac{\hbar^2}{2m} \frac{d^2 \psi}{dx^2} = E \psi. $$
Writing the differential operator

$$ \hat{p}_x = -i\hbar \frac{d}{dx}, $$
the previous equation can be seen to be evocative of the classical kinetic energy analogue

$$ \frac{1}{2m} \hat{p}_x^2 = E, $$
with E as the energy for the state ψ, in this case coinciding with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are:

$$ \psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m}, $$
or, from Euler's formula,

$$ \psi(x) = C \sin(kx) + D \cos(kx). $$
The presence of the walls of the box determines the values of C, D, and k. At each wall (x = 0 and x = L), ψ = 0. Thus when x = 0,

$$ \psi(0) = C \sin(0) + D \cos(0) = D, $$
and so D = 0. When x = L,

$$ \psi(L) = C \sin(kL) = 0. $$
C cannot be zero, since this would conflict with the Born interpretation. Therefore sin(kL) = 0, and so it must be that kL is an integer multiple of π. Therefore,

$$ k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots $$
The quantization of energy levels follows from this constraint on k, since

$$ E = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}. $$
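As a worked numerical example of this result, the following snippet evaluates E_n = n^2 h^2 / (8 m L^2) for an electron confined to a box of width 1 nm; the box width is an arbitrary illustrative choice:

    # Particle-in-a-box energy levels E_n = n^2 h^2 / (8 m L^2) for an
    # electron in a 1 nm box (the box width is an illustrative choice).
    h = 6.626e-34    # Planck's constant, J*s
    m_e = 9.109e-31  # electron mass, kg
    e = 1.602e-19    # elementary charge, C (to convert J to eV)
    L = 1e-9         # box width, m

    for n in (1, 2, 3):
        E_n = n**2 * h**2 / (8 * m_e * L**2)
        print(n, E_n / e)  # ~0.38, 1.5, 3.4 eV: levels grow as n^2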
Finite potential well
Main article: Finite potential well
This is a generalization of the infinite potential well problem to potential wells of finite depth.
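Unlike the infinite well, a finite well supports only finitely many bound states, found by solving a transcendental equation. As an illustrative sketch, the following code finds the even-parity bound states from the standard condition z tan(z) = sqrt(z0^2 - z^2), where z = kL/2 and the dimensionless well strength z0 = (L/2) sqrt(2 m V0) / hbar is set here to an arbitrary illustrative value:

    import numpy as np
    from scipy.optimize import brentq

    # Even-parity bound states of a finite square well, from the standard
    # condition z * tan(z) = sqrt(z0^2 - z^2), where z = k*L/2 and
    # z0 = (L/2) * sqrt(2*m*V0) / hbar. z0 = 6.0 is an illustrative value.
    z0 = 6.0

    def f(z):
        return z * np.tan(z) - np.sqrt(z0**2 - z**2)

    # tan(z) sweeps 0 -> infinity on each interval (n*pi, n*pi + pi/2),
    # so each such interval holds at most one root; bracket and solve.
    roots = []
    for n in range(int(z0 / np.pi) + 1):
        lo = n * np.pi + 1e-9
        hi = min(n * np.pi + np.pi / 2, z0) - 1e-9
        if lo < hi and f(lo) * f(hi) < 0:
            roots.append(brentq(f, lo, hi))

    print(roots)  # finitely many bound states, unlike the infinite well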
Harmonic oscillator
Main article: Quantum harmonic oscillator
As in the classical case, the potential for the quantum harmonic oscillator is given by:

$$ V(x) = \frac{1}{2} m \omega^2 x^2. $$
This problem can be solved either by solving the Schrödinger equation directly, which is not trivial, or by using the more elegant ladder method, first proposed by Paul Dirac. The eigenstates are given by:

$$ \psi_n(x) = \frac{1}{\sqrt{2^n \, n!}} \left( \frac{m\omega}{\pi\hbar} \right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}} H_n\!\left( \sqrt{\frac{m\omega}{\hbar}}\, x \right), \qquad n = 0, 1, 2, \ldots, $$
where H_n are the Hermite polynomials:

$$ H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} \left( e^{-x^2} \right), $$
and the corresponding energy levels are

$$ E_n = \hbar\omega \left( n + \frac{1}{2} \right). $$
This is another example which illustrates the quantization of energy for bound states.
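The ladder method mentioned above can also be checked numerically: building the annihilation operator a in a truncated number basis and forming H = a†a + 1/2 reproduces the evenly spaced spectrum. A minimal sketch, assuming units with hbar = omega = m = 1:

    import numpy as np

    # Numerical check of E_n = (n + 1/2) in units hbar = omega = m = 1.
    # The annihilation operator has matrix elements <n-1|a|n> = sqrt(n)
    # in the truncated number basis.
    N = 30
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
    H = a.T @ a + 0.5 * np.eye(N)               # H = a^dagger a + 1/2

    print(np.linalg.eigvalsh(H)[:5])  # [0.5, 1.5, 2.5, 3.5, 4.5]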
Notes
1. ^ J. Mehra and H. Rechenberg, The historical development of quantum theory, Springer-Verlag, 1982.
2. ^ T.S. Kuhn, Black-body theory and the quantum discontinuity 1894-1912, Clarendon Press, Oxford, 1978.
3. ^ A. Einstein, Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt (On a heuristic point of view concerning the production and transformation of light),Annalen der Physik 17 (1905) 132-148 (reprinted in The collected papers of Albert Einstein, John Stachel, editor, Princeton University Press, 1989, Vol. 2, pp. 149-166, in German; see also Einstein's early work on the quantum hypothesis, ibid. pp. 134-148).
4. ^ "Merriam-Webster.com". Merriam-Webster.com. 2010-08-13. Retrieved 2010-10-15.
5. ^ Edwin Thall. "FCCJ.org". Mooni.fccj.org. Retrieved 2010-10-15.
6. ^ Compare the list of conferences presented here [1].
7. ^ Oocities.com
8. ^ P.A.M. Dirac, The Principles of Quantum Mechanics, Clarendon Press, Oxford, 1930.
9. ^ J. von Neumann, Mathematische Grundlagen der Quantenmechanik, Springer, Berlin, 1932 (English translation: Mathematical Foundations of Quantum Mechanics, Princeton University Press, 1955).
10. ^ Greiner, Walter; Müller, Berndt (1994). Quantum Mechanics Symmetries, Second edition. Springer-Verlag. p. 52. ISBN 3-540-58080-8., Chapter 1, p. 52
11. ^ "AIP.org". AIP.org. Retrieved 2010-10-15.
12. ^ a b Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics, Second edition. Jones and Bartlett Publishers, Inc. p. 215.ISBN 0-7637-2470-X., Chapter 8, p. 215
13. ^ probability clouds are approximate, but better than the Bohr model, whereby electron location is given by a probability function, the wave function eigenvalue, such that the probability is the squared modulus of the complex amplitude
14. ^ "Actapress.com". Actapress.com. Retrieved 2010-10-15.
15. ^ Hirshleifer, Jack (2001). The Dark Side of the Force: Economic Foundations of Conflict Theory. Cambridge University Press. p. 265. ISBN 0-521-80412-4.
16. ^ Dict.cc; De.pons.eu
17. ^ "PHY.olemiss.edu". PHY.olemiss.edu. 2010-08-16. Retrieved 2010-10-15.
18. ^ "Farside.ph.utexas.edu". Farside.ph.utexas.edu. Retrieved 2010-10-15.
19. ^ "Reddit.com". Reddit.com. 2009-06-01. Retrieved 2010-10-15.
20. ^ Michael Trott. "Time-Evolution of a Wavepacket in a Square Well — Wolfram Demonstrations Project". Demonstrations.wolfram.com. Retrieved 2010-10-15.
21. ^ Michael Trott. "Time Evolution of a Wavepacket In a Square Well". Demonstrations.wolfram.com. Retrieved 2010-10-15.
22. ^ Mathews, Piravonu Mathews; Venkatesan, K. (1976). A Textbook of Quantum Mechanics. Tata McGraw-Hill. p. 36. ISBN 0-07-096510-2., Chapter 2, p. 36
23. ^ "Wave Functions and the Schrödinger Equation" (PDF). Retrieved 2010-10-15.
24. ^ "Spaceandmotion.com". Spaceandmotion.com. Retrieved 2010-10-15.
25. ^ Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born has been obfuscated. A 2005 biography of Born details his role as the creator of the matrix formulation of quantum mechanics. This was recognized in a paper by Heisenberg, in 1940, honoring Max Planck. See: Nancy Thorndike Greenspan, "The End of the Certain World: The Life and Science of Max Born" (Basic Books, 2005), pp. 124 - 128, and 285 - 286.
26. ^ "IF.uj.edu.pl" (PDF). Retrieved 2010-10-15.
27. ^ "OCW.ssu.edu" (PDF). Retrieved 2010-10-15.
28. ^ "The Nobel Prize in Physics 1979". Nobel Foundation. Retrieved 2010-02-16.
29. ^ Complex Elliptic Pendulum, Carl M. Bender, Daniel W. Hook, Karta Kooner
30. ^ "Scribd.com". Scribd.com. 2008-09-14. Retrieved 2010-10-15.
31. ^ Philsci-archive.pitt.edu[dead link]
32. ^ "Academic.brooklyn.cuny.edu". Academic.brooklyn.cuny.edu. Retrieved 2010-10-15.
33. ^ "Cambridge.org" (PDF). Retrieved 2010-10-15.
34. ^ "There is as yet no logically consistent and complete relativistic quantum field theory.", p. 4.  — V. B. Berestetskii, E. M. Lifshitz, L P Pitaevskii (1971). J. B. Sykes, J. S. Bell (translators). Relativistic Quantum Theory 4, part I. Course of Theoretical Physics (Landau and Lifshitz) ISBN 0080160255
35. ^ http://www.damtp.cam.ac.uk/strings02/dirac/hawking/
36. ^ "Life on the lattice: The most accurate theory we have". Latticeqcd.blogspot.com. 2005-06-03. Retrieved 2010-10-15.
37. ^ Parker, B. (1993). Overcoming some of the problems. pp. 259–279.
38. ^ The Character of Physical Law (1965) Ch. 6; also quoted in The New Quantum Universe (2003) by Tony Hey and Patrick Walters
39. ^ "Plato.stanford.edu". Plato.stanford.edu. 2007-01-26. Retrieved 2010-10-15.
40. ^ "Plato.stanford.edu". Plato.stanford.edu. Retrieved 2010-10-15.
41. ^ Books.google.com. Books.google.com. Retrieved 2010-10-23.
42. ^ "en.wikiboos.org". En.wikibooks.org. Retrieved 2010-10-23.
43. ^ Anderson, Mark (2009-01-13). "Discovermagazine.com". Discovermagazine.com. Retrieved 2010-10-23.
44. ^ "Quantum mechanics boosts photosynthesis". physicsworld.com. Retrieved 2010-10-23.
45. ^ Davies, P. C. W.; Betts, David S. (1984). Quantum Mechanics, Second edition. Chapman and Hall. p. 79. ISBN 0-7487-4446-0., Chapter 6, p. 79
46. ^ Books.Google.com. Books.Google.com. 2007-08-30. Retrieved 2010-10-23.
47. ^ Derivation of particle in a box, chemistry.tidalswan.com
References
The following titles, all by working physicists, attempt to communicate quantum theory to lay people, using a minimum of technical apparatus.
Chester, Marvin (1987) Primer of Quantum Mechanics. John Wiley. ISBN 0-486-42878-8
Richard Feynman, 1985. QED: The Strange Theory of Light and Matter, Princeton University Press. ISBN 0-691-08388-6. Four elementary lectures on quantum electrodynamics and quantum field theory, yet containing many insights for the expert.
Ghirardi, GianCarlo, 2004. Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra-ket notation can be passed over on a first reading.
N. David Mermin, 1990, "Spooky actions at a distance: mysteries of the QT" in his Boojums all the way through. Cambridge University Press: 110-76.
Victor Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpts. 5-8. Includes cosmological and philosophical considerations.
More technical:
Bryce DeWitt, R. Neill Graham, eds., 1973. The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press. ISBN 0-691-08131-X
Dirac, P. A. M. (1930). The Principles of Quantum Mechanics. ISBN 0198520115. The beginning chapters make up a very clear and comprehensible introduction.
Hugh Everett, 1957, "Relative State Formulation of Quantum Mechanics," Reviews of Modern Physics 29: 454-62.
Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (1965). The Feynman Lectures on Physics. 1-3. Addison-Wesley. ISBN 0738200085.
Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 0-13-111892-7. OCLC 40251748. A standard undergraduate text.
Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw Hill.
Hagen Kleinert, 2004. Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. Singapore: World Scientific. Draft of 4th edition.
Gunther Ludwig, 1968. Wave Mechanics. London: Pergamon Press. ISBN 0-08-203204-1
George Mackey (2004). The mathematical foundations of quantum mechanics. Dover Publications. ISBN 0-486-43517-2.
Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G. M. Temmer. North Holland, John Wiley & Sons. Cf. chpt. IV, section III.
Omnès, Roland (1999). Understanding Quantum Mechanics. Princeton University Press. ISBN 0-691-00435-8. OCLC 39849482.
Scerri, Eric R., 2006. The Periodic Table: Its Story and Its Significance. Oxford University Press. Considers the extent to which chemistry and the periodic system have been reduced to quantum mechanics. ISBN 0-19-530573-6
Transnational College of Lex (1996). What is Quantum Mechanics? A Physics Adventure. Language Research Foundation, Boston. ISBN 0-9643504-1-6. OCLC 34661512.
von Neumann, John (1955). Mathematical Foundations of Quantum Mechanics. Princeton University Press. ISBN 0691028931.
Hermann Weyl, 1950. The Theory of Groups and Quantum Mechanics, Dover Publications.
D. Greenberger, K. Hentschel, F. Weinert, eds., 2009. Compendium of quantum physics, Concepts, experiments, history and philosophy, Springer-Verlag, Berlin, Heidelberg.
Further reading
Bernstein, Jeremy (2009). Quantum Leaps. Cambridge, Massachusetts: Belknap Press of Harvard University Press. ISBN 9780674035416.
Bohm, David (1989). Quantum Theory. Dover Publications. ISBN 0-486-65969-0.
Eisberg, Robert; Resnick, Robert (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). Wiley. ISBN 0-471-87373-X.
Liboff, Richard L. (2002). Introductory Quantum Mechanics. Addison-Wesley. ISBN 0-8053-8714-5.
Merzbacher, Eugen (1998). Quantum Mechanics. Wiley, John & Sons, Inc. ISBN 0-471-88702-1.
Sakurai, J. J. (1994). Modern Quantum Mechanics. Addison Wesley. ISBN 0-201-53929-2.
Shankar, R. (1994). Principles of Quantum Mechanics. Springer. ISBN 0-306-44790-8.
External links
A foundation approach to quantum theory that does not rely on wave-particle duality.
The Modern Revolution in Physics - an online textbook.
J. O'Connor and E. F. Robertson: A history of quantum mechanics.
Introduction to Quantum Theory at Quantiki.
Quantum Physics Made Relatively Simple: three video lectures by Hans Bethe
H is for h-bar.
Quantum Mechanics Books Collection: Collection of free books
Course material
Doron Cohen: Lecture notes in Quantum Mechanics (comprehensive, with advanced topics).
MIT OpenCourseWare: Chemistry.
MIT OpenCourseWare: Physics. See 8.04
Stanford Continuing Education PHY 25: Quantum Mechanics by Leonard Susskind, see course description Fall 2007
5½ Examples in Quantum Mechanics
Imperial College Quantum Mechanics Course.
Spark Notes - Quantum Physics.
Quantum Physics Online : interactive introduction to quantum mechanics (RS applets).
Experiments to the foundations of quantum physics with single photons.
Motion Mountain, Volume IV - A modern introduction to quantum theory, with several animations.
AQME: Advancing Quantum Mechanics for Engineers, by T. Barzso, D. Vasileska and G. Klimeck; an online learning resource with simulation tools on nanoHUB
Quantum Mechanics by Martin Plenio
Quantum Mechanics by Richard Fitzpatrick
Online course on Quantum Transport
FAQs
Many-worlds or relative-state interpretation.
Measurement in Quantum mechanics.
Media
Lectures on Quantum Mechanics by Leonard Susskind
Everything you wanted to know about the quantum world — archive of articles from New Scientist.