# The Genesis Of Genetic Algorithms

Let’s say you have a function and you want to optimize it. In real life, this function can take many forms: choosing the right set of features for your car while keeping the price low, picking the best possible apartment considering all the different factors like location, rent, closeness to stores, etc., making a business plan, and many other things. In fact, we continuously use optimization in our everyday life without even realizing it. The interesting thing to note is that we don’t get the most optimal answer every time. We just look around for a while and stop when we get a good enough answer. More often than not, these answers are sub-optimal, depending largely on the initial point we chose. So how do we get to the best answer? There might be billions of options; do we need to check all of them to get to this global optimum?

## Why do we need genetic algorithms?

Consider the example of finding an apartment for rent. If you live in a big city, there will be a lot of apartments available, and of course you will have certain criteria that you look for in an apartment. Given these constraints, you start searching. Ideally, to get the best possible answer, you would need to check out every single apartment that exists and see which one is the most optimal. Would you actually do that? No. You just start looking somewhere, check 15-20 apartments, and pick the best among them. This apartment more or less matches the set of features you had in mind (maybe location, price, neighborhood, etc.). It might not be the best for the given set of constraints, but you cannot check out all the apartments either, because that consumes time and energy. This is where genetic algorithms come in.

In mathematical optimization, we employ search techniques to look for optimal solutions under the given constraints. These techniques give a local optimum that depends on the initial search point we chose. It’s like deciding to start looking for an apartment in a particular locality. Once we have looked around there, we are supposed to go and check other localities as well. But after checking out a few apartments in this locality, we get tired and just pick the apartment that is the best in this locality. This is called the local optimum. It might be the best in this locality, but it’s not the best overall. Finding the global optimum would be time consuming using traditional search techniques. Genetic algorithms help us find the global optimum in a much better way than these classical search techniques.
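To make the local-optimum trap concrete, here is a small sketch of my own (not from the text): a greedy hill climb on the toy function f(x) = x·sin(x) over [0, 10]. Started near x = 2, it settles on the small local peak near x ≈ 2.03 and never reaches the much higher global peak near x ≈ 7.98; only a lucky starting point finds the global one. This is the "best apartment in this locality" problem.

```python
import math

def f(x):
    # Toy objective to maximize: has a local peak near x ~ 2.03
    # and a much higher global peak near x ~ 7.98 on [0, 10].
    return x * math.sin(x)

def hill_climb(x, step=0.01):
    # Classical local search: greedily move uphill in small steps
    # and stop as soon as neither neighbor improves f.
    while True:
        left, right = x - step, x + step
        best = max((f(left), left), (f(x), x), (f(right), right))[1]
        if best == x:
            return x
        x = best

print(round(hill_climb(2.0), 2))  # stops at the local peak near 2.03
print(round(hill_climb(7.0), 2))  # reaches the global peak near 7.98 only with this lucky start
```

The answer depends entirely on where we started, which is exactly the weakness the next sections address.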

## What are genetic algorithms exactly?

Genetic Algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics. The basic concept of GAs is to simulate the processes in natural systems necessary for evolution, specifically those that follow the principle of survival of the fittest first laid down by Charles Darwin. As such, they represent an intelligent exploitation of a random search within a defined search space to solve a problem.

GAs were introduced in the 1970s by John Holland. Whereas most stochastic search methods operate on a single solution to the problem at hand, genetic algorithms operate on a population of solutions. This makes them less susceptible to getting stuck at local optima than gradient search methods. Getting stuck at a local optimum is equivalent to comparing an apartment to a few others in the neighborhood, realizing it’s the best among them, and concluding that it’s the best one overall! This is obviously not right.

## How does it work?

Since a genetic algorithm traverses the search space using the genotype (the encoded representation of a solution) rather than the phenotype (the solution itself), it is less likely to get stuck at a local high or low. GAs do, however, tend to be computationally expensive. Their strength comes from the implicitly parallel search of the solution space they perform via a population of candidate solutions, which is manipulated over the course of the simulation. Each candidate solution represents a possible behavior of the learner, and based on its overall performance, each candidate is assigned a fitness value. Fitness represents how good a solution is. Basically, a set of candidate solutions is identified and evolved into better solutions based on their fitness.
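The ideas above (a population of candidates, a fitness value, and evolution toward fitter solutions) can be sketched in a few lines. This is a minimal illustration of my own, not the author’s exact formulation: it maximizes the same toy function f(x) = x·sin(x) over [0, 10], with truncation selection, averaging crossover, and Gaussian mutation as deliberately simple stand-ins for the many operator choices real GAs offer.

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def fitness(x):
    # Fitness: how good a candidate solution is (here, the function value).
    return x * math.sin(x)

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    # Start from a random population of candidate solutions in [0, 10].
    population = [random.uniform(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half ("survival of the fittest").
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Crossover: each child blends two parent solutions.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a + b) / 2
            # Mutation: an occasional random nudge keeps the search
            # from collapsing onto a single local optimum.
            if random.random() < mutation_rate:
                child += random.gauss(0, 0.5)
            children.append(min(10.0, max(0.0, child)))
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(round(best, 2))  # the global maximum of x*sin(x) on [0, 10] lies near x = 7.98
```

Because the whole population is spread across the search space, candidates near the global peak out-compete those stuck on the small local peak near x ≈ 2, which a single-point local search started there would never escape.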

## Where are they used?

GAs have been used for problem-solving and for modeling. They have been applied to many scientific and engineering problems, as well as in business and entertainment, including:

• Optimization: Circuit design, video/sound quality optimization, numerical optimization.
• Automatic Programming: GAs have been used to evolve computer programs for specific tasks, and to design other computational structures, for example, cellular automata and sorting networks.
• Machine and robot learning: GAs have been used for many machine learning applications and protein structure prediction. GAs have also been used to design neural networks, to evolve rules for learning classifier systems or symbolic production systems, and to design and control robots.
• Models of social systems: GAs have been used to study evolutionary aspects of social systems, such as the evolution of cooperation, the evolution of communication, and trail-following behavior in ants.
• Economic models: GAs have been used to model processes of innovation, the development of bidding strategies, and the emergence of economic markets.
• Immune system models: GAs have been used to model various aspects of the natural immune system.
• Ecological models: GAs have been used to model ecological phenomena such as biological arms races, host-parasite co-evolution, symbiosis, and resource flow in ecologies.
