Monday, May 9, 2011

Opinion: Simulated annealing algorithm, the ignored biological algorithm

This is a personal opinion article, based on my own experience and alienation.

I first encountered this algorithm in the Artificial Intelligence subject of my Master's Degree in Computer Science, in 2005. I liked it so much that I repeated it one more year, only to improve my marks; a FAIL was not enough for me. I know, sometimes my ambition scares the hell out of me too.

This algorithm models a universe where the solution to a problem does not try to be perfect, only good (my bro sometimes says that something good is the best enemy of perfection). The search is highly random at first and, little by little, people (oops, I meant the search for the solution) start behaving as they are supposed to; in the end, it is like any other algorithm that reaches a similar solution in a faster fashion (faster, yes, but not better).

What I really love about this algorithm is not my implementation from 5 years ago; I do not really remember what I solved, and it has not gone down in history. What I loved about this algorithm is how easily it applies to the psychology of human life:

Example 1:

A child, with all that potential, could be anything in life, and, according to chaos theory, little changes make big differences: an opinion, an idea or an image could make a child embrace or leave aside a sport, a hobby, a passion...
Years later, with less of that potential, it is no longer possible for a teenager to be the best in several fields; it is too late for him/her, but he/she can still head for other disciplines.
Finally, the child grows into an adult, and the possibilities lie within a cone that gets narrower and narrower. Of course, changes are always possible: my own mother got a job at 52 in something she had never excelled at. She used to poison my bro and me with her "cuisine"; now she cooks very well in a restaurant.

Example 2:

A project, in its first design phase, is a garden of flowers and fruit, an unexplored world full of possibilities: the childhood.
Just by defining the base language, the initial technology and a framework, most possibilities disappear and others grow strong. Choosing Java, Spring and AOP deployed on Tomcat weakens the possibility of being a portlet and destroys the possibility of being a C++ desktop application. There are still many things to define and create with patterns and algorithms.
A month before delivery, what can we really change?

Those are my arguments for considering this algorithm essential for a global understanding of any creation, especially for engineers who make creation their primary task. Personally, I consider it a biological algorithm, at least until a God, Creator or BigBang algorithm category is created.


==========================================
Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of locating a good approximation to the global optimum of a given function in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities). For certain problems, simulated annealing may be more effective than exhaustive enumeration — provided that the goal is merely to find an acceptably good solution in a fixed amount of time, rather than the best possible solution.

The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.

By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends both on the difference between the corresponding function values and also on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance for "uphill" moves potentially saves the method from becoming stuck at local optima—which are the bane of greedier methods.
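The step described above can be sketched in a few lines of Python. This is a minimal illustration, not any reference implementation: the geometric cooling schedule, the step size, and the bumpy test function are all illustrative assumptions of mine, not part of the description quoted here.

```python
import math
import random

def simulated_annealing(f, x0, t_start=10.0, t_end=1e-3, cooling=0.95,
                        step=1.0, seed=0):
    """Minimize f starting from x0, cooling the temperature geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t_start
    while t > t_end:
        # Pick a random "nearby" candidate solution.
        cand = x + rng.uniform(-step, step)
        fcand = f(cand)
        delta = fcand - fx
        # Downhill moves are always accepted; uphill moves are accepted
        # with probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fcand
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # gradually decrease the temperature
    return best, fbest

# A simple function with many local minima, to show the uphill moves matter.
bumpy = lambda x: x * x + 10 * math.sin(x)
x_min, f_min = simulated_annealing(bumpy, x0=8.0)
```

At high temperature almost every candidate is accepted, so the search wanders; as the temperature drops, the acceptance test behaves more and more like plain greedy descent.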

The method was independently described by Scott Kirkpatrick, C. Daniel Gelatt and Mario P. Vecchi in 1983,[1] and by Vlado Černý in 1985.[2] The method is an adaptation of the Metropolis-Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, invented by M.N. Rosenbluth in a paper by N. Metropolis et al. in 1953.[3]

From Wikipedia.
