
Simulated Annealing Schedules

Simulated Annealing (SA) has advantages and disadvantages compared to other global optimization techniques, such as genetic algorithms, tabu search, and neural networks. Among its advantages are the relative ease of implementation and the ability to provide reasonably good solutions for many combinatorial problems. Though it is a robust technique, its drawbacks include the need for a great deal of computer time across many runs and for carefully chosen tunable parameters [EFC98].

With the existence of uphill moves comes the possibility of cycles and random walks, both of which stymie SA; these are addressed in Section 4.1.6. Like hillclimbers, simulated annealing can fall into a local minimum far from the global optimum solution (akin to premature crystallization) [Rib].

Without attempting a comprehensive comparison of global optimization techniques, we present our approach and findings on the specific problem of improving dorm room assignments using simulated annealing.

Given a neighborhood structure, simulated annealing can be viewed as an algorithm that continuously attempts to transform the current configuration into one of its neighbors [vLA87].

Simulated annealing is a Monte Carlo method that can be modeled mathematically using the theory of finite Markov chains [AKvL97]. By definition, a Markov chain is a sequence of trials in which the probability of the outcome of a given trial depends only on the outcome of the previous trial. In the case of SA, a trial corresponds to a move, and the set of outcomes is given by a finite set of neighboring states. Each move depends only on the outcome of the previous one, so the concept of Markov chains applies. Since the trial number does not affect the transition probabilities (at a fixed temperature), the chain is considered homogeneous [AKvL97].
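
To make the trial/move correspondence concrete, the following Python sketch shows a single SA trial; the `energy` and `random_neighbor` functions are hypothetical placeholders for a problem-specific cost function and neighborhood structure, not code from this thesis.

    import math
    import random

    def sa_trial(state, T, energy, random_neighbor):
        """One Markov-chain trial: propose a neighboring state, then
        accept or reject it with the Metropolis criterion."""
        candidate = random_neighbor(state)          # hypothetical neighbor generator
        delta = energy(candidate) - energy(state)   # change in cost (uphill if > 0)
        # Downhill moves are always accepted; uphill moves are accepted
        # with probability exp(-delta / T), which shrinks as T cools.
        if delta <= 0 or random.random() < math.exp(-delta / T):
            return candidate
        return state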

To govern the convergence of the algorithm, a set of parameters, known as a cooling schedule, is specified by [vLA87]:

* an initial value of the control parameter (the starting temperature $T_0$);
* a decrement function for lowering $T$;
* the length $L$ of each homogeneous Markov chain; and
* a stop criterion (a final value of $T$).

Determining these parameters is one of the challenges of annealing. There is much research on the topic, dealing mostly with heuristic schedules [Pic87,BGNZ96,AKvL97]. Heuristic schedules fall into two main categories: static and dynamic. In a static cooling schedule, the parameters are fixed and cannot be changed during the execution of the algorithm. In a dynamic cooling schedule, the parameters are adapted during execution. The Mathematical Optimization chapter of the Computer Science Education Project [CSE96] provides an introduction to two simple static cooling schedules:

The simplest and most common temperature decrement rule is $T_{k+1} = \alpha T_k$, where $\alpha$ is a constant close to, but smaller than, 1. This exponential cooling scheme (ECS) was first proposed by Kirkpatrick et al. [KGV82] with $\alpha = 0.95$.
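
As an illustration only, here is a minimal Python sketch of this decrement rule; the parameter names are our own.

    def ecs_temperatures(T0, alpha=0.95, n_steps=20):
        """Yield temperatures under exponential cooling: T_{k+1} = alpha * T_k."""
        T = T0
        for _ in range(n_steps):
            yield T
            T *= alpha

With $T_0 = 100$ and $\alpha = 0.95$, the first few temperatures are 100, 95, 90.25, and so on.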

Randelman and Grest [RG86] compared this strategy with a linear cooling scheme (LCS), in which $T$ is reduced every $L$ trials [while avoiding negative temperatures]: $T_{k+1} = T_k + \Delta T$. They found the reductions achieved using the two schemes to be comparable, and also noted that the final value of [the objective function] $f$ was, in general, improved with slower cooling rates, at the expense, of course, of greater computational effort. Finally, they observed that the algorithm performance depended more on the cooling rate, $\Delta T / L$, than on the individual values of $\Delta T$ and $L$.
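
A corresponding Python sketch of the linear scheme, again with our own parameter names; here $\Delta T$ is negative, and the clamp reflects the bracketed caveat about avoiding negative temperatures.

    def lcs_temperatures(T0, delta_T=-1.0, L=100, n_trials=2000):
        """Yield one temperature per trial; T drops by delta_T every L trials."""
        T = T0
        for trial in range(n_trials):
            yield T
            if (trial + 1) % L == 0:
                T = max(T + delta_T, 0.0)  # clamp to avoid negative temperatures

Note that the cooling rate in Randelman and Grest's sense is $\Delta T / L$: halving $\Delta T$ while also halving $L$ leaves the rate, and thus (by their observation) the performance, roughly unchanged.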

Some general guidelines exist for choosing an annealing schedule. For instance, there is a trade-off between large decrements of the control parameter ($T$) and short Markov chain lengths ($L$); usually small decrements of $T$, or a ceiling on $L$ that is polynomial in the problem size, are chosen to avoid long chains. When the Markov chain length is fixed, it may be related to the size of the neighborhoods in the problem instance. Apparently, when a sufficiently long schedule is used, simulated annealing outperforms iterative improvement [vLA87,AKvL97].
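
The trade-off is visible in the shape of a standard SA loop, sketched below in Python: each value of $T$ runs one homogeneous Markov chain of fixed length $L$, and the decrement factor $\alpha$ controls how fast $T$ falls. The `energy` and `random_neighbor` arguments are again hypothetical placeholders.

    import math
    import random

    def anneal(state, energy, random_neighbor,
               T0=10.0, alpha=0.95, L=100, T_min=1e-3):
        """SA with small decrements of T and a fixed-length inner chain."""
        T, best = T0, state
        while T > T_min:
            for _ in range(L):  # one homogeneous Markov chain per temperature
                candidate = random_neighbor(state)
                delta = energy(candidate) - energy(state)
                # Metropolis rule: accept uphill moves with prob. exp(-delta/T)
                if delta <= 0 or random.random() < math.exp(-delta / T):
                    state = candidate
                    if energy(state) < energy(best):
                        best = state
            T *= alpha  # small decrement keeps each chain short
        return best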

Researchers have proposed more elaborate, mostly adaptive, annealing schedules that use statistical measures to modify the control parameters. Perhaps unexpectedly, non-monotonic schedules have been observed to be optimal in some situations [vLA87,AKvL97].
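
One concrete instance of such a statistical rule is the decrement proposed by Aarts and van Laarhoven, in which $T$ falls more slowly when the costs observed during the last chain are widely spread. The Python sketch below is our reading of that rule, with `delta` as its distance parameter; treat it as an assumption-laden illustration rather than a definitive implementation.

    import math
    import statistics

    def adaptive_decrement(T, chain_costs, delta=0.1):
        """Adaptive rule (after Aarts and van Laarhoven):
        T' = T / (1 + T * ln(1 + delta) / (3 * sigma))."""
        sigma = statistics.pstdev(chain_costs)  # spread of costs observed at T
        if sigma == 0:
            return T  # degenerate chain: leave T unchanged (our choice)
        return T / (1.0 + (T * math.log(1.0 + delta)) / (3.0 * sigma))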

