

Efficiency Considerations

Several authors (e.g. [MS91,vLA87,AKvL97,EFC98]) concur that a principal risk with simulated annealing is its long running times. The potential for cycles exists for the same reason SA performs so well: its willingness to take uphill moves. Refining neighborhoods has become a standard technique for reducing this random-walk behavior in simulated annealing [Mac99].
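The uphill acceptance that gives SA its power (and its potential for cycles) can be sketched with the standard Metropolis criterion; `accept_move` and its parameters are illustrative names, not part of the DAO:

```python
import math
import random

def accept_move(delta_cost, temperature, rng=random):
    """Metropolis criterion: always accept improving (downhill) moves;
    accept worsening (uphill) moves with probability exp(-delta/T).
    At high temperature nearly everything is accepted; as the
    temperature cools, uphill moves become increasingly rare."""
    if delta_cost <= 0:
        return True
    return rng.random() < math.exp(-delta_cost / temperature)
```

Because uphill moves remain possible at any nonzero temperature, the search can in principle return to a previously visited state, which is the source of the cycling risk noted above.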

Of the DAO's three main processing stages (Table 4.1), our efficiency concerns lie with the middle stage:


Table: DAO Process Flow
1. Read and parse the input data and configuration files to create internal data structures;
2. Optimize student bed assignments with the CPU-intensive SA algorithm on memory-resident data;
3. Output assignments and reports.


Linear-time grading functions and memory-resident data structures keep the inner optimization loop tight and running times manageable. Profiling version v0.83 of the DAO[*] revealed inefficiencies in the age_consensus and music_consensus grading functions, which were subsequently linearized. Available beds and moveable students are accessed directly through indexed arrays, providing $O(1)$ access with no searching or filtering required. For efficiency, the objective function is most often evaluated by applying only the change in cost resulting from a move, reserving a complete recalculation over all students for the phase interval as invariant insurance, as discussed in Section 2.1.2 (page [*]).
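The delta-cost strategy with periodic full recalculation might be sketched as follows; the toy objective (a simple sum over bounded values) and all names are assumptions for illustration, not the DAO's actual interface:

```python
import math
import random

def anneal_round(values, steps, check_interval, temperature, seed=0):
    """Sketch: maintain the objective incrementally via per-move deltas,
    and recompute it from scratch every `check_interval` steps as an
    invariant check ("invariant insurance")."""
    rng = random.Random(seed)
    cost = sum(values)                     # one full O(n) evaluation up front
    for step in range(1, steps + 1):
        i = rng.randrange(len(values))
        new_v = rng.uniform(0, 10)         # toy move: replace one value
        delta = new_v - values[i]          # O(1) change in cost for this move
        # Metropolis acceptance: downhill always, uphill with prob exp(-d/T)
        if delta <= 0 or rng.random() < math.exp(-delta / temperature):
            values[i] = new_v
            cost += delta
        if step % check_interval == 0:
            full = sum(values)             # periodic complete recalculation
            assert abs(cost - full) < 1e-6
            cost = full                    # discard accumulated rounding error
    return cost
```

The incremental update keeps each iteration O(1) in the objective, while the periodic full pass bounds any drift between the running cost and the true cost.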

Given our acceptable running times and results, we opted not to maintain a memory of recently visited points (tabu solutions), which would incur extra overhead to recognize and avoid repeating them as a guard against cycling [HTdW97]. To diversify the search into unexplored neighborhoods, we choose each move randomly. Furthermore, we reset the random seed at the start of each round, and optionally either restart from the same initial configuration each round or continue from the results of the last round. Given the size of our problem and the random move selection, there is little chance of repeatedly visiting the same point followed by the same subsequent point. During the minimization step, the iterated next-descent hillclimber avoids cycles by never accepting lateral moves.
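The cycle-avoidance property of next-descent can be illustrated on a toy minimization problem; the objective (sum of squares) and move set here are assumptions for illustration, not the DAO's:

```python
import random

def next_descent(values, max_sweeps=100, seed=0):
    """Sketch of an iterated next-descent minimizer: take the first
    strictly improving move found in a randomized scan; lateral
    (zero-delta) moves are never accepted.
    Toy objective: sum of squared values; a move decrements one
    positive element toward zero."""
    rng = random.Random(seed)
    cost = sum(v * v for v in values)
    for _ in range(max_sweeps):
        improved = False
        order = list(range(len(values)))
        rng.shuffle(order)                 # randomized scan order
        for i in order:
            trial = values[i] - 1 if values[i] > 0 else values[i]
            delta = trial * trial - values[i] * values[i]
            if delta < 0:                  # strict improvement only
                values[i] = trial
                cost += delta
                improved = True
        if not improved:                   # local minimum reached
            break
    return cost
```

Because every accepted move strictly decreases the cost, no state can ever be revisited, so cycling is impossible by construction.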

As a CPU-bound process, the DAO may attain linear speed-up as processor speeds double roughly every eighteen months. This was evident with the dedicated 350MHz Pentium II and 700MHz Pentium III machines used in our tests: the average processing speed was approximately nine thousand attempts per second on the slower processors and seventeen thousand attempts per second on the faster machine. The post-round minimization steps were about a quarter slower, at 6,700 attempts per second on the 350MHz computers and 12,480 attempts per second on the 700MHz machine. Though we have yet to investigate this, we hypothesize the difference is due to a higher proportion of rejected moves during the minimization step, each requiring the original state to be restored.


elena s ackley 2002-01-20