
Reaching a Local Minimum

Is the minimization step important to the final result?

Applying iterative-descent hillclimbing (m) to the results of our random methods not only guaranteed local minima -- a marketing advantage -- it also improved the results of the methods, often to the point where they became statistically indistinguishable.
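The descent step can be sketched as a first-improvement loop; this is an illustrative reconstruction, not the DAO implementation, and `neighbors` is a hypothetical move generator:

```python
def descend(solution, score, neighbors):
    """First-improvement hillclimbing: keep taking any improving
    neighbor until none exists, so the result is a local minimum."""
    improved = True
    while improved:
        improved = False
        for cand in neighbors(solution):
            if score(cand) < score(solution):
                solution = cand
                improved = True
                break
    return solution

# Toy example: minimize x**2 over the integers with moves of +/-1.
local_min = descend(5, lambda x: x * x, lambda x: [x - 1, x + 1])
```

Because the loop only terminates when no neighbor improves the score, the returned solution is a local minimum by construction.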

The following statements regarding the differences between the schemes could only be made before the minimization step took place. With the Fall 1998 data, sa3 was significantly different from sigmet1 at the .01 confidence level. In Fall 1999, sigmet4, sa4, and sa0 were strongly different from sa3 at .01 confidence, and sigmet4 differed from sa2 at .05 confidence. For Fall 2000, sigmet4, sa4, and rldhc were statistically different from sa3 at .01 confidence, and from sa2 at the .05 confidence level.
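Pairwise comparisons like those above can be illustrated with a two-sample permutation test on per-trial scores; the thesis does not state which statistical test was used, so this is a stand-in, and the sample data in the test are invented:

```python
import random
from statistics import mean

def perm_test(a, b, trials=10000, seed=0):
    """Two-sided permutation test on the difference of means; returns
    an estimated p-value for 'a and b are drawn from the same scheme'."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    n = len(a)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        # Count relabelings at least as extreme as the observed split.
        if abs(mean(pooled[:n]) - mean(pooled[n:])) >= observed:
            hits += 1
    return hits / trials
```

A p-value below .01 from such a test would correspond to a difference at the .01 confidence level reported above.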

Most of the differences between the schemes and the table-driven schemes remained after minimization. In Fall 2000, all the schemes except sa3 (i.e., sigmet4, sa4, sa0, rldhc, sa2) differed from sa1 with .01 confidence; sa3 went from showing no apparent difference from the table-driven approaches to differing strongly, with .01 confidence, after minimization. With the Fall 1999 dataset, all the schemes remained strongly different from sa1 and sigmet1. In Fall 1998, all the schemes were initially strongly different from the table-driven schemes, but sa3/m and sa0/m lost this distinction with sigmet1/m. The table-driven approaches were the least affected by the post-round minimization process, and ranked poorly.

sa3 was the most volatile following the minimization step (see page [*]). While there was no change in the rank order of the schemes with the Fall 2000 dataset, in Fall 1999 (Figure 4.3) sa3/m moved ahead of rldhc/m and sa2/m, and for Fall 1998 (Figure 4.4) sa3/m moved ahead of sa0/m.

The disappearance of differences among the schemes after minimization implies that the solution with the lowest score before minimization is not necessarily the one that minimizes to the lowest final result. This realization threw a monkey-wrench into the original idea of performing the minimization step only on the best pre-minimized round. Based on these results, DAO version v1.10 was changed to allow per-round minimization at the expense of increased running time.
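A minimal sketch of per-round minimization, assuming a Metropolis-style annealing driver (the function and parameter names here are hypothetical, not DAO's):

```python
import math
import random

def anneal_per_round_min(init, score, neighbor, descend,
                         rounds=5, moves=200, t0=10.0, cool=0.5, seed=1):
    """Annealing driver with per-round minimization: every round's
    ending solution is driven to a local minimum, and the best
    *minimized* score is kept, since the best pre-minimized round
    need not minimize lowest."""
    rng = random.Random(seed)
    cur, t = init, t0
    best = descend(init, score)
    for _ in range(rounds):
        for _ in range(moves):
            cand = neighbor(cur, rng)
            d = score(cand) - score(cur)
            # Metropolis rule: always accept improvements, sometimes uphill.
            if d <= 0 or rng.random() < math.exp(-d / t):
                cur = cand
        m = descend(cur, score)       # minimize after every round
        if score(m) < score(best):
            best = m
        t *= cool                     # cool the temperature each round
    return best

# Toy problem: minimize x**2 over the integers.
def _descend(x, score):
    while min(score(x - 1), score(x + 1)) < score(x):
        x = x - 1 if score(x - 1) < score(x + 1) else x + 1
    return x

result = anneal_per_round_min(7, lambda x: x * x,
                              lambda x, rng: x + rng.choice([-1, 1]), _descend)
```

The extra `descend` calls are where the increased running time comes from: one full descent per round instead of one per run.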

To detect the situation where an overall lower grade reached during a round might be lost by accepting an uphill move, we added the ability to track the best grade after each accepted improving move within a round, and to verify that it was no better than the final unminimized result. The option of keeping this lowest per-accept score as the global best grade was added in DAO version v1.15.
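The tracking described above amounts to updating a running best inside the acceptance loop; a sketch under assumed names (this is not DAO's code):

```python
import math
import random

def run_round(cur, best, score, neighbor, moves, t, rng):
    """One annealing round that records the lowest grade seen after any
    accepted move, so a low point reached mid-round is not lost when a
    later uphill move is accepted (the v1.15 option)."""
    best_score = score(best)
    for _ in range(moves):
        cand = neighbor(cur, rng)
        d = score(cand) - score(cur)
        if d <= 0 or rng.random() < math.exp(-d / t):
            cur = cand
            if score(cur) < best_score:   # lowest score ever accepted
                best, best_score = cur, score(cur)
    return cur, best

# Toy run on x**2: the round may wander uphill, but `best` keeps the
# lowest point visited.
end, best = run_round(5, 5, lambda x: x * x,
                      lambda x, rng: x + rng.choice([-1, 1]),
                      moves=100, t=5.0, rng=random.Random(0))
```

At the end of the round, `score(best)` is never worse than `score(end)`, which is exactly the comparison against the final unminimized result described above.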

Out of eight trials each across all three datasets, we saw this case occur once with sa3 in the 1998 dataset, and twice with sa4 in the other two datasets. This small sample provided support both for ignoring and for remembering the lowest score ever seen: in one case, minimizing after each round lost the lowest result, which would have minimized even further; in the other, it saved the run, because the ending grade minimized lower than the within-round lowest grade would have. We did not see the case where the lowest round score before minimization was less than the minimized ending score.

The minimization step plays an important role in that it guarantees a local minimum solution. The amount of work performed by the minimization step varies. One might have guessed that a long minimization step implies a poor scheme, but the results show otherwise. For example, sa3 typically takes many iterations to reach a local minimum, while sa0 often finds a local minimum by itself, yet both achieve comparable scores.

elena s ackley 2002-01-20