Embedded Systems Programming, September 2000 (Vol. 13, No. 10)


PROGRAMMER'S TOOLBOX

... keep it so at each stage. As long as Equation 5 is satisfied, x_min has no choice but to lie between x0 and x2.

Next we should decide how to adjust the points and the interval they represent for the subsequent step. When you think about it, the situation is identical to that presented by the new point generated by function gold(). Both gold() and para_fit() generate a new value of x that is supposed to be an improvement over the previous one. The decision-making process doesn't care what logic was used to come up with this new point; it only needs to deal with it.

There is a difference, however. gold_shrink() always knows that the value coming out of gold() will be between x0 and x1. This isn't the case with para_fit(), so additional logic is required. Otherwise, we can write a function completely analogous to gold_shrink(). I've given it the incredibly imaginative name para_shrink().

In writing para_shrink(), I tinkered with alternate ways to deal with the situation in which the output of para_fit() lies to the "right" of x1, rather than the left. The simplest solution seems to be to reflect the entire thing about p1, by swapping the points around. The swap of p0 and p2 is straightforward. To make the rest come out right, we should also swap p (the estimated minimum) and p1.

I haven't shown the equivalent of gold_search() because, unlike the golden section search, we can't count on a parabolic search to work without a little help. Nevertheless, I've written a main program to test the method. That code is shown in Listing 4.

How well does it work? The results of a run on our test case are shown in Figures 2 and 3.
The test function, by the way, is the one we've been using all along, which is:

f(x) = cos(2πx²)    (7)

In Figure 2, I've shown the three values of x for each iteration. As you can see, we have converged in 12 iterations, which is nice. Including the setup and initial call to gold_shrink(), we do a total of 16 function evaluations. Looking at Figure 2, it certainly seems that things are working smoothly, and we are essentially converged by the seventh iteration.

But look again. The situation is shown more clearly in Figure 3, where I've included the error from the known solution, on a logarithmic scale. Here we can see graphically the quadratic method's penchant for hanging on to the end points far longer than it should. The value of x0 seems to stick at one for four steps. Not visible in Figure 2, but plain in Figure 3, is the fact that it sticks again for a full seven steps, right after that. This is the behavior I'd noticed in my earlier, manual experiments with quadratic fits. I don't think I like it.

We can look at the behavior in two ways. If we consider only the value of x1, the middle point of the trio, we find that it has converged by the ninth iteration, equivalent to 13 function evaluations, and very much consistent with the performance of fmin.f for the same problem. But the interval's failure to shrink is bothersome. On the other hand, recall that in the golden section method, it's the difference between x0 and x1 that we use to determine when we've converged, and the best estimate of the minimum is the average of the two. We can't use either of these constructs for the quadratic case, which I suppose is why Brent's method uses a rather tortuous criterion for convergence. I see nothing in Brent's method to prevent this tendency to hang onto wide intervals, and that bothers me.
My position on the matter is simple: I want that interval between x0 and x2 to shrink in a smooth and systematic way, and I'm not going to be satisfied until it does. I think we can make it do just that by forcing an occasional golden section step. Press et al. imply that if the function is well-behaved, Brent's method will drive directly to the minimum using only quadratic steps. Indeed, it does exactly that, which is precisely why it makes me uncomfortable. I'm willing to accept more golden section steps, even if it means more function evaluations, if that's what it takes to force the end points of the range to pull together in a smoother fashion.

I'd like to pursue these issues further this month, but I'm out of both time and space. I think you'll agree, though, that we've made quite a dent in the problem. Next month, we'll consider ways to force occasional golden section steps. We'll also address all the other issues, such as alternate end criteria. See you then. esp

Jack W. Crenshaw is a senior principal design engineer at Alliant Tech Systems Inc. in Clearwater, FL. He did much early work in the space program and has developed numerous analysis and real-time programs. He holds a PhD in physics from Auburn University. Crenshaw enjoys contact and can be reached via e-mail at jcrens@earthlink.net.

Resources
1. Press, William H., et al. Numerical Recipes in C. Cambridge: Cambridge University Press, 1992.
2. Brent, Richard P. Algorithms for Minimization Without Derivatives. Englewood Cliffs, NJ: Prentice-Hall, 1973.
