# EETimes

## Embedded Systems December 2000 Vol13_13

Issue link: http://dc.ee.ubm-us.com/i/71850



### PROGRAMMER'S TOOLBOX

### About error criteria

A moment ago I said that I felt Brent's choice of a relative error criterion in x was unfortunate, because it may lead to division by zero. Only the addition of an arbitrary constant prevents numerical disaster. A far better approach is to use an absolute criterion in the first place. Fortunately, we are in a perfect position to take this approach, because the original search range is given going in. If we can reduce the error in x to, say, one part in a million of the original range, I'd have to say we've pinned down the minimum pretty doggone well.

Remember, because the shape of the function near its minimum is almost always parabolic when looked at with enough "zoom" applied, the trough of the minimum gets more and more shallow as we get closer to it. Therefore, we can't expect this algorithm (or any other) to give the kind of precision we're used to seeing in double-precision computations. The best we can hope for is roughly the square root of the machine precision. On the other hand, we can expect the y-value to be accurate to machine precision, and I'm intending to use both.

Again, I intend to use an absolute measure (or, to be more precise, a measure that is relative, as measured by the full range as opposed to the change from step to step). The only problem is, the full range of values of x is known going in, but those of y = f(x) are not. Nevertheless, we can determine the full range simply by recording the largest and smallest value of y found so far.

Based on these notions, I'm going to define the following error criteria for convergence. Let e be some input value giving the precision desired, as a precision relative to the maximum range. Then define:

    x_range = x2 - x0    (13)

where the x2 and x0 referred to here are the original values; that is, they define the overall range within which we search for a minimum. Likewise, let:

    y_range = big_y - little_y    (14)

where the values of big_y and little_y are dynamic. The parameter big_y will, of necessity, be either the original y0 or y2, depending on which is larger. The starting value for little_y will be the first value of y1, the result after we get into the configuration where Equation 1 holds. As the search proceeds, we will hopefully find points whose heights are lower than the current minimum; that is, we are sliding downhill to the true minimum. As this happens, little_y can only get smaller, and as a result, y_range will increase. Given these two ranges, I can now define what I consider to be a robust criterion for convergence:

    Δx < e(x_range)
    Δy < e(y_range)    (15)

Here, I've been necessarily vague as to what Δx and Δy are. In general, they're the differences between two estimates of the minimum values, but I can't be more specific at this time. We might define Δx, for example, to be the difference between the values given by the two parabolic fits of a five-point fit. We also might define it to be the difference between the results of two successive passes or, as Brent does, we might define it to be the difference between estimates two passes apart. Those are details that remain to be worked out. Either way, Equation 15 defines the condition under which we are ready to accept the estimate as a converged solution. Also, either way, both of the conditions of Equation 15 must be satisfied.

Recall that in the golden section method we stop when the points bracketing the minimum (x0 and x2, in our current notation) are shrunk together to the necessary degree of accuracy. I am not sure yet that this is a reasonable requirement on the parabolic solution. I have the feeling that we cannot reasonably achieve this condition without using golden sections. That is why both Brent and I lean toward comparing successive estimates, rather than the bracketing values.

### The test functions

Until now, I've been mostly using a single function for my testing. This is the now familiar one (Equation 16).
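The square-root-of-machine-precision limit on x is easy to see numerically. Near a parabolic minimum, moving x by d changes y only by a term of order d², so once d falls below roughly √ε (about 1.5 × 10⁻⁸ for doubles) the change in y is smaller than one ulp and the two function values become indistinguishable. A minimal sketch; the test parabola here is my own invention, not one of the article's test functions:

```python
import math
import sys

def f(x):
    # A simple parabola with its minimum value 1.0 at x = 1.0
    # (an invented example, chosen so the arithmetic is exact).
    return 1.0 + (x - 1.0) ** 2

# Machine epsilon for doubles is about 2.2e-16; its square root is
# about 1.49e-8, the best x-resolution we can hope for.
print(math.sqrt(sys.float_info.epsilon))   # about 1.49e-08

# A displacement above sqrt(eps) still changes f(x)...
print(f(1.0 + 1e-6) == f(1.0))   # False: d**2 = 1e-12 is visible in y

# ...but a displacement below sqrt(eps) is invisible in y.
print(f(1.0 + 1e-9) == f(1.0))   # True: d**2 = 1e-18 is below one ulp of 1.0
```

This is exactly why the article settles for √ε accuracy in x while expecting full machine precision in y.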
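The bookkeeping behind Equations 13 through 15 can be sketched directly. The names `x_range`, `y_range`, `big_y`, `little_y`, and `e` follow the text; the closure structure and the sample numbers are my own, invented for illustration:

```python
def make_convergence_test(x0, x2, y0, y2, e):
    """Build the observe/converged pair described in the article.

    x0, x2 bound the original search interval; y0, y2 are the
    function values at its ends; e is the requested precision
    relative to the full ranges.
    """
    x_range = x2 - x0                   # Equation 13: fixed going in
    state = {"big_y": max(y0, y2),      # larger of the original endpoint values
             "little_y": None}          # set from the first interior value y1

    def observe(y):
        # Record the smallest y found so far; as the search slides
        # downhill, little_y can only shrink and y_range can only grow.
        if state["little_y"] is None or y < state["little_y"]:
            state["little_y"] = y

    def converged(dx, dy):
        # Equation 15: BOTH conditions must be satisfied.
        y_range = state["big_y"] - state["little_y"]   # Equation 14
        return abs(dx) < e * x_range and abs(dy) < e * y_range

    return observe, converged

# Invented search trace: interval [0, 4], endpoint values 16 and 4.
observe, converged = make_convergence_test(x0=0.0, x2=4.0, y0=16.0, y2=4.0, e=1e-6)
observe(1.0)      # first interior value y1
observe(0.25)     # a later, lower point: little_y shrinks to 0.25

print(converged(dx=1e-3, dy=1e-3))   # False: dx exceeds e * x_range
print(converged(dx=1e-7, dy=1e-6))   # True: both conditions hold
```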
