
Compiler division optimizations

Tuesday, October 29th, 2013 at 6:37 pm

The new Microsoft C++ compiler was released two weeks ago with all-new code optimizations, performance features, and other bells and whistles.

You need to keep up with the times and not fall behind. How's it working out for us with the old machining kernel?

Internal error 'j >= 0' failed at freesteel\S2weaveIterators.cpp:345 (#1).
  1: S2weavefciter::ExtendToCell+1035
  2: S2weavefciter::ExtendToCell+67
  3: S2weave::AddValPerCells+813
  4: S2weave::BuildValPerCells+684
  ...
Error: An internal error was generated for toolpath.

Bugger!

This is really old code that hasn’t been looked at for years because it’s 100% reliable.

The problem only appears in the fully optimized release version, so I am reduced to debugging by print statements.

I chased the problem all the way down to the generation of the regular partition, the substance of which can be reduced to this:

int n = 499; 
vector<double> b;   // note: "vector<double> b();" would declare a function, not a vector
for (int i = 0; i <= n; i++)
    b.push_back((double)i/n);
cout << "This should be exactly zero: " << (b.back() - 1.0) << endl;

The result was: -1.11022e-16, violating the axiom that z/z = 1.0 for all z != 0.0.

Now, unlike most Computational Geometry programmers around the world, I know that floating point arithmetic is perfectly precise and predictable. It just adheres to its own axioms, which are not the same as those of Real Numbers.
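To see what I mean, here is a minimal demonstration (an illustration of mine, not code from the kernel). IEEE 754 division is correctly rounded, so z/z comes out as exactly 1.0 for any finite nonzero z, while multiplying by a separately rounded reciprocal is a different calculation with its own, equally deterministic, result:

#include <iostream>

int main()
{
    double z = 499.0;
    std::cout << std::boolalpha;
    // Division is correctly rounded, so z/z is exactly 1.0:
    std::cout << (z / z == 1.0) << std::endl;          // true
    // 1.0/z rounds the reciprocal once, the multiply rounds again,
    // and the combined result lands just below 1.0:
    std::cout << ((1.0 / z) * z == 1.0) << std::endl;  // false
    return 0;
}

Both answers are reproducible bit for bit on every run. That is the precision and predictability I am talking about.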

You can choose to “solve” it by assuming that the calculation is approximate. But I have absolutely no truck with allowing any of my algorithms to be globally polluted by an arbitrary epsilon value, such as the SPAresabs deployed in the ACIS kernel, because, tempting though it is to deploy, I know how bottomless this can of rotting worms is once opened. The reliance on an epsilon value in computational geometry is a product of group-think and laziness, because the group thinks it has to be this way when it does not. Use of epsilon approximations is as harmful to programming as the goto statement. It would be nice to have sufficient general enlightenment within the computational geometry community to be able to hold this debate. Unfortunately this is not yet the case and, while I am still completely right, I remain in a minority of one.

And it takes exactly this type of dogged attitude to get to the bottom of the issue and not fall for the trap of believing that floats are merely inaccurate.

Rene spotted the problem immediately, though it took me some delving into the disassembly to be convinced. Multiplication is far faster than division in the CPU (probably because the partial products can all be summed in parallel, whereas division proceeds by sequential subtraction), so the new compiler has optimized the division away, like so:

int n = 499; 
double n_reciprocal = 1.0/n; 
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back(i*n_reciprocal);

This was substantiated by the following line of Python:

>>> print 1-(1/499.0)*499
1.1102230246251565e-16
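Printing the same values in hex-float form (a C++11 illustration of my own, not from the kernel) shows exactly where the error sits: the rounded reciprocal, multiplied back up, lands one ulp below 1.0:

#include <iostream>
#include <iomanip>

int main()
{
    std::cout << std::hexfloat;
    std::cout << 1.0 / 499.0 << std::endl;            // the reciprocal, rounded once
    std::cout << (1.0 / 499.0) * 499.0 << std::endl;  // 0x1.fffffffffffffp-1, one ulp below 1.0
    std::cout << 499.0 / 499.0 << std::endl;          // 0x1p+0, exactly 1.0
    return 0;
}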

The fix is easy. Do the optimization myself:

// Pin the endpoints exactly; let the interior points take the rounded values.
double nrecip = 1.0 / n;
b.push_back(0.0);
for (int i = 1; i < n; i++)
    b.push_back(i*nrecip);
b.push_back(1.0);
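If the pattern recurs, one way to avoid patching it piecemeal would be to centralize the construction in a helper. A minimal sketch (the name BuildRegularPartition is mine, not from the Freesteel sources):

#include <vector>

// Builds the regular partition 0, 1/n, 2/n, ..., 1 with both endpoints pinned
// exactly, so the result no longer depends on whether the compiler rewrites
// i/n as i*(1.0/n).
std::vector<double> BuildRegularPartition(int n)
{
    std::vector<double> b;
    b.reserve(n + 1);
    b.push_back(0.0);
    double nrecip = 1.0 / n;
    for (int i = 1; i < n; i++)
        b.push_back(i * nrecip);
    b.push_back(1.0);
    return b;
}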

Problem is, how many other places in the code do I have this?

Oh dear.
