Freesteel Blog » 2013 » October

Tuesday, October 29th, 2013 at 6:37 pm - - Machining

The new Microsoft C++ compiler was released two weeks ago with all-new code optimization, performance features and other bells and whistles.

You need to keep up with the times and not fall behind. How’s it working for us with the old machining kernel?

Internal error 'j >= 0' failed at freesteel\S2weaveIterators.cpp:345 (#1).
  1: S2weavefciter::ExtendToCell+1035
  2: S2weavefciter::ExtendToCell+67
  3: S2weave::AddValPerCells+813
  4: S2weave::BuildValPerCells+684
  ...
Error: An internal error was generated for toolpath.

Bugger!

This is really old code that hasn’t been looked at for years because it’s 100% reliable.

The problem only appears in the fully optimized release version, so I am reduced to debugging by print statements.

I chased the problem all the way down to the generation of the regular partition, the substance of which can be reduced to this:

int n = 499; 
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back((double)i/n);
cout << "This should be exactly zero: " << (b.back() - 1.0) << endl;

The result was: -1.11022e-16, violating the axiom that z/z = 1.0 for all z != 0.0.

Now, unlike most Computational Geometry programmers around the world, I know that floating point arithmetic is perfectly precise and predictable. It just adheres to its own axioms, which are not the same as those of Real Numbers.

You can choose to “solve” it by assuming that the calculation is approximate. But I have absolutely no truck with allowing any of my algorithms to be globally polluted by an arbitrary epsilon value, such as the SPAresabs deployed in the ACIS kernel, because, tempting though it is, I know how bottomless this can of rotting worms is once opened. The reliance on an epsilon value in computational geometry is a product of group-think and laziness, because the group thinks it has to be this way when it does not. Use of epsilon approximations is as harmful to programming as the goto statement. It would be nice to have sufficient general enlightenment within the computational geometry community to be able to hold this debate. Unfortunately this is not yet the case and, while I am still completely right, I remain in a minority of one.

And it takes exactly this type of dogged attitude to get to the bottom of the issue, instead of falling for the trap of believing that floats are inaccurate.

Rene spotted the problem immediately, though it took me some delving into the disassembly code to be convinced. Multiplication is far faster than division in the CPU (probably because all the partial products can be summed in parallel, rather than going through division’s sequential subtractions), so the new compiler has optimized it, like so:

int n = 499; 
double n_reciprocal = 1.0/n; 
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back(i*n_reciprocal);

This was substantiated by the following line of Python:

>>> print 1-(1/499.0)*499
1.1102230246251565e-16
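
For contrast, straight division really does land exactly on 1.0, since IEEE 754 division is correctly rounded and z/z has the exactly representable answer 1.0:

>>> print 1 - 499.0/499
0.0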

The fix is easy. Do the optimization myself:

double nrecip = 1.0 / n;
b.push_back(0.0);               // pin the first value to exactly 0
for (int i = 1; i < n; i++)
    b.push_back(i*nrecip);      // interior values, where a rounding of one ulp doesn't matter
b.push_back(1.0);               // pin the last value to exactly 1

Problem is, how many other places in the code do I have this?

Oh dear.

Monday, October 28th, 2013 at 10:53 am - - Kayak Dive, Weekends

So that was the usual HSMWorks Denmark run over with. We’d stayed for a week from Thursday to Thursday, across the weekend, when there wasn’t as much going on as there used to be because people now have lives to go to. We got one meal out (manager not allowed to come because he can’t self-authorize), but otherwise had to fend for ourselves.

I played two games of Underwater Rugby with the Amager club, which damn near killed me. The first session was on the Thursday we arrived and I almost threw up during the pre-game training.

I can hold my breath, or I can swim around frantically, but I can’t do both. When it’s time to breathe and you are at the bottom of a three metre pool, it’s bad. You don’t get that problem with underwater hockey, where the game is more to do with being in the right position and flicking the puck from place to place. I gave up trying to keep up.

(more…)

Wednesday, October 16th, 2013 at 6:02 pm - - Whipping

We hit the culture night last Friday on our approximately annual working visit to Copenhagen. With only a program in Danish to guide us, it was a bit hit and miss. Mostly miss. One thing that was a hit was an exhibit in a back room of the design museum where we said hello to lots of different materials.

(more…)

Sunday, October 13th, 2013 at 1:39 pm - - Adaptive

So, we’re having lots of fun with subprocesses: kicking off independently executed processes and transmitting data to and from them through their stdin and stdout pipes.

The problem we had was that the data describing the clearing work in each level was encoded in a big disorganized C++ object with lots of values, pointers, arrays, and junk like that, and I was too lazy/wise to build a whole serialization protocol for it. I decided it was much easier to make a pure Python unpacking (and repacking) of the C++ object and then simply encode and decode it into a stream of bytes with the pickle module.

import subprocess
import cPickle

pyzleveljob = MakePurePythonCopyOfDataZlevelCobject(zleveljob)  # PyWorkZLevel_pickle  yellow
sout = cPickle.dumps(pyzleveljob, cPickle.HIGHEST_PROTOCOL)     # just_pickle  red
p = subprocess.Popen(["python", "-u", "zlevelprocedure.py"], 
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=False)
p.stdin.write(sout)
p.stdin.flush()
p.stdin.close()       # the subprocess starts executing here at popen  blue

That’s obviously the simplified code. The -u option is to ensure that the pipes are treated as pure binary streams at both ends.
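
At the other end of the pipe the child script does the mirror image: unpickle the job from its stdin, do the work, and pickle the result back out through stdout. Here’s a minimal sketch of that end; it isn’t the real zlevelprocedure.py, and DoZlevelClearing and the shape of the result are just stand-ins:

# zlevelprocedure.py -- sketch of the child end of the pipe (names are stand-ins)
import sys
import cPickle

def DoZlevelClearing(job):
    return job                            # stand-in for the real z-level clearing computation

pyzleveljob = cPickle.load(sys.stdin)     # unpickle the job object the parent wrote into our stdin
result = DoZlevelClearing(pyzleveljob)
sys.stdout.write(cPickle.dumps(result, cPickle.HIGHEST_PROTOCOL))
sys.stdout.flush()                        # the parent reads this back and unpickles it with cPickle.loads()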

Here’s what the timing graph looks like, with time going from left to right and seven concurrent process threads.
(more…)

Thursday, October 3rd, 2013 at 6:00 pm - - Machining

This is the follow-on from last week’s A plan to make Python 2.4 multicore. It’s been a disaster, but I’ve proved a point.

In summary, the idea was to run a single-threaded build of Python (compiled with WITH_THREADS undefined) in multiple different Windows threads, in such a way that these totally isolated interpreters could interact with and access the same pool of C++ functions and very large objects.

And it might have been easy if the Python interpreter wasn’t so totally infested with unnecessary global variables — especially in the garbage collector and object allocators.

A sample of the hacks is at: bitbucket.org/goatchurch/python24threadlocal. (Go into the files and click on [DIFF] to see the sorts of changes I was making.)

First the results.

(more…)

Wednesday, October 2nd, 2013 at 11:25 am - - Hang-glide

Went down to the Hidden Earth cavers conference in Monmouth last weekend. After causing much mayhem and ranting regarding lasers and scanning (and that was before I had even trawled through Companies House records) I bagged a Sunday flight on Pandy owing to a rare forecast of easterlies and sunshine. I went to bed early while Becka stayed up forcing everyone, including my designated retrieve driver, to finish the Hilde-schnapps in the car-park at 3am. The retrieve driver, not surprisingly, had a hangover in the morning and needed some persuading to come along. (“The fresh air will clear your head; you won’t be able to concentrate sitting here through all these boring lectures.”)


The best use for a football field — mass camping at a caving conference.

We got away at 10:30am, drove to the landing field (full of sheep), looked at the smaller landing field to be used when the first was full of sheep (surrounded by tall trees), then drove up. The wind was blowing between 22 and 29mph on the edge of the hill at take-off. That at least kept all the paragliders away. But there weren’t any hang-gliders either. Felt a little concerned about this. It’s always best to have a local at a new site to point out what’s safe and where not to go, otherwise you’re just guessing and taking chances. But nothing can actually go wrong until you’ve taken off, so we walked down to fetch the glider.

Just then, like the cavalry arriving, a car carrying three gliders with their friendly pilots turned up, and everything was great. They showed me their preferred take-off and told me how high you need to get before crossing over to the main ridge. Asked about top landing near the take-off, they said everyone has done it once, and then vowed “never again”. And don’t worry about the sheep if you can’t make the smaller landing field.

(more…)

Tuesday, October 1st, 2013 at 6:30 pm - - Machining

This has got to be an old idea. But I don’t know if it’s been done yet.

The issue is that when a ball-nosed cutter of radius r goes over the edge of a part, it spends a long time (between points a and b) in contact with the same point on the corner, grinding it away as it follows the green circular path (if it crosses the corner at right angles; otherwise it’s an elliptical path).

If you want nice crisp knife-edge corners, you aren’t going to get them.

(At least that’s my guess. I don’t actually know as I am not a machinist and don’t have access to a machine on which to conduct experiments.)
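
To put a rough number on that grinding, here’s a back-of-envelope sketch with made-up figures, assuming the simplest case: a horizontal top face meeting a vertical wall at the edge, with the path crossing the edge at right angles and then dropping to follow the wall. The tool centre then swings through a quarter circle of radius r about the corner point, so at feed rate f it rubs on that one point for roughly (pi/2)*r/f.

# rough dwell time of a ball-nose on a 90-degree convex edge (illustrative numbers only)
from math import pi

r = 3.0      # ball-nose radius, mm
f = 1000.0   # feed rate, mm/min
arc = (pi / 2) * r            # quarter-circle swept by the tool centre while rubbing the corner point
dwell = arc / f * 60.0        # seconds spent in contact with that single point
print "contact arc %.2f mm, dwell %.2f seconds" % (arc, dwell)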

I have recently been doing a lot of curve fitting, which gave me an idea.

How about if I detected these outer circular motions and bulged them out slightly by a small factor e? Then the tool would leave the corner at the top surface when it is at point a, and rejoin the corner when it is at point b — and not grind it down smooth on its way over.

If we did this everywhere, would it make the whole part that little bit sharper and nicer?

Who knows? But I’m not going to spend my time programming it for the general case until I see some evidence that it does make a difference. It would be easy enough for a Process Engineer to directly design some special paths that only work on a test corner and do a set of experiments that involve scientifically quantifying the sharpness of the corners that result from this strategy — and then product-test the feature for desirability.

Now, who do I know who works in a large company that ought to be well enough organized to deploy resources to check up on an idea like this? Never mind. I can’t think of one off the top of my head.

There’s an 80% chance that this idea is garbage. It’s a necessary feature of progress that there are many failed experiments along the way, because you don’t know what you are doing. Of course, if you did know what you were doing, you wouldn’t be doing anything new, would you?

If we did do these experiments and they didn’t work out, it would be very important to publish the results accurately so that the next people can either (a) avoid going down this dead end, or (b) spot something in our procedure that we didn’t quite get right.

(Let’s set aside the evil capitalist logic that intends to promote damage and waste external to the organization by ownership control, lying and deliberately withholding technical information.)

I believe I have recognized the hallmark of an innovative system — it’s one that is capable of funding these failures. It might seem efficient to direct all your programmers to work only on “real” projects. But if you do you won’t encounter the sequence of failures, abandoned experiments and learning opportunities that are an utterly essential part of the innovative process. Or worse, you’ll continue to push through on a serious “real” project long after it should have been declared a failure and learnt from.

This critique works on the pharmaceuticals system. New drugs require a great deal of innovation to develop. But we only reward the successes through a government-granted patent monopoly long after the work is fully complete. Who is paying for all the necessary failed drugs along the way? This crucial part of the system is overlooked. It is real work, and it has to be paid for. The official story is that the drugs companies recycle the profits from the successes into future failed experiments. But there’s very little quantified evidence of that.

So here’s to failure. We need more of it. And we need not to be frightened of it.