Saturday, November 30th, 2013 at 2:56 pm - Cave
Short note on the previous weekend. There was a meeting at Andy Eavis’s house under the Humber Bridge following their great cave chamber laser scanning projects in China over the summer. Somehow I got invited, probably because I say I can do things with CADCAM software, and so forth.
We cycled from Brough behind the BAe factory where they make the Hawk “Trainer” jets. Funny how there’s so many “Trainer” jets being exported. What a laugh. Doesn’t do any harm. They’re not “weapons”. Like those “replica” hand-guns with a complete set of working parts. Or a presidential candidate who says that he smoked, but did not inhale. Far too few people laugh at these official lies to stop them sticking.
Andy is the current president of the International Union of Speleology, an off-shoot of the International Congress of Speleology, the 16th of which I went to in Brno this summer in the Czech Republic. It’s a fantastic bureaucracy, worthy of Kafka, where Andy is named as the coordinator of the Long, Deep and Large Caves Commission, among about a hundred other commissions.
Of course, none of the commissions is in charge of new technology that would be game-changing for cave surveying. Nor do they have anyone who is particularly interested in cave survey software (at the Congress, we had to have our meeting in a cafe round the back). If I was in charge I’d establish a Commission on the Relevance of Commissions and do some long-overdue weeding.
Anyways, we got to see some of the point clouds being rendered by the experts using the open source Cloud Compare software. As standard practice, the point cloud was thinned to make it renderable.
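Thinning can be as simple as keeping one representative point per voxel. Here’s a toy sketch of that idea (this is not what Cloud Compare actually does internally — it has several subsampling modes — and all names here are made up):

```python
def thin_points(points, cell=0.05):
    # keep one representative point per voxel of side `cell`
    seen = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell), int(p[2] // cell))
        if key not in seen:
            seen[key] = p
    return list(seen.values())
```

Crude, but it cuts a dense scan down to something a renderer can chew on while keeping the overall shape.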
Someone there needed to produce a video of a flythrough using all the points, and didn’t know how, as the expensive software they used crashed when given more than 10 million points. What crappy software engineering. I worked with him on a work-around.
I haven’t had time yet to download and install Autodesk ReCap to see if it is up to the job. Probably not. We’re getting hit with trillions of laser scanner points now, and no one with a budget is taking it seriously in the software world yet. (I do, but I don’t have a budget, do I?)
Quite coincidentally, someone sent this youtube video about a company called Euclideon claiming to be able to render unlimited point clouds at a reasonable frame-rate using just the CPU. It’s a great little video. I’ve watched all of it twice.
There’s been a lot of controversy around the claim, which is not surprising for something that’s been worked on for seven years without releasing a product.
I think I may have worked out what’s going on here, after a long sleepless night. If I have it right, the technique does not allow you to zoom; you can only move nearer or further from the view.
I don’t need any more distractions. Maybe I’ll toy with an implementation once we do start getting our own cave scans to play with.
There was a brief window of weather this week which was enough for three divers to get out on Cosmo’s boat on Thursday: Becka, me and someone much more competent than us with a rebreather from Ormskirk.
Although the sea was very calm, the visibility didn’t look good at an inshore site, so we moved off to the Wreck of the Counsellor in deeper water, and dived that.
It was black as midnight down there with barely 0.75m working visibility, so we came up pretty swiftly. Don’t waste risk; only do something dangerous if you’re getting something out of it.
My second tank was only a 10 litre, because my 12 litre was completely empty when I fetched it from the garage. Maybe something leaked. Fortunately this was meant to be a shallower dive, on the Wreck of the Speke, which, I was told, is on its side half buried in the sediment so you can go into its hold along the sea bed.
The anchor must have landed square in front of this opening, because we blundered away from it for quite some distance before we hit the wreck — from the inside!
Not good. According to the video footage, we were lost for 3 minutes, which is a very long time. While we got away with it, our margin for error was down to the width of a prawn’s antenna, because all it would take is a regulator snagged from your mouth by a bit of twisted metal or an exploding o-ring, and then we’d be toast.
There was a dredger at work on the channel, which might have accounted for the visibility.
Back in the harbour all the little boats piled into the lock following the usual delay waiting for tide to rise high enough to be over the sill. Everyone else had been out in the estuary fishing for cod. Becka challenged me to a game of “spot the female”. Not a single one among them. What is it about fishing and women?
Anyhow, that was enough excitement for one week.
Wednesday, November 20th, 2013 at 8:24 pm - Adaptive
How software actually gets developed. I came up with this idea over in Denmark when I heard the usual complaints about linking motions going over the top in a retract motion when they could stay down and traverse all hither and thither around the obstacles to get from one position to the next.
There’s always something. Last year it was the order of the cutting and the lack of multi-core processing. No one is complaining about those anymore. Either we fixed it, or they got bored.
Anyhow, I decided I’d show them why you don’t want that kind of path by implementing a stay-down path no matter how twisted and complicated it gets, so that you can see it’s a lot slower than a straightforward no-nonsense retract motion.
This idea got onto the forum, and it sounds like people want it.
I guess that delays my other attempts to make an improved minimal retract linking where it jumps over from one place to another on a hump, like you get with finish machining linking, only it was going to avoid uncut stock and be able to deviate sideways slightly to miss any upstands. There’s a difference between being the fastest toolpath and looking good.
Getting rid of all the retract linking moves is going to look good — we hope. Otherwise it might turn out to be a mad scribble. Who knows until we try it?
So, this requires an implementation of the well-known A* search algorithm on the weave structure. It’s been a long time since I’ve been into this area of the code. I made about three false starts before getting anything out, but now I know what I am doing. Here’s the first picture I’ve got.
Theoretically, because the search can spread out into every single cell in the weave structure, it will always find a connection no matter how complicated.
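To sketch the idea (on a plain unit grid rather than the actual weave structure; all names here are made up for illustration), a minimal A* looks like this:

```python
import heapq

def astar(start, goal, blocked, width, height):
    # A* over a unit grid; `blocked` is a set of impassable cells
    # (standing in for cells the cutter can't pass through).
    def h(c):
        # manhattan distance: admissible heuristic for 4-way unit moves
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    openq = [(h(start), 0, start)]
    came, gbest = {start: None}, {start: 0}
    while openq:
        _, g, cell = heapq.heappop(openq)
        if cell == goal:
            path = []          # walk the predecessor chain back to start
            while cell is not None:
                path.append(cell)
                cell = came[cell]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nb[0] < width and 0 <= nb[1] < height) or nb in blocked:
                continue
            if g + 1 < gbest.get(nb, float("inf")):
                gbest[nb] = g + 1
                came[nb] = cell
                heapq.heappush(openq, (g + 1 + h(nb), g + 1, nb))
    return None  # search exhausted: no connection exists
```

The real version would weight moves by actual distances and run over the weave’s cells, but the spreading-out behaviour that guarantees a connection is the same.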
Now the trick is to improve it by ripping out sections of the path and replacing them with straighter and smoother motions. Oh, and also test for collisions with uncut stock as well in order to avoid it.
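That rip-and-replace step can be sketched like so, assuming a `clear(a, b)` predicate (hypothetical here) that reports whether the straight segment between two points misses the uncut stock and the upstands:

```python
def shorten(path, clear):
    # Greedily replace runs of cells with a single straight jump:
    # from each point, take the furthest later point we can reach
    # with a clear straight segment.
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not clear(path[i], path[j]):
            j -= 1
        out.append(path[j])   # j == i+1 is always reachable (adjacent cells)
        i = j
    return out
```

One pass of this is enough to turn a cell-by-cell staircase into a few straight motions; smoothing the corners would be a further step.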
Might be slower, but since it now calculates in parallel on separate cores of the computer it will only be a problem if it becomes the bottleneck.
This is parallel program optimization. It’s no longer necessary to make your functions as fast as they can possibly be. They only have to not be the slowest of all the jobs that run in parallel.
It’s like a team race where it only counts when every single member of the team has crossed the finish line. So if you have a couple of really fast sprinters on your team it won’t make any difference.
We could make things even slower by completing this linking function for every possible order of cutting. Rather than, at the end of each pass, picking one of the five next paths to go to according to which one is closer, you build a complete linking motion to each and pick the one that actually has the shortest link, a bit like a quantum wave function collapse.
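As a toy illustration of the difference (the detour penalty and all names here are invented): picking the nearest next path by straight-line distance can choose a candidate whose real stay-down link is much longer.

```python
import math

def euclid(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Stand-in for the full stay-down link computation: straight distance
# plus a made-up detour penalty for crossing a wall at x = 0.
def real_link_length(a, b):
    crosses_wall = (a[0] < 0) != (b[0] < 0)
    return euclid(a, b) + (10.0 if crosses_wall else 0.0)

pos = (-1.0, 0.0)
candidates = [(1.0, 0.0), (-1.0, 4.0)]   # start points of the next passes

# current behaviour: nearest start point wins
greedy = min(candidates, key=lambda c: euclid(pos, c))
# proposed: build the whole linking motion to each and keep the shortest
exact = min(candidates, key=lambda c: real_link_length(pos, c))
```

Here `greedy` jumps across the wall while `exact` stays on the near side, which is the whole point of building every candidate link before collapsing to one.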
A lot’s been going on in the world. It’s out of my hands. I don’t know anything or have any influence over it, so what I say shouldn’t matter. Very sensitive. Yadda yadda. Got to watch out because it’s publicly listed companies and there’s shares and high-stakes gambling and all that money stuff.
Funnily enough, private non-listed companies that do not have any jittery share-holders or regulations to worry about are even more extremely secretive about their businesses. I wonder if they bother to invent positive and vaguely plausible excuses for their secrecy when there is apparently no commercial reason for it.
Secrecy is actually about power in all frames. This is the week that the TPP trade agreement got leaked. The text outlines a vast expansion of no-strings-attached corporate monopolies (aka “Intellectual Property”) in all sectors of the economy — policies that have been soundly and democratically rejected in the past. There’s no reason to keep that stuff secret, except in order to facilitate a plan to ram it through into law in an instant as a shock to the system in an opportune moment.
Not sure why I’m talking about that. I wanted to talk about food.
Wednesday, November 6th, 2013 at 7:25 pm - Machining
Let’s make a set of 50 random points, distributed like so:
import random

N = 50
xs = [ random.gauss(0, 25) for i in range(N) ]
ys = [ random.gauss(0, 11.9) for i in range(N) ]
ys = [ y + x*0.3 for x, y in zip(xs, ys) ]
xm, ym = sum(xs)/N, sum(ys)/N
pts = [ (x-xm, y-ym) for x, y in zip(xs, ys) ]
The displacement by (xm, ym) is so that the average is centred on the origin, to make things simpler. Here’s the picture:
Tuesday, October 29th, 2013 at 6:37 pm - Machining
The new Microsoft C++ compiler was released two weeks ago with all new code optimization, performance features and other bells and whistles.
You need to keep with the times and not fall behind. How’s it working for us with the old machining kernel?
Internal error 'j >= 0' failed at freesteel\S2weaveIterators.cpp:345 (#1).
1: S2weavefciter::ExtendToCell+1035
2: S2weavefciter::ExtendToCell+67
3: S2weeave::AddValPerCells+813
4: S2weave::BuildValPerCells+684
...
Error: An internal error was generated for toolpath.
This is really old code that hasn’t been looked at for years because it’s 100% reliable.
The problem only appears in the fully optimized release version, so I am reduced to debugging by Print statements.
I chased the problem all the way down to the generation of the regular partition, the substance of which can be reduced to this:
int n = 499;
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back((double)i/n);
cout << "This should be exactly zero: " << (b.back() - 1.0) << endl;
The result was: -1.11022e-16, violating the axiom that z/z = 1.0 for all z != 0.0.
Now, unlike most Computational Geometry programmers around the world, I know that floating point arithmetic is perfectly precise and predictable. It just adheres to its own axioms, which are not the same as those of Real Numbers.
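Python floats are the same IEEE-754 doubles, so the axiom can be checked directly: correctly rounded division gives z/z exactly 1.0 for every finite nonzero z, while the reciprocal-multiply version rounds twice and drifts an ulp away from 1.0 for many values of z:

```python
# z/z is exact for every finite nonzero z (correct rounding of an
# exactly representable quotient), whereas (1.0/z)*z rounds twice
always_one = all(z / z == 1.0 for z in range(1, 1000))
drifted = [z for z in range(1, 1000) if (1.0 / z) * z != 1.0]
```

The values in `drifted` are precisely the ones where a compiler's division-to-multiplication rewrite changes the answer.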
You can choose to “solve” it by assuming that the calculation is approximate. But I have absolutely no truck with allowing any of my algorithms to be globally polluted by an arbitrary epsilon value, such as the SPAresabs deployed in the ACIS kernel, because, tempting though it is to deploy, I know how bottomless this can of rotting worms is once opened. The reliance on an epsilon value in computational geometry is a product of group-think and laziness: the group thinks it has to be this way when it does not. Use of epsilon approximations is as harmful to programming as the goto statement. It would be nice to have sufficient general enlightenment within the computational geometry community to be able to hold this debate. Unfortunately this is not yet the case and, while I am still completely right, I remain in a minority of one.
And it takes exactly this type of dogged attitude to get to the bottom of the issue and not fall for the trap that floats are inaccurate.
Rene spotted the problem immediately, though it took me some delving into the disassembly to be convinced. Multiplication is far faster than division in the CPU (probably because the partial products can be summed in parallel, while division is an inherently sequential subtraction), so the new compiler has optimized it, like so:
int n = 499;
double n_reciprocal = 1.0/n;
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back(i*n_reciprocal);
This was substantiated by the following line of Python:
>>> print 1-(1/499.0)*499
1.1102230246251565e-16
The fix is easy. Do the optimization myself:
double nrecip = 1.0 / n;
b.push_back(0);
for (int i = 1; i < n; i++)
    b.push_back(i*nrecip);
b.push_back(1);
Problem is, how many other places in the code do I have this?
So that was the usual HSMWorks Denmark run over with. We’d stayed for a week, Thursday to Thursday, across the weekend, when not as much was going on as there used to be because people now have lives to go to. We got one meal out (the manager wasn’t allowed to come because he can’t self-authorize), but otherwise had to fend for ourselves.
I played two games of Underwater Rugby with the Amager club which damn near killed me. The first session was on the Thursday we arrived and I almost threw up during the pre-game training.
I can hold my breath, or I can swim around frantically, but I can’t do both. When it’s time to breathe and you are at the bottom of a three metre pool it’s bad. You don’t get that problem with underwater hockey, where the game is more to do with being in the right position and flicking the puck from place to place. I gave up trying to keep up.
Wednesday, October 16th, 2013 at 6:02 pm - Whipping
We hit the culture night last Friday on our approximately annual working visit to Copenhagen. With only a program in Danish to guide us, it was a bit hit and miss. Mostly miss. One thing that was a hit was an exhibit in a back room of the design museum where we said hello to lots of different materials.
Sunday, October 13th, 2013 at 1:39 pm - Adaptive
The problem we had was that the data describing the clearing work in each level was encoded in a big disorganized C++ object with lots of values, pointers, arrays, and junk like that, and I was too lazy/wise to build a whole serialization protocol for it. I decided it was much easier to make a pure Python unpacking (and repacking) of the C++ object and then simply encode and decode it into a stream of bytes with the pickle module.
pyzleveljob = MakePurePythonCopyOfDataZlevelCobject(zleveljob)   # PyWorkZLevel_pickle yellow
sout = cPickle.dumps(pyzleveljob, cPickle.HIGHEST_PROTOCOL)      # just_pickle red
p = subprocess.Popen(["python", "-u", "zlevelprocedure.py"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=False)
p.stdin.write(sout)
p.stdin.flush()
p.stdin.close()
# the subprocess starts executing here at popen blue
That’s obviously the simplified code. The -u option is to ensure that the pipes are treated as pure binary streams at both ends.
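Translated into modern Python 3 for illustration (the code above is Python 2.4 with cPickle), the whole parent/child pickle handshake fits in a self-contained toy; the child script, the job structure, and every name here are made up:

```python
import os
import pickle
import subprocess
import sys
import tempfile
import textwrap

# Hypothetical child standing in for zlevelprocedure.py: reads one
# pickled job object from stdin, writes a pickled result to stdout.
CHILD_SRC = textwrap.dedent("""\
    import sys, pickle
    job = pickle.load(sys.stdin.buffer)   # blocks until the parent sends the job
    result = {"npts": sum(len(p) for p in job["paths"])}
    sys.stdout.buffer.write(pickle.dumps(result, pickle.HIGHEST_PROTOCOL))
""")

def run_child(job):
    # write the child script to a temp file so the example is self-contained
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(CHILD_SRC)
        script = f.name
    try:
        p = subprocess.Popen([sys.executable, "-u", script],
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        out, _ = p.communicate(pickle.dumps(job, pickle.HIGHEST_PROTOCOL))
        return pickle.loads(out)
    finally:
        os.unlink(script)

result = run_child({"paths": [[(0, 0), (1, 0)], [(1, 1)]]})
```

The point is that pickle over a binary pipe gives you a serialization protocol for free, so long as both ends agree on the pure-Python shape of the data.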
Here’s what the timing graph looks like, with time going from left to right, with seven concurrent process threads.
This is the follow-on from last week’s A plan to make Python 2.4 multicore. It’s been a disaster, but I’ve proved a point.
In summary, the idea was to run a single-threaded build of Python (compiled with WITH_THREADS undefined) in multiple different Windows threads in such a way that these totally isolated interpreters could interact with and access the same pool of C++ functions and very large objects.
And it might have been easy if the Python interpreter wasn’t so totally infested with unnecessary global variables — especially in the garbage collector and object allocators.
A sample of the hacks is at: bitbucket.org/goatchurch/python24threadlocal. (Go into the files and click on [DIFF] to see the sorts of changes I was making.)
First the results.