Freesteel Blog » Machining
Tuesday, December 10th, 2013 at 6:40 pm - Adaptive
Of course, I could be going down another blind channel. But for the moment it looks like it is getting closer as I progressively fiddle with the search algorithm and allow it to pick the exit points from each cell and not stick to the mid-point of each edge.
Wednesday, November 20th, 2013 at 8:24 pm - Adaptive
How software actually gets developed. I came up with this idea over in Denmark when I heard the usual complaints about linking motions going over the top in a retract motion when they could stay down and traverse all hither and thither around the obstacles to get from one position to the next.
There’s always something. Last year it was the order of the cutting and the lack of multi-core processing. No one is complaining about those anymore. Either we fixed it, or they got bored.
Anyhow, I decided I’d show them why you don’t want that kind of a path by planning to do a stay-down path no matter how twisted and complicated it is, and then you can see it’s a lot slower than a straightforward no-nonsense retract motion.
This idea got onto the forum, and it sounds like people want it.
I guess that delays my other attempts to make an improved minimal retract linking where it jumps over from one place to another on a hump, like you get with finish machining linking, only it was going to avoid uncut stock and be able to deviate sideways slightly to miss any upstands. There’s a difference between being the fastest toolpath and looking good.
Getting rid of all the retract linking moves is going to look good — we hope. Otherwise it might turn out to be a mad scribble. Who knows until we try it?
So, this requires an implementation of the well-known A* search algorithm on the weave structure. It’s been a long time since I’ve been into this area of the code. I made about three false starts before getting anything out, but now I know what I am doing. Here’s the first picture I’ve got.
Theoretically, because the search can spread out into every single cell in the weave structure, it will always find a connection no matter how complicated.
Now the trick is to improve it by ripping out sections of the path and replacing them with straighter and smoother motions. Oh, and also test for collisions with uncut stock as well in order to avoid it.
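For illustration, here’s a minimal sketch of A* over a plain rectangular grid of cells, which stands in for the weave structure; the function names and grid representation are my own invention, not the production code:

```python
import heapq

def astar(start, goal, blocked, width, height):
    # Best-first search over grid cells: priority = cost so far plus a
    # straight-line (Manhattan) estimate of the cost remaining.
    def h(c):
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start, None)]
    camefrom = {}                      # settled cell -> predecessor
    while frontier:
        f, g, cell, prev = heapq.heappop(frontier)
        if cell in camefrom:           # lazy deletion of stale entries
            continue
        camefrom[cell] = prev
        if cell == goal:               # walk the predecessors back
            path = []
            while cell is not None:
                path.append(cell)
                cell = camefrom[cell]
            return path[::-1]
        x, y = cell
        for nb in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nb[0] < width and 0 <= nb[1] < height \
               and nb not in blocked and nb not in camefrom:
                heapq.heappush(frontier, (g + 1 + h(nb), g + 1, nb, cell))
    return None                        # no connection exists
```

Because the search is allowed to spread into every unblocked cell, it finds a route around any obstacle set that leaves one open, which is the property wanted from the stay-down linking.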
Might be slower, but since it now calculates in parallel on separate cores of the computer it will only be a problem if it becomes the bottleneck.
This is parallel program optimization. It’s no longer necessary to make your functions as fast as they possibly can be. They only have to not be the slowest of all the jobs that run in parallel.
It’s like a team race where it only counts when every single member of the team has crossed the finish line. So if you have a couple of really fast sprinters on your team it won’t make any difference.
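The arithmetic of that is worth spelling out with made-up numbers (the job times below are hypothetical):

```python
# Hypothetical per-core job times in seconds for one pipelined pass.
job_times = [1.2, 0.3, 0.9, 4.0, 0.5, 0.8, 1.1]

serial_time = sum(job_times)    # one core doing everything in sequence
parallel_time = max(job_times)  # with enough cores, you wait for the slowest
speedup = serial_time / parallel_time

# Shaving the fastest job to nothing changes nothing; only shrinking
# the 4.0-second job moves the wall-clock time.
```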
We could make things even slower by completing this linking function for every possible order of cutting. Rather than, at the end of each pass, picking one of the five next paths to go to according to which one is closer, you build a complete linking motion to each and pick the one that actually has the shortest link, a bit like a quantum wave function collapse.
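As a sketch, the two ordering policies differ only in the cost function handed to the same selection step; `build_link_length` below stands for the expensive full link construction and is an assumption of mine, not real code:

```python
def pick_next_greedy(current, candidates, straightline):
    # Conventional policy: pick the next path whose start point is
    # closest as the crow flies, then build a single link to it.
    return min(candidates, key=lambda c: straightline(current, c))

def pick_next_collapsed(current, candidates, build_link_length):
    # "Collapse" policy: build a complete linking motion to every
    # candidate and only then commit to the genuinely shortest link.
    return min(candidates, key=lambda c: build_link_length(current, c))
```

The second policy costs a full linking computation per candidate, which is exactly why it makes things slower, and exactly why it can pick a better order when the crow-flies distance is misleading.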
A lot’s been going on in the world. It’s out of my hands. I don’t know anything or have any influence over it, so what I say shouldn’t matter. Very sensitive. Yadda yadda. Got to watch out because it’s publicly listed companies and there’s shares and high-stakes gambling and all that money stuff.
Funnily enough, private non-listed companies that do not have any jittery share-holders or regulations to worry about are even more extremely secretive about their businesses. I wonder if they bother to invent positive and vaguely plausible excuses for their secrecy when there is apparently no commercial reason for it.
Secrecy is actually about power in all frames. This is the week that the TPP trade agreement got leaked. The text outlines a vast expansion of no-strings-attached corporate monopolies (aka “Intellectual Property”) in all sectors of the economy — policies that have been soundly and democratically rejected in the past. There’s no reason to keep that stuff secret, except in order to facilitate a plan to ram it through into law in an instant as a shock to the system in an opportune moment.
Not sure why I’m talking about that. I wanted to talk about food.
Wednesday, November 6th, 2013 at 7:25 pm - Machining
Let’s make a set of 50 random points, distributed like so:
import random

N = 50
xs = [ random.gauss(0, 25) for i in range(N) ]
ys = [ random.gauss(0, 11.9) for i in range(N) ]
ys = [ y + x*0.3 for x, y in zip(xs, ys) ]
xm, ym = sum(xs)/N, sum(ys)/N
pts = [ (x - xm, y - ym) for x, y in zip(xs, ys) ]
The displacement by (xm, ym) is so that the average is centred on the origin, to keep things simple. Here’s the picture:
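To check that the recentring really does land the centroid on the origin, here is the same construction with a fixed seed added (the seed is my addition, purely for reproducibility):

```python
import random

random.seed(4)  # hypothetical seed, for reproducibility only
N = 50
xs = [ random.gauss(0, 25) for i in range(N) ]
ys = [ random.gauss(0, 11.9) for i in range(N) ]
ys = [ y + x*0.3 for x, y in zip(xs, ys) ]
xm, ym = sum(xs)/N, sum(ys)/N
pts = [ (x - xm, y - ym) for x, y in zip(xs, ys) ]

# The recentred cloud now averages to the origin, up to rounding.
cx = sum(x for x, y in pts)/N
cy = sum(y for x, y in pts)/N
```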
Tuesday, October 29th, 2013 at 6:37 pm - Machining
The new Microsoft C++ compiler was released two weeks ago with all new code optimization, performance features and other bells and whistles.
You need to keep with the times and not fall behind. How’s it working for us with the old machining kernel?
Internal error 'j >= 0' failed at freesteel\S2weaveIterators.cpp:345 (#1).
 1: S2weavefciter::ExtendToCell+1035
 2: S2weavefciter::ExtendToCell+67
 3: S2weave::AddValPerCells+813
 4: S2weave::BuildValPerCells+684
 ...
Error: An internal error was generated for toolpath.
This is really old code that hasn’t been looked at for years because it’s 100% reliable.
The problem only appears in the fully optimized release version, so I am reduced to debugging by Print statements.
I chased the problem all the way down to the generation of the regular partition, the substance of which reduces to this:
int n = 499;
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back((double)i/n);
cout << "This should be exactly zero: " << (b.back() - 1.0) << endl;
The result was: -1.11022e-16, violating the axiom that z/z = 1.0 for all z != 0.0.
Now, unlike most Computational Geometry programmers around the world, I know that floating point arithmetic is perfectly precise and predictable. It just adheres to its own axioms, which are not the same as those of Real Numbers.
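A quick Python illustration of that predictability: division is a single correctly-rounded IEEE 754 operation, so z/z really is exactly 1.0 for any finite nonzero z, while multiplying by a precomputed reciprocal rounds twice and need not be:

```python
# Division is correctly rounded, so z/z is exactly 1.0 for every
# finite nonzero double z, tiny or huge:
checks = [ z/z == 1.0 for z in (499.0, 0.1, 1e-300, 7e155, -2.5) ]

# Replacing the division with a multiplication by the reciprocal
# rounds twice, and the product can land one ulp away from 1.0:
compensated = (1.0/499)*499
```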
You can choose to “solve” it by assuming that the calculation is approximate. But I have absolutely no truck with allowing any of my algorithms to be globally polluted by an arbitrary epsilon value, such as the SPAresabs deployed in the ACIS kernel, because, tempting though it is to deploy, I know how bottomless this can of rotting worms is once opened. The reliance on an epsilon value in computational geometry is a product of group-think and laziness: the group thinks it has to be this way when it does not. Use of epsilon approximations is as harmful to programming as the goto statement. It would be nice to have sufficient general enlightenment within the computational geometry community to be able to hold this debate. Unfortunately this is not yet the case and, while I am still completely right, I remain in a minority of one.
And it takes exactly this type of dogged attitude to get to the bottom of the issue and not fall for the trap that floats are inaccurate.
Rene spotted the problem immediately, though it took me some delving into the disassembly code to be convinced. Multiplication is far faster than division in the CPU (probably because the partial products can be summed in parallel, while division is an inherently sequential subtraction), so the new compiler has optimized the division away, like so:
int n = 499;
double n_reciprocal = 1.0/n;
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back(i*n_reciprocal);
This was substantiated by the following line of Python:
>>> print 1-(1.0/499)*499
1.1102230246251565e-16
The fix is easy. Do the optimization myself:
double nrecip = 1.0 / n;
b.push_back(0);
for (int i = 1; i < n; i++)
    b.push_back(i*nrecip);
b.push_back(1);
Problem is, how many other places in the code do I have this?
Sunday, October 13th, 2013 at 1:39 pm - Adaptive
The problem we had was that the data describing the clearing work in each level was encoded in a big disorganized C++ object with lots of values, pointers, arrays, and junk like that, and I was too lazy/wise to build a whole serialization protocol for it. I decided it was much easier to make a pure Python unpacking (and repacking) of the C++ object and then simply encode and decode it into a stream of bytes with the pickle module.
pyzleveljob = MakePurePythonCopyOfDataZlevelCobject(zleveljob)  # PyWorkZLevel_pickle yellow
sout = cPickle.dumps(pyzleveljob, cPickle.HIGHEST_PROTOCOL)     # just_pickle red
p = subprocess.Popen(["python", "-u", "zlevelprocedure.py"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=False)
p.stdin.write(sout)
p.stdin.flush()
p.stdin.close()  # the subprocess starts executing here at popen blue
That’s obviously the simplified code. The -u option is to ensure that the pipes are treated as pure binary streams at both ends.
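The byte-stream round trip at the heart of it can be sketched in isolation (the job data below is a made-up stand-in, and the modern `pickle` module stands in for the `cPickle` of Python 2.4):

```python
import io
import pickle  # the original used cPickle under Python 2.4

# Made-up stand-in for the z-level job data: any picklable structure.
pyzleveljob = { "zlevel": 3.5, "passes": [ [(0.0, 0.0), (10.0, 0.0)] ] }

# Parent side: what gets written down the subprocess's binary stdin pipe.
sout = pickle.dumps(pyzleveljob, pickle.HIGHEST_PROTOCOL)

# Child side: zlevelprocedure.py would do this load on its own stdin.
received = pickle.load(io.BytesIO(sout))
```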
Here’s what the timing graph looks like, with time going from left to right, with seven concurrent process threads.
This is the follow-on from last week’s A plan to make Python 2.4 multicore. It’s been a disaster, but I’ve proved a point.
In summary, the idea was to run a single-threaded build of Python (compiled with WITH_THREADS undefined) in multiple different Windows threads, so that these totally isolated interpreters could interact with and access the same pool of C++ functions and very large objects.
And it might have been easy if the Python interpreter wasn’t so totally infested with unnecessary global variables — especially in the garbage collector and object allocators.
A sample of the hacks is at: bitbucket.org/goatchurch/python24threadlocal. (Go into the files and click on [DIFF] to see the sorts of changes I was making.)
First the results.
This has got to be an old idea. But I don’t know if it’s been done yet.
The issue is when a ball-nosed cutter of radius r goes over the edge of a part it spends a long time (between points a and b) in contact with the same point on the corner, grinding it away as it follows the green circular path (if it crosses the corner at right angles; otherwise it’s an elliptical path).
If you wanted nice crisp knife-edge corners, you aren’t going to get them.
(At least that’s my guess. I don’t actually know as I am not a machinist and don’t have access to a machine on which to conduct experiments.)
I have recently been doing a lot of curve fitting, which gave me an idea.
How about if I detected these outer circular motions and bulged them out slightly by a small factor e. Then the tool would leave the corner at the top surface when it is at point a, and rejoin the corner when it is at point b — and not grind it down smooth on its way over.
If we did this everywhere, would it make the whole part that little bit sharper and nicer?
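As a sketch of the geometry (my own toy construction, not anything tested on a machine): sample the contact arc of radius r about the corner point and taper the bulge with a sine, so the path still touches the corner exactly at a and b but stands off by the factor e in between:

```python
import math

def bulged_corner_arc(corner, r, e, ang_a, ang_b, nsegs=16):
    # Sample the circular contact path (radius r about the corner) that
    # a ball-nosed cutter follows over an edge, and bulge it outwards by
    # up to the small factor e.  The sine taper keeps the path touching
    # the corner exactly at the endpoint angles ang_a and ang_b, so the
    # tool leaves the surface at a and rejoins it at b instead of
    # grinding the corner down on the way over.
    pts = []
    for i in range(nsegs + 1):
        t = ang_a + (ang_b - ang_a)*i/nsegs
        radius = r*(1.0 + e*math.sin(math.pi*i/nsegs))
        pts.append((corner[0] + radius*math.cos(t),
                    corner[1] + radius*math.sin(t)))
    return pts
```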
Who knows? But I’m not going to spend my time programming it for the general case until I see some evidence that it does make a difference. It would be easy enough for a Process Engineer to design some special paths that work only on a test corner, run a set of experiments that scientifically quantify the sharpness of the corners resulting from this strategy, and then product-test the feature for desirability.
Now, who do I know who works in a large company that ought to be well enough organized to deploy resources to check up on an idea like this? Never mind. I can’t think of one off the top of my head.
There’s an 80% chance that this idea is garbage. It’s a necessary feature of progress that there are many failed experiments along the way, because you don’t know what you are doing. Of course, if you did know what you were doing, you wouldn’t be doing anything new, would you?
If we did do these experiments and they didn’t work out, it would be very important to publish the results accurately so that the next people can either (a) avoid going down this dead end, or (b) spot something in our procedure that we didn’t quite get right.
(Let’s set aside the evil capitalist logic that intends to promote damage and waste external to the organization by ownership control, lying and deliberately withholding technical information.)
I believe I have recognized the hallmark of an innovative system — it’s one that is capable of funding these failures. It might seem efficient to direct all your programmers to work only on “real” projects. But if you do you won’t encounter the sequence of failures, abandoned experiments and learning opportunities that are an utterly essential part of the innovative process. Or worse, you’ll continue to push through on a serious “real” project long after it should have been declared a failure and learnt from.
This critique works on the pharmaceuticals system. New drugs require a great deal of innovation to develop. But we only reward the successes through a government-granted patent monopoly long after the work is fully complete. Who is paying for all the necessary failed drugs along the way? This crucial part of the system is overlooked. It is real work, and it has to be paid for. The official story is that the drugs companies recycle the profits from the successes into future failed experiments. But there’s very little quantified evidence of that.
So here’s to failure. We need more of it. And we need not to be frightened of it.
I love Python, but its lack of multi-core support is killing us in the Adaptive Clearing strategy.
We’ve divided the algorithm into 8 separate threads that all work in a pipeline. Some of them can be operated in parallel (for example, the swirly Z-constant clearing strategy within an independent step-down level).
The heavy calculation is done in C++, but the high level toolpath planning, reordering, maintenance and selection from categories of start points is undertaken in Python.
The C++ functions unlock the GIL (the Global Interpreter Lock) on entry, so the Python interpreter can carry on in parallel and perhaps call another concurrent C++ function, but this only gets us so far. Our limit is full use of two cores, which on a 4- or 8-core machine is a little embarrassing.
So we have two options, neither of which is good. One is to port our thousands of lines of Python code into C++, which is not going to work: it would be too painful to debug this code yet again in a language that is much harder to debug. The second is to make Python multi-core.
Following my work attempting to create five axis toolpaths using tracking methods, I’ve fallen back to the more conventional techniques used to create 3-axis toolpaths of defining areas theoretically and then computing their boundaries.
For three-axis strategies almost everything is computed on an XY weave, an efficient axis-aligned subdividing structure. When things are aligned in this way it’s possible to be totally strict in the floating point calculations about which cell contains each point, and so to rely on the topological and geometric reasoning being consistent.
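As a toy illustration of that strictness (the helper names are mine, not the weave code): a regular partition with exact endpoints assigns every point to exactly one cell, with no epsilon anywhere.

```python
from bisect import bisect_right

def build_partition(n):
    # Regular subdivision of [0,1] with the endpoints exactly 0.0 and
    # 1.0, so cell membership at the boundary is never ambiguous.
    b = [0.0]
    for i in range(1, n):
        b.append(float(i)/n)
    b.append(1.0)
    return b

def cell_of(b, x):
    # Index of the half-open cell [b[i], b[i+1]) containing x.  Every
    # float in [0,1) lands in exactly one cell, so downstream reasoning
    # can rely on the answer being consistent.
    return bisect_right(b, x) - 1
```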