Sunday, October 13th, 2013 at 1:39 pm - Adaptive
The problem we had was that the data describing the clearing work in each level was encoded in a big disorganized C++ object with lots of values, pointers, arrays, and junk like that, and I was too lazy/wise to build a whole serialization protocol for it. I decided it was much easier to make a pure Python unpacking (and repacking) of the C++ object and then simply encode and decode it into a stream of bytes with the pickle module.
pyzleveljob = MakePurePythonCopyOfDataZlevelCobject(zleveljob)   # PyWorkZLevel_pickle (yellow)
sout = cPickle.dumps(pyzleveljob, cPickle.HIGHEST_PROTOCOL)      # just_pickle (red)
p = subprocess.Popen(["python", "-u", "zlevelprocedure.py"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=False)
p.stdin.write(sout)
p.stdin.flush()
p.stdin.close()   # the subprocess starts executing here at popen (blue)
That’s obviously the simplified code. The -u option is to ensure that the pipes are treated as pure binary streams at both ends.
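For context, the receiving end of the pipe would look something like the sketch below. This is my reconstruction rather than the actual zlevelprocedure.py, and DoTheZLevelClearing is a hypothetical stand-in for whatever work the subprocess really does:

# zlevelprocedure.py -- a minimal sketch of the receiving end (assumed, not the real script)
import sys
import cPickle

pyzleveljob = cPickle.load(sys.stdin)        # read and unpickle the job from the pipe
result = DoTheZLevelClearing(pyzleveljob)    # hypothetical worker function
sys.stdout.write(cPickle.dumps(result, cPickle.HIGHEST_PROTOCOL))
sys.stdout.flush()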
Here’s what the timing graph looks like, with time going from left to right, with seven concurrent process threads.
This is the follow-on from last week’s A plan to make Python 2.4 multicore. It’s been a disaster, but I’ve proved a point.
In summary, the idea was to run a single-threaded build of Python (compiled with WITH_THREADS undefined) in multiple different Windows threads, in such a way that these totally isolated interpreters could all call into the same pool of C++ functions and access the same very large objects.
And it might have been easy if the Python interpreter wasn’t so totally infested with unnecessary global variables — especially in the garbage collector and object allocators.
A sample of the hacks is at: bitbucket.org/goatchurch/python24threadlocal. (Go into the files and click on [DIFF] to see the sorts of changes I was making.)
First the results.
This has got to be an old idea. But I don’t know if it’s been done yet.
The issue is when a ball-nosed cutter of radius r goes over the edge of a part it spends a long time (between points a and b) in contact with the same point on the corner, grinding it away as it follows the green circular path (if it crosses the corner at right angles; otherwise it’s an elliptical path).
If you want nice crisp knife-edge corners, you aren't going to get them.
(At least that’s my guess. I don’t actually know as I am not a machinist and don’t have access to a machine on which to conduct experiments.)
I have recently been doing a lot of curve fitting, which gave me an idea.
How about if I detected these outer circular motions and bulged them out slightly by a small factor e? Then the tool would leave the corner at the top surface when it is at point a, and rejoin the corner when it is at point b, instead of grinding it down smooth on its way over.
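As a rough illustration only (the post doesn't give an implementation; p, the corner point c and the factor e are just the names used above), the bulging operation on a single toolpath point might look like:

# Push a toolpath point p on the arc outwards from the corner point c by the
# small bulge factor e.  Purely illustrative; detecting the circular motions
# in the first place is the hard part and isn't shown.
def bulge_point(p, c, e):
    return tuple(c[i] + (p[i] - c[i]) * (1.0 + e) for i in range(3))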
If we did this everywhere, would it make the whole part that little bit sharper and nicer?
Who knows? But I'm not going to spend my time programming it for the general case until I see some evidence that it does make a difference. It would be easy enough for a Process Engineer to directly design some special paths that only work on a test corner, do a set of experiments that scientifically quantify the sharpness of the corners that result from this strategy, and then product-test the feature for desirability.
Now, who do I know who works in a large company that ought to be well enough organized to deploy resources to check up on an idea like this? Never mind. I can’t think of one off the top of my head.
There's an 80% chance that this idea is garbage. It's a necessary feature of progress that there are many failed experiments along the way, because you don't know what you are doing. Of course, if you did know what you were doing, you wouldn't be doing anything new, would you?
If we did do these experiments and they didn’t work out, it would be very important to publish the results accurately so that the next people can either (a) avoid going down this dead end, or (b) spot something in our procedure that we didn’t quite get right.
(Let’s set aside the evil capitalist logic that intends to promote damage and waste external to the organization by ownership control, lying and deliberately withholding technical information.)
I believe I have recognized the hallmark of an innovative system — it’s one that is capable of funding these failures. It might seem efficient to direct all your programmers to work only on “real” projects. But if you do you won’t encounter the sequence of failures, abandoned experiments and learning opportunities that are an utterly essential part of the innovative process. Or worse, you’ll continue to push through on a serious “real” project long after it should have been declared a failure and learnt from.
This critique works on the pharmaceuticals system. New drugs require a great deal of innovation to develop. But we only reward the successes through a government-granted patent monopoly long after the work is fully complete. Who is paying for all the necessary failed drugs along the way? This crucial part of the system is overlooked. It is real work, and it has to be paid for. The official story is that the drugs companies recycle the profits from the successes into future failed experiments. But there’s very little quantified evidence of that.
So here’s to failure. We need more of it. And we need not to be frightened of it.
I love Python, but its lack of multi-core support is killing us in the Adaptive Clearing strategy.
We've divided the algorithm into 8 separate threads that all work in a pipeline. Some of them can be operated in parallel (for example, the swirly Z-constant clearing strategy within an independent step-down level).
The heavy calculation is done in C++, but the high level toolpath planning, reordering, maintenance and selection from categories of start points is undertaken in Python.
While the C++ functions release the GIL (the Global Interpreter Lock) on entry, so that the Python interpreter can carry on in parallel (and perhaps call another concurrent C++ function), this only gets us so far. Our limit is full use of two cores, which on a 4 or 8 core machine is a little embarrassing.
So we have two options, neither of which is good. One is to port our thousands of lines of Python code into C++ (which is not going to work, as it will be too painful to debug this code yet again in another language that is much harder). And the second is to make Python multi-core.
Following my work attempting to create five axis toolpaths using tracking methods, I’ve fallen back to the more conventional techniques used to create 3-axis toolpaths of defining areas theoretically and then computing their boundaries.
For three-axis strategies almost everything is computed on an XY weave, an efficient aligned subdividing structure. When things are aligned in this way it's possible to be totally strict with the floating point calculations about which cell contains each point, and so rely on the topological and geometric reasoning being consistent.
We all know how rotten software patents are. Unfortunately, software companies are too often run by idiots who play the game by contributing to the worsening of the situation and doing absolutely nothing to make it any better.
Anyways, 19 months ago in March 2012 we got a press release:
Delcam will preview its new Vortex strategy for high-speed area clearance on stand 4011 at the MACH exhibition to be held in Birmingham from 16th to 20th April…
Vortex, for which Delcam has a patent pending, has been developed by the company specifically to gain the maximum benefit from solid carbide tooling, in particular those designs that can give deeper cuts by using the full flute length as the cutting surface. It can be used for two- and three-axis roughing, three-plus-two-axis area clearance and for rest machining based on stock models or reference toolpaths…
Unlike other high-speed roughing techniques that aim to maintain a constant theoretical metal-removal rate, the Vortex strategy produces toolpaths with a controlled engagement angle for the complete operation. This maintains the optimum cutting conditions for the entire toolpath that would normally be possible only for the straight-line moves. As a result, the cutting time will be shorter, while cutting will be undertaken at a more consistent volume-removal rate and feed rate, so protecting the machine.
Sounded a lot like the Adaptive Clearing strategy which I invented in 2004, and was most accurately replicated in 2009 by Mastercam.
Sometimes you get a feeling when you start an idea that you’re going to spend the rest of your life debugging it to make it work.
The idea behind Adaptive Clearing just worked the first time I implemented it. The theory behind it is fundamentally sound. Trajectories in 2D are unambiguous, and you know when they come close and cross.
Just to show willing, and because it’s not working out as effortlessly as I’d hoped, here are some notes on my approaches to five axis theory.
One classical approach is to create a 3-axis toolpath for a ball-nosed cutter ignoring the shaft. Then work out a sequence of tilt angles at each point along the toolpath so that the shaft and holder don't collide with the part. The actual cutting tool is invariant to the tilt because it's a sphere. I made an adequate implementation of this last year.
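As a sketch only (not my actual implementation, and ShaftCollides is a hypothetical collision test), choosing the tilt at each point might look like:

def ChooseTiltAngles(toolpath, tiltstep=0.5, maxtilt=90.0):
    # For each toolpath point, increase the tilt away from vertical until the
    # (hypothetical) shaft/holder collision test passes.  The ball tip itself
    # is unaffected by the tilt, so the cut is unchanged.
    tilts = []
    for p in toolpath:
        tilt = 0.0
        while tilt < maxtilt and ShaftCollides(p, tilt):
            tilt += tiltstep
        tilts.append(tilt)
    return tilts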
Another common approach is to create a path in the part surface that is to be the tool contact point and then derive a tool position and tilt from this a la single surface machining. There’s a similarity between this and the first approach in that the fulcrum for the tilt is the contact point at the surface of the tool rather than at the centre of the ball.
I shouldn't need to go into why deriving a toolpath from a sequence of contact points is an epic fail, having dispatched this as a viable option for all but the simplest, most unrepresentatively unrealistic, yet intuitively compelling cases back in 1994. It's daleks and staircases. Folks who believe it can be done either haven't personally tried to make it work, or, if they have made it work, will most likely have debugged their code into a completely different algorithm.
Monday, July 15th, 2013 at 11:47 am - Machining
Suppose I have a sphere of radius 5mm. I need it to be triangulated to a tolerance of 0.01mm.
This is the answer I would like to get:
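The answer the post goes on to give isn't reproduced above. Purely as my own back-of-envelope check of what that tolerance implies, using the sagitta of a chord on a great circle:

import math

r, tol = 5.0, 0.01
# A straight chord spanning angle theta on a circle of radius r deviates from
# the arc by the sagitta r * (1 - cos(theta / 2)); keep that within tolerance.
theta = 2.0 * math.acos(1.0 - tol / r)        # about 0.1265 radians per edge
nedges = math.ceil(2.0 * math.pi / theta)     # about 50 edges around a great circle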
Thursday, June 13th, 2013 at 8:13 pm - Machining
Apparently the following terribly obvious idea isn’t terribly obvious, and everyone who has been collecting time series data from, say, temperature sensors has simply been stuffing them into their flavour-of-the-month database as, basically, rows of the form (time, value, sensor_id) in copious quantities with not much idea about what to do with it.
If you’ve spent your life generating machine tool paths and thinning out the points so that the controllers don’t get overloaded, the first thing you say is: “Hang on, why don’t you just fit some straight line segments to all this data?”
Our good friends at Open Energy Monitor immediately began running with the idea as soon as I mentioned it — and show every sign of comprehending the depth of the science behind the practice which I have unfortunately not had time to investigate in our machine tooling case.
Here is a section from a temperature time sequence that contains 10090 values collected over seven days:
Here's what it looks like when you thin it to a tolerance of 0.25 degrees, which results in 210 points, a compression factor of about 50:
The important thing about these algorithms is to model the lines, not the points, because that’s what you will be integrating with. This means a record in the database is of the form:
(time0, value0, time1, value1, tolerance, sensor_id)
Now you might think this is unnecessary redundancy because each time and value is listed twice in the database, when you should be able to get away with simply listing the sample points once, but it’s actually the linear segments that define the curve. Databases are very poor at joining rows against sequentially subsequent rows to recover these records, so don’t try to do it that way. The tolerance value refers to how well the line segment from (time0, value0) to (time1, value1) fits the data.
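To make that concrete, here is a minimal sketch (my own, using the field names above) of reading a value back off one of these segment records:

def value_at(t, rec):
    # rec = (time0, value0, time1, value1, tolerance, sensor_id)
    time0, value0, time1, value1 = rec[0], rec[1], rec[2], rec[3]
    lam = (t - time0) / float(time1 - time0)   # fractional position of t along the segment
    return value0 + lam * (value1 - value0)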
The algorithm I use to do this thinning is pretty simple, with the recursion done on an explicit stack:
segs = [(t0, v0, t1, v1), (t1, v1, t2, v2), ...]   # input
thinsegs = []                                      # output
cstack = [(0, len(segs) - 1)]
while cstack:
    i0, i1 = cstack.pop()
    di, dtol = GreatestDeviation(segs, i0, i1)
    if dtol > maxtolerance:
        cstack.append((di + 1, i1))
        cstack.append((i0, di))
    else:
        # emit one thinned segment in the (time0, value0, time1, value1, tolerance) form
        thinsegs.append((segs[i0][0], segs[i0][1], segs[i1][2], segs[i1][3], dtol))
All we do is fit a line to each sequence of segments, find the element of greatest deviation (on the (t1, v1) point), and split at that place if it exceeds the pre-set tolerance.
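GreatestDeviation isn't given in the post; a minimal sketch of what it might do (my own reconstruction), measuring each intermediate (t1, v1) endpoint against the chord joining the ends of the span, is:

def GreatestDeviation(segs, i0, i1):
    # chord from the start of segs[i0] to the end of segs[i1] (assumes t1 > t0)
    t0, v0 = segs[i0][0], segs[i0][1]
    t1, v1 = segs[i1][2], segs[i1][3]
    di, dtol = i0, 0.0
    for i in range(i0, i1):                    # interior endpoints only
        t, v = segs[i][2], segs[i][3]
        lam = (t - t0) / float(t1 - t0)
        d = abs(v - (v0 + lam * (v1 - v0)))    # vertical distance to the chord
        if d > dtol:
            di, dtol = i, d
    return di, dtol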
Doing it this way in a batch is quite a bit easier than attempting to thin on the fly at load-time, but it is simple-minded because on an evenly curved path it will tend to split into powers of 2, which means it's on average around 25% worse than optimal. And sometimes it's possible to fit lines to the sequence whose endpoints are not on the sequence, but which fit better. I've never done this; it's hard, doesn't gain much and may have disadvantages. But it is an important part of the science. One thing I do like about this algorithm is that it preserves corners well (although there are some exceptions). I also think that the tolerance should be variable, so it can be tighter where the slope is long and shallow, because the low frequency undulations on a long stable sequence could contain important data.
A notable observation of this curve is that it looks as if it contains exponential decay segments going up and down as the person in the room switched a fan heater on and off to keep warm. The decay curves going down are much clearer, and I wonder if they are all tending towards the ambient outside temperature at a rate given by the insulation rating of the room (very poor).
So we tried fitting exponential decay curves to this data, made possible with the addition of a further parameter in the segment record, called ex.
v = v0 + f * (e^(-(t - t0) * ex) - 1), where: f = (v1 - v0) / (e^(-(t1 - t0) * ex) - 1)
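In code that evaluation is a direct transcription of the formula (a sketch only; the fitting of ex itself isn't shown here):

import math

def exp_segment_value(t, time0, value0, time1, value1, ex):
    # evaluate the exponential-decay segment at time t; ex is the stored decay rate
    f = (value1 - value0) / (math.exp(-(time1 - time0) * ex) - 1.0)
    return value0 + f * (math.exp(-(t - time0) * ex) - 1.0)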
This reduced the segment count to 161, which is promising. I can’t plot the curved segments using the annotated timeline, so they show up as straight lines below. However, I can plot the exponent values which show that the decay rate is pretty steady across the intervals whenever the heater is switched off:
So that’s the fun and games we can play with one sensor alone. Imagine what we can do when we have 100 sensors all in the same house and can relate the time displacements of the temperature changes between one sensor and the next.