Freesteel Blog » Machining

Saturday, November 16th, 2013 at 11:50 am - - Machining, Whipping

A lot’s been going on in the world. It’s out of my hands. I don’t know anything or have any influence over it, so what I say shouldn’t matter. Very sensitive. Yadda yadda. Got to watch out because it’s publicly listed companies and there’s shares and high-stakes gambling and all that money stuff.

Funnily enough, private non-listed companies, which have no jittery shareholders or regulations to worry about, are even more secretive about their businesses. I wonder if they bother to invent positive and vaguely plausible excuses for their secrecy when there is apparently no commercial reason for it.

Secrecy is actually about power in all frames. This is the week that the TPP trade agreement got leaked. The text outlines a vast expansion of no-strings-attached corporate monopolies (aka “Intellectual Property”) in all sectors of the economy — policies that have been soundly and democratically rejected in the past. There’s no reason to keep that stuff secret, except to facilitate a plan to ram it into law in an instant, as a shock to the system, at an opportune moment.

Not sure why I’m talking about that. I wanted to talk about food.

(more…)

Wednesday, November 6th, 2013 at 7:25 pm - - Machining

Let’s make a set of 50 random points, distributed like so:

import random

N = 50
xs = [ random.gauss(0,25)  for i in range(N) ]
ys = [ random.gauss(0,11.9)  for i in range(N) ]
ys = [ y+x*0.3  for x, y in zip(xs, ys) ]   # shear so that y is correlated with x
xm, ym = sum(xs)/N, sum(ys)/N
pts = [ (x-xm, y-ym)  for x, y in zip(xs, ys) ]

The displacement by (xm, ym) is so that the average is centred on the origin, to make things simpler. Here’s the picture:
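A minimal sketch that would draw something like it, assuming matplotlib is available and continuing from the pts above (this isn’t the script that produced the actual image):

import matplotlib.pyplot as plt
plt.scatter([ x  for x, y in pts ], [ y  for x, y in pts ])   # the centred point cloud
plt.axis("equal")    # keep x and y scales the same so the tilt is visible
plt.show()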

(more…)

Tuesday, October 29th, 2013 at 6:37 pm - - Machining

The new Microsoft C++ compiler was released two weeks ago with all-new code optimization, performance features, and other bells and whistles.

You need to keep up with the times and not fall behind. How’s it working out for us with the old machining kernel?

Internal error 'j >= 0' failed at freesteel\S2weaveIterators.cpp:345 (#1).
  1: S2weavefciter::ExtendToCell+1035
  2: S2weavefciter::ExtendToCell+67
  3: S2weave::AddValPerCells+813
  4: S2weave::BuildValPerCells+684
  ...
Error: An internal error was generated for toolpath.

Bugger!

This is really old code that hasn’t been looked at for years because it’s 100% reliable.

The problem only appears in the fully optimized release version, so I am reduced to debugging by Print statements.

I chased the problem all the way down to the generation of the regular partition, the substance of which reduces to this:

int n = 499;
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back((double)i/n);
cout << "This should be exactly zero: " << (b.back() - 1.0) << endl;

The result was: -1.11022e-16, violating the axiom that z/z = 1.0 for all z != 0.0.

Now, unlike most Computational Geometry programmers around the world, I know that floating point arithmetic is perfectly precise and predictable. It just adheres to its own axioms, which are not the same as those of Real Numbers.
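For example, at the Python prompt:

>>> 0.1 + 0.2 == 0.3                    # not an axiom of the Real Numbers...
False
>>> 0.1 + 0.2 == 0.30000000000000004    # ...but exactly this value, every time
True
>>> 499.0/499.0 == 1.0                  # division is correctly rounded, so z/z really is 1.0
True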

You can choose to “solve” it by assuming that the calculation is approximate. But I have absolutely no truck with allowing any of my algorithms to be globally polluted by an arbitrary epsilon value, such as the SPAresabs deployed in the ACIS kernel, because, tempting though it is, I know how bottomless this can of rotting worms is once opened. The reliance on an epsilon value in computational geometry is a product of group-think and laziness: the group thinks it has to be this way when it does not. Use of epsilon approximations is as harmful to programming as the goto statement. It would be nice to have sufficient general enlightenment within the computational geometry community to be able to hold this debate. Unfortunately this is not yet the case and, while I am still completely right, I remain in a minority of one.

And it takes exactly this type of dogged attitude to get to the bottom of the issue and not fall for the trap of believing that floats are simply inaccurate.

Rene spotted the problem immediately, though it took me some delving into the disassembly code to be convinced. Multiplication is far faster than division in the CPU (probably because the partial products can be summed in parallel, rather than going through the sequential subtractions of division), so the new compiler has optimized it, like so:

int n = 499;
double n_reciprocal = 1.0/n;
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back(i*n_reciprocal);

This was substantiated by the following line of Python:

>>> 1-(1/499.0)*499
1.1102230246251565e-16

The fix is easy. Do the optimization myself:

double nrecip = 1.0 / n;
b.push_back(0);
for (int i = 1; i < n; i++)
    b.push_back(i*nrecip);
b.push_back(1);

Problem is, how many other places in the code do I have this?

Oh dear.

Sunday, October 13th, 2013 at 1:39 pm - - Adaptive

So, we’re having lots of fun with the subprocess module, kicking off independently executed processes and transmitting data to and from them through their stdin and stdout pipes.

The problem we had was that the data describing the clearing work in each level was encoded in a big disorganized C++ object with lots of values, pointers, arrays, and junk like that, and I was too lazy/wise to build a whole serialization protocol for it. I decided it was much easier to make a pure Python unpacking (and repacking) of the C++ object and then simply encode and decode it into a stream of bytes with the pickle module.

import subprocess
import cPickle

pyzleveljob = MakePurePythonCopyOfDataZlevelCobject(zleveljob)  # PyWorkZLevel_pickle  yellow
sout = cPickle.dumps(pyzleveljob, cPickle.HIGHEST_PROTOCOL)     # just_pickle  red
p = subprocess.Popen(["python", "-u", "zlevelprocedure.py"], 
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=False)
p.stdin.write(sout)
p.stdin.flush()
p.stdin.close()       # the subprocess starts executing here at popen  blue

That’s obviously the simplified code. The -u option is to ensure that the pipes are treated as pure binary streams at both ends.
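On the other end of the pipe, zlevelprocedure.py reads until the parent closes its stdin, unpickles the job, and writes its answer back the same way. A cut-down sketch of that side (the real one obviously does the clearing work in the middle rather than echoing the job back):

import sys, cPickle

if __name__ == "__main__":
    pyzleveljob = cPickle.loads(sys.stdin.read())   # blocks until the parent closes its end
    result = pyzleveljob                            # the actual zlevel clearing work goes here
    sys.stdout.write(cPickle.dumps(result, cPickle.HIGHEST_PROTOCOL))
    sys.stdout.flush()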

Here’s what the timing graph looks like, with time going from left to right, with seven concurrent process threads.
(more…)

Thursday, October 3rd, 2013 at 6:00 pm - - Machining 2 Comments »

This is the follow-on from last week’s A plan to make Python 2.4 multicore. It’s been a disaster, but I’ve proved a point.

In summary, the idea was to run a single-threaded build of Python (compiled with WITH_THREADS undefined) in multiple different Windows threads, in such a way that these totally isolated interpreters could interact with and access the same pool of C++ functions and very large objects.

And it might have been easy if the Python interpreter wasn’t so totally infested with unnecessary global variables — especially in the garbage collector and object allocators.

A sample of the hacks is at: bitbucket.org/goatchurch/python24threadlocal. (Go into the files and click on [DIFF] to see the sorts of changes I was making.)

First the results.

(more…)

Tuesday, October 1st, 2013 at 6:30 pm - - Machining 2 Comments »

This has got to be an old idea. But I don’t know if it’s been done yet.

The issue is that when a ball-nosed cutter of radius r goes over the edge of a part, it spends a long time (between points a and b) in contact with the same point on the corner, grinding it away as it follows the green circular path (if it crosses the corner at right angles; otherwise it’s an elliptical path).

If you want nice crisp knife-edge corners, you aren’t going to get them.

(At least that’s my guess. I don’t actually know as I am not a machinist and don’t have access to a machine on which to conduct experiments.)

I have recently been doing a lot of curve fitting, which gave me an idea.

How about if I detected these outer circular motions and bulged them out slightly by a small factor e? Then the tool would leave the corner at the top surface when it is at point a, and rejoin the corner when it is at point b — and not grind it down smooth on its way over.
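As a sketch of what I mean (the sine-shaped bulge and the parameterization here are just an illustration, not worked-out geometry): take the arc the tool centre follows around the corner, keep it pinned at a and b, and push it out radially by up to a factor e in the middle.

import math

def bulged_arc(centre, r, ang_a, ang_b, e, nsteps=20):
    # Polyline that coincides with the original arc at the endpoints a and b
    # but lies up to a factor (1+e) further from the corner in the middle.
    pts = []
    for i in range(nsteps + 1):
        t = float(i)/nsteps                        # 0 at a, 1 at b
        ang = ang_a + (ang_b - ang_a)*t
        bulge = 1.0 + e*math.sin(math.pi*t)        # zero at the ends, e in the middle
        pts.append((centre[0] + r*bulge*math.cos(ang),
                    centre[1] + r*bulge*math.sin(ang)))
    return pts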

If we did this everywhere, would it make the whole part that little bit sharper and nicer?

Who knows? But I’m not going to spend my time programming it for the general case until I see some evidence that it does make a difference. It would be easy enough for a Process Engineer to directly design some special paths that only work on a test corner, do a set of experiments that involve scientifically quantifying the sharpness of the corners that result from this strategy, and then product-test the feature for desirability.

Now, who do I know who works in a large company that ought to be well enough organized to deploy resources to check up on an idea like this? Never mind. I can’t think of one off the top of my head.

There’s an 80% chance that this idea is garbage. It’s a necessary feature of progress that there are many failed experiments along the way, because you don’t know what you are doing. Of course, if you did know what you were doing, you wouldn’t be doing anything new, would you?

If we did do these experiments and they didn’t work out, it would be very important to publish the results accurately so that the next people can either (a) avoid going down this dead end, or (b) spot something in our procedure that we didn’t quite get right.

(Let’s set aside the evil capitalist logic that intends to promote damage and waste external to the organization by ownership control, lying and deliberately withholding technical information.)

I believe I have recognized the hallmark of an innovative system — it’s one that is capable of funding these failures. It might seem efficient to direct all your programmers to work only on “real” projects. But if you do you won’t encounter the sequence of failures, abandoned experiments and learning opportunities that are an utterly essential part of the innovative process. Or worse, you’ll continue to push through on a serious “real” project long after it should have been declared a failure and learnt from.

This critique works on the pharmaceuticals system. New drugs require a great deal of innovation to develop. But we only reward the successes through a government-granted patent monopoly long after the work is fully complete. Who is paying for all the necessary failed drugs along the way? This crucial part of the system is overlooked. It is real work, and it has to be paid for. The official story is that the drugs companies recycle the profits from the successes into future failed experiments. But there’s very little quantified evidence of that.

So here’s to failure. We need more of it. And we need not to be frightened of it.


Friday, September 20th, 2013 at 11:30 am - - Adaptive, Machining 3 Comments »

I love Python, but its lack of multi-core support is killing us in the Adaptive Clearing strategy.

We’ve divided the algorithm into 8 separate threads that all work in a pipeline. Some of them can be operated in parallel (for example, the swirly Z-constant clearing strategy within an independent step-down level).

The heavy calculation is done in C++, but the high level toolpath planning, reordering, maintenance and selection from categories of start points is undertaken in Python.

While the C++ functions unlock the GIL (the Global Interpreter Lock) on entry, so that the Python interpreter can carry on in parallel (and perhaps call another concurrent C++ function), this only gets us so far. Our limit is full use of two cores, which on a 4 or 8 core machine is a little embarrassing.
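Here is a toy demonstration of that ceiling (nothing to do with our actual code; time.sleep stands in for a C++ call that has released the GIL): the two pure-Python threads are serialized by the GIL, so their times add, while the two GIL-released calls overlap.

import threading, time

def python_work():
    total = 0
    for i in xrange(10000000):      # pure Python, so the GIL is held throughout
        total += i

def gil_released_work():
    time.sleep(1.0)                 # stands in for a C++ call that has dropped the GIL

for work in (python_work, gil_released_work):
    t0 = time.time()
    threads = [ threading.Thread(target=work)  for k in range(2) ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print work.__name__, "x2 threads took", round(time.time() - t0, 2), "seconds"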

So we have two options, neither of which is good. One is to port our thousands of lines of Python code into C++ (which is not going to work, as it would be too painful to debug this code yet again in another language that is much harder to work in). The second is to make Python multi-core.

(more…)

Monday, September 9th, 2013 at 11:58 am - - Machining 2 Comments »

Following my work attempting to create five-axis toolpaths using tracking methods, I’ve fallen back to the more conventional techniques used to create three-axis toolpaths: defining areas theoretically and then computing their boundaries.

For three-axis strategies almost everything is computed on an XY weave, an efficient axis-aligned subdividing structure. When things are aligned in this way it’s possible to be totally strict in the floating point calculations about which cell contains each point, and so rely on the topological and geometric reasoning being consistent.
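A small sketch of the kind of strictness I mean, in one dimension (the real weave is a two-dimensional subdividing structure, and these boundaries are just an example partition): with half-open cells and a bisection lookup, every point lands in exactly one cell, and there is no epsilon anywhere.

import bisect

xboundaries = [ i/499.0  for i in range(500) ]   # strictly increasing cell walls from 0.0 to 1.0

def cell_index(x, b=xboundaries):
    # Unique index j such that b[j] <= x < b[j+1] (half-open cells, no tolerance).
    assert b[0] <= x < b[-1]
    return bisect.bisect_right(b, x) - 1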

(more…)

Wednesday, August 28th, 2013 at 12:43 pm - - Adaptive 11 Comments »

We all know how rotten software patents are. Unfortunately, software companies are too often run by idiots who play the game by contributing to the worsening of the situation and doing absolutely nothing to make it any better.

Anyways, 19 months ago in March 2012 we got a press release:

Delcam will preview its new Vortex strategy for high-speed area clearance on stand 4011 at the MACH exhibition to be held in Birmingham from 16th to 20th April…

Vortex, for which Delcam has a patent pending, has been developed by the company specifically to gain the maximum benefit from solid carbide tooling, in particular those designs that can give deeper cuts by using the full flute length as the cutting surface. It can be used for two- and three-axis roughing, three-plus-two-axis area clearance and for rest machining based on stock models or reference toolpaths…

Unlike other high-speed roughing techniques that aim to maintain a constant theoretical metal-removal rate, the Vortex strategy produces toolpaths with a controlled engagement angle for the complete operation. This maintains the optimum cutting conditions for the entire toolpath that would normally be possible only for the straight-line moves. As a result, the cutting time will be shorter, while cutting will be undertaken at a more consistent volume-removal rate and feed rate, so protecting the machine.

Sounded a lot like the Adaptive Clearing strategy, which I invented in 2004 and which was most accurately replicated in 2009 by Mastercam.

(more…)

Tuesday, August 20th, 2013 at 1:16 pm - - Machining 1 Comment »

Sometimes you get a feeling when you start an idea that you’re going to spend the rest of your life debugging it to make it work.

The idea behind Adaptive Clearing just worked the first time I implemented it. The theory behind it is fundamentally sound. Trajectories in 2D are unambiguous, and you know when they come close and cross.

(more…)