Freesteel Blog » Machining

Wednesday, February 26th, 2014 at 1:41 pm - - Adaptive

Last year we got a chance to see SolidCAM’s iMachining and laughed at the way its progress bar jumped all over the place, from -14% to 200% and back again.

Then we looked at our own Adaptive Clearing strategy which we had just spent the past year making multicore — and noticed it did the same stupid thing!

How embarrassing.

You never notice yourself picking your own nose, but when someone else does it in your face, you realize it’s ugly.

Progress bars don’t get as much love and attention from the programmers as they ought to, given how much time the users have to stare at them. The users think it’s so normal for the progress bar to be absolutely wrong that it’s considered a sign of extreme naivety to complain about it. They probably believe that we’re going to laugh at them if they raise the issue.

It turns out that the progress bar on multiple CPU processing is not hard to get right, but you do it differently to how you do it on a single-threaded process.

Let’s first think about what a progress bar is for. There are two different options. It could report the time remaining for the process to complete, or it could report the percentage of the process job that has been completed.

The time remaining might be the most useful information for organizing your life (is there enough time to grab lunch while this completes?), but there’s no way you’re going to get that information — even though it’s what everyone wants to know.

You will hear: “How many days till it’s done?” more often than “Are we at the 75% complete stage yet?” for a project — and that’s even before it’s over-run by a factor of two.

In fact, the only practical implementation for the time remaining is to run the whole job first, time it, and then set a count-down timer from that value when you run it again. It’ll make everything run twice as slow, but what’s the big deal?

(more…)

Tuesday, February 11th, 2014 at 5:43 pm - - Adaptive

Taking a break from all my other mindful notions to do some proper work on this stay-down linking job.

I have an A* function that works on the underlying weave structure. The white zones refer to the weave which defines the interior of the contour, and the red zones are the places where it’s safe to go without colliding with the uncut stock.

[image: astart0area]
Most A* implementations have one start point in each tile, which spreads out to all sides of the tile. But in my implementation the paths from the sides of the tile are pulled from the receiving end, so it’s possible to have two start points in the same tile with two paths going through, as shown above.

(more…)

Friday, January 24th, 2014 at 6:07 pm - - Adaptive

Sometimes it’s a relief to be certain that I’m doing something which is exactly what I am paid to do. All the rest of the mouthing off comes for free. Though maybe it’s important to always question what you are paid to do, so you don’t wind up wasting everyone’s time doing something that’s actually pointless.

After getting the A* routing through the weave structure to work, based on my unpatented subdividing model of a 2D area (the geometric basis of the adaptive clearing non-gouging motion algorithm), I noticed that it did not give enough sample points for a linking pass to stay within tolerance. The route, which must avoid uncut stock, needs a finer sample rate than the basic weave cells provide. Those cells were sampled to a resolution that keeps the boundary contour within tolerance, but that is not fine enough to handle the uncut stock structures present in the interior of the area partway through the clearing cycle.

There are two options for dealing with this insufficient sampling: either bolt on an additional, finer sampling system, or improve the basic underlying weave sampling structure.

(more…)

Tuesday, January 14th, 2014 at 6:47 pm - - Adaptive

A quick day’s work to make a multi-threaded percentage-progress object in Python that handles residuals.

Let’s start with the test code (which I wrote after the working code — take that TDD!):

import time

class ACProgress:
    def __init__(self):
        self.sumprogress = 0
    def ReportProgress(self, lprogress):
        self.sumprogress += lprogress
        print "The progress is at:", (self.sumprogress * 100), "percent"

def addprogress(a, n):
    for i in range(n):
        acprogress.ReportProgress(a/n)
        time.sleep(0.05)

acprogress = ACProgress()
addprogress(1.0, 25)

This prints numbers up to 100% in steps of 4%.

Now we want to do this in threads.
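As a sketch of roughly where this is heading (a minimal version of my own, not the actual Adaptive Clearing progress object, and it leaves out the residual handling for now): each thread gets handed a fixed share of the total and reports its increments through a lock, so the sum can only creep up towards 100%.

import threading, time

class ThreadSafeProgress:
    # minimal sketch: accumulate progress fractions reported from several threads
    def __init__(self):
        self.sumprogress = 0.0
        self.lock = threading.Lock()
    def ReportProgress(self, lprogress):
        with self.lock:
            self.sumprogress += lprogress
            print("The progress is at: %g percent" % (self.sumprogress * 100))

def addprogress(progress, share, n):
    # this thread owns 'share' of the total and reports it in n equal slices
    for i in range(n):
        progress.ReportProgress(share / n)
        time.sleep(0.05)

progress = ThreadSafeProgress()
threads = [threading.Thread(target=addprogress, args=(progress, 0.25, 25)) for j in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()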

(more…)

Friday, January 10th, 2014 at 4:22 pm - - Adaptive

The stay-down linking code relies on an implementation of the A* search algorithm to quickly find a connecting path between the start point and end point by spreading a network of paths in an optimal way.

There’s a couple of nice animations from Wikipedia showing what they look like.

Dijkstra’s / A*

Suppose we have a network of paths, like so:

The path from Start to B comes ahead of the path from Start to A in the priority queue because:

length(Start, B) + distance(B, End) < length(Start, A) + distance(A, End)

(The length function is the length of the green path, while the distance function is the length of the yellow path.)
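In code terms that ranking is just the standard A* priority of path-length-so-far plus straight-line distance to the End. A generic sketch (not the weave implementation, and the coordinates and path lengths are made up):

import heapq
from math import hypot

def distance(p, q):
    # the yellow line: straight-line distance to the End, used as the A* heuristic
    return hypot(p[0] - q[0], p[1] - q[1])

End = (10.0, 0.0)
A, B = (3.0, 4.0), (2.0, -1.0)
# length() is the green path already walked through the network; made-up values here
length_to = {A: 6.0, B: 4.0}

frontier = []
for point in (A, B):
    priority = length_to[point] + distance(point, End)
    heapq.heappush(frontier, (priority, point))

# whichever pops out first is the one A* extends next; with these numbers it is B
print(heapq.heappop(frontier)[1])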

According to the A* algorithm, the next step should extend outwards from B before we extend from A, on the basis that there could potentially be a direct path all the way from B to the End point that is shorter than the best one we are going to get by extending A.

But this is impossible, because the theoretical best direct path cuts through many paths (including the Start to A one) before it reaches End, and any one of these paths could be substituted up to the point of intersection into that Start-B-End path to make it shorter, which is illogical, captain.

Therefore, if there is going to be a shorter path from Start that passes through B, it will have to avoid at the very least the girth of the search tree, defined as a line connecting two extreme points, like so:

And when you use this to increase the value of distance(B, End) in the formula above to the length of the two segments required to get round the girth line on either side (whichever is shorter), you don’t get quite so much unnecessary sprawl near the Start point. You can add more girth lines if you like.
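Here is a sketch of how that adjustment could be coded (the function names are mine): if the straight line from B to End crosses the girth segment, the heuristic grows to the shorter of the two detours around its endpoints.

from math import hypot

def dist(p, q):
    return hypot(p[0] - q[0], p[1] - q[1])

def segments_cross(p1, p2, q1, q2):
    # standard 2D test: the segments cross when each one straddles the other's line
    def area(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (area(q1, q2, p1) * area(q1, q2, p2) < 0 and
            area(p1, p2, q1) * area(p1, p2, q2) < 0)

def girth_adjusted_heuristic(B, End, g1, g2):
    # plain straight-line estimate unless the line B-End crosses the girth segment g1-g2
    if not segments_cross(B, End, g1, g2):
        return dist(B, End)
    # otherwise the best conceivable path has to bend round one end of the girth line
    return min(dist(B, g1) + dist(g1, End), dist(B, g2) + dist(g2, End))

With several girth lines you can take the largest of the adjusted estimates, since each one is still a lower bound on the real path length.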

It’s a small improvement, but practically for free, and may make a big difference for a lot of realistic cases where there is one major obstruction.

Tuesday, December 10th, 2013 at 6:40 pm - - Adaptive

Of course, I could be going down another blind channel. But for the moment it looks like it is getting closer as I progressively fiddle with the search algorithm and allow it to pick the exit points from each cell and not stick to the mid-point of each edge.

[image: astarnetwork]
(more…)

Wednesday, November 20th, 2013 at 8:24 pm - - Adaptive

How software actually gets developed. I came up with this idea over in Denmark when I heard the usual complaints about linking motions going over the top in a retract motion when they could stay down and traverse all hither and thither around the obstacles to get from one position to the next.

There’s always something. Last year it was the order of the cutting and the lack of multi-core processing. No one is complaining about those anymore. Either we fixed it, or they got bored.

Anyhow, I decided I’d show them why you don’t want that kind of path: plan a stay-down path no matter how twisted and complicated it gets, and then you can see it’s a lot slower than a straightforward no-nonsense retract motion.

This idea got onto the forum, and it sounds like people want it.

Very well.

I guess that delays my other attempts to make an improved minimal retract linking where it jumps over from one place to another on a hump, like you get with finish machining linking, only it was going to avoid uncut stock and be able to deviate sideways slightly to miss any upstands. There’s a difference between being the fastest toolpath and looking good.

Getting rid of all the retract linking moves is going to look good — we hope. Otherwise it might turn out to be a mad scribble. Who knows until we try it?

So, this requires an implementation of the well-known A* search algorithm on the weave structure. It’s been a long time since I’ve been into this area of the code. I’ve made about three false starts to get anything out, but now I know what I am doing. This is the first picture I’ve got.

Theoretically, because the search can spread out into every single cell in the weave structure, it will always find a connection no matter how complicated.

Now the trick is to improve it by ripping out sections of the path and replacing them with straighter and smoother motions. Oh, and to test for collisions with uncut stock so the link avoids it.
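The ripping-out step is essentially the usual shortcut pass over a polyline. A rough sketch, where segment_is_clear stands in for the real uncut-stock collision test:

def shorten_path(path, segment_is_clear):
    # greedy shortcutting sketch: from each point, jump straight to the furthest
    # later point that segment_is_clear() says we can reach without hitting stock
    result = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not segment_is_clear(path[i], path[j]):
            j -= 1
        result.append(path[j])
        i = j
    return result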

Might be slower, but since it now calculates in parallel on separate cores of the computer it will only be a problem if it becomes the bottleneck.

This is parallel program optimization. It’s no longer necessary to make your functions as fast as they possibly can be. They only have to not be the slowest of all the jobs that run in parallel.

It’s like a team race where it only counts when every single member of the team has crossed the finish line. So if you have a couple of really fast sprinters on your team it won’t make any difference.
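To put numbers on it (made up, of course):

# made-up per-job times in seconds for one batch of parallel work
job_times = [4.2, 3.9, 17.5, 4.0, 4.1]

# single-threaded the total is the sum; in parallel it is set by the slowest job
print("serial total: %g s" % sum(job_times))     # 33.7
print("parallel total: %g s" % max(job_times))   # 17.5

# shaving a second off one of the fast jobs changes nothing in parallel
job_times[0] -= 1.0
print("parallel total again: %g s" % max(job_times))   # still 17.5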

We could make things even slower by completing this linking function for every possible order of cutting. Rather than, at the end of each pass, picking one of the five next paths to go to according to which one is closer, you build a complete linking motion to each and pick the one that actually has the shortest link, a bit like a quantum wave function collapse.
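In sketch form, with hypothetical names (build_link standing in for the full stay-down linking calculation):

def pick_next_pass(current_end, candidate_passes, build_link):
    # build the actual stay-down link to every candidate next pass and keep
    # the one whose real link is shortest, rather than the nearest by distance
    best = None
    for nextpass in candidate_passes:
        link = build_link(current_end, nextpass.startpoint)
        if best is None or link.length < best[1].length:
            best = (nextpass, link)
    return best   # (chosen pass, its link), or None if there were no candidates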

Saturday, November 16th, 2013 at 11:50 am - - Machining, Whipping

A lot’s been going on in the world. It’s out of my hands. I don’t know anything or have any influence over it, so what I say shouldn’t matter. Very sensitive. Yadda yadda. Got to watch out because it’s publicly listed companies and there’s shares and high-stakes gambling and all that money stuff.

Funnily enough, private non-listed companies that do not have any jittery share-holders or regulations to worry about are even more extremely secretive about their businesses. I wonder if they bother to invent positive and vaguely plausible excuses for their secrecy when there is apparently no commercial reason for it.

Secrecy is actually about power in all frames. This is the week that the TPP trade agreement got leaked. The text outlines a vast expansion of no-strings-attached corporate monopolies (aka “Intellectual Property”) in all sectors of the economy — policies that have been soundly and democratically rejected in the past. There’s no reason to keep that stuff secret, except to facilitate a plan to ram it into law in an instant, as a shock to the system, at an opportune moment.

Not sure why I’m talking about that. I wanted to talk about food.

(more…)

Wednesday, November 6th, 2013 at 7:25 pm - - Machining

Let’s make a set of 50 random points, distributed like so:

import random

N = 50
xs = [ random.gauss(0,25)  for i in range(N) ]
ys = [ random.gauss(0,11.9)  for i in range(N) ]
ys = [ y+x*0.3  for x, y in zip(xs, ys) ]
xm, ym = sum(xs)/N, sum(ys)/N
pts = [ (x-xm, y-ym)  for x, y in zip(xs, ys) ]

The displacement by (xm, ym) is so that the average is centred on the origin, to make things simpler. Here’s the picture:

(more…)

Tuesday, October 29th, 2013 at 6:37 pm - - Machining

The new Microsoft C++ compiler was released two weeks ago with all new code optimization, performance features and other bells and whistles.

You need to keep up with the times and not fall behind. How’s it working for us with the old machining kernel?

Internal error 'j >= 0' failed at freesteel\S2weaveIterators.cpp:345 (#1).
  1: S2weavefciter::ExtendToCell+1035
  2: S2weavefciter::ExtendToCell+67
  3: S2weave::AddValPerCells+813
  4: S2weave::BuildValPerCells+684
  ...
Error: An internal error was generated for toolpath.

Bugger!

This is really old code that hasn’t been looked at for years because it’s 100% reliable.

The problem only appears in the fully optimized release version, so I am reduced to debugging by Print statements.

I chased the problem all the way down to the generation of the regular partition, the substance of which can be reduced to this:

int n = 499;
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back((double)i/n);
cout << "This should be exactly zero: " << (b.back() - 1.0) << endl;

The result was: -1.11022e-16, violating the axiom that z/z = 1.0 for all z != 0.0.

Now, unlike most Computational Geometry programmers around the world, I know that floating point arithmetic is perfectly precise and predictable. It just adheres to its own axioms, which are not the same as those of Real Numbers.
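For example, a quick check of that z/z axiom in Python:

# IEEE division is correctly rounded, and the true value of z/z is 1.0,
# which is exactly representable, so the result is exactly 1.0 for any
# finite nonzero z
for z in (3.0, 499.0, 0.1, 1e-300, 1e300):
    assert z / z == 1.0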

You can choose to “solve” it by assuming that the calculation is approximate. But I have absolutely no truck with allowing any of my algorithms to be globally polluted by an arbitrary epsilon value, such as the SPAresabs deployed in the ACIS kernel, because, tempting though it is to deploy, I know how bottomless this can of rotting worms is once opened. The reliance on an epsilon value in computational geometry is a product of group-think and laziness: the group thinks it has to be this way when it does not. Use of epsilon approximations is as harmful to programming as the goto statement. It would be nice to have sufficient general enlightenment within the computational geometry community to be able to hold this debate. Unfortunately this is not yet the case and, while I am still completely right, I remain in a minority of one.

And it takes exactly this type of dogged attitude to get to the bottom of the issue and not fall into the trap of believing that floats are inaccurate.

Rene spotted the problem immediately, though it took me some delving into the disassembly code to be convinced. Multiplication is far faster than division in the CPU (probably because the partial products can be summed in parallel, rather than grinding through the sequential subtractions of division), so the new compiler has optimized it, like so:

int n = 499;
double n_reciprocal = 1.0/n;
vector<double> b;
for (int i = 0; i <= n; i++)
    b.push_back(i*n_reciprocal);

This was substantiated by the following line of Python:

>>> print 1-(1/499.0)*499
1.1102230246251565e-16

The fix is easy. Do the optimization myself:

double nrecip = 1.0 / n;
b.push_back(0);
for (int i = 1; i < n; i++)
    b.push_back(i*nrecip);
b.push_back(1);

Problem is, how many other places in the code do I have this?

Oh dear.