Friday, January 10th, 2014 at 4:22 pm - Adaptive

## A minor A* improvement

The stay-down linking code relies on an implementation of the A* search algorithm to quickly find a connecting path between the start point and end point by spreading a network of paths in an optimal way.

There are a couple of nice animations on Wikipedia showing what they look like.

Suppose we have a network of paths, like so:

The path from Start to B ranks ahead of the path from Start to A in the priority queue because:

length(Start, B) + distance(B, End) < length(Start, A) + distance(A, End)

(The length function is the length of the green path, while the distance function is the length of the yellow path.)

According to the A* algorithm, the next step should extend outwards from the point B before we go from A on the basis that there could potentially be a direct path that takes us all the way from B to the End point that is shorter than the best one we are going to get by extending A.

But this is impossible, because the theoretical best direct path cuts through many paths (including the Start to A one) before it reaches End, and any one of those paths could be substituted, up to the point of intersection, into that Start-B-End path to make it shorter, which is illogical, captain.

Therefore, if there is going to be a shorter path from Start that passes through B, then it will have to avoid, at the very least, the girth of the search tree, defined as a line connecting two extreme points, like so:

And when you use this to increase the value of distance(B, End) in the formula above to the length of the two segments required to get round either end of the girth line (whichever way round is shorter), you don't get quite so much unnecessary sprawl near the Start point. You can add more girth lines if you like.
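As a sketch of the idea (all names are hypothetical, and the girth is taken as a single line segment): the heuristic stays as the straight-line distance unless that straight line crosses the girth, in which case it is raised to the shorter of the two detours round the girth's endpoints. The heuristic never overestimates as long as the girth line really cannot be crossed by a shorter path, so the A* guarantee is preserved:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def segments_cross(p1, p2, q1, q2):
    # True if the open segments p1-p2 and q1-q2 properly intersect.
    def side(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1 = side(q1, q2, p1)
    d2 = side(q1, q2, p2)
    d3 = side(p1, p2, q1)
    d4 = side(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def girth_heuristic(b, end, girth):
    # Straight-line distance unless the line b-end crosses the girth
    # line, in which case the best conceivable path must detour round
    # one of the girth's endpoints (whichever detour is shorter).
    g1, g2 = girth
    h = dist(b, end)
    if segments_cross(b, end, g1, g2):
        h = min(dist(b, g1) + dist(g1, end),
                dist(b, g2) + dist(g2, end))
    return h
```

Feeding this value in as distance(B, End) simply makes the priority of the Start-to-B path honest about the detour it is going to have to make.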

It’s a small improvement, but practically for free, and may make a big difference for a lot of realistic cases where there is one major obstruction.

Tuesday, December 10th, 2013 at 6:40 pm - Adaptive

## Homing in on an A* structure

Of course, I could be going down another blind channel. But for the moment it looks like it is getting closer as I progressively fiddle with the search algorithm and allow it to pick the exit points from each cell and not stick to the mid-point of each edge.

Wednesday, November 20th, 2013 at 8:24 pm - Adaptive

## A* stay down shortest linking motion

How software actually gets developed. I came up with this idea over in Denmark when I heard the usual complaints about linking motions going over the top in a retract motion when they could stay down and traverse all hither and thither around the obstacles to get from one position to the next.

There’s always something. Last year it was the order of the cutting and the lack of multi-core processing. No one is complaining about those anymore. Either we fixed it, or they got bored.

Anyhow, I decided I’d show them why you don’t want that kind of a path by planning to do a stay-down path no matter how twisted and complicated it is, and then you can see it’s a lot slower than a straightforward no-nonsense retract motion.

This idea got onto the forum, and it sounds like people want it.

Very well.

I guess that delays my other attempts to make an improved minimal retract linking where it jumps over from one place to another on a hump, like you get with finish machining linking, only it was going to avoid uncut stock and be able to deviate sideways slightly to miss any upstands. There’s a difference between being the fastest toolpath and looking good.

Getting rid of all the retract linking moves is going to look good — we hope. Otherwise it might turn out to be a mad scribble. Who knows until we try it?

So, this requires an implementation of the well-known A* search algorithm on the weave structure. It's been a long time since I've been in this area of the code. I made about three false starts before getting anything out, but now I know what I am doing. Here's the first picture I've got.

Theoretically, because the search can spread out into every single cell in the weave structure, it will always find a connection no matter how complicated.

Now the trick is to improve it by ripping out sections of the path and replacing them with straighter and smoother motions. Oh, and also test for collisions with uncut stock as well in order to avoid it.
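A minimal sketch of the rip-out-and-replace idea (names hypothetical): walk along the raw A* path, and at each node try to jump straight to the furthest later node whose connecting segment passes a caller-supplied collision test, which is where the uncut-stock check would also live:

```python
def shortcut_path(path, is_clear):
    # path: list of (x, y) nodes from the A* search.
    # is_clear(p, q): caller-supplied test that the straight segment
    # p-q avoids all obstructions (and, ideally, uncut stock).
    i = 0
    out = [path[0]]
    while i < len(path) - 1:
        # Find the furthest node reachable from path[i] in a straight line.
        j = len(path) - 1
        while j > i + 1 and not is_clear(path[i], path[j]):
            j -= 1
        out.append(path[j])
        i = j
    return out
```

The result is never longer than the input path, and smoothing (arcs instead of corners) could be layered on afterwards.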

Might be slower, but since it now calculates in parallel on separate cores of the computer it will only be a problem if it becomes the bottleneck.

This is parallel program optimization. It's no longer necessary to make your functions as fast as they can possibly be. They only have to not be the slowest of all the jobs that run in parallel.

It’s like a team race where it only counts when every single member of the team has crossed the finish line. So if you have a couple of really fast sprinters on your team it won’t make any difference.

We could make things even slower by running this linking function for every possible order of cutting. Rather than, at the end of each pass, picking one of the five next paths to go to according to which one is closer, you build a complete linking motion to each and pick the one that actually has the shortest link, a bit like a quantum wave function collapse.
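That exhaustive version might be sketched like this, with `link_length` (hypothetical) standing in for the expensive full A* stay-down linking calculation run against each candidate:

```python
def pick_next_path(current_end, candidates, link_length):
    # candidates: list of candidate next paths, each a dict with a
    # "start" point.  link_length(a, b): the full stay-down link
    # length from a to b -- the expensive calculation the post
    # proposes running for every candidate, not just the nearest one.
    links = [(link_length(current_end, c["start"]), c) for c in candidates]
    return min(links, key=lambda lc: lc[0])[1]
```

The point is that the nearest candidate by straight-line distance is not always the one with the shortest actual link once obstructions force a detour.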

Sunday, October 13th, 2013 at 1:39 pm - Adaptive

## Concurrent clashing of piping resources

So, we're having lots of fun with the subprocess module kicking off independently executed processes and transmitting data to and from them through their stdin and stdout pipes.

The problem we had was that the data describing the clearing work at each level was encoded in a big disorganized C++ object with lots of values, pointers, arrays, and junk like that, and I was too lazy/wise to build a whole serialization protocol for it. I decided it was much easier to make a pure Python unpacking (and repacking) of the C++ object and then simply encode and decode it as a stream of bytes with the pickle module.

```python
import subprocess
import cPickle

pyzleveljob = MakePurePythonCopyOfDataZlevelCobject(zleveljob)  # PyWorkZLevel_pickle  yellow
sout = cPickle.dumps(pyzleveljob, cPickle.HIGHEST_PROTOCOL)     # just_pickle  red
p = subprocess.Popen(["python", "-u", "zlevelprocedure.py"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=False)
p.stdin.write(sout)
p.stdin.flush()
p.stdin.close()       # the subprocess starts executing here at popen  blue
```

That’s obviously the simplified code. The -u option is to ensure that the pipes are treated as pure binary streams at both ends.
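For completeness, here is a sketch of the round trip in modern Python 3 terms (pickle and sys.stdin.buffer, rather than the Python 2 cPickle above); the job dictionary and the child's calculation are stand-ins for the real zlevelprocedure.py:

```python
import pickle
import subprocess
import sys

# Child side (stand-in for zlevelprocedure.py): read the whole of
# stdin as binary, unpickle the job, compute, pickle the result back.
CHILD = r'''
import sys, pickle
job = pickle.loads(sys.stdin.buffer.read())
result = {"level": job["level"], "area": job["width"] * job["height"]}
sys.stdout.buffer.write(pickle.dumps(result, pickle.HIGHEST_PROTOCOL))
'''

def run_child(job):
    # Parent side: pickle the job down the pipe, unpickle the reply.
    p = subprocess.Popen([sys.executable, "-u", "-c", CHILD],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate(pickle.dumps(job, pickle.HIGHEST_PROTOCOL))
    return pickle.loads(out)
```

Closing stdin (here done by communicate) is what tells the child's read() that the whole pickled blob has arrived.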

Here’s what the timing graph looks like, with time going from left to right, with seven concurrent process threads.

Friday, September 20th, 2013 at 11:30 am

## A plan to make Python 2.4 multicore

I love Python, but its lack of multi-core support is killing us in the Adaptive Clearing strategy.

We've divided the algorithm into 8 separate threads that all work in a pipeline. Some of them can be operated in parallel (for example, the swirly Z-constant clearing strategy within an independent step-down level).

The heavy calculation is done in C++, but the high level toolpath planning, reordering, maintenance and selection from categories of start points is undertaken in Python.

While the C++ functions release the GIL (the Global Interpreter Lock) on entry, so that the Python interpreter can carry on in parallel (and perhaps call another concurrent C++ function), this only gets us so far. Our limit is full use of two cores, which on a 4 or 8 core machine is a little embarrassing.
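The effect is easy to demonstrate with a stand-in for a GIL-releasing C++ call: time.sleep drops the GIL while it waits, just as a well-behaved extension function does during its heavy calculation, so four such calls on four threads take roughly one call's worth of wall time, where pure Python bytecode would not overlap at all:

```python
import threading
import time

def cpp_style_work(seconds):
    # Stand-in for a C++ extension call that releases the GIL:
    # time.sleep holds no GIL while waiting, so other threads run.
    time.sleep(seconds)

def run_parallel(n_threads, seconds):
    threads = [threading.Thread(target=cpp_style_work, args=(seconds,))
               for _ in range(n_threads)]
    t0 = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - t0
```

The catch described in the post is that the Python-level planning code between those calls still serializes on the GIL, which is what caps the speed-up at about two cores.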

So we have two options, neither of which is good. One is to port our thousands of lines of Python code to C++ (which is not going to work, as it would be too painful to debug this code yet again in another, much harder language). The second is to make Python multi-core.

Wednesday, August 28th, 2013 at 12:43 pm - Adaptive

## Delcam’s 18 months of patenting secrecy are up

We all know how rotten software patents are. Unfortunately, software companies are too often run by idiots who play the game by contributing to the worsening of the situation and doing absolutely nothing to make it any better.

Anyways, 19 months ago in March 2012 we got a press release:

Delcam will preview its new Vortex strategy for high-speed area clearance on stand 4011 at the MACH exhibition to be held in Birmingham from 16th to 20th April…

Vortex, for which Delcam has a patent pending, has been developed by the company specifically to gain the maximum benefit from solid carbide tooling, in particular those designs that can give deeper cuts by using the full flute length as the cutting surface. It can be used for two- and three-axis roughing, three-plus-two-axis area clearance and for rest machining based on stock models or reference toolpaths…

Unlike other high-speed roughing techniques that aim to maintain a constant theoretical metal-removal rate, the Vortex strategy produces toolpaths with a controlled engagement angle for the complete operation. This maintains the optimum cutting conditions for the entire toolpath that would normally be possible only for the straight-line moves. As a result, the cutting time will be shorter, while cutting will be undertaken at a more consistent volume-removal rate and feed rate, so protecting the machine.

Sounded a lot like the Adaptive Clearing strategy which I invented in 2004, and was most accurately replicated in 2009 by Mastercam.

Friday, May 24th, 2013 at 4:56 pm - Adaptive

## The Queue of Queues Fallacy

Believe it or not, I've been working on the Adaptive Clearing strategy recently, attempting to upgrade it to have multi-core capabilities. As you know, the free lunch was declared over in 2005, when CPU clock speeds stopped increasing exponentially. Now, instead, the number of cores is increasing. I have 8 cores on my current laptop, and when my state-of-the-art algorithm runs on it using only 12.5% of the computing capacity, it's embarrassing.

To break the algorithm down into components that can be processed concurrently, I have separated it into 7 working threads, each of which pipes objects to the next stage through a queue. I was told this is a pretty standard architecture, even though it doesn't look like it can be done with Python's concurrent.futures module (where I looked for inspiration), because that seemed to need the complete list of tasks before it could distribute them (at least in an earlier incarnation), rather than a pipeline of data. I was also confused by the use of the word "futures" for processes not yet complete, which was easy to mix up with the __future__ module, which backports features from a later version of Python into an earlier one. I don't like the terminology anyway, but that's just my opinion.

Anyway, a Queue is a structure with the functions queue.put(object) [to the front] and queue.get() [returns an object from the back], where each end can be handled by a different operator, like this:

Let's say op1 is sending in horizontal slices of the model (the objects obj) and op2 is calculating the clearing toolpath at each horizontal layer. We can run these two operations in different threads, and it will all work tickety-boo. The operating characteristic of threads getting from and putting into queues is that when a thread gets from a queue that is empty, it automatically sleeps (consuming no processor cycles) until something is put into the queue at the other end, at which point it automatically wakes up and retrieves the object. The opposite happens on the put if you have specified an upper bound for the capacity of the queue and you hit it.
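A minimal two-thread sketch of that put/get arrangement (names hypothetical; a None sentinel tells the consumer that the producer has finished):

```python
import queue
import threading

SENTINEL = None

def op1(model_slices, q):
    # Producer: push horizontal slices of the model into the queue.
    for s in model_slices:
        q.put(s)
    q.put(SENTINEL)          # no more work is coming

def op2(q, results):
    # Consumer: q.get() sleeps automatically while the queue is
    # empty and wakes when op1 puts something in.
    while True:
        s = q.get()
        if s is SENTINEL:
            break
        results.append(("toolpath", s))

def run_pipeline(model_slices):
    q = queue.Queue(maxsize=4)   # bounded: op1 blocks when it is full
    results = []
    t1 = threading.Thread(target=op1, args=(model_slices, q))
    t2 = threading.Thread(target=op2, args=(q, results))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

Chaining seven of these stages together, each thread's output queue feeding the next thread's input, gives the pipeline architecture the post describes.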

Thursday, January 10th, 2013 at 6:30 pm - Adaptive

## The toolpath collision point with an engagement sweep

Have just checked in 400 lines of the function CheckPathSegmentCollisionSweepArc() to go with the 150 lines I had done in CheckPathSegmentCollisionSweepLinesegment(). The line counts are a bit unfair because there are several sets of duplicated code for the clockwise and anticlockwise, in and out configurations.

This is to do with the new feature of toolpath re-ordering in the Adaptive Clearing algorithm, the culmination of the ideas I worked out with Martin on the train back from Copenhagen in October.

One of the fundamental routines requires considering a length of cutting toolpath P and comparing it with a length of cutting toolpath Q and its adjacent linking toolpaths from later on in the total toolpath, and deciding whether it is safe to do Q before P instead of after it.

Basically, it’s safe to reorder so long as the circular profile of the tool as it sweeps along the Q path does not hit any of the material removed by P.

Our first approximation of the function simply returned True when the bounding boxes of Q and P were separated by more than the sum of the two tool radii, and was good enough to demonstrate that the rest of the re-ordering feature was going to work. (I’ll outline that deceptively simple algorithm at a later date.)
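That first approximation can be sketched in a few lines (names hypothetical; boxes are (xmin, ymin, xmax, ymax) tuples, and with a single tool the sum of the two radii is just twice the radius):

```python
import math

def maybe_safe_to_reorder(bbox_q, bbox_p, tool_radius):
    # Reordering Q before P is certainly safe if the bounding boxes
    # of the two paths are separated by more than the sum of the two
    # tool radii, so the swept discs cannot possibly touch.
    gap_x = max(bbox_p[0] - bbox_q[2], bbox_q[0] - bbox_p[2], 0.0)
    gap_y = max(bbox_p[1] - bbox_q[3], bbox_q[1] - bbox_p[3], 0.0)
    return math.hypot(gap_x, gap_y) > 2 * tool_radius
```

A False answer here only means "can't tell cheaply"; that is where the tighter engagement-sweep test described below the break takes over.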

But the results get tighter the better we approximate the material removed by the path P.

Our first attempt used the measured engagement value set at every node of the path P as a byproduct of the Adaptive Clearing algorithm.

The plots of it were beautiful, with the sequence of engagement arcs looking exactly like the scores on the metal, but I failed to take a screenshot of them. The problem was determining how frequently to sample along the toolpath. I don't like fudging matters by plucking a small number out of the air that would probably work, but which would leave an unnecessary upper bound on performance. And I couldn't guarantee that the engagement width along the toolpath between its nodes would never spike up to just below the maximum engagement angle, and then back down to a small value before the next node.

So I decided to assume the value is always the maximum engagement angle along the toolpath, which is already sampled at the widest spacing possible specifically so as not to exceed this engagement angle. That makes it easy. It is an over-estimate, but a controlled one. Most of the time the engagement angle is close to the optimum. And when you want to favour speed of calculation over accuracy, you leave a wider margin between the maximum and the optimum, so it is reasonable for the approximation of the material removal area to degrade correspondingly (though always erring on the side of caution). When you tighten things up, everything should tighten. When they loosen, all parts should loosen.

The geometry is explained in the picture above. We are looking for the point along path Q where the tool profile collides with the material removed by path P. When Q is a cutting path, we just need to know Yes or No. When it’s No we cannot reorder Q before P. When Q is a linking path, we need to trim it back so that we can use the safe part of it to guide the toolpath away from the stock and calculate a new linking move that avoids all previously uncut stock.

Obviously we break Q down into line segments and arcs, and P down into components called fsengagementsegsweeps, each composed of the red arc (with its two end points) and the two parallel orange lines whose end points are embedded within the arcs and so do not need to be tested. There are 5 different ways for the circle profile to be in contact with an fsengagementsegsweep.

The function CheckPathSegmentCollisionSweepLinesegment() dealt with the linear motion of one of those purple disks along Q, finding the soonest point where it makes contact in one of the five ways with the fsengagementsegsweeps generated by path P.

But then what to do with the arcs on the path Q (usually quite big ones when they involve linking motions)?

The first version would be to approximate the arcs by short line segments and call CheckPathSegmentCollisionSweepLinesegment() for each one of them. But that's horrible. What tolerance do you use? There's no right answer, so whatever you choose will forever keep you short of optimality.

Given that it is a soluble problem (circles on circular trajectories making contact with other circles), I've done it properly and given the analytic answer. But it's been two days of deeply tedious work during which I could be distracted by anything.
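I don't have the real code, but the core of such an analytic answer reduces to circle-circle intersection: the moving disc's centre rides on the trajectory circle of arc Q, and contact with a static disc occurs where that circle meets the circle about the static centre whose radius is the sum of the two disc radii. A sketch (restricting the hits to the actual arc span and picking the soonest one is omitted):

```python
import math

def arc_contact_points(c, R, m, clearance):
    # Centres of the moving disc at the moment of contact: the
    # intersections of the trajectory circle (centre c, radius R)
    # with the circle of radius `clearance` about the static disc
    # centre m, where clearance = r_moving + r_static.
    dx, dy = m[0] - c[0], m[1] - c[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > R + clearance or d < abs(R - clearance):
        return []                     # no contact on this trajectory
    a = (R * R - clearance * clearance + d * d) / (2 * d)
    h = math.sqrt(max(R * R - a * a, 0.0))
    px, py = c[0] + a * dx / d, c[1] + a * dy / d
    if h == 0:
        return [(px, py)]             # tangential (grazing) contact
    ox, oy = -dy / d * h, dx / d * h
    return [(px + ox, py + oy), (px - ox, py - oy)]
```

Each returned point sits exactly R from the arc centre and exactly one clearance from the static disc, which is what makes the answer exact rather than tolerance-dependent.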

Here are some of my trial experiments. The yellow line is the cutting path P with its red and orange material removal areas. The black lines are repeated instances of the path Q — specifically an arc segment — that I am testing the function against. I have tried them both in the clockwise and anti-clockwise directions and plotted in green the point and disk of contact with the material.

It’s Thursday evening, and now it’s done at last. I’ve got all of Friday to play with other things, annoy other people in the company with emails, and track information down in EU documents.

That reminds me: I haven’t got an answer back from technical support after they forced me to change my password after my first 90 days with the company. I asked if they had any idea what proportion of employees have to write their passwords down because of this policy — even though they are instructed not to. In other words, are they wilfully ignorant of the anti-security consequences of their supposedly pro-security policies?

Monday, December 17th, 2012 at 4:54 pm

## Another day another “revolutionary” new roughing strategy

Does this look familiar?

This one from EdgeCam slipped under the radar.

It seems it was released last month, or maybe earlier in the year.

It does not appear to have implemented retract steps yet — a feature we had from the start in our original 2004 Adaptive Clearing development — but the pitch of the initial clearing spiral is variable, which shows it’s on the right track.

At some point we’ll have enough of these “unique” “revolutionary” cutting strategies for an independent agency to really help us out by properly benchmarking them against one another and publishing the results. That way everyone would know where their weaknesses lay and what to focus development on.

As it is now, with all the different software companies building their own implementations of this fluke cutting technique, and falsely marketing them as though nothing like it exists anywhere else in the world, we are experiencing an extraordinary amount of wasted energy.

The waste takes the form of machinists waiting for unnecessarily inefficient and buggy implementations to complete their calculations, because the developers don't get the crucial feedback they need to make cheap and substantial improvements to the software, and of developers unwittingly working on areas of the code that have no benefit to the end user.

That's quite apart from the unbelievable waste of certain companies sinking their finite resources into pointless patents rather than improving their code.

The Adaptive Clearing is on my mind because we have been spending what feels like months working on toolpath reordering and the multicore version of the algorithm.

The reordering algorithm was worked out on the long train back from Copenhagen, and it’s quite simple. I’ll write it up at some point. And the multicore is something that Anthony wants. He’s promised to make one of his nifty videos where he demonstrates the algorithm using a sit-on lawnmower if we ever get it finished. That’s what’s keeping me motivated when I am losing the will to push on after one too many of these queues of queuing threads hangs and everything is completely broken for a couple of days. What a great idea: take an algorithm that’s already too complicated and make it four times more complex. I’m sure it will work out. Maybe.

Tuesday, November 20th, 2012 at 5:22 pm - Adaptive

## Further Surfware patent disclosure

I had reason to consult some early and embarrassing 2005/2006 entries in this blog, specifically Patent Schmatent, Patent part deux, and Nothing happened.

These concerned software patent 7451013, applied for by Surfware, which had just been published while I was at Euromold that year. After running around all excited for a few hours about how this theoretical threat had manifested into something real, I got told by one of the red-shirted folks on the Esprit stand (to whom we were attempting to hawk our Adaptive Clearing algorithm) that there was a way to submit our evidence of prior art to the US Patent Office before the patent was granted.

We looked into this, but it seemed to involve a submission on just the right grade of paper, plus $150 in the appropriate format for the patent office to accept (no doubt some kind of hard-to-obtain currency bank bond that would have cost us $500 in the way of these things). For details, see CFR 1.99 Third-party submission in published application.

However, there was another way. Under CFR 1.555 Information material to patentability in ex parte reexamination and inter partes reexamination proceedings:

Each individual associated with the patent owner in a reexamination proceeding has a duty of candor and good faith in dealing with the Office, which includes a duty to disclose to the Office all information known to that individual to be material to patentability in a reexamination proceeding.

So we mailed a CD with some videos to their lawyers Akin Gump Strauss Hauer & Feld LLP.

…And didn’t hear anything back.

Not being a professional patent wrangler (my job is to write code that does something, not push papers around containing useless gibberish for the purpose of preventing other people from writing code), I didn’t know of portal.uspto.gov where you could see all the documents relating to the patent process.

And in among all these documents I found the following scan:

Hint: Maybe if you read exhibit (B), the Letter accompanying the CD, then the author and dates of publication of the files in exhibit (A) would not have been unknown. Dipsticks!

Just when you thought your opinion of patent attorneys could sink no lower.

Update: Filed under the It’s Worse Than You Thought department, 35 U.S.C. 122(c) states:

(c) Protest and Pre-Issuance Opposition. The Director shall establish appropriate procedures to ensure that no protest or other form of pre-issuance opposition to the grant of a patent on an application may be initiated after publication of the application without the express written consent of the applicant.

The regulatory explanation is set forth like so:

The American Inventors Protection Act of 1999 (AIPA) contained
a number of changes to title 35 of the United States Code…

The USPTO interprets the provisions of 35 U.S.C. 122(c) as
requiring, rather than simply empowering, the USPTO to ensure that no
protest or other form of pre-issuance opposition to an application may
be initiated after its publication without the express written consent
of the applicant…

Following enactment of 35 U.S.C. 122(c), the USPTO revised 37 CFR 1.291 and 1.292 to prohibit third parties from submitting any protest or initiating any public use proceedings… To balance the mandate of 35 U.S.C. 122(c) that the USPTO establish “appropriate procedures” to ensure that third parties may not initiate protest or other pre-issuance opposition to an application after its publication… the USPTO promulgated 37 CFR 1.99 to permit third parties to submit patents and publications (i.e., prior art documents that are public information and which the USPTO would discover on its own with an ideal prior art search) during a limited period after publication of an application. However, 37 CFR 1.99 prohibits third parties from submitting any explanation of the patents or publications, or submitting any other information…

The USPTO considers any third-party inquiry or submission that is not provided for in 37 CFR 1.99 in a published application in which the applicant has not provided an express written consent to protest or pre-issuance opposition to be inappropriate…

I always did wonder why the avenue for challenging a patent application was so awesomely crappy.

Two things:

(1) It looks like we did exactly as much as we could, having found a loophole in this deliberate fortress of unaccountability.

(2) Anyone hoping that the legislative branch is going to clamp down on the patent excesses promulgated by an over-active judicial branch should be disappointed.