
Wednesday, March 26th, 2014 at 12:29 pm - - Machining

Trying to get this surface triangulation distraction closed down today. The proposition was to see whether I could repurpose my toolpath strategy code and apply it to the problem of triangulating trimmed NURBS surfaces. After all, trimmed NURBS surfaces exist on a 2D parametric domain (like the 3-axis drive plane), and the trimming curve is very much like a machining boundary. You have to know that something like the constant scallop routine is actually modelling a surface that is repeatedly being eroded away, because the toolpaths you see are in fact the boundary outline of that surface.

One of the common triangulation results I kept getting looks like a nice Afghan carpet pattern, like this:

[Image: triangulationcarpet]

I have an explanation. This happens when maxHdiffFibre is too small.

The surface is divided up into cells, and the sides of each cell have at least length/maxHdiffFibre extra points inserted along them. The convex polygonal area of each cell is then triangulated by repeatedly lopping off the sharpest corners, until you are left with an 8-sided shape that still has points along its top and bottom sides (because they were the longest to begin with), so these are joined up. The outer boundary is the trimming curve, so one side of an over-sampled cell gets trimmed away, leaving three sides with too many points that have to be joined up as best they can. Then, at the corners of the trimming curve, the cells are subdivided into smaller and smaller complete rectangular cells until they are small enough not to be over-sampled.
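Here's a sketch of the corner-lopping step on a convex polygon (my own reconstruction for illustration, not the actual Freesteel code; the real routine stops at an 8-sided shape whose long sides are then joined up point-to-point, whereas this sketch lops all the way down to a triangle):

import math

def corner_angle(poly, i):
    # interior angle at vertex i of a counterclockwise convex polygon
    ax, ay = poly[i - 1]
    bx, by = poly[i]
    cx, cy = poly[(i + 1) % len(poly)]
    a1 = math.atan2(ay - by, ax - bx)
    a2 = math.atan2(cy - by, cx - bx)
    return (a1 - a2) % (2 * math.pi)

def lop_sharpest_corners(poly):
    # repeatedly cut off the sharpest remaining corner as a triangle
    poly = list(poly)
    tris = []
    while len(poly) > 3:
        i = min(range(len(poly)), key=lambda k: corner_angle(poly, k))
        tris.append((poly[i - 1], poly[i], poly[(i + 1) % len(poly)]))
        del poly[i]
    tris.append(tuple(poly))
    return tris

tris = lop_sharpest_corners([(0,0), (4,0), (5,2), (3,4), (1,4), (-1,2)])
print(len(tris), "triangles")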

I don’t think anyone else in the world misuses a 3-axis toolpath algorithm to make a triangulated surface, so this type of buggy behavior is unique.

Meanwhile, a correspondent of the blog took up the challenge of making an excellent triangulation of a cone. He did this by starting with the triangular lattice with rotational symmetry of order 6, and then cutting a pie slice out of it in order to join it up to form a cone. Because he didn’t have the power of Twistcodewiki to draw 3D geometry direct from Python in your browser, he drew the picture of all the edges by exporting PostScript commands, and didn’t have the capability of drawing the triangles themselves in order to see the folds:

[Image: triconesurf]

Twistcodewiki is great. I use it all the time, even though its light/shading features are crude and terrible, and the interface is almost non-existent. But the one thing it does is so important and unique that I can ignore the fact that it doesn’t do it very well. The same probably goes for a lot of software out there.

Tuesday, March 18th, 2014 at 1:54 pm - - Machining

I hadn’t worked on triangulating surfaces since the first version of Machining Strategist in 1994, but always thought I could do it better by distributing the triangles in a nice enough manner. It is true that the people who write surface triangulators in engineering CAD packages don’t in general try very hard, but I don’t think many folks realize that there is no right answer.

Suppose we have a smooth surface S, defined by a function from a subset of the plane to real space

  f: D ------> R3, where D is a subset of R2 representing the set of
               points within a boundary B defined by:
  b: [0,1] ------> R2, where b(0) = b(1)

… roughly speaking.

We want to approximate this surface to the tolerance e with a set of connected triangles defined by triplets of points in R3-space:
T = { (p0, p1, p2) }
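For instance (a concrete example of my own, not from the original post), take the upper unit hemisphere:

  f(u,v) = (u, v, sqrt(1 - u^2 - v^2)), with D = { (u,v) : u^2 + v^2 <= 1 }
  b(t) = (cos(2*pi*t), sin(2*pi*t))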

It would be nice if the triangulation T had the following properties:

(more…)

Thursday, March 13th, 2014 at 7:41 pm - - Machining

The days and weeks are passing me by. I’ve got to stop doing this programming and get on with something I’m interested in. I don’t think I’ve been outside of Liverpool since January.

Due to a surface triangulation crisis (the machining operations now run four times faster than the time the CAD kernel takes to produce the triangles they need before work can start), I spent the last 10 days hacking up a trimmed NURBS surface triangulator and triangle flipper with Martin, based on some of the machining algorithms. We pretend the trimming curves are machining boundaries and encode a full XYZ point at each XY 3-axis sample point instead of just the tool tip height.
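Roughly, the substitution looks like this sketch (my own illustration, not the actual code; Sample, sample_cell and the surface evaluator f are made-up names). The (u, v) parametric domain stands in for the XY drive plane, and each sample carries the full surface point rather than a single height:

from dataclasses import dataclass

@dataclass
class Sample:
    u: float          # parametric coordinates, playing the role of the
    v: float          # XY drive-plane sample point of the 3-axis algorithm
    xyz: tuple        # the full R3 surface point f(u, v), not just a height

def sample_cell(f, u0, u1, v0, v1, n):
    # f is the parametric surface evaluator, assumed given
    samples = []
    for i in range(n + 1):
        for j in range(n + 1):
            u = u0 + (u1 - u0) * i / n
            v = v0 + (v1 - v0) * j / n
            samples.append(Sample(u, v, f(u, v)))
    return samples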

[Image: relayfacets]

The triangle flipping is a second component experiment I’d been meaning to try for a long time. Can you redistribute the triangles in a better way than simply along the UV parametric lines? Not sure I’ve got an answer yet, but it’s exhausting trying to find out. I’ll write it up later when I have the energy.
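For what it’s worth, the flip test itself can be as simple as this sketch (my guess at the shape of it, not the actual code; closest_surface_point is an assumed helper from the surface evaluator): two triangles share the diagonal p0-p2 of the quad (p0, p1, p2, p3), and we flip to the p1-p3 diagonal if its midpoint sits closer to the true surface.

import numpy as np

def should_flip(p0, p1, p2, p3, closest_surface_point):
    # midpoints of the current (p0-p2) and candidate (p1-p3) diagonals
    m02 = (p0 + p2) / 2
    m13 = (p1 + p3) / 2
    # deviation of each midpoint from the true surface
    err02 = np.linalg.norm(m02 - closest_surface_point(m02))
    err13 = np.linalg.norm(m13 - closest_surface_point(m13))
    return err13 < err02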

Meanwhile, in another part of the city, things are being built by real engineers. I’m looking forward to installing it in the office and making it work. Which is more than can be said about many of the programming projects I’ve put my hand to recently.

I’m also wasting time posting ideas onto the internal Autodesk idea database. Most are sinking like lead bricks into mud. It’s pearls before swine. The latest idea which everyone hates is to hold Simultaneous Satellite Tech Conferences in each region rather than wasting a shedload of carbon by flying all the techies to meet in a hotel in Canada for two days.

“Oh, but I find the in-person meetings are so important for building relationships,” they say. “This never happens with on-line meetings.”

No one seems to think that maybe it’s because on-line meetings generally last less than an hour, but when you travel half-way round the world to a meeting, the duration of the meeting is in practice like 20 to 40 hours long (with sleeping breaks).

Perhaps this extended time period is what’s important, eh?

I mean, look, if you teleported into your favourite tech conference for, let’s say, one hour and fifteen minutes before suddenly vanishing in a puff of smoke, you wouldn’t be able to build a lot of relationship experiences with the people there, would you? However, if you were trapped in an elevator for 10 hours, and all you had was your phone which you could use to call the person stuck in the other shaft, you’d become friends for life with that individual.

It’s all in the mind. Use your imagination. A functioning virtual telepresence system should not involve booking the suite for an hour for a stupid meeting. Instead it should require being time-locked in a tank for 12 or 24 hours where the only communication line is routed via the office to which your business travel destination has been designated. You can phone home, but the phone call will literally need to be directed into a physical acoustic coupler device in that office. You are there, with all the inconvenience of being stuck there, meaning that the place of least effort to communicate is there, and it will be rude and boring if the people in that office don’t pay attention and entertain you while you are stuck there. Maybe after the first six hours you will have dispensed with enough pleasantries and finally be talking about the things you need to be talking about, and building those bonds of friendship that take time to form. Glue and cement and gelatin all take time to set. Do you not think the same could be true for our frivolous minds?

Wednesday, February 26th, 2014 at 1:41 pm - - Adaptive

Last year we got a chance to see SolidCAM’s iMachining and laughed at the way its progress bar jumped all over the place, from -14% to 200% and back again.

Then we looked at our own Adaptive Clearing strategy which we had just spent the past year making multicore — and noticed it did the same stupid thing!

How embarrassing.

You never notice yourself picking your own nose, but when someone else does it in your face, you realize it’s ugly.

Progress bars don’t get as much love and attention from the programmers as they ought to, given how much time the users have to stare at them. The users think it’s so normal for the progress bar to be absolutely wrong that it’s considered a sign of extreme naivety to think about complaining. They probably believe that we’re going to laugh at them if they raise the issue.

It turns out that the progress bar on multiple CPU processing is not hard to get right, but you do it differently to how you do it on a single-threaded process.

Let’s first think about what a progress bar is for. There are two different options. It could report the time remaining for the process to complete, or it could report the percentage of the process job that has been completed.

The time remaining might be the most useful information for organizing your life (is there enough time to grab lunch while this completes?), but there’s no way you’re going to get that information — even though it’s what everyone wants to know.

You will hear: “How many days till it’s done?” more often than “Are we at the 75% complete stage yet?” for a project — and that’s even before it’s over-run by a factor of two.

In fact, the only practical implementation for the time remaining is to run the whole job first, time it, and then set a count-down timer from that value when you run it again. It’ll make everything run twice as slow, but what’s the big deal?
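In the spirit of that joke, a sketch of the run-it-twice countdown (illustration only; the names are mine):

import threading
import time

def run_with_countdown(job):
    # first run: purely to measure how long the job takes
    t0 = time.time()
    job()
    duration = time.time() - t0

    # second run: a watcher thread counts down from the measured duration
    done = threading.Event()
    t1 = time.time()
    def watcher():
        while not done.is_set():
            remaining = max(0.0, duration - (time.time() - t1))
            print("Time remaining: %.1fs" % remaining)
            time.sleep(0.5)
    w = threading.Thread(target=watcher)
    w.start()
    job()       # the run the user actually watches
    done.set()
    w.join()

run_with_countdown(lambda: time.sleep(2))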

(more…)

Tuesday, February 11th, 2014 at 5:43 pm - - Adaptive

Taking a break from all my other mindful notions to do some proper work on this stay-down linking job.

I have an A-star function that works on the underlying weave structure. The white zones refer to the weave which defines the interior of the contour, and the red zones are the places where it’s safe to go without colliding with the uncut stock.

[Image: astart0area]
Most A-star algorithms involve one start point in each tile which spreads out to all sides of the tile. But in my implementation the paths from the sides of the tile are pulled from the receiving end, so it’s possible to have two start points in the same tile with two paths going through, as shown above.

(more…)

Friday, January 24th, 2014 at 6:07 pm - - Adaptive

Sometimes it’s a relief to be certain that I’m doing something which is exactly what I am paid to do. All the rest of the mouthing off comes for free. Though maybe it’s important to always question what you are paid to do, so you don’t wind up wasting everyone’s time doing something that’s actually pointless.

After having got the A* routing through the weave structure to work, based on my unpatented subdividing model of a 2D area that forms the geometric basis of the adaptive clearing non-gouging motion algorithm, I noticed that it did not give enough sample points for a linking pass to be within tolerance. The route, which must avoid uncut stock, requires a greater sample rate than the basic weave cells provide. The cells were sampled at a resolution chosen so that the boundary contour would be within tolerance, but that is insufficient to handle the uncut stock structures that exist in the interior of the area partway through the clearing cycle.

There are two options to deal with this insufficiency of sampling. Either add in an additional sample rate system, or improve the basic underlying weave sampling structure.

(more…)

Tuesday, January 14th, 2014 at 6:47 pm - - Adaptive

A quick day’s work to make a multi-threading percentage progress handling object in Python that handles residuals.

Let’s start with the test code (which I wrote after the working code — take that TDD!):

import time

class ACProgress:
    def __init__(self):
        self.sumprogress = 0.0

    def ReportProgress(self, lprogress):
        # accumulate each job's reported fraction of the whole
        self.sumprogress += lprogress
        print("The progress is at:", self.sumprogress * 100, "percent")

def addprogress(a, n):
    # report a total of a in n equal steps
    for i in range(n):
        acprogress.ReportProgress(a / n)
        time.sleep(0.05)

acprogress = ACProgress()
addprogress(1.0, 25)

This prints numbers up to 100% in steps of 4%.

Now we want to do this in threads.
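Presumably (the post is cut off below, so this is my guess at the direction it takes) the first step is to share one progress object between the threads and serialize the updates with a lock:

import threading
import time

class ACProgress:
    def __init__(self):
        self.sumprogress = 0.0
        self.lock = threading.Lock()

    def ReportProgress(self, lprogress):
        # += on a shared float is not atomic, so take the lock around it
        with self.lock:
            self.sumprogress += lprogress
            print("The progress is at:", self.sumprogress * 100, "percent")

def addprogress(acprogress, a, n):
    for i in range(n):
        acprogress.ReportProgress(a / n)
        time.sleep(0.05)

acprogress = ACProgress()
# two workers, each accounting for half of the total progress
threads = [threading.Thread(target=addprogress, args=(acprogress, 0.5, 25))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()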

(more…)

Friday, January 10th, 2014 at 4:22 pm - - Adaptive

The stay-down linking code relies on an implementation of the A* search algorithm to quickly find a connecting path between the start point and end point by spreading a network of paths in an optimal way.

There’s a couple of nice animations from Wikipedia showing what they look like.

[Animations: Dijkstra’s algorithm and A*]

Suppose we have a network of paths, like so:

The path from Start to B ranks ahead of the path from Start to A in the priority queue because:

length(Start, B) + distance(B, End) < length(Start, A) + distance(A, End)

(The length function is the length of the green path, while the distance function is the length of the yellow path.)

According to the A* algorithm, the next step should extend outwards from the point B before we go from A on the basis that there could potentially be a direct path that takes us all the way from B to the End point that is shorter than the best one we are going to get by extending A.

But this is impossible, because the theoretical best direct path cuts through many paths (including the Start to A one) before it reaches End, and any one of these paths could be substituted up to the point of intersection into that Start-B-End path to make it shorter, which is illogical, captain.

Therefore, if there is going to be a shorter path from Start that passes through B, then it will have to avoid, at the very least, the girth of the search tree, defined as a line connecting two extreme points, like so:

And when you use this to increase the value of distance(B, End) in the formula above to the length of the two segments required to get round the girth line on either side (whichever is shorter), you don’t get quite so much unnecessary sprawl near the Start point. You can add more girth lines if you like.
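As a sketch of that rule (my reconstruction; the names are mine, with g1 and g2 standing for the endpoints of the girth line): if the straight line from a point to End crosses the girth line, the heuristic becomes the shorter of the two detours around its endpoints.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def segments_cross(a, b, c, d):
    # strict segment intersection test via signed areas
    def side(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (side(a, b, c) * side(a, b, d) < 0 and
            side(c, d, a) * side(c, d, b) < 0)

def heuristic(p, end, g1, g2):
    # g1, g2: endpoints of the girth line of the search tree
    if segments_cross(p, end, g1, g2):
        return min(dist(p, g1) + dist(g1, end),
                   dist(p, g2) + dist(g2, end))
    return dist(p, end)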

It’s a small improvement, but practically for free, and may make a big difference for a lot of realistic cases where there is one major obstruction.

Tuesday, December 10th, 2013 at 6:40 pm - - Adaptive

Of course, I could be going down another blind channel. But for the moment it looks like it is getting closer as I progressively fiddle with the search algorithm and allow it to pick the exit points from each cell and not stick to the mid-point of each edge.

[Image: astarnetwork]
(more…)

Wednesday, November 20th, 2013 at 8:24 pm - - Adaptive

How software actually gets developed. I came up with this idea over in Denmark when I heard the usual complaints about linking motions going over the top in a retract motion when they could stay down and traverse all hither and thither around the obstacles to get from one position to the next.

There’s always something. Last year it was the order of the cutting and the lack of multi-core processing. No one is complaining about those anymore. Either we fixed it, or they got bored.

Anyhow, I decided I’d show them why you don’t want that kind of a path by planning to do a stay-down path no matter how twisted and complicated it is, and then you can see it’s a lot slower than a straightforward no-nonsense retract motion.

This idea got onto the forum, and it sounds like people want it.

Very well.

I guess that delays my other attempts to make an improved minimal retract linking where it jumps over from one place to another on a hump, like you get with finish machining linking, only it was going to avoid uncut stock and be able to deviate sideways slightly to miss any upstands. There’s a difference between being the fastest toolpath and looking good.

Getting rid of all the retract linking moves is going to look good — we hope. Otherwise it might turn out to be a mad scribble. Who knows until we try it?

So, this requires an implementation of the well-known A* search algorithm on the weave structure. It’s been a long time since I’ve been into this area of the code. I’ve made about three false starts to get anything out, but now I know what I am doing. This is the first picture I’ve got.

Theoretically, because the search can spread out into every single cell in the weave structure, it will always find a connection no matter how complicated.

Now the trick is to improve it by ripping out sections of the path and replacing them with straighter and smoother motions. Oh, and also test for collisions with uncut stock as well in order to avoid it.

Might be slower, but since it now calculates in parallel on separate cores of the computer it will only be a problem if it becomes the bottleneck.

This is parallel program optimization. It’s no longer necessary to make your functions as fast as they possibly can be. They only have to not be the slowest of all the jobs that run in parallel.

It’s like a team race where it only counts when every single member of the team has crossed the finish line. So if you have a couple of really fast sprinters on your team it won’t make any difference.

We could make things even slower by completing this linking function for every possible order of cutting. Rather than, at the end of each pass, picking one of the five next paths to go to according to which one is closer, you build a complete linking motion to each and pick the one that actually has the shortest link, a bit like a quantum wave function collapse.
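A sketch of that idea (my illustration; build_link and link_length stand in for the real linking machinery, and each candidate pass is assumed to carry its start point):

def pick_next_pass(current_end, candidate_passes, build_link, link_length):
    # build the actual stay-down link to every candidate pass and keep
    # the one whose real link is shortest, rather than trusting the
    # straight-line distance to the candidate's start point
    best = None
    for p in candidate_passes:
        link = build_link(current_end, p.start)
        d = link_length(link)
        if best is None or d < best[0]:
            best = (d, p, link)
    return best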