
Five new scallop ideas before breakfast

Thursday, August 2nd, 2012 at 8:50 pm Written by:

Here is an example random part which the great folks over in Copenhagen have been throwing at my scallop routine.

The problems thrown up by these computer-generated random parts are easy to ignore — there is always something better to work on — because no one is actually getting let down by them. But when a bug showed up on a genuine customer part that mattered, I spent a week working through it, which was long enough for the floodgates to open. The random part generator was kicking out failures at the rate of six a day and chucking them into the bug tracker by the bucket load until I told them to STOP!

The structure of the scallop routine is such that it is able to accommodate new ideas as I come to them. The plane is divided up into rectangular cells whose perimeters enter and exit the implicit area within the offset of the previous path at point boundaries an even number of times.

These point boundaries on the cell perimeter can be connected to one another in a variety of different ways (if there are more than 2 on the cell) by the ResolveAmbiguities() function. And the cells can be subdivided in either the X or Y directions at any point along the perimeters.
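
To make that structure concrete, here is a minimal sketch in Python of what such a cell might look like. The names (PointBoundary, Cell, resolve_ambiguities, split) are my own illustration for this post, not the actual classes in the code, and the pairing step is only the naive version of what ResolveAmbiguities() has to do properly.

```python
# Minimal sketch of the cell structure (illustrative names, not the real code).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PointBoundary:
    s: float         # position along the cell perimeter, as a parameter
    entering: bool   # True where the perimeter enters the offset area, False where it exits

@dataclass
class Cell:
    x0: float
    y0: float
    x1: float
    y1: float
    boundaries: List[PointBoundary] = field(default_factory=list)

    def resolve_ambiguities(self) -> List[Tuple[PointBoundary, PointBoundary]]:
        """Pair up the point boundaries into contour connections across the cell.
        Two points force a unique pairing; four or more are ambiguous and need
        probing, or another subdivision of the cell."""
        pts = sorted(self.boundaries, key=lambda b: b.s)
        assert len(pts) % 2 == 0, "perimeter crosses the offset boundary an even number of times"
        # naive pairing of perimeter neighbours; the real routine probes to decide
        return [(pts[i], pts[i + 1]) for i in range(0, len(pts), 2)]

    def split(self, axis: str, at: float) -> Tuple["Cell", "Cell"]:
        """Subdivide in X or Y at any point along the perimeter (the boundary
        points get redistributed to the two halves in the real code)."""
        if axis == "X":
            return (Cell(self.x0, self.y0, at, self.y1),
                    Cell(at, self.y0, self.x1, self.y1))
        return (Cell(self.x0, self.y0, self.x1, at),
                Cell(self.x0, at, self.x1, self.y1))
```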

Here is what the closeup of the problem area looks like from the side.

As you can see, it involves a very narrow channel — the model happened to have a corner between three randomly generated walls that was only just big enough to slide the cutter into. The scallop offset from the previous path was big enough to horizontally bisect this tiny deep pocket.

Here’s what seems to be happening drawn as a cross section.

Take this as a cross section through the model: the red dots are really curves that intersect the plane of your VDU, path0 being the original path and path1 the offset path. The offset distance from path0 intersects the tool contact surface in three places, creating an isolated contour between the lower two path1 points.

Let’s take a look at the area directly from above.


An incredible amount of subdivision is going on, so much so that it hits the limit.

Fix 1
To prevent infinite loops, there is a max subdivision allowed per primary cell of 400. What if I change this limit to 1400?

It gets the right answer, and shows that the algorithm is sound if left to run for long enough. You can also see, from the wobbliness, the hazard of oversampling.

This is not a satisfactory fix; the cells are just being subdivided over and over until there are few enough point boundaries in each one that they connect up correctly.
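
For reference, that 400 cap is nothing cleverer than a budget counter wrapped around the subdivision loop. A minimal sketch, with hypothetical is_consistent and subdivide helpers standing in for the real routines:

```python
MAX_SUBDIVISIONS_PER_PRIMARY_CELL = 400   # Fix 1 simply raises this to 1400

def resolve_primary_cell(primary_cell, is_consistent, subdivide):
    """Keep splitting the sub-cells of one primary cell until each of them
    joins up consistently, or the subdivision budget runs out.
    is_consistent(cell) and subdivide(cell) stand in for the real routines."""
    budget = MAX_SUBDIVISIONS_PER_PRIMARY_CELL
    pending = [primary_cell]
    finished = []
    while pending:
        cell = pending.pop()
        if is_consistent(cell) or budget <= 0:
            finished.append(cell)          # out of budget: accept what we have
            continue
        budget -= 1
        pending.extend(subdivide(cell))    # two halves, split in X or Y
    return finished
```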

Fix 2
How about subdividing cells when they have a very high aspect ratio (when they are long and skinny)?

There are already seven different conditions for splitting up a cell, including an aspect ratio one which seeks to separate off parts of the cell whose boundaries do not connect. Other conditions include attempting to aim a cell bisector at the middle of the contour as it passes through the cell so as to minimize its maximal sample rate and improve its tolerance.

Unfortunately, because this cell is joined up incorrectly, that function does not perform as intended. So I added an eighth condition to the mix: if the cell has an aspect ratio of 16 to 1, it is always cut in the middle of the long side rather than suffering yet another carrot slice along its length.
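
In sketch form (again with my own names, reusing the toy Cell from earlier), the eighth condition is just:

```python
MAX_ASPECT_RATIO = 16.0   # beyond this, always cut across the middle of the long side

def forced_aspect_split(cell):
    """The eighth condition in sketch form: a long skinny cell is cut in half
    across its long side instead of taking another thin slice along its length.
    Returns (axis, position) for the cut, or None to fall through to the
    other seven conditions."""
    width = cell.x1 - cell.x0
    height = cell.y1 - cell.y0
    if width > MAX_ASPECT_RATIO * height:
        return ("X", (cell.x0 + cell.x1) / 2.0)
    if height > MAX_ASPECT_RATIO * width:
        return ("Y", (cell.y0 + cell.y1) / 2.0)
    return None
```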

The result was pretty good.

Fix 3

I’m still not satisfied. Why were the contours being joined up incorrectly in the first place? It’s important to get it right (as in the shallow area algorithm), because getting it wrong pushes the problem out to another component of the algorithm called GSubdivCell that is in charge of recognizing that it is wrong and invoking the cell subdivision.

You want to keep stuff out of GSubdivCell as much as possible, because once it gets in there it will never be taken out.

For example, there was one bug that caused the scallop path to gouge the triangulated model in rare instances. It would be very easy to fix this by implementing gouge detection in the form of a cutter collision check at the midpoint of every segment of the path, and calling for a subdivision of the cell whenever it shows the path is seriously out of tolerance. The result of this development would be some pretty reliable paths. And they would take 50% longer to calculate, because of all the extra cutter location sampling along their lengths that is unnecessary in 99.99% of the cases where the calculation already works.
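
Spelled out as a sketch, the fix I am resisting would be something like the following, where drop_cutter_at(x, y) stands in for the real cutter location call and the segment endpoints are assumed to carry x, y, z coordinates:

```python
def gouge_check_pass(path_segments, drop_cutter_at, tolerance):
    """The post-corrective fix I'd rather not bake in: drop the cutter at the
    midpoint of every segment and flag any segment that dips below the true
    surface, so its owning cell can be subdivided again."""
    gouging = []
    for seg in path_segments:
        mx = (seg.p0.x + seg.p1.x) / 2.0
        my = (seg.p0.y + seg.p1.y) / 2.0
        mz = (seg.p0.z + seg.p1.z) / 2.0
        surface_z = drop_cutter_at(mx, my)   # true cutter height over the model here
        if mz < surface_z - tolerance:       # segment midpoint gouges the model
            gouging.append(seg)
    return gouging
```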

It’s obvious that once such post-corrective fixes are integrated into the system, all kinds of bugs become invisible, and new ones will be added in later years that are never noticed, except by the fact that the algorithm just gets slower and slower over time, like a machine with lots of wire and twine wrapped around its gears and drive shafts that still has enough power to keep working against the friction, albeit inefficiently.

To recap, here is the standard operating view I have for GAltCell, which is the function that decides how to join up the boundaries through the use of a probing path that may or may not intersect the region in space defined as those points which are within the offset radius distance of the original path.

This is the 4-way case. I have implemented a whole recursive combination of probing passes between prospective boundaries that is able to separate the cases by the way they should join across the cells, up to and including a 20-way cell.
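
The recursion itself is not complicated to sketch. Assuming a probe_connected(a, b) probing pass that says whether two boundary points join up through the offset region (a stand-in for what the real code does), a non-crossing pairing can be found like this:

```python
def pair_up_boundaries(points, probe_connected):
    """Sketch of the recursive pairing: points lie in order around the cell
    perimeter, connections must not cross each other, and probe_connected(a, b)
    decides whether a and b join up through the offset region.  The same
    recursion copes with the 4-way case up to a 20-way cell."""
    if not points:
        return []
    first = points[0]
    # first can only pair with an odd-indexed partner, so that both leftover
    # ranges contain an even number of points
    for j in range(1, len(points), 2):
        if not probe_connected(first, points[j]):
            continue
        inside = pair_up_boundaries(points[1:j], probe_connected)
        outside = pair_up_boundaries(points[j + 1:], probe_connected)
        if inside is not None and outside is not None:
            return [(first, points[j])] + inside + outside
    return None   # no consistent pairing: this cell needs subdividing instead
```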

That’s the picture I had in my head. Here’s what is really happening within this narrow pocket:

In this case, if I do the usual probing path between the midpoints of the a0-b0 segment and the a1-b1 segment, it will tend to skim along the edge of the offset volume (the set of points in space less than the offset radius from the original path) and register a connection between the two sides (type GCellAlt=1), when in fact there is a deep trough between them.

What we need to do is drop the cutter down on those midpoints defining the ends of the probe path so that we sample along the probe z drop line, which will correctly encounter no connecting space between the two red sides.
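
A minimal sketch of this extra check, assuming stand-in drop_cutter_at(x, y) and in_offset_region(x, y, z) functions rather than the real interfaces, and taking the two probe endpoints as (x, y, z) midpoints:

```python
def probe_still_connected(midpoint_a, midpoint_b, drop_cutter_at, in_offset_region,
                          n_samples=8):
    """Drop the cutter at the two midpoints that define the probe path and
    sample down each z drop line.  If a drop line leaves the offset region on
    the way down (a deep trough), the two sides are not connected, even though
    the plan-view probe skims along the edge of the offset volume."""
    for (x, y, z_start) in (midpoint_a, midpoint_b):
        z_dropped = drop_cutter_at(x, y)          # where the cutter really lands
        for i in range(1, n_samples + 1):
            z = z_start + (z_dropped - z_start) * i / n_samples
            if not in_offset_region(x, y, z):
                return False   # drop line leaves the offset region: no connection
    return True
```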

It fixes this problem, and may fix a few others similar to this.

What did I say about doing extra sample points along every line segment of the contour and making it 50% slower? And I’ve just gone and proposed exactly that. The algorithm has been pretty reliable up till now without this extra twist, so it’s best applied sparingly. I have tried to limit it to cases where there is an actual significant dip along the perimeter of the cell.

On this example the drop cutter function is called 7000 times. I don’t know if it is significant. To find that out I’d need to introduce a flag or category value into each call of the drop cutter function and count the number of calls for each type. This form of instrumenting rapidly gets out of hand.
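
The kind of instrumentation I mean is trivial to sketch; the trouble is that every call site has to be touched and categorized by hand:

```python
from collections import Counter

drop_cutter_calls = Counter()

def counted_drop_cutter(drop_cutter_at, x, y, category):
    """Tag each drop-cutter call with a category ('cell probe', 'z drop line',
    'boundary point', ...) so we could see where the 7000 calls come from.
    drop_cutter_at is a stand-in for the real cutter-location function."""
    drop_cutter_calls[category] += 1
    return drop_cutter_at(x, y)

# at the end of the calculation:
# print(drop_cutter_calls.most_common())
```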

Fix 4: When a subdivision of a cell fails to link up with the sides for any reason (usually due to some inconsistency in the previous subdivisions), the routine attempts to subdivide the cell in a different place.

Fix 5: If an offset completely fails and gets snarled up, it is tried again with a different tolerance/sample rate and later a different offset, rather than bailing out and giving up on the whole job.
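
A minimal sketch of the retry loop, with a made-up compute_offset(path, stepover, tolerance_factor) standing in for the real offset calculation:

```python
class OffsetFailed(Exception):
    """Raised (in this sketch) when the offset snarls up completely."""

def offset_with_retries(compute_offset, previous_path, stepover):
    """Sketch of Fix 5: try the offset at the normal sample rate, retry at
    progressively tighter tolerance only when it fails, and as a last resort
    nudge the offset distance rather than abandoning the whole job."""
    for tolerance_factor in (1.0, 0.25, 0.0625):   # 1.0 = normal sampling
        try:
            return compute_offset(previous_path, stepover, tolerance_factor)
        except OffsetFailed:
            continue
    # final fallback: a slightly different offset distance
    return compute_offset(previous_path, stepover * 0.95, 0.0625)
```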

Unfortunately these fixes didn’t crack all the problems thrown up by the random part generator. The random parts are so fiendish, and so unlike real user parts, that the sample rates which work reliably in practice do not capture all of their complexities. I have to take a break from this work.

Fix 5 might be a very important one in the long run for dealing with the random part bugs. It’s very difficult to predict a sample rate for a part that’s going to capture all the little holes reliably and not get caught out. If I locked in the settings that work for the random parts, I might end up with an algorithm that runs too slowly on “real” parts, which have different and smoother characteristics. Trying at normal tolerance and retrying at tight tolerance could be a crucial technique for solving all the difficult cases without ruining performance on all the good and simple cases.

2 Comments

  • 1. Anders Wallin replies at 4th August 2012, 4:36 am :

    Hi Julian,

    To me it looks like your problems arise because you allow your solution to jump in 3D from one existing toolpath to the next one we are computing.
    What if you would constrain/force the search for the new toolpath to the 2D cutter-location mesh? You would start at the existing toolpath and then ‘march’ that solution, strictly along the edges of the cutter-location mesh only, in the desired direction by the desired offset distance.

    There are a number of papers on calculating these geodesics on triangular meshes. One is “Fast Exact and Approximate Geodesics on Meshes” (google will find the PDF).

    Have you considered this approach? Or does it have some obvious drawback I haven’t thought of?
    I have some open-source drop-cutter routines so I want to do a const-scallop routine at some point. Calculating a triangular cutter-location mesh and then running a geodesic algorithm on that is now my plan.

    regards,
    AW

  • 2. Julian replies at 9th August 2012, 12:28 pm :

    There’s a continuum between your idea and mine.

    That gridded weave is actually the 2D cutter location mesh you are thinking of.

    Now I could compute it to tolerance across the entire surface, and then perform the geodesic march across its structure to form the constant offsets.

    However, this would run out of memory for any large part and be rather slow. It would be necessary to compromise on the tolerance to get the job done.

    Instead of this, what I do is compute the 2D cutter location mesh to tolerance only in the areas it is required.

    For example, here where the stepover is relatively large you would see zones of tight tolerance near the stepover curves and looser tolerance between them where we don’t need to model the surface precisely.

    In actual fact, what I am doing is dynamically subdividing the 2D cutter location mesh exactly where it is required, and only when the offset curve needs greater sampling to bring it into tolerance. Usually this is where there is a high level of detail. (It’s difficult to predict where these areas are.) As a consequence, we don’t have any subdivision or extra precision where it is not required.

    This is critical to performance — ensuring that the minimum amount of calculation is done to get the result.

    But here is another approach.

    If you perform this scallop against a single mesh that does not subdivide or vary, then it will be pretty simple, robust and reliable. However, the toolpath curves you get out of it will not be to tolerance, especially if there are any vertical walls.

    But, I can imagine that you can get pretty far if you used these out-of-tolerance toolpath curves as a guideline for an in-tolerance curve (eg by sampling along their length and projecting into the surface, on a slant not necessarily along Z) as you do for flowline machining.

    This may well give good enough results to live with, and probably what I would do if I had to code it all again.

    It would be interesting to compare the two approaches properly (my subsampling method, versus creating approximate guide paths that are projected onto the surface into tolerance). I can argue that my method is better in the long run, as it is more scalable and not detail limited. But, boy, is it so much harder to code that it may not be worth it when you are starting out.
