Constant scallop code rot

Friday, November 21st, 2008 at 11:29 am

The algorithm for doing the constant scallop stepover in HSMWorks is far more robust and speedier than the one I wrote in Depocam/Machining Strategist. Unfortunately this means it gets pushed much harder in the application.

For a start, bugs are a function of processor cycles, not time. Back in the old days when I was writing these algorithms on a 50 MHz Pentium, the users would have to run it for an hour, or all weekend, to produce the metres of toolpath they needed to run their machine tools. We called it High Speed Machining, because they were running their machine tools several times faster than they used to, which would require several times the length of toolpath. They would get this by halving or quartering the gap between each cutting pass of the tool on the surface, so the resulting surface had smaller ridges (scallops? cusps? the terminology has never been standardized! what’s this tell you about the degree of communication between practitioners in the industry?).
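
For anyone who hasn't worked it out, the relationship between the stepover and the ridge height is quadratic: halving the stepover roughly quarters the scallop. Here's a minimal sketch of that standard bit of geometry for a ball-nose cutter on a flat surface (the 5mm radius and the stepovers below are just example numbers, not anything from a real job):

```python
import math

def scallop_height(radius, stepover):
    """Cusp height left between adjacent passes of a ball-nose
    cutter of the given radius running over a flat surface."""
    return radius - math.sqrt(radius**2 - (stepover / 2.0)**2)

# Example: a 10mm diameter ball (radius 5mm).  Halving the stepover
# roughly quarters the ridge height, which is why a finer finish costs
# several times the length of toolpath.
for s in (1.0, 0.5, 0.25):
    print(f"stepover {s:4.2f} mm -> scallop {scallop_height(5.0, s):.4f} mm")
```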

In the old days people would hire someone on low pay (I've never seen it myself) to polish these ridges down for hours to get a surface smooth enough to use for a plastic mold.

So anyway, what worked as a pretty reliable algorithm back in 1993 began to crumble quite badly in 1998, when the PCs being shipped with the application were 10 or 20 times faster, and the users adjusted to it by tightening the stepover, running it on bigger parts, and just generally executing 10 or 20 times the processor cycles per work day. We in the programming team understood the problem of the software having too many users sending in too many bugs for us to deal with; this speed inflation was equivalent to the user base expanding at an exponential rate, even while the customer base and the size of the programming team remained constant.

There's always been this mysterious phenomenon we called bit rot, where code you thought was fine suddenly starts breaking and having lots of problems, and you don't know why because you haven't touched it since last year, when you thought you'd finally made it perfect.

If my conjecture is right, this will cease to be an issue as processor speeds more or less stabilize, or exceed a threshold where the code is being sufficiently tested in a way that it wasn't when computers were much, much slower.

It also means that there is going to be a difference between software that was designed early on in the processor speed inflationary process (in the 1990s), versus stuff that was put together in the later days.

This is the opposite of my original thinking: that older software would be better than what we write now when the computers are much more powerful, because back then the code had to be efficient and well-made to get the job done, and nowadays with so many more processor cycles to waste we can afford to become slap-dash and inefficient.

This intuition was consistently confirmed by Microsoft, who made their products and operating systems more and more bloated and inefficient with each passing year, so that a one-page plain text document you saved on a 386 machine in 1992 now required a minimum of 200Mb of on-board RAM just to read it. It was a joke. The hardware manufacturers were only just able to keep up with the rate of the software deterioration. What kind of sloppiness and memory leaks does it take to waste 199.99Mb of RAM to load a 10Kb document? How could it be getting this bad? We thought there must have been a conspiracy between Intel and Microsoft during those years, driving the continual need for upgrades just to stay the same.

So the story is that later-designed software is going to be better than early software because it will be more in tune with the modern hardware and the way it is used. The old stuff is going to feel like you have changed your bike chain without changing the gears — it’ll skip all over the place.

It’s a pleasant diversion. I have decided I’m not going to fix the bug illustrated in the picture for now — along with the dozens of other ones in the same algorithm — because I am too annoyed after a week of hard work.

What's happened here is it's detected the shallow areas of the model using a 10mm diameter ball, and then offset all these areas outwards along the surface by 1.5mm so that the contours have wrapped down over the edges. The point is to smooth them in case your shallow area contours are fragmented at the threshold point and you'd like them to merge. So here is a case I hadn't thought about, where three areas overlap on the same vertical axis and two of the contours are joining up incorrectly with a double vertical line. (There are four instances of it in the picture.)
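
To give a rough idea of the intent (not the actual contour-based code, which works on the surface itself), here's a simplified raster sketch: mark the grid cells where the slope is below the shallow-angle threshold, then grow that region outwards by the offset distance so that fragments sitting right at the threshold join up. The function name and parameters are my own, the detection is done straight off the surface slope rather than with the ball, and a planar dilation stands in for the offset along the surface.

```python
import numpy as np
from scipy import ndimage

def shallow_region_mask(z, cell, slope_limit_deg, offset_mm):
    """Mark grid cells whose surface slope is below the threshold
    (the 'shallow areas'), then grow the marked region outwards by
    the offset distance so that fragments sitting right at the
    threshold merge together.

    z               -- 2D array of surface heights, sampled every `cell` mm
    slope_limit_deg -- threshold angle separating shallow from steep
    offset_mm       -- how far to grow the shallow region outwards
    """
    gy, gx = np.gradient(z, cell)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    shallow = slope_deg < slope_limit_deg

    # A planar dilation by a disc of radius offset_mm stands in for
    # the offset-along-the-surface step described above.
    r = max(1, int(round(offset_mm / cell)))
    yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
    disc = xx * xx + yy * yy <= r * r
    return ndimage.binary_dilation(shallow, structure=disc)
```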

It’s notable that I hadn’t thought of this case when I designed the local connecting algorithm (sort of like the patented marching cubes algorithm, only much harder, so no one’s interested), because now I’m thinking “How many other cases did I forget to think about?” Are there going to be thousands more, or is it just this one?

There are thousands more, because you could have a piece of a spiral staircase without the central pillar.
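
For reference, here's a minimal sketch of the 2D analogue of that kind of local connecting scheme, a marching-squares-style lookup table. It isn't the algorithm described above, and the corner and edge numbering is my own, but it shows where the ambiguous cases hide: the two diagonal configurations can be joined up in two different ways, which is exactly the sort of case that gets forgotten.

```python
def cell_edges(inside):
    """Marching-squares-style lookup for one grid cell.
    `inside` is a tuple of four booleans for the corners
    (bottom-left, bottom-right, top-right, top-left).
    Returns the pairs of cell edges (0=bottom, 1=right, 2=top, 3=left)
    that each piece of contour should connect inside this cell."""
    code = sum(1 << i for i, b in enumerate(inside) if b)
    table = {
        0b0000: [], 0b1111: [],
        0b0001: [(3, 0)], 0b0010: [(0, 1)],
        0b0100: [(1, 2)], 0b1000: [(2, 3)],
        0b0011: [(3, 1)], 0b0110: [(0, 2)],
        0b1100: [(3, 1)], 0b1001: [(0, 2)],
        0b0111: [(3, 2)], 0b1011: [(2, 1)],
        0b1101: [(0, 1)], 0b1110: [(3, 0)],
        # The two diagonal cases are ambiguous: picking the other
        # pairing here is also geometrically valid, and you need an
        # extra rule (e.g. sampling the cell centre) to choose.
        0b0101: [(3, 0), (1, 2)],
        0b1010: [(0, 1), (2, 3)],
    }
    return table[code]
```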

1 Comment

  • 1. anders replies at 24th November 2008, 4:44 pm:

    Hi Julian,
    Would be interested in hearing about how constant-scallop works in a future post.
    Can you build a const-scallop algorithm from the basic drop-cutter and z-slice algorithms (which I think I understand) ?

    thanks,
    Anders
