Another day another “revolutionary” new roughing strategy
Monday, December 17th, 2012 at 4:54 pm
Does this look familiar?
This one from EdgeCam slipped under the radar.
It seems it was released last month, or maybe earlier in the year.
It does not appear to have implemented retract steps yet — a feature we had from the start in our original 2004 Adaptive Clearing development — but the pitch of the initial clearing spiral is variable, which shows it’s on the right track.
At some point we’ll have enough of these “unique” “revolutionary” cutting strategies for an independent agency to really help us out by properly benchmarking them against one another and publishing the results. That way everyone would know where their weaknesses lie and what to focus development on.
As it is now, with all the different software companies building their own implementations of this fluke cutting technique, and falsely marketing them as though nothing like it exists anywhere else in the world, we are experiencing an extraordinary amount of wasted energy.
The waste takes two forms: machinists waiting for unnecessarily inefficient and buggy implementations to complete their calculations, because the developers don’t get the crucial feedback they need to make cheap and substantial improvements to the software; and developers unwittingly working on areas of the code that have no benefit to the end user.
That’s quite apart from the unbelievable waste by certain companies sinking their finite resources into pointless patents rather than improving their code.
Adaptive Clearing is on my mind because we have been spending what feels like months working on toolpath reordering and the multicore version of the algorithm.
The reordering algorithm was worked out on the long train back from Copenhagen, and it’s quite simple; I’ll write it up at some point. The multicore version is something that Anthony wants. He’s promised to make one of his nifty videos demonstrating the algorithm with a sit-on lawnmower if we ever get it finished. That’s what keeps me motivated when I’m losing the will to push on after yet another of these queues of queuing threads hangs and everything is completely broken for a couple of days. What a great idea: take an algorithm that’s already too complicated and make it four times more complex. I’m sure it will work out. Maybe.
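For anyone who hasn’t had the pleasure, here is the flavour of the hang I mean. This is a deliberately tiny hypothetical sketch in Python (our real code is nothing like this small), showing how a pipeline of bounded queues between worker threads stalls: if a downstream stage is stuck, upstream put() calls back up, and the whole chain of queuing threads grinds to a halt.

```python
import queue
import threading

# Hypothetical illustration only, not our actual code: two bounded queues
# forming a pipeline, with a worker thread moving items between them.
stage1 = queue.Queue(maxsize=1)   # tiny bound makes the stall easy to show
stage2 = queue.Queue(maxsize=1)

def middle_worker():
    item = stage1.get()           # take a job from stage 1...
    stage2.put(item)              # ...and push the result to stage 2;
    # if nobody ever drains stage2, this put() blocks forever.

stage2.put("blocker")             # pre-fill stage2 so the worker will stall

t = threading.Thread(target=middle_worker, daemon=True)
t.start()

stage1.put("job-a")               # worker takes this, then blocks on stage2
stage1.put("job-b", timeout=0.5)  # fits once the worker drains job-a

try:
    # The worker is stuck, so stage1 never drains again; this producer
    # times out instead of hanging the whole program indefinitely.
    stage1.put("job-c", timeout=0.5)
    stalled = False
except queue.Full:
    stalled = True

print("pipeline stalled:", stalled)
```

With more stages, and queues feeding back on one another, the same back-pressure turns into a genuine deadlock cycle with no timeout to save you, which is roughly what costs us a couple of days each time it happens.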