Freesteel Blog » Adaptive

Monday, February 1st, 2016 at 5:57 pm - - Adaptive, Machining

Well, at least I sorted out the metal supplies by cycling to Scimitar Steels in Bootle, at the furthest public limit of the dock road, and persuading the man there to cut me the most pitifully small piece of mild steel anyone has ever ordered from him. Couldn’t even be bothered to charge me. I think he said the substance was “080A15 EN32B” steel, which according to kvsteel is:

General purpose steel bars for machining, suitable for lightly stressed components including studs, bolts, gears and shafts. Often specified where weldability is a requirement. Can be case-hardened to improve wear resistance. Available in bright rounds, squares and flats, and hot rolled rounds. Can be supplied in sawn blanks, and bespoke size blocks.

Sounds like the correct stuff to me.

[photo: metalsawing]

Then, as I was fetching my bike, I spotted a picture of a milling cutter on the side of a Lloyd&Jones Engineering truck parked outside the shop, and went in. I didn’t see any on display, so I had to check their website to find out about the sensibly priced 2-flute solid carbide TiAlN-coated end mills among their limited selection. It’s 2-flute, so according to the feeds and speeds calculations I don’t need to run it so fast.
[photo: metalsawing2]

Back home on the machine it proved rather good for facing off, since I could run the spindle at 7000rpm instead of near-stalling at its minimum of 2400: the cutting speed should be 350 surface ft/min with carbide cutters, as opposed to 70 (for “steel 4140”, whatever that is).
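For the record, here’s that arithmetic as a quick Python check, using the standard feeds-and-speeds formula (the 6mm cutter diameter is my assumption here):

import math

def spindle_rpm(surface_ft_per_min, cutter_diameter_mm):
    # RPM = (cutting speed in ft/min * 12) / (pi * diameter in inches)
    diameter_inches = cutter_diameter_mm / 25.4
    return surface_ft_per_min * 12 / (math.pi * diameter_inches)

print(spindle_rpm(350, 6.0))   # carbide: about 5660rpm, comfortably under 7000
print(spindle_rpm(70, 6.0))    # about 1130rpm, well below the 2400rpm spindle minimum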

I got everything sorted out, probed and positioned, and then gouged.

What happened here is one of the drivers decided to give up and go into an error state, and the other one carried on going back and forth in its arc as though it was still following the rectangle. Luckily it was only a test cut of 0.2mm deep.

This problem means it’s not reliable enough to do the whole job without risking a proper gouge. We need a feedback line from the servo driver to the controller to halt everything when one of the axes trips out like this.

I reset it (by power cycling) and ran it again far enough for the linking motions to come into play. I’d dialed for a toolpath with the maximum staydown and no retracts, a feature I spent a whole year working on.

It’s rubbish. It takes so long to retrace its steps back round the metal instead of nipping over the top. Now I understand why Mr Denmark kept referring to this as a “perception thing” when he urged me to work on this feature, because he knew (even if I didn’t) that it was a waste of time because it made a slower toolpath. But know-nothing that I was, I had to try it to believe it. And until you try it, you will want it. But you only need to use it once to be satisfied.

This reminded me of Adrian taking apart a cheap toy car remote control unit the other week and discovering that the aerial poking out the top wasn’t attached to anything. The circuit board didn’t need it, but since most customers believed it should have an aerial, the designers put a pretend one in for them to extend and retract to keep them happy because it was a whole lot easier than attempting to educate them about it.

I was only going for 2D Adaptive clearing here, because I intend to control the depth with G92 settings to reposition the Z, and so be able to try different depths of cut. This means the retract plane has to be high enough to work for my lowest cuts, which is not efficient.

I’m pretty sure now that linking motions should totally be calculated by the controller, and not be done in the CAM system. It knows where the material is (and could laser scan it if necessary). That way we wouldn’t have these overly high retract planes, and it could work out its own rounded trajectories, knowing its own kinematics. That should be its job.


Meantime, I found this very detailed video blog of someone systematically testing out four different 3D printer software slicers to see how they performed on the same printer. I watched both episodes last night.

I’ve put this here to tell you that reviewing of CAM software is a thing. In this case it’s just for extrusion 3D printers, by one guy who gets the idea. You can imagine the core developer teams for the different products using this information to direct their work, as well as users being informed about which software is best to use.

In all my experience, this thing — independent benchmarking and reviewing of CAM software with an admirable degree of curiosity for the benefit of users — has never happened for 3-axis machining software. Even though such software costs many thousands of dollars, runs on machines that cost hundreds of thousands of dollars, and has been around in various states of development for decades.

The problem is that customers, who are almost always professional engineering firms, are not curious and don’t go looking for stuff. Instead they’re busy earning money and working all the time. So they don’t do anything until a salesman comes to them personally to sell them software products to consider. This massively inflates the prices (to cover the cost of the salesman) and guarantees that the only information they receive is biased, and not even well-informed at that. By that I mean the salesman probably won’t even know what the best performing software on the market is. It’s difficult to find out, and he doesn’t care, because his commission comes from sales of this product only. This is a business, after all. Nobody is paid to care about getting things right.

If a detailed independent review of the 3-axis CAM software on the market ever took place, it should discover which of the different products are actually using the same machining kernels under the hood. That would be a minimum outcome.

Monday, January 25th, 2016 at 1:42 pm - - Adaptive, Machining

Here’s a quick offering from the “Well it’s better than nothing video editing department”. This is the result of 2 days of cutting from short videos taken with my camera. (I’ve got no talent with video editing.)

I learnt one heck of a lot in the process.

  1. Steel is really difficult to work with
  2. Small 3mm cutters are easy to snap
  3. The spindle is under-powered
  4. Big 6mm cutters can handle being bent when the spindle stalls if you hit stop soon enough
  5. You can drop the feedrate briefly to stop the spindle stalling
  6. Multiple cutters with rest machining are essential
  7. 0.1mm stepovers are better than 0.2mm
  8. I probably need a tapered cutter to create a draft angle
  9. Clamps are a real hassle; I’m going to get a vice
  10. The noise of the machine sounds terrible, but nobody has complained yet because it doesn’t seem to carry into the hallway
  11. My 3D printed ductwork for automatically hoovering out the chips was a failure; I need to prod in the nozzle by hand to remove the chips

I was using the Adaptive Clearing toolpaths in Autodesk Fusion, which I had spent 10 years developing before and after it got sold to AD.

It sucked in several ways that I did not know about, because I’d never used it myself to get something I wanted done. I always said I ought to have been put on the job of using CAM software to cut steel on a machine in a factory for a couple of months at some point in my career, before being allowed to continue writing software that didn’t quite do stuff right. People get into positions like mine and seem to do pretty well, but they should get the opportunity to go back and fill in some gaping holes in their experience.

The problems I found were:

1) Adaptive takes too long to calculate small stepovers when clearing around a tongue of material, where it has to turn right towards the material to stay in contact. This is probably because the sample rate has to go very small in order to maintain engagement when it does its straight-line forward samples. It should detect these situations and take its initial step forward along a curve to the right, so that it begins engaged on the first sample and doesn’t need to resample backwards blindly until it makes contact again.
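Roughly what I mean, as a sketch with hypothetical helper names (the real code is much hairier): the forward sample either keeps engagement, or it triggers a blind backwards bisection, and the bisection is what eats the time.

import math

def advance(pos, heading, dist):
    # move dist along the current heading
    return (pos[0] + dist * math.cos(heading), pos[1] + dist * math.sin(heading))

def step_forward(pos, heading, steplen, engagement_ok, niterations=20):
    # the cheap case: a full straight step that stays engaged with the material
    if engagement_ok(advance(pos, heading, steplen)):
        return advance(pos, heading, steplen)
    # lost contact: bisect backwards for the furthest point still engaged.
    # with tiny stepovers this fires constantly, which is the slowdown.
    lo, hi = 0.0, steplen
    for i in range(niterations):
        mid = (lo + hi) / 2
        if engagement_ok(advance(pos, heading, mid)):
            lo = mid
        else:
            hi = mid
    return advance(pos, heading, lo)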

2) The helix ramp down pitch was not linked to the tiny stepover I was setting and I couldn’t see how to change it. I had to hack the G-code directly.

3) In spite of claims to the contrary, and it being mathematically accurate, I am sure that the load going into the corners is higher than when the flank cutting is on the straight. I can hear the spindle being slowed down. This could be because the chip length is longer for the same chip width. The chip length is the distance around the circumference of the cutter that is tearing off the metal; it can approach a semicircle in a tight corner, or be insignificant when the cutter first engages with the 90-degree outer corner of the stock.
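To put a number on that chip length claim: for a cutter of radius r taking a radial stepover a_e on a straight flank, the engaged arc subtends an angle of arccos((r - a_e)/r), while in a tight internal corner it heads towards the full semicircle. A back-of-envelope check in Python (using the 6mm cutter):

import math

def engagement_angle(radius, stepover):
    # angle subtended by the engaged arc for a straight flank cut
    return math.acos((radius - stepover) / radius)

r = 3.0   # radius of the 6mm cutter
for ae in (0.1, 0.2):
    theta = engagement_angle(r, ae)
    print("stepover %.1fmm: engagement %.1f degrees, chip arc %.2fmm"
          % (ae, math.degrees(theta), r * theta))
# a semicircular corner engagement would be 180 degrees with a 9.42mm arc --
# dozens of times the straight-flank chip length for the same chip width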

Now a real machine tool probably has so much angular momentum in the spindle that no one is going to notice this, but on an underpowered low-spec experimental device such as this one, it becomes apparent. That’s why future innovations are likely to happen here, not on the big machines where you don’t notice the flaws.

I can now pretty much see how companies like IBM missed the first wave of PCs, which were toy devices in comparison to the big mainframes they were playing with. Nobody was ever going to do any real work on those barely-up-to-scratch microcontroller-based computers with deplorable amounts of RAM, audio cassette tapes for backup, a complete joke parody of an operating system from Microsoft, and a lack of customers able to pay big bucks. Most of the professional engineers in the world (software and hardware) had all the access they needed to mainframe computers in their workplaces or university institutions to do fluid dynamics or graphics or simulations. I’m sure when some overly keen teenager came along with a toy machine he’d soldered together, they put him in his place with a back-of-the-envelope calculation of how many centuries it would take that Apple2 to do something real, like predict tomorrow’s weather, which was something they could do with their latest cool CrayXMP super-computers. PCs were obviously an utter waste of time, because it was clear where the cutting edge was if you wanted to actually get stuff done.

Sure, you could say this left a huge gap in the economy for new tech billionaires to emerge and for IBM to eventually become an embarrassment, but think about the wasted capital and the precious engineering time of talented people who should have been deployed to make this microcomputer tech good from the beginning. MS/DOS and MSWord might not have existed in the horrible no-good forms they did had the work not been left only to people who didn’t know what they were doing and had to learn as they went along, thus locking their anti-productive design mistakes into the way this tech worked for the next 30 years.

Meanwhile I’ve no idea what I am doing. Should I spray WD-40 onto the metal while it is cutting?

Sunday, September 21st, 2014 at 3:51 pm - - Adaptive

Lots of stay-down linking work going on. Sometimes I worry it’ll never work. While Becka has been caving, I’ve been working through the weekends in the empty bedrooms of Bull Pot Farm (occasionally interrupted by horrible hangovers).

Here’s the checkin changes of a beast of a file.
[image: shedcode]
It’s like building a dry stone wall: I can’t remember any of what I’ve done; I only know what move is next. It’s an incremental procedure. New problem cases crop up, and you go away, have a cup of tea and come up with a solution, and keep repeating in the hope you don’t run out of ideas before it gets fixed.

The basic steps of the algorithm are as follows:
(more…)

Saturday, August 16th, 2014 at 7:25 am - - Adaptive, Cave, Hang-glide

It rains and rains and just won’t stop. It’s also gotten pretty cold, and I’m having to wear all my clothes that aren’t damp from lying in the tent. The last day of warm sunshine was six days ago, when I stood on takeoff for two hours as paragliders wafted past and went down in the totally dead air of that day.
[photo]
See the smile? I want more flying. Some canyoning would be good too, but it ain’t going to happen on this trip.

When I finally took off I flew in a direct straight line across the valley beyond the highway and golf course, and then had to walk 5 miles back via an ice cream stand to Base Camp for a lift back up the hill, after which I drove down, fetched my glider, and drove back up again for the walk up to Top Camp (the Stone Bridge).
[photo: glideland]
Here’s the outside view of Top Camp:
[photo: topcampsite]
And this is the inside:
[photo]
It’s both a rock and a hard place with nothing in between, but it is still better than tents because it is larger, cavernous, and not like a box of damp fabric that progressively rots things as each day passes.

I dropped into the far end of Tunnockshacht down to the new connection to Arctic Angle. Becka was away with the Austrians on a different expedition and couldn’t warn me that it was going to be an unutterably deep one. That wasn’t the problem. The problem was getting terrifyingly stuck in a U-bend crawl while weaseling around the C-leads waiting for the guy with the drill to rig a rope traverse along a ledge of an undescended shaft to access the phreatic continuation. Then we surveyed about 100m until it crapped out, and hauled ourselves back out by 2am.

I forgot to take any pics, so here’s a photo of a nosy horse’s nose:
[photo: horsenose]
On Tuesday I went to the newly discovered Balconyhohle. (The horizontal entrance is from a ledge within the side of a hole.) I was cold and had to keep eating. We killed a couple of going leads there too, but there’s enough unexplored ways on to keep this one spreading further underground. This has been the big find of the expedition. It’s a lot of work to keep up with the mapping.

Here’s a picture from the walk back to the carpark from Top Camp in the morning:
[photo: braungap]

Since then I’ve been working on the Adaptive Clearing stay-down linking, killing the bugs one at a time, while all my HSMWorks buddies have been at a big planning meeting in Copenhagen this week. It seems like an endless grind. Anyway, I don’t plan to go there again, and the ferry between the UK and Denmark is being terminated this September.

One of the things that crashes the system is when the A-star linking can’t find a way to connect from point A to point B, and spreads out through every single cell in the dense weave until it runs out of memory.

One obvious solution is to generate a weave with a wider cell spacing and solve the routing problem in that, but this is too complicated. I worked out another way, which is to deny the search access to most of the interior cells of the fine weave, the ones nowhere near the boundary or the theoretical direct line route. The A-star algorithm is so powerful that it will find a way round, even though the domain looks much more complicated. This initial result becomes the starting solution for the linking path, on which the PullChainTight() function is called. That is actually a bad name for it. It should be called RepeatedlySpliceStraighterSectionsIn(), but that description wouldn’t be remotely as compelling to the imagination.
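Roughly, in Python (the function and parameter names here are mine, not the real code), the only change from a standard A-star is the extra filter that refuses cells far from both the boundary and the direct line:

import heapq, math

def astar_restricted(start, goal, passable, near_route, neighbours):
    # standard A-star, except cells failing the near_route filter are denied,
    # which stops the search flooding through every cell of the fine weave
    openq = [(0.0, start)]
    gcost = {start: 0.0}
    came = {}
    while openq:
        _f, cell = heapq.heappop(openq)
        if cell == goal:
            path = [cell]
            while cell in came:
                cell = came[cell]
                path.append(cell)
            return path[::-1]
        for ncell, stepcost in neighbours(cell):
            if not passable(ncell) or not near_route(ncell):
                continue
            g = gcost[cell] + stepcost
            if g < gcost.get(ncell, math.inf):
                gcost[ncell] = g
                came[ncell] = cell
                heapq.heappush(openq, (g + math.dist(ncell, goal), ncell))
    return None   # no route: fall back to the old retract linking

Here near_route would be something like "within a couple of cells of the boundary contour, or inside a corridor around the straight start-to-goal segment".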

This will get implemented only if absolutely necessary. The thing about the linking routine is that it does not need to work 100% of the time, because it can always fall back to the old way of retract linking, which is what everybody puts up with right now, so it might not be worth expending too much effort for the last 2% of awkward cases where the reliability is going to be questionable anyway. In software development the trick is to know when to stop.

Time to work on some cave surveying software today. Maybe I’ll get a flight in tomorrow. Hope so.

Thursday, July 31st, 2014 at 10:05 am - - Adaptive

We’re having some classic Austrian weather right now. It’s been rodding it down since 5am. I’ve been eating Egg And Chips enough times for it to lose its special status.
[photo]

Most everyone else is caving at the minute. I haven’t seen Becka in 5 days. I’m waiting for better flying weather.

This is a great time to get some work done on this pesky stay-down linking in the Adaptive algorithm.

Here is an example of the A-star linking path calculated in the setting of the rest of the cutting toolpaths, so you can see how it fits in. The path must traverse the area without colliding with material that hasn’t been cut yet. This image is of the first iteration. There’s a secondary process which pulls the path of light blue dots tight within its structural matrix once it is connecting in the right direction.
[image: linkingends3]

What I mean by the right direction is the way that the path rolls off the preceding path and rolls on to the next cutting path. I can calculate the shortest path reliably now, but it would sometimes result in going 180 degrees back on itself if it’s shorter to go round the circle, so the trick is to artificially insert a circle of exclusion that it has to avoid as it finds its shortest trajectory.

This worked well most of the time, except that the path would sometimes go the wrong way round the circle. It doesn’t know any better; it just finds the shortest path it can. So I inserted a radial blocking line to block off paths that start out in the wrong direction.
[image: blockedlinkround1]
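The exclusion circle and the radial blocking line amount to nothing more than an extra passability test on the link search. Something like this sketch (the representation is mine):

import math

def make_link_filter(centre, radius, block_heading, barrier_halfwidth):
    # ban points inside the roll-off exclusion circle, and points on the
    # radial blocking line extending outwards in the block_heading direction
    cx, cy = centre
    bx, by = math.cos(block_heading), math.sin(block_heading)
    def allowed(p):
        dx, dy = p[0] - cx, p[1] - cy
        if math.hypot(dx, dy) < radius:
            return False                   # inside the exclusion circle
        along = dx * bx + dy * by          # distance out along the radial line
        across = abs(dx * by - dy * bx)    # perpendicular distance off it
        return not (along > 0 and across < barrier_halfwidth)
    return allowed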

And of course sometimes the radial line becomes a liability, and the stupid shortest-path thing has to go all the way around it when it should be coming in to the path nice and smoothly. I just worked this one out this morning.
[image: blockedlinkround2]

It takes ages to discover these errors in the data, looking at unusual tool motions in real examples, and trying to separate them out into the debug development environment so they can be analysed and plotted.

In computational geometry, if you can’t see it, you can’t debug it. In the real world you’d hope that a corporation that depended on vast quantities of computational geometry would have a dedicated team helping programmers visualize their messed up algorithms, so they don’t have to hack up crazy things like my “webgl canvas”.

I’ll spend the rest of the day getting to the root of this one, I guess, and then be one insignificant step closer to finally getting this out of the way.

I don’t deserve to enjoy any flying until I do. Nor do I deserve another egg and chips.

Thursday, June 26th, 2014 at 4:52 pm - - Adaptive

Just some notes on the serious work that’s still going on. This whole thing was supposed to take a couple of months. It’s already run 6 months beyond the time I expected, because I had no idea how hard it would be. But this shouldn’t matter in the long run, as the code will probably be in service for 60 years if it works.

At the moment Adaptive Clearing doesn’t do any fancy links. If it can’t go nearly directly through the partially cleared pocket, it rolls off and does a retract to a roll-on to the next starting point. This doesn’t look great. Also, when we re-order the passes for better efficiency, we shred all the links that are any good and need another way of recreating them.

I’ve been meaning for a year to write up something about how we re-order the passes, because it’s a really simple trick. It’s unfortunate that it exposes this re-linking problem, so you don’t get the benefit.

I’ve disabled all the smoothing features that were getting in the way. The first priority is to make a path from the start to the end that avoids the uncut stock using the A-star algorithm which operates within the model of the area. This is how it currently looks:
[image: engagementfulllink]

We’re going to get lots of questions about how it passes on the “wrong” side of the previously cut path. The diagram below should explain why it is correct.
[image: engagementcutwrong]

The real problem is that my linking path is going round the wrong side of the roll-off arc. It then has to back-track to find its way to its linking destination. Note how it zig-zags through the cells because re-smoothing is disabled.
[image: engagementlinksarc]

The problem is that the cutting cycle skips out sections when there is almost nothing left to cut. But my re-linking motion spots this material and tries to avoid it. Since there’s not enough room to get round the arc on one side, it goes round the wrong way.
[image: engagementlinks]

This small diagram shows how there is a lot less material there than it seems.
[image: engagementcutshort]

But still, it’s non-zero, and I don’t want to include a tolerance here. So I’m going to try and barge through the start of the linking motion and splice it in some way. Not sure how to code it yet. Things keep growing more and more layers of unexpected complexity. That’s how it goes.

Wednesday, February 26th, 2014 at 1:41 pm - - Adaptive

Last year we got a chance to see SolidCAM’s iMachining and laughed at the way its progress bar jumped all over the place, from -14% to 200% and back again.

Then we looked at our own Adaptive Clearing strategy which we had just spent the past year making multicore — and noticed it did the same stupid thing!

How embarrassing.

You never notice yourself picking your own nose, but when someone else does it in your face, you realize it’s ugly.

Progress bars don’t get as much love and attention from programmers as they ought to, given how much time users have to stare at them. Users think it’s so normal for the progress bar to be absolutely wrong that it’s considered a sign of extreme naivety to think of complaining. They probably believe we’d laugh at them if they raised the issue.

It turns out that the progress bar for multicore processing is not hard to get right, but you do it differently from how you do it in a single-threaded process.

Let’s first think about what a progress bar is for. There are two different options. It could report the time remaining for the process to complete, or it could report the percentage of the process job that has been completed.

The time remaining might be the most useful information for organizing your life (is there enough time to grab lunch while this completes?), but there’s no way you’re going to get that information — even though it’s what everyone wants to know.

You will hear: “How many days till it’s done?” more often than “Are we at the 75% complete stage yet?” for a project — and that’s even before it’s over-run by a factor of two.

In fact, the only practical implementation for the time remaining is to run the whole job first, time it, and then set a count-down timer from that value when you run it again. It’ll make everything run twice as slow, but what’s the big deal?

(more…)

Tuesday, February 11th, 2014 at 5:43 pm - - Adaptive

Taking a break from all my other mindful notions to do some proper work on this stay-down linking job.

I have an A-star function that works on the underlying weave structure. The white zones refer to the weave which defines the interior of the contour, and the red zones are the places where it’s safe to go without colliding with the uncut stock.

[image: astart0area]
Most A-star algorithms involve one start point in each tile which spreads out to all sides of the tile. But in my implementation the paths from the sides of the tile are pulled from the receiving end, so it’s possible to have two start points in the same tile with two paths going through, as shown above.

(more…)

Friday, January 24th, 2014 at 6:07 pm - - Adaptive

Sometimes it’s a relief to be certain that I’m doing something which is exactly what I am paid to do. All the rest of the mouthing off comes for free. Though maybe it’s important to always question what you are paid to do, so you don’t wind up wasting everyone’s time doing something that’s actually pointless.

After having got the A* routing through the weave structure to work based on my unpatented subdividing model of a 2D area that forms the geometric basis of the adaptive clearing non-gouging motion algorithm, I noticed that it did not give enough sample points for a linking pass to be within tolerance. The route, which must avoid uncut stock, requires a greater sample rate than the basic weave cells. These cells were sampled to a resolution along the boundary contour so that it would be within tolerance, but it is insufficient to handle the uncut stock structures that exist in the interior space of this area partway through the clearing cycle.

There are two options to deal with this insufficiency of sampling. Either add in an additional sample rate system, or improve the basic underlying weave sampling structure.

(more…)

Tuesday, January 14th, 2014 at 6:47 pm - - Adaptive

A quick day’s work to make a multi-threading percentage progress handling object in Python that handles residuals.

Let’s start with the test code (which I wrote after the working code — take that TDD!):

import time

class ACProgress:
    def __init__(self):
        self.sumprogress = 0.0

    def ReportProgress(self, lprogress):
        # accumulate the fraction of the job each work unit reports
        self.sumprogress += lprogress
        print("The progress is at:", self.sumprogress * 100, "percent")

def addprogress(a, n):
    # report a total of `a` progress in n equal increments
    for i in range(n):
        acprogress.ReportProgress(a / n)
        time.sleep(0.05)

acprogress = ACProgress()
addprogress(1.0, 25)

This prints numbers up to 100% in steps of 4%.

Now we want to do this in threads.
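The details are below the fold, but the shape of it is roughly this minimal sketch (assuming a plain lock around the shared total is all that’s needed, and reusing addprogress from above):

import threading

class ACProgressThreaded:
    def __init__(self):
        self.sumprogress = 0.0
        self.lock = threading.Lock()

    def ReportProgress(self, lprogress):
        # serialize the updates so concurrent workers can't lose increments
        with self.lock:
            self.sumprogress += lprogress
            print("The progress is at:", self.sumprogress * 100, "percent")

acprogress = ACProgressThreaded()
threads = [threading.Thread(target=addprogress, args=(0.25, 25)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()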

(more…)