Freesteel Blog » 2016 » January
Here’s a quick offering from the “Well it’s better than nothing video editing department”. This is the result of 2 days of cutting from short videos taken with my camera. (I’ve got no talent with video editing.)
I learnt one heck of a lot in the process.
- Steel is really difficult to work with
- Small 3mm cutters are easy to snap
- The spindle is under-powered
- Big 6mm cutters can handle being bent when the spindle stalls if you hit stop soon enough
- You can drop the feedrate briefly to stop the spindle stalling
- Multiple cutters with rest machining are essential
- 0.1mm stepovers are better than 0.2mm
- I probably need a tapered cutter to create a draft angle
- Clamps are a real hassle; I’m going to get a vice
- The noise of the machine sounds terrible, but nobody has complained yet because it doesn’t seem to carry into the hallway
- My 3D printed ductwork for automatically hoovering out the chips was a failure; I need to prod in the nozzle by hand to remove the chips
I was using the Adaptive Clearing toolpaths in Autodesk Fusion, which I had spent 10 years developing before and after it got sold to AD.
It sucked in several ways that I did not know about, because I’d never used it myself to get something I wanted done. I always said I ought to have been put on the job of using CAM software to cut steel on a machine in a factory for a couple of months at some point in my career before being allowed to continue writing software that didn’t quite do stuff right. People get into positions like mine and seem to do pretty well, but they should get the opportunity to go back and fill in some gaping holes in their experience.
The problems I found were:
1) Adaptive takes too long to calculate small stepovers when clearing around a tongue of material and it has to turn right towards the material to stay in contact. This is probably because the sample rate has to go very small in order to maintain engagement when it does its straight line forward samples. It should detect these situations and do its initial step forward with a curve to the right, so that it begins engaged on the first sample and doesn’t need to resample backwards blindly until it makes contact again.
2) The helix ramp down pitch was not linked to the tiny stepover I was setting and I couldn’t see how to change it. I had to hack the G-code directly.
3) In spite of claims to the contrary, and despite the engagement calculation being mathematically accurate, I am sure that the load going into the corners is higher than when the flank cutting is on the straight. I can hear the spindle slowing down. This could be because the chip length is longer for the same chip width. The chip length is the distance around the circumference of the cutter that is tearing off the metal; it can approach a semicircle in a tight corner, or be insignificant when the cutter first engages with the 90-degree outer corner of the stock.
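To put a rough number on that chip-length effect, here is a minimal sketch using the standard engagement-angle formula (the function name is mine, not anything from Adaptive Clearing):

```python
from math import acos, pi

def chip_arc_length(cutter_diameter, stepover):
    # Arc of the cutter circumference in contact when flank cutting a
    # straight wall at the given radial stepover.
    r = cutter_diameter / 2.0
    return r * acos(1.0 - stepover / r)

# 6mm cutter: 0.1mm stepover on the straight...
straight = chip_arc_length(6.0, 0.1)
# ...versus full-slot engagement in the worst-case tight corner,
# where the contact arc becomes a semicircle of the cutter
corner = chip_arc_length(6.0, 6.0)
print(straight, corner)
```

For the 6mm cutter at a 0.1mm stepover the contact arc is under a millimetre, while the corner worst case is the full semicircle of about 9.4mm, better than ten times as much cutting edge in the metal at once.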
Now a real machine tool probably has so much angular momentum in the spindle that no one is going to notice this, but on some underpowered low-spec experimental device, such as this, it becomes apparent. That’s why future innovations would happen here, and are unlikely on the big machines where you don’t notice the flaws.
I can now pretty much see how companies like IBM missed the first wave of PCs, which were toy devices in comparison to the big mainframes they were playing with. Nobody was ever going to do any real work on those barely-up-to-scratch microcontroller-based computers with deplorable amounts of RAM, audio cassette tapes for backup, a complete joke parody of an operating system from Microsoft, and a lack of customers able to pay big bucks. Most of the professional engineers in the world (software and hardware) had all the access they needed to mainframe computers in their workplaces or university institutions to do fluid dynamics or graphics or simulations. I’m sure when some overly keen teenager came along with the toy machine he’d soldered together, they put him in his place with a back-of-the-envelope calculation of how many centuries it would take that Apple II to do something real, like predict tomorrow’s weather, which was something they could do with their latest cool Cray X-MP supercomputers. PCs were obviously an utter waste of time, because it was clear where the cutting edge was if you wanted to actually get stuff done.
Sure, you could say this left a huge gap in the economy for new tech billionaires to emerge and for IBM to eventually become an embarrassment, but think about the wasted capital and precious engineering time of talented people who should have been deployed to make this microcomputer tech good from the beginning. MS/DOS and MSWord might not have existed in the horrible no-good forms they did had it not been left only to people who didn’t know what they were doing and had to learn as they went along, thus locking their anti-productive design mistakes into the way this tech worked for the next 30 years.
Meanwhile I’ve no idea what I am doing. Should I spray WD-40 onto the metal while it is cutting?
The old hang-glider flight logger is falling apart due to shoddy wiring, strip board soldering, being crammed into a 3D printed plastic box and then thrown about in my bag and on the field for a year.
The new way to do things is to design the circuit separately and then cut a special PCB for it.
I’ve been extracting and running the individual components as best I can, including digging into the airspeed probe wheatstone bridge, which unfortunately needs an INA125 amplifier that I had stupidly soldered down onto the board instead of using a socket.
Now I’ve got to tidy up the mess on the desk and see if any of the electronics still works as well as it once did before I tried to reverse engineer my own work.
Maybe I should have done it this way in the first place. But then that would have added another wall into the learning curve. (The learning curve is the amount you can get done on the X-axis vs how much you need to know to get that amount done on the Y-axis, so a shallow curve means you can get stuff done without needing to know too much.)
Next job is to get some help laying out circuits from someone who has done it before. I hope to sandwich it between two sheets of laser cut acrylic and find some slick way of mounting it on the glider so I don’t kick it on the first day.
Not only that, I’ve got to get back to looking at the data so far. As usual, hardware hacking is a lot more fun than doing the software. This distraction must end. Normal service will be resumed.
Wednesday, January 20th, 2016 at 4:57 pm - Machining
This is about C-shaped biarcs. I’m not interested in the S-shaped ones, so I’ve not looked at them.
A Biarc is a smooth curve composed of two consecutive tangential circular arcs that can be drawn between a given pair of points with a given tangency at those points.
In many ways a biarc fulfills the job of a cubic spline commonly used in CAD to interpolate a curve between two points with given tangencies. They can be chained to form a continuous curve that appears smooth to the eye (continuous first derivative) all along its length.
Given end points and tangent vectors, there are two extra degrees of freedom for the cubic spline. These are usually called the weight factors.
A biarc has only one extra degree of freedom, which determines the point where the two arcs join tangentially.
Biarcs have a larger application in CAM than CAD for the following reasons:
1) CNC mechanisms whose motions are controlled by G-code (most of them) have G1 for linear motion as well as G2 and G3 for clockwise and counter-clockwise circular arcs. They do not have codes for splines (although Siemens tried to market a controller in the 1990s that had NURBS), so if you want to produce a perfectly smooth path you need to provide a sequence of tangential arcs. Biarcs can effectively act as a drop-in replacement for cubic splines.
2) Physical machines in motion are limited by their maximum acceleration, which is determined by the available power. A cubic spline attains its maximum acceleration (at its point of minimum radius of curvature) at only one place along its length, which means every other point has to be run below the speed that single worst point allows. A circular arc, on the other hand, has a constant and easy to calculate acceleration for a given velocity. This means it is more likely to be programmed to attain the maximum power over a longer distance than is the case for cubic splines, which are too hard to optimize.
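For concreteness, emitting one of those arc moves is just a matter of formatting a G2/G3 block. A minimal sketch (the function name is mine), using the incremental I/J centre-offset convention that LinuxCNC and most controllers default to:

```python
def arc_block(start, end, centre, clockwise, feed):
    # Format a circular move: G2 = clockwise, G3 = counter-clockwise,
    # I/J = arc centre offset measured from the start point.
    i = centre[0] - start[0]
    j = centre[1] - start[1]
    word = "G2" if clockwise else "G3"
    return "%s X%.3f Y%.3f I%.3f J%.3f F%.0f" % (word, end[0], end[1], i, j, feed)

# quarter circle from the origin to (10,10) about the centre (10,0)
print(arc_block((0.0, 0.0), (10.0, 10.0), (10.0, 0.0), True, 800))
# prints: G2 X10.000 Y10.000 I10.000 J0.000 F800
```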
Let’s get to the point. After days of messy algebra, similar to what’s been done by this guy and in other papers I have read, I spotted a pattern, which I have reduced to a nice theorem in plane geometry.
Theorem: Given points A and B and tangent vectors at them that intersect at E, we consider any biarc that is consistent with these endpoints and tangents. (This is defined as two circular arcs that are tangent at these points and to each other)
This biarc is drawn as the arc AC (with centre a) and the arc CB (with centre b)
If we extend the line EB to the point D so that the length of ED equals the length of EA then the point C where the two arcs meet will always lie on the circle through the points A, B and D (whose centre is c)
The points A, B, E, D are given by the initial conditions. Points C, a, b are given by the choice of biarc.
By the definition of the biarc, the line aA (which is a radius of the left hand big arc) is perpendicular to the line AE, the line bB (which is a radius of the right hand small arc) is perpendicular to BE, and the points a, b and C must be collinear for the two arcs to be tangential at C.
Let f be the midpoint of the arc AC and g be the midpoint of the arc BC and draw in the lines af and bg to form bisectors of their respective arc angles.
Extend the line bg back to the point c where it intersects the line af.
By symmetry (of af being a bisection line), the lines cA and cC have the same length. Similarly the lines cC and cB have the same length. Therefore the three points A, B and C all lie on the orange circle whose centre is at c, which we just constructed by intersecting these two bisector lines.
By bisector symmetry the angle aAc (labelled d) is equal to the angle aCc, and the angle cCb is equal to the angle cBb (labelled d’). Since a, b and C are collinear, aCc and cCb are the same angle, so d equals d’.
The tangent line to the orange circle at A is perpendicular to the radius line cA; if we rotate it anticlockwise about the point A through the angle d then it will be aligned with AE. Similarly, the tangent line to the orange circle at B is perpendicular to the radius line cB, and if we rotate it anticlockwise about the point B through the angle d’ then it will be aligned with EB.
Because these two constructions are equivalent (the angles d and d’ being the same size), the chord lengths Ah and Bk are the same (where A, h and E are collinear and E, B and k are collinear).
We need to equate the angles chE and cBE to prove symmetry. The perpendicular bisector of the chord Ah passes through the centre of the circle c, and therefore by symmetry around this line the angle chE is equal to 90 degrees plus the angle d. But we also know that the angle cBE is 90 degrees plus the same angle d’ (since the lines bB and BE are perpendicular).
This proves that the triangles chE and cBE are congruent. Therefore the length Eh equals EB. Since the chord lengths Ah and Bk are equal, the length of Bk is the length of AE minus the length of Eh, which puts k exactly at the point where we originally constructed D in the statement of the theorem.
The orange circle about c passes through the points A, B and D, which is a statement that crucially does not depend on the choice of the biarc.
I’m not saying that any of the above is easy. The consequence, however, is a remarkably easy method to find biarcs.
Using my namedtuple P2 class with a few basic functions, like so:
from collections import namedtuple

class P2(namedtuple('P2', ['u', 'v'])):
    ...

def Dot(a, b): return a.u*b.u + a.v*b.v
def CPerp(v): return P2(v.v, -v.u)
def APerp(v): return P2(-v.v, v.u)
Start with the points A and B with unit normal vectors nA and nB.
To find the intersection of the tangents we need to solve:
E = A + CPerp(nA)*sA = B + APerp(nB)*sB
Compute the values of sA and sB by dotting this equation with nA and nB and dividing out:
sA = Dot(B-A, nB)/Dot(CPerp(nA), nB)
sB = Dot(A-B, nA)/Dot(APerp(nB), nA)
E = A + CPerp(nA)*sA
E = B + APerp(nB)*sB  # same value
The centre point c of the orange circle lies on the perpendicular bisector of BD at an unknown distance of h along this perpendicular, where it intersects the angle bisector line from E (which is parallel to nA+nB).
sH = (sA + sB)*0.5
c = E + CPerp(nB)*sH + nB*h

solve: Dot(c - E, CPerp(nA + nB)) = 0
0 = Dot(CPerp(nB)*sH + nB*h, CPerp(nA) + CPerp(nB))
  = Dot(nA, nB)*sH + h*Dot(nB, CPerp(nA)) + sH
h = -(Dot(nB, nA) + 1)*sH / Dot(nB, CPerp(nA))
This means we’re almost done, as we know where c lies. All that remains is to choose the point C on this circle.
Project a point from the chord hB onto the circle centred on c with radius rC. Let 0 < lam < 1 refer to a point position along this chord.

rC = (CPerp(nB)*(sA - sB)/2 + nB*h).Len() = sqrt((sA - sB)**2/4 + h**2)
sM = min(sA, sB)
pC = E + (APerp(nA)*(1 - lam) + CPerp(nB)*lam)*sM
cpC = pC - c
C = c + cpC*(rC/cpC.Len())
Now all that remains is to find the radii of the two arcs in the biarc.
The left hand arc has radius rA and centre cA:

cA = A + nA*rA

The distance to C should also be rA:

rA = (cA - C).Len() = (A + nA*rA - C).Len()
rA**2 = (A - C).Len()**2 + 2*Dot(A - C, nA)*rA + rA**2
rA = -(A - C).Len()**2 / (2*Dot(A - C, nA))

Similarly:

rB = -(B - C).Len()**2 / (2*Dot(B - C, nB))
Basically, the entire operation is done with a few dot products and a couple of square roots. No quadratic equations needed to be solved, so we’ve not had to mess around with choosing the right polynomial root or any of the guesswork that entails.
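Pulling the whole construction together into one runnable sketch: the operator overloads on P2 and the function name biarc are my additions, and the rest follows the steps above. The radii come out signed, with the sign recording which side of the curve the centre lies.

```python
from collections import namedtuple
from math import sqrt

class P2(namedtuple('P2', ['u', 'v'])):
    def __add__(a, b): return P2(a.u + b.u, a.v + b.v)
    def __sub__(a, b): return P2(a.u - b.u, a.v - b.v)
    def __mul__(a, k): return P2(a.u*k, a.v*k)
    def Len(a):        return sqrt(a.u**2 + a.v**2)

def Dot(a, b):  return a.u*b.u + a.v*b.v
def CPerp(v):   return P2(v.v, -v.u)
def APerp(v):   return P2(-v.v, v.u)

def biarc(A, nA, B, nB, lam=0.5):
    # Join point C and signed radii (rA, rB) of the biarc from A to B
    # with unit normals nA, nB; lam in (0,1) picks the member of the family.
    sA = Dot(B - A, nB)/Dot(CPerp(nA), nB)
    sB = Dot(A - B, nA)/Dot(APerp(nB), nA)
    E = A + CPerp(nA)*sA                     # tangent intersection point
    sH = (sA + sB)*0.5
    h = -(Dot(nB, nA) + 1)*sH / Dot(nB, CPerp(nA))
    c = E + CPerp(nB)*sH + nB*h              # centre of the orange circle
    rC = sqrt((sA - sB)**2/4 + h**2)         # its radius
    pC = E + (APerp(nA)*(1 - lam) + CPerp(nB)*lam)*min(sA, sB)
    cpC = pC - c
    C = c + cpC*(rC/cpC.Len())               # join point projected onto the circle
    rA = -(A - C).Len()**2 / (2*Dot(A - C, nA))
    rB = -(B - C).Len()**2 / (2*Dot(B - C, nB))
    return C, rA, rB

C, rA, rB = biarc(P2(0.0, 0.0), P2(0.0, 1.0), P2(2.0, -1.0), P2(1.0, 0.0))
print(C, rA, rB)
```

The arc centres are then A + nA*rA and B + nB*rB, and these two centres are collinear with C, which is exactly the tangency condition.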
This should extend into a reliable arc-fitting algorithm for a point sequence, which I’ll get to at some point later.
Here’s what it was all about. After facing off at 0.1mm stepdowns, I hard-coded a helix at 0.05mm stepdowns per revolution to a depth of 6mm, and then made a semi-circular spiral out at 0.05mm steps till it went off the edge.
Nothing broke, even at this implausibly low spindle speed. Even so, I was cowering most of the time till I got brave enough to put my head up above the table.
Anyways, this concludes this little experiment. To complete the job in question, which is some kind of a one-piece profile blade, I’ll need a tapered cutter and the ability to generate the necessary toolpaths, which I can easily see ought to be something similar to Adaptive Clearing.
Unfortunately, Adaptive Clearing is owned by capitalists whose job is to make sure all the money and resources from the customers go to them and away from anyone who could possibly develop a new and more productive version that, for example, was intended to become a standard component of every machine tool controller. I can see that in the future this algorithm ought to be embedded in the controller, able to respond to direct laser scans of the stock on the table as well as force feedback from the cutting tool. This isn’t in the business plan, so there is zero chance it will ever get done. It’s a wonder that anything ever progresses, when the economic structure demands that the real money is made after the fact and then divvied up according to who has the most power among those in receipt of the customer’s payments, while anyone with the vision to create something new has to live off dogfood for the duration.
The folks at AD are almost certainly quite proud that there is nothing I can do to influence the 100% capital flow into their bank accounts from the consumers of this software. No matter what kind of amazing toolpath algorithms I could design, at this point it’s never going to pay enough to get even one assistant to help me out in getting it to work in a reasonable time. I’m completely on my own. What a waste of time and talent, might I add. I’m sure someone in the industry will get it done in about 50 years’ time. Anyway, best to get back to work. Whatever that is.
Yes, it’s been a long while since I’ve done any blogging. Xmas, New Year, lots of stuff not working, and no interesting holidays to speak of.
A lot of time has been wasted meddling with these horrible Leadshine servo drivers that have the ability to program and measure their performance through a serial port (from Windows), but you can’t reset them when they’ve entered an error mode. The box of electronics is too compact and there’s no toggle switch to turn them off and back on again.
Then there was another two week delay narrowing down and investigating a weird motion effect that was all due to a loose grub screw.
Anyhow, here’s the video of the small part I made yesterday.
Here’s what it looks like in the light:
I’ve got a lot of questions to chase up about the errors and marks on the part. It’s all done using my own code written in Python, but don’t try to use it as it’s all twistcodewiki driven, and I’m already forgetting which functions I need to call to make it work.
A note about the “correct” feed and speed mentioned in the video.
Recall that, after 20 years writing the software that generates CNC toolpaths, I’d not ever operated a machine or worked with someone operating a machine in that time period. I’m not unusual among my programming peers. This is an outrageous state of affairs, and tells you everything you need to know about the effectiveness of all those layers of businessmen, managerial staff, supervisors, and resellers who have inserted themselves like slabs of toffee between those who write the software and those who use the software. Even if I wasn’t interested in operating a machine, someone should have forced me to spend some time making at least one thing to a standard of quality at some point in my career as it would have paid off enormously.
The upshot is I don’t know what I’m doing and I don’t know what’s right. Searching for feeds and speeds on brass turned up the HSMWorks CNC book. Unfortunately it’s all in inches.
The calculations are as follows:
Cutter diameter: 6mm = 0.236 inches
Number of blades (flutes): 4
Brass SFM (surface feet per minute) = 175
Brass chip load = 0.002 inches per tooth (the book calls it IPR, but since we multiply by the number of blades it is really a per-tooth feed)
Spindle speed = SFM/circumference = SFM*3.82/diameter = 2832 revolutions per minute
Feedrate = SpindleSpeed*chipload*blades = 22.656 in/min = 575mm per minute
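Re-running that arithmetic in full precision, as a sketch (the function name is mine; the 3.82 constant is approximately 12/pi, converting surface feet per minute to rpm for a diameter in inches):

```python
def feeds_and_speeds(diameter_mm, flutes, sfm, chipload_in):
    # Spindle rpm from the surface speed, then feedrate from chip load per tooth.
    diameter_in = diameter_mm / 25.4
    rpm = sfm * 3.82 / diameter_in
    feed_mm_min = rpm * chipload_in * flutes * 25.4
    return rpm, feed_mm_min

rpm, feed = feeds_and_speeds(6.0, 4, 175, 0.002)
print(round(rpm), round(feed))   # prints: 2830 575
```

(The 2832 above comes from rounding the diameter to 0.236 inches first; the difference is obviously negligible.)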
As I’ve generally been coding F800 into all my toolpath files, I dragged the speed override slider to 71% on the linuxcnc interface after dialing down the spindle.
The spindle was much slower and the motion much faster than I felt would be right (it was actually kind of scary), but it ran quieter and the results were smooth.
The most important part of the calculation above relates the ratio of the feedrate over the spindle speed to the optimal chip size.
So how come, given that the feedrate of an operating tool varies as it slows down to go round corners and accelerates on the straights, the spindle speed is always hard-coded to a constant value and not set as a ratio to the actual feed rate during run-time?
Is there anyone in the business who has an answer to this question?
Clearly the calculations above provide the optimal values, and clearly everyone has gotten away with running outside of this optimal setting for part of the time. And maybe this was forgivable in the past when the controls and the spindle motors did not have the technical capability to vary themselves. But come on folks, the tech should be up to it these days.
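The rule being asked for is a one-liner. A hypothetical sketch (this is my speculation about a control feature, not anything that exists in a controller today):

```python
def spindle_rpm_for_feed(feed_mm_min, chipload_mm, flutes):
    # Keep the chip load per tooth constant by scaling the spindle
    # speed with the actual, possibly decelerated, feedrate.
    return feed_mm_min / (chipload_mm * flutes)

chipload = 0.002 * 25.4   # the brass chip load from above, in mm per tooth
print(round(spindle_rpm_for_feed(575.0, chipload, 4)))       # prints: 2830
print(round(spindle_rpm_for_feed(575.0*0.4, chipload, 4)))   # prints: 1132
```

On the straights this reproduces the handbook spindle speed; when the control slows to 40% feed in a corner, the spindle would drop to about 1130rpm along with it.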
What’s the problem?
Too many managers in between the programmers and the users who can’t see the commercial viability of investing in such a blatantly obvious and productive feature? It doesn’t look very hard to do from a technical stand-point, does it? I look forward to hearing their excuses.