Freesteel Blog » Open thread for random user

Open thread for random user

Thursday, February 9th, 2006 at 5:08 pm Written by:

I’ve given a random guy who emailed me just now a download link for the Adaptive Clearing demo version. It’s unlikely that our instructions are good enough to make everything clear. So rather than risk losing all his questions and my answers in my spam-infested email box where they won’t be found when it comes to rewriting the instructions, I’m going to ask him to post his comments here.

11 Comments

  • 1. Anthony replies at 10th February 2006, 12:16 am :

    Actually the instructions were quite clear and to the point. Some questions I do have are below:

    1. Is there a way to set the feeds for the output? Or any way to make each feed type (leadin, leadout, feed, rapid) a different value, to make search&replace possible?

    2. How are tolerances controlled? Is this just missing from the interface? I know the instructions talk about the “stepover” and “max stepover” difference being “a” tolerance, but how do you control “arc approximation” or “toolpath tolerance”? Possibly these are calculated automatically, but this is a question worth asking, since people will ask how to reduce the Gcode size.

    3. How does tighter (more triangles) STL triangulation affect the toolpath calculation time and the Gcode size? Naturally I would expect an increase in calculation time since this is a 3D toolpath, but does the time increase more exponentially (I use this word lightly), like a “constant stepover” toolpath? Does this have a great impact on Gcode size? I assume you are building an intersection curve for each ZLevel and this curve will have more entities in it, but does this affect only the path along this curve, or all the offset paths? (I know they are not offset toolpaths from the curve, but I do not know how you describe them.) Since you are analyzing the stock model and building the toolpath from that, maybe the STL triangulation has little effect? This is why I ask.

    This is enough questions for now. Need more time to use it.

  • 2. Julian Todd replies at 10th February 2006, 9:42 am :

    1. Errr. I guess since it’s meant to be embedded in another CAM system, we’ve not paid much attention to the post-processing, which in itself can take up loads of tedious parameters. If you promise to use them, we’ll try and put in these features — cutting feed-rate, linking feedrate, retracting feedrate — when we get round to compiling a demo version again. Maybe we’ll have it on the webpage.

    BTW, the linking is not all that optimized (plenty of unnecessary retracts) since we’re concerned with getting the cutting right first.

    2. There are *lots* of parameters that are hidden from view. The range on the allowable cutting engagement is exposed because it is interesting, rather than because it's something people might want to use.

    3. You are right. There are two stages. The ZSlicing of the model, and then the clearing within the zslices. The STL triangulation will only make a difference to the ZSlice component.

    The theory is that calculation time is proportional to the length of the toolpath since it is calculated by taking steps forward and sampling. The length is probably going to be something like the ZSlice area divided by the average cutter engagement width.

    It will also be proportional to how short the steps forward have to be (which depends on the tolerance) and the amount of time it takes to calculate each step (probably to do with the number of previous passes the silhouette of the tool overlaps).
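    The estimate above can be put into a back-of-envelope sketch (all parameter names and numbers here are illustrative, not taken from the actual code):

```python
def estimate_clearing_time(slice_area, avg_engagement, step_length, secs_per_step):
    """Rough estimate: toolpath length ~ slice area / average engagement
    width, traversed in short forward-sampling steps."""
    path_length = slice_area / avg_engagement   # total cutting length, mm
    n_steps = path_length / step_length         # number of forward samples
    return n_steps * secs_per_step              # seconds

# e.g. a 10000 mm^2 slice, 2 mm engagement, 0.5 mm steps, 0.1 ms per step
t = estimate_clearing_time(10000.0, 2.0, 0.5, 1e-4)
```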

    As to GCode length, this “toolpath tolerance” is commonly called the “thinning tolerance”, because it thins the output. This is normally a procedure that’s applied after the toolpath is generated, but before the post-processor is applied. It can be highly developed, but we’ve programmed the most primitive method available at the moment.

    The practice should really be to apply the thinning during toolpath generation, when you can fit arcs and lines to a looser tolerance, since the generator would know what does and does not gouge and wouldn't need to be so cautious. We've not implemented it yet.
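    For concreteness, one common post-generation thinning method (Douglas-Peucker style point removal, not necessarily the method Freesteel uses) looks like this:

```python
def _deviation(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    length = (dx * dx + dy * dy) ** 0.5
    if length == 0.0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dx * (py - ay) - dy * (px - ax)) / length

def thin(points, tol):
    """Drop points that deviate less than tol from the chord between
    the endpoints, recursing at the worst offender otherwise."""
    if len(points) < 3:
        return list(points)
    devs = [_deviation(p, points[0], points[-1]) for p in points[1:-1]]
    worst = max(range(len(devs)), key=lambda i: devs[i])
    if devs[worst] <= tol:
        return [points[0], points[-1]]
    split = worst + 1
    return thin(points[:split + 1], tol)[:-1] + thin(points[split:], tol)
```

    Note that, as described above, a post-pass like this knows nothing about gouging, which is exactly why thinning inside the generator can be looser and still safe.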

    You have guessed correctly that the methods for thinning the toolpath along the ZLevel contour need to be quite different from the other parts of the toolpath.

    The ZContour thinning is a very tricky proposition, because you must ensure that the thinned lower levels do not go outside the upper levels.

  • 3. Anthony replies at 10th February 2006, 11:56 am :

    1. Don’t worry about adding them to the interface. It was just a question to make sure I wasn’t missing anything. Although I would suggest that for your next demo version you hardcode the 4 feed types to different values. This will at least allow users to search&replace by each type. If I do machine testing I can make it work the way it is.
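    The search&replace idea could look like this on the emitted Gcode (the sentinel feed values and their move-type assignments are invented for illustration):

```python
import re

# Hypothetical hard-coded sentinel feeds, one per move type
# (leadin, cutting, leadout, rapid), mapped to the user's real values.
FEED_MAP = {"F1001": "F800", "F1002": "F1200", "F1003": "F400", "F1004": "F3000"}

def retarget_feeds(gcode):
    """Swap each sentinel feed word for the user's chosen feed rate;
    unrecognised feed words pass through untouched."""
    return re.sub(r"F\d+", lambda m: FEED_MAP.get(m.group(0), m.group(0)), gcode)
```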

    2. Makes perfect sense, and is what I expected.

    3. I enjoy your detailed responses. The STL triangulation effect and what is affecting the toolpath calculation is understood. I completely agree with your current style of building the algorithm. I think the companies that will be open to this “new roughing” style will not be the people who complain about toolpath length; but as it becomes a more widely used style and the masses start using it, then comes this thinning. I am happy to hear you want to do the thinning during generation and not after calculation. I have good experience with CAM systems using both styles, and your comment about “gouging” is 100% correct, because you will get unwanted moves and possible gouges doing this after generation (at least in my experience).

    4. Question: I know the linking is in a primitive stage, but does the tool pick up a small amount in Z+ when doing the RapidMove and move back down in Z- when it reaches the cut? I am talking about the RapidMove on the Z-Level you are cutting. Maybe this is a parameter that is hidden, but I would expect to want the tool to pick up .005mm in Z when making this move so the tool is not “dragging” along the floor.

  • 4. Anthony replies at 10th February 2006, 12:24 pm :

    Additional question…

    Is your algorithm dependent on a 3D model (STL)? Could you accept 2D chains, a Zheight, and a Zdepth, and have your algorithm run on these in addition to a 3D model?

  • 5. Neel replies at 10th February 2006, 1:01 pm :

    4) I was told that it was incorporated in early versions, but the machinists who noticed it wanted it taken out. There may be a parameter which controls it, now set to 0. It’s just turned off for simplicity.

    Julian,
    We can post our responses here, but the problem is we cannot attach any data, i.e. pictures or STL files. A picture is worth a thousand words. I think you will need a forum site or an FTP area to deal with data.

  • 6. Julian Todd replies at 10th February 2006, 1:26 pm :

    The algorithm takes two boundary definitions. A machining boundary, familiar in most CAM systems, and a stock model boundary which is the new kind. Our beta testers were always getting the two confused. Essentially, if your stock is a cylinder you input a circle with two z-ranges. And if you want to limit your machining just to a notch on the side of the cylinder, the machining boundary would be a rectangle containing the notch, but which overlapped the circle. That way it would contain some area that was not stock and could plunge down into it and cut directly from the side.
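    The cylinder-and-notch example can be sketched numerically (the shapes, numbers, and function names are invented for illustration):

```python
import math

# Stock boundary: a circle with one z-range.  Machining boundary: a
# rectangle that overlaps the circle but also contains non-stock area,
# so the cutter can plunge into free air and cut the notch from the side.
stock = {"cx": 0.0, "cy": 0.0, "r": 50.0, "zlo": 0.0, "zhi": 40.0}
notch_boundary = {"xmin": 30.0, "xmax": 70.0, "ymin": -15.0, "ymax": 15.0}

def overlaps_stock(rect, circ):
    """True if the rectangle intersects the stock circle
    (nearest point of the rectangle lies within the radius)."""
    nx = min(max(circ["cx"], rect["xmin"]), rect["xmax"])
    ny = min(max(circ["cy"], rect["ymin"]), rect["ymax"])
    return math.hypot(nx - circ["cx"], ny - circ["cy"]) <= circ["r"]

def has_free_air(rect, circ):
    """True if some corner of the rectangle lies outside the stock,
    giving the cutter somewhere to plunge down and enter from the side."""
    corners = [(rect["xmin"], rect["ymin"]), (rect["xmin"], rect["ymax"]),
               (rect["xmax"], rect["ymin"]), (rect["xmax"], rect["ymax"])]
    return any(math.hypot(x - circ["cx"], y - circ["cy"]) > circ["r"]
               for x, y in corners)
```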

    Let me see if a picture works:

  • 7. Julian Todd replies at 10th February 2006, 1:35 pm :

    Nope. Pictures don’t work. I think you’ve got to post it up somewhere else and make a link to it:

    http://seagrass.goatchurch.org.uk/kilk4.jpg

    4) I’ve heard the counter-intuitive case that the lifting off (which I did put in) is undesirable. Dragging along the floor is fine it seems, and it is better to do that than take the force off the cutter momentarily. This would be particularly true if there’s a bit of compression-slack in the machine.

    We should be getting the online version working soon.

  • 8. Anthony replies at 10th February 2006, 10:07 pm :

    I’m not entirely clear on your boundary answer; I believe this is due to my unclear question. So I will rephrase my question.

    Can your algorithm work solely on wireframe geometry? As in a 2D geometry without any “3D/STL model”.

    I believe your response was talking about boundaries for cutting a 3D model. Which means that you give it a “stock model” and a “machining boundary”. Am I misunderstanding you?

  • 9. Julian Todd replies at 11th February 2006, 1:16 pm :

    If you give it a dud STL file containing no triangles the machining would be determined by the Z-parameters and the 2D areas, yes.

    The machining boundary is a geometric limit on the location of the centre of the cutter, if that is how a user would like to define it. There is also a machining boundary offset parameter (in or out), so you could compensate for the radius of the cutter.

    It’s possible to skip this stage entirely and feed the Z-contours from a CAM system into the algorithm directly, but we’re not writing such an interface until we see an example.

    I don’t know how important it is, but with anything other than a slot-drill the radius of the cutter changes close to the tip, so you can’t properly do cutter compensation and clear pockets whose depth is less than the corner radius.
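    The geometry behind this can be written out. For a bull-nose cutter of radius R with corner radius c, the radius actually engaged at height z above the tip follows from the circular corner profile (standard tool geometry, not code from the demo):

```python
import math

def effective_radius(tool_radius, corner_radius, z):
    """Cutting radius of a corner-radius (bull-nose) cutter at height z
    above the tip.  Below the corner radius the engaged radius shrinks,
    which is why 2D cutter compensation breaks down for pockets
    shallower than the corner radius."""
    if z >= corner_radius:
        return tool_radius
    flat = tool_radius - corner_radius
    return flat + math.sqrt(corner_radius**2 - (corner_radius - z)**2)
```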

    Another workaround is to convert a 2D chain and z-values (if this is what a “wireframe model” is) into an STL file with a short python script, so you can see what’s going to happen, and properly account for the cutter radius.
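    Such a script might look like the sketch below, which extrudes a closed 2D chain between two z-values into ASCII STL side walls (top and bottom caps, and the exact input format, are left out; this is only to show the idea):

```python
def chain_to_stl(chain, z0, z1):
    """Extrude a closed 2D chain of (x, y) points into the side walls
    of an ASCII STL solid: two triangles per edge, from z0 up to z1.
    Normals are left as 0 0 0, which most readers recompute anyway."""
    tris = []
    n = len(chain)
    for i in range(n):
        (xa, ya), (xb, yb) = chain[i], chain[(i + 1) % n]
        a, b = (xa, ya, z0), (xb, yb, z0)
        c, d = (xb, yb, z1), (xa, ya, z1)
        tris.append((a, b, c))
        tris.append((a, c, d))
    lines = ["solid extruded"]
    for t in tris:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for (x, y, z) in t:
            lines.append("      vertex %g %g %g" % (x, y, z))
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid extruded")
    return "\n".join(lines)
```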

    It’s all a question of knowing a little more about what you’d be trying to do.

  • 10. Neel replies at 19th February 2006, 1:07 pm :

    Is your application multithreaded? Does it take advantage of multiple processors for large calculations? For example, one thread calculates the rest stock while another calculates the cutter location points.

  • 11. Julian Todd replies at 20th February 2006, 10:44 am :

    We run two threads in the python code, one which works out the z-level slices, and one which does the clearing. Even though the first looks a lot quicker, they can consume the same amount of time because the slicing feature might have to hunt for the flat area and do many times more slices than you actually get in the result.

    We haven’t tried it on a dual-processor machine, so we don’t know for sure if our python threads can work across processors. We used to use threads for work at NCGraphics, but processors now are so fast, and the graphics card is like a second processor, that it’s less of a big deal. Having everything on a laptop is more handy.
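    The two-thread arrangement might be organised roughly like the sketch below (function names invented; note that CPython's global interpreter lock means pure-python threads would not actually occupy two processors at once):

```python
import threading
import queue

def run_pipeline(z_levels, slice_fn, clear_fn):
    """One thread computes z-slices and feeds them through a queue;
    a second thread clears each slice as it arrives."""
    slices = queue.Queue()
    results = []

    def slicer():
        for z in z_levels:
            slices.put((z, slice_fn(z)))
        slices.put(None)  # sentinel: no more slices

    def clearer():
        while True:
            item = slices.get()
            if item is None:
                break
            z, contour = item
            results.append(clear_fn(z, contour))

    t1 = threading.Thread(target=slicer)
    t2 = threading.Thread(target=clearer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```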
