Freesteel Blog: January 2006

Thursday, January 26th, 2006 at 11:27 am - - Machining, Weekends

I am never going to write a const scallop routine again, ever. Most of the glitches are removed now, though I had to do a lot of evening coding during the week of skiing in France when I was dog-tired. On two of the evenings I skipped the free wine (which was minging anyway) so I could work late into the night.

Anyway, it’s starting to take shape, though I’ll no doubt get the usual “Not Fast Enough” complaint that happens in the absence of horrible gouges.

Meanwhile, Martin’s been doing interesting stuff on the web-version and should have that happening eventually. We spent ages getting the drag and zoom right. The server software is now rendering in MESA rather than VTK, which makes the rendering of the triangles into the depth buffer absolutely concrete. All viewing transformations are turned off. We calculate them ourselves to make what we are doing pixel perfect and the right way up (bitmaps put their origin in the top left corner with the Y-axis pointing down, while graphics rendering conventions put it in the bottom left with the Y-axis pointing up).
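The transform itself is tiny. A minimal sketch of the Y-flip that reconciles the two conventions (the function and parameter names are made up for illustration; this is not our actual code):

```python
# Sketch of the pixel-perfect mapping described above (illustrative only):
# map a model-space point into bitmap pixel coordinates, flipping the
# Y-axis because bitmaps grow downwards while GL-style coordinates grow up.

def model_to_pixel(x, y, xlo, ylo, xhi, yhi, width, height):
    """Map model coordinates to pixel coordinates with the origin top-left."""
    px = int((x - xlo) / (xhi - xlo) * width)
    py = int((yhi - y) / (yhi - ylo) * height)  # flip: the top row is y = yhi
    return px, py
```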

We’ve finally given up on our utterly shite mini-machine tool. Workmanship is too awful and we’d need access to a tool-shop where we could constantly remake the bent bits to keep it operational. We kept going back to the University and getting the technician to make pieces for us, but that can’t go on.

We’ll take it down to Bristol to show a friend who works in education, during our expedition to show off Adaptive Roughing software to a prospective vendor (hopefully not making such a hash of the sales pitch as I did elsewhere in the summer). Maybe it will fall off the train on the way, and we can get a new one, possibly from Peatol Machine Tools in Birmingham. Couldn’t be worse.

Sunday, January 22nd, 2006 at 10:33 pm - - Machining

Thanks for the comments on the last post… Barry: if there is a setting that crops length of comments on this blog, I can’t find it. I may need more background from you.

Further to my posting: one factor that makes someone like me reluctant to build a hole detector into a CAM system, apart from laziness/other-things-to-do and the hope that this problem will sort itself out of its own accord, is the long-term consequence that when you do build a function of this nature you effectively create a barrier against the proper solution.

Recall that the proper solution would be for the CAD system to explicitly tell the CAM system where higher-level features such as drill holes are, rather than leaving them implicit in the way that the surfaces and triangles are laid out in whatever petrified data exchange format is being accepted. Recall also that this solution hasn’t happened because cooperation of any kind between the designers of the CAD systems and of the CAM systems that someone might buy is, practically and inexplicably, non-existent.

Let’s suppose the proper solution can solve the problem with 100% reliability. Unfortunately, the CAM system programmer caves in to the management, the sales-team, and the will of the market as it latches on to the quick-and-dirty incorrect solution, and he writes a hole detector that works with 95% reliability. That is to say, on average it would miss or mislabel 5% of the holes. People get numb to the idea: they shouldn’t have to double-check the output of a computer, but now they will have to. Perhaps they learn to take account of weaknesses in the algorithm and avoid designing holes across the boundary between two free-form surfaces, for example.

Okay, now someone wants to solve the problem properly. Unfortunately he finds it difficult to justify because he’s only going to make an improvement of 5%, from 95% to 100%, not from 0% to 100%. The gain is less. Even if you observe that 100% reliability means you don’t have to double-check the results anymore, no one’s going to believe you. The sub-optimal solution is going to be locked in forevermore.

There is a way round this, however.

What you do is build the hole detector as a self-contained unit which you do not embed into the CAM system. It’d be a program that runs from the command line, takes, say, an STL file, and writes out a simple text file that lists the location of all the drill-holes.

Normally, running modules this way is ugly and unfavourable because it requires setting off external processes in the operating system and passing information around in temporary files, when it is far cleaner to compile and embed a module directly into the system. But keeping it standalone would mean, firstly, that such a program could be developed as a fully free open source product under the GNU General Public License, and yet everyone in the commercial world could use it.

And, secondly, the format of this temporary file would be visible and available to anyone who cared.
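To make that concrete, here is a hypothetical sketch of such a holes file and the code to read and write it. The field layout (position, axis direction, diameter, depth) is invented purely for illustration; the real format would be whatever people settled on.

```python
# Hypothetical holes-file format, one hole per line:
#   x y z  ax ay az  diameter  depth
# (hole position, axis direction, dimensions in model units).
# Invented for illustration; not any existing standard.

def write_holes(fname, holes):
    with open(fname, "w") as f:
        for (x, y, z), (ax, ay, az), dia, depth in holes:
            f.write(f"{x} {y} {z}  {ax} {ay} {az}  {dia} {depth}\n")

def read_holes(fname):
    holes = []
    with open(fname) as f:
        for line in f:
            v = [float(s) for s in line.split()]
            holes.append(((v[0], v[1], v[2]), (v[3], v[4], v[5]), v[6], v[7]))
    return holes
```

A CAM system wanting to latch on to the module would only need the half-dozen lines of the reader.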

Suppose all the CAM vendors started borrowing, using, and debugging this module because it’s cheaper and easier than writing their own, kind of like the way we use a zip module rather than write one (for many years Machining Strategist/Depocam has piped its data-files through zlib to make them smaller).

Suppose all they had to write in order to latch on to the power of this free module that detected the holes was something that read its simple output files.

Suppose a CAD system developer could see that they could get their holes imported perfectly into every single CAM system at once if they wrote out a version of this hole file at the same time as their geometry file so that the 95% reliable hole detector no longer had to run.

Then that would look like a plan for reaching the correct goal one step at a time. The goal being to overcome the deficiencies of our exchange formats by sending small scraps of information alongside them.

I don’t have any experience with hole detectors or drill cycles myself, so I have simply made all this up.

If I had to write a hole detector, I’d build it in three stages.

(1) Use a Hough transform to find collections of parallel lines in the model. Holes and pockets tend to have edges parallel to their axes to form the walls, at least in the way that they are conventionally triangulated. This is pretty obvious.

(2) Take each orientation in turn and consider all the edges parallel to that orientation so that they become points in that plane of projection. Apply a Hough transform to detect arcs and circles between these points. This is standard science.

(3) Now with a large set of candidate cylinders test each one in turn for interference against the model to calculate the depth of each hole. This function exists in all CAM systems — it’s the depth of cut for a particular shaped tool at a given location.
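For what it’s worth, here is a toy version of stage (2), a fixed-radius circle Hough transform over the projected edge points. All the numbers (grid cell size, angle sampling, the test circle) are made up; a real detector would bin over a range of radii and fold in the triangulation tolerance.

```python
import numpy as np

def hough_circles(points, radius, cell=0.1):
    """Vote for circle centres: each point votes for the ring of candidate
    centres lying at distance `radius` from it; coincident votes pile up
    at the true centre."""
    acc = {}
    thetas = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
    for (px, py) in points:
        for t in thetas:
            cx, cy = px + radius * np.cos(t), py + radius * np.sin(t)
            key = (round(cx / cell), round(cy / cell))
            acc[key] = acc.get(key, 0) + 1
    key, votes = max(acc.items(), key=lambda kv: kv[1])
    return (key[0] * cell, key[1] * cell), votes

# Points on a circle of radius 5 centred at (2, 3) should vote it up.
pts = [(2 + 5 * np.cos(a), 3 + 5 * np.sin(a))
       for a in np.linspace(0, 2 * np.pi, 100)]
print(hough_circles(pts, radius=5.0))  # centre near (2.0, 3.0)
```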

That’s how I’d do it. Of course, any established software company is free to publish their own algorithm and software to do it. This would not be unprecedented, as something similar appears to have been done by Rhino with their openNURBS initiative.

Alternatively, if someone wants me to look more deeply into it they’ll have to mail me some CAD files I can read and an outline of the format for the data in the holes file. I think it would be an interesting side project if someone was willing to do the other half of actually using it.

Thursday, January 12th, 2006 at 12:08 pm - - Machining

Neel wrote:

I guess whats required here is feature (hole,pockets,etc) recognition from surface models

Ah, yes. This old one. This is important enough to bring to the front. I’m going away on holiday for a week, so won’t be able to follow up on any comments.

Most software developers ignore this feature-request because they expect the problem to go away/solve itself when people start using a decent CAD translation format.

We can understand the point of feature recognition from scanned parts and X-ray images, but recognizing features from a pure CAD model, where one would assume the CAD operator typed those numbers (eg the diameters of holes) in themselves and the CAD system has recorded them, seems crazy.

In short, the only reason the CAM system doesn’t know the diameters of all the holes/pockets that were designed in the CAD system is because the data has been transferred in too primitive a format.

And once users and systems designers start transferring the data in a more appropriate format, such that it carries over the existing information the users require for machining, the problem would go away.

Even without this, most CAD systems could easily run an automatic script/macro to print out a short ascii file that simply told the CAM system where all the holes were, one per line of text. Not a big deal.

Unfortunately, we can’t account for the mindbogglingly conservative nature of CADCAM users and organizations which must rest on some very shallow understanding of what a file format is. It’s as baffling as a bureaucrat who requires all forms be posted to him in the “correct” sized envelope, no questions asked, because the contents might change in transit. Sure, if you have a special envelope that is inked on the inside, or can be opened and resealed without people noticing, or requires the forms to be folded too many times to fit, he’d have a point. But it depends on the envelope. It can’t just be “wrong”; there has to be something wrong about it in particular. And if they can’t say what it is so we can check it out, then we’re not going to get anywhere.

I’ve had this experience when persuading people to use PNG files for bitmaps rather than TIFFs; they don’t want to consider it because they think TIFF is the “professional” format for saving images, without the “crappy” compression loss which PNGs might have, even if they’ve not heard of the PNG format before. For reference, JPG is the lossy image file format which compresses well, PNG always preserves the image completely but compresses it like zip, and TIFF isn’t really a single file format, it’s several rolled into one and can sometimes be lossy like JPG. But many people wrongly think of TIFF as the one true lossless format. They don’t know where they got this idea, but they are sure it’s right.
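The lossless claim about PNG isn’t a matter of opinion; it takes a few lines to verify (a sketch using the PIL/Pillow and numpy libraries, purely by way of example):

```python
from PIL import Image
import numpy as np

# Save a random image as PNG, reload it, and confirm every pixel survives.
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
Image.fromarray(pixels).save("roundtrip.png")
reloaded = np.array(Image.open("roundtrip.png"))
assert (pixels == reloaded).all()  # the PNG round-trip is exact
```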

If general understanding about image files can be that wrong even though the results can be tested directly, understanding of CAD file formats is going to be a lot worse. People are going to stick with the formats they rely on for 50 years at least. The only formats that do change are the proprietary formats owned by their CAD system suppliers, who give them a new one with each new version of the software whether they like it or not.

What about that idea of writing out a separate file of hole locations to go with the CAD file which the CAM system can use? This should be easy, so why doesn’t it ever happen? The fact that it doesn’t is evidence that cooperation between the developers of a CAD system and a CAM system is zero. They don’t even know about each other’s products.

The way it could work is if you got with your new CAM system a little module that you installed into the CAD system that made it save out these files of additional feature data. Since many software suppliers sell both systems at once to a user, I am not sure why I’ve never seen this solution around. It may exist somewhere. But it feels like there may be a cultural barrier, even though there shouldn’t be. Whenever you buy a new printer you get a CD with the driver which you have to install on your computer to make it run properly.

Anyway, suffice to say that none of these obvious ways through appear to be happening, so we’re left at the point where the CAM system designer/programmer gets ridiculous requests to write functions to recreate the data of where the holes and simple pockets were cut into a set of triangulated surfaces, a set created an hour ago by someone typing in those same holes and pockets. I mean, why not just print out the design on paper and scan it in using OCR to transfer the data instead of sending it electronically? In another world that would be happening too, locked in by the same peculiar dysfunctional integration within the market: the separate products are sold on the basis of people’s misunderstanding of, and lack of control over, the whole process, which keeps them trapped in a local minimum of inefficiency.

There are techniques for detecting holes and pockets, such as the Hough transform, which one would have to implement. But you don’t want to do it. The holes in the surface/triangulated model are not going to be perfect, so you need to allow a tolerance for detecting them.

Once you go down this route it all turns to bollocks. You have to guess what tolerance the circles are going to fit, and this can change. When you get the hole detector working on one set of examples and let the users try it out, you can be sure it will start to fail because they use a looser tolerance: instead of a hole being modelled by a polygon of 100 sides, they do it with 50. It’s impossible to tell them to simply tighten up their tolerance, because this message won’t get through, or they know better; you see, files get too large if you triangulate them to a tighter tolerance, and they’re not about to reconsider this opinion in light of the fact that computers have 100 times more memory now than when that opinion was formed. Either that or they don’t know how to set the tolerance, and it’s one of the defaults in the file-writer.
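The arithmetic behind that tolerance problem: the maximum deviation of an inscribed n-sided polygon from its circle of radius r is r(1 - cos(pi/n)). Taking a 10mm diameter hole as an arbitrary example:

```python
import math

# Maximum gap between an inscribed n-gon and its circle of radius r.
def polygon_deviation(r, n):
    return r * (1.0 - math.cos(math.pi / n))

for n in (100, 50):
    print(f"{n} sides: {polygon_deviation(5.0, n):.5f} mm")
# 100 sides: 0.00247 mm;  50 sides: 0.00987 mm.
# Halving the side count roughly quadruples the error the detector's
# tolerance has to swallow.
```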

While you can get lots of grief for not detecting the holes which the user can plainly see, it’s an absolute disaster if you get any false positives, ie when you mark as circular something which is not. Suppose the user really wants a 30-sided polygon, but you drill it out smooth. Suppose the designer wants a slightly elliptical hole, but now it’s not possible to specify it because the CAM system knows better and forces it to be circular. To overcome these errors you need an interface for the user to mark as not-circular those holes the system has erroneously marked as circular. There are enough marginal cases that there is no theoretically right answer. The really “good” software will have been debugged case-by-case over years on the basis of user reports, and will be tuned to guess correctly from the habits they’ve come across. This is not good, because then habits get locked in and prevented from changing.

We have 30 different CAM vendors who have all written 30 different hole detectors. There is no competition between them because you have to change your whole CAM system to try a different hole detector. It’s time for some open software so we can all use the same one, and debug it collaboratively as we encounter the types of exceptions that users create. It can’t keep going on like it’s going now for the next 100 years.

Wednesday, January 11th, 2006 at 11:52 pm - - Machining

The first thing to note is that there’s a huge amount of literature on all this stuff which we should be browsing through. I found a relevant article yesterday in the University Library. Unfortunately, the authors published it in The Journal of Manufacturing Systems (Vol 23 #3), so I can’t make a link for you, because this research, which one would assume is publicly funded, is password protected. There are other interesting journals hosted on the same site which might occasionally have relevant articles, but which the academic establishment has seen fit not to let the public have access to, for reasons I’m not going to go into right now.

Anyway, here is the simple idea.

You separate the intelligent planning operation from the dumb manufacturing operation, and make the manufacturing operation reliable enough so that it is unable to do any damage no matter what it gets from the planner.

The result is that the task can safely be separated into these two components: you can buy the manufacturing operation from somewhere, and take risks with the planning operation because its results are not critical.

This is a very general pattern. Consider a team of postmen. The manufacturing operation is executed by the postman. He sets off with a sorted bag of mail. He takes the top letter out of his bag, walks to the address written on the front and delivers it. Repeat until bag is empty, then go home.

Clearly, the mail gets delivered even if the planning operation is very stupid and fills the postmen’s bags at random at the start of the day. It might take all day, with postmen going all over town and crossing each other’s paths as well as their own, but the right end result would be reached.

A first level of planning operation would take all the mail in a postman’s bag and very cleverly sort it so that his rounds would take the shortest route. This is the infamous Travelling Salesman Problem.
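For illustration, that first level of planning can be as small as a greedy nearest-neighbour heuristic (a toy sketch with made-up coordinates, not an optimal Travelling Salesman solution):

```python
import math

# Greedy nearest-neighbour sort of one postman's bag.  Not optimal, just a
# cheap plan -- and, per the pattern above, even a bad plan still gets
# every letter delivered.

def nearest_neighbour_route(start, addresses):
    route, here, todo = [], start, list(addresses)
    while todo:
        nxt = min(todo, key=lambda a: math.dist(here, a))
        todo.remove(nxt)
        route.append(nxt)
        here = nxt
    return route

print(nearest_neighbour_route((0, 0), [(5, 5), (1, 0), (1, 1), (4, 5)]))
# [(1, 0), (1, 1), (4, 5), (5, 5)]
```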

The second level of planning would be to give each postman an optimal set of mail with regard to the route he would follow with such a set of post. This optimal answer is computationally intractable, so we shouldn’t worry about finding it.

Ultimately, with regards to the planning operation, there is no right or wrong answer. All plans work, but some work better than others. Since the post office has to solve the same problem each morning, they work to a template. They assign each postman a route, and all the routes together cover the city. Now for each letter they know exactly which bag it belongs in and what order. The postmen still work to the algorithm outlined above, and take shortcuts on their route if they have a short load.

This pattern, of a functional division between planning and manufacturing modules, is not completely general. For example, the operation of an assembly line cannot be separated in this way: the order in which you lay down the parts determines whether you get the right result.

I don’t happen to know the technical term for this observation, because it’s just an idea which anyone can make up. However, I am sure that CNC machining fits into the pattern if you have the right machining operation.

If you don’t mind how many layers of paint may get applied to the same place, the job of spray-painting an object until it is covered fits this paradigm. You keep doing it from different angles until all the surfaces are coated.

If your machining operation is roughing, then this limitation does not apply. There is no physical effect of machining air. However, classic finishing routines which assume a limited level of stock on the model don’t fit this scheme.

A machining strategy, such as a variant of the Adaptive Roughing generalized to work in different planes in five axis, would be suitable. Once the basic algorithm is complete, and it can rest-rough against any set of previous toolpaths without gouging, over-loading the tool, or colliding the tool holder, the planning operation becomes simple and non-critical. Every plan that it comes up with would work, even stupid ones. Obviously, it doesn’t have to be very complicated to automatically choose directions where walls are aligned. And if you have parts that are similar to one another, you can work to a template. If you don’t know which template fits the best, run the algorithm on all of them and pick the best.
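In that world the planner collapses to something like the following sketch, where `simulate_cost` and the template names are hypothetical stand-ins for a real toolpath engine:

```python
# Once every template yields a safe toolpath, planning is just a search
# for the cheapest one; even a bad pick still machines the part correctly.

def pick_template(part, templates, simulate_cost):
    return min(templates, key=lambda t: simulate_cost(part, t))

# Usage with a made-up cost table standing in for a simulation.
templates = ["x-aligned", "y-aligned", "spiral"]
cost = lambda part, t: {"x-aligned": 40, "y-aligned": 55, "spiral": 48}[t]
print(pick_template("bracket", templates, cost))  # x-aligned
```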

This is all some way off, but it was good to see the idea demonstrated on a few parts in that academic paper I couldn’t give you a link to. There may be other ideas for automating the machining process, but this one is one I can separate into stages which I know all can work. It also gets across the idea that Adaptive Roughing might ultimately be a piece of a larger puzzle. It just has to be available to be used.

Tuesday, January 3rd, 2006 at 11:56 am - - Cave

[cave survey picture]

Lots of work over new year, sitting in Bull Pot Farm while not being forced to cave down Penyghent Pot. Some of it was on parsing the Lords for publicwhip, and getting carried away. Some of it was programming the necklace symbol feature for the Tunnel cave survey program (see above), which can do stream arrows, slope lines, directional flowstone, and boulder walls. And some of my time went on getting the const scallop to use the cutter location code from Denmark, and starting on the scheme of making toolpaths in the right order. And quite a bit of time was wasted writing a follow-up to a book review.

Now quickly running off to London for a few days with Becka who will be at a conference. She doesn’t go south very often.