Freesteel Blog » 2009 » September

Tuesday, September 29th, 2009 at 5:06 pm - - Machining 3 Comments »

Things are moving fast over in America-land, with the now litigation-free Celeritive Volumill being licensed into Gibbscam:

“VoluMill is the first third-party product that my resellers are talking about and asking for,” says Bill Gibbs, founder of Gibbs and Associates. “Early Gibbs users are raving about the impact VoluMill is having on their operating costs. The VoluMill option will become part of Gibbs’ standard offering and will be promoted, shown and demonstrated at trade shows and in webinars.”

Everyone is excited about this new thing. As they explain in their video exhibiting their new three-axis rest roughing capability:

“Volumill begins by roughing the core using the maximum depth of cut of 0.5. It then reduces these steps to the final step height of 0.1. Volumill is unique in that it doesn’t simply machine the entire part in 0.1 increments.”

Well, they can call it unique if they like, but it’s a lie, because we’ve had it since 2005.

Details, details.

I find this traditional corporate-style literature astoundingly irritating, the way everyone has to pretend theirs is the only product in the world. No other thing exists.

It makes it difficult to compete and improve by competition. How are customers supposed to assemble the information and compare their options, when the only stuff that’s ever published is this nonsense?

I’m still searching in vain for the alleged volumill patent filing. Any tip-offs? Like an application number?

Meanwhile, we’ve got some toolpath examples from Mastercam’s Dynamic Milling. I’ll be examining them in techno-geometric detail at some point soon when I’ve got the inclination. When I get some Volumill toolpath samples, I’ll be able to study and report on them as well.

Monday, September 28th, 2009 at 11:41 am - - Vero 2 Comments »

The Freesteel blog has a history of being interested in the Vero Software company, among others.

Here is an unusually interesting Vero Software 17 September 2009 press release:

The Board of Vero notes the recent movement in the Company’s share price. The Company has received a provisional approach from a financial institution that may or may not lead to an offer being made for the Company. Discussions are at a very early stage and there can be no certainty that an offer will be forthcoming.

In accordance with Rule 2.10 of the City Code on Takeovers and Mergers, the Company confirms that it has 37,261,166 Ordinary Shares of 0.5p each in issue and admitted to trading on the London Stock Exchange under UK ISIN code GB0002678273

Here’s what that looks like:


The daily breakdown in volume is 173,907 at £14.50 on 15 September, 366,264 at £20.50 on 16 September, and 164,969 at £18.25 on 17 September from the News Analysis tab on the London Stock Market page linked to from here.

I don’t know exactly what the numbers mean (being a mere software engineer), but it looks like a blip, particularly in light of the following press release:


Full name of person(s) subject to the notification obligation (iii): Foresight 3 VCT plc
Date of the transaction: 16 September 2009
Date on which issuer notified: 17 September 2009
Threshold(s) that is/are crossed or reached: 3%

This information hasn’t yet been entered into the table here:

Shareholder              Ordinary 0.5 pence shares   % of issued share capital
P. Gyllenhammar          8,253,722                   22.2
D. A. Babbs              4,903,380                   13.2
E. Galardo               3,440,474                   9.2
Artemis AIM VCT plc      2,325,582                   6.2
Baronsmead VCT 2 plc     2,325,582                   6.2
Baronsmead VCT plc       1,600,000                   4.3
Baronsmead VCT 3 plc     1,395,349                   3.7
Noble Enterprise VCT     1,330,233                   3.6
M. Cignetti              1,259,866                   3.4

Now that I have archived it, when they update the webpage it’ll be possible to infer who sold the shares to Foresight. Later in the year, if I stay on this story, Foresight may disclose how much they paid for Vero stock in one of their brochures.

Speaking as a mere Software Engineer, I know that the further you get away from the coal face, as it were, the less you know. The Vero management — according to their website materials — lead you to believe that they have an extremely limited interest in the software engineering (all they ever talk about is financial engineering).

The VCT investors, being even further from the centre of activity, almost certainly know nothing about the state of the company's software development. And there's nowhere for them to find that information out, because the financial structuring is entirely walled off from the actuality.

When you know nothing, all you can do is follow others. A good lot to track are the company directors, as they ought to know a thing or two about the value:


As you can see, the company poured a load of shares into the pockets of the directors in 2004, and they haven’t sold them yet. Either they have confidence in the company, or they are not selling to prevent others losing confidence.

If I had time to go through all their Annual Accounts again, I’d be able to tell whether any of the programmers have been given shares. (I don’t think so.)

The programmers are going to know if they are being beaten on features by their competitors, or whether they have just thought of a brilliant killer feature; in that case they might buy some shares in advance, because they know (they hope) that their work is going to make the company a lot more valuable.

Somehow I don’t find that story very convincing, even as an idealistic argument.

As has been demonstrated conclusively over the last few years, the whole financial engineering methodology in the western world is bunk. It has resulted in a severely damaging misallocation of wealth away from places where it could result in a lot of value being created, and towards socially useless activities.

In the Vero company, the distinction between such activities is stark. Value (for everyone concerned) is created by the debugging and improving of the software, which they sell and which is widely used, whilst farting around with share dealings and corporate technicalities (though probably much harder to do) is arguably totally useless.

Unfortunately, you can get a lot more power and money by focussing all your efforts on the latter while virtually ignoring the former.

There hasn't been much public debate as to why the financial system doesn't work, in the sense that it diverts resources away from where they would be useful. I am not interested in moralistic ideological arguments about ownership rights, freely entered into contracts, capitalist libertarian theory and related bollocks. The question is: does it actually perform well? Can we introduce rules and regulations so that the results are not so self-evidently poor?

Often the diagnosis of the problem centres on the presence of conflicts of interest. Brutal transparency is the current prescription, tried only when the corporate-owned government is broke and can no longer treat the symptoms with bail-out money.

But I don’t think that’s the full story.

I think the root cause of the problem is that those who disburse the money have no knowledge and no information about the businesses they are investing in. This makes their decisions easy to cheat on the one hand. And on the other — when they are not being cheated — likely to be based on bollocks. They may as well be picking lottery numbers.

But, you say, people don’t have time to check out every company they invest in to the detail necessary. It’s too complicated and fast moving. And they have no right to speak to the employees directly.

And I say, Exactly.

Perhaps it’s a consequence of the extreme concentration of wealth today. The few who have control of most of the wealth don’t have any time to consider where they are putting the money. So it gets done poorly and with little concern for the effect.

And the people who do know how stuff works — for example, the employees in the businesses — are not at any time represented in the market place.

So why is anyone surprised when there is so much systematic misallocation of wealth?

Thursday, September 24th, 2009 at 9:14 am - - Whipping 1 Comment »

Spent all night compiling the documents and filling in the form. Text included below. Most of the time was spent pasting in the text from the correspondence and editing out all those crappy line breaks you get from the crappy email conventions we have been stuffed with. Now back to work.

Complainant: Julian Todd
Public Authority: Metropolitan Police Service
Complaint: Refusal to disclose durations of Section 44 Terrorism Act authorisations under FOI Act

This complaint concerns an FOI request I made to the Metropolitan Police Service on 18 May 2009 for:

* the list of durations of police authorisations for Section 44 Terrorism Act (stop and search powers),
* whether it was a renewal, and
* the time it took for the Secretary of State to confirm the authorisation, if it was confirmed.

My request was refused on the basis that (a) it would take too long to manually compile the data, and (b) it would reveal intelligence about geographical areas that could be useful to terrorists.

Neither of these reasons is valid, considering facts obtained through earlier FOI requests. Specifically, I know that (a) the information is in a database with the necessary fields to satisfy my request without manual effort, and (b) my request had no geographical component.

The original correspondence of this request up to the final refusal on 28 August is available at:
I have reproduced it in document sec44durationsMPS.rtf.

Background to Section 44

Section 44 of the Terrorism Act allows police forces around the country to self-authorise stop and search powers if a ranking officer “considers it expedient for the prevention of acts of terrorism.”

In order to provide oversight to these powers, Sections 44 to 46 stipulate the process by which such authorisations must be communicated to the Secretary of State and officially confirmed if they are to run for more than 48 hours. Authorisations must be renewed every 28 days. At no point in this process are any security bodies mentioned.

Although statements and reports have been made to Parliament regarding the numbers of people searched under this Act, I have not seen any information about the quantity, frequency and extent of these areas, how much evidence is required for the police to authorise one, and whether or not the Home Secretary has ever turned down an authorisation in the last nine years.

There is a perception among some who have been stopped under these powers that they are a tool of convenience rather than expediency in relation to a terrorist threat.

Prior FOI Requests to the Home Office

On 10 July 2008 I outlined a request to the Home Office for all the information that Sections 44 to 46 Terrorism Act 2000 requires them to receive when a police authorisation is made.

The request was refused on 1 September 2008 on grounds of cost on the basis that all the records were on paper and not in a database.

On 5 September 2008 I made another request to the Home Office for the bureaucratic procedures and form paperwork used in the process of receiving a notification from a police force, recording it, and obtaining confirmation from the Home Secretary.

I outlined in my letter that my underlying intention was to ascertain at what point information was entered into a database from which statistical records could be derived for the purpose of informing me, the Secretary of State, or Parliament, should anyone be concerned with the overall functioning of the Act.

I received a refusal from the Home Office on 26 November 2008 on the basis that such information would “prejudice the free and frank exchange of information and the effective conduct of public affairs.” The internal review confirming this answer was finally completed on 2 April 2009.

FOI Requests to the Metropolitan Police Service

From a report by Lord Carlile on the operation of the Act, I was able to discover that the procedure for communicating notifications of Section 44 authorisations to the Secretary of State involved sending them to the National Joint Unit of the Metropolitan Police Service.

On 28 November 2008 I requested blank copies of forms required for compliance with the Act, the structure of the database they recorded the notifications within, and the contents of the database from the Metropolitan Police Service.
The correspondence of this request is reproduced in document sec44authorisationsMPS.rtf.

The request was refused on 17 December 2008 on the basis of incomplete arguments involving national security and law enforcement.

I sent my case for a review on 31 December 2008 and finally received a further response on 14 May 2009 overturning parts of the refusal and providing me with hard evidence that there was indeed a database into which information about all Section 44 notifications was being entered.

(My subsequent 18 May request obtained a screenshot of the user interface of this database:)
The refusal included the lengthy argument that disclosure of any information about the geographical extent of these areas was going to be detrimental to the public interest.

This resulted in my 18 May request — about which I am making the complaint — for the contents of this database excluding any geographical or intelligence information, and only requesting knowledge about the durations of the requests and how regularly the Secretary of State confirms them.

Thursday, September 10th, 2009 at 11:34 am - - Machining

This is a follow-on from the Diamond Angle article about useful encoding of plane angles without trigonometry.

The same trick can be applied to 3D vectors using a regular octahedron instead of a diamond.

The corners of the regular octahedron are:

  { (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1) }

Another way of writing it that shows off how rubbish mathematical notation can be sometimes is:

  { (x, y, z) | x, y, z in {-1, 0, 1} where exactly two of the values are 0 }

The corners of the cube, on the other hand, can be expressed as:

  { (x, y, z) | x, y, z in {-1, 1} }

Let’s start with the conventional angular encoding of a vector into latitude and longitude:

from math import sqrt, atan2

def RadianPolarAngle(x, y, z):
    r = sqrt(x*x + y*y)
    return (atan2(x, y), atan2(r, z))

This is obviously decoded by:

    (x, y, z) = (sin(a)*sin(b), cos(a)*sin(b), cos(b))

The ranges for the two encoding angles are 0 < a < 360 and 0 < b < 180, but the disadvantages include the use of slow trigonometry, and the existence of two poles where the encoding density is high.

An uneven density means an unnecessary loss of precision:

Suppose I encode this pair of values into two bytes (16 bits) by rescaling them like so:

   int(a/360*256) * 256 + int(b/180*256)

Then the directions near b=0 and b=180 will be preserved far more accurately than the directions near b=90. This represents an inefficient waste of my small and finite number of encoding values.
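This waste can be measured directly. The sketch below (decode and step_dist are my own hypothetical helpers, not part of any real encoder) compares the spacing between neighbouring encoded directions near a pole with the spacing at the equator:

```python
from math import sin, cos, radians

def decode(ia, ib):
    # hypothetical inverse of the 16-bit packing: bytes back to a direction
    a = radians(ia / 256 * 360)   # longitude
    b = radians(ib / 256 * 180)   # colatitude
    return (sin(a) * sin(b), cos(a) * sin(b), cos(b))

def step_dist(ib):
    # distance between two neighbouring longitude codes at latitude byte ib
    p, q = decode(0, ib), decode(1, ib)
    return sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5

# near the pole (ib = 1) neighbouring codes sit over ten times closer
# together than at the equator (ib = 128): the density is very uneven
assert step_dist(1) < step_dist(128) / 10
```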

Here is a picture of the point distribution stepping every 5 degrees:


The three blue lines are the x, y, and z (pointing up) axes.

The slow trig functions problem can easily be solved by substituting DiamondAngle in place of atan2 so that 0 < a < 4 and 0 < b < 2. This gives a picture like so:

[Images: diamoctoarg, diamoctoargn]

The right hand picture plots the vectors without normalization so you can see how they lie on the surface of an octahedron. But looking at the left hand picture, you can see a slightly higher density in line with the x and y axes (due to the DiamondArg encoding), as well as the very high density near the poles (0,0,1) and (0,0,-1).
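For reference, the plane-angle encoder from the earlier Diamond Angle article can be sketched like this (my reconstruction, using atan2(y, x)-style argument order, so directions map monotonically onto [0, 4) without any trigonometry):

```python
def DiamondAngle(y, x):
    # map direction (x, y) onto the perimeter of the diamond |x|+|y| = 1,
    # one unit of pseudo-angle per quadrant; undefined at (0, 0)
    if y >= 0:
        return y / (x + y) if x >= 0 else 1 - x / (-x + y)
    else:
        return 2 - y / (-x - y) if x < 0 else 3 + x / (x - y)

# the four axis directions land on whole numbers
assert [DiamondAngle(0, 1), DiamondAngle(1, 0),
        DiamondAngle(0, -1), DiamondAngle(-1, 0)] == [0, 1, 2, 3]
```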

A better solution is to extend the concept of the DiamondAngle to the OctahedronAngle by breaking the 3D direction into octants (rather than quadrants) and encoding the intersection with the triangle spanning the relevant octant.

We can number the octants between 0 and 7 like so:

    octantnumber(x, y, z) = (x > 0 ? 0 : 1) + (y > 0 ? 0 : 2) + (z < 0 ? 0 : 4)

and then, given x, y, z > 0, we can encode the vector’s intersection with the triangle spanning the octant:

    triangle[ (1,0,0), (0,1,0), (0, 0, 1) ]

like so:

    (a, b) = ( x/(x+y+z), y/(x+y+z) )

where a, b > 0 and a + b < 1

The inverse function (subject to normalization) is:

   (x, y, z) = (a, b, 1 - a - b)
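A quick sketch of this single-octant encode/decode pair (octant_encode and octant_decode are hypothetical names of mine), checking that decoding recovers the original direction up to scale:

```python
def octant_encode(x, y, z):
    # valid for x, y, z > 0: intersect the ray with the triangle x + y + z = 1
    s = x + y + z
    return (x / s, y / s)

def octant_decode(a, b):
    # inverse, subject to normalization
    return (a, b, 1 - a - b)

v = (0.2, 0.3, 0.6)
a, b = octant_encode(*v)
w = octant_decode(a, b)
s = sum(v)
# w is v rescaled onto the triangle, i.e. the same direction
assert all(abs(wi - vi / s) < 1e-12 for wi, vi in zip(w, v))
```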

So, the first version of our encoding is obvious: We can encode the number of the octant in 3 bits, and the direction within the chosen octant with two positive values whose sum is less than 1.

But we can make this more compact.

There are 8 such right-angled triangles in the square defined by -1 < a, b < 1, so we could use the number of the triangle to number the octant:

    trianglenumber(a, b) = (a > 0 ? 0 : 1) + (b > 0 ? 0 : 2) + (abs(a) + abs(b) < 1 ? 0 : 4)

and substituting a valid range for a and b like so:

    (a, b) = (abs(a) + abs(b) < 1 ? (abs(a), abs(b)) : (1 - abs(a), 1 - abs(b)) )
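This numbering can be sanity-checked with a small sketch (my own, assuming the quadrant terms test a and b), confirming the eight triangles receive distinct numbers 0 to 7:

```python
def trianglenumber(a, b):
    # number the 8 right-angled triangles tiling the square -1 < a, b < 1:
    # two bits for the quadrant, one bit for the inner/outer triangle
    return (0 if a > 0 else 1) + (0 if b > 0 else 2) + (0 if abs(a) + abs(b) < 1 else 4)

# one sample point inside each of the eight triangles
samples = [(0.3, 0.1), (-0.3, 0.1), (0.3, -0.1), (-0.3, -0.1),
           (0.8, 0.5), (-0.8, 0.5), (0.8, -0.5), (-0.8, -0.5)]
assert sorted(trianglenumber(a, b) for a, b in samples) == list(range(8))
```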

What we are working towards here is a wrapping of the 2×2 square onto the octahedron, trying to achieve the same effect as when we wrapped the line segment [0, 4] around the diamond.


But can we make it continuous?

We can get continuity on the top pyramid of four triangles easily. For z > 0:

   OctahedronArg(x, y, z) = (x / (abs(x) + abs(y) + z), y / (abs(x) + abs(y) + z)) 

And the inverse (subject to normalization) is:

   (x, y, z) = (a, b, 1 - abs(a) - abs(b))

Here is the picture of it.


You can see the DiamondArg-style slight elevation in density in the planes of the x, y, z axes, but it is nothing like the huge density near the (0,0,1) pole when we do it the traditional way.

Maybe a slightly better visualization involves imposing a checkerboard pattern by discarding the points where:

   (int((a + 2) * 10 + 0.4) + int((b + 2) * 10 + 0.3)) mod 2 == 1

to make it out of phase with the axes and plot how it crosses them.


Extending this down to the lower pyramid is less straightforward. The outer triangles of the quadrants (where abs(a) + abs(b) > 1) have to equate their open edges, preferably without discontinuities.

The x, y > 0 double-octant spans the upper and lower pyramids like so (see middle diagram above):

   (a, b) = (x + y < 1 ? (x / (x + y + z), y / (x + y + z)) : (1 - y / (x + y - z), 1 - x / (x + y - z)))

The x and y coordinates seem to reverse, because on the x + y = 1 line (x, y) = (1 - y, 1 - x).

The inverse function, for a, b > 0 is:

   (x, y, z) = (a + b < 1 ? (a, b, 1 - a - b) : (1 - b, 1 - a, 1 - a - b) )

It’s possible to continue for the final 3 octants and have something that is as good as it’s going to get:

def InverseOctahedronAngle(a, b):
    h = abs(a) + abs(b)
    if h < 1:
        return (a, b, 1 - h)

    if a > 0 and b > 0:
        (x, y) = (1 - b, 1 - a)
    if a > 0 and b < 0:
        (x, y) = (1 + b, -1 + a)
    if a < 0 and b > 0:
        (x, y) = (-1 + b, 1 + a)
    if a < 0 and b < 0:
        (x, y) = (-1 - b, -1 - a)

    return (x, y, 1 - h)

For the sake of completion, the encoding function is:

def OctahedronAngle(x, y, z):
    s = abs(x) + abs(y) + abs(z)
    if z > 0:
        return (x / s, y / s)

    if x > 0 and y > 0:
        return (1 - y / s, 1 - x / s)
    if x > 0 and y < 0:
        return (1 + y / s, -1 + x / s)
    if x < 0 and y > 0:
        return (-1 + y / s, 1 + x / s)
    if x < 0 and y < 0:
        return (-1 - y / s, -1 - x / s)
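As a sanity check, here is a self-contained round-trip test of the two functions over random directions in all eight octants (the definitions are repeated so the snippet runs on its own; boundary directions with a zero coordinate are skipped, matching the open inequalities above):

```python
import random

def OctahedronAngle(x, y, z):
    # encode a 3D direction as a point (a, b) in the square -1 < a, b < 1
    s = abs(x) + abs(y) + abs(z)
    if z > 0:
        return (x / s, y / s)
    if x > 0 and y > 0:
        return (1 - y / s, 1 - x / s)
    if x > 0 and y < 0:
        return (1 + y / s, -1 + x / s)
    if x < 0 and y > 0:
        return (-1 + y / s, 1 + x / s)
    return (-1 - y / s, -1 - x / s)

def InverseOctahedronAngle(a, b):
    # decode back to a vector on the octahedron surface |x|+|y|+|z| = 1
    h = abs(a) + abs(b)
    if h < 1:
        return (a, b, 1 - h)
    if a > 0 and b > 0:
        x, y = 1 - b, 1 - a
    elif a > 0 and b < 0:
        x, y = 1 + b, -1 + a
    elif a < 0 and b > 0:
        x, y = -1 + b, 1 + a
    else:
        x, y = -1 - b, -1 - a
    return (x, y, 1 - h)

random.seed(1)
for _ in range(1000):
    v = tuple(random.uniform(-1, 1) for _ in range(3))
    if 0.0 in v:
        continue                      # open boundaries are left undefined
    w = InverseOctahedronAngle(*OctahedronAngle(*v))
    s = sum(abs(c) for c in v)
    # decoded vector equals the original scaled onto the octahedron surface
    assert all(abs(wc - vc / s) < 1e-9 for wc, vc in zip(w, v))
```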

I’ve been using greater than signs (>) rather than greater than or equal signs, because I don’t know the code for them, and because these are the boundaries where it gets interesting.

Let’s see how everything stacks up with regards to continuity across the z = 0, abs(a) + abs(b) = 1 boundary in the InverseOctahedronAngle(a, b) function.

  a > 0, b > 0  ==>  a + b = 1   ==>  (1 - b, 1 - a) = (a, b)
  a > 0, b < 0  ==>  a - b = 1   ==>  (1 + b, -1 + a) = (a, b)
  a < 0, b > 0  ==>  -a + b = 1  ==>  (-1 + b, 1 + a) = (a, b)
  a < 0, b < 0  ==>  -a - b = 1  ==>  (-1 - b, -1 - a) = (a, b)

That accounts for all the internal boundaries of the -1 < a, b < 1 square. But the 8 half edges of the square equate to the 4 lower edges of the octahedron, and the four corner points equate to the bottom vertex of the octahedron at (0, 0, -1).

Let’s start with those corners where a, b in { -1, 1 }:

  a > 0, b > 0  ==>  (a, b) = (1, 1)    ==>  (1 - b, 1 - a) = (0, 0)
  a > 0, b < 0  ==>  (a, b) = (1, -1)   ==>  (1 + b, -1 + a) = (0, 0)
  a < 0, b > 0  ==>  (a, b) = (-1, 1)   ==>  (-1 + b, 1 + a) = (0, 0)
  a < 0, b < 0  ==>  (a, b) = (-1, -1)  ==>  (-1 - b, -1 - a) = (0, 0)

Now consider b = 1. For t > 0, I would hope for (t, 1) to equate to (-t, 1):

  a > 0, b > 0  ==>  (a, b) = (t, 1)   ==>  (1 - b, 1 - a) = (0, 1 - t)
  a < 0, b > 0  ==>  (a, b) = (-t, 1)  ==>  (-1 + b, 1 + a) = (0, 1 - t)

So it works!

The same applies to the halves of the other three sides of the square, proving that we have a good mapping and are not going to have to worry too much about precisely knowing which octant a point is within (as implied by all those missing greater-than-or-equal to signs in the above snippets).

It also suggests we could do a pretty good distance function in this space that would satisfy the metric space rule, though this would need to be checked out.

This function was developed for the scallop bisector algorithm in order to pack the directions into one 32 bit word.

A long time ago, when I was working for NC Graphics, I solved the same problem when designing a depth buffer structure by encoding (x, y, z) into three bytes, mapping each -1 < x < +1 into an integer less than 256. The fourth byte was the colour.

The following year OpenGL became available and we started writing Machining Strategist.

Please post any errors in the comments below.