Freesteel Blog » Machining
Friday, November 21st, 2014 at 1:50 pm - Machining
Sensor readings generally have to be processed before you can use them. Patrick’s explanation of how he filtered the CoffeeMon signal (by picking the maximum value in each time window) suggested that there’s something fishy going on, and that it would be a mistake to treat the readings as subject to mere noise.
Here’s a zoomed-in section of my fridge temperature as it rises by about 0.75 degrees an hour, or 12 units of 1/16th of a degree which my Dallas OneWire DS18B20 digital temperature sensor reads at its maximum 12 bits of resolution.
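As a minimal illustration (not the logging code itself): at 12-bit resolution the DS18B20 reports temperature in units of 1/16 of a degree centigrade, so converting a raw count is a single division.

```python
# The DS18B20's 12-bit reading is in units of 1/16 degree C,
# so the conversion from a raw count is just a division by 16.
def ds18b20_to_celsius(raw_sixteenths):
    return raw_sixteenths / 16.0

# The 12 units described above correspond to the 0.75 degree rise
print(ds18b20_to_celsius(12))  # 0.75
```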
The readings don’t jump between more than two levels when the temperature is stable. You’d expect some more random hopping from signal noise.
Indeed, applying a crude Gaussian filter doesn’t seem to do much good. This (in green) is the best I got by convolving it with a kernel 32 readings wide (equating to about 25 seconds):
import math

filteredcont = [ ]   # cont = [ (time, value) ]
k = [math.exp(-n*n/150) for n in range(-16, 17)]
sk = sum(k)   # (the 1/150 const chosen for small tail beyond 16 units)
k = [x/sk for x in k]
for i in range(16, len(cont) - 16):
    xk = sum(v*kx for (t, v), kx in zip(cont[i-16:i+17], k))
    filteredcont.append((cont[i][0], xk))
The filtered version still has steps, but with a rough slope at the change levels. This filter is very expensive, and not any better than the trivially implementable Alpha-beta filter, which smoothed it like so:
a, b = 0.04, 0.00005   # values picked by experimentation
dt = 0.5
vk, dvk = cont[0][1], 0.0   # start at the first measured value
cont3 = [ ]
for t, v in cont:
    vk += dvk * dt            # add on velocity
    verr = v - vk             # error against the measurement
    vk += a * verr            # pull value towards measured value
    dvk += (b * verr) / dt    # pull velocity in direction of difference
    cont3.append((t, vk))
Tuesday, November 18th, 2014 at 5:54 pm - Machining
I have experienced much joy from this hardware hacking. I must have spent a couple hundred pounds on components. The bits arrive in little plastic trays like very expensive chocolate sweeties. There’s always a thrill when you first wire them up and they actually work perfectly. Not only that, you can have fun with them the next day and the day after that, because they have not turned into poop.
I have a few surpluses by now. I got a realtime clock which is 5V, and a microSD card reader which is 3V3; the Jeenodes run on 3.3V and the normal arduinos are 5V, so I can’t easily use either as the controller for the datalogger. Some of the more idiot-proof breakout boards have converters on them, so they are safe for either voltage. Adrian has warned me to prepare for the coming of the 1.8V standard everywhere soon. I bought a combined ArduLog-RTC Data Logger, which for the moment is not playing ball.
Meanwhile, I’ve made a rule for the data logging of sensor data. Don’t do it. It’s not an end in itself. Too often people take on projects to collect sensor data and upload it to the internet (it’s Tuesday, so the site must be called Xively) with the idea that anyone else in the world could download it and [rolls eyes] “Do whatever they want with it.”
“Whatever they want!”
If you can’t think of a single interesting application for your data, why do you think anyone else in the world will be able to? And even if there was anyone in the world who could do something with it, they’re probably the sort of person who’d have their own data, which is guaranteed to be a lot more interesting to them than yours. There’s a reason we don’t have a CCTV channel of someone else’s back door at night on cable TV.
I’ve formulated a stronger principle:
The value of sensor data is inversely proportional to the product of the time that has elapsed since it was collected and the distance you are from the subject of the data.
Let’s take a simple case.
I’ve been playing around with some geometric signal processing on the Atmega328-based Arduino kit for my run-time line fitting routine, when it occurred to me that I ought to know whether I should be using floats or long ints as the basis for this system.
Short ints are only 2 bytes with a maximum value of 32767, so you’re always overflowing them and it’s not worth the hassle. Therefore you have to use long ints, which are 4 bytes, the same as a float, so saving precious memory is not a factor in this decision.
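To see why the 16-bit short ints overflow so readily, here’s a toy illustration (hypothetical coordinates, sketched in Python rather than the actual Arduino line-fitting code): a least-squares line fit has to accumulate sums of squares, which blow through 32767 almost immediately.

```python
# Hypothetical illustration: accumulating the sum of squared
# x-coordinates, as any least-squares line fit must, passes the
# 16-bit signed limit long before the data runs out.
SHORT_INT_MAX = 32767

xs = range(0, 200, 5)      # 40 sample x-coordinates
sxx = 0
overflow_at = None
for x in xs:
    sxx += x * x
    if overflow_at is None and sxx > SHORT_INT_MAX:
        overflow_at = x    # the coordinate at which a short int would wrap

print(sxx)          # 513500, far beyond a 16-bit int
print(overflow_at)  # 80, less than halfway through the data
```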
Anyways, I woke up this morning and decided I needed some benchmarking.
I’ve begun various arduino experiments here in DoESLiverpool, which necessitated moving closer to Adrian’s desk on account of knowing no electronics, there being bugger all adequate instructions on how to wire anything up.
Oh yes, he says, obviously VCC is standard code for “power in” for that red square in the centre-left of the picture that contains a microSD card and requires 3.3V of power — even though this is nowhere stated and all the other circuits in this kit use 5V.
It’s not much of a standard when this is immediately contradicted by the thin thing on the bottom left of the picture (called a Jeenode) which labels its corresponding power pin “PWR“, and the low-power bluetooth blue board on the middle of the white panel which calls its power pin “VIN” for “voltage in”, and the red “real-time clock” thing above it which labels its power pin “5V“, which is so much better because: (a) it is immediately understandable by the man in the street, (b) it conveys the crucial information about the level of voltage required, and (c) it uses one fewer character when the labels are already too small to read without a magnifying glass which I do not have but should get.
So WhyTF do they use any of those other codes?
Ah, you might say, wouldn’t your logic require sometimes writing “3.3V“, which is four characters?
Well, no, actually, because the thing in the middle with the USB plug has two power pins on it, one called “5V” and the other called “3V3“, so they were forced to be sensible.
Of course, I’ll be proved wrong when I find a peripheral that contains both “VCC” and “VIN” pins.
Don’t get me started on all the other pin names, especially on the different arduino boards on which they’ve failed to mark out these all-important SPI pins that are either pins 11, 12 and 13, or pins 4, 1 and 3, or pins 51, 50 and 52, or you have to look it up on this handy diagram if you have a Jeenode.
I think electronics got off to a bad start from the very beginning when they decided that current flows in the opposite direction to the electrons. From then on it’s been seven human generations of miscodings and mistakes that have been adopted as conventions, resulting in something not unlike spelling in the English language — ie you can’t see the problem once you have gotten used to it.
I’m back at work on my SLAM based laser scanner. Failure is not an option. Yet it feels like there is a real risk of it.
One of the steps in the process is to displace all the inertial unit measurements by a small error term in order to minimize the error in the correspondences.
More simply, we have a matrix A of height m and width n (m > n), a column vector b of height m, and we want to fill in the column vector x of height n such that A x = b is almost true.
There is no exact solution, so we look for a least squares answer, where |A x − b|² is minimal.
Luckily, there is a function scipy.linalg.lstsq() which does the job.
Let’s consider a simple example of a 3×2 matrix:
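A worked version of that 3×2 case might look like this (the numbers are my own illustration, not from the original post):

```python
import numpy as np
from scipy.linalg import lstsq

# An overdetermined 3x2 system: three equations, two unknowns,
# with no exact solution.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 4.0])

x, residues, rank, svals = lstsq(A, b)
print(x)  # the least squares solution, [4/3, 7/3]
```

The returned x minimizes |A x − b|²; the second return value gives the residual sum of squares.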
I just got myself a new laptop and installed Ubuntu-Linux on it. Scares the hell out of me the speed with which I got it up and running. I am now lost in a sea of code. It’s like walking into a public library after you’ve been out in the sticks for a month with only two dog-eared issues of the Reader’s Digest to keep you company. There’s almost too much here. I want to read all of it. And any book or manual you do pick up and spend an hour with means there’s another ten thousand you’ve not picked up that you should have been reading.
Anyways, while doing my apt-cache searching stuff for stuff, I noticed stimfit – Program for viewing and analyzing electrophysiological data show up in the search for scipy.
It appears to take datasets of electro-potential readings from a single neuron at every tenth of a millisecond and then fit exponential decay curves [the thick grey line] to selected sections from the (negative) peak to the baseline.
A bit like a temperature sequence, eh?
Oh, and it has a funky Python shell built into it to help you automate the analysis functions. What’s not to like?
To make up for my disorganization with the data collected at my house (in that it got lost, wasn’t sampled frequently enough and didn’t cover the winter), I got this lovely temperature sequence from megni, who took a reading inside their cottage every 60 seconds, to analyze for my exponential decay theory.
My theory is that by fitting exponential decay curves to the data I would get some invariant values relating to the fabric of the building that would change when you improved its insulation characteristics (eg draught-proofing a window).
The first step is to chop this data up into the sections where the temperature is dropping. It took a while to get some working code, but it came out like this:
gw = 30   # half an hour (one sample per minute)
sampleseqs, sampleseq = [ ], None
for i in range(gw, len(samples)):
    vd = samples[i-gw] - samples[i]   # positive if past temp higher
    if vd >= 0:
        if not sampleseq or vd >= mvd:   # restart seq at bigger difference
            sampleseq = samples[i-gw:i]
            mvd = vd
        sampleseq.append(samples[i])
    elif sampleseq:
        sampleseqs.append(sampleseq)
        sampleseq = None
Suppose we have a temperature sequence like this one gathered from last September in our kitchen when we started to put the fire on in the late evenings. The central heating wasn’t on yet, so there’s no second temperature “bump” in the mornings. (The three bumps in the first peak are the three logs we put on the fire.)
You get an appreciably square looking graph by plotting the data as units of an hour in X, and units of a degree centigrade in Y, so a 45-degree slope would represent a difference of 1 degree per hour, which is the right scale of change in our environment.
My immediate observation from the first moment I saw such a temperature sequence (roughly in the middle of last summer when we visited megni in North Wales) was that these are exponential decay curves.
The trick is to find them and fit them.
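One way to do the fitting — my own sketch, not the code from the post, and it assumes the baseline temperature the room is decaying towards is already known, which is a real simplification — is to subtract the baseline, take logs, and fit a straight line; the time constant then falls out of the slope.

```python
import math

def fit_decay(times, temps, t_base):
    # Fit T(t) = t_base + A * exp(-t / tau) by linearizing:
    # log(T - t_base) = log(A) - t / tau, then an ordinary
    # straight-line least-squares fit recovers tau from the slope.
    ys = [math.log(T - t_base) for T in temps]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(t * t for t in times)
    sxy = sum(t * y for t, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    tau = -1.0 / slope         # the time constant, the hoped-for invariant
    A = math.exp(intercept)    # initial height above the baseline
    return tau, A

# Synthetic cooling curve: tau = 5 hours, starting 10 degrees above a
# baseline of 15 degrees, sampled every half hour
ts = [0.5 * i for i in range(20)]
Ts = [15 + 10 * math.exp(-t / 5) for t in ts]
tau, A = fit_decay(ts, Ts, 15)
print(round(tau, 3), round(A, 3))  # 5.0 10.0
```

The recovered tau is the kind of fabric-of-the-building invariant the theory is after; on real data the unknown baseline and the noise make this considerably harder.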
Tuesday, October 14th, 2014 at 10:42 am - Machining
Friday evening I made a visit to the Berlin Fab-Lab open day. They’ve got a heck of a lot of 3D printers in a small space. All kinds of colours and materials, from brittle and hard to rubbery plastic. I think they also build their own kits. (I asked them if they’d heard of the Autodesk Spark, and they hadn’t. It’s great to be out in the big wide world!)
But just as they find in DoESLiverpool, the 2D laser cutter gets the most use, because we can design things in 2D for a fraction of the effort.
The weekend was blighted by a desperately bad headache, entirely unlike a hang-over, that confined me to a dark room. (Hang-overs tend to release at around 8pm the following day for me.)
On Sunday afternoon I started to do some work.
Firstly, I discovered that all my laser scanning data was lost on the Autodesk computer which I gave back last week for disposal, so I can’t work on that software till I get some more, probably by going to Bristol and making the device work again.
Then I discovered that almost all my temperature sequence data was also lost on that same computer — although I do have 6 days of records from some time last September before the cold weather properly set in and we got some actual useful data. Whether I can find it anywhere on an SD card, or I have to collect some data all over again with a new Arduino set-up of my own making will have to wait till I get home.
For convenience, I’m putting the maths of exponential decay curves into a separate blogpost.
Here’s a picture of some not very spooky pumpkins in the local supermarket.
Also, I spied a poster for the newly opened Happylab Salzburg. Might be one for a drop-in next expo.
Maybe these are like computer clubs were back in the 1970s. It’ll all make sense in hindsight one day.
Thursday, October 9th, 2014 at 3:43 pm - Machining
I’m casting around for some little long term geometric projects which I could be good at. I’m very bad at the sysops stuff and compilers (which seems to be a breeze for every other hacker in the world). Plotting the geometry which you have calculated is also a drag.
For me, twistcodewiki does it all.
I followed the instructions to compile OpenVoronoi on this very small under-powered linux netbook I have kicking around, and got twistcodewiki to work. Here is me entering a polygon and plotting a voronoi structure from it:
The code is as follows: