## Detecting thermal temperatures from a hang-glider

Saturday, September 24th, 2016 at 1:51 pm

I’ve been flying around with my data logger for two years now and I have only this week developed a theory of how to process the temperature data. I might have got to it sooner had I known enough physics or bothered to test the actual response time of my Dallas temperature sensor, which was so abysmal I had to start doing something about it. To be fair, it did take me over a year to even shelter it from the sunlight.

### Part 1: *The responsiveness of the temperature sensor is 0.09*

The apparatus involved running a vacuum cleaner against the sensor housing to create a fast airflow while drawing the air either from the room or down a tube containing a bag of ice.

This was the graph it made, holding the intake near the ice and away from the ice for about 40 seconds at a time:

The curves you see are exponential decay curves, as the temperature sensor cools to the cold air temperature, then warms to the room air temperature, quickly at first and then slowly as it converges.

If the conductive responsiveness of the sensor is **r** degrees per degree per second, then if the real air temperature is **c** and the measured air temperature is **cM[t] < c**, one second later the measured air temperature will be:

cM[t+1] = cM[t] + (c - cM[t])*r

The analytical solution to this equation is:

cM[t] = c - (c - cM[t0])*exp(-(t - t0)*r'),  where r' = -log(1 - r)

And using the exponential decay factor of **r’** means that for a time step of **dt**:

(1)    cM[t+dt] = cM[t] + (c - cM[t])*(1 - exp(-r'*dt))

These equations are easy to prove.
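A quick numeric check of the equivalence (the temperatures and step count here are my own choosing, not logger data): iterating the one-second relation should land on the same value as the closed-form solution with r' = -log(1 - r).

```python
from math import exp, log

r = 0.09                 # per-second responsiveness
rp = -log(1 - r)         # exponential decay factor r'
c, cM0 = 20.0, 10.0      # real air temperature and initial sensor reading

# iterate the one-second relation for 40 seconds
cM = cM0
for _ in range(40):
    cM = cM + (c - cM)*r

# closed-form solution at t = 40
cM_closed = c - (c - cM0)*exp(-40*rp)
# cM and cM_closed agree to floating-point precision
```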

The next task is to fit the measurements to the theoretical curves. For this I’m going to use the scipy.optimize.minimize() function to find the three parameters that determine the shape of the exponential curve that best fits by least squares:

```python
from math import exp
from scipy.optimize import minimize

# tcmonotonic = [ (t0, tmp0), (t1, tmp1), ... ] - section of curve to fit
def f(t, r, t0, c):
    return exp(-(t - t0)*r) + c

t0, c0 = tcmonotonic[0]
def fun(X):
    return sum((f(t - t0, X[0], X[1], X[2]) - tmp)**2
               for t, tmp in tcmonotonic)

res = minimize(fun, x0=(0.1, 0, c0), method="Nelder-Mead")
```
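To check that this fit actually recovers the decay rate, here is a self-contained run on synthetic data (the curve and its parameters are my own invention, not measurements from the logger):

```python
from math import exp
from scipy.optimize import minimize

# synthetic cooling curve: converges to 21 degrees with decay rate 0.12 per second
tcmonotonic = [ (t, 21.0 + exp(-t*0.12))  for t in range(60) ]

def f(t, r, t0, c):
    return exp(-(t - t0)*r) + c

t0, c0 = tcmonotonic[0]
def fun(X):
    return sum((f(t - t0, X[0], X[1], X[2]) - tmp)**2
               for t, tmp in tcmonotonic)

res = minimize(fun, x0=(0.1, 0, c0), method="Nelder-Mead")
# res.x[0] converges near the true rate 0.12 and res.x[2] near the asymptote 21.0
```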

Here we overlay the best fit curves onto the rising and falling sections of the temperature time sequence, and record the **r’** value of exponential decay:

To simulate this phenomenon, if I program in what I believe was the real air temperature being sucked across the sensor by the vacuum cleaner as I moved the ice cubes towards and away from the intake, the graph would look like this:

Applying the exponential decay relation (1) to this sequence gives:

You can see there is a good similarity with the real measurements when we overlay them in this diagram:

(The discrepancies are easily explained by the apparatus slipping in my hand, the flow of air not being consistent on each trial and the square wave not being accurate.)

I realize now that the responsiveness of this temperature sensor is dreadful. If the temperature changes by 2 degrees it takes at least 8 seconds to change by one degree. That’s enough time for the glider to travel 100 metres and be a long way out the other side of the thermal.
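That 8-second figure is just the half-life implied by the decay factor: the sensor closes half of any temperature gap in log(2)/r' seconds.

```python
from math import log

r_prime = 0.09                  # decay factor measured in Part 1
half_life = log(2)/r_prime      # seconds to close half of any temperature gap
# half_life is roughly 7.7 s: a 2-degree gap closes its first degree in about 8 seconds
```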

I did waste some time seeing if it was possible to deconvolve the temperature sequence by solving equation (1) for the target temperature **c** at every point to produce this graph:

The deconvolution relies on the fact that, if the real temperature is steady, the rate of change in the detected temperature predicts where the value is going to asymptote to. You can see the signal there only because I’m holding the temperature steady for 40 seconds at a time, which is not realistic.
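For reference, here is a minimal sketch of that inversion, reconstructed by me from relation (1) (the function name and the (milliseconds, temperature) pair format are my assumptions):

```python
from math import exp

def deconvolve(tvs, rp):
    # invert relation (1): from consecutive measured values, recover the
    # target temperature c that the sensor was converging towards
    ctvs = [ ]
    for (t0, v0), (t1, v1) in zip(tvs, tvs[1:]):
        dt = (t1 - t0)*0.001                   # milliseconds to seconds
        ctvs.append((t1, v0 + (v1 - v0)/(1 - exp(-rp*dt))))
    return ctvs

# round-trip check: a sensor at 10 degrees converging on steady 25 degree air
rp = 0.09
tvs = [ (0, 10.0) ]
for i in range(20):
    t, v = tvs[-1]
    tvs.append((t + 775, v + (25.0 - v)*(1 - exp(-rp*0.775))))
recovered = deconvolve(tvs, rp)    # every recovered target is 25 degrees
```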

### Part 2: *Correlating temperature and pressure in a single flight*

Let us consider one of my flights along the ridge in Ager where I bombed out at the far end. The altitude has been exaggerated by a factor of 5.0 to make it more important.

The barometer and temperature graph from the flight logger looks like so:

The problem with these two sensors is they are not synchronized: the barometer takes readings every 20 milliseconds, while the Dallas temperature sensor reads only every 775 milliseconds.

Fortunately I’ve got my **TrapeziumAvg(tstart, tstep, tvs)** function to sum up the average readings within synchronized windows of 3000ms wide using code that looks like this:

```python
tstep = 3000
tzcs = TrapeziumAvg(t0, tstep, tcs)
tzbs = TrapeziumAvg(t0, tstep, tbs)
pts = [ ((b - 97000)*0.001 + 45, c)  for b, c in zip(tzbs, tzcs) ]
sendactivity(points=pts)
```
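The TrapeziumAvg() function itself isn’t listed in the post; a minimal sketch of what such a windowed trapezium-rule average could look like, assuming the series is a list of (time_ms, value) pairs in increasing time order:

```python
def TrapeziumAvg(tstart, tstep, tvs):
    # average a piecewise-linear signal over consecutive windows of width tstep
    def interp(t):
        for (t0, v0), (t1, v1) in zip(tvs, tvs[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0)*(t - t0)/(t1 - t0)
        return tvs[-1][1]

    avgs = [ ]
    t = tstart
    while t + tstep <= tvs[-1][0]:
        # integrate between the sample knots falling inside [t, t + tstep]
        knots = [t] + [ tk  for tk, vk in tvs  if t < tk < t + tstep ] + [t + tstep]
        area = sum((b - a)*(interp(a) + interp(b))/2  for a, b in zip(knots, knots[1:]))
        avgs.append(area/tstep)
        t += tstep
    return avgs
```

For a linear signal this reproduces the exact window means; the real implementation presumably streams through the samples rather than re-interpolating, but the windowing behaviour is the same.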

to create a scatterplot that looks like this of the barometer (in X) against temperature (in Y):

(The width of the averaging interval **tstep** doesn’t make a noticeable difference to the appearance of this graph.)

It’s a good idea to separate the times when the glider is climbing and plot these points in a different colour (red) using the following code:

```python
upts = [ pts[i]  for i in range(2, len(pts))  if pts[i-2][0] > pts[i][0] ]
sendactivity(points=upts, materialnumber=1)
```

You could choose to be convinced by the appearance of the temperatures tending to be on the higher side during the climbs towards the bottom left corner, but I chose not to, because I’ve been here before. And that’s not just because it was 40°C when I landed and I was cross.

### Part 3: *The ratio of the heat capacity, gamma, is 1.4*

The pressure formula for the reversible adiabatic process, is:

Pressure^(1-gamma)*Temperature^gamma = constant

where **gamma** is the heat capacity ratio: the heat capacity of air at constant pressure divided by the heat capacity of air at constant volume. (The heat capacity is the energy required to raise a unit amount of substance by one degree C.)
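Rearranged, the relation gives the temperature after an adiabatic pressure change as T2 = T1*(P2/P1)^((gamma-1)/gamma). A quick numeric check, with illustrative values of my own rather than flight data:

```python
gamma = 1.4
P1, T1 = 100000.0, 288.15                 # Pa, Kelvin
P2 = 90000.0                              # pressure after an adiabatic ascent
T2 = T1*(P2/P1)**((gamma - 1)/gamma)      # about 279.6 K, i.e. roughly 8.5 degrees cooler

invariant1 = P1**(1 - gamma)*T1**gamma
invariant2 = P2**(1 - gamma)*T2**gamma    # equal to invariant1
```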

What is the value of gamma?

*[full disclosure: the following determination only worked with part of the data from a shorter flight made on the previous day due to the trend of an increasing energy load during the day.]*

For air, which is predominantly composed of the diatomic molecules of Nitrogen and Oxygen, **gamma=1.4**. I want to show that this can be derived by testing the variance/constancy of this equation for other values of gamma:

```python
gammavar = [ ]
for i in range(201):
    gamma = 1.4 + (i - 100)*0.002
    tze = [ b**(1 - gamma)*(c + 273.16)**gamma  for b, c in zip(tzbs, tzcs) ]
    variance = sum(x**2 for x in tze)/len(tze) - (sum(tze)/len(tze))**2
    gammavar.append((gamma, variance))
```

The graph of this reveals a minimum close to 1.4, so at least the sensors and the atmosphere are behaving adiabatically.
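As a sanity check on the method, running the same scan over synthetic pressure/temperature pairs that lie exactly on a gamma=1.4 adiabat (entirely my own made-up data, not the flight log) puts the minimum at 1.4:

```python
gamma_true = 1.4
P0, T0 = 101325.0, 288.15
tzbs = [ P0*(1 - 0.001*i)  for i in range(200) ]                           # pressures, Pa
tzcs = [ T0*(b/P0)**((gamma_true - 1)/gamma_true) - 273.16  for b in tzbs ]  # temps, degrees C

gammavar = [ ]
for i in range(201):
    gamma = 1.4 + (i - 100)*0.002
    tze = [ b**(1 - gamma)*(c + 273.16)**gamma  for b, c in zip(tzbs, tzcs) ]
    variance = sum(x**2 for x in tze)/len(tze) - (sum(tze)/len(tze))**2
    gammavar.append((gamma, variance))

best_gamma = min(gammavar, key=lambda gv: gv[1])[0]   # 1.4, where the data is exactly constant
```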

I’d guess this wouldn’t work if something were wrong, such as the sensor being exposed to direct sunlight and shadows. This trick could be used as a calibration test for any similar barometer-temperature apparatus to look for anomalies.

Moving on, let’s now plot this adiabatically constant value (which can be thought of as something like the entropy or energy) alongside the barometric measurement over time, and see how successfully it corrects the temperature measurement for altitude:

As we know, the average air temperature rises during the day, which is consistent with the graph. But, more excitingly, the moments where the barometer starts to go down (because the glider is climbing having entered a thermal) coincides with a spike in the entropy/energy value.

But I am still not convinced that I’ve got this right.

### Part 4: *Faking the data to illustrate the false positive*

You may have noticed that we are comparing the time sequence of barometric measurements (which are virtually instantaneous) to the time sequence of temperatures (where it takes 8 seconds to move half a degree for a one degree change).

Suppose you catch a good thermal that causes the glider to climb at 3.2m/s, so that after 8 seconds you are 25m higher, in air where the temperature ought to be about 0.25°C cooler.

Suppose the air temperature is 12°C (which the temperature sensor has equalized to) and you enter warmer thermic air that is at 12.5°C. After 8 seconds at this air temperature the sensor would read 12.25°C. However, in this time the thermal’s air temperature will have dropped to 12.25°C, given a climb of 25 metres. We can therefore assume that the average temperature affecting the sensor is really 12.375°C, so it would then only read 12.1875°C by virtue of moving half-way towards it. At that altitude the temperature outside the thermal will be 11.75°C, resulting in a difference of 0.4375°C.

If we run this calculation again for another 8 seconds, the sensor reading will be 12.15625°C and the external temperature will be 11.5°C resulting in a difference of 0.65625°C, which is bigger than the half a degree we postulated.
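The arithmetic of the last two paragraphs can be stepped through in a few lines (my own restatement of the numbers above, with the sensor closing half of its gap every 8 seconds and the climb cooling the air by 0.25°C per step):

```python
def simulate(thermal_excess, steps):
    lapse = 0.25                        # degrees of cooling per 8-second climb step
    sensor = 12.0                       # sensor equalized to the 12 degree air
    thermal = 12.0 + thermal_excess     # thermal air entered at t = 0
    outside = 12.0                      # air outside the thermal, same altitude
    diffs = [ ]
    for _ in range(steps):
        avg = thermal - lapse/2         # mean thermal temperature over the step
        sensor += (avg - sensor)/2      # sensor closes half the gap in 8 s
        thermal -= lapse
        outside -= lapse
        diffs.append(sensor - outside)
    return diffs

simulate(0.5, 2)     # [0.4375, 0.65625], the differences derived above
simulate(0.0, 3)     # third step exceeds 0.3 degrees with zero real excess
```

With the thermal set no warmer than its surroundings, the apparent excess still grows past 0.3°C by the third step, which is the false positive of the next paragraph.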

In fact if we run the simulation with the thermal temperature set to exactly the same as the surrounding air, then the mere slow response of the temperature sensor at this rate of climb will erroneously suggest that the thermal temperature is more than 0.3°C higher than the surrounding air after 24 seconds and 75m of climb.

This may be the cause of the signals in the graph above.

Now the problem isn’t the slow response time of the temperature sensor, it’s that we are comparing it against the barometric sensor which has a different response time.

Let’s see how bad this really is by looking closely at a three-minute section of the original data with everything plotted on top of each other in the same window:

The curves are, as before, more or less synchronized, with the temperature going down as the pressure goes down. You get a spike in the red entropy/energy line whenever the two curves in this picture move towards one another, like when the temperature climbs but the barometer goes down. (I’ve plotted the temperature curve with red dots as well as a cyan line to accentuate its lower frequency and 16ths-of-a-degree resolution.)

Now we generate the fake temperature data from the barometric data, as what it would be given a fixed entropy:

```python
from math import exp, log

gamma = 1.4
efixed = 30.17
tds = [ (t, exp((log(efixed) - log(b)*(1 - gamma))/gamma) - 273.16)
        for t, b in tbs ]
```

Then we smooth it by the exponential delay of 0.09 degrees per second:

```python
from math import exp

def expfilter(tvs, r):
    # apply relation (1): the filtered value closes the gap to the input
    # by a factor of (1 - exp(-r*dt)) over each time step dt
    t0, v0 = tvs[0]
    ftvs = [ ]
    for t, v in tvs:
        v1 = v0 - (v - v0)*(exp(-(t - t0)*0.001*r) - 1)
        ftvs.append((t, v1))
        t0, v0 = t, v1
    return ftvs

ftds = expfilter(tds, 0.09)
```

Our sample rate is too high because it’s the same as the barometric rate of 20ms per sample when it should be 775ms. We have to average it across the sample windows:

```python
taftds = TrapeziumAvg(t0, 775, ftds)
ftcss = [ ((i + 1)*775 + t0, c)  for i, c in enumerate(taftds) ]
```

[image omitted because it looks almost the same]

Our samples are too precise. The Dallas temperature sensor measures to the “nearest” 1/16th of a degree (where nearest is a matter of weighted probability when it is between two values).

```python
ftcs = [ (t, int(c*16 + (t % 100)/100)/16)  for t, c in ftcss ]
```

This is what happens when you compute and plot the entropy value from the barometric data and the fake temperature data, which we know should have constant entropy:

Just to really grind in the falsity of this signal, here’s that graph of pressure (in X) against temperature (in Y), with the points where the glider is climbing plotted in red, which appeared to show that thermals were hotter than the surrounding air. (The effect is absent from this temperature data, because we specifically created it with zero heat differential.)

### Part 5: *Smooth the barometer to make it comparable to the fake temperature*

What if we filtered the barometric sensor by the same factor that affects the temperature sensor before combining the measurements?

If this idea works then it should make the entropy variability in the fake data disappear. Here’s what happens if we filter the barometric data by the same 0.09 factor and plot the comparisons:

```python
ftbs = expfilter(tbs, 0.09)
```

The entropy value is nearly flat.

```python
standard_deviation = sqrt(sum((e - efixed)**2 for e in fftze)/len(fftze))
# = 0.001538
```

(We’re actually plotting the entropy value *50 so you can see its variability.)

Where is this residual variability coming from?

Well, if we remove the 1/16th of a degree precision that we applied to the fake temperature data, then the standard deviation on the entropy measurement halves to 0.0008. And if we change the temperature sample rate from 775ms to 300ms, the standard deviation halves again to 0.0003. Remove that sample rate filter altogether and the standard deviation is 0.00002. So we can reasonably say that the variability in this corrected entropy curve is due in equal measure to the 775ms sample rate and to the 1/16th of a degree precision of the Dallas temperature sensor.

### Part 6: *Smooth the barometer to make it comparable to the real temperature*

And now the bit you’ve all been waiting for. If we run that smoothed barometric data against the measured temperature data and plot the entropy, we get:

Within this three-minute window, the uncorrected entropy measurement has standard deviation 0.021 and the corrected measurement has standard deviation 0.011, so it is still at least above the noise threshold.

What about the barometer vs temperature plotted red for the climbing points?

There’s a lot more overlapping between the ups and downs now, isn’t there?

Let’s plot the full flight record to see the correlation.

Something has changed. At the start of thermal climb **A** (where the barometric pressure suddenly drops) the strong (false) signal in the uncorrected entropy graph seems to disappear in the corrected graph. But at the start of climb **B** the signal survives, although it is weaker. And at **C** we get a new sudden increase in entropy just as the glider starts to drop more steeply than at any time before. This was at a special moment in the flight where I went over the back at Ager, having been warned that you go down very fast there, so you must begin with at least 2200m of altitude and carry on in a straight line as far as you can to get away from the back-of-the-mountain down-draft before you will find another thermal. (Do down-drafts create bodies of hot air?)

### Part 7: *Preliminary ignorant speculations*

The above does not count as effective analysis; it’s merely pre-processing. On that matter I have still not made any progress. I am honestly stumped. In all the ways I’ve graphed it, smoothed it, differentiated it, factored out the trend, I keep getting a globular cluster, like this:

One trick out there is to filter the entropy data in order to factor out the trend:

Then plot the points that are significantly above the trend back onto the GPS flight track to see if they lie in explicable places.

Of course, this still doesn’t prove that anything good has been found on the basis of these instruments, because who knows what artifacts this smoothing has produced.

There are theories that thermals are not actually warmer air, but in fact huge gaseous disruptions carrying so much momentum and adjoining air that they follow no rules.

I don’t buy this. Air is viscous and it must take energy to drive the wind, and that energy can only come from the conversion of potential energy due to heavier cold air falling down the gravity well and squeezing out the less dense expanded air that’s been soaking up heat from its contact with the ground. And then there’s cloud-suck where further humid air is pulled into a condensing cloud to gain its latent heat of condensation, irrespective of its temperature (in fact, the colder the better as it will be closer to the dew point).

The more I think about it, the less the traditional stories make sense.

How is it that you can sometimes get an intensely hot day with insane quantities of watts going into the ground, and not a single convective thermal? It’s a day when the air behaves like porridge so that instead of boiling and bubbling like fresh water, it farts and then burns itself to the bottom of the pan.

And similarly, how is it that on a day when the air is very unstable, so that the lapse rate exceeds the adiabatic cooling, when any slight breeze over an obstruction ought to lift the air enough that it cools by less than its surroundings, you don’t get a humongous permanent thermal formed over that obstruction all day?

And how does any unstable air mass appear on the scene in the first place, where it is colder than it should be at altitude, given that this defies the laws of thermodynamics by carrying so much pent-up energy, and is contradicted by the structure of both warm fronts and cold fronts where the warm air always over-rides the cold air?

Ultimately there is an interplay of the glider pilots using their eyes to read the clouds and the land at a distance in order to guess at the macro structures that are generating the thermals — a phenomenon whose causes (e.g. air temperature differences) may not be measurable by a travelling point-source sensor no matter how accurate. This game could be as futile as a hedgehog on a country road unable to comprehend how the sudden proliferation of fast and careless cars on its road is due to a car crash 50 miles away on the M1 during rush hour and the realtime non-cooperative rerouting of all these internet-enabled satnavs.

At the end of it all, I’ve got this box of sensors, and I’m going to use it. Something is bound to work.
