## Neuronal curve fitting

Thursday, October 23rd, 2014 at 4:31 pm

I just got myself a new laptop and installed Ubuntu-Linux on it. Scares the hell out of me the speed with which I got it up and running. I am now lost in a sea of code. It’s like walking into a public library after you’d been out in the sticks for a month with only two dog-eared issues of the Reader’s Digest to keep you company. There’s almost too much here. I want to read all of it. And any book or manual you do pick up and spend an hour with means there’s another ten thousand you’ve not picked up that you should have been reading.

Anyways, while doing my apt-cache searching for stuff, I noticed stimfit – a program for viewing and analyzing electrophysiological data – show up in the search for scipy.

It appears to take datasets of electro-potential readings from a single neuron at every tenth of a millisecond and then fit exponential decay curves [the thick grey line] to selected sections from the (negative) peak to the baseline.

A bit like a temperature sequence, eh?

Oh, and it has a funky Python shell built into it to help you automate the analysis functions. What’s not to like?

Using the power of open source, I can find the code that does the exponential curve fitting, which looks pretty familiar: it hackily matches a sequence of values to a positive or negative exponential decay curve by guessing the “floor” (limit) value, subtracting it, flipping the readings if necessary, and applying a log function before doing a simple least-squares linear regression.

```cpp
void stf::fexp_init(const Vector_double& data, double base, double peak,
                    double RTLoHi, double HalfWidth, double dt,
                    Vector_double& pInit) {
    bool increasing = data[0] < data[data.size()-1];
    // Guess the "floor" just beyond the extreme value so every log is defined:
    Vector_double::const_iterator max_el = std::max_element(data.begin(), data.end());
    Vector_double::const_iterator min_el = std::min_element(data.begin(), data.end());
    double floor = (increasing ? (*max_el + 1.0e-9) : (*min_el - 1.0e-9));
    // Subtract the floor, and flip the readings if the trace is increasing:
    Vector_double peeled(stfio::vec_scal_minus(data, floor));
    if (increasing)
        peeled = stfio::vec_scal_mul(peeled, -1.0);
    // Take logs, build the time axis, then do a least-squares linear regression:
    std::transform(peeled.begin(), peeled.end(), peeled.begin(), log);
    Vector_double x(data.size());
    for (std::size_t n_x = 0; n_x < x.size(); ++n_x)
        x[n_x] = (double)n_x * dt;
    double m = 0, c = 0;
    stf::linFit(x, peeled, m, c);
    ...
}
```
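The same trick is quick to sketch in Python with NumPy. This is my own illustrative version of the technique, not stimfit's code (the function name and returned parameters are my assumptions): since log(amp · exp(-t/tau)) = log(amp) - t/tau, a straight-line fit to the logged, floor-subtracted data hands you tau and amp directly.

```python
import numpy as np

def fexp_init(data, dt):
    """Guess (amp, tau, floor) for y(t) = amp * exp(-t/tau) + floor
    by log-linearisation, mirroring the C++ routine above."""
    data = np.asarray(data, dtype=float)
    increasing = data[0] < data[-1]
    # Guess the floor just beyond the extreme value so every log is defined.
    floor = data.max() + 1e-9 if increasing else data.min() - 1e-9
    peeled = data - floor
    if increasing:
        peeled = -peeled                  # flip so all values are positive
    # log(amp * exp(-t/tau)) = log(amp) - t/tau: a straight line in t.
    logged = np.log(peeled)
    t = np.arange(data.size) * dt
    m, c = np.polyfit(t, logged, 1)      # least-squares linear regression
    tau = -1.0 / m
    amp = np.exp(c) * (-1.0 if increasing else 1.0)
    return amp, tau, floor
```

Because the floor is only guessed (just past the extreme sample), the tail points get squashed by the log and the fit is biased — which is presumably why stimfit only uses this as an initial guess for a proper non-linear fit.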

Shame this is unnecessarily in C++, which makes it hard to hack. I suspect this is for historical reasons, and left as-is because it's mathematical stuff which people like to treat as a black box once it works.

Meanwhile, on another idea that I have been sitting on for years, I've found a complete simulator for spiking neural networks written entirely in Python.

I have to get used to all of my ideas being out there already, done for years. I was going to do my spiking simulation idea in JavaScript so that I could drag the neurons around interactively on the screen. However, it's good to see the confirmation bias kicking in here if I ever consider writing some CAM functions again. I will not be using C++, which is an out-of-date, dead language, like FORTRAN, as far as I am concerned. Nothing new should be started in it.

> Brian is a simulator for spiking neural networks available on almost all platforms. The motivation for this project is that a simulator should not only save the time of processors, but also the time of scientists.
>
> Brian is easy to learn and use, highly flexible and easily extensible. The Brian package itself and simulations using it are all written in the Python programming language, which is an easy, concise and highly developed language with many advanced features and development tools, excellent documentation and a large community of users providing support and extension packages.
>
> The efficiency of Brian relies on vectorised computations (using NumPy), so that the code above is only about 25% slower than C.
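That vectorisation point is worth a sketch. Instead of looping over neurons in Python, you update the whole population with one NumPy expression per timestep. This is my own toy leaky integrate-and-fire loop (all the constants and names are mine, nothing here is Brian's internals):

```python
import numpy as np

# Toy leaky integrate-and-fire population — my own sketch, not Brian code.
N, dt, tau = 1000, 0.1e-3, 20e-3       # neurons, timestep (s), membrane time constant (s)
v_rest, v_reset, v_thresh = -70e-3, -65e-3, -50e-3   # volts

rng = np.random.default_rng(0)
v = np.full(N, v_rest)                 # membrane potentials, one per neuron
spike_count = 0

for step in range(1000):               # 100 ms of simulated time
    I = rng.normal(25e-3, 5e-3, N)     # noisy input drive (volts, crude)
    # One vectorised Euler step updates all N neurons at once:
    v += dt * ((v_rest - v) + I) / tau
    fired = v >= v_thresh              # boolean mask of spiking neurons
    spike_count += int(fired.sum())
    v[fired] = v_reset                 # reset the ones that fired

print("total spikes in 100 ms:", spike_count)
```

The inner loop runs 1000 times in Python, but each pass does 1000 neurons' worth of arithmetic inside compiled NumPy code — which is roughly how a vectorised simulator stays within shouting distance of C.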

What's the trade-off between searching for stuff to reuse and writing it from scratch?

What's it going to be like in a hundred years' time, when whenever you try to write some new experimental code, someone next to you can always find something that already does it better, and will be trying out your idea before you've even settled down?

It's like trying to put down a five-note riff on your guitar while they're DJ-mixing a thousand soundtracks from the last 50 years simultaneously. Nearly all time spent coding will be wasted. But then again, nearly all time spent writing poetry is wasted, because someone else has already written a poem expressing that thought and emotion a million times better.

It must be so hard to do schoolwork these days when you can look up every answer and see it expressed a million times better than you're going to put in your homework essay. It's all so clearly futile.

There was an Ideas-Suggestions website inside Autodesk. My all-time favourite idea that I submitted was for all source code committed by employees to go through an automatic plagiarism detector, to make sure you were never cutting and pasting a perfect implementation of a function from elsewhere, and were only ever committing your own crappy versions that you wrote yourself on the day.

Proprietary code is like school homework essays. It has to be your own work or it's not right. On the other hand, open source code is like Wikipedia entries -- almost always of better quality than the unpublished work from any random student in school.

And of course the school-work is crap. You don't know how to do it and you're trying to learn. The question is, are those programmers making proprietary code for a corporation trying to learn? Or is the point to make code that is owned, because that's actually more important to the boss than code which is any good?

### 1 Comment

• 1. Graeme replies at 24th October 2014, 7:14 am :

Most of this is way over my head but if you're going to go back to CAM programming you could do worse than a plunge-milling profiling cycle and a G32 threading cycle that produces acme threads using a standard grooving tool.