
Optically verifying the BNO055 absolute orientation sensor

Saturday, July 27th, 2019 at 8:20 pm

I’ve been trying to use the BNO055 for hang-glider experiments for a while. My current serial micropython interface is here or here. The sensor contains its own dedicated microcontroller that continually reads the gyros, accelerometers and magnetometers at a high frequency, fuses them, and provides orientation in the form of a quaternion, as well as acceleration separated into its kinetic and gravity components. (I’d prefer a version where you set the device working and it streamed the measurements down the wire, instead of needing to be polled every 100ms.)
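For concreteness, here’s a minimal polling sketch (my own reconstruction, not the actual interface code): the BNO055 presents the fused quaternion as four little-endian int16 registers starting at 0x20, scaled by 2^-14. The I2C address and ESP32 pin numbers here are assumptions.

import struct, time
from machine import I2C, Pin

i2c = I2C(0, scl=Pin(22), sda=Pin(21))   # assumed ESP32 wiring
BNO055_ADDR, QUA_DATA_W_LSB = 0x28, 0x20

while True:
    raw = i2c.readfrom_mem(BNO055_ADDR, QUA_DATA_W_LSB, 8)
    q0, q1, q2, q3 = (v / (1 << 14) for v in struct.unpack("<4h", raw))
    print(q0, q1, q2, q3)
    time.sleep_ms(100)   # the 100ms polling interval mentioned above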

After years of not really having a clue, I’ve got far enough to be able to make a movie with these frames from an overhead unit attached to the keel of the glider, with the intention of measuring the control inputs (ie the pilot’s hang position, which determines his weight shift) in relation to the aerodynamic response.

There are two objectives of this work, besides the byproduct of learning a whole load about sensors that ought to make me useful for something.

Firstly, I want to quantify the hidden variables of a glider (eg glide angle, control responsiveness) in order to compare between gliders more objectively.

Secondly, I want to quantify pilot behaviour: rig this up on a flight by a top pilot who always seems to do well, and find out what they’re doing that’s different from what I’m doing, in order to enable some form of effective coaching that would save me a lot of time. (Generally the top pilots don’t know what they’re doing; they merely report doing it by feel, and that feel, very luckily for them, happens to coincide with doing it right.)

However, I don’t really trust these accelerometer readings after watching this. The sensor seemed sticky. That is, its mathematical filters sometimes hold one value until it is no longer valid, and then swing wildly to another value.

GPS readings sometimes do this too and have nasty discontinuities. This is going to happen when the error has a bimodal probability distribution, where there are two likely interpretations of the position from the same measurements.

Meanwhile, I was looking at namespaces in OpenCV and the function findChessboardCorners() caught my eye. I wondered what that was for. It turns out it’s used for camera calibration: finding the lens distortion and focal length.

Then I found out about ArUco Markers and tried to use them for measuring the pilot’s position (see above).

That didn’t work on half the frames because the light catches it badly.

This led on to the amazing all-in-one charuco board technology, which has an aruco tag in each white square of a chessboard so that nothing can possibly get confused.
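Generating one of these boards for printing takes only a few lines of the cv2.aruco module (the square counts and sizes below are my own illustrative choices, using the pre-4.7 OpenCV API this post dates from):

import cv2
aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
charboard = cv2.aruco.CharucoBoard_create(5, 7, 0.04, 0.03, aruco_dict)  # squares across/down, sizes in metres
cv2.imwrite("charucoboard.png", charboard.draw((600, 840)))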

This is great, because it gives an external means of verifying the absolute orientation sensor. If I can get the numbers to agree, then I’ll have proved I’ve understood all the orientation decoding and will know the error bounds.

My code is in this Jupyter notebook, though the bulk has been moved into the videos module of the hacktrack library for safekeeping.

Here’s the procedure, with the unit up top under my thumb containing the orientation sensor, the video camera embedded in the main unit (with the orange light visible to show that it’s working) looking down past the randomly flashing LED light.

And this is what it looks like from the camera’s point of view:

The phone is running a very crude android app I wrote called Hanglog3, which receives the orientation sensor data via wifi over a socket from the ESP32 attached to the sensor and stores it in the phone’s copious memory. This means I don’t need a mini-SD card writer wired to the microcontroller, which causes it to stall for up to 80ms whenever the data is flushed to it.

What’s that LED light doing in the view?

That’s used to synchronize the logged orientation data with the frames from the video. I’ve made an interactive function called frameselectinteractive() that lets you slide a box around the LED in the image, like so:

Then the function extractledflashframes() measures the mean red, green and blue values in the box for each frame, so you can see whether there is a clear enough signal.
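That function lives in hacktrack, but the guts of it are simple enough to sketch (this is my reconstruction; the names are illustrative):

import cv2, pandas

def ledboxbrightness(videofile, x0, y0, x1, y1):
    cap = cv2.VideoCapture(videofile)
    rows = []
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        box = frame[y0:y1, x0:x1]                  # the slid-around LED box
        b, g, r = box.reshape(-1, 3).mean(axis=0)  # OpenCV frames are BGR
        rows.append({"b":b, "g":g, "r":r})
    cap.release()
    return pandas.DataFrame(rows)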

In this case, a red channel value above 200 gives a clear enough signal, which can be converted to a boolean on or off and aligned with the timestamped LED on and off commands in the flight data file that also carries the orientation sensor data. I’ve used the Dust measurement records for this to save reprogramming anything, as the Dust sensor no longer exists (it was a ridiculous thing to carry around on a hang-glider anyway; what was I thinking?).

videoledonvalues = ledbrights.r > 200  # boolean LED state for each video frame
ledswitchtimes = (fd.pU.Dust==1)  # one timestamped record for every on and off of the LED
frametimes = videos.framestotime(videoledonvalues, ledswitchtimes)

Since the LED flashes at random intervals, there can be only one way to align the two sequences, which lets me assign a timestamp to each video frame, and consequently to any information derived from that frame, such as camera orientation.
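I won’t swear to what framestotime() does internally, but a brute-force version of the alignment could look like this (the names and the odd/even switch-state convention are my assumptions):

import numpy

def alignledflashes(framebools, framerate, switchtimes):
    framebools = numpy.asarray(framebools)
    frameoffsets = numpy.arange(len(framebools)) / framerate

    def score(t0):
        # LED is on at a frame if an odd number of switches precede its time
        states = numpy.searchsorted(switchtimes, t0 + frameoffsets) % 2 == 1
        return (states == framebools).mean()

    # try candidate start times at frame-period granularity across the log
    candidates = numpy.arange(switchtimes[0] - frameoffsets[-1],
                              switchtimes[-1], 1.0/framerate)
    t0 = max(candidates, key=score)
    return t0 + frameoffsets   # a timestamp for every video frame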

The function which extracts camera orientation from the video frame is findtiltfromvideoframes().

The code that does this for an image frame works like this:

# extract the DICT_4X4_50 aruco markers from the image
markerCorners, markerIds, rejectedMarkers = cv2.aruco.detectMarkers(
    frame, aruco_dict, parameters=parameters,
    cameraMatrix=cameraMatrix, distCoeff=distCoeffs)

# try harder to match some of the failed markers with the knowledge of
# where they lie in this particular charuco board
markerCorners, markerIds, rejectedMarkers, recoveredIdxs = \
    cv2.aruco.refineDetectedMarkers(frame, charboard, markerCorners, markerIds,
                                    rejectedMarkers, cameraMatrix, distCoeffs)

# derive accurate 2D corners of the chessboard from the marker positions we have
retval, charucoCorners, charucoIds = cv2.aruco.interpolateCornersCharuco(
    markerCorners, markerIds, frame, charboard,
    cameraMatrix=cameraMatrix, distCoeffs=distCoeffs)

# calculate the relative camera to charuco board pose as a Rodrigues rotation
# vector (rvec) and translation vector (tvec)
retval, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(
    charucoCorners, charucoIds, charboard, cameraMatrix, distCoeffs, None, None)

# convert rvec to the single vector of the vertical Z-axis kingpost
# (the third row of the rotation matrix)
r = cv2.Rodrigues(rvec)[0][2]
row = {"framenum":framenum, "tx":tvec[0][0], "ty":tvec[1][0], "tz":tvec[2][0],
       "rx":r[0], "ry":r[1], "rz":r[2]}

Notice that this cannot work without accurate values for cameraMatrix and distCoeffs (the distortion coefficients), which in the case of this camera have been calculated as:

cameraMatrix = numpy.array([[1.01048336e+03, 0.00000000e+00, 9.46630412e+02],
                            [0.00000000e+00, 1.01945395e+03, 5.71135893e+02],
                            [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
distCoeffs = numpy.array([[-0.31967893,  0.13367133, -0.00175612,  0.00153122, -0.03052692]])

These numbers are not fully reproducible. However, they do make the picture look okay in the undistortion preview, where straight lines look straight. Maybe there are too many degrees of freedom in the solution.
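For completeness, here’s a hedged sketch of how such a cameraMatrix/distCoeffs pair can be estimated from a set of views of the charuco board; the calibrationframes list and image size are assumptions:

import cv2

allCorners, allIds = [], []                  # per-view charuco detections
for frame in calibrationframes:              # assumption: a list of board photos
    markerCorners, markerIds, rejected = cv2.aruco.detectMarkers(frame, aruco_dict)
    if markerIds is not None:
        retval, charucoCorners, charucoIds = cv2.aruco.interpolateCornersCharuco(
            markerCorners, markerIds, frame, charboard)
        if retval > 3:                       # want several corners in each view
            allCorners.append(charucoCorners)
            allIds.append(charucoIds)

imagesize = (1920, 1080)                     # assumption: the camera's resolution
retval, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
    allCorners, allIds, charboard, imagesize, None, None)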

Now the hard part. The camera and the orientation sensor are not precisely aligned.

First the vertical axis of the orientation sensor, whose values are given as quaternions (pZ.q0, pZ.q1, pZ.q2, pZ.q3), needs to be extracted to provide a tilt-vector (of the Z-axis):

# pZ.iqsq is the normalizing factor 1/(q0^2 + q1^2 + q2^2 + q3^2)
r00 = pZ.q0*pZ.q0*2 * pZ.iqsq
r33 = pZ.q3*pZ.q3*2 * pZ.iqsq
r01 = pZ.q0*pZ.q1*2 * pZ.iqsq
r02 = pZ.q0*pZ.q2*2 * pZ.iqsq
r13 = pZ.q1*pZ.q3*2 * pZ.iqsq
r23 = pZ.q2*pZ.q3*2 * pZ.iqsq

# the tilt vector is the third column of the quaternion's rotation matrix
pZ["tiltx"] = r13 + r02
pZ["tilty"] = r23 - r01
pZ["tiltz"] = r00 - 1 + r33

Then the (rx, ry, rz) vectors from the video camera images need aligning to the (tiltx, tilty, tiltz) vectors from the orientation sensor.

I have no idea how I cracked this one, but it went a bit like this:

# kingpost vertical vectors from camera
rx, ry, rz = tiltv.rx[t0:t1], tiltv.ry[t0:t1], tiltv.rz[t0:t1]

# Interpolated orientation tilt vector (so the timestamps are the same) 
ax = utils.InterpT(rx, lpZ.tiltx)
ay = utils.InterpT(ry, lpZ.tilty)
az = utils.InterpT(rz, lpZ.tiltz)

# Find the rotation between these two sets of vectors using SVD
# (the Kabsch / orthogonal Procrustes method); H accumulates a[xyz] * r[xyz]^T
H = numpy.array([[sum(ax*rx), sum(ax*ry), sum(ax*rz)], 
                 [sum(ay*rx), sum(ay*ry), sum(ay*rz)], 
                 [sum(az*rx), sum(az*ry), sum(az*rz)]])
U, S, Vt = numpy.linalg.svd(H)
R = numpy.matmul(U, Vt)

print("Rotations in XYZ come to", numpy.degrees(cv2.Rodrigues(R)[0].reshape(3)), "degrees")


# Apply the rotations to the camera orientation
rrx, rry, rrz = \
(R[0][0]*rx + R[0][1]*ry + R[0][2]*rz, 
 R[1][0]*rx + R[1][1]*ry + R[1][2]*rz,
 R[2][0]*rx + R[2][1]*ry + R[2][2]*rz)

In this case, the rotations in XYZ came to [ 0.52880368 0.96020647 -80.29792978] degrees.
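One caveat I’d flag (my addition, not something the notebook does): the textbook Kabsch solution also guards against the SVD returning a reflection rather than a rotation by checking the determinant, along these lines:

import numpy

def kabschrotation(A, B):
    # rotation R minimizing ||R @ A - B|| for 3xN arrays of matched vectors
    H = B @ A.T                                  # covariance of the two sets
    U, S, Vt = numpy.linalg.svd(H)
    d = numpy.sign(numpy.linalg.det(U @ Vt))     # -1 indicates a reflection
    return U @ numpy.diag([1.0, 1.0, d]) @ Vt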

Thus, since I am now thoroughly running out of time, here is the comparison in XY of the unit vectors:

And this is how it looks for the individual components:

I am totally going to forget how any of this works when I get back. It is this hard to validate an orientation sensor against a video image, it seems.

Next is to make a charuco board with a flashing light in it with its own orientation sensor and pin it to the back of the pilot’s harness. Then maybe I’ll be measuring the glider relative to the pilot rather than the other way round.

All of this is so hard, and the main objectives haven’t even begun. Why is it so hard to get anywhere?
