Dec 27 2023
 

A couple of years ago I extended my workshop and wanted really good lighting. I discovered Yujileds and installed 5 metres of CRI 95+ 5600K around the edges of the ceiling to reduce shadows.

These LEDs are more expensive for a reason: no flicker, the light is operating-theatre quality, and colours are rendered exactly as they appear in sunlight.

A month before the guarantee expired the remote control of the dimmer failed. I contacted them and after exchanging a couple of emails, a very helpful Queran sent me a new remote control and dimmer, which arrived by DHL a few days afterwards, free of charge.

Summary: Brilliant lights (forgive the pun) and excellent after-sales service. Highly recommended.

Feb 18 2020
 

Some years ago I was working for a large multi-national corporation. One morning I received an email from the CIO, addressed to the entire company, explaining a botched SAP rollout. The corporate-speak was some of the best that I have ever read; I reproduce it here, unaltered save for anonymisation, with my translation into layman’s English.

On June 1, we reached go-live with Dressing, Sauces and Oils North America (DS&O) on the TC3 instance of our SAP solution.

On June 1st we forced go-live of our half-baked SAP solution on DS&O, simply because we were so over-budget that the only other option would have been to scrap the project.

Since go-live, DS&O has experienced significant issues with the solution that are impacting the business’ operations and financial reporting.

Since go-live, the business has been losing money like a leaking sieve, because our ‘solution’ was a complete and utter disaster.

This is unacceptable for DS&O and for us.

DS&O have said clearly: “Make it work or die”. For us, ill-engineered solutions are the norm.

To be trusted partners, Global IT commits to fully supporting our businesses … so, we will do what’s needed to stabilize performance for DS&O.

SAP consultants will be brought in, at 3’000$ a day, to criticise the implementation and spend a king’s ransom on new servers that will make the problem worse.

To that end, Andrew Brown, our SAP lead and a member of the CIO leadership team (LT), will personally spearhead the effort to review all factors that could be contributing to the instability.

Andrew’s balls are, in theory, on the table to get this working. In reality, at 3’000$ a day, he doesn’t give a flying fuck, and anyway he has a new client ready to shaft just down the road.

This includes looking at business processes, operations, data as well as assessing the actual solution itself.

We will re-hash the miserable initial requirements analysis and lay the blame squarely with the consultants that are no longer with us.

We want to fully understand the root cause to thoroughly address the issue for DS&O and add to important learnings.

I heard ‘root cause’ in a management course and it sounds good here.  ‘important learnings’ is cute too; I’ve been managing IT for 30 years and I still have yet to learn one single lesson from my impressive catalogue of mistakes.

Andrew will be 100% focused on this effort, with members of the CIO LT stepping in to support other areas for which Andrew is responsible.

Andrew will try to pin the blame on his colleagues. If he is successful, I shall fire them; if not, I’ll fire him. With a glowing recommendation.

We undertake this initiative with confidence we can stabilize DS&O and make any warranted enhancements to our approach or technology.

‘warranted’ means that we never made any bad choices initially. ‘enhancements to our approach or technology’ means that we might well need to choose a completely new technology, that will triple the budget and push the project back by three years. Until the next fuck-up.

We will work to ensure that the difficulties experienced here are not repeated.

Those of you who have already been through an SAP implementation will see the humour in this statement.

We will share these learnings with our businesses currently preparing for (or in) deployment.

Our bookmaker is currently taking bets on SAP projects at 15 to 1 of failure.

We also will continue to stress the importance of readiness … businesses and functions must undertake needed changes to prepare for the adoption of new processes, a new organizational structure, and more – all of which are required to execute enterprise process strategies.

Given the magnitude and cost of this unholy blunder, it would seem only fair that you accept some of the responsibility. After all, we did talk to you a couple of times before implementation.

What’s critical to understand is the  Leadership Team (CLT) remains as committed as ever to our ERP modernization journey and SAP as our solution.

We take the meaning of ‘dogmatic’ to levels that even a religious zealot couldn’t imagine. Every morning, we recite ‘Our SAP which art in heaven, hallowed be thy name’.

We are not stopping, we are not slowing down … we are moving ahead with our roadmap and the commitments we’ve made.

Yes, we are going to continue, hell-bent, on forcing SAP down your throats, no matter at what cost or damage to the business, because they had the sexiest PowerPoint presentation.

Leaders have confidence in our function and demonstrating our commitment to their businesses (as we are with DS&O) means we will continue to deserve that confidence. 

Truth be told, the business is sick and tired of IT delivering shitty service by some ignoramus in Bangalore.

Should you have questions about our work on DS&O, please reach out to your manager, SAP leadership or your CIO LT member.

For the British, ‘reach out’ has a distasteful innuendo, possibly acceptable in such a gush of platitudes. Notwithstanding, if you have any self-respect, find a job elsewhere.

Thank you,

If I was honest, I’d say I’m sorry, but I’m above that

Brian

Feb 03 2020
 

Sirs,

It is important to protect the environment, so I have ordered solar panels and a heat pump, which will free me from my dependence on fossil fuels and spare me from burning 6 tonnes of heating oil every year. It is edifying to discover that this step holds no financial interest whatsoever.

The subsidy I will receive from the Confederation is in reality nothing but a loan, which will be repaid in less than seven years through the taxes I will pay on the electricity I sell back to you.

The recent fiasco of the contracts unilaterally cancelled by SwissGrid casts a harsh light on the remuneration granted to private individuals who sell their electricity back. During the day you bill me 26 ct/kWh, yet you will buy mine back at 12 ct/kWh, a margin of 116%. This, while you add practically no value to the transaction, since the electrons I generate will be used by the nearest consumer: my neighbour.

One might be tempted to believe that your remuneration policy is driven by purely mercantile considerations, but these do not withstand analysis: private production, almost homeopathic in scale, is measured in MWh while you deal in TWh.

The real reason is more insidious. The idea that a consumer might have even a modicum of energy independence is utterly unacceptable to you, and the restrictions on heat-transfer fluid volumes confirm it: you will not even tolerate my storing heat during the day in order to use it the following night.

One is forced to conclude that the public authorities’ solicitude for renewable energy is nothing but hypocritical humbug; whoever produces green electricity will pay for it entirely out of his own pocket and will be taxed for his impudence.

The only remaining hope lies in the forthcoming opening of the electricity market, which will sound the death knell of your monopoly and perhaps bring competitors more inclined to buy clean energy at a fair price. That would be a genuine encouragement to abandon the fossil fuels that so damage our environment.

Please accept, Sirs, the assurance of my most distinguished sentiments.

Maurice Calvert

Jan 08 2020
 

In order to create an accurate plan of an old house, I purchased a Leica X4. It is a robust, pleasant instrument, clearly designed by engineers, for engineers and the build quality is faultless.

My first pleasant surprise was that each device comes with an individual calibration certificate, and the tolerances are much tighter than the blurb in the spec sheet:

As it should be, the deviation tolerance (±2σ) is supplied at a specified temperature, itself with a tolerance (±3°). Second pleasant surprise: 0.2mm at 7.8 metres is roughly 0.0026% – this is clearly a laboratory-grade instrument.

Indoor measurements are made with the red laser dot; outdoor measurements can also be made using the built-in zoomable camera. Both are very intuitive.

The quick-start guide is about as terse as can be – essentially useless for more than using the device as a tape measure – so you’ll need to download the manual to make use of all the X4’s features; it is well written and easy to follow. It took me a good half-hour to learn to make every measurement the X4 offers, and they are extensive:

  • Room area and volume, corner maxima, wall width from 3 points
  • Angles and resulting line projections
  • Indirect object height from measure-to-base and altitude angle (to measure the height of a tree without a laser dot on the top branch)
  • Stake-out
  • Min-max measures
  • Bluetooth connection to Android phone to transmit values
  • … and so forth

In a nutshell, the X4 can measure (or deduce using Pythagoras) almost any dimension you can imagine, and it does so quickly and precisely.
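
As an illustration of the indirect-height function listed above: the device measures the distance to the object’s base and the elevation angle to its top, and the height follows from basic trigonometry. A minimal sketch in Python (a simple right-triangle model; the numbers are hypothetical, not taken from the X4’s manual):

    import math

    def indirect_height(distance_to_base_m: float, elevation_deg: float) -> float:
        """Object height from the horizontal distance to its base and the
        elevation angle to its top (simple right-triangle model)."""
        return distance_to_base_m * math.tan(math.radians(elevation_deg))

    # Hypothetical example: a tree 12.40 m away, top sighted at 31.5 degrees elevation
    print(f"{indirect_height(12.40, 31.5):.2f} m")   # ~7.60 m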

The indirect-height function was disabled as delivered; I had to download the firmware update to my Android phone and flash it to get this to work (I tested it against known targets and it works extremely well). Aside from firmware updates, the “DISTO Plan” app offers several plugins for plans, façades and room layouts. The interface is clumsy and the advanced features are paid extras. Disappointing.

Verdict: The X4 is a gem of precision Swiss engineering that is a delight to use and of extraordinary accuracy; just delete the Android software once you’ve used it to flash the latest firmware.

Nov 03 2018
 

Intel’s RealSense cameras are astonishingly precise but not as accurate. By optimising the calibration of the depth stream and correcting for non-linearity, the accuracy can be improved by an order of magnitude at 2.5 metres and becomes almost linear in the depth:

Skip to solution

Sources of error

Calculating the coordinates of the 3D point corresponding to a depth reading is straightforward trigonometry – here’s a quick refresher – but the accuracy of the results depends on several factors.

Accuracy of the intrinsics

The supplied Intel® RealSense™ Depth Module D400 Series Custom Calibration program uses the traditional method, displaying a chequerboard to the camera in various poses and solving for the intrinsics. There are several issues with this methodology:

  1. This method establishes the intrinsics solely for the colour camera.
  2. Although it resolves to sub-pixel accuracy, it does so on a single frame, which is imprecise. The results of 3D calculations are extremely sensitive to errors in the field-of-view: a one-degree error in the vertical field of view translates into a >11mm error at 1 metre (a quick check follows this list). Concomitant errors in the horizontal field of view make matters worse and they are quadratic in the depth.
  3. The depth stream is synthesised by the stereo depth module and the vision processor. Imperfections anywhere in the chain (unforeseen distortion, varying refraction at different wavelengths, heuristics in the algorithms, depth filtering) may negatively affect the accuracy. One cannot assume that an apparently perfect colour image will produce ideal results in the depth map.
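
To confirm the sensitivity claim in point 2: with the D435’s nominal 56° vertical field of view, widening the field of view by one degree moves where a point at the edge of the frame lands by more than 11mm at 1 metre. A quick check, as a Python sketch:

    import math

    def half_extent_at(depth_mm: float, vfov_deg: float) -> float:
        """Half of the vertical extent visible at a given depth for a given vertical FOV."""
        return depth_mm * math.tan(math.radians(vfov_deg / 2))

    error_mm = half_extent_at(1000, 57) - half_extent_at(1000, 56)
    print(f"{error_mm:.1f} mm")   # ~11.3 mm at 1 metre; the error grows with depth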

Non-linearity

This is readily observed with the supplied DepthQuality tool. When viewing a target at a measured distance of 1’000 mm from the glass, the instantaneous reported depth is out by ~22mm:

By averaging depth measurements over a period, errors in precision can be eliminated. Averaged over 1’000’000 measurements, my out-of-the-box D435 reports a range of 980.70mm – an error of 19.3mm. This is within the specified accuracy (2% = 20mm) but it increases quadratically with depth, as is to be expected. Fortunately, this non-linearity appears to be constant for a given camera and, once determined, can be eliminated.
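
A minimal sketch of how such an averaged reading can be taken with the pyrealsense2 wrapper (the centre pixel and the frame count are arbitrary choices of mine, not what the DepthQuality tool does):

    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    pipeline.start()                             # default config enables the depth stream

    n_frames = 1000                              # average many frames to improve precision
    total_m = 0.0
    for _ in range(n_frames):
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        x, y = depth.get_width() // 2, depth.get_height() // 2   # centre pixel
        total_m += depth.get_distance(x, y)      # range in metres at that pixel
    pipeline.stop()

    print(f"averaged depth: {total_m / n_frames * 1000:.2f} mm")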

Focal Point

The focal point of the depth map is supplied in the Intel RealSense D400 Series Datasheet; for a D435 it is defined as being 3.2mm behind the glass. Presumably due to manufacturing tolerances of ±3%, the focal point may in reality be tens of mm away from that.

Mounting

No matter how precisely the camera is mounted, there will be errors between the mounting and the camera’s true central axis. Knowing them improves the accuracy when translating from the camera frame to the parent (vehicle or world) frame.

Solution

I have written a program that calibrates a camera based solely on measurements in the depth stream. It derives all the parameters discussed above by making several observations of a target with known dimensions. A much higher degree of accuracy is obtained by averaging over a large number of measurements. The optimal parameter values are then calculated, as a single problem, with a non-linear solver.

It is open source and available on GitHub: https://github.com/smirkingman/RealSense-Calibrator

Screenshot of an optimiser output:

Discussion

The comparisons presented above use the Z-range as the metric, as this is the metric used in the reference documentation. The Z measure alone is only part of the answer; a more realistic metric is the 3D error of the point: the vector between the truth and the 3D point determined by the camera and software. Furthermore, just supplying a number doesn’t tell the whole story. Traditional error analysis supplies descriptive statistics, which give a value and a confidence level following the 68–95–99.7 rule, allowing statements like “The error will be no more than X mm 99.7% of the time” (3-sigma, or 3σ).

The 3D error – the length of the vector between the true coordinates of the point and what the camera+software reported is:

The 3-sigma error is:

What this shows is that at 1.5 metres, the coordinates of the 3D point will be between 1’447 mm and 1’553 mm from the camera 99.7% of the time.

Nov 01 2018
 

A depth camera measures the distance from the camera to a point in 3D space. For a given point, the camera supplies the row and column on its ‘screen’ and the depth towards the point. It is worth pointing out here that classic depth cameras like the Kinect supply the length of the ray; RealSense cameras supply the range, or Z component.

Calculating the coordinates of the point is fairly straightforward trigonometry. Suppose a D435 camera is mounted 500mm off the ground, pointing at the horizon. 1000mm away there is an object 101.5mm high:

To warm up, the camera’s vertical field of view is 56°, so at 1’000mm half of the height is

    \[1000*tan(56/2)=531.7mm\]

The camera has 480 rows, so it will see the 101.5mm-high object at row

    \[((480-1)/2)+(-398.5)/531.7*((480-1)/2)=row\ 60\]

Bonus: It sees the object at an angle of

    \[\arctan(-398.5/1000)=-21.7^\circ\]

Now we define constants for the vertical intrinsics, the centre row and the height of a pixel:

    \[VFov2=VFov/2\]

    \[VSize=Tan(VFov2)*2\]

    \[VCentre=(Rows-1)/2\]

    \[VPixel=VSize/(Rows-1)\]

    \[VRatio=(VCentre-row)*VPixel\]

    \[Y=Range*-VRatio\]

Notice the ‘Rows-1’ because there are 479 intervals between 480 pixels: row 239 points just under the horizon and row 240 points just above the horizon.

Then, for the example above we define our constants:

    \[VFov2=56/2=28\]

    \[VSize=Tan(28)*2=1.0634\]

    \[VCentre=(480-1)/2=239.5\]

    \[VPixel=1.0634/(480-1)=0.00222\]

and calculate the Y coordinate:

    \[VRatio=(239.5-60)*0.00222=0.3985\]

    \[Y=1000*-VRatio=-398.5 mm\]

The calculations for the X-coordinate are identical, replacing ‘Vertical’ with ‘Horizontal’; Z is simply the supplied range.
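
Putting the above together, a minimal sketch in Python (the 56° field of view and 480 rows are the example values above; a real application would use the camera’s calibrated intrinsics):

    import math

    def y_from_row(row: int, range_mm: float, rows: int = 480, vfov_deg: float = 56.0) -> float:
        """Y coordinate (mm) of a depth pixel, following the conventions above:
        RealSense supplies the range (Z component), not the ray length."""
        vfov2 = vfov_deg / 2
        vsize = math.tan(math.radians(vfov2)) * 2      # vertical extent at unit range
        vcentre = (rows - 1) / 2                       # 239.5 for 480 rows
        vpixel = vsize / (rows - 1)                    # height of one pixel at unit range
        vratio = (vcentre - row) * vpixel
        return range_mm * -vratio

    # The worked example: object top seen at row 60, range 1000 mm
    print(f"Y = {y_from_row(60, 1000):.1f} mm")        # -398.5 mm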

Jun 12 2018
 

A quick review of the D435 with pictures of real-life situations; I wish I’d had this when I was considering buying a D435.

Unboxing is a breeze. The D435 is small enough to hide in your hand, in a sleek aluminium case weighing in at 71.8g. It comes with a 1m USB3 cable and a cute, flimsy little plastic tripod. Amusingly, the 28.6g tripod isn’t up to the task of supporting the camera once the 41.2g cable is plugged in and it falls over; I’ve given it to my grand-daughter for her Lego. The standard camera thread mounting underneath works perfectly for a real tripod; for an aligned mounting there are 2 M3 tapped holes on the back, 45mm apart but they are only 2.5mm deep.

All the software is available on GitHub; quick-start with Intel.RealSense.Viewer.exe to get this, depth at 1280×720:

The toy tripod is in the foreground. Notice that I switched on the hole-filling filter, which is off by default. Switching to 3D with depth colours and quads:

and using the camera colours:

The red light meter to the right of the tripod reads 28 lux: I was astonished at the depthmap accuracy in such poor light.

One of the major failings of depth cameras is reflections, so my next tests were with glass. Here, an interior; again the light meter in the foreground reads 15 lux:

The right-hand side of the table is glass and thus appears further away, which is normal. The wine glass next to the light meter is captured perfectly. The model on the front-left of the table is a tiny Stirling engine. The Viewer has a nice zoom function:

The stereo matching doesn’t get the inner details of the wheel, but the outline is well delimited.

Next, looking outside from behind a window of 6mm + 15mm glass at 260 lux. My cursor was on the middle of the small hedge to the right of the wall; the 32 metres measured through 2 layers of glass in the depth stream seems quite reasonable:

Stepping outside to 1’800 lux (and light rain), there are a few artefacts due to the reflection in the water in front of the chair, and the window on the right appears to be at the same distance as the trees, which makes perfect sense as that is what is reflected. Notice that the depth images cover a greater area than the colour images. This scene is extremely unfavourable for a depth camera: poor light, reflections, textureless walls and rain:

Zooming in on the flower pot in the foreground, it isn’t immediately clear if the flowers themselves are distinguished:

but a little image enhancement shows that they indeed are:

Panning left, there’s a Buddha statue about 8 metres away. I positioned the cursor on the roof of the building above the Catalpa tree; the reading was 65.54 metres, which seems right (and is quite astonishing):

The Buddha is clearly rendered in the zoomed depth map, with only a handful of dead pixels:

The camera button on the depthmap outputs the PNG, the RAW and the metadata:

Frame Info:
Type,Depth
Format,Z16
Frame Number,64305
Timestamp (ms),1528815686864.40
Resolution x,1280
Resolution y,720
Bytes per pixel,2

Intrinsic:,
Fx,639.315613
Fy,639.315613
PPx,637.479370
PPy,362.691040
Distorsion,Brown Conrady

which is a nice touch.

The SDK, librealsense-2-12.0, has the C++ sources for everything and wrappers for C#, Unity, OpenCV, PCL, Python, NodeJS and LabVIEW. I tried the C# example SLN; after adding the reference to Intel.Realsense.DLL, the examples ran on first compile. The Depth tutorial generates a cute 70’s-style image made with characters (it’s my hand):

and the 2nd example – 100 lines of code – with depth and colour:

Conclusion

Over the years I have studied many depth cameras, particularly for outdoor use: Kinect, Stereolabs, SwissRanger, PrimeSense, Bumblebee, to name but a few. None were satisfactory, either because they were blinded by sunlight or cripplingly expensive. The D435 works perfectly both inside (with IR) and outside (with stereo matching); it is cheap, resilient and accurate. I think it is going to be a revolutionary game-changer in computer vision.

Aug 28 2017
 

TL;DR: Results · Lessons learned · Summary & Conclusions · Quick-start guide

I am building an autonomous lawnmower which needs to know its position precisely – to the order of centimetres. Standard GPS is an order of magnitude too coarse, and until recently all flavours of differential GPS were very expensive. This changed last year when UBlox released the C94-M8P with a price tag of €359 for a complete solution. The announcement said:

The NEO-M8P module series introduces the concept of a “Rover” and a “Base Station”. By using a correction data stream from the Base Station, the Rover can output its relative position with stunning cm-level accuracy in good environments.

An old dog, I was wary of “in good environments”. That RTK is capable of centimetre accuracy when stationary and with a horizon-to-horizon view of the sky, I have no doubt; but what about less GPS-friendly environments? This article is the result of real-life testing in my garden, cluttered by surrounding buildings and vegetation.

Objectives

My aim was to establish whether or not the UBlox solution would be sufficiently accurate for my mower, and in particular how it would perform in less-than-optimal conditions, so I devised a series of experiments, which I have named for the purposes of this discussion:

  1. Open. Stationary for 24 hours in the middle of the lawn, with a decent view of the sky. A base-line test to determine the system performance.
  2. Sky270. The receiver is next to the corner of a building, 90° of sky are obscured.
  3. Sky180. The receiver is close to a wall of a building, 180° of sky are obscured.
  4. Sky090. The receiver is in a corner of a building, 270° of sky are obscured.
  5. Canyon. The receiver is between two buildings with a heavy tree canopy, most of the sky is obscured.
  6. Mobile. The receiver is moved around the garden, stopping for 10 seconds at a known location in each of the previous tests.

Tests 2-6 were performed for an hour each. As described later, I discovered that the receiver needs 25-30 minutes to settle on a solution, so those data were removed.

All the tests are made with the antenna 54cm from the ground.

Test setup

I built a rooftop base station

powered by a solar cell with an MPP charger, the DC1621A evaluation board based on the LTM8062 chip from Linear Technology. As I had no idea how much power the C94-M8P would need – after all, it has a UHF transmitter – I opted for a 10Ah 2S LiPo. On the left we have the DC1621A, on the right the UBlox board and the battery below. There’s a cell-balancer and packets of silica gel tucked behind the DC1621A. After verifying that everything was working correctly, I mounted it on my roof:

My rover was simply a low table in the garden, with the antenna 54cm off the ground.

The rover’s signature is

In what follows, I am focussed solely on horizontal position – the X, Y values; I ignore altitude as it is only relevant for surveying and aviation.

All my calculations were performed in 64-bit floating point which gives ~15 decimal digits of precision.

The raw GPS data from the receiver contains latitude/longitude values. To convert these into distances I use Vincenty’s formula. I validated my implementation against 500’000 known distances, which are precise to 0.1 µm.
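
I don’t reproduce the Vincenty implementation here; as a sketch of the same lat/lon-to-distance conversion, pyproj’s WGS-84 geodesic gives equivalent results (the coordinates below are arbitrary illustrative values, roughly 1 mm apart):

    from pyproj import Geod

    geod = Geod(ellps="WGS84")

    # Two arbitrary points about 1 mm apart in latitude
    lat1, lon1 = 46.500000000, 6.500000000
    lat2, lon2 = 46.500000009, 6.500000000

    _, _, distance_m = geod.inv(lon1, lat1, lon2, lat2)   # note the lon, lat argument order
    print(f"{distance_m * 1000:.3f} mm")                  # ~1.0 mm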

Outputs

Each test produces 5 results.

1. Summary statistics

Quality (the NMEA term) is the type of Fix that the GPS currently has. Possible values are:

  1. Autonomous GNSS
  2. Differential GNSS
  3. 3D fix
  4. RTK Fixed
  5. RTK Float

It is worth noting that the ‘best’ fix is 4.

SVs is the number of satellites in view. I configured the rover’s UBX-CFG-NMEA to high-precision mode (to output 9 digits of precision on latitude and longitude), and as a result ‘considering’ mode is disabled. It is unclear whether satellites-in-view counts those seen or those actually used.

hDop is the horizontal dilution of precision created by the constellation geometry.

ErrorLat is the distance in millimetres from the average centre of all the readings. Similarly ErrorLon.

ErrorLatLon is the Euclidean distance between the measured point and the averaged centre of all the readings.

The ErrorX values are calculated using Vincenty’s Formulæ.
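
For anyone reproducing this: Quality, SVs and hDop come straight from the NMEA GGA sentence. A minimal sketch of extracting them with the pynmea2 library (the sentence below is an illustrative example I made up, not one of my recordings):

    import pynmea2

    # Illustrative GGA sentence; the 6th field after the timestamp-less talker is the fix quality (4 = RTK Fixed)
    sentence = "$GNGGA,123519.00,4631.0000000,N,00630.0000000,E,4,12,0.8,485.4,M,47.0,M,1.0,0000*7A"
    msg = pynmea2.parse(sentence, check=False)   # check=False because the checksum above is made up

    print("Quality:", msg.gps_qual)         # 4 -> RTK Fixed
    print("SVs:    ", msg.num_sats)         # satellites reported in the fix
    print("hDop:   ", msg.horizontal_dil)   # horizontal dilution of precision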

2. Error statistics

Limit is simply a reminder of the percentages associated with the number of standard deviations.

σ is the standard sigma calculation, the square root of the variance.

Out is the actual number of measurements that exceed the CEP below.

CEP is obtained by finding the number of samples ‘Out’ which exceed the limit %.

The difference between σ (a statistical calculation) and CEP (the true value for the data set in question) is readily explained: The statistical σ is valid only when the data are normally distributed (a Gaussian). Even over 24 hours, GPS readings are not Gaussian because the average is not stationary. Disraeli’s quip remains as true as ever.
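
The difference is easy to demonstrate on the recorded errors themselves; a sketch, assuming the per-sample Euclidean errors are already in a NumPy array:

    import numpy as np

    def sigma_vs_cep(errors_mm: np.ndarray) -> None:
        """Compare the Gaussian 3-sigma bound with the empirical 99.73% CEP."""
        sigma3 = 3 * errors_mm.std()                 # only meaningful if the errors were Gaussian
        cep997 = np.percentile(errors_mm, 99.73)     # the true bound for this data set
        print(f"3-sigma: {sigma3:.1f} mm   99.73% CEP: {cep997:.1f} mm")

    # Heavy-tailed synthetic errors (a stand-in for non-Gaussian GPS data): the two bounds diverge
    sigma_vs_cep(np.abs(np.random.default_rng(0).standard_t(df=3, size=86_400)) * 10)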

3. Deviation map

An X-Y plot of ErrorLat horizontally against ErrorLon vertically:

A point’s colour indicates its age as hue corrected for the human eye. First blue, then green, yellow, orange and finally red.

4. ErrorLatLon over time

The Euclidean distance from the average centre to the measured point, in mm, over time:

5. Histogram

The number of errors at each bucket of the Euclidean distance from the average centre:

 

Test results

Open test

Sky visibility is typical for a residential area. To the north and west a single-storey building

The receiver, antenna and PC are on a low table in the foreground.

To the east, a 4-storey building (actually 3-storey but on higher ground)

To the south and west, mature oak trees and 2-storey houses

The receiver is in Fixed mode 71% of the time.

The 3σ CEP of 712 mm is surprisingly high and this is visible both in the deviation map:

and in the histogram (I have added log(Count) overlaid on Count to show the long tail more clearly):

The problem occurred mid-morning and early afternoon:

The cause appeared to be the receiver occasionally losing the RTK solution. To analyse this I zoomed in on the first occurrence and plotted the absolute error along with the Fix: 3D=2, Fixed=4 and Float=5. To highlight the differences, Fixed errors are green, Float errors are blue and 3D errors are red. Notice that the vertical scale is logarithmic:

At 10:28:18 and 10:32:52 the receiver switches for one measurement from Float to 3D. The same thing happens at 11:21:18:

and in both cases it takes some 20 minutes before settling down.

The replay in u-centre shows nothing untoward at the instant of the loss:

I note that 20 minutes is the time that it takes the receiver to lock on cold start (details in Lessons Learned below) and postulate that momentary interference can cause a complete loss of the solution.

There were 4 such events in the 24-hour test. 4 in 86’400 = 1 in 21’600; a frequency less than 4σ, so I sought an explanation.

On the weekend of the tests there was an airshow nearby with several passes of military aircraft in formation. One can reasonably assume that 6 fighters at low altitude produce vast quantities of electromagnetic radiation and I imagine that military RF applications care little about the interference that they generate. Furthermore, the dropouts only occurred during the day, when the aircraft were flying, so I decided that they were the most likely cause. Consequently I decided to use a subset of the data with these events excluded. I also excluded the noisier data as of 04:32 in the morning, on ‘benefit of the doubt’ reasoning, for want of an explanation. The results then become:

   

This is what I expected, and much more reasonable.

Sky270 test

The receiver is against the NW corner of the house, 90° of obscured sky.

The receiver is slightly more often in Float than in Fixed (4.63 average quality). The 3σ CEP is 54 mm.

There is significantly more longitude error than latitude.

Sky180 test

The receiver is near the NW wall of the house, half the sky is obscured.

The receiver is continuously in Fixed mode. The 3σ CEP is 43 mm.

The histogram has longer tails due to the less favourable constellation geometry (practically no SVs to the SE):

The CEP is better than the Sky270 test, where only 90° of sky were obstructed. I attribute this to the fact that due to the lay of the land, the angle of the apex of the house is much larger when viewed from a corner. Additionally the sky view away from the house in this position is a little more cluttered.

Sky90 test

The receiver is in a corner of a building, facing SW, 270° of sky are obscured.

The receiver is in Fixed mode for the whole hour. The 3σ CEP is 185 mm.

  

The deviation shape clearly reflects the SW direction of the corner.

Canyon test

The receiver is in a NE-SW canyon between two houses with heavy canopy:

The wide-angle lens gives the impression that there is a wide view towards the camera. This is not the case: the two houses’ walls are almost parallel, some 12 metres apart, and the canopy reaches almost to above the camera.

The receiver never achieves Fixed mode. The 3σ CEP is 135 mm.

The error and histogram are noisier and the canyon effect is clear in the deviation map, where the longitude errors are double the latitude errors, again due to the constellation geometry (few SVs NE and SW):

Mobile test

I walked around the garden, pausing 10 seconds at each of the SKYNNN points to hold the GPS at a marked position.

The red points are the 9 nearest points to the average of each location.

The receiver was in Fixed mode 7% of the time.

The average position error when moving is 19 mm.

Lessons learned

Here are some tips which might be useful to those envisaging the C94-M8P solution.

Status

The status bar at the bottom-right of U-Centre is very helpful:

From left to right:

  • Version of connected GPS
  • Port, if a GPS is connected
  • File currently open (recording or replay)
  • The last message-type received. This is particularly useful to confirm that you are receiving RTCM messages, where you will see the field display UBlox-NMEA-RTCM in quick succession.
  • Elapsed time
  • TOD

Antennæ

UBlox stress repeatedly that the solution’s accuracy depends on the antenna quality. I verified this and was unable to get the receiver to converge to ‘fixed’ mode (the highest accuracy) with the supplied patch antennas, so I bought a pair of Garmin GA38.

Startup accuracy

I was disappointed with my initial tests, until it dawned on me that the receiver needs quite a while to converge on the optimal solution. Here is an error plot on power-up:

After 52 seconds the error is momentarily down to 24 mm, but it quickly rises to 1’400 mm, thereafter gradually getting close to zero over the next 13 minutes. It again wanders up to 200 mm over 5 minutes and achieves first Fix at 25:54. It finally alternates between Fix and Float for a couple of minutes before permanently settling to Fixed at 26:47. All this is perfectly reasonable, given the calculations involved.

The takeaway is that you should give the receiver half an hour to warm up before you can expect accurate results. Note also that seeing it switch to Fixed is not a guarantee that it has completely settled.

Fixed vs. Float

It is apparent from the Sky090 test results that Fixed mode is an indicator of better quality but not a guarantee. All that can be said is that if the quality is 4 or 5, it’s RTK; lower values are standard GPS.

Power

On a 7.45 V battery, the base station consumes ~0.077 A = 0.573 W.

At 12.58 V, the rover consumes 0.066 A = 0.83 W.

So the evaluation board seems to use a linear regulator and I imagine that a non-negligible amount of that power goes to the blinking LEDs.

Powered at closer to 3.5 V, you can assume a consumption of well under 0.5 W for either base or rover.

Interference

As I discovered in the Open test, unexpected interference can cause the unit to lose the RTK solution. The application can detect when this occurs by observing a Quality < 4; the consequence is that the position accuracy will be significantly degraded for the following 20 minutes.
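
One way an application could act on this is to flag every reading for a window after any non-RTK fix; a minimal sketch, where only the 20-minute figure comes from the observation above and everything else is my own assumption:

    from datetime import datetime, timedelta

    DEGRADED_WINDOW = timedelta(minutes=20)   # time the receiver needs to settle again

    def flag_degraded(readings):
        """readings: iterable of (timestamp, quality) pairs, quality being the NMEA fix type.
        Yields (timestamp, quality, degraded); degraded stays True for 20 minutes
        after any reading whose quality drops below 4 (i.e. loses RTK)."""
        degraded_until = datetime.min
        for ts, quality in readings:
            if quality < 4:
                degraded_until = ts + DEGRADED_WINDOW
            yield ts, quality, ts < degraded_until

    # Example: a single dropout poisons the following 20 minutes of positions
    sample = [(datetime(2017, 8, 26, 10, 28, 17), 5),
              (datetime(2017, 8, 26, 10, 28, 18), 2),
              (datetime(2017, 8, 26, 10, 40, 0), 4)]
    for ts, quality, degraded in flag_degraded(sample):
        print(ts.time(), quality, "degraded" if degraded else "ok")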

Baseline & Radio range

Although not relevant for my use-case, I nonetheless tested the maximum baseline distance by driving away from the base station with the unit on my dashboard.

The receiver was never in Fixed mode in the car and lost Float mode (falling back to a 3D fix) between 576 and 745 metres away from the base. The distances to the loss points are in red:

(The terrain in my area is almost perfectly flat.)

This limitation is certainly not due to the (amazing) UHF radio: I drove 13 kilometres away from the base and the RTCM stream arrived faultlessly during the whole journey.

Survey-In

There seems to be some confusion about the precision and accuracy of the base station. Given the non-Gaussian nature of GPS measurements, acquiring a TRUE position (whatever that means) by averaging would take weeks (months?), simply because the precision of the average depends on the ‘Gaussian-ness’ of the data. The base station’s absolute coordinates, insofar as they are within metres of the truth, are irrelevant, because the base transmits only relative satellite errors to the rover. The rover uses these relative numbers to adjust its solution; the accuracy of the result is not contingent on the precision of the base. Given that the satellites are ~20’000 km away, a rover-base distance < 100 km means that they both see identical distortions, which is why RTK baselines are generally recommended to be less than 50-100 km.
It follows that it is perfectly acceptable to perform a Survey-In in 1 minute with an error of 5 metres; your accuracy will be no better if you survey for an hour to a 30cm error. That said, expect a survey-in to 1 metre to take about 10 minutes.

Deviation Map

You can zoom in and out with the mouse wheel, but the minimum is 1 metre. To zoom in further, hold the Shift key while scrolling; this takes you down to a diameter of 20 cm.

Radio baud rate

The default baud rate is 19’200. Initially, I couldn’t get the RTCM messages to flow and finally discovered an error message: “GNTXT More than 100 frame errors, UART RX was disabled”. Reducing the baud rate to 9’600 fixed this, with no apparent consequences.

Summary

What does all this mean in layman’s terms?

Assuming the following conditions:

  • Urban environment
  • Reasonable antenna
  • 50 cm off the ground
  • During one hour, from a known truth position

I can make the following assertions about the accuracy and precision:

Accuracy. The error of the reported position will be less than 28 mm from the truth 99.73% of the time. In an urban canyon the error rises to 68 mm, and with only 25% of the sky visible, to 93 mm:

On average, you will get one reading worse than these values every 6 minutes.

Precision. It will never be worse than the precision of your base station: ±356 mm (the radius of the 24-hour CEP).

Conclusions

The promised accuracy is indeed ‘stunning’.

When combined with a TOF camera, the accuracy of a few centimetres is more than adequate for the mower I am building, so I’m perfectly satisfied with it.

That said, the limitations I discovered preclude this solution in some other use-cases that come to mind. Both UAV and farming applications require a much longer baseline and the inability to maintain Fixed mode accuracy is a handicap for those needing consistent high accuracy.

If UBlox can fix the baseline limitation and significantly improve the unit’s ability to stay in Fixed mode, this GPS will become a truly amazing feat of engineering.

In the spirit of openness, you are welcome to use the data that I gathered, for whatever purpose you see fit. Download here.

I am indebted to my local surveyors, JC Wasser, for explaining the finer points of their art and their advice on how to present my results.

sorry, you'll have to type this yourself

Appendix: Quick Start

If you’d like to try this solution, here’s a summary of the user guide, along with a few tips.

The user guide is clear and setting up the two stations is easy once you get the hang of u-centre (a nice piece of software, by engineers, for engineers). Assuming a classic setup with a fixed base station, here’s a quick-start guide:

  • Download the latest firmware. When you install it (Tools-Firmware Update), it’ll ask for the FIS file which nobody talks about; it is hidden away in the u-centre program files, called flash.xml.
  • Download the latest version of u-center

Setting up the Base

  • The base station needs power. It’ll take it from USB, but you’re going to need a source of external power once you’ve finished configuring it. It needs to be between 3.7 and 20 VDC; the + and – signs are on either side of the power connector, and you’ll need a magnifying glass to read them.
  • Fire up u-centre and connect the card you’ve chosen as base station to a USB port (the cards are identical, it’s your parameters that make them base or rover). Don’t try connecting more than one station to your PC at the same time!
  • In what follows, you need to click ‘Send’ in the toolbar at the bottom to transmit your settings over USB to the base station.
  • In Message view-UBX-CFG-GNSS, make sure GPS and GLONASS are enabled.
  • The base station also needs to know its own location, which it can figure out itself. Expand UBX-CFG-TMODE3, select Mode=1-Survey-In; this is what makes it a base station. The Maximum Observation Time and Required Position Accuracy are actually quite unimportant, because it doesn’t matter what the true latitude and longitude are, so 300 seconds and 3 metres are fine. Assuming you’ve got a good sky view, it’ll finish the survey easily within the 5 minutes. If you really insist on better precision, it takes about 10 minutes for an RPA of 1 metre.
  • UBX-CFG-MSG and enable F5-05 RTCM3.2 1005, 1077, 1087 and 1230 for UART1. They should all be set to 1 Hz except 1230, where 10 seconds is enough for the GLONASS code-phase biases.
  • UBX-CFG-PRT: set the target=1-UART1, Protocol-in=none, Protocol-out=5-RTCM3 and the baud rate to 9’600. This is saying: send all RTCM messages to UART1, where they’ll get transmitted on the UHF radio link. I couldn’t get the recommended 19’200 baud rate to work; I kept getting “GNTXT More than 100 frame errors, UART RX was disabled” messages.
  • UBX-CFG-CFG and save the configuration. That way, when you power-cycle the base, it will resume operation normally. If you forget this and lose power, go to step 2.
  • After a while, perhaps up to half an hour, in the Data View, you’ll see the base station switch to TIME. If it doesn’t, don’t worry, it’ll get there eventually and FLOAT mode already gives good results.

Setting up the Rover

  • Unplug the base station and connect the other, rover, station.
  • UBX-CFG-PRT: set the target=1-UART1, Protocol-in=5-RTCM3, Protocol-out=none and the baud rate to 9’600. This is saying: RTCM messages will be coming in on UART1, the UHF radio link.
  • After a moment, the rover should start receiving RTCM messages. You won’t see them by default: in UBX-CFG-Messages, right-click on RTCM and ‘Enable child messages’. Then open the Packet Console to see them (you won’t see anything in the Text Console).
  • Once they’re flowing in, time for a beer: the rover has to converge on the solution and this can also take a while, for me 20 minutes at minimum. You can monitor the convergence progress with UBX-NAV-RELPOSNED; click the ‘Poll’ button bottom-left to update the display if it isn’t enabled. The N and E Relative accuracies will fall gradually and eventually reach 0.01 = 10 centimetres.
  • After some time it should switch to Fixed mode. If it doesn’t, don’t worry, as I mentioned in Lessons Learned, Fixed mode is better but not a pre-requisite for decent results.
  • Now, and only now, can you make meaningful observations. Restart u-centre (the only way I found to reset the Deviation Map) and View-Deviation Map (F12). Yellow dots = last value, green dots = previous values. To zoom in closer than 1 metre, press Shift whilst scrolling the mouse wheel; the minimum is 20 centimetres.
Apr 26 2017
 

[mass noun] combination of computer hardware, software and telecommunications equipment that allows individuals to disseminate vacuous guff to a wide audience.

The ultimate DrivelWare©™ is Twitter. As its name and logo clearly indicate, it allows hundreds of millions to parrot sparrows by creating digital noise. It is a fact that sparrows’ tweets have important Darwinian functions: to congregate, warn of danger and attract mates for reproduction. In contrast, human tweets fulfil none of these functions; there is no congregation or danger in cyberspace, and reproduction requires a physical encounter. Twitter has become the de facto leader in DrivelWare©™ due to its 140-character limit, which curtails – wisely – the amount of information that can be transmitted. In practice this limit is not problematic, as the average tweet length is 28 symbols, a good proxy for the authors’ IQ.

The most pervasive DrivelWare©™ is Facebook, where the gerbil-like publish self-important, whimsical information created by random synapse firings: location, bowel movements, olfactory sensations and so forth. The behaviour is rewarded with ‘likes’ from correspondents, sustaining a Pavlovian feedback mechanism that encourages cyclic eructations.

Finally, the epitome of DrivelWare©™ is SnapChat, where the mindless content is automatically deleted a few seconds after it is created, thus reinforcing the correlation between the quality and the lifetime of the message.