"Server status: clock unstable"

Bit of an odd one.

I am receiving “Server status: clock unstable” with the associated “This feeder is not being used for multilateration because its timing information appears to be unreliable. This can be caused by the site location being incorrect, or because your Pi is running out of free CPU.” message on my feeder page. Relevant logs (/var/log/piaware.log) are as below:

Dec 30 21:34:03 deb-adsb piaware[383]: 300 msgs recv’d from unknown process (207 in last 5m); 300 msgs sent to FlightAware
Dec 30 21:38:31 deb-adsb piaware[383]: mlat-client(745): Receiver status: connected
Dec 30 21:38:31 deb-adsb piaware[383]: mlat-client(745): Server status: clock unstable
Dec 30 21:38:31 deb-adsb piaware[383]: mlat-client(745): Receiver: 26.3 msg/s received 7.4 msg/s processed (28%)
Dec 30 21:38:31 deb-adsb piaware[383]: mlat-client(745): Server: 0.0 kB/s from server 0.0kB/s TCP to server 0.1kB/s UDP to server
Dec 30 21:38:31 deb-adsb piaware[383]: mlat-client(745): Results: 2.8 positions/minute
Dec 30 21:38:31 deb-adsb piaware[383]: mlat-client(745): Aircraft: 2 of 5 Mode S, 8 of 8 ADS-B used
Dec 30 21:39:03 deb-adsb piaware[383]: 590 msgs recv’d from unknown process (290 in last 5m); 590 msgs sent to FlightAware
Dec 30 21:44:03 deb-adsb piaware[383]: 923 msgs recv’d from unknown process (333 in last 5m); 923 msgs sent to FlightAware
Dec 30 21:49:03 deb-adsb piaware[383]: 1292 msgs recv’d from unknown process (369 in last 5m); 1292 msgs sent to FlightAware
Dec 30 21:53:32 deb-adsb piaware[383]: mlat-client(745): Receiver status: connected
Dec 30 21:53:32 deb-adsb piaware[383]: mlat-client(745): Server status: clock unstable
Dec 30 21:53:32 deb-adsb piaware[383]: mlat-client(745): Receiver: 74.1 msg/s received 19.7 msg/s processed (27%)
Dec 30 21:53:32 deb-adsb piaware[383]: mlat-client(745): Server: 0.0 kB/s from server 0.0kB/s TCP to server 0.3kB/s UDP to server
Dec 30 21:53:32 deb-adsb piaware[383]: mlat-client(745): Aircraft: 3 of 4 Mode S, 8 of 9 ADS-B used
Dec 30 21:54:03 deb-adsb piaware[383]: 1624 msgs recv’d from unknown process (332 in last 5m); 1624 msgs sent to FlightAware
Dec 30 21:59:03 deb-adsb piaware[383]: 1850 msgs recv’d from unknown process (226 in last 5m); 1850 msgs sent to FlightAware
Dec 30 22:04:03 deb-adsb piaware[383]: 2078 msgs recv’d from unknown process (228 in last 5m); 2078 msgs sent to FlightAware

I am running piaware on a Debian virtual machine with the latest version of dump1090-mutability (built around a week ago from github). The host machine is not under load at all; load on the VM is around 0.10-0.20 and it has 90% free memory. Other than MLAT, the feeder is working great - I'm getting plenty of reports and I'm happy with it - but I'm confused as to why my setup is unable to provide MLAT and whether there is anything I can do about it. I have one or two MLAT position reports on my FA feeder page, but it only seems to work for a minute or two after I restart the setup, and then it drops back into the "anomaly" state…

The lat/lon is set on the FA feeder page and is set identically in the /etc/default/dump1090-mutability config file. fa-mlat-client is running, as is faup1090.

Any ideas from anyone?

Debian virtual machine

I have seen problems with VMs dropping lots of USB traffic, which will hose mlat timing.

Fantastically quick response, thank you.

That’s a damn shame… I will do some troubleshooting in that vein and see if I can get anywhere…

Is there any way to see how stable my MLAT timing is? As I understand it, the computation is done on the FlightAware side of things, which makes getting performance data really difficult, but if there's a way to see it I'd be very willing to do some tweaking and give it a try.

If it's worth anything, I have had no issues with any of my VMs dropping USB traffic. I use another USB dongle as an airband voice receiver and it works flawlessly. That said, I do suppose MLAT is more sensitive to timing…

Not much you can do beyond monitoring the feedback the server gives you. When I looked at a similar VM problem a while back, the problem was mostly invisible on the receiver side: it would just drop 1% of sample blocks or something like that. You wouldn't really notice it as a reduced message rate, since you're still seeing 99% of the traffic, but on the mlat server side it shows up as the intervals between messages from your receiver being way off compared to the same messages received by other receivers.
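To put a very rough number on it, here's a back-of-envelope sketch assuming dump1090 reads the dongle in 131072-complex-sample USB transfers (that buffer size is my assumption, not something I've verified against the source):

# Back-of-envelope: what one dropped USB transfer does to mlat timing.
# Assumption: dump1090 reads the dongle in 131072-complex-sample transfers
# (256 kB at 2 bytes per I/Q sample); your actual buffer size may differ.
SAMPLE_RATE = 2_400_000        # samples/s at dump1090's 2.4 MHz rate
BLOCK_SAMPLES = 131_072        # complex samples per USB transfer (assumed)
SPEED_OF_LIGHT = 299_792_458   # m/s

step = BLOCK_SAMPLES / SAMPLE_RATE               # time skipped by one lost block
print(f"one lost block shifts the clock by {step * 1e3:.1f} ms")
print(f"equivalent ranging error: {step * SPEED_OF_LIGHT / 1000:.0f} km")
# mlat needs receiver clocks consistent to roughly a microsecond
# (about 300 m of range), so even rare drops make the clock look unstable.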

You could try using rtl_test I guess (@ 2.4MHz sample rate) - that will report dropped samples.
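If you want to log the result rather than watch the terminal, something like this Python wrapper is a rough sketch; the "lost at least" string is what I recall rtl_test printing when it detects drops, so adjust it if your version reports them differently:

# Rough sketch: run rtl_test at dump1090's 2.4 MHz rate and count its
# dropped-sample reports. Assumes rtl_test prints lines containing
# "lost at least" on drops; stop with Ctrl-C when you've seen enough.
import subprocess

proc = subprocess.Popen(
    ["stdbuf", "-oL", "rtl_test", "-s", "2400000"],  # stdbuf keeps output line-buffered
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
)
drops = 0
try:
    for line in proc.stdout:
        print(line.rstrip())
        if "lost at least" in line:
            drops += 1
except KeyboardInterrupt:
    pass
finally:
    proc.terminate()
    print(f"drop events seen: {drops}")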

Thanks - that helps, I’ll give that a try.

What hypervisor were you troubleshooting with, out of interest? I’m currently dealing with Proxmox/KVM.

It was a VMware VM, IIRC.

Thanks - I would assume VMware has a better grasp of USB passthrough than KVM, so I may have to give in and use a Raspberry Pi, but it never hurts to try.

Would you (or anybody else) be able to dump your latest statistics block from /var/log/dump1090-mutability.log? The excerpts I'm interested in look like:

CPU load: 6.4%
2186951 ms for demodulation
278934 ms for reading from USB
56449 ms for network input and background tasks
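If it helps, here's a quick way to pull the most recent block out of the log. This assumes the stock /var/log/dump1090-mutability.log location and that each block starts with a "Statistics:" line, so adjust the path if your install logs elsewhere:

# Sketch: print everything from the last "Statistics:" header onwards so the
# most recent block can be pasted into the thread. Path is the assumed default.
LOG = "/var/log/dump1090-mutability.log"

with open(LOG) as f:
    lines = f.read().splitlines()

starts = [i for i, line in enumerate(lines) if line.lstrip().startswith("Statistics:")]
if starts:
    for line in lines[starts[-1]:]:
        print(line)
else:
    print("no statistics block found - is periodic stats logging enabled?")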

Same issue here. Running piaware in a virtualized environment (ESXi host, Debian guest). I am looking for the statistics you requested, but my system does not dump them. What parameters do I need to pass to dump1090?

Running Ubuntu 16.04 in a Virtual Box VM. Everything is working great except for MLAT.

Here are my stats:
CPU load: 8.7%
72193 ms for demodulation
8327 ms for reading from USB
2316 ms for network input and background tasks

I recently dabbled with a VMware Debian setup on a Win7 host and was able to get MLAT working perfectly, without error, for about 3 weeks before I shut it down. Make sure you set the guest USB settings to USB 2.0, not USB 3.0. I also gave the guest 4 processors and 4 GB RAM - overkill, I know - and left the rest of the guest settings at their defaults. I used a USB extension with ferrites to help with USB noise, and made sure to plug it into a USB 2.0 port on the motherboard (don't use front-panel connectors etc.). I initially had the extension plugged into a USB 3.0 port and had the MLAT issues you all describe. USB 3.0 ports are noise makers, so stay away from them when messing with SDR for this sort of thing. They may be needed for radios with higher throughput, but we're not dealing with that here. Perhaps this will help someone; it worked for me at least.

Here are a few lines from /var/log/dump1090-mutability.log (viewed with less):

Statistics: Wed Mar 1 15:19:24 2017 CST - Wed Mar 1 16:19:24 2017 CST
Local receiver:
6816792576 samples processed
0 samples dropped
0 Mode A/C messages received
44803638 Mode-S message preambles received
27620379 with bad message format or invalid CRC
16661625 with unrecognized ICAO address
491547 accepted with correct CRC
30087 accepted with 1-bit error repaired

Bad message format and unrecognized ICAO seem high. I am thinking it has something to do with the VM.

I removed the USB extension cable and plugged the receiver in directly, but that didn't make a difference. I am using a USB 2 port.

I don't think that's out of line, mate. Here is a sample from a working Pi3 setup (read: non-VM):



Statistics: Wed Mar  1 15:12:36 2017 MST - Wed Mar  1 16:12:36 2017 MST
Local receiver:
  8640004096 samples processed
  0 samples dropped
  0 Mode A/C messages received
  65758094 Mode-S message preambles received
    40689673 with bad message format or invalid CRC
    22856951 with unrecognized ICAO address
    2117929 accepted with correct CRC
    93541 accepted with 1-bit error repaired


The drops (bad msgs) are mostly just packet collisions or other noise, not necessarily samples being dropped on the USB side. We're both running ~61% badmsg/Mode-S, which seems to be about par for the course in somewhat noisy environments. Is timing still popping up on FlightAware as an issue? If so, perhaps try a different USB port. Keep that extension and add some ferrites, or find a cable with ferrites if possible. Again, I realize a VM makes things crawl through a multitude of extra layers (especially if Winblowz is the host), but either I was really lucky, or there is a method to make it consistently work through a VM. All I can offer is what worked for me.
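For anyone wondering where the ~61% comes from, it's just the bad-format/CRC count divided by the Mode-S preamble count in each block above:

# Quick check of the ~61% badmsg/Mode-S figure from the two stat blocks above.
blocks = {
    "VM":  (27620379, 44803638),   # (bad format/invalid CRC, Mode-S preambles)
    "Pi3": (40689673, 65758094),
}
for name, (bad, preambles) in blocks.items():
    print(f"{name}: {bad / preambles:.1%} of preambles failed format/CRC checks")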

Yea, a little more research revealed that the stats are probably normal. MLAT not working on VMs seems to be a common thing, but the fact that you had it working, Nitr0, gives me hope it can be solved. The research continues…

The latest VirtualBox (with the Extension Pack) hosted on Windows 10 did NOT work for me.
I posted a similar experience in: ads-b-flight-tracking-f21/multilateration-mlat-now-available-on-piaware

I moved the dump1090 process, along with the USB radio, to Windows (bare metal, so to speak) and set it up to feed piaware on the VM, and that appeared to work.

I didn't have time to explore the issue further.
It was easier to buy a Pi3, which I wanted to test anyway.

Yea, that's what I have decided to do as well - Pi out for delivery tomorrow. The VM would have been nice since the server is running anyway, but for time-sensitive applications I just don't think it will work. Same here: I have always wanted to try a Pi but have just never ordered one.

I'd like to continue this quest because I don't think I simply got lucky and everything just worked; there must be something to the setup.
My setup consists of VMware 12.5.2 build-4638234 with Win7 as the host. I installed Debian in VMware using http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-8.7.1-amd64-netinst.iso and, after a simple apt-get update/upgrade, installed everything using Joe's script: https://github.com/jprochazka/adsb-receiver

I dedicated 4 cores to the VM along with 4 GB RAM; the rest was left at defaults, no other changes. I used a USB extension cable with a ferrite core (at the host PC end), plugged into a USB 2.0 port on the back of the motherboard, and the extension ran to an RTL-SDR v3 dongle. I ran that setup straight through an FA filter with 25 feet of RG-6 to an FA 26" antenna. MLAT worked like a champ without issue for about 3 weeks solid, no other tweaks (I installed dump1090-mutability through the script).

It would be good to get to the bottom of this for others that may be looking at using a VM instead of a Pi.

Might not have any relevance at all, but not all processors have hardware/native support for virtual machines.

Maybe having hardware support makes all the difference.

This is possible. The machine I used was based on https://ark.intel.com/products/75044/Intel-Core-i5-4570S-Processor-6M-Cache-up-to-3_60-GHz - my HTPC. This processor does have native VT-x, which may or may not make a difference. It is also quite plausible that the underlying host USB driver is the culprit. Here is a screenshot of what's in use on that host machine:
http://puush.hopto.me/4hFE

For those who haven't been able to get it to work properly, perhaps it would be a good idea to check the host's USB driver as well. I know some of the generic/pre-canned Microsoft drivers have issues. Honestly, I'm not sure how much of a role the host's USB driver plays when the device is passed through to the VM. I had experience with VirtualBox dropping a lot of USB packets on another project I was working on, so I switched to VMware a couple of years back, which gave much better results. Since MLAT depends so tightly on timestamping, I suspect the problem must exist somewhere in the USB path. I'm far from a virtualization expert, so I can only guess. It would be neat to get this issue nailed down so others can refer to this thread should they decide to go down this path.

The only reason I went down this path was that I wanted to test the AirSpy through a VM, to see if the Pi3 was in fact holding things back as far as processing power goes. I couldn't get the VM to latch onto the AirSpy no matter what distro I tried - Debian, Ubuntu 14.x, 16.x (after compiling the latest airspy_tools) - so I fell back on using the VM for a few weeks with the regular RTL dongle to log any performance differences (there were none between the VM and the Pi3, by the way).

Does the VM still behave if you update to the latest Intel chipset drivers?