FlightAware Discussions

Max messages/sec and dropping UDP to flightaware?

I have been trying to troubleshoot an anomaly error reported on the feeder stats page. I am seeing ~35-40% of the UDP multilateration traffic sent from piaware not reaching FlightAware. I was also feeding flightradar24, but I see this issue even when feeding only FA. I hadn’t noticed it prior to upgrading to the orange FA USB stick, which increased my range and aircraft count quite a bit. I now notice the drops once I reach 200-250 aircraft and 1200+ messages/sec. My Pi 3 B+ is wired and I have a fairly robust network, so this does not appear to be a “local” issue. Any thoughts on this? Anyone else seeing these drops at high message rates?

[2019-10-01 07:57 EDT] mlat-client(768): Receiver: 913.5 msg/s received 296.3 msg/s processed (32%)
[2019-10-01 08:08 EDT] 25841 msgs recv’d from dump1090-fa (2848 in last 5m); 25841 msgs sent to FlightAware
[2019-10-01 08:08 EDT] 506 msgs recv’d from dump978-fa (28 in last 5m); 506 msgs sent to FlightAware

That’s not a log of the UDP packet loss.

Remember that the RPi 3B+ and older models share a single USB 2.0 bus between the Ethernet controller, all four USB ports and, I think, the SD card. If your network is not dropping the packets (and your ISP connection isn’t), then it could be the RPi.

I had the same error, and in my case the problem was a wireless repeater used for my WLAN connection. It didn’t happen all the time, and a restart of the repeater helped.

Hmm… I wonder if adding the 2nd radio created this issue. I am going to disconnect the UAT radio and see if reduced USB load changes anything.

Does anyone know the UDP ports specifically involved here for the MLAT client sending traffic to FA? I have a few active states on the firewall, trying to monitor the specific flow with the errors…

It’ll be the one.

UDP goes to the same host as the main TCP connection (there are several possible hosts, see what piaware.flightaware.com resolves to) with a varying port between 4999…9999.
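For the firewall-state question above, a rule matching the resolved host plus any UDP destination port in 4999…9999 should cover the flow. A tiny Python sketch (the helper name is made up; it just encodes the range quoted above):

```python
# UDP port range used by the mlat server, per the post above.
MLAT_UDP_PORT_MIN, MLAT_UDP_PORT_MAX = 4999, 9999

def is_mlat_udp_port(port: int) -> bool:
    """True if `port` falls inside the range the mlat UDP flow may use."""
    return MLAT_UDP_PORT_MIN <= port <= MLAT_UDP_PORT_MAX

print(is_mlat_udp_port(5104))   # inside the range -> True
print(is_mlat_udp_port(30005))  # a typical Beast output port -> False
```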

So the fact that this says “processed” makes me think the Pi isn’t even processing/sending this data and this is NOT actually a network issue?

[2019-10-02 09:13 EDT] mlat-client(768): Receiver: 975.9 msg/s received 339.8 msg/s processed (35%)

That percentage is something else altogether.
Not all messages are useful for MLAT, so only a fraction of them is processed.

Interesting, because the percentage listed for processing is very close to the percentage listed as not reaching the servers?


There is another message in the logs about the UDP loss.
You’ll have to check the logs locally, so you can read the complete log:

sudo journalctl -eu piaware 

Sample from my log (doesn’t have UDP loss message):

3566814 msgs recv'd from dump1090-fa (1819 in last 5m); 3564114 msgs sent to FlightAware
3568740 msgs recv'd from dump1090-fa (1926 in last 5m); 3566040 msgs sent to FlightAware
3570876 msgs recv'd from dump1090-fa (2136 in last 5m); 3568176 msgs sent to FlightAware
mlat-client(7064): Receiver status: connected
mlat-client(7064): Server status:   synchronized with 299 nearby receivers
mlat-client(7064): Receiver: 1530.1 msg/s received      219.1 msg/s processed (14%)
mlat-client(7064): Server:      0.1 kB/s from server    0.0kB/s TCP to server     2.5kB/s UDP to server
mlat-client(7064): Results:  39.6 positions/minute
mlat-client(7064): Aircraft: 12 of 25 Mode S, 59 of 99 ADS-B used
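As an aside, the 14% on the Receiver line is just processed/received. A quick throwaway parse (format assumed from the exact sample line above) confirms it:

```python
import re

# Parse an mlat-client "Receiver:" status line like the one above and
# recompute the processed percentage (hypothetical helper, not part of piaware).
LINE = "mlat-client(7064): Receiver: 1530.1 msg/s received      219.1 msg/s processed (14%)"

match = re.search(r"([\d.]+) msg/s received\s+([\d.]+) msg/s processed", LINE)
received, processed = float(match.group(1)), float(match.group(2))
percent = round(100 * processed / received)
print(f"{processed}/{received} msg/s -> {percent}% processed")
# -> 219.1/1530.1 msg/s -> 14% processed
```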

I reset the states on my firewall and now I’m connected to a new server IP. And so far, no UDP loss messages. Hmm…

The mlat client has two parts: a C implementation that handles the incoming network connection and does packetization and filtering, and a Python layer that does the main processing of messages and communicates with the mlat server.

The Python half is much slower and would overload the original (Pi 1 class) hardware if it had to look at every message, so the C layer does a lot of filtering based on the requests that the mlat server provides (e.g. only forwarding particular combinations of message type + aircraft address requested by the server). If that wasn’t done in the C layer, it’d have to happen in the Python layer - either way, the filtered-out data never goes to the mlat server.

So it’s entirely a coincidence. In your example, 35% of messages make it through the filter to the Python layer; 65% are dropped early by the C layer. Some subset of the 35% that make it through is then sent via UDP, and of those, some are getting lost along the way.

(Also note that UDP datagrams are not 1:1 with Mode S messages; many Mode S messages are packed into a single datagram.)
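The two-stage flow described above can be sketched roughly like this (illustrative Python only, not the actual mlat-client code; the message data and batch size are invented):

```python
# Early filter: keep only the (message type, aircraft address) combinations
# the server asked for, then batch survivors many-per-UDP-datagram.

# Hypothetical example data: (message_type, icao_address, payload)
messages = [
    ("DF17", 0x4840D6, b"\x8d..."),
    ("DF11", 0x4840D6, b"\x5d..."),
    ("DF17", 0xA1B2C3, b"\x8d..."),
    ("DF4",  0x4840D6, b"\x20..."),
]

# What the server requested (the C layer drops everything else early)
wanted = {("DF17", 0x4840D6), ("DF17", 0xA1B2C3)}

kept = [m for m in messages if (m[0], m[1]) in wanted]
print(f"{len(kept)} of {len(messages)} messages pass the filter "
      f"({100 * len(kept) // len(messages)}%)")

# Batch surviving messages into datagrams of up to BATCH messages each,
# so datagrams are not 1:1 with Mode S messages.
BATCH = 2
datagrams = [kept[i:i + BATCH] for i in range(0, len(kept), BATCH)]
print(f"{len(kept)} messages -> {len(datagrams)} UDP datagram(s)")
```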


Awesome explanation. Thank you.

FYI, I discovered my ISP is throttling UDP uploads… I redirected the traffic out a secondary connection and had no drops for 24 hours. Moving back to the primary, I had 30% loss again. Wrapping the traffic in a VPN tunnel across the primary brought it back to 0% loss. What I find very strange is that my VPN tunnel uses UDP as well, so I would have thought it would be throttled too, but they must have some exceptions. Regardless, I think I’m all good.
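If anyone wants to quantify loss on a path themselves, the usual trick is numbered datagrams plus counting gaps at the far end. A minimal loopback sketch of the technique (for a real test of your uplink you would run the receiving side on a remote host, or use a tool like iperf3 in UDP mode):

```python
import socket
import threading

# Loopback demo of sequence-numbered UDP loss counting; on 127.0.0.1
# loss should be (close to) zero, which just validates the technique.
N = 200
received = set()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))          # OS picks a free port
port = sock.getsockname()[1]
sock.settimeout(0.5)                 # stop waiting once packets dry up

def receiver():
    while len(received) < N:
        try:
            data, _ = sock.recvfrom(16)
        except socket.timeout:
            break
        received.add(int(data))      # record each sequence number seen

t = threading.Thread(target=receiver)
t.start()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(N):
    tx.sendto(str(seq).encode(), ("127.0.0.1", port))
t.join()
tx.close()
sock.close()

loss = 100 * (N - len(received)) / N
print(f"sent {N}, received {len(received)} ({loss:.0f}% loss)")
```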

What do those log entries represent?

I just noticed that I have the same problem; it looks like there’s some occasional packet loss when pinging piaware.flightaware.com:
Pinging piaware.flightaware.com [] with 32 bytes of data:
Reply from bytes=32 time=128ms TTL=44
Reply from bytes=32 time=128ms TTL=44
Reply from bytes=32 time=123ms TTL=44
Request timed out.
Reply from bytes=32 time=126ms TTL=44
Request timed out.
Request timed out.
Reply from bytes=32 time=125ms TTL=44
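For what it’s worth, that ping sample already quantifies it: 3 of 8 probes timed out. A throwaway count (ICMP loss isn’t necessarily the same as UDP loss, but it’s in the same ballpark as the ~35-40% reported earlier):

```python
# Count the timeouts in the ping output above to estimate loss
# (quick throwaway parse, not a real network tool).
ping_output = """\
Reply from bytes=32 time=128ms TTL=44
Reply from bytes=32 time=128ms TTL=44
Reply from bytes=32 time=123ms TTL=44
Request timed out.
Reply from bytes=32 time=126ms TTL=44
Request timed out.
Request timed out.
Reply from bytes=32 time=125ms TTL=44
"""
lines = ping_output.strip().splitlines()
timeouts = sum("Request timed out" in line for line in lines)
print(f"{timeouts}/{len(lines)} probes lost ({100 * timeouts / len(lines):.1f}%)")
# -> 3/8 probes lost (37.5%)
```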

Tracing route to piaware.flightaware.com []
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms router.lan []
2 11 ms * 6 ms d8d861001.access.telenet.be []
3 8 ms 10 ms * dd5e0cae2.access.telenet.be []
4 11 ms 16 ms 12 ms dd5e0fa71.access.telenet.be []
5 12 ms 15 ms 12 ms be-dgb01a-rb1-ae-19-0.aorta.net []
6 14 ms 13 ms 13 ms be-bru02a-ra1-vl-6.aorta.net []
7 126 ms 125 ms 127 ms prs-bb3-link.telia.net []
8 128 ms * * ash-bb2-link.telia.net []
9 213 ms * * atl-b22-link.telia.net []
10 121 ms 120 ms 125 ms atl-b24-link.telia.net []
11 148 ms 258 ms * hou-b1-link.telia.net []
12 138 ms 135 ms 140 ms internap-ic-345829-hou-b1.c.telia.net []
13 133 ms * 125 ms border3.ae2-bbnet2.hou.pnap.net []
14 127 ms 182 ms 125 ms edge1.ae1-edgenet.hou.pnap.net []
15 127 ms 126 ms 138 ms superconnect-7.edge1.hou.pnap.net []
16 * * * Request timed out.
17 126 ms 124 ms 128 ms cagso.hou.flightaware.com []