Bug: FlightAware NTP server abuse

While debugging an unrelated issue, I noticed that FlightAware, or at least the PiAware software, is hitting pool.ntp.org once a minute, 24/7. Given the number of FlightAware systems out there, having tens of thousands(?) of systems constantly hammering their servers must come close to qualifying as a DoS attack. Could the NTP query frequency be dropped from once a minute to a more sane once every six or twelve hours?

Where did you find that information? I’d like to check my own system. Thanks.

Firewall logs; they show a query to pool.ntp.org once a minute.

Actually, just had another look and it’s not strictly once a minute; sometimes there’s a two-minute gap between queries, but in any case it’s a never-ending stream of NTP queries interspersed with occasional data uploads to FlightAware.

I have Raspbian and installed PiAware from the scripts, so I don’t have a firewall. My router’s not showing any unusual or excessive traffic on the wifi network, though.

The piaware sdcard image uses the standard upstream NTP config; there’s nothing special there. I don’t have a system immediately on hand to check, but I thought it uses the Debian-specific vendor pool, not the generic pool.ntp.org. edit: I remembered correctly:

pool 0.debian.pool.ntp.org iburst
pool 1.debian.pool.ntp.org iburst
pool 2.debian.pool.ntp.org iburst
pool 3.debian.pool.ntp.org iburst

From memory, the default config backs off to a 1024-second interval, assuming reasonable network conditions.

(I considered applying for a vendor pool for the sdcard image, but given that Raspbian itself doesn’t do that and just uses the Debian pool, and we’re using the default config, it seemed like overkill)

The piaware feeder itself does no NTP queries. (It will periodically ask the local ntpd for current clock synchronization info, but it doesn’t otherwise interact)

Do you perhaps have the FR24 client installed? I believe that does its own NTP internally and may misbehave.
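If you want to check, something along these lines should show whether it’s installed and running (the service/package is usually called fr24feed, though that name is an assumption on my part):

```
# Hypothetical check for the FR24 client; the service is normally
# named "fr24feed" -- adjust if yours differs.
systemctl status fr24feed
dpkg -l | grep -i fr24
```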

What’s the name of the process causing these queries? Are you using the PiAware image, or the Debian package installed on top of Raspbian?
Without that information it’s hard to say where these queries come from.

I remember a discussion here in this forum about a similar case, but I can’t find it anymore.

I am using the darkstat traffic analyzer on my device. pool.ntp.org is on the list, but at a much lower frequency, and that traffic could come from almost anything.

It sure misbehaves, and has for ages.
It’s misbehaving slightly less, if I’m not mistaken; it used to get the time from seven world zones at the same time.
Now it just uses the pool at way too high a frequency (a factor-of-7 improvement is something at least, and some small zones don’t get hammered as much).

FR24 has been told, they ignore it.

It’s the unmodified PiAware image from PiAware - build your own ADS-B ground station for integration with FlightAware - FlightAware. The only changes from the default image were setting the feeder-id, switching to a static IP address, and enabling SSH access.

Had a poke around in case it was systemd being systemd, but it’s not that; it appears to be ntpd.

That image just uses the standard upstream package’s config - take a look at /etc/ntp.conf. If there’s something wrong there then you should probably raise an upstream bug with Debian about it.

Offhand, there are a couple of things I can think of that it might be:

  • I’m not sure exactly what your firewall is reporting, but if it’s DNS queries then the traffic may just be ntpd refreshing the pool servers periodically, and not actual NTP traffic? (It can’t be reverse DNS on actual NTP traffic, because pool.ntp.org and subdomains resolve to a large pool of third-party hosts)

  • There are some bad interactions between the upstream dhcpcd hook scripts and ntpd that I haven’t fully investigated, but which seem to lead to an ntpd restart every time a DHCP lease is refreshed. Usually this isn’t a huge problem, but if you have a very short DHCP lease time then perhaps those restarts could explain it. (Quick ways to check for both of these are sketched below.)
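A rough sketch of how you might check both of these, assuming the standard PiAware/Raspbian image (systemd, the Debian ntp package, and eth0 as the network interface; adjust names as needed):

```
# Is the once-a-minute traffic actual NTP (udp/123) or just DNS lookups
# (udp/53) for the pool hostnames? Watch for a couple of minutes.
sudo tcpdump -n -i eth0 'udp port 123 or udp port 53'

# Is ntpd being restarted repeatedly (e.g. on DHCP lease renewals)?
# Multiple recent start entries in the journal would suggest that.
journalctl -u ntp --no-pager | grep -i 'starting\|started' | tail
```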

It’s probably worth looking at ntpq -p etc to see what ntpd is doing, perhaps you have reachability problems that are making ntpd never establish a good association with a peer?

It’s not DNS; the log shows flows from PiAware-IP:123 to pool.ntp.org:123, 48 bytes out and 48 bytes in each time. It’s also exchanging requests and responses with a local NTP source, presumably part of the NTP pool, at about ten-minute intervals.

Ah, forgot to mention: I’m running with static IP addresses, not DHCP, so it shouldn’t be that either.

pool.ntp.org resolves to a large number of hosts (4319 at the time of writing), as I mentioned above. What’s the IP?

It changes on most queries, but for the last few it’s been, in order, 132.181.2.72, 103.242.70.5, 130.217.74.61, 103.242.70.4, and 103.242.68.69.

None of those have reverse DNS that resolves to pool.ntp.org, so I’m having trouble understanding what the flow you’re looking at actually is. Your firewall reporting sounds like it’s aggregating a bunch of separate traffic into a single flow and misleading you about what it is.

It’s normal for ntpd to have associations with several pool servers and cycle between them.

Have you looked at ntpq -p to see what associations are actually up and what the interval / reachability is?

Yup, all looks OK:

```
$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 0.debian.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000
 1.debian.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000
 2.debian.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000
 3.debian.pool.n .POOL.          16 p    -   64    0    0.000    0.000   0.000
-ns2.tdc.akl.tel 202.46.178.18    2 u   42   64  377    2.382   -0.163   0.704
#101.100.146.146 202.46.177.18    2 u   49   64  377   12.931    0.328   0.690
+time.cloudflare 10.46.8.10       3 u   45   64  377    2.175   -0.954   0.498
+ntp2.its.waikat .GPS.            1 u   43   64  377    4.574   -0.408   0.545
+ns2.att.wlg.tel 131.203.16.6     2 u   40   64  377   10.971    0.613   0.380
+ns1.att.wlg.tel 131.203.16.6     2 u   41   64  377   11.209    0.619   0.299
-101.100.138.250 202.46.178.18    2 u   40   64  377    4.055   -0.182   0.665
#125-236-210-101 103.242.68.68    3 u   36   64  377    6.772    1.547   0.970
#joplin.convolut 131.203.16.6     2 u   36   64  377   12.454    0.297   0.202
+ns1.tdc.akl.tel 202.46.178.18    2 u   40   64  377    2.341    0.389   0.322
#time.cloudflare 10.46.8.10       3 u   31   64  377    2.223   -0.784   0.465
*ntp3.its.waikat .GPS.            1 u   32   64  377    4.106    0.247   0.261
-132.181.2.72    .GNSS.           1 u   39   64  377   18.084    0.710   0.276
+ntp1.its.waikat .GPS.            1 u   35   64  377    4.431    0.605   0.383
```

reach indicates responses are getting back, and the poll of 64 would match the roughly-once-a-minute behaviour. Assuming I’m reading that right.

OK, so that doesn’t work too well in a non-monospaced font, but the important bits are poll = 64 and reach = 377, so 8 out of 8 requests were replied to; I’m guessing that’s an octal value, so 8 bits set.
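A quick sanity check (reach is indeed an octal display of an 8-bit shift register of the last eight polls):

```
$ echo 'obase=2; ibase=8; 377' | bc
11111111
```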

OK, so not seeing anything abnormal there, it’s all working as intended?

The ntp docs I have here suggest that the default poll interval is 64 … 1024s, and the selection of the interval is done based on clock/network stability, so perhaps ntpd is not confident enough in that to increase the poll interval. If you wanted to force it to use a longer interval, you could add minpoll / maxpoll parameters to the pool lines in ntp.conf.
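For example, something along these lines in /etc/ntp.conf (untested, and the values are only illustrative; the arguments are log2 seconds) would keep the poll interval between roughly 17 minutes and 4.5 hours:

```
# minpoll/maxpoll are powers of two in seconds:
# 2^10 = 1024 s (~17 min), 2^14 = 16384 s (~4.5 h)
pool 0.debian.pool.ntp.org iburst minpoll 10 maxpoll 14
pool 1.debian.pool.ntp.org iburst minpoll 10 maxpoll 14
pool 2.debian.pool.ntp.org iburst minpoll 10 maxpoll 14
pool 3.debian.pool.ntp.org iburst minpoll 10 maxpoll 14
```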

But the tl;dr here is that it all seems to be working as expected, and it’s using the NTP pool servers in the way the pool operators describe they should be used. So, to go back to your original post and its title, I don’t see a bug here, and certainly no abuse of the pool going on.

(Discourse uses a Markdown variant; you can get monospaced formatting with a triple-backquote block. I’ll edit that into your post in a moment.)

Ah, thanks for the edit.

So it may be working as intended, but sending out an NTP query roughly once a minute rather than, say, every twelve hours, implies there’s something wrong with the intent (the last two frames of this XKCD spring immediately to mind :-).

Just looking at some other Pi stuff running here, with out-of-the-box distros: there’s a Raspberry Shake sending out three queries every 16-17 minutes, which would be the 1024-second max interval, and a Pi-based AIS receiver that sends out a single query every hour. The rest of the embedded stuff is non-Pi; they use NTP sync, but so infrequently that any queries have rolled off the end of the logs.

So it may be a Raspbian issue rather than strictly a PiAware issue…