Traffic upload irregularities 36GB over 3 days - Resolved - Incorrect stats reported by device

Hi there,
I am seeing a very large portion of my upload traffic going to the FlightAware servers over the last few days, 3 days to be exact. I currently run a Fortigate firewall, which feeds flows directly into QRadar, and it is reporting the figures below.

I am sure this is excessive; any feedback greatly appreciated.

70.42.6.232: 24 GB

216.48.109.64: 19 GB

216.48.109.64 isn’t us.

The remaining traffic to 70.42.6.232 is unexpectedly high if that covers 72 hours, and is wildly different from what I see on the server side. How much do you trust that “total bytes” figure? (Notably, some systems derive a “byte count” from what is actually a packet count plus some assumptions about average packet size, which often don't hold true.)
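To illustrate with made-up numbers how far off such an estimate can get when the real packets are small:

# Purely hypothetical figures, not measurements:
# ~200k small packets/day of ~180 B is ~36 MB/day of real traffic,
# but the same packet count at an assumed 1500 B average reports ~300 MB/day
echo "$((200000 * 180 / 1000000)) MB/day actual"
echo "$((200000 * 1500 / 1000000)) MB/day reported"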

FWIW, on the server side I see around 34MB (not GB!) of expanded non-mlat data from your feeder for the complete day of the 28th (UTC), or about 0.4kB/s. The actual transferred data will be less than this due to compression. From the fa-mlat-client logs, the mlat upload is also low (< 0.5kB/s).
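As a quick sanity check of that rate, 34 MB spread over the 86,400 seconds in a day:

echo "scale=2; 34 * 1000000 / 86400 / 1000" | bc   # ~0.39 kB/s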

Looking at the raw network traffic arriving right now, it all seems normal and low volume (TCP is the main upload; UDP is mlat traffic). (The duplication of packets is an artifact of how I did the capture: it picked up both the bonded interface and the physical interface.)
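For reference, a capture along these lines produces the trace below (the exact invocation here is illustrative; capturing on the "any" pseudo-interface is one way to end up seeing both the bond and its physical slave, hence the duplicated lines):

sudo tcpdump -n -i any host 82.14.xxx.xxx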

03:19:26.003441 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 2557559958:2557560108, ack 196733663, win 6090, length 150
03:19:26.003441 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 0:150, ack 1, win 6090, length 150
03:19:26.003464 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 150, win 8195, length 0
03:19:26.003466 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 150, win 8195, length 0
03:19:26.105582 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [P.], seq 1:78, ack 150, win 8195, length 77
03:19:26.105597 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [P.], seq 1:78, ack 150, win 8195, length 77
03:19:26.225852 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [.], ack 78, win 6090, length 0
03:19:26.225852 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [.], ack 78, win 6090, length 0
03:19:26.225873 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [P.], seq 78:236, ack 150, win 8195, length 158
03:19:26.225876 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [P.], seq 78:236, ack 150, win 8195, length 158
03:19:26.342597 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [.], ack 236, win 6090, length 0
03:19:26.342597 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [.], ack 236, win 6090, length 0
03:19:35.677007 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 150:287, ack 236, win 6090, length 137
03:19:35.677007 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 150:287, ack 236, win 6090, length 137
03:19:35.677027 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 287, win 8195, length 0
03:19:35.677028 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 287, win 8195, length 0
03:19:40.683412 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 287:333, ack 236, win 6090, length 46
03:19:40.683412 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 287:333, ack 236, win 6090, length 46
03:19:40.683445 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 333, win 8195, length 0
03:19:40.683448 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 333, win 8195, length 0
03:19:42.685631 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 333:423, ack 236, win 6090, length 90
03:19:42.685631 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 333:423, ack 236, win 6090, length 90
03:19:42.685643 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 423, win 8195, length 0
03:19:42.685644 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 423, win 8195, length 0
03:19:43.687759 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 423:479, ack 236, win 6090, length 56
03:19:43.687759 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 423:479, ack 236, win 6090, length 56
03:19:43.687776 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 479, win 8195, length 0
03:19:43.687778 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 479, win 8195, length 0
03:19:48.449276 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 479:1286, ack 236, win 6090, length 807
03:19:48.449276 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 479:1286, ack 236, win 6090, length 807
03:19:48.449306 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 1286, win 8195, length 0
03:19:48.449310 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 1286, win 8195, length 0
03:19:50.433787 IP 82.14.xxx.xxx.40364 > 70.42.6.232.8988: UDP, length 44
03:19:50.433787 IP 82.14.xxx.xxx.40364 > 70.42.6.232.8988: UDP, length 44
03:19:51.104682 IP 82.14.xxx.xxx.40364 > 70.42.6.232.8988: UDP, length 111
03:19:51.104682 IP 82.14.xxx.xxx.40364 > 70.42.6.232.8988: UDP, length 111
03:19:51.862313 IP 82.14.xxx.xxx.40364 > 70.42.6.232.8988: UDP, length 54
03:19:51.862313 IP 82.14.xxx.xxx.40364 > 70.42.6.232.8988: UDP, length 54
03:19:52.696312 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 1286:1436, ack 236, win 6090, length 150
03:19:52.696312 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 1286:1436, ack 236, win 6090, length 150
03:19:52.696335 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 1436, win 8195, length 0
03:19:52.696337 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 1436, win 8195, length 0
03:19:53.289927 IP 82.14.xxx.xxx.40364 > 70.42.6.232.8988: UDP, length 54
03:19:53.289927 IP 82.14.xxx.xxx.40364 > 70.42.6.232.8988: UDP, length 54
03:19:53.698434 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 1436:1574, ack 236, win 6090, length 138
03:19:53.698434 IP 82.14.xxx.xxx.43179 > 70.42.6.232.1200: Flags [P.], seq 1436:1574, ack 236, win 6090, length 138
03:19:53.698450 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 1574, win 8195, length 0
03:19:53.698453 IP 70.42.6.232.1200 > 82.14.xxx.xxx.43179: Flags [.], ack 1574, win 8195, length 0

Do you have finer-grained data?

Hey there,
Many thanks for your very detailed reply. I have taken a look at QRadar itself, and yes, there is indeed an issue with how the flows are being recorded. I have reviewed the bandwidth utilization directly from the Fortigate itself and see the results below for 7 days.

Thanks again for the detailed reply.

16MB/day sounds a bit more reasonable!

NB: there is a pool of servers used for receiving traffic from piaware; you may need to look at a few different IPs to get the full picture if piaware has reconnected during your measurement period. The IPs used are those named in DNS for piaware.flightaware.com.
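If you want to enumerate that pool yourself, the DNS lookup is straightforward:

dig +short piaware.flightaware.com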

That address is feed.adsbexchange.com.
That would seem excessive even for the old adsbexchange feed client.
The current one (including MLAT) should be 1 to 10 kByte/s, depending on how many aircraft you see.
That translates to roughly 100 to 900 MByte per 24h period, but 900 MByte per day would be a receiver seeing very many aircraft.
If you need to reduce that, there is a configuration option to do so, but for most connections it really is not an issue at all.
I suppose if you have a busy receiver and a monthly traffic limit or something, I'd consider it.
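For reference, those daily figures are just the rate times the 86,400 seconds in a day:

echo "$((1 * 86400 / 1000)) MB/day at 1 kB/s"    # -> 86 MB/day
echo "$((10 * 86400 / 1000)) MB/day at 10 kB/s"  # -> 864 MB/day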

Cheeky question @obj, and curiosity really. I'd be interested to know what my rough average daily figure of data sent to FA is. If it can be determined quickly, is there any chance you could have a look, please? Obviously it's low at the moment due to the levels of traffic, but it's just an intriguing figure.

You can install nethogs … it kinda provides something like what you're looking for.
It's not 100% accurate.
I'd recommend getting 30- or 60-second averages:

sudo nethogs -d 30

It will take 30 seconds before displaying any data …
You'd have to check the piaware logs for the mlat-client bandwidth usage, as the tool only does TCP.
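On a typical piaware sd-card install the mlat-client prints periodic bandwidth stats into the piaware log, so something like this should surface them (the log path is an assumption; package installs may log via syslog instead):

grep 'kB/s' /var/log/piaware.log | tail -n 5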

Unfortunately this would be a bit painful - looks like your connection moved servers a few hours ago so it’d involve tracing things back across all the servers to work out where your connection lived in the past.

It does, but that's all traffic; I have something like that here. I was interested in how much goes specifically to FA from my two feeders.

No problem, like I said, it was just a ‘nice to know’ sort of thing.

No it's not; try what I suggested.

Actually, I just saw that you can also use a perhaps more appropriate mode:

sudo nethogs -v 1 -s

The KB will count up in that mode instead of displaying KB/s.
Hmm … too bad it doesn't show how long it's been running 🙂
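A simple workaround for the missing runtime display is to note the start time yourself when launching it:

date; sudo nethogs -v 1 -s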

Apologies, assumption and all that 🙂

Anyway, three minutes of runtime, and this does give me a huge WTF moment.

Why WTF?
You're sending beast data to another Pi, no?

All the entries for ADSBX.

I don’t think I am sending the beast data to another Pi. It’s going into my VRS instance but I can’t think of another Pi that I’m sending it to.

That's the curl from the stats package; it's called every 5 seconds from a script, and this software doesn't summarize separate processes.
It doesn't add up to huge amounts of data either.

The received data for the curl seems to be wrong … I'll double-check with tcpdump.
(Hmm, might be the SSL for the https, but I don't see 5 KB in the tcpdump, more like 2 KB maybe.)
Anyhow, feel free to remove the stats package.

So dump1090-fa is sending the data to the VRS.
Anyhow, just explaining the largest position on there.

I am using darkstat on my devices. Filtering on the non-LAN addresses gives you a good overview of where the traffic goes.
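In case it helps, a minimal darkstat invocation for that (interface and subnet here are placeholders for your own):

sudo darkstat -i eth0 -l 192.168.1.0/255.255.255.0

Then browse to the web interface it serves (port 667 by default).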

The only feed that causes such a high volume on my end is opensky-network, because it's a full stream.
This is approx 1-1.5 GB per day.

