Aggregation of dump1090-mutability outputs


I have a tricky question: in our community network we have three sites, each with a Raspberry Pi and a dump1090-mutability setup. Each site covers a different part of the sky, so if you want to check nearby airplanes you have to visit all three servers. So I want to ask if it is possible to aggregate the data from these three sites and display it all on one of the three servers - especially the MLAT data of local planes.



There are three pieces to this:

  1. Something to aggregate the data and display the results.

You can run an additional instance of dump1090 with the --net-only option to do this. It will only process data from the network in this mode and doesn’t need a dongle. If it’s on the same machine as one of the receivers also running dump1090, you’ll need to change the default ports so they don’t conflict.

  2. Feeding data from the satellite receivers to the aggregator.

You can use netcat or socat to do this. You want to feed data from the “beast out” port (default 30005) on each receiver to the “beast in” port (default 30004) on the aggregator:

$ socat -u TCP:receiverhostname:30005 TCP:aggregatorhost:30004

  3. Feeding mlat results to the aggregator

You can tell piaware on each receiver to send results directly to the aggregator:

$ sudo piaware-config -mlatResultsFormat "beast,connect,localhost:30004 beast,connect,aggregatorhost:30004"

(You can omit the beast,connect,localhost:30004 if you don’t need to see mlat results on each individual receiver too)
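Putting the three pieces together, here is a sketch of how the socat feeds could be set up. The hostnames are placeholders (in this example the aggregator runs on site1), and the script only prints the commands so you can check them before running - drop the echo to actually start the feeds:

```shell
# Sketch: one socat feed per receiver site into the aggregator's beast-in port.
# site1/site2/site3 are hypothetical hostnames; adjust ports if you changed defaults.
RECEIVERS="site1.local site2.local site3.local"
AGGREGATOR="site1.local"

for rx in $RECEIVERS; do
  echo "socat -u TCP:${rx}:30005 TCP:${AGGREGATOR}:30004"
done
```

Each command copies beast output (port 30005) from one receiver into the aggregator's beast input (port 30004); running them under a loop or a process supervisor helps them reconnect after network drops.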


Thanks, we just tried it and … it works :slight_smile:. Once again thank you.


This is great, I’ve been looking at doing something like this!

What happens if the same aircraft is seen by more than one site? I assume it doesn’t create two targets, but just updates more often…?



Yes, it does not show the plane twice. I can't say for sure whether it updates more often, but probably yes.


Just wondering why you need a second instance of dump1090 on the aggregator when just the socat feeding port 30004 on the single instance seems to work.

On the remote Pi all I’d run is dump1090 without feeding FA but feeding the aggregator instead.

The remote Pi needs to be stable since I won’t have access to it physically.

Any concerns or comments in this setup?


mlat doesn’t work if you aggregate multiple receivers then try to feed the results to piaware/mlat. Also, you can only provide one location, which is no good if the receivers are separated.


Ok, I’ve been trying to figure out something that is probably very simple but my brain refuses to see it!

What I’d like is to have one webpage display the aggregated output from a remote receiver and the local resident dump1090 while each feeding piaware independently. Is this what we’re trying to do here?

On the remote I configure dump1090:

dump1090-mutability --net-bo-port 30007 --net-bi-port 30006

So this is not feeding Piaware, I assume now. I guess I can’t feed piaware and the aggregator at the same time.

On my aggregator I start a second dump1090 (changing the ports to avoid conflicting with the first instance):

dump1090-mutability --net-only --net-bo-port 30007 --net-sbs-port 0 --net-fatsv-port 0 --net-bi-port 30006 &

This should not interfere with the primary instance which is running fine at this point.

Then on the main primary Pi again, I start the socat:

socat -u TCP:remotehostIP:30007 TCP:localhost:30006 &

This connects to the remote and feeds the data to port 30006, which belongs to the second dump1090 process.

If I send it to 30004, which is the main dump1090 process port, then it comes up on the webpage and if I reconnect my local antenna, I assume it would show me both the local receiver data and the incoming network feed.

socat -u TCP:remotehostIP:30007 TCP:localhost:30004 &

Now at this point Piaware is complaining because it’s detected two receivers feeding.

So then the third command:

piaware-config -mlatResultsFormat "beast,connect,localhost:30004 beast,connect,aggregatorhost:30006"

sends mlat data from the receiver to both piaware and the aggregator…
Do I need to run this on the local main Pi as well? I assume so.

So what am I not seeing?

Thanks for your time!


On each receiver that you want to aggregate from:

  • run dump1090-mutability on the default port 30005
  • sudo piaware-config -mlatResultsFormat "beast,connect,localhost:30004 beast,connect,aggregator:30006"
  • run piaware normally

Each receiver continues to feed piaware as it already did previously. The only change here is reconfiguring mlat to also forward results to the aggregator.

On the aggregator (can be on one of the receivers if you want):

  • run dump1090-mutability --net-only --net-bi-port 30006 (plus disabling other ports as needed)

Somewhere (could be on each receiver, could be on the aggregator):

  • run socat for each receiver to connect receiver:30005 to aggregator:30006

The “local” receiver isn’t special in this setup, it just happens to be on the same machine as the aggregator but otherwise works the same.

You can connect as many things to port 30005 of a dump1090 as you need to. (For example, in a standard piaware setup, both faup1090 and fa-mlat-client are connected to port 30005)
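One practical detail worth noting: socat exits whenever the TCP connection drops, so each feed needs to be restarted to stay alive. A minimal keepalive wrapper (a sketch - the hostnames in the usage comment and the 30-second retry delay are arbitrary) could look like this:

```shell
# Reconnect forever: socat exits whenever the TCP link drops,
# so loop and retry after a short pause.
feed_loop() {
  # $1 = receiver host (beast out, 30005), $2 = aggregator host (beast in, 30006)
  while true; do
    socat -u "TCP:$1:30005" "TCP:$2:30006"
    sleep 30
  done
}

# Usage (backgrounded, one per receiver):
#   feed_loop receiver1.local aggregator.local &
```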


yep, that works fine… but the web interface (gmap.html) on the ‘local’ receiver still only shows what it’s receiving, not the aggregate dump1090 instance. I can only see the aggregate if I run the second instance in interactive mode.


Run the second instance on a different http port and look there.
Or point it at a different json data dir if you are using an external webserver.
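For example, the second instance could be started with a line like the following (a sketch for rc.local or similar; port 8081 and the data directory are arbitrary choices, and --write-json is only needed if an external webserver will read the JSON):

```shell
# Network-only aggregator instance with its own internal webserver on 8081,
# writing JSON to a separate directory for an external webserver to serve.
dump1090-mutability --net-only --net-bi-port 30006 \
  --net-http-port 8081 --write-json /run/dump1090-mutability-aggregator
```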


ok, it’s working now with a few issues:

I noticed in the javascript console in Chrome a few errors from loading the aggregator page only:
GET 404 (Not Found)
GET 404 (Not Found)

And at some point a couple more .json files which aren’t showing up right now.

Almost there… thanks for your patience. Trying to understand all the interactions between the different processes and what is doing what gets a bit confusing when you’re new to all of this! :confused:



This is mostly harmless. It means the static database of aircraft info (registrations etc) isn’t there; it’s not available when using the internal webserver.


VRS is pretty good if you have a windows machine you don’t mind keeping on 24/7. Under mono it is a bit flaky, and it’s a bit heavy for a Pi.


ok, thanks for the info. I haven’t successfully run VRS yet… I’m on an iMac and have installed mono, and I can also run a Windows virtual machine via Parallels… not sure I’d want to run it 24/7, but once I do try it and see how it works, perhaps I can find an older PC kicking around.

As for the aggregation thing, I’m running lighttpd on port 80… so I assume from your note that hitting port 8080 is using the internal web server and not lighttpd.

I found this through a google search:

1. Set your port for production use, either by leaving server.port
commented out or by setting:
server.port = 80
Set your standard document root:
server.document-root = "/path/to/production/version"

2. Add these lines to your config:
$SERVER["socket"] == ":81" {
server.document-root = "/path/to/testing/version"
}

3. Restart lighttpd.

4. Connect to http://hostname:81 or http://[your IP]:81

Ok, so I’ve just set up the above and have the second instance saving to a different .json folder… so if this is going to work, in step #2 above, where must :81 point to?.. or am I missing the point? :slight_smile:


If you want to run two webmaps from one lighttpd, it’s entirely possible but the packaging doesn’t do it for you so you will need to do a bit of manual config.

Take a look at /etc/lighttpd/conf-available/89-dump1090.conf
You will want to take a copy of this (call it 88-aggregator.conf or something) and:

  • update the URL paths to something different (say, /aggregator/ rather than /dump1090/)
  • change the /data/ alias to point at the json data dir for the aggregator instance

You can keep the DB and html paths pointed at the same place, sharing the main install, since those files will be the same for both copies.

You should end up with something like this:

url.redirect += (
  "^/aggregator/$" => "/aggregator/gmap.html",
  "^/aggregator$" => "/aggregator/gmap.html"
)

alias.url += (
  "/aggregator/data/" => "/run/dump1090-mutability-aggregator/",
  "/aggregator/db/" => "/var/cache/dump1090-mutability/db/",
  "/aggregator/" => "/usr/share/dump1090-mutability/html/"
)
Then tell lighttpd to use your new config:

$ sudo lighty-enable-mod aggregator   # this should match the filename in conf-available
$ sudo service lighttpd restart

Then look at pihostname/aggregator/


Brilliant!!! It’s working!

I had to comment out the last line in the .conf file since it was returning a “duplicate” error when I was restarting lighttpd.

I’ve put the second instance and the socat commands in rc.local
The only thing is that after a reboot the aggregator json folder disappears; things kick in as soon as I manually log in and create it.

Should I just add a mkdir command in rc.local?

Thank you again for all your help, there is absolutely no way I could have done this on my own!



Yeah. If you’re putting it in /run, that’s on tmpfs which goes away on reboot. The init.d script creates the data dir for the main instance, if you’re starting dump1090 manually then you’ll need to do the same.
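A sketch of the rc.local lines, assuming the data dir from the lighttpd alias above and that your dump1090 instance runs as user dump1090 (adjust the path and ownership to your setup):

```shell
# Recreate the tmpfs data dir at every boot, before starting the second instance
mkdir -p /run/dump1090-mutability-aggregator
chown dump1090:dump1090 /run/dump1090-mutability-aggregator
```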


Things are working nicely for more than a day now. I hopefully should be able to deploy the remote receiver next week.

I’m trying to find a way to tag and/or show the originating receiver for each plane in the table… From what I can see, it all just gets ‘aggregated’ into the json files without any discriminating info I could use.
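That matches how the aggregation works: the beast stream carries no receiver identity, so once frames are merged the origin is gone. One hedged workaround is to query each receiver's own JSON separately and do the tagging yourself. A sketch (assuming each receiver's JSON is reachable over HTTP at the usual dump1090-mutability lighttpd path, and that curl and jq are installed):

```shell
# Print "hex site" pairs from one receiver's own JSON output.
# $1 = receiver hostname; the /dump1090/data/ path is the lighttpd default alias.
tag_site() {
  curl -s "http://$1/dump1090/data/aircraft.json" |
    jq -r --arg site "$1" '.aircraft[] | "\(.hex) \($site)"'
}

# Usage: tag_site site1.local; tag_site site2.local
```

Cross-referencing the output against the aggregator's table would then tell you which site (or sites) is hearing each aircraft.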