Aggregation of dump1090-mutability outputs

Hello,
I have a tricky question: in our community network we currently have three sites, each with a Raspberry Pi and dump1090-mutability set up. Each site covers a different part of the sky, so if you want to check nearby airplanes you have to visit all three servers. So I want to ask whether it is possible to aggregate the data from these three sites and display it all on one of the three servers - especially the MLAT data of local planes.

Thanks

There are three pieces to this:

  1. Something to aggregate the data and display the results.

You can run an additional instance of dump1090 with the --net-only option to do this. It will only process data from the network in this mode and doesn't need a dongle. If it's on the same machine as one of the receivers also running dump1090, you'll need to change the default ports so they don't conflict.
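
For example, on a machine that is already running a receiver instance on the default ports, the extra instance might be started along these lines (ports 30006/30007 here are just an illustration; any free ports will do):

# Network-only aggregator instance; no dongle needed.
# 30006 = beast input, 30007 = beast output, chosen to avoid the
# receiver's default 30004/30005.
dump1090-mutability --net-only --net-bi-port 30006 --net-bo-port 30007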

  2. Feeding data from the satellite receivers to the aggregator.

You can use netcat or socat to do this. You want to feed data from the "beast out" port (default 30005) on each receiver to the "beast in" port (default 30004) on the aggregator:



$ socat -u TCP:receiverhostname:30005 TCP:aggregatorhost:30004
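
One practical point worth adding (not part of the original answer): socat exits if the connection drops, so for an unattended feed you may want to wrap it so that it reconnects automatically. A minimal sketch, reusing the hostnames above (the 30-second pause is arbitrary):

# keep the feed alive: reconnect whenever either end goes away
while true; do
    socat -u TCP:receiverhostname:30005 TCP:aggregatorhost:30004
    sleep 30    # brief pause before retrying
done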


  3. Feeding mlat results to the aggregator

You can tell piaware on each receiver to send results directly to the aggregator:



$ sudo piaware-config -mlatResultsFormat "beast,connect,localhost:30004 beast,connect,aggregatorhost:30004"


(You can omit the beast,connect,localhost:30004 if you don't need to see mlat results on each individual receiver too)

Thanks, we just tried it and … it works :slight_smile:. Once again, thank you.

This is great, I've been looking at doing something like this!

What happens if the same aircraft is seen by more than one site? I assume it doesn't create two targets, but just updates more often…?

–Dan

Yes, it does not show the plane twice. I can't say for certain whether it updates more often, but probably yes.

Just wondering why you need a second instance of dump1090 on the aggregator, when just using socat to feed port 30004 on the single instance seems to work.

On the remote Pi all I'd run is dump1090, not feeding FA but feeding the aggregator instead.

The remote Pi needs to be stable since I won't have access to it physically.

Any concerns or comments on this setup?

mlat doesn't work if you aggregate multiple receivers then try to feed the results to piaware/mlat. Also, you can only provide one location, which is no good if the receivers are separated.

Ok, I've been trying to figure out something that is probably very simple, but my brain refuses to see it!

What I'd like is to have one webpage display the aggregated output from a remote receiver and the local resident dump1090, while each feeds piaware independently. Is this what we're trying to do here?

On the remote I configure dump1090:


dump1090-mutability --net-bo-port 30007 --net-bi-port 30006

So this is not feeding piaware now, I assume. I guess I can't feed piaware and the aggregator at the same time.

On my aggregator I start a second dump1090 (changing the ports to avoid conflicting with the first instance):


dump1090-mutability --net-only --net-bo-port 30007 --net-sbs-port 0 --net-fatsv-port 0 --net-bi-port 30006 &

This should not interfere with the primary instance which is running fine at this point.

Then on the main primary Pi again, I start the socat:


socat -u TCP:remotehostIP:30007 TCP:127.0.0.1:30006 &

This connects to the remote and feeds the data to 30006, which belongs to the second dump1090 process.

If I send it to 30004, which is the main dump1090 process port, then it comes up on the webpage and if I reconnect my local antenna, I assume it would show me both the local receiver data and the incoming network feed.


socat -u TCP:remotehostIP:30007 TCP:127.0.0.1:30004 &

Now at this point Piaware is complaining because it's detected two receivers feeding.

So then the third command:


piaware-config -mlatResultsFormat "beast,connect,localhost:30004 beast,connect,aggregatorhost:30006"

sends mlat data from the receiver to both piaware and the aggregator…
Do I need to run this on the local main Pi as well? I assume so.

So what am I not seeing?

Thanks for your time!

On each receiver that you want to aggregate from:

  • run dump1090-mutability on the default port 30005
  • sudo piaware-config -mlatResultsFormat "beast,connect,localhost:30004 beast,connect,aggregator:30006"
  • run piaware normally

Each receiver continues to feed piaware as it already did previously. The only change here is reconfiguring mlat to also forward results to the aggregator.

On the aggregator (can be on one of the receivers if you want):

  • run dump1090-mutability --net-only --net-bi-port 30006 (plus disabling other ports as needed)

Somewhere (could be on each receiver, could be on the aggregator):

  • run socat for each receiver to connect receiver:30005 to aggregator:30006

The "local" receiver isn't special in this setup; it just happens to be on the same machine as the aggregator but otherwise works the same.

You can connect as many things to port 30005 of a dump1090 as you need to. (For example, in a standard piaware setup, both faup1090 and fa-mlat-client are connected to port 30005)
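
Putting that together for, say, two receivers feeding one aggregator that listens on 30006, the socat part would look something like this (hostnames are placeholders):

# one feed per receiver, each from beast-out (30005) to the
# aggregator's beast-in (30006)
socat -u TCP:receiver1:30005 TCP:aggregator:30006 &
socat -u TCP:receiver2:30005 TCP:aggregator:30006 &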

Yep, that works fine… but the web interface (gmap.html) on the 'local' receiver still only shows what it's receiving, not the aggregate dump1090 instance. I can only see the aggregate if I run the second instance in interactive mode.

Run the second instance on a different http port and look there.
Or point it at a different json data dir if you are using an external webserver.
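
For example, something along these lines (assuming dump1090-mutability's --net-http-port and --write-json options; the port and path here are only illustrative):

# second instance with its own internal webserver on 8081, and its json
# written to a separate directory for use by an external webserver
dump1090-mutability --net-only --net-bi-port 30006 --net-http-port 8081 --write-json /run/dump1090-mutability-aggregator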

ok, it's working now with a few issues:

I noticed a few errors in Chrome's JavaScript console, only when loading the aggregator page:
GET 192.168.1.100:8080/db/C.json 404 (Not Found)
GET 192.168.1.100:8080/db/A.json 404 (Not Found)

And at some point a couple more .json files which aren't showing up right now.

Almost there… thanks for your patience. Trying to understand all the interactions between the different processes and what is doing what gets a bit confusing when you're new to all of this! :confused:

D

Take a look at: virtualradarserver.co.uk/

This is mostly harmless. It means the static database of aircraft info (registrations etc.) isn't there; it's not available when using the internal webserver.

VRS is pretty good if you have a Windows machine you don't mind keeping on 24/7. Under mono it is a bit flaky, and it's a bit heavy for a Pi.

ok, thanks for the info. I haven't successfully run VRS yet… I'm on an iMac and have installed mono, and I can also run a Windows virtual machine via Parallels… not sure I'd want to run it 24/7, but once I do try it and see how it works, perhaps I can find an older PC kicking around.

As for the aggregation thing, I'm running lighttpd on port 80… so I assume from your note that hitting port 8080 is using the internal web server and not lighttpd.

I found this through a google search:


1. Set your port for production use, either by leaving server.port commented out or by setting:
server.port = 80
AND
set your standard document root:
server.document-root = "/path/to/production/version"

2. Add these lines to your config:
$SERVER["socket"] == ":81" {
server.document-root = "/path/to/testing/version"
}

3. Restart lighttpd.

4. Connect to http://hostname:81 or http://[your IP]:81 (http://192.168.1.1:81)

Ok, so I've just set up the above and have the second instance saving to a different .json folder… so if this is going to work, in step #2 above, where must :81 point to? Or am I missing the point? :slight_smile:

If you want to run two webmaps from one lighttpd, it's entirely possible, but the packaging doesn't do it for you, so you will need to do a bit of manual config.

Take a look at /etc/lighttpd/conf-available/89-dump1090.conf
You will want to take a copy of this (call it 88-aggregator.conf or something) and:

  • update the URL paths to something different (say, /aggregator/ rather than /dump1090/)
  • change the /data/ alias to point at the json data dir for the aggregator instance

You can keep the DB and html paths pointed at the same place, sharing the main install, since those files will be the same for both copies.

You should end up with something like this:



url.redirect += (
  "^/aggregator/$" => "/aggregator/gmap.html",
  "^/aggregator$" => "/aggregator/gmap.html"
)

alias.url += (
  "/aggregator/data/" => "/run/dump1090-mutability-aggregator/",
  "/aggregator/db/" => "/var/cache/dump1090-mutability/db/",
  "/aggregator/" => "/usr/share/dump1090-mutability/html/"
)


Then tell lighttpd to use your new config:



$ sudo lighty-enable-mod aggregator   # this should match the filename in conf-available
$ sudo service lighttpd restart


Then look at pihostname/aggregator/

Brilliant!!! It's working!

I had to comment out the last line in the .conf file since it was returning a "duplicate" error when I was restarting lighttpd.

I've put the second instance and the socat commands in rc.local.
The only thing is that after a reboot the aggregator json folder disappears; things kick in as soon as I manually log in and re-create it.

Should I just add a mkdir command to rc.local?

Thank you again for all your help, there is absolutely no way I could have done this on my own!

D

Yeah. If you're putting it in /run, that's on tmpfs, which goes away on reboot. The init.d script creates the data dir for the main instance; if you're starting dump1090 manually then you'll need to do the same.
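
Something like this in rc.local should cover it (a sketch only; the path and ports follow the earlier examples in this thread, so adjust them to your setup, and chown the directory if the aggregator instance runs as a non-root user):

# /run is tmpfs, so re-create the aggregator's json dir on every boot
mkdir -p /run/dump1090-mutability-aggregator

# then start the aggregator instance and the feed
dump1090-mutability --net-only --net-bi-port 30006 --net-bo-port 30007 --write-json /run/dump1090-mutability-aggregator &
socat -u TCP:remotehostIP:30007 TCP:127.0.0.1:30006 &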

Things have been working nicely for more than a day now. I should hopefully be able to deploy the remote receiver next week.

I'm trying to find a way to tag and/or show the originating receiver for each plane in the table… I suspect it all just gets 'aggregated' into the json files without any discriminating info I could use, from what I see.