FlightAware Discussions

Tar1090 on rpi with nginx

Since I am in lockdown I dusted off a Raspberry Pi that was, well, getting dusty. I installed nginx first, hoping never to see lighttpd. I loaded PiAware, though I can’t find where the HTML went. Lighttpd got installed anyway. (I don’t want to learn yet another web server.)

All that said, I ran the script to install tar1090. There is one line to add to the server block. Funny thing is, the default installation of nginx doesn’t have a server block in nginx.conf. I wrote the block myself and added the line with the “location” directives in it and it works, but I am wondering if I missed something.

server {
    listen 8080;
    server_name localhost;
    include /usr/local/share/tar1090/nginx-tar1090.conf;
}



Also, blockquotes don’t work well with code on this forum; use preformatted text, like so:
code here

lighttpd is a dependency of dump1090-fa; you can disable the service or configure it to use some other port.

By default, port 8080 will be used by lighttpd due to the dump1090-fa configuration file; check this folder:
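The lighttpd conflict described above can be handled from the command line. A minimal sketch, assuming a systemd-based Raspbian install and the config module name shipped by the dump1090-fa package:

```shell
# stop lighttpd and keep it from starting at boot, freeing its ports
sudo systemctl disable --now lighttpd

# or, to free only port 8080, disable dump1090-fa's lighttpd config
# module and restart lighttpd (module name assumed from the package)
sudo lighty-disable-mod dump1090-fa
sudo systemctl restart lighttpd
```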


I restored the files in /etc/nginx to their original contents. I put tar1090 on 8081. It works. You do have to explicitly request tar1090 or it gives you the default nginx page. Also, piaware is now on 8080. Dump1090-fa is using about 33% of a CPU.

Next up is to try your history logging.


It mentions the URL at the end of the install.

If you want it at /, see this:

What are you referring to?
The typical stuff only caches in memory for 24h.
Or do you mean graphs?


--write-json-globe-index --write-globe-history /var/globe_history

Ah i see.

Be careful with my readsb dev branch; I force-push to it with some regularity.
Also, I might change some things so that the current state is discarded after an upgrade and you lose the tracks back to the previous UTC zero hour.

Apart from that, it should be quite simple to set up :slight_smile:
Also that branch can handle 12000 planes in the air at the same time, it’s in active use for adsbexchange.
If you have any questions let me know.

What I have already considered is running an independent database and building metadata for all the traces for the previous day, once they’ve all been written at 0100Z.

Well, you could save me some time regarding where to place the --write-json etc. line. I know it is a dump1090-style option, but I need to see where to put it. It isn’t clear whether I have to change to your dev branch and update.

Ultimately I need to download the scripts that are used to install PiAware and such. I have no idea of the file structure.

I noticed there is a memory-resident database that can be accessed via SQL.

If I had my druthers I’d use SQLite; apparently it can handle JSON natively, and it runs like a bomb.

That has no relation whatsoever to my projects.

See this:

It says:

To accomplish this, you need to use the dev branch of my readsb repository.

I’ve added some more information to the readme regarding the configuration file.

But please consider the last part of that paragraph:

If you can’t figure out how to make it work with the above information, please don’t ask.
I don’t support this feature for the general user base.
This information is only for people who could figure it out from the source code anyhow,
so that they don’t have to spend as much time figuring it out.

Anyhow good luck and have fun.
The issue is, with any more support I might as well write an install script that configures everything, because that becomes easier than explaining everything.

I assume after changing to the readsb repo you would reinstall dump1090-fa?

I could just use a fresh SD card rather than ruin one that works. It doesn’t take that much work to reinstall everything from a fresh OS.

Just run readsb in --net-only mode as described here:

That way it shouldn’t interfere with the dump1090-fa you have running currently.
Please read the link above carefully again; I’ve changed it since you first saw it.

This might also be relevant when running dump1090-fa and readsb at the same time: https://github.com/wiedehopf/tar1090#multiple-instances

Most issues you could have can just be fixed by uninstalling readsb …
I’d also recommend trying to understand which programs do what … it helps with such stuff.

Quick question. On this page

The line “cd” with no argument doesn’t make sense to me. Is your intent here for the user to just change directory to wherever they want to build this code? For instance, /usr/local/src/readsb. I have never seen cd without an argument.

cd with no argument changes to $HOME
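A quick demonstration of that behaviour, runnable in any POSIX shell:

```shell
cd /tmp
pwd                          # prints /tmp
cd                           # no argument: changes to $HOME
[ "$PWD" = "$HOME" ] && echo "now in home"
```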

Ah, the home directory of the user. I had learned cd ~. This is one of those “Today I learned” deals.

20K views and no flame wars. :wink:

There seems to be an issue with pthreads. To be sure the libraries are set up properly, I did a build of the released readsb code. That went fine. Then I did the developer build. The pastebin below has every step of the process.

readsb developers build

The error lines follow (if you don’t want to read the pastebin):

cc -g -o viewadsb viewadsb.o anet.o interactive.o mode_ac.o mode_s.o comm_b.o net_io.o crc.o stats.o cpr.o icao_filter.o track.o util.o ais_charset.o globe_index.o geomag.o -Wl,-z,relro -Wl,-z,now -L -lpthread -lm -lz -lncurses
/usr/bin/ld: readsb.o: undefined reference to symbol 'pthread_create@@GLIBC_2.4'
/usr/bin/ld: //lib/arm-linux-gnueabihf/libpthread.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:63: readsb] Error 1
make[2]: *** Waiting for unfinished jobs....
/usr/bin/ld: net_io.o: in function `serviceConnect':
./net_io.c:328: undefined reference to `pthread_create'
/usr/bin/ld: ./net_io.c:339: undefined reference to `pthread_mutex_trylock'
/usr/bin/ld: ./net_io.c:345: undefined reference to `pthread_join'
collect2: error: ld returned 1 exit status
ldd --version

Seems you are using a very old glibc …

I’ve re-added some linker options that you might need with that bronze age system of yours.
Seriously … use something not quite so ancient.

I used the 2020-02-13 image and also did update/upgrade. This is as new as you can get while staying true to the distro. Now, I know on openSUSE you can add different repositories that basically break the distro version. I do this all the time because some software can’t be built otherwise. I found this link and will add any repos you think are useful:
That said the new build instructions work. Thanks.

I seem to have issues with the dump1090-fa service. After making code changes, the dongle is busy. I need to unplug it and plug it in again; rebooting doesn’t do the trick.

In any event, I have the decoder line added. I don’t have a /var/globe_history directory. Was I supposed to do a touch to create it? I am getting tracking, so I will let it run a while. It could be a need to fill a buffer. Everything appears normal.

systemctl status dump1090-fa
● dump1090-fa.service - dump1090 ADS-B receiver (FlightAware customization)
   Loaded: loaded (/lib/systemd/system/dump1090-fa.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-04-20 02:36:28 BST; 3min 30s ago
     Docs: https://flightaware.com/adsb/piaware/
 Main PID: 3272 (dump1090-fa)
    Tasks: 3 (limit: 2200)
   Memory: 4.2M
   CGroup: /system.slice/dump1090-fa.service
           └─3272 /usr/bin/dump1090-fa --device-index 0 --gain -10 --ppm 0 --max-range 360 --fix --net --net-heartbeat 60 --n

Apr 20 02:36:28 raspberrypi systemd[1]: Started dump1090 ADS-B receiver (FlightAware customization).
Apr 20 02:36:28 raspberrypi dump1090-fa[3272]: Mon Apr 20 02:36:28 2020 BST  dump1090-fa 3.8.1 starting up.
Apr 20 02:36:28 raspberrypi dump1090-fa[3272]: rtlsdr: using device #0: Generic RTL2832U (Realtek, RTL2832UFA, SN 00001000)
Apr 20 02:36:28 raspberrypi dump1090-fa[3272]: Detached kernel driver
Apr 20 02:36:29 raspberrypi dump1090-fa[3272]: Found Rafael Micro R820T tuner
Apr 20 02:36:29 raspberrypi dump1090-fa[3272]: rtlsdr: enabling tuner AGC
Apr 20 02:36:29 raspberrypi dump1090-fa[3272]: Allocating 4 zero-copy buffers

systemctl status tar1090
● tar1090.service - tar1090 - compress dump1090 json data
   Loaded: loaded (/lib/systemd/system/tar1090.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-04-20 02:37:19 BST; 3min 41s ago
 Main PID: 3389 (tar1090.sh)
    Tasks: 5 (limit: 2200)
   Memory: 2.0M
   CGroup: /system.slice/tar1090.service
           ├─3389 /bin/bash /usr/local/share/tar1090/tar1090.sh /run/tar1090 /run/dump1090-fa 8 450 60 no
           ├─3392 /bin/bash /usr/local/share/tar1090/tar1090.sh /run/tar1090 /run/dump1090-fa 8 450 60 no
           ├─3440 /bin/bash /usr/local/share/tar1090/tar1090.sh /run/tar1090 /run/dump1090-fa 8 450 60 no
           ├─3653 sleep 120
           └─3779 sleep 8

Apr 20 02:37:19 raspberrypi systemd[1]: Started tar1090 - compress dump1090 json data.

Which decoder line?
Why are you showing me dump1090-fa after talking about compiling readsb?

Why haven’t you just looked at the system log using journalctl?
It would tell you readsb can’t write the files.

You could also just look at the help:

# readsb --help
                             Extended Globe History
      --write-json=<dir>     Periodically write json output to <dir> (for
                             external webserver)
      --write-json-every=<t> Write json output every t seconds (default 1)
      --write-json-globe-index   Write specially indexed globe_xxxx.json files
                             (for tar1090)

Anyhow look at the system log for readsb and figure it out yourself.

Oh and the tar1090 status won’t help you.
The tar1090 you’re running is probably pointed at the dump1090-fa json directory.

Also the tar1090 service isn’t really needed in this mode, but it doesn’t hurt either.
I’ve added some more pointers to the readme …

Here is what I did to make a usable system. You can deem if I did it in a kosher manner. The narrative will be useful if someone else wants to try it.

I edited the file; the tmpfs is now /run/readsb, hence:

# nginx configuration for tar1090
location /tar1090/data/ {
  # alias /run/dump1090-fa/;
  alias /run/readsb/;
}
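After an edit like that, the configuration can be validated and applied without a full restart; a quick sketch, assuming nginx under systemd:

```shell
sudo nginx -t                  # validate the edited configuration
sudo systemctl reload nginx    # apply it without dropping connections
```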

The file /etc/default/readsb has the decoder line modified:

#next line is the original decoder options
#DECODER_OPTIONS="--max-range 360"
DECODER_OPTIONS="--write-json-globe-index --write-globe-history=/var/globe_history --write-json=/var/readsb_json --max-range 360" 
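For the new options in /etc/default/readsb to take effect, the service needs a restart, and the system log is where write errors would show up; assuming the packaged systemd unit:

```shell
sudo systemctl restart readsb
# then watch the log, e.g. for permission errors on /var/globe_history
sudo journalctl -u readsb -e
```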

Nothing shows up in /var/readsb_json, though I only added that to see what files would be placed there. It isn’t needed, since /run/readsb has the data.

I’m just running readsb and nginx. There is no traditional database running, such as sqlite or mariadb; rather, the file structure itself forms the database. For example, /var/globe_history has a directory for each day, e.g. /var/globe_history/2020-04-21. That directory contains a directory of “traces”.

The traces directory contains entries spanning 00 to ff. These hex numbers refer to the last two digits of the ICAO code. If there is an entry in these directories, the associated aircraft can be viewed. For example, one of these directories
contains trace_full_a057eb.json and trace_full_a3a0eb.json.

Entering a3a0eb in the search box on the tar1090 web page displays the trail.

So I guess if I want a history of what was sniffed, I can scrape the directory meta information.
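Scraping that directory metadata can indeed be done in a few lines of shell. A sketch, assuming the /var/globe_history/&lt;date&gt;/traces/&lt;xx&gt;/trace_full_&lt;icao&gt;.json layout described above (list_icaos is a made-up helper name):

```shell
# list_icaos <day-dir>: print each ICAO hex id seen that day, once
list_icaos() {
    for f in "$1"/traces/*/trace_full_*.json; do
        [ -e "$f" ] || continue        # glob matched nothing
        b=${f##*/}                     # trace_full_a3a0eb.json
        b=${b#trace_full_}             # a3a0eb.json
        printf '%s\n' "${b%.json}"     # a3a0eb
    done | sort -u
}

list_icaos /var/globe_history/2020-04-21
```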

If you define the instances correctly (see multiple instances), then the tar1090 installer will give you an nginx config file pointing to /run/readsb.
You might still want to do that to have a classic tar1090 interface pointing to /run/dump1090-fa in addition to the current global style tar1090 interface.

The json-dir is specified on the command line in the systemd service file after the DECODER_OPTIONS; that’s why it has no effect.
Also not sure why you would do this.

Actually the trace for the last 24h is split in two parts, recent and full.
Those current traces are written to /run/readsb/traces.

If you then click on history on the left and go back to at least yesterday UTC, only then is /var/globe_history used.

It’s not a particularly pretty setup and a DB could work better.
Flat files offer the advantage that you don’t need PHP or nodejs to interface with a database, instead the webserver can just access the file.
Obviously due to many small files and such stuff, a DB might in the end perform better.
I might try and use a DB in the future.

Yeah, there is no overview of which aircraft were seen on any particular day.
But it shouldn’t be too hard to read the files every day and create an activity log.

Not using PHP is a feature. I am a believer in less is more. Keep the attack surface small. Your file-based database is a stick shift. Lots of people drive stick shifts! Plus I could figure it out just by poking around, and my computer skills, or lack thereof, are quite obvious.

Third-party modules for databases show up for nginx from time to time. Drizzle is still there, and I see now there is a module for Postgres. I have some websites where I could use a database; I have bought books on PHP/MySQL and just don’t think I could use it in a hack-proof manner. The SQL injection prevention looks like work. Postgres would be nice, and this would be PHP-free.

They don’t show any examples with user-generated input. Rather, you have a database and nginx does the query. At least that is my interpretation from a one-minute look.

What I think I will do is write some ugly bash script to create html links to the planes seen that day. Since you have the “?icao=” feature, that should be doable.

The idea is you write an ugly script using “cut”, “sort” and similar programs, then when it works you write a clean program. Of course I never get around to writing the clean program and just use the embarrassing ugly scripts that I never want anyone to see.
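As a starting point for that ugly script, something like this could emit the links; make_links is a made-up name, and the /tar1090/?icao= base path is an assumption, so adjust it to wherever your instance is served:

```shell
# make_links: read ICAO hex ids on stdin, emit one tar1090 link per id
make_links() {
    sort -u | while read -r icao; do
        printf '<a href="/tar1090/?icao=%s">%s</a><br>\n' "$icao" "$icao"
    done
}

# example usage with two ids from the traces directory above
printf 'a3a0eb\na057eb\n' | make_links
```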

So thanks again for your help. I’m going to go dark for a while. I have an antenna I designed in NEC2 to build and three more weeks of “lock down”.