how to log dump1090 output to file on pi?

I set up my ADS-B receiver using the PiAware image and it's working great. I enjoy watching the planes come and go in the Pi's dump1090 web view. I built it hoping I could log each flight with date/time, flight number, altitude, location, speed, ICAO code, etc.

How do I capture this data and log it to an ASCII file from the Pi?


Go to a console terminal and type:

sudo wget -O - -q http://myreceiver:30003 > log.txt

(use .csv, .kml, or whatever format you want). Press Ctrl+C to stop it, then type ls in the terminal and you'll see your file in the current directory. There are many options: make an executable .sh script, or automate the process with crontab. You can also upload the data to Google Earth. Enjoy.


Heh, heh. I’ve never thought of using *wget* to capture a local port stream. Makes sense though. I use netcat.

nc myreceiver 30003 > somefile

where myreceiver is the hostname or IP of your receiver, or localhost if running on the same rpi as dump1090.

Thanks, this is very helpful. I also discovered that I probably don’t need all those lines, and I could keep the file smaller. Piping through grep and only keeping the lines with positions, altitudes, and callsigns is probably all I need.

I assume that myreceiver is supposed to be the host name, but I only know my pi by its IP address. I also noticed I don’t seem to have write access to the /home/pi directory that PuTTY drops me into, plus I figure it’s probably better to append instead of overwriting. Once I figured those out, things went swimmingly. Here are the two commands I tried, based on both your suggestions:

nc localhost 30003 | egrep --line-buffered 'MSG,1,|MSG,3,|MSG,5,' >> /tmp/flight_tracks.csv

sudo wget -O - -q http://localhost:30003 | egrep --line-buffered 'MSG,1,|MSG,3,|MSG,5,' >> /tmp/log.txt

I don’t know which is better, nc or wget, so I ran top to see which used fewer resources:

 2036 root      20   0 16924 6916 1680 R  16.8  0.7 388:12.99 dump1090
 2393 root      20   0 11416 9644 5344 S   3.0  1.0 121:18.04 fa-mlat-client
 5373 root      20   0  5756 2744 2540 S   1.0  0.3   0:02.49 wget
 5372 pi        20   0  3560 1756 1664 S   0.3  0.2   0:00.95 egrep

 2036 root      20   0 16924 6916 1680 S  21.1  0.7 393:23.80 dump1090
 2393 root      20   0 11416 9644 5344 S   3.3  1.0 122:06.22 fa-mlat-client
 5414 pi        20   0  2360 1492 1384 S   1.0  0.2   0:03.83 nc
 5415 pi        20   0  3560 1828 1736 S   0.3  0.2   0:01.79 egrep

They are almost exactly the same, with a slight advantage to nc, though I notice that wget runs as root while nc runs at user level. I don’t know what difference that makes.

This is almost done. The process of logging to a file implies two other steps, at least to me anyhow :smiley: :
1.) I’d want the data to be logged to a file automatically, instead of keeping a window up on my workstation. You mentioned an executable .sh script or crontab, but could you possibly show me that as well?
2.) To prevent the file from getting too big over time, it should stop and start a new file, possibly every day. For instance, I notice that /tmp/piaware.out gets renamed to /tmp/piaware.out.yesterday around midnight. I’m not quite sure how this is done, but possibly something similar could be used for these files?
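For the crontab route, a minimal sketch is a single crontab entry that restarts the capture at boot (the localhost host, output path, and MSG filter here are assumptions; adjust to your setup):

```
# added via crontab -e: start capturing the port 30003 feed at boot,
# keeping only callsign (MSG,1) and position (MSG,3) lines
@reboot nc localhost 30003 | egrep --line-buffered 'MSG,1,|MSG,3,' >> /home/pi/flight_tracks.csv
```

A cron @reboot job survives power cycles, so no terminal window needs to stay open.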


“logrotate” is probably what you’re after. If you are just using shell append redirection, this works well with logrotate’s “copytruncate” option.

You can use Dropbox to auto-upload the log file once a day and, after a successful upload, automatically delete it from the rpi, so a big file wouldn’t be a problem for the rpi.

Just FYI, you don’t have to sudo the wget command either.

I ended up having problems using grep (and egrep). After ~10 hours they would stop with an error:

egrep write error

and though grep had no problem piping to stdout, I wasn’t able to redirect anywhere on disk again without rebooting the pi.

Using an unfiltered pipe produced a file of almost 300 MB after only 12 hours, so that would be an issue. A little more digging and I may have a substitute command for parsing the dump1090 output:

nc localhost 30003 | sed -n '/MSG,\(1,\|3,\|4,\)/p' >> flightdump.csv

sed uses slightly more resources than egrep, but if it doesn’t drop out it may be a keeper.

I also discovered that piping to the /tmp directory is a bad idea: reboots clear its contents, and according to the df command, even a moderate-size file puts its usage at 100%.

pi@piaware /tmp $ df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/root        7319248 2116800   4854816  31% /
devtmpfs          469756       0    469756   0% /dev
tmpfs              94812     236     94576   1% /run
tmpfs               5120       0      5120   0% /run/lock
tmpfs              47408       0     47408   0% /run/shm
/dev/mmcblk0p1     57288   20264     37024  36% /boot
tmpfs              47408   47408         0 100% /tmp

From reading about logrotate, it does look like it may do the trick; it worked suitably from the command line. I’m still just a little stuck on tying it into a script/cron.

The Dropbox idea of creating a mirrored directory is great as it’s very simple, and there is a seemingly easy tutorial for installing it, but my recollection of Dropbox mirrors is that if a file is deleted on the origin server, it is also deleted from all the mirrors.

I’m thinking some sort of wput may be able to place the logged files on a local network share, but I haven’t figured it out yet.

thanks :slight_smile:

/tmp is often configured to be in-memory only, i.e. it never actually goes to the sdcard (and is size-limited to avoid exhausting memory). The grep write errors are probably just that it ran out of space there.

logrotate is usually set up to run automatically - you can drop config file fragments into /etc/logrotate.d
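As a sketch, a fragment like the following dropped into /etc/logrotate.d/ would rotate the capture file daily (the file path and retention settings are assumptions):

```
/home/pi/flight_tracks.csv {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate matters here because nc holds the file open: logrotate copies the file and truncates it in place, instead of renaming it out from under the writer.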

I did the following command

nc localhost 30003 | egrep --line-buffered 'MSG,1,|MSG,3,|MSG,5,' >> flight_tracks.csv

and it worked with no issue. But when I try opening it as a KML file, I get errors. Did you ever try converting the file to KML and opening it in Google Earth? Thanks.

I found that in my area, 4 hours of capture yields over 500k lines of data. With piaware, the pipe produces 22 columns, of which only 8 are usable. Here’s a sample of 3 seconds:

msg	identifier	date	time	callsign	altitude	lat	long
3	A98C7A	2015/12/10	11:42:28.163		14675	33.73855	-118.13181
5	A4FC9A	2015/12/10	11:42:28.268		12725		
3	A98C7A	2015/12/10	11:42:28.593		14700	33.7392	-118.13066
3	A4FC9A	2015/12/10	11:42:28.597		12750	33.62598	-118.41785
5	A98C7A	2015/12/10	11:42:28.790		14700		
5	A98C7A	2015/12/10	11:42:29.047		14725		
5	A4FC9A	2015/12/10	11:42:29.077		12750		
3	A98C7A	2015/12/10	11:42:29.123		14725	33.73965	-118.12988
5	A4FC9A	2015/12/10	11:42:29.137		12775		
5	A4FC9A	2015/12/10	11:42:29.223	MAA6845 	12775		
5	A4FC9A	2015/12/10	11:42:29.224		12775		
5	A4FC9A	2015/12/10	11:42:29.783		12800		
5	A98C7A	2015/12/10	11:42:30.017		14750		
3	A4FC9A	2015/12/10	11:42:30.613		12825	33.62356	-118.41425
3	A98C7A	2015/12/10	11:42:31.013		14800	33.74194	-118.12601
3	A4FC9A	2015/12/10	11:42:31.042		12850	33.62305	-118.41351
3	A4FC9A	2015/12/10	11:42:31.553		12875	33.62247	-118.41261
5	A4FC9A	2015/12/10	11:42:31.786		12900		
3	A98C7A	2015/12/10	11:42:31.933		14850	33.74286	-118.1245

MSG 5 adds no useful information, so just using MSG 1 (callsign) and MSG 3 (position) would cut the number of lines in half. In addition, position updates sometimes arrive more than once per second, which seems like it could be thinned out.
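To cut both the line count and the column count at once, an awk pass can keep just the eight useful SBS fields (message type, ICAO hex, date, time, callsign, altitude, lat, lon) from the MSG,1 and MSG,3 lines. A sketch, run here against a single sample line rather than the live feed:

```shell
# trim a sample SBS (BaseStation) line down to the 8 useful columns;
# in practice the printf would be replaced by: nc localhost 30003
printf 'MSG,3,111,11111,A98C7A,111111,2015/12/10,11:42:28.163,2015/12/10,11:42:28.163,,14675,,,33.73855,-118.13181,,,,,,0\n' |
awk -F, '$1=="MSG" && ($2==1 || $2==3) {print $2","$5","$7","$8","$11","$12","$15","$16}'
# prints: 3,A98C7A,2015/12/10,11:42:28.163,,14675,33.73855,-118.13181
```

The field positions ($5 hex ident, $7/$8 generated date/time, $11 callsign, $12 altitude, $15/$16 lat/lon) follow the usual 22-column BaseStation layout; worth verifying against your own capture before relying on them.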

I mapped it into CartoDB, created a map, and exported from there to a KML. Loading the KML in Google Earth was an exercise in patience, as it took about 10 minutes before it would display.

I’m currently looking at a Python solution that samples the web output at /aircraft.json each second and filters by lat/long. This has required me to move to dump1090-mutability, but it yields complete information on one line in a smaller file. For whatever reason, though, while the script is stable enough, dump1090 has crashed on me multiple times.
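A minimal sketch of that sampling approach (the printf stands in for fetching the real file, e.g. with curl; the field names shown are the common aircraft.json ones, but verify them against your dump1090 version):

```shell
# parse one aircraft.json snapshot and print hex/flight/altitude/lat/lon
# for aircraft with a known position; replace the printf with e.g.
#   curl -s http://localhost/dump1090/data/aircraft.json
printf '{"now":1449772948,"aircraft":[{"hex":"a98c7a","flight":"UAL123 ","altitude":14675,"lat":33.73855,"lon":-118.13181},{"hex":"a4fc9a","altitude":12775}]}' |
python3 -c '
import json, sys
for a in json.load(sys.stdin)["aircraft"]:
    if "lat" in a:
        print(a["hex"], a.get("flight", "").strip(), a["altitude"], a["lat"], a["lon"])'
# prints: a98c7a UAL123 14675 33.73855 -118.13181
```

Wrapping that in a once-per-second loop (while sleep 1; do …; done) gives the sampling behavior described above.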

The capture solution is still a work in progress for me.

Volume of data is always the problem! Multiply this by 4000 receivers and you start to see some of the fun that happens on the FA side :slight_smile:

The port 30003 output gives you one line for each decodable message it sees; it’s not trying to summarize at all. The redundant messages just reflect what’s on the air: secondary radar queries for altitude a lot, which is where the altitude-only messages come from.

Sampling aircraft.json is probably a better way if you want briefly summarized data.

If you want heavily summarized data you could run an extra copy of faup1090, which is the summarizer that piaware uses (it lives in /usr/lib/piaware/helpers). It reads raw data from dump1090’s beast port (usually 30005) and writes one line per report to stdout in a tab-separated key-value format. The reports are typically one every 5–30 seconds per aircraft.

Resurrecting an old thread…

So now that one has all this saved data, is there an easy way to display it?

Basically, I’d like to collect the data received and be able to graphically display what’s flown over in the last day/week/month/epoch.

Stuffing it into a database allows searching by any of the disgorged fields.
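For example, a throwaway sqlite3 sketch (the file names and the 8-column layout are assumptions) shows the kind of searching a database buys you:

```shell
# build a tiny demo CSV (one filtered MSG,3 line) and query it by field
printf '3,A98C7A,2015/12/10,11:42:28.163,,14675,33.73855,-118.13181\n' > tracks_demo.csv
sqlite3 tracks_demo.db <<'EOF'
CREATE TABLE IF NOT EXISTS msgs(type,hex,date,time,callsign,alt,lat,lon);
.mode csv
.import tracks_demo.csv msgs
EOF
# ad-hoc query: highest altitude seen per ICAO hex code
sqlite3 tracks_demo.db "SELECT hex, MAX(alt) FROM msgs GROUP BY hex;"
# prints: A98C7A|14675
```

From there, day/week/month summaries are just WHERE clauses on the date column.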


Virtual Radar Server might do the things you need. It will keep a database with all the statistics. You can select periods, plane registrations, plane models, plane operators and more.