What are people using to download data from their RPi?

It’s been a while since I’ve done anything with the data from my feeder. Before I set it up again, I wonder what other people are doing. I think I previously just copied the data files, but maybe people are feeding a local MySQL database or using some other solution. If you’re doing something like that, can you explain (or point me to a thread where it was discussed)?

Thanks.

It depends on what you’re trying to achieve.
There is a script available on GitHub which creates a daily Excel sheet. Or you can use the newer version, which writes into a MySQL database.

Both can be found here:

The RPi has a limited amount of storage, so the idea is to periodically offload the data to a computer with mass storage. I used to do this by running a script at specific intervals to pull the files created by dump1090. Then I could answer questions like the one that just came up in the neighborhood forum: what was the helicopter circling over the south end of the neighborhood last night? Since I haven’t been pulling the data for some time, I can’t answer that question, and it’s been too long to simply check the current data files on the RPi.
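For anyone wanting to recreate that kind of periodic pull, here’s a minimal sketch of the idea in Python, meant to run on the desktop side from a scheduler (cron, Task Scheduler, etc.). The address, destination folder, and remote path are assumptions to adjust for your own network (the dump1090-fa data directory is worked out further down this thread), and it assumes passwordless SSH keys are already set up for the pi user.

# pull_dump1090.py - rough sketch: copy the dump1090-fa JSON files off
# the Pi into a timestamped folder on a machine with real storage.
# PI_HOST, REMOTE, and DEST_ROOT are placeholder assumptions.
import datetime
import pathlib
import subprocess

PI_HOST = "pi@192.168.1.50"          # your Pi's address
REMOTE = "/run/dump1090-fa/*.json"   # dump1090-fa data files
DEST_ROOT = pathlib.Path.home() / "dump1090_pulls"

stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M")
dest = DEST_ROOT / stamp
dest.mkdir(parents=True, exist_ok=True)

# Relies on passwordless SSH keys; scp will prompt for a password otherwise.
subprocess.run(["scp", f"{PI_HOST}:{REMOTE}", str(dest)], check=True)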

Of course, things like heat maps and animations of flights would be of interest. The latter would require more data to be stored, though.

For other Raspberry Pi applications, I have just put a large USB thumb drive into a spare USB port and directed the output of the application to the thumb. For years I used an SDR and rtl_fm to capture FM broadcast programs off the air using a shell script. I had a 128 GB thumb and piped the audio through the LAME encoder to save as MP3. It would take months to fill the thing up before I had to either download and purge, or swap the USB thumb for a fresh one.

At about what rate can I expect dump1090 files to accumulate - e.g., megabytes per thousand position reports heard? How long before old files are overwritten “automagically”?
This would help with scheduling how often to offload them.

Do you have the script, and would you be willing to share the code? I have wanted to automatically pull files from various RPi units while everything is unattended, rather than manually running FileZilla or another FTP tool to grab them when I remember to.

Does the script copy files over the network, such as to a NAS or a machine with an SFTP server running, or does it just copy to removable media such as a USB thumb? Is it a crontab-triggered script or something that runs all the time?

@ExCalbr

If it helps, the original discussion thread is here:

If I want to move files between systems, no matter the OS, I use FileZilla on Windows. It works both ways - Windows to RPi and RPi to Windows.

After several tries, I finally remembered my password. I don’t have a copy of the script I used anymore. It was just a shell script that I think used scp to copy the JSON output files. Logging on to the RPi just now, I don’t see such files. Where is the data being stored in PiAware 4?

I use FileZilla on all of my systems to clone / move files too. Robust, simple, no nonsense system.
As ExCalbr also asked, is there a document describing where all the log data is stored in PiAware 4?

I also need to delve deeper into the innards of PiAware 4, as someone gave me a pre-built .iso file ready to put my own credentials in. But I really want to build a PiAware on the most recent PiOS Buster this spring, as the experience will likely reveal a lot of other secrets as to what can be done and expanded with it.

One goal is to export the raw positional data logs, then write a Python program to parse and export them as a set of daily .csv files, which can then be passed along to other programs to produce assorted GIS data to be sliced, diced, and analyzed in ways limited only by one’s imagination and coding skills.
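As a rough illustration of that parse-and-split step, here’s a minimal sketch, assuming a folder of saved aircraft.json snapshots; the paths and the handful of columns are placeholders, not a finished design.

# json_to_daily_csv.py - rough sketch: walk a folder of saved
# aircraft.json snapshots and split position records into daily CSVs.
# IN_DIR, OUT_DIR, and the column subset are placeholder assumptions.
import csv
import datetime
import json
import pathlib

IN_DIR = pathlib.Path("dump1090_json")   # saved aircraft.json copies
OUT_DIR = pathlib.Path("daily_csv")
OUT_DIR.mkdir(exist_ok=True)

writers = {}   # one csv.writer per calendar day
handles = {}

for path in sorted(IN_DIR.rglob("*.json")):
	obj = json.loads(path.read_text())
	if "aircraft" not in obj:   # skip receiver.json, stats.json, etc.
		continue
	day = datetime.datetime.fromtimestamp(obj["now"]).date().isoformat()
	if day not in writers:
		handles[day] = open(OUT_DIR / (day + ".csv"), "w", newline="")
		writers[day] = csv.writer(handles[day])
		writers[day].writerow(["timestamp", "hex", "flight", "lat", "lon", "alt_baro"])
	for ac in obj["aircraft"]:
		if "lat" in ac:   # keep only records that carry a position
			writers[day].writerow([obj["now"], ac.get("hex", ""),
				ac.get("flight", "").strip(),
				ac["lat"], ac["lon"], ac.get("alt_baro", "")])

for f in handles.values():
	f.close()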

Another goal for this year is to add a VPN tunnel to a PiAware installation to better support secure remote downloads of data and configuration / administration where physical site access may not be convenient or speedy. This will make it easier to set up feeders at remote sites, such as co-located on an existing commercial radio tower or the roof of a high-rise office building.

It appears that there’s currently only one document, home/pi/documents, that contains a snapshot of the data. It used to be the case that this was backed up so you could get older versions of the file, but I guess that must have been ditched.

Edit: scratch that - the mod date on that file is in 2016.

OK. The folder I was looking for: /var/run/dump1090-fa
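For a quick sanity check that those files are live, a couple of lines of Python on the Pi will do (a minimal sketch; /var/run is just a symlink to /run on current Raspberry Pi OS):

# Confirm aircraft.json is being updated and count current aircraft
import json

with open("/run/dump1090-fa/aircraft.json") as f:
	data = json.load(f)
print("timestamp:", data["now"], "- aircraft:", len(data["aircraft"]))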

Awesome. I know what I’m going to be doing with that Pi this weekend.

Here’s a Python script that retrieves all the fields in the JSON file and outputs to tab-delimited files. Comment out the uninteresting columns.

# Dump1090.py
# Python program to retrieve JSON records from Dump1090 and write them to a file
# Mark Moerman - March 2021
#
# Requires Dictor from https://pypi.org/project/dictor/
#
import json
import urllib.request
import time
import datetime
import csv
import math
from dictor import dictor
from math import radians, sin, cos, sqrt, asin
#
# Download URL for raw json object - change to your local network address
url = "http://192.168.12.132/dump1090-fa/data/aircraft.json"
#
def haversine(lat1, lon1, lat2, lon2):
	R = 6372.8  # Earth radius in kilometers
 
	dLat = radians(lat2 - lat1)
	dLon = radians(lon2 - lon1)
	lat1 = radians(lat1)
	lat2 = radians(lat2)
 
	a = sin(dLat / 2)**2 + cos(lat1) * cos(lat2) * sin(dLon / 2)**2
	c = 2 * asin(sqrt(a))
 
	return R * c
#
def truncate(number, decimals=0):
	"""
	Returns a value truncated to a specific number of decimal places.
	"""
	if not isinstance(decimals, int):
		raise TypeError("decimal places must be an integer.")
	elif decimals < 0:
		raise ValueError("decimal places has to be 0 or more.")
	elif decimals == 0:
		return math.trunc(number)
#
	factor = 10.0 ** decimals
	return math.trunc(number * factor) / factor
#
aircraftCount = 10000000  #  A large number to force initial file write
dump1090log = None  # Handle for the current output file; closed on each rotation
homeLat = 42.5048833  # Used for calculating distance from your home to the aircraft
homeLon = -96.4609833
#
while True:
	data = urllib.request.urlopen(url).read().decode()
	obj = json.loads(data)
	obj_timestamp = obj['now']
	for item in obj['aircraft']:
		# Change this number of seconds since the aircraft was last seen to reduce stale records
		# Aircraft can stay in the JSON file for up to 300 seconds after the last time it was seen
		# It should be around one second more than the sleep time between successive reads of the JSON file
		if item['seen'] < 2.1:
			if aircraftCount > 128000:  # Change this number to the approximate number of lines you would like in a file
				aircraftCount = 0
				if dump1090log:  # Close the previous output file before starting a new one
					dump1090log.close()
				# Create an output file and write a heading line
				outputFile = 'R:/Dump1090/Logs/All-' + str(int(obj_timestamp)) + '.txt'  # Yes, I'm a Windows person
				dump1090log = open(outputFile, mode='w', newline='')
				dump1090log_writer = csv.writer(dump1090log, delimiter='\t')
				dump1090log_writer.writerow([\
					'timestamp',\
					'iso_time',\
					'hex',\
					'flight',\
					'alt_baro',\
					'alt_geom',\
					'gs',\
					'ias',\
					'tas',\
					'mach',\
					'track',\
					'track_rate',\
					'roll',\
					'mag_heading',\
					'true_heading',\
					'baro_rate',\
					'geom_rate',\
					'squawk',\
					'emergency',\
					'category',\
					'nav_qnh',\
					'nav_altitude_mcp',\
					'nav_altitude_fms',\
					'nav_heading',\
					'nav_modes',\
					'lat',\
					'lon',\
					'nic',\
					'rc',\
					'seen_pos',\
					'version',\
					'nic_baro',\
					'nac_p',\
					'nac_v',\
					'sil',\
					'sil_type',\
					'gva',\
					'sda',\
					'mlat',\
					'mlatProvided',\
					'tisb',\
					'tisbProvided',\
					'messages',\
					'seen',\
					'rssi',\
					'distance'])
#
			lastSeenPos = float(dictor(item, 'seen_pos', default='-1'))
			# Evaluate the number of seconds since the aircraft provided a new position
			# Because seen_pos from the JSON file is not a required field, the default for lastSeenPos is set to -1
			# Excluding the -1 records keeps positionless messages out of the log
			# Setting the left side of the evaluation to (lastSeenPos >= -2) will include these messages
			# Setting the right side of the evaluation to around one second more than the sleep time reduces the number of new messages without position updates
			if (lastSeenPos >= 0) and (lastSeenPos < 2.1):
				# Added shorthand fields to indicate whether MLAT or TISB data was provided
				# (.get() avoids a KeyError if the field is absent from a record)
				if not item.get('mlat'):
					mlatProvided = 'F'
				else:
					mlatProvided = 'T'
				if not item.get('tisb'):
					tisbProvided = 'F'
				else:
					tisbProvided = 'T'
				# Added field that provides human readable timestamps from the UNIX epoch
				ISOtime = datetime.datetime.fromtimestamp(obj['now']).isoformat("T", "milliseconds")
				#
				aircraftLat = float(dictor(item, 'lat', default='-1'))
				aircraftLon = float(dictor(item, 'lon', default='-1'))
				# Added field to calculate the aircraft distance from your home lat and lon
				if aircraftLat != -1:
					aircraftDistance = truncate((float(haversine(homeLat, homeLon,aircraftLat, aircraftLon)) / 1.609344), 1)
				else:
					aircraftDistance = -1
				#
				altitudeBaro = dictor(item, 'alt_baro', default='')
				if altitudeBaro == 'ground':
					altitudeBaro = '0'
				#
				dump1090log_writer.writerow([\
					obj_timestamp,\
					ISOtime,\
					dictor(item, 'hex', default='FFFFFF'),\
					dictor(item, 'flight', default=''),\
					altitudeBaro,\
					dictor(item, 'alt_geom', default=''),\
					dictor(item, 'gs', default=''),\
					dictor(item, 'ias', default=''),\
					dictor(item, 'tas', default=''),\
					dictor(item, 'mach', default=''),\
					dictor(item, 'track', default=''),\
					dictor(item, 'track_rate', default=''),\
					dictor(item, 'roll', default=''),\
					dictor(item, 'mag_heading', default=''),\
					dictor(item, 'true_heading', default=''),\
					dictor(item, 'baro_rate', default=''),\
					dictor(item, 'geom_rate', default=''),\
					dictor(item, 'squawk', default=''),\
					dictor(item, 'emergency', default=''),\
					dictor(item, 'category', default=''),\
					dictor(item, 'nav_qnh', default=''),\
					dictor(item, 'nav_altitude_mcp', default=''),\
					dictor(item, 'nav_altitude_fms', default=''),\
					dictor(item, 'nav_heading', default=''),\
					dictor(item, 'nav_modes', default=''),\
					dictor(item, 'lat', default=''),\
					dictor(item, 'lon', default=''),\
					dictor(item, 'nic', default=''),\
					dictor(item, 'rc', default=''),\
					dictor(item, 'seen_pos', default='-1'),\
					dictor(item, 'version', default=''),\
					dictor(item, 'nic_baro', default=''),\
					dictor(item, 'nac_p', default=''),\
					dictor(item, 'nac_v', default=''),\
					dictor(item, 'sil', default=''),\
					dictor(item, 'sil_type', default=''),\
					dictor(item, 'gva', default=''),\
					dictor(item, 'sda', default=''),\
					dictor(item, 'mlat', default='NONE'),\
					mlatProvided,\
					dictor(item, 'tisb', default='NONE'),\
					tisbProvided,\
					dictor(item, 'messages', default=''),\
					dictor(item, 'seen', default=''),\
					dictor(item, 'rssi', default=''),\
					aircraftDistance])
			aircraftCount = aircraftCount + 1
	time.sleep(1.1)  # Change the number of seconds until the next retrieval of the JSON file

Saved it. Thanks so much for posting that Python script.
It looks like it’s easily tweaked for specific locations, and easily migrated to Debian, Ubuntu, etc.

It’s far better than the crontab and shell script I put together to grab the directory’s .json history files every hour.

# archive_1090_json.sh
# (posted from memory without the USB thumb, so the hpath and spath
#  definitions below are reconstructed from these notes)
# $hpath is ~/dump1090_json/YYYY/mm/dd/hh
# $spath is the dump1090-fa data directory
hpath=~/dump1090_json/$(date +"%Y/%m/%d/%H")
spath=/run/dump1090-fa
mkdir -p $hpath
cp -upr $spath/* $hpath

Then, inserted via crontab -e, is this line:
0 * * * * ~/archive_1090_json.sh
# triggers at top of every hour

Here’s a link to the contents of the JSON file.

https://github.com/flightaware/dump1090/blob/master/README-json.md

Thanks. Saved it. I saw this, which matched my lucky guess Sunday night:
“These files are historical copies of aircraft.json at (by default) 30 second intervals. They follow exactly the same format as aircraft.json. To know how many are valid, see receiver.json (“history” value). They are written in a cycle, with history_0 being overwritten after history_119 is generated, so history_0.json is not necessarily the oldest history entry.”
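Since the numbering wraps around, a sketch like this sorts the history snapshots into true chronological order by the “now” timestamp inside each file before processing them (path as discussed above):

# Sort the dump1090-fa history snapshots oldest-first; the history_N
# numbering is a circular buffer, so N does not indicate age.
import glob
import json

snapshots = []
for name in glob.glob("/run/dump1090-fa/history_*.json"):
	with open(name) as f:
		obj = json.load(f)
	snapshots.append((obj["now"], name))

for ts, name in sorted(snapshots):
	print(ts, name)   # process each snapshot here, oldest first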

Here is the improved shell script I have running on my Pi since last night:

#!/bin/bash  
# file zip_dump1090.sh    
# place in /home/pi directory    
# chmod 755 zip_dump1090.sh    
# crontab -e    
# insert line    
# 0 * * * * ~/zip_dump1090.sh    
# will trigger this script to run exactly at the top of every hour    
    
tarname=$(date +"%Y"_"%m"_"%d"_"%H".tar.gz)    
    
tpath=1090_logs/$(date +"%Y"/"%m"/"%d")    
# daily directory    
lpath=1090_logs/$(date +"%Y"/"%m"/"%d"/"%H")    
# hourly directory for uncompressed .json snapshots    
    
mkdir -p $lpath    
    
cp -upr /run/dump1090-fa/* $lpath    
# Caution - this will fill up a Pi micro sd quickly! Back up and purge frequently.    
sleep 3    
touch $tpath/$tarname    
# Caution - .tar.gz file must exist or tar command breaks even with create flag.    
tar cfz $tpath/$tarname $lpath/*    
    
#tested 2021 09 March after trial and error    

Next thing is to add the Python script and collect .csv output, possibly zip those to save space and transfer time.
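Zipping the finished CSVs is only a few lines with the standard library; a minimal sketch, with the filename as a placeholder:

# Compress a finished daily CSV to save space and transfer time
import gzip
import shutil

def gzip_file(path):
	with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
		shutil.copyfileobj(src, dst)

gzip_file("daily_csv/2021-03-09.csv")   # placeholder filename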
