V5.0.5-airspy dump1090-fa with native AirSpy support now available

I’m a bit confused on the install side. Not all of the files match aarch64. Could someone show the correct commands for the correct files, please?
I did manually compile it first.

No worries. It’s a hobby project and we are happy for what we get and when we get it.

Or just some extended learning regarding the airspy products :wink:

But I know what you mean. I’m going to try to keep things on topic in this thread from now on.

Did another take on the numbers comparing airspy_adsb against dump1090. I used the ADS-B stats reported by FA and omitted the MLAT stats, and then my numbers look like this:

Comparing only the ADS-B stats, the difference is smaller than in my previous results.
I was running an old version of the script @gtj0 is now putting together, with an older version of dump1090 (5.0.1), because the scripts need a demod.json file to work (and I still had to make some changes, since the scripts relied on an even older version of dump1090).

But the method of running all possible settings on a dump file seems to produce good results. I think it’s important to choose the right time of day for generating the dump files. Probably best to do it when your station has its most common traffic volume (yes, a sketchy definition).
I noticed that dumps made at different times varied in which settings produced the best result.

I’ve switched back to airspy_adsb after a week of running dump1090. I’ll test again when there are some performance updates to try. For now though, airspy_adsb is getting better results.

Red is airspy_adsb for the 7 complete days before switching and blue is the dump1090 decoder for the 7 complete days after.

The number of messages received per aircraft seems to be a good indicator. It’s directly comparable for any given number of visible aircraft, so peaks in traffic shouldn’t affect it too much.


Are you just dividing the total number of messages by unique aircraft tracks?

It’s not tracks, it’s number of visible aircraft. The data I’m using is taken from the collectd data used by graphs1090.

The left scatter plot is number of visible aircraft against the corresponding message rate. The data points are at 1 minute intervals. The scatter plot on the right is the message rate divided by number of visible aircraft, plotted against the number of visible aircraft.

Both plots show the same data, just presented differently. Note that the right plot shows an average message rate per aircraft derived from the aggregate total message rate, rather than an average of each individual aircraft’s actual message rate. I’m not collecting data at that granularity at the moment.

The script I used is a minor modification to this one which produces an aircraft/range plot instead of the messages per aircraft one:

The data used is taken from the CSV files produced once per day by graphs1090 when the enable_scatter option is enabled (it’s disabled by default). The normal collection interval is 3 minutes, so each data point would effectively be a 3-minute average, but I changed it locally to use 1 minute because the files don’t take up much space.
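The per-aircraft figure in the right-hand plot can be sketched like this, with invented sample values standing in for the collectd data:

```python
# Hypothetical 1-minute samples of (visible aircraft, total message rate in
# msgs/s), illustrating the kind of data graphs1090/collectd records.
# The values here are invented for the example.
samples = [(120, 900.0), (150, 1200.0), (80, 560.0)]

# The right-hand plot divides the aggregate message rate by the aircraft
# count for each sample; this is NOT the mean of each aircraft's own rate.
per_aircraft = [(n, rate / n) for n, rate in samples]

for n, r in per_aircraft:
    print(f"{n} aircraft -> {r:.2f} msgs/s per aircraft")
```

Dividing the aggregate rate keeps the metric comparable across traffic peaks, which is why it works as an indicator.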


This version will be needed to use the upcoming stats package.

I love stats. I hate creating them. :slight_smile:

The “sample-analyzer.zip” package is attached to the v5.0.5-airspy release.

There’s a readme in the zip file.
IT’S NOT FOR THE FAINT OF HEART!!!

Downloaded it, and it’s now running the first file (1200 tests). Four files of 10 s each generated with your script would, with my options, generate 12000 tests :grimacing: so I opted for using only one file on the first run…

There is a bug in the sample-analyzer script that prevented the scripts from running.
I haven’t been able to fix it (it’s early morning here, I’ve not had my first cup of coffee yet, and it’s hot and damp), but I made a temporary fix by commenting out [MARK_LIMITS]=0,1 in the config file, since that line caused sample-analyzer to bug out on line 185 with a “command not found” error.

The script started, but I don’t know if this will cause problems later in the run, since the parameters are probably used and not there just for fun…

But now some coffee, then I may go on a bug hunt…

I also noted that the “display-stats” script uses markdown mode when formatting the output.
This mode is not available in the version of sqlite3 shipped in “stable” for Raspberry Pi OS; it’s only available in the sqlite3 version in “testing”. I noticed the same with the old script and had to modify it a little.

I’ll do the same with this one and post the modification here in case anyone else runs into the same problem.

Think I’m going to sit this out for a few more days and see what comes up :slight_smile: … Plus the aerial man is coming on Monday/Tuesday to mount my aerial on the chimney, so it will be good to have some stable stats for the days leading up to it to compare against.

Fixes for the MARK_LIMITS and sqlite3 issues coming up shortly.

Great!

Stumbled upon another problem:
load-database bugs out with the following message:

pi@airspy:~/testning/scripts/analyzer/sample-analyzer/results $ ../load-database
Using existing demod.json
Creating demod_brief.json with 1200 rows
jq - commandline JSON processor [version 1.5-1-a5b5cbe]
Usage: jq [options] <jq filter> [file...]

        jq is a tool for processing JSON inputs, applying the
        given filter to its JSON text inputs and producing the
        filter's results as JSON on standard output.
        The simplest filter is ., which is the identity filter,
        copying jq's input to its output unmodified (except for
        formatting).
        For more advanced filters see the jq(1) manpage ("man jq")
        and/or https://stedolan.github.io/jq

        Some of the options include:
         -c             compact instead of pretty-printed output;
         -n             use `null` as the single input value;
         -e             set the exit status code based on the output;
         -s             read (slurp) all inputs into an array; apply filter to it;
         -r             output raw strings, not JSON texts;
         -R             read raw strings, not JSON texts;
         -C             colorize JSON;
         -M             monochrome (don't colorize JSON);
         -S             sort keys of objects on output;
         --tab  use tabs for indentation;
         --arg a v      set variable $a to value <v>;
         --argjson a v  set variable $a to JSON value <v>;
         --slurpfile a f        set variable $a to an array of JSON texts read from <f>;
        See the manpage for more options.
jq: error: rint/0 is not defined at <top-level>, line 8:
msgs_per_track: (.total.messages / .total.tracks.all | rint) }
jq: 1 compile error

I’ve tried to locate it but haven’t been able to find the problem. I suspect it’s a similar problem as with sqlite3, stable vs testing. What version of jq do you use?
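For reference, jq only gained rint/0 in version 1.6; on jq 1.5 a common drop-in substitute for rounding a non-negative value is the filter (. + 0.5 | floor). The arithmetic it performs, sketched in Python:

```python
import math

def rint_substitute(x: float) -> int:
    # Equivalent of the jq-1.5 filter (. + 0.5 | floor) for non-negative x;
    # rounds half-up rather than to-even, which is fine for message counts.
    return math.floor(x + 0.5)

# e.g. a messages-per-track value like the one in the failing filter
# (the numbers here are invented for illustration)
print(rint_substitute(123456 / 6883))
```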

1.6 on my desktop. I guess I should have tried it on the Pi before publishing. :slight_smile:

Just the version that “testing” is using :slight_smile:
Well, on my Pis I’m running stable, since they do “important” tasks. On my other systems it’s a “nice” mix of testing and unstable.
I guess most users here are running stable as well.

No harm done, I’ve refreshed my “understanding and trying to debug someone else’s scripts skills” :grin:

HA! :slight_smile:
Anyway, fixed scripts… sample-analyzer.zip


And it’s working as expected. Now it’s time to run the rest of the files…


I’m curious about these lines from dump1090 stats function:

Jul 17 10:03:53 airspy dump1090-fa[29126]:       6883 unique aircraft tracks
Jul 17 10:03:53 airspy dump1090-fa[29126]:       6315 aircraft tracks where only one message was seen
Jul 17 10:03:53 airspy dump1090-fa[29126]:       6350 aircraft tracks which were not marked reliable

It’s a huge percentage (90+%) of the total, and it also shows up on the graphs. But it doesn’t seem to propagate through to the FA stats, if I understand them properly. The station does report a lot of “Other” in the position report, but that amounts to approx 20% of the reported ADS-B positions and a little over 60% of the reported MLAT positions.
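For reference, the percentages follow directly from the counts in the log above:

```python
# Counts taken from the dump1090-fa stats log lines above
total_tracks = 6883
single_message = 6315
not_reliable = 6350

print(f"single-message: {single_message / total_tracks:.1%}")
print(f"not reliable:   {not_reliable / total_tracks:.1%}")
```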

I’ve noticed that using “sample-format sc16” produces significantly fewer of the single-message and unreliable tracks, but the settings that perform best (according to the dry runs) all give higher numbers.
Is it related to the implementation of the various sample formats in the demodulator, or is the IQ format just better in this regard?

EDIT:
What format does airspy_adsb use? I would guess an IQ format, but one can never be sure…

Those stats are concerning. Under what conditions did you get them?

Real vs IQ:

With the u16o12 format, we’re simply taking the native 12-bit sample and shifting it left 4 bits to create a 16-bit value which is fed to the demodulator. While this is extraordinarily simple and fast, the drawback is that the lower 4 bits are all zeros, so the possible values are still limited to 12-bit precision. I.e. they go from 0 to 65520 in steps of 16: 0, 16, 32, 48 … 65504, 65520.
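A minimal sketch of that conversion, assuming plain left-shifting as described (not the actual dump1090-fa code):

```python
def u16o12_to_u16(sample12: int) -> int:
    # Shift the native 12-bit ADC sample (0..4095) left 4 bits to make a
    # 16-bit value; the low 4 bits stay zero, so results step by 16.
    assert 0 <= sample12 <= 0xFFF
    return sample12 << 4

print(u16o12_to_u16(0))      # 0
print(u16o12_to_u16(1))      # 16
print(u16o12_to_u16(4095))   # 65520
```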

When you use the sc16 format, say at a 12 MHz sample rate, the receiver is actually instructed to sample at 24 MHz; the software then takes two 12-bit samples at a time and creates one IQ-format output sample at an effective rate of 12 MHz. That’s done in libairspy. dump1090-fa then converts the IQ sample back to a single 16-bit unsigned real value and feeds it to the demodulator. This process results in smoother 16-bit values that aren’t limited to the coarse steps of the u16o12 path.

No way to tell exactly what’s going on there.

Which you can only do with R2, not with the mini.

You need to shift that?
Hmm, I don’t know what values the decoder puts out.

I think it’s not a secret; it was mentioned in a previous thread about sample collection, when prog was tweaking the decoder.
airspy_adsb uses U16_REAL with a 12, 20 or 24 MHz sample rate.

timeout 20 airspy_rx -r /run/sample.bin -t 4 -a 12000000 -f 1090 -g 16

That’s a 20 second sample at a 12 MHz sample rate, saved as binary.
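As a sanity check on such a capture, assuming packed 2-byte U16_REAL samples at the requested rate, the expected file size works out as:

```python
sample_rate = 12_000_000   # samples per second (-a 12000000)
bytes_per_sample = 2       # packed 16-bit real samples (assumption)
seconds = 20               # timeout 20

size_bytes = sample_rate * bytes_per_sample * seconds
print(f"{size_bytes / 1e6:.0f} MB")   # 480 MB
```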

I suppose I could add an option to airspy_adsb to process such a file, if prog isn’t opposed.
(There is an internal option, but it’s not exposed via the command line; rather it’s commented out, if I’m not mistaken.)
This would allow running a sample through your decoder and airspy_adsb, then comparing CPU used and DF17 messages received.