FlightAware Discussions

Airspy ADS-B decoder

It’s like mining bitcoins. At some point, the gain is irrelevant compared to the effort.
Your own tests agree with this observation. Even an i5 core can be brought to its knees with very marginal return.
BTW, I spend my days writing high-performance DSP code on different platforms for SDR. It’s not that I don’t know what threads are, so please don’t bring up that colloquial “but my CAD app uses threads”… This is still a hobby project I develop for the community for free, but it is done with actual engineering, not trial and error, or worse yet, brute force. What you are asking for is to change working code to add brute-force message mining. Not only will this add more overhead, it will also lower the overall quality and create more problems for future platforms that may rely on the decoder code, all for a barely noticeable difference in the statistics. No, thanks!


I know there are apps that can’t be done in MT. I also know that we don’t even pay for your beer, and you do this mostly alone.

I was just joking before, sorry, I appreciate everything you do.

But we are in an age when multiple cores are the solution to the CPU clock limits… whether we like it or not. So we are all used to thinking that’s the solution for everything. Especially after all the mining craze…

Actually, the decoding was multi-threaded and we decided to remove the MT for good reasons that are well beyond this discussion. It’s not that we can’t do it, but we should not do it.


Qualcomm realized there are apps that work better (or only) in a single thread.
Their flagship, the SD 855, features four Kryo 485 Silver high-efficiency cores operating at 1.8 GHz along with three high-performance Kryo 485 Gold operating at 2.42 GHz and another higher-performance Kryo 485 Gold core operating at 2.84 GHz.

Intel’s i5/i7 parts boost the frequency when only one core is in use. My i5 boosts from 2.5 to 3 GHz in that case. The equivalent (dual-core) i7 would go up to 3.1…3.3 GHz, which is not that much faster (I have another identical laptop with a quad-core i7 and, on a single thread, it is not that much better).

Here’s an exercise for you: run two instances: One on the RPi3, one on the i5, then check the stats. Tweak both to stay barely below 90%.
The i5 core is probably 50 times faster than the Pi’s, but I expect the position stats to stay within a dozen extra messages.
No amount of multi-threading on the Pi is going to make any difference.
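A rough sketch of that comparison in Python, with made-up totals and hypothetical field names (adapt them to whatever stats your decoder actually exports, e.g. dump1090’s stats.json):

```python
def compare_stats(stats_a, stats_b, fields=("messages", "positions")):
    """Per-field difference between two decoders' stats dicts.

    Field names here are illustrative, not a real schema.
    """
    return {f: stats_a.get(f, 0) - stats_b.get(f, 0) for f in fields}

# Hypothetical hour totals from the i5 and the RPi3:
i5  = {"messages": 412345, "positions": 98020}
pi3 = {"messages": 411980, "positions": 98008}
delta = compare_stats(i5, pi3)
```

If the prediction above holds, the delta stays in the noise despite the huge difference in raw CPU speed.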


I am playing now with getting the ADS-B software onto a virtual Ubuntu machine on that laptop, to free the Pi for other things.
I hit some compilation snags though… but this is for another thread.

You are only adding more layers to a very simple problem. Running USB at these speeds in an emulated environment is a harder problem than you think.

On my first dabble with the N2 in this exercise, I noted at one -e setting (9 perhaps, though not relevant to this question) that the ADS-B CPU utilization was up around 90% (arrgh… I then looked at the overall CPU utilization, only sitting at 15%, calm down)… is the idea of this tweaking to have the ADS-B CPU utilization at about 90%??

That’s what we are talking about. 90% on one core. “Overall” usage is worthless, it just assumes that the load is equally spread over all cores.
You should see some ridiculous “overall” numbers on my dual CPU 8 core/16 threads workstation (32 virtual cores)…
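The per-core vs. “overall” distinction is easy to see by hand. A minimal Python sketch (my own, not part of any decoder) that computes per-core busy percentages from two snapshots of /proc/stat-style counters, using synthetic numbers:

```python
def core_busy_percent(before, after):
    """Busy % per core from two snapshots of per-core counters.

    Each snapshot is a list of tuples in /proc/stat order:
    (user, nice, system, idle, iowait, irq, softirq).
    """
    results = []
    for b, a in zip(before, after):
        delta = [x - y for x, y in zip(a, b)]
        total = sum(delta)
        idle = delta[3] + delta[4]  # idle + iowait
        results.append(100.0 * (total - idle) / total if total else 0.0)
    return results

# One core pegged by the decoder, three nearly idle (made-up counters):
before = [(100, 0, 50, 850, 0, 0, 0)] * 4
after = [
    (1000, 0, 140, 860, 0, 0, 0),   # core 0: ~99% busy
    (110, 0, 52, 1838, 0, 0, 0),    # cores 1-3: ~1% busy
    (110, 0, 52, 1838, 0, 0, 0),
    (110, 0, 52, 1838, 0, 0, 0),
]
busy = core_busy_percent(before, after)
```

Here one core sits at 99% while the averaged “overall” figure reads about 26% — exactly the misleading number a single-threaded decoder produces on a quad-core box.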

No, that’s fine, thanks. A benchmark, per se. Now, can these -e adjustments be set to one decimal place, e.g. 7.5??

What is your ultimate objective? Fully utilizing the CPU, or getting the most messages?


OK. Now I think this whole discussion is irrelevant. There’s a serious knowledge gap to be addressed before we can come up with something more constructive. Let’s leave it here for the moment.

Well, one would think the best performance, at the least amount of strain on the equipment. I agree with your point that you don’t necessarily get the best reception at 100% effort, as I discovered before this last tweaking frenzy by backing off the N2 gain-wise to get better results. :slight_smile:

That’s yet another aspect of the problem (RF/IF/Mixed signals).
The tools developed by @wiedehopf et al. are very effective for getting the gain settings right.


Agree with that comment :slight_smile:

In the current incarnation, what would you estimate to be good gain?
Mostly I’d say having the weakest signal at -20 to -30 dBFS is a good compromise and works for most people.
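For reference, dBFS is just the signal level relative to the converter’s full scale. A quick Python helper (my own sketch, not from any decoder):

```python
import math

def dbfs(amplitude, full_scale=1.0):
    """Level of a sample amplitude relative to full scale, in dBFS.

    0 dBFS = full scale; -20 dBFS = one tenth of full scale.
    """
    return 20.0 * math.log10(amplitude / full_scale)

level = dbfs(0.1)      # the -20 dBFS end of the target zone
weak  = dbfs(0.0316)   # roughly -30 dBFS
```

So the suggested window corresponds to the weakest signal swinging somewhere between a tenth and about a thirtieth of the ADC’s full scale.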

Anyway, I’ve noticed I can tweak the gain over a very wide band while barely noticing any change in performance.
This might be in part due to my range being limited by terrain.

The same rules apply because we are still using the same analog path. In general, just enough gain to not lose the weak signals.

That might be closer to -30 dBFS for the weakest signal or even a little lower.
But it’s hard to tell :slight_smile:

Anyway, that number will always depend on the sample rate as well, as that changes the full scale at the A/D due to shorter or longer integration times.
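The sample-rate dependence can be put in rough numbers: a higher rate gives more samples per symbol, and averaging n samples of uncorrelated noise buys about 10·log10(n) of SNR. A back-of-the-envelope sketch (my own framing, not the decoder’s actual math):

```python
import math

def integration_gain_db(n):
    """Approximate SNR gain (dB) from averaging n samples of
    uncorrelated noise over a longer integration."""
    return 10.0 * math.log10(n)

# Doubling the sample rate doubles the samples per symbol,
# which is worth roughly 3 dB of integration gain:
gain = integration_gain_db(2)
```

Which is one reason a dBFS target tuned at one sample rate won’t transfer exactly to another.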

With 20 MHz, I’ve noticed that once the weakest signal drops below -30 dBFS or so, reducing the gain further doesn’t reduce the reported weakest signal, indicating some weak signals might be getting lost.

Anyway, -20 to -30 dBFS for the weakest signal seems to work well.
Interestingly, I’m seeing some bogus decodes at -40 dBFS right now.
Pretty sure that’s just the preamble detection and decoding matching a random pattern; not sure how it produces 2 messages though.

Note that there is a difference between an actually weak signal and a weak SNR report because you lowered the gain.


Yeah, I’m well aware: the weakest signal shows less SNR when lowering the gain.
But if you drop the gain too much, that signal is no longer decoded, and your weakest decoded signal doesn’t drop as much as you would expect from lowering the gain.
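That effect can be shown with a toy model, assuming the decoder has a fixed sensitivity threshold (the threshold value here is made up for illustration):

```python
def weakest_decoded_dbfs(true_weakest_dbfs, gain_cut_db, threshold_dbfs=-36.0):
    """Reported level of the weakest *decoded* signal after a gain cut.

    Signals pushed below the decoder's sensitivity threshold are lost,
    so the reported weakest level stops tracking the gain reduction.
    Threshold is a made-up number, not the real decoder's.
    """
    shifted = true_weakest_dbfs - gain_cut_db
    return max(shifted, threshold_dbfs)

# Weakest receivable signal sits at -30 dBFS at the current gain:
a = weakest_decoded_dbfs(-30.0, 3.0)   # still tracks the 3 dB cut
b = weakest_decoded_dbfs(-30.0, 10.0)  # clamped at the threshold, not -40
```

The reported weakest level saturates near the threshold instead of following the gain down — the signature that signals are being lost.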

Maybe I wasn’t clear with the terminology.
-20 to -30 dBFS for the weakest decoded signal seems like a good place for most people, with little to be gained by shifting that to, let’s say, -10 dBFS by cranking up the gain.

As you don’t know the actual signal strength at the antenna, it’s hard to know whether you’ve caught the weakest signals you can receive or not :slight_smile: