Results from FlightAware 1090 MHz ADS-B Antenna - 26 in

Simply change it to green, blue or whatever you prefer :slight_smile:

For a long time now I have been happily living with the following red errors :wink:

Pi-1: [graph screenshot]

Pi-2: [graph screenshot]

Pi-3: [graph screenshot]

There is a lot of attention paid to the % of strong signals statistic. Is there a reason to look at the other end of the RSSI scale and ask what % of weak but otherwise detectable signals fall below the A/D quantization threshold?

The 1090 MHz downlink signals are encoded as pulse position modulation, so there is no amplitude information beyond 1 bit for a given downlink frame. Additional bits could serve to distinguish overlapping signals on the basis of RSSI.

Setting up the gain of an 8 bit SDR so that the top of the quantization range matches the strongest signals means that the weakest detectable signals will be at most 48 dB below the strongest.
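Rough arithmetic for that, just as a sketch using the usual ~6.02 dB-per-bit rule of thumb:

```python
# Rough dynamic-range estimate per ADC resolution, using the ~6.02 dB-per-bit
# rule of thumb for an ideal quantizer (real ENOB figures come in lower).
for bits in (8, 12, 14):
    print(f"{bits:2d}-bit ADC: ~{6.02 * bits:.0f} dB between full scale and 1 LSB")
```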

Turning down the gain to move the A/D window higher desensitizes the system. I assume that's why gain reductions usually reduce both the % of strong signals and the message rate.

It “seems to me” that the A/D converter ought to be set up to quantize a bit or two into the background noise floor, and the A/D should have enough additional bits to span the RF dynamic range. 14 bits ought to work…

In practice we mostly have 8 or 12 bit SDRs with a fixed set of gain steps and have to work within those limits. I don't have a way to access the A/D output directly anyway. So I have adjusted for maximum message count, averaging over a few weeks at a time.
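In case it's useful to anyone, a minimal sketch of that "tune for maximum message count" approach, assuming a dump1090-fa install with the usual /run/dump1090-fa/aircraft.json and its cumulative messages counter (paths and field names may differ on other setups):

```python
# Sample dump1090-fa's cumulative message counter twice and report the
# message rate, so different gain settings can be compared over time.
# Path and the top-level "messages" field are as on a stock dump1090-fa
# install; adjust for yours.
import json
import time

AIRCRAFT_JSON = "/run/dump1090-fa/aircraft.json"

def message_count():
    with open(AIRCRAFT_JSON) as f:
        return json.load(f)["messages"]

start = message_count()
time.sleep(600)  # sample over 10 minutes; longer is better for averaging
rate = (message_count() - start) / 600
print(f"~{rate:.1f} messages/s at the current gain setting")
```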


In practice the demodulator needs more than 1 bit / 6 dB of SNR, but otherwise the logic is sound - you are moving the approximately 48 dB ADC window around. Usually you can just about make that stretch to cover the range of signals you see within line of sight (signal levels drop sharply as soon as the aircraft crosses the radio horizon) even with only an 8-bit ADC, but there are some tradeoffs to be made. With ~6 dB of that spent on demodulation headroom, the remaining 42 dB will cover roughly a 128x range of distances (e.g. 2-256 km) for the same transmitter power. There's also a 4x-6x or so difference in transmitter powers, but lower-power transmitters also tend to be flying lower (and so fall below the radio horizon at closer distances), so that cancels itself out to some degree.
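The 128x figure is just free-space path loss (received power falling off with distance squared); a quick sketch of the arithmetic:

```python
# Distance ratio covered by a given power window, assuming free-space path
# loss: received power falls off as 1/distance**2, so a power ratio R
# corresponds to a distance ratio of sqrt(R).
window_db = 42
power_ratio = 10 ** (window_db / 10)   # ~15,800x in power
distance_ratio = power_ratio ** 0.5    # ~126x in distance, i.e. "roughly 128x"
print(f"{window_db} dB window -> ~{distance_ratio:.0f}x range of distances")
```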

Trying to measure the rate of signals that you would have received but can’t because of the ADC resolution is tricky because, well, you didn’t receive them…


That would be way too expensive at the 1 GHz frequency.
I switched to Airspy because they use a 12 bit ADC @ 20 MSPS (10.4 ENOB, 70 dB SNR, 95 dB SFDR).
Is that much better than the 8 bit ADC? It depends on everyone's conditions (traffic pattern; the difference grows with higher numbers of signals) and expectations.

PS: The Effective Number of Bits (ENOB) should be advertised more prominently in ADC/DAC products. I see people claiming they have 24 or 32 bits in digital audio, but when you actually look at the ENOB they barely scratch 20-20.5. Some even go as low as 18 ENOB on a product that claims a “32 bit DAC”.
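For reference, ENOB is normally derived from the measured SINAD; a quick sketch of that conversion (the numbers below are purely illustrative, not datasheet measurements of any particular product):

```python
# ENOB from measured SINAD, using the standard conversion
#   ENOB = (SINAD - 1.76 dB) / 6.02 dB per bit
def enob(sinad_db):
    return (sinad_db - 1.76) / 6.02

# Illustrative numbers only:
print(f"{enob(64.4):.1f} effective bits at 64.4 dB SINAD")  # ~10.4
print(f"{enob(110):.1f} effective bits at 110 dB SINAD")    # ~18, on a nominally "32 bit" audio DAC
```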

It's a hobby, so who is to say what some might pay?

Anyway, the point is that the dynamic range of the ADC, however many bits, should be set to best cover the range of 1090 MHz signal amplitudes. Adjusting to reduce the % of strong signals is not necessarily an optimal approach.

The goal has never been to reduce the strong signal rate. The goal is to improve overall reception, and the strong-signal rate is one metric you can use to tune that; it’s an indication that you might be losing some signals at the strong end of the range which you might want to receive.

What other metric are you suggesting?

(One thing I've been considering is to look for full-scale values in the IQ samples, as an alternative indication that the ADC is hitting the top of its range. However, there's a lot of digital baseband processing that goes on in the RTL2832 between the raw ADC values and the sample values we get, so I'm not sure how useful it'll be.)
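As a rough sketch of that idea: assuming an unsigned 8-bit I/Q capture from rtl_sdr, you can simply count samples pinned at the extremes, with the same caveat that these aren't the raw ADC values.

```python
# Count I/Q samples pinned at the extremes of an unsigned 8-bit capture,
# e.g. one produced with: rtl_sdr -f 1090000000 -s 2400000 -n 24000000 capture.bin
# Values of 0 or 255 suggest the front end is hitting the edge of its range,
# with the caveat that the RTL2832's baseband processing sits between the
# raw ADC and these samples.
import numpy as np

samples = np.fromfile("capture.bin", dtype=np.uint8)
clipped = np.count_nonzero((samples == 0) | (samples == 255))
print(f"{clipped} of {samples.size} samples at full scale "
      f"({100 * clipped / samples.size:.3f}%)")
```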

Well the usual guidelines don’t tell you to reduce the percentage to zero.

I think most people do understand that tweaking for that percentage is just to get you in the ballpark.
It usually works just fine to aim for 1% to 5%, and in most cases that's pretty close to optimal.
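If you want to read that percentage straight off dump1090-fa, something like this sketch works, assuming the usual /run/dump1090-fa/stats.json layout with strong_signals and accepted counters (field names differ between forks):

```python
# Percentage of "strong" (above -3 dBFS) messages over the last 15 minutes,
# read from dump1090-fa's stats.json. Field names are as found in dump1090-fa;
# other forks may differ.
import json

with open("/run/dump1090-fa/stats.json") as f:
    local = json.load(f)["last15min"]["local"]

accepted = sum(local["accepted"])   # accepted messages, bucketed by corrected bit errors
strong = local["strong_signals"]    # messages louder than -3 dBFS
print(f"{100 * strong / max(accepted, 1):.1f}% strong messages (rough target: 1-5%)")
```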

If you’re interested mostly in nearby aircraft, then reducing the percentage to something really low makes perfect sense.

Anyhow, it's always a compromise, and rough guidelines are what most people are looking for.
Everyone is welcome to use their own judgement.


If you only have a few signals that reach 100% of the digital range, that's fine. But once you have multiple strong signals, they start to overlap statistically. The signals are not correlated, so the more planes you can see, the higher the probability of collisions.
There is no mechanism for resolving collisions.
So overlapped full-scale signals will start to merge into one another, losing the transitions (edges), and the data become impossible to recover.
On the other hand, if there is some delta between levels, the software can still detect the transitions.
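A toy model of that effect, assuming Poisson message arrivals and the ~120 µs duration of an extended squitter:

```python
# Toy collision model: Poisson message arrivals of duration tau collide when
# another message starts within +/- tau, so P(overlap) = 1 - exp(-2*rate*tau).
import math

MESSAGE_DURATION = 120e-6  # seconds; extended squitter = 8 us preamble + 112 us data
for rate in (200, 500, 1000, 2000):  # total messages per second in view
    p = 1 - math.exp(-2 * rate * MESSAGE_DURATION)
    print(f"{rate:5d} msg/s -> ~{100 * p:.0f}% of messages overlap another")
```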

Overlapping messages are unlikely to be decoded by rtl-sdr anyhow.
And that’s not the issue with having the gain too high.

I don't have enough background to know what the exact cause is, but from my understanding part of the analogue front end gets overloaded and the signal gets distorted by the overload.

There aren't enough collisions to explain completely losing reception of a nearby helicopter / aircraft when you have the gain set too high.

I have a very dim, decades-old memory of a capacitor with a time constant that was slightly too long, such that weak signals in a timeslot directly after a strong signal were lost. It was in a military packet radio, but I can't remember the exact circumstances. I do remember that it drove us nuts trying to track it down.


I've thought about this many times in the past years and never had the time or energy to dig into the code to see how it's currently being handled, but I have always figured that packing the unprocessed (otherwise discarded) stream into arrays sorted by signal strength for post-processing could help. That said, I'm smart enough to realize what I don't know, and I have no clue about the feasibility of such processing, or whether it's even possible to reliably sort bits of the stream by signal strength to begin with - although I'm pretty sure this is already being done in some capacity.

The premise would be that bits/chunks of the same strength more than likely belong to the same transmitter (within a sensible timeframe), so these fragmented packets could be chained together to see if they pass a round of error checking before being flushed. I'm talking a bit out of my arse since I haven't written a line of code in a couple of years now, outside of some basic patches or otherwise kindergarten-level work…
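Purely as a toy illustration of that premise (nothing here is how dump1090 actually works; the fragment format, the 2 dB bucket width and the CRC hook are all made up):

```python
# Toy sketch of the idea above: bucket demodulated fragments by measured
# signal level and only try to chain fragments from the same bucket.
# Everything here is hypothetical -- the fragment format, the 2 dB bucket
# width and the CRC hook are stand-ins, not anything dump1090 actually does.
from collections import defaultdict

def bucket(fragments, width_db=2.0):
    """Group (rssi_db, payload_bytes) fragments by coarse signal level."""
    groups = defaultdict(list)
    for rssi_db, payload in fragments:
        groups[round(rssi_db / width_db)].append(payload)
    return groups

def try_reassemble(groups, crc_ok):
    """Chain fragments within each level bucket, keep CRC-clean candidates."""
    decoded = []
    for payloads in groups.values():
        candidate = b"".join(payloads)
        if crc_ok(candidate):
            decoded.append(candidate)
    return decoded

# Hypothetical usage: two fragments at a similar level chain into one frame.
fragments = [(-12.1, b"\x8d\x40\x62\x1d"), (-11.8, b"\x58\xc3\x82\xd6"), (-31.0, b"\xff")]
frames = try_reassemble(bucket(fragments), crc_ok=lambda frame: len(frame) == 8)
```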

That's true for amplitude-modulated signals.
But here we have digital signals, more like FM. A clipped signal, versus a clean sinus wave, is preferable. I think that the fronts of the signal (the transitions) are what's used to demodulate it. What the top part looks like is probably not that important as long as it is fairly flat.
But I admit I can be wrong. Sometimes an overloaded amp will “clamp” up and stay like that for a while. I don't know if the CMOS parts inside the decoder chip do that.

From the “Nose Source” - right? :laughing:



The sinus and the nose are related to each other :wink:



Oh man, tough crowd.


It’s OK, they’re here all week. Avoid the fish and don’t forget to tip the waitress.

I just posted some comparison results here…

Update: I've been able to visit my home this week, so I can now view the graphs over the last 2 months.

I lowered the filter to 31 (from 42) and my errors decreased to 1.9%.

Good? Bad?

I’m changing back to a spider antenna today too so I can see how it compares to the FA antenna…

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.