All four of my receivers have adaptive-gain=yes and adaptive-burst=yes.
Three of these behave like Graph-1, and one behaves like Graph-2.
The 3 receivers with Graph-1 have good antennas, and their dongles (FA Pro Stick Plus and RB FlightStick) have built-in LNAs, so the receiver chips get a strong signal.
The fourth receiver, with Graph-2, is a generic (black) DVB-T dongle using the stock whip antenna that came with it, so naturally its receiver chip gets a much weaker signal than the other 3. In my opinion, the gain being stuck at a fixed value is a result of weak RF signal input to the receiver chip.
The weak RF input may be caused by a low DC supply voltage, leaving the dongle's LNA not working properly; a bad or dying power adaptor could be the cause. Other possible reasons for low RF signal are moisture ingress inside the antenna or loose cable connectors.
This happens when adaptive gain is doing an upward scan, has reached maximum gain, and the dynamic range is still acceptable. (At this point it can't increase gain further, because it's already at the maximum.)
The trigger for a decrease in gain is the measured dynamic range dropping below a threshold (measured dynamic range + half a gain step < dynamic range target). If the measured dynamic range doesn't drop, you won't see any further adaptive gain activity, because there's nothing to do: there's no apparent need to reduce gain (dynamic range is acceptable), and gain is already at maximum so there's no need to probe for a higher gain. dump1090 continues to measure dynamic range continuously, but if it never makes a decision to change the gain, it won't log anything.
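That trigger condition can be sketched in a few lines of Python (the constants here are illustrative, not dump1090's actual values):

```python
GAIN_STEP_DB = 2.0              # illustrative step size, not dump1090's real gain table
DYNAMIC_RANGE_TARGET_DB = 30.0  # the target discussed in this thread

def should_decrease_gain(measured_dynamic_range_db):
    """Decrease gain only if the measured dynamic range, padded by half a
    gain step of hysteresis, falls below the target."""
    return measured_dynamic_range_db + GAIN_STEP_DB / 2 < DYNAMIC_RANGE_TARGET_DB

print(should_decrease_gain(31.0))  # False: acceptable, and at max gain -> nothing to do
print(should_decrease_gain(28.0))  # True: gain would be stepped down
```

While the condition stays False at max gain, there is simply no decision to make, hence no log output.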
Hardware changes, RF environment changes, software bug: take your pick.
I want to see stats.json so I can see what the measured dynamic range is; that'd let me identify or rule out a software bug. You might want to capture/graph dynamic range (it's under adaptive.noise_dbfs in stats).
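If it helps, here's one way to pull that number out of a stats.json snapshot. The exact nesting used here ("last1min" wrapping "adaptive") is my assumption about the file layout, so adjust the keys to match your actual file:

```python
import json

def dynamic_range_db(stats, period="last1min"):
    """Read adaptive.noise_dbfs from a parsed stats.json dict and express
    it as dynamic range: headroom between full scale (0 dBFS) and the
    estimated noise floor."""
    noise_dbfs = stats[period]["adaptive"]["noise_dbfs"]
    return -noise_dbfs

# Synthetic stats fragment (structure assumed, values invented):
stats = json.loads('{"last1min": {"adaptive": {"noise_dbfs": -32.5}}}')
print(dynamic_range_db(stats))  # 32.5
```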
I am learning, so please bear with me: is "dynamic range" basically the difference between the strongest and weakest signals received, in dB?
If so (or in any case), does this graphs1090 chart essentially show dynamic range (the difference between the peak and weakest signals)? If so, note how when the issue arises (between July 3 and midday July 7), with the gain stuck at max and agc halted, the difference between the peak and weakest signals is only about -20dB, but after a reboot, with agc operating and gain set to 49.6 (midday July 7 onwards), the difference between peak and weakest is about -30dB. Isn't that better, so agc should have detected that and restarted scanning on its own before I had to reboot?
Apologies if I am in left field. Appreciate your time.
The adaptive gain logic estimates dynamic range by looking at the 40th percentile of all samples that are not part of a successfully decoded message (sorted by magnitude), so it's not directly looking at the strength of any demodulated message. It's roughly trying to estimate the noise floor (in a different way to what graphs1090 is plotting as "noise"). The target is 30dB, i.e. it will try to find the highest gain that keeps the dynamic range >30dB (i.e. noise floor below -30dBFS).
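To make that concrete, here's a simplified illustration of the idea using NumPy and synthetic data (dump1090 actually works on raw sample magnitudes, not dB values, so this is only a sketch of the estimate, not its real implementation):

```python
import numpy as np

DYNAMIC_RANGE_TARGET_DB = 30.0

def estimated_dynamic_range_db(non_message_levels_dbfs):
    """Noise floor ~= 40th percentile of sample levels (dBFS) that were not
    part of a decoded message; dynamic range is the headroom up to 0 dBFS."""
    noise_floor_dbfs = np.percentile(non_message_levels_dbfs, 40)
    return 0.0 - noise_floor_dbfs

# Synthetic noise with a floor around -33 dBFS:
rng = np.random.default_rng(0)
levels = rng.normal(-33.0, 1.0, 10_000)
dr = estimated_dynamic_range_db(levels)
print(dr > DYNAMIC_RANGE_TARGET_DB)  # True: above the 30dB target, gain can stay put
```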
So the big difference in the peak-to-weakest signal level shown in the graphs1090 chart (~20dB vs ~30dB) does not indicate that agc should have restarted automatically, right? i.e. that chart does not mean that gain = 49.6 is better than max gain?
Btw, wrt the possible reasons for the sudden (and stuck) gain change ("Hardware changes, RF environment changes, software bug: take your pick."), there haven't been any hardware or software changes, and any RF environment changes would surely still be true after a reboot, so presumably the gain wouldn't go straight back down again on a reboot.
Adaptive gain tries to pick an appropriate gain automatically based on what it measures. There's hysteresis in how it does this to avoid repeatedly changing the gain if there are two gain settings that are right on the threshold for changing gain; perhaps in your setup that hysteresis means that a sufficiently "quiet" period can cause it to increase gain (and it may take some time after a restart for a period like this to happen), but subsequently it never gets sufficiently "loud" to bring the gain back down again.
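In other words, something like this (all the numbers here are made up; dump1090's real step sizes and maximum gain differ):

```python
DYNAMIC_RANGE_TARGET_DB = 30.0
GAIN_STEP_DB = 2.0

def next_gain(gain_db, max_gain_db, measured_dr_db):
    """Hysteresis band: only decrease when dynamic range is clearly below
    the target, only probe upward when it is clearly above; otherwise hold."""
    if measured_dr_db + GAIN_STEP_DB / 2 < DYNAMIC_RANGE_TARGET_DB:
        return max(gain_db - GAIN_STEP_DB, 0.0)          # too little headroom
    if measured_dr_db - GAIN_STEP_DB / 2 > DYNAMIC_RANGE_TARGET_DB and gain_db < max_gain_db:
        return min(gain_db + GAIN_STEP_DB, max_gain_db)  # quiet period: try more gain
    return gain_db                                        # inside the band: no change

# A quiet spell (DR 32dB) nudges the gain up; afterwards DR sits at 30dB,
# which is inside the band, so the gain never comes back down on its own:
g = next_gain(49.6, 58.0, 32.0)   # stepped up
g = next_gain(g, 58.0, 30.0)      # held
print(round(g, 1))  # 51.6
```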
Adaptive gain tries to provide a good automatic default, and I'd argue it's providing a reasonable setting for your site (both gain settings seem like plausibly good values) … but if you have better knowledge of which gain setting works better, then you should disable adaptive gain and manually set that gain.
Looking at other stats (range, a/c seen, message rate, etc), I can't discern any significant difference between the two different gain settings, so as you say, either seems to work.
One other curiosity: when gain = max, CPU utilization is higher than when it's at 49.6.
(1) scan every sample looking for something that might be a preamble pattern (a sequence of pulses at the start of every Mode S / ADS-B message)
(2) when we see a possible preamble pattern, try to decode the following samples and see if it is a valid message
Part (1) is an approximately constant CPU load, and (per sample) is cheap to run; it has to run on every sample, so it has to be cheap.
Part (2) is more expensive, which is why we only run it selectively.
When the gain is set higher, thereās more noise received, and there are more false positives in step (1) where noise is misinterpreted as a possible preamble. That means that part (2) needs to run more often, increasing the overall CPU load.
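As a toy cost model of the effect described above (all the per-operation costs and false-positive rates here are invented, purely to show the shape of it):

```python
def demod_cost(samples, preamble_check_cost=1, decode_attempt_cost=50,
               false_positive_rate=0.001):
    """Stage 1 (preamble scan) runs on every sample; stage 2 (decode attempt)
    runs only on candidate preambles. More noise at higher gain means more
    false candidates, so more of the expensive stage-2 work."""
    stage1 = samples * preamble_check_cost
    stage2 = int(samples * false_positive_rate) * decode_attempt_cost
    return stage1 + stage2

low_gain  = demod_cost(1_000_000, false_positive_rate=0.001)
high_gain = demod_cost(1_000_000, false_positive_rate=0.005)  # noisier input
print(high_gain > low_gain)  # True: the extra decode attempts drive up CPU load
```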
@obj, I really do appreciate the time you have taken to provide such clear and detailed answers to my questions. What I have learned is very enlightening, interesting, and I have enjoyed the "conversation". This software is very smart, and well written (I am a software engineer myself, so I have some idea)!