dump1090 --phase-enhance option

I have not tried this. Since it was turned off by default a few releases back, I feel it may not be beneficial.

Has anyone benefited by using this option? How big an improvement did it make?

TIA

Generally 5-15%, depending on exactly which version of the code you’re running.

If you don’t mind bleeding-edge code you might be interested in github.com/mutability/dump1090/ … ersampling too (run with --oversample)

Thank you. Information on phase-enhancement is hard to find. How has it varied with the different versions of dump1090?

Presumably when oversampling you take multiple samples during the interval in which the incoming signal is meant to be either 0 or 1, rather than just sampling once in the middle of the window. How many times do you oversample?

From what I have been able to pick up about ADS-B transmissions so far, PPM (pulse position modulation) is employed, with a transition in the middle of the bit interval. As with all things analogue, I suspect the zero and one decision levels either side of the transition are going to be subject to noise, and there is probably some uncertainty (jitter?) in determining the middle of the bit cell. Is this what you are trying to overcome with oversampling? I imagine the packet preamble is what sets up the timing for decoding.

What sort of arbitration technique do you use to process the multiple (over)samples to derive the final value? A simple majority, or is it weighted? Multiple samples would seem to be a good method of defeating noise spikes.

How much extra CPU load does it put on the Raspberry Pi? It probably depends on how many packets per second you are handling. How much improvement have you been able to achieve with oversampling? Do you use oversampling combined with phase-enhancement? So far, all I have been able to dredge up on phase enhancement implies that it is some sort of amplitude-based correction to improve the chance of successfully predicting a 0 or 1.

Sorry for the barrage of questions. Google is not proving very helpful. I experimented a little with oversampling amateur radio RTTY signals a few years back but did not get good results.

TIA

The main change I am thinking of is this one: github.com/MalcolmRobb/dump1090/pull/36
(Malcolm merged that a few weeks ago)

Previously the phase enhancement would only move bit energy in one direction; that pull request changes it so that it can move energy in both directions, depending on which direction the phase offset is in. It also tries to make the adjustment proportional to the size of the phase offset. The weighting is fairly arbitrary: it was basically what seemed to work after some experimentation.
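
To illustrate the idea only - this is a hypothetical sketch, not the code from that pull request - shifting a fraction of each sample's energy into a neighbour, in the direction of the phase offset and proportional to its size, might look like:

#include <math.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical illustration, not the actual pull-request code: move a
 * fraction of each magnitude sample's energy into a neighbouring sample,
 * in the direction of the estimated phase offset, with the moved amount
 * proportional to the size of the offset. */
static void phase_shift_energy(uint16_t *out, const uint16_t *in, int n,
                               float offset /* approx -0.5 .. +0.5 */)
{
    memcpy(out, in, n * sizeof *in);
    for (int i = 1; i < n - 1; i++) {
        uint16_t moved = (uint16_t)(in[i] * fabsf(offset));
        out[i] -= moved;
        if (offset > 0)
            out[i + 1] += moved;   /* sampled late: push energy forward */
        else
            out[i - 1] += moved;   /* sampled early: push energy back */
    }
}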

That change roughly doubled the number of messages I saw recovered by phase-correction, and Malcolm saw about the same.

(I got slightly higher values, up to around 15% - it probably depends on the exact receiver setup)

Oversampling is probably the wrong terminology (I don’t have a DSP background!).

The problem is that the symbols are transmitted at 2MHz and we are also sampling at 2MHz. So we only have one sample per symbol (2 samples per Manchester-encoded bit), and we don’t have any control over the phase of our sampling rate versus the incoming signal! So it’s mostly luck whether we get a good phase match or not. The worst case is where we happen to sample right on the transition edges - there’s nothing we can do there. In most other cases we have the problem that our sampling window overlaps two symbols, interfering with our ability to compare the levels; this is what the existing phase enhancement code tries to deal with.

(Nyquist sampling limits are not so bad as they would first appear here, because we’re getting complex I/Q samples rather than single real samples)

The changes switch to sampling at 2.4MHz, rather than 2MHz. That’s basically at the limit of what the dongle will do, higher sample rates are not stable. Then are getting about 1.2 samples per symbol, 2.4 per Manchester-encoded bit. This means 12 samples every 5 bits. Each bit is going to have a different phase offset with respect to the samples, but the pattern repeats every 5 bits. It would be great to be able to sample at a higher rate, as there’s still a lot of ambiguity here, but unfortunately you just can’t run these dongles above 2.4MHz reliably :frowning:
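
To make the 12-samples-per-5-bits arithmetic concrete, here's a minimal standalone sketch (not dump1090 code) of how the bit-to-sample phase relationship cycles:

#include <stdio.h>

int main(void)
{
    /* 2.4 MHz sampling / 2 MHz symbol rate = 1.2 samples per symbol,
     * i.e. 2.4 samples per Manchester-encoded bit, 12 samples per 5 bits.
     * Bit n starts at sample 2.4*n; the fractional part (in fifths of a
     * sample) cycles 0, 2, 4, 1, 3 and then repeats every 5 bits. */
    for (int bit = 0; bit < 10; bit++) {
        int start_fifths = bit * 12;        /* bit start, in 1/5 samples */
        printf("bit %2d: sample %2d + %d/5\n",
               bit, start_fifths / 5, start_fifths % 5);
    }
    return 0;
}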

The decoder picks an initial phase offset based on where the peaks in the preamble appear to be (see mode_s.c:2036).
Another approach I tried here was to do a trial bit decision pass over the first few bits of the message with different offsets and pick the best one, but in practice this wasn’t so effective and later changes made it a bit pointless - there is still some dead code floating around for it though (mode_s.c:1980). edit: actually, that code is still live! Huh. Guess I must have decided it was still useful in the end.

The decoder then steps through the bits one at a time. For each bit it knows what the current phase offset should be. The bit decision is done by correlating what an ideal 0/1 transition should look like at the current phase offset with the actual data. The slicers are set up so that the sign of the correlation tells you the bit value (i.e. the correlation weights sum to zero) - as a 1/0 transition looks like an exactly-180-degrees-wrong 0/1 transition. See the loop at mode_s.c:2146 and the slicers at mode_s.c:1897.
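
As a rough illustration of a zero-sum slicer (hypothetical weights here; the real phase-dependent tables are the ones in mode_s.c), the bit decision boils down to the sign of a correlation:

#include <stdint.h>

/* Hypothetical sketch, not the mode_s.c slicers: correlate the magnitude
 * samples around one bit against an idealised 0/1 transition shape.
 * The weights sum to zero, so the sign of the correlation alone gives the
 * bit value - a 1/0 transition is the same shape with its sign flipped. */
static int slice_bit(const uint16_t *mag, const int *weights, int n)
{
    int corr = 0;
    for (int k = 0; k < n; k++)
        corr += weights[k] * mag[k];
    return corr > 0;   /* which sign maps to which bit is a convention */
}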

When sampling at 2.4MHz the existing phase-enhancement code is not used. Instead it brute-forces all possible phase offsets for messages which seem close to being decodable (demodulated OK, but had CRC errors). See mode_s.c:2359. That was what made the initial scanning for a good phase offset a bit redundant… picking a phase based on the preamble peaks is good enough (and much cheaper), and then we pick up the pieces later for those messages that it doesn’t quite work on.
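
Schematically - a hedged sketch with made-up helper names, not the actual mode_s.c code - the retry amounts to:

#include <stdint.h>

struct modes_message;                      /* opaque, hypothetical */
extern int demodulate_at_phase(const uint16_t *samples, int phase,
                               struct modes_message *msg);
extern int crc_ok(const struct modes_message *msg);

/* Hypothetical sketch: for a message that demodulated but failed CRC,
 * retry demodulation at each of the five possible phase offsets
 * (4..8, matching the 5-bit sample cycle). */
static int retry_all_phases(const uint16_t *samples, int first_phase,
                            struct modes_message *msg)
{
    for (int phase = 4; phase <= 8; phase++) {
        if (phase == first_phase)
            continue;                      /* already tried this phase */
        if (demodulate_at_phase(samples, phase, msg) && crc_ok(msg))
            return phase;                  /* recovered at another phase */
    }
    return -1;                             /* no phase worked */
}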

In terms of CPU load and message reception rates it is unfortunately hard to compare directly, because you can't feed the same test file to both versions. I need to get myself a splitter and feed the same RF signal to two dongles, really. That said, my rough estimates were that it was getting 20-30% more messages, and CPU load was OK so long as you don't turn on digital AGC (which seems to cause a lot of false positives in the preamble detector). Looking at one of my B+ receivers at the moment, it's taking about 40% CPU to process around 900 messages/second (it's not linear; there is a fixed overhead just to scan for signals even if none are found).

Thanks for the great reply.

The problem is that the symbols are transmitted at 2MHz and we are also sampling at 2MHz. So we only have one sample per symbol (2 samples per Manchester-encoded bit), and we don’t have any control over the phase of our sampling rate versus the incoming signal!

Ouch! It is a miracle it works as well as it does. Sounds like the cheap and simple RTL dongle is right at the edge of its capabilities and it will be very hard to do much better with it.

Gents, please keep this very informative discussion going. Learning a lot!
Would the more capable R820T2 (as opposed to the R820T) be of any benefit? It has a wider bandwidth, as I understand it.
Have one in the mail and expect it to arrive somewhere after the weekend.
/paul

Unfortunately it’s not the tuner that’s the limiting factor, it’s the RTL2832U itself.

The 2832 samples the tuner output at 28.8MHz then does the rest of the work (downconverting the IF and generating I/Q outputs, then decimating to the requested sample rate) on the digital side of the ADC.
So the data is already there, somewhere. We just can’t get at it because the USB output stage isn’t reliable at sample rates above 2.4MHz and starts dropping data.
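
As a back-of-the-envelope check on why 2.4MHz is the ceiling (assuming the usual 8-bit I and 8-bit Q output from these dongles):

#include <stdio.h>

int main(void)
{
    /* The dongle ships one unsigned 8-bit I byte plus one Q byte per
     * complex sample, so the USB payload rate is 2x the sample rate. */
    double sample_rate = 2.4e6;                 /* samples per second */
    double usb_bytes   = sample_rate * 2.0;     /* ~4.8 MB/s sustained */
    printf("USB payload: %.1f MB/s\n", usb_bytes / 1e6);
    return 0;
}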

Yeah, when I first looked at the dump1090 code my reaction was “This can’t work, why does this work?!” :wink:

So, with my simple level of understanding, there doesn’t seem to be much chance of improving things for the cheap and nasty RTL dongle for ADS-B.

Presumably the different type of modulation used for DVB-T (OFDM?) compared to PPM for ADS-B is the reason why the DVB-T mux rate (max about 853 kbytes/sec) can be processed OK by the RTL chip. Any comments, please?

I believe when doing DVB-T the demodulation is mostly happening directly on the dongle and only the processed data is shipped out - it’s not in SDR mode.

Thanks obj, real good info about dump1090. -Paavo

The --aggressive option to dump1090 seems to be of mixed benefit, in that its invocation may result in some errored data. Are there any issues like this with --phase-enhance or --oversample, please? Why was the --phase-enhance default value changed to disabled a little while back?

From what I have been able to dig up, it seems that rtl1090 can output a file of raw IQ values. I am not clear where dump1090 “attaches” to the RTL dongle. Could this rtl1090 file of IQ values be fed into dump1090 using the --ifile command line option?

As obj referred to decimating the IQ values, I have gained the impression (rightly or wrongly?) that obj’s reference to needing an RF splitter for comparing the performance of two dump1090 variants might be circumvented this way. I thought this was how you and Malcolm Robb did your testing. I assume rtl1090 would have the same USB interface problems wrt maximum stream bit rate.

Given the possible unit-to-unit variability of the dongles, I do not have a good feeling that the same RF feed to two dongles would produce identical results using the same dump1090 variant. I am probably barking up the wrong tree as usual.

The dump1090 --help output references TCP ports 30001 & 30002 with the --net-ri-port and --net-ro-port options. Do these TCP ports take raw IQ values, or binary ADS-B packets rather than ASCII ADS-B packets? To get the Beast format data ports 30004 and 30005 operational, do you require the --net-beast option? Or does this option change the data encoding for ports 30001 and 30002?

TIA

Yep. The main change there is that it starts fixing 2-bit errors which is a bit more marginal - there’s a greater chance you actually have (say) a 4-bit error that looks like a 2-bit error, and then it gets corrected to a valid-but-incorrect message. The risk of this is lower if you’re only fixing 1-bit errors, as you need more error bits to get a false positive that looks like a 1-bit error.
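
For context, a hedged sketch of what a 1-bit fix amounts to (Mode S uses a 24-bit CRC; modes_crc() here is a stand-in name, not the real dump1090 function):

#include <stdint.h>

/* Stand-in for the real CRC check: returns 0 when the message checks out. */
extern uint32_t modes_crc(const uint8_t *msg, int bits);

/* Hypothetical sketch: flip each bit in turn and re-check the CRC.
 * A 112-bit message gives only 112 single-bit candidates; allowing 2-bit
 * fixes means 112*111/2 = 6216 candidates, so a badly damaged message has
 * many more chances to alias to a valid-but-incorrect codeword. */
static int fix_single_bit_error(uint8_t *msg, int bits)
{
    for (int i = 0; i < bits; i++) {
        msg[i / 8] ^= 1 << (7 - (i % 8));     /* flip bit i */
        if (modes_crc(msg, bits) == 0)
            return i;                          /* fixed */
        msg[i / 8] ^= 1 << (7 - (i % 8));     /* undo, try the next bit */
    }
    return -1;                                 /* no single-bit fix found */
}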

Are there any issues like this with --phase-enhance or --oversample, please?

Not that I know of, other than you’re throwing more noisy messages at the CRC checks, so (a) more messages in total means more messages with undetected errors, even at a constant bit-error-rate, and (b) the extra messages are likely to have a higher bit-error-rate too, because they were marginal in the first place.

It’s worth noting that you will always have a risk of receiving bad data that contains undetected errors. The question here is what rate of undetected errors is acceptable. Error detection is not magic :slight_smile:

Why was the --phase-enhance default value changed to disabled a little while back?

You’d have to ask Malcolm; I don’t know the exact reasoning.
He did have some comments on my pull request that it tends to increase the number of messages with bad interrogator fields in DF11 messages, as these aren’t protected by CRC. This is unfortunately always going to be a problem as you try to extract more messages from noisy data. But FlightAware et al don’t use the interrogator IDs AFAIK; it’s only really of interest for “beamfinding” (or if you are a secondary radar and know your own interrogator ID!)

From what I have been able to dig up, it seems that rtl1090 can output a file of raw IQ values.

You can also use rtl_sdr to generate this:



$ rtl_sdr -f 1090000000 -s 2000000 a-sample-file.bin


I am not clear where dump1090 “attaches” to the RTL dongle. Could this rtl1090 file of IQ values be fed into dump1090 using the --ifile command line option?

Conceptually yes - I don’t know if the format is exactly the same, but so long as it’s raw sample output it should be OK. There’s no particular reason to use rtl1090 for this, though; rtl_sdr probably makes more sense.
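
For example, reusing the capture from above (the file must have been recorded at the sample rate dump1090 will run at):

$ dump1090 --ifile a-sample-file.bin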

As obj referred to decimating the IQ values, I have gained the impression (rightly or wrongly?) that obj’s reference to needing an RF splitter for comparing the performance of two dump1090 variants might be circumvented this way.

The problem is that the raw sample file has to be at the right sample rate - you can’t use the same file to test both 2MHz and 2.4MHz.
You can’t convert between sample rates in a meaningful way; you can certainly downsample or upsample between the two rates, but that’s not going to produce the same data that would have been captured off the air, just a lossy approximation of it.

The decimation happens within the dongle, from the dongle’s internal 28.8MHz rate to what the application has requested - either 2MHz or 2.4MHz. To generate both sample rates from the same input, we’d need the 28.8MHz data, which we can’t get.

I thought this was how you and Malcolm Robb did your testing.

Testing against a known sample file is very useful, yes - but you need a sample file with the right rate. Hence the comparison problem.

I assume rtl1090 would have the same USB interface problems wrt maximum stream bit rate.

Yes, it’s a hardware limitation.

Given the possible unit-to-unit variability of the dongles, I do not have a good feeling that the same RF feed to two dongles would produce identical results using the same dump1090 variant. I am probably barking up the wrong tree as usual.

Well, you can always swap which dongle is used in which mode - test A@2MHz vs B@2.4MHz, then test A@2.4MHz vs B@2MHz.

The dump1090 --help output references TCP ports 30001 & 30002 with the --net-ri-port and --net-ro-port options. Do these TCP ports take raw IQ values, or binary ADS-B packets rather than ASCII ADS-B packets? To get the Beast format data ports 30004 and 30005 operational, do you require the --net-beast option? Or does this option change the data encoding for ports 30001 and 30002?

Ports 30001/30002/30004/30005 are all dealing with demodulated ADS-B messages, not the raw IQ samples. RI/RO use the “AVR” format, BI/BO use the “beast” format. Both are demodulated but uninterpreted data, i.e. you will have 56 or 112 bits of Mode S message data in each message (+ framing and metadata depending on which format is in use) and it’s up to you to interpret that.
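
For illustration, an AVR-format (“raw”) message is just the message hex with ASCII framing, one message per line - e.g. a 112-bit extended squitter looks like this (an illustrative message, not from a live feed):

*8D4840D6202CC371C32CE0576098;

Beast format carries the same payload in a binary frame - roughly an escape byte, a type byte, a timestamp, a signal-level byte, then the message bytes.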

--net-beast is really only there for legacy compatibility; all it does is make --net-ro-port behave like --net-bo-port (i.e. “--net-beast --net-ro-port=12345” is the same as “--net-bo-port=12345”). In older versions I believe it did change the encoding format, but dump1090 now supports both formats simultaneously, so you probably don’t want to use it at all; you can get the same effect more obviously by just using --net-bo-port directly. (Or just don’t specify any port options - all the ports are on by default when you specify --net.)

Hi,

I cloned the oversampling branch from mutability’s dump1090 git, and built it.


git clone -b oversampling git://github.com/mutability/dump1090.git
mv dump1090 mutability-dump1090-oversampling
cd mutability-dump1090-oversampling
make

The --help output does not say anything about oversampling, but the output of strings dump1090 includes this:


Oversampling enabled. Be very afraid.
--oversample

Has someone tested/used this with --oversample? What kind of results did you get?

I’ve been running that branch live for a month or so. Current hourly stats for one of my receivers (with --oversample --phase-enhance --fix):



Statistics as at Thu Nov 20 11:25:39 2014
65937 sample blocks processed
0 sample blocks dropped
716139 ms CPU time used to process 3601039 ms samples, 19.9% load
0 ModeA/C detected
79651 Mode-S preambles with poor correlation
20070514 Mode-S preambles with noise in the quiet period
23633565 valid Mode-S preambles
   6384014 with phase offset 4
   3681466 with phase offset 5
   3669517 with phase offset 6
   3880132 with phase offset 7
   6018436 with phase offset 8
64261 DF-?? fields corrected for length
75467 DF-?? fields corrected for type
6717260 demodulated with 0 errors
1002610 demodulated with 1 error
695756 demodulated with 2 errors
105170401 demodulated with > 2 errors
3212775 with good crc
   463942 with phase offset 4
   719664 with phase offset 5
   579333 with phase offset 6
   895188 with phase offset 7
   554648 with phase offset 8
5819222 with bad crc
57208 errors corrected
   57208 with 1 bit error
   0 with 2 bit errors
5762014 phase enhancement attempts
11997634 phase enhanced demodulated with 0 errors
3211454 phase enhanced demodulated with 1 error
1917684 phase enhanced demodulated with 2 errors
205910107 phase enhanced demodulated with > 2 errors
188575 phase enhanced with good crc
   23719 phase enhanced with phase offset 4
   39836 phase enhanced with phase offset 5
   44155 phase enhanced with phase offset 6
   29670 phase enhanced with phase offset 7
   51195 phase enhanced with phase offset 8
18163451 phase enhanced with bad crc
14731 phase enhanced errors corrected
   14731 phase enhanced with 1 bit error
   0 phase enhanced with 2 bit errors
3473289 total usable messages


(some of the stats aren’t resetting properly hourly, IIRC, so take with a grain of salt)

I have spent some time scratching my head over why there’s an obvious bias in the choice of phase offset, but didn’t come up with anything conclusive yet.

Hi,

I’ve been running that branch live for a month or so. Current hourly stats for one of my receivers (with --oversample --phase-enhance --fix)

That is encouraging, I’ll go that way as well.

You are not using --aggressive? Is this the reason:

The main change there is that it starts fixing 2-bit errors which is a bit more marginal - there’s a greater chance you actually have (say) a 4-bit error that looks like a 2-bit error, and then it gets corrected to a valid-but-incorrect message. The risk of this is lower if you’re only fixing 1-bit errors, as you need more error bits to get a false positive that looks like a 1-bit error.

I have 822 “X with 2 bit error” lines in /var/log/dump1090.log, and 2919 total demodulated with 2 errors - not much compared to the total of 58161 usable messages. I’ll take --aggressive away.

-Paavo

Generally 5-15%, depending on exactly which version of the code you’re running.

Can confirm that, judging by the rate increase here from roughly 550 to 630 msgs/s (using FA’s 1.18 build).
/paul

dump1090 --oversample --phase-enhance increases Pi CPU utilisation by about 10%, which is not a problem. I do not have a very high message/sec rate. The CPU is running at 800 MHz.

Hi,

I have now


--net --fix --stats-every 3600 --modeac --phase-enhance --oversample

but I’m not getting any ModeA/C messages any more; every stats output has


0 ModeA/C detected

Not a problem, but is this a bug or a feature? Does the oversampling dump1090 support --modeac any more?

-Paavo