Antenna testing - in-progress notes

Intro

Currently performing antenna testing using two generic USB SDR dongles, keeping one setup as a control rather than relying on air traffic staying the same from day to day (which it never does here). This post will be updated during the testing, and I’ll eventually make a new post with a summary of the results.

Sites

Site 1: First dongle will remain connected to the supplied whip antenna, and is uploading data to Test01.

Site 2: Second dongle will connect to a variety of antennae, and is uploading to Test02.

Obviously it’s impossible to site the antennae in exactly the same location, but they’re very close. Some images will appear below. The only change being made to the Site 1 whip antenna is its height, which will be adjusted so that the tops of the two antennae are level.

Currently testing:

Since 2023-10-29 (first full day)

Site 1: Whip antenna
Site 2: ‘NooElec’ telescopic antenna on metal lid

My first USB RTL-SDR dongle was from NooElec, and came supplied with a telescopic antenna. NooElec recommended keeping the antenna fully collapsed for ADSB use.

Previous testing:

Summary

All values relative to Site 1 with the whip antenna.

Antenna            Aircraft   Positions   Peak range
Whip               +0.4%      -2%         +29.2%
Whip on can        +1.7%      +6.7%       +8.8%
Whip on lid        +10%       +18.3%      +0%
Telescopic (min)   +3.5%      +6.2%       +11.8%
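For anyone wanting to reproduce the summary figures: they are just relative differences against the Site 1 control. A minimal sketch, using made-up counts rather than real data from the tests:

```python
def relative_diff(site2: float, site1: float) -> float:
    """Percentage difference of Site 2 relative to the Site 1 control."""
    return (site2 - site1) / site1 * 100

# Hypothetical daily aircraft counts (not real data from the tests):
site1_aircraft = 1000
site2_aircraft = 1017

print(f"{relative_diff(site2_aircraft, site1_aircraft):+.1f}%")  # → +1.7%
```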

Whip only - 2023-10-02 to 2023-10-05

Site 1: Whip antenna
Site 2: Whip antenna

This is a control test, to make sure we’re working with a level playing field. The aim is to show that the two USB SDR dongles and the two antenna sites perform the same.

Quick results:

  • Aircraft: Site 2 logged 0.4% more aircraft
  • Positions: Site 2 logged 2% fewer positions
  • Range: Site 2 logged 29.2% greater peak range

Whip on can - 2023-10-07 to 2023-10-09

Site 1: Whip antenna
Site 2: Whip antenna on upturned can

[image: whip antenna on an upturned can at Site 2]

Quick results:

  • Aircraft: Site 2 logged 1.7% more aircraft
  • Positions: Site 2 logged 6.7% more positions
  • Range: Site 2 logged 8.8% greater peak range

Whip on metal lid - 2023-10-11 to 2023-10-13

Site 1: Whip antenna
Site 2: Whip antenna on metal lid

Quick results:

  • Aircraft: Site 2 logged 10% more aircraft
  • Positions: Site 2 logged 18.3% more positions
  • Range:
    • Site 1 & Site 2 had identical peak range
    • Site 2 logged 10.7% greater average max. range

Nooelec telescopic antenna - 2023-10-15 to 2023-10-26

Site 1: Whip antenna
Site 2: ‘NooElec’ telescopic antenna

Quick results:

  • Aircraft: Site 2 logged 3.5% more aircraft
  • Positions: Site 2 logged 6.2% more positions
  • Range:
    • Site 2 logged 11.8% greater peak range
    • Site 2 logged 2.1% greater average max. range

Reserved for any follow-on/overrun.

Is “site 1” on a pile of plastic play blocks?
Unless you are deliberately handicapping your reference site, at least give it a ground plane to work with.

The whole point of this is comparison against a control. Not interested in absolutes. This way, I’ll get a relative “better or worse than the control” figure which can eventually be used to compare the other antennae against each other.

The control in this is a whip antenna by itself, as I wanted to use exactly what was delivered with the USB SDR. But it doesn’t matter as long as the control stays the same.

Yeah, I get that, but the supplied antenna is a ground plane monopole. If you don’t even give it a ground plane, it’s barely an antenna. Also, it’s known to be a pretty poor performer and you are making it worse.
If you set your benchmark low enough, “anything” will look good by comparison.

Again, that doesn’t matter. I care that I can compare all the other antenna against each other, without actually having to physically pit each one against each other.

Again, I get it.
It’s what those of us with multiple receivers have been doing for years.

You seem determined to take your worst antenna (cripple it) and use that as your reference.
I take the opposite approach - I use my best antenna as the reference and see if I can improve on it (you acknowledge no day is comparable to another).
I then take my worst antenna and bin it.

It’s a hobby - do what makes you happy.


Great, seems like we agree on methodology, but approach it from opposite perspectives.

(And to be clear: I’m not comparing against Site 1 to make a judgement on the antenna at Site 1. I don’t care about the antenna at Site 1, other than it stays the same. The point is to use the results to compare the antennas at Site 2.)

Another way to do comparisons is to use a nearby site as a reference. It takes several days to make a comparison since conditions change on a day to day basis. So you have to compare over several days. The only assumption in this is that the nearby site is not also changing things!


My test setup is a single antenna followed by a 1090 filter, LNA and splitter that feeds signals via identical cables to a single RPi with both the reference receiver and the test receiver. The variables are the dongle hardware and the software settings.

Curious to follow along on this. One of my buddies just wrote a program to help test range, using the assumption that if you received one packet you should receive the next until the thing is out of range. We found ADS-B through Wingbits, and this has been a fun little project. Not sure if it’s OK to post links, but here’s the GitHub.
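The linked program’s internals aren’t described here, but the stated assumption could be sketched like this. This is just a guess at the idea, not the actual code; `max_reliable_range` and its 5-second gap threshold are my inventions:

```python
def max_reliable_range(reports, max_gap_s=5.0):
    """Farthest distance seen before the first reception gap longer than
    max_gap_s seconds, for one aircraft's (timestamp_s, distance_nm)
    reports -- the idea being that a streak of missed messages marks the
    edge of usable range."""
    reports = sorted(reports)
    best = 0.0
    for (t0, d0), (t1, _) in zip(reports, reports[1:]):
        best = max(best, d0)
        if t1 - t0 > max_gap_s:  # messages stopped arriving: out of range
            return best
    return max(best, reports[-1][1]) if reports else 0.0

# Aircraft heard continuously out to 15 nm, then a long silence:
print(max_reliable_range([(0, 10), (1, 12), (2, 15), (60, 40)]))  # → 15
```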

I wrote a script called sitetotals which extracts Total Aircraft and Total Positions from the stats web page and renders them as a list of numbers. This means numbers can be extracted from any site and processed to do this kind of comparison.
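For illustration only — I don’t have the sitetotals source, and I don’t know the real stats page’s markup, so both the HTML snippet and the `extract_total` helper below are hypothetical — an extraction of this kind could be sketched as:

```python
import re

# Hypothetical stats-page HTML -- the real page's layout will differ,
# so both this snippet and the pattern below are illustrative only.
html = """
<td>Total Aircraft</td><td>1,234</td>
<td>Total Positions</td><td>56,789</td>
"""

def extract_total(page: str, label: str) -> int:
    """Pull the number out of the cell that follows the labelled cell."""
    m = re.search(rf"<td>{re.escape(label)}</td><td>([\d,]+)</td>", page)
    if not m:
        raise ValueError(f"{label} not found in page")
    return int(m.group(1).replace(",", ""))

print(extract_total(html, "Total Aircraft"))   # → 1234
print(extract_total(html, "Total Positions"))  # → 56789
```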

The next stage is to write some maths to do these tests. This is what I’ve tested, albeit only in a spreadsheet so far and only with two sites, but it worked quite well. I welcome your (collective) thoughts on this approach.

The goal is to be able to compare a site against the average of a number of other sites in order to determine whether a site change (gain, antenna, etc etc) has had a positive or negative effect. Using the average of a number of other nearby sites is designed to minimise the risk that a single site’s anomalies affect the score. Meanwhile, if aircraft and/or position numbers are up or down, all sites see that and the effect of that cancels out. So, in the end, only site changes are seen, within a margin of error.

So, using just Aircraft Totals for simplicity (but it would be the same approach for Position Totals), and using just two days (but it could be done in an array for pairs of consecutive days for the whole month shown on the stats page):

  1. Enter the site URL of my site to be measured ––> A

  2. Enter the site URLs of 5 other nearby sites ––> B, C, D, E, F

  3. Fetch the Aircraft Totals for all the sites and extract and store yesterday’s value (x) and the day before’s value (y). These are fixed values (new day starts at 0000 UTC) ––> Ax, Ay, Bx, By, Cx, Cy, etc

  4. Average Bx, Cx, Dx, Ex, Fx as Vx. Average By, Cy, Dy, Ey, Fy as Vy. The B-F values can now be discarded; we will now be comparing my site A against “virtual site” V, which should minimise the effect of any one particular site on the results.

  5. Calculate A rolling ratio ––> Ar = Ax / Ay. This is “How did my site perform yesterday compared to the day before?” That could be caused by anything – site changes, aircraft numbers, day of week, etc

  6. Calculate V rolling ratio ––> Vr = Vx / Vy. “How did the virtual site perform yesterday compared to the day before?”. Same comments again.

  7. Calculate the relative site performance, which is the ratio of the two previous ratios ––> P = Ar / Vr. This is “Did my site perform better, worse, or the same as the virtual site yesterday compared to the day before?” Consider P < 0.98 to be worse, P = 0.98 to 1.02 to be the same, and P > 1.02 to be better. I found these thresholds work well as a starting point.

If any individual B-F site fluctuates, the effect is smoothed out in V (eg to prevent my site seeming to have increased and then decreased just because one site had, and then fixed, a problem). As aircraft numbers change they affect all the sites, so you might find that A and V both increased 25% yesterday compared to the day before, which equates to a P of 1.00, ie no change in site performance. If you tweak your antenna and V decreased by 7% while your site decreased by 7.5%, that will give a P below 1.00, ie site performance has decreased.

The maths is the same as (Ax * Vy) / (Ay * Vx), but the above shows the logic more simply.
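The seven steps above can be sketched in code. This is my reading of the procedure, not the spreadsheet itself, and the figures are invented:

```python
def site_performance(ax, ay, others_x, others_y):
    """Relative performance P of site A against a 'virtual site' V,
    per steps 4-7: V averages the nearby sites, then P = Ar / Vr."""
    vx = sum(others_x) / len(others_x)  # virtual site, yesterday
    vy = sum(others_y) / len(others_y)  # virtual site, the day before
    ar = ax / ay                        # A's rolling ratio
    vr = vx / vy                        # V's rolling ratio
    return ar / vr                      # same as (ax * vy) / (ay * vx)

def interpret(p, low=0.98, high=1.02):
    """Map P onto worse / same / better using the suggested thresholds."""
    if p < low:
        return "worse"
    if p > high:
        return "better"
    return "same"

# Invented Aircraft Totals for five nearby sites B-F:
others_y = [900, 1100, 1000, 950, 1050]   # the day before
others_x = [990, 1210, 1100, 1045, 1155]  # yesterday: all up 10%
# Site A rose 15% while overall traffic rose only 10%:
p = site_performance(ax=1150, ay=1000, others_x=others_x, others_y=others_y)
print(round(p, 3), interpret(p))  # → 1.045 better
```

Note how the general 10% traffic rise cancels out in V, leaving only A’s extra 5% as a P above the 1.02 threshold.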

This all works well in a spreadsheet, but there will be some interaction between the number of sites needed for V and the thresholds used to interpret P in the context of changes made to A.

I guess the alternative approach is to use an antenna splitter and another Pi and dongle which remains at a fixed config, and use that as the source for V. Simpler, but requires inserting a small loss into A (and more biscuits/choc tins 🙂).


One additional tweak – you need to subtract UAT 978 MHz Aircraft from the totals. Assuming you are comparing 1090 MHz receivers.

Good point, I’m in the UK so didn’t consider this. Unless aircraft are counted twice (both transponders at once?) perhaps this would not matter anyway since it boils down to being another site running at some level of performance. I think I’d want to remove it anyway to be sure.
