Thoughts on optimizing gain

It’s 12 KByte gzipped vs 32 KByte uncompressed for a day of data.

So it doesn’t really matter either way; 14 MByte in total shouldn’t be a problem.
(that’s with data older than 432 days being deleted)

Probably not worth compressing then, and keeping the files as they are simplifies dealing with them later.
You could always gzip up and archive old data rather than delete it.
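Something like this would do it (just a sketch, assuming the daily files live in /var/lib/graphs1090/scatter as mentioned below, and reusing the 432-day figure from above):

find /var/lib/graphs1090/scatter -type f -mtime +432 -exec gzip {} +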

I’m pretty sure for plotting the current data are sufficient.

Anyway, if you decide to add more columns to the file that shouldn’t be a problem, just don’t change the order of the existing columns :wink:

You are welcome to test the modification to graphs1090 in a couple of minutes, once GitHub’s zip file has caught up.

The files will be in /var/lib/graphs1090/scatter and will have only their date as the name.

To generate the file for yesterday, just run sudo /usr/share/graphs1090/scatter.sh if you want to test it.

To plot a week’s worth of data you’ll have to concatenate the files for the dates in question, which shouldn’t be a problem.

If you want the last 7 days in one file, you can do it like this:

cd /var/lib/graphs1090/scatter
cat $(for i in {1..7}; do date -I --date=-${i}days; done) > /tmp/last7days
cat $(for i in {8..14}; do date -I --date=-${i}days; done) > /tmp/7days_before_last7days

Then just plot the file as usual.
You could even let the user specify the ranges to be compared.
But if you want a comparison at all, the last 7 days against the 7 days before that seems like a good default.

It has updated and seems to be working fine. I’ll try some different plotting options to see what works well.

You can actually merge the files within gnuplot directly, so it shouldn’t be necessary to create temporary files to do it. The main difficulty is navigating the gnuplot documentation, which seems to have been written like one of those old-style choose-your-own-adventure books. You follow a load of links only to find that the switch or command you are looking at does something completely different to what you thought.
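For example, something along these lines should work (just a sketch: the file locations come from the scatter directory above, while the terminal choice and the columns in the using clause are assumptions):

# gnuplot can read from a shell pipe via its "< command" syntax,
# so the concatenation happens inside the plot command itself
gnuplot <<EOF
set terminal pngcairo size 1280,720
set output '/tmp/merged.png'
plot "< cat /var/lib/graphs1090/scatter/$(date -I --date=-2days) /var/lib/graphs1090/scatter/$(date -I --date=-1day)" using 1:2 title 'last 2 days'
EOF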

This will generate the files for the last 7 days so you have some data to work with.
(older data isn’t stored at the same granularity, but you could use the command to create even more past days at reduced granularity)

for i in {1..7}; do sudo /usr/share/graphs1090/scatter.sh $i; done

Oh, you’ll have to update graphs1090: I just gave scatter.sh a command line switch, which is used in that command :slight_smile:

To be honest I would just make a temporary file by concatenation instead of using the gnuplot options.
I don’t think digging through the gnuplot options is worth the trouble.

You also don’t know how gnuplot will react if some files are missing.
Concatenating with cat will just print an error for missing files and concatenate the rest.

That is good timing, as I was just looking at doing something like that to get data. Saves me the trouble, thanks.

Wow, I just looked at the file sizes after getting some more historical data:

for i in {1..40}; do sudo /usr/share/graphs1090/scatter.sh $i; done

I noticed that I had totally forgotten to make the fetch command actually fetch the correct day :slight_smile:
The argument only changed the filename :keyboard: :face_with_head_bandage:

Anyway, I’ve since fixed it.
Also, each day was overlapping the next by 1 data point, because fetching a full day includes both boundary samples; so I changed the start to end-1439m (1 min less than 1 day).
Now it seems solid.
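Roughly like this, assuming the data are pulled with rrdtool fetch (the variables here are placeholders; the script’s actual invocation may differ):

# fetching a full day (start = end-1d) returns both boundary samples,
# so consecutive days would share one data point;
# starting at end-1439m drops the duplicate
rrdtool fetch "$rrd_file" AVERAGE --end "$day_end" --start end-1439m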

:smile:
I had pulled some files and thought it odd that they were all the same size for the period - I hadn’t got to the point of using them and finding out why, though.

Here’s a test version that compares the last week and previous week:

I’m not really happy with the palette on the scatter plot because it makes it hard to tell the two data series apart, however I haven’t worked out if it’s possible to assign a different palette to each one yet. Possibly choosing different point styles will work OK, or I might just keep the range colouring for one of them and give the other a flat colour.
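If separate palettes turn out not to be possible, the fallback could look something like this (just a sketch: the file names come from the cat commands earlier, and it assumes column 3 holds the value used for the range colouring):

gnuplot <<'EOF'
set terminal pngcairo size 1280,720
set output '/tmp/compare.png'
# keep the range colouring for the recent week, flat grey for the older one,
# with different point types so the two series stay distinguishable
plot '/tmp/last7days' using 1:2:3 with points pt 7 ps 0.5 lc palette title 'last 7 days', \
     '/tmp/7days_before_last7days' using 1:2 with points pt 6 ps 0.5 lc rgb 'gray50' title 'previous 7 days'
EOF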


Moving the plot to /run/dump1090-fa/ means it gets deleted every time dump1090-fa is restarted, e.g. when the gain is altered.
After a restart one has to run caius_scatter_plot.sh manually.
I suggest putting the plot file graph.png somewhere else.

The plot isn’t refreshed anyway.
You execute the script to get one picture.
After you have viewed it, you can save it to your main computer if you want to keep it.

It’s just a hack to quickly view the image, not to keep it around.

OK, fair enough.
I run the script from a cronjob at midnight and link to the graph from the portal, hence my suggestion.
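For reference, that setup is roughly this (a sketch only: the script location and the web directory the portal serves are assumptions, adjust to your install):

# crontab entry: regenerate the scatter plot at midnight and copy it
# to a directory the portal's web server can serve
0 0 * * * /usr/local/bin/caius_scatter_plot.sh && cp /run/dump1090-fa/graph.png /var/www/html/graph.png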

That’s something I’d consider an install, and I would call it that.

Anyway, just download the script and modify it.
At the very end you can simply change the destination directory :slight_smile:

Maybe I’ll just make the script part of the graphs at some point.
Or a separate install, but one requiring the graphs.

Some changes made during the last week have apparently broken the scatter plot.
Last week it looked like this:


When I run the script now it doesn’t stop; after 5 minutes I killed it.
It seems to have looped and started over (after 10k iterations).

Just press S to stop the fit.

Mathematically, the fit is not converging: the difference between the approximating line and the values isn’t being reduced below the limit.
The formula I used for the fit is a little tricky and might be improved.


I’ve adapted the fitting process; maybe try again in 5 minutes when GitHub has updated.


Doesn’t work, since I run it directly from your GitHub wiki.
If I download the script and try to run it locally I get various errors.

I ran the gnuplot calculations directly via the command line, used ^C to interrupt and then S to stop the fit. That produced a corrupt graph (0 bytes in size).

I’ll see if I get time to do some serious debugging tonight; right now I have a date with the brushcutter and a couple of acres of weed (not the smoking kind…). :upside_down_face:

The problem will likely be this line:

FIT_LIMIT = 1.e-14

You can either remove it, or change the value to 1.e-5, which is the default. This parameter controls how close the fit has to get before it stops iterating, and obviously it is unable to get within that limit for the data you currently have. It shouldn’t really affect the graph too much, so it’s probably better if I just get rid of it, or wiedehopf can do that if he sees this before I get a chance.
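In script terms the change is just this (a sketch: the data file and the model f(x) are placeholders, not the script’s real formula):

gnuplot <<'EOF'
FIT_LIMIT = 1e-5              # back to the default convergence limit
f(x) = a*x + b                # placeholder model for illustration
fit f(x) '/tmp/last7days' using 1:2 via a, b
EOF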


Sure, I can remove that.

Ah yeah, it goes faster for me as well.

I’ve also included a second fit pass that runs first with only 2 parameters, so the starting values are closer for the full fit.
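Something in this spirit (a sketch: g and f are placeholder models, the script’s actual formulas differ):

gnuplot <<'EOF'
# coarse two-parameter fit first, to get reasonable starting values
g(x) = a*x + b
fit g(x) '/tmp/last7days' using 1:2 via a, b
# then the full fit, reusing a and b and giving the new parameter a start
c = 0.5
f(x) = a*x + b + c*x**2
fit f(x) '/tmp/last7days' using 1:2 via a, b, c
EOF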

That limit is the relative change in the chi-squared value between iterations; the fit stops once the change drops below it. The algorithm is a bit naive, in that it will get stuck on a local minimum if it finds one, and it can sometimes start oscillating around a particular value without the difference getting small enough for it to recognise it is stuck.

Lower values give a better fit, but are more prone to this problem. You can also sometimes fix it by changing the starting coefficients, but for this purpose going back to the less strict default limit should resolve most of these issues and not affect the output enough to be a problem.