If PiAware has been configured to allow FlightAware to remotely upgrade and send commands to PiAware, could you add radio buttons to the web interface to:

1. Have the server automatically send the commands to restart dump1090 or PiAware if one of those modules has stopped working.
2. If (1) doesn't fix the problem, have the server initiate a remote reboot.
This assumes the server still has sufficient communications to issue the commands.
If PiAware has stopped, we can't send commands.
If dump1090 has stopped feeding data, the local piaware will automatically try to restart it after a while.
While it's not automated, you can send reboot commands manually from the website if you think the system has wedged. I'm a little reluctant to automate reboots on systems that aren't maintained by FlightAware.
If we're running a FlightAware standard image and we've requested that you remotely update the software, then you are 90% maintaining the station.
If we tick a checkbox to say "please attempt a restart if data stops coming through", it just removes the 8-10 hour delay (6 hours to send the notification, 4 hours for me to wake and see it) when "the radio" has wedged.
If dump1090 has stopped feeding data, the local piaware will automatically try to restart it after a while.
I'd like it if there were a standard watchdog to restart all the standard modules; if that didn't work, do a reboot, with the final fallback of a hardware watchdog reset.
I've read quite a few reports and experienced it myself that dump1090 will just crash when a wedge occurs and the regular restart attempts by piaware don't work.
… which is why I was asking if something could be put in the software that could be enabled once a feed has been established, to attempt the following:
1. Restart the PiAware software.
2. Restart the Pi.
3. Have a hardware reset set up so that if PiAware doesn't keep updating a timer counter, it will hard-reset the Pi.

Steps 2 & 3 could be selectable by flags on the FlightAware account (like "automatically update to latest version software"), and don't get enabled until PiAware is up and running (so the Pi doesn't end up in a reboot loop on a comms failure).
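For what it's worth, the hardware-reset part of step 3 can already be wired up on a stock Raspberry Pi OS install with the `watchdog` package, which feeds the Pi's on-chip watchdog. A minimal `/etc/watchdog.conf` sketch (the values here are illustrative assumptions, not anything FlightAware ships):

```
# /etc/watchdog.conf - arm and feed the Pi's on-chip watchdog
watchdog-device  = /dev/watchdog
watchdog-timeout = 15          # the Pi's hardware timer tops out around 15 s
# optionally trigger a reset if the network interface vanishes
interface        = wlan0
```

If the daemon stops feeding the device (kernel panic, total lockup), the hardware resets the Pi with no clean shutdown, which is exactly the last-resort behaviour step 3 asks for.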
I know it's been a while, but has anyone put any thought into doing this, or putting a script onto the standard image to configure the "bullet proof" feeder?
Presently I've got FlightAware set to notify me if my feeder is offline for 6 hours - the usual notification arrives after 10-12 hours.
I would really like to have a watchdog that is activated once feeding has commenced; then, if it fails:

1. Restart PiAware.
2. Reboot the Pi.
3. Use the hardware watchdog to restart the Pi (if all else fails).
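The escalation in steps 1 and 2 could be sketched as a small shell script. This is only a sketch, assuming the systemd service names used on the FlightAware image ("piaware" and "dump1090-fa"); the thresholds and check interval are arbitrary choices. Step 3 (the hardware watchdog) is configured separately and only fires if this script itself stops running.

```shell
#!/bin/sh
# Escalating recovery sketch: restart the feeder first, reboot only
# if repeated restarts don't bring it back.

# Map a count of consecutive failed checks to an action.
decide_action() {
    fails=$1
    if [ "$fails" -eq 0 ]; then
        echo "ok"
    elif [ "$fails" -lt 3 ]; then
        echo "restart-piaware"      # step 1: restart the feeder software
    else
        echo "reboot"               # step 2: reboot the Pi
    fi
}

check_loop() {
    fails=0
    while :; do
        if systemctl is-active --quiet piaware; then
            fails=0
        else
            fails=$((fails + 1))
        fi
        case "$(decide_action "$fails")" in
            restart-piaware) systemctl restart piaware dump1090-fa ;;
            reboot)          reboot ;;
        esac
        sleep 60
    done
}

# Only run the loop when invoked with "run", so the file can be
# sourced for testing without touching any services.
if [ "${1-}" = "run" ]; then check_loop; fi
```

Checking `systemctl is-active` only proves the service is running, not that data is flowing; a stricter check could watch dump1090's output instead.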
I think I'm configured to do nearly this, but can't test easily.
I've got a Pi that has been offline for a month, and I won't be onsite to poke it for another month, so I too would like improved "recovery" options.
However, I'd like to point out that you can't put "bullet proof" monitoring into software, because it relies on the Pi still being able to execute the code.
The option of "bullet resistant" would be desirable, especially on remote sites.
Mmh, I think my RPi stopped working only once in more than two years, and then I could not ssh into it, so I guess everything else had stopped working too and I had to unplug it to reboot. Not sure anything on the software side could have prevented that.
What I do once a year is put in a new SD card, with ample size for wear levelling, from a known brand like Samsung or SanDisk. I also have a spare ready so that it can easily be substituted in case it breaks while I am away.
A lot of problems can be traced back to an insufficient power supply; cheaper ones and those made for charging phones are known to cause trouble. If the troubles come from the power supply side, maybe a UPS could help; I think I remember seeing UPS HATs for the RPi.
I read here that some people used cron to reboot their Pi each day, or rather each night; not sure what the verdict was then, good or bad.
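For reference, the nightly-reboot approach described here is just a root crontab entry along these lines (the time of day is arbitrary):

```
# `sudo crontab -e`: reboot every night at 03:30
30 3 * * * /sbin/shutdown -r now
```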
I think it's generally considered a poor kludge.
You are better off finding and fixing the problem that requires a daily reboot.
For your system, you would have had over 700 unnecessary reboots for your one failure.
The most likely failures are that the WiFi link goes down, or that the kernel panics and stops. So what is needed is a loop that detects whether a repair / restart is needed, and a preceding loop to test that the system - following a startup - has reached the correct condition to start the main loop.
A good startup loop might test for the WiFi link being up; if it is, set a flag to start the main loop, otherwise sleep 5 seconds before retesting.
The second loop process enables the watchdog timer in the preamble, then:

1. Sets the timer to, say, 30 seconds.
2. Tests the WiFi.
3. If it's OK, sleeps 20 seconds, then repeats setting the watchdog and testing the WiFi.
4. If the WiFi is down, restarts the Pi.
While this is happening, the watchdog in hardware is counting down; if it reaches zero (software panic), it will perform a hard reboot, with no shutdown.
The hard watchdog should be rarely needed, if ever, and I believe piaware already sets the watchdog, but that only triggers if the Pi really locks up, which should be rare.
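The two loops described above can be sketched in shell. This is a minimal sketch, not a tested implementation: the interface name (wlan0) and watchdog device (/dev/watchdog) are the usual Raspberry Pi defaults but should be treated as assumptions, and the timer value actually used must be within what the Pi's hardware supports.

```shell
#!/bin/sh
# Two-loop WiFi watchdog sketch: a startup loop that waits for the
# link before arming anything, and a main loop guarded by the
# hardware watchdog.

wifi_up() {
    # `ip link show` reports "state UP" while the link is up
    ip link show wlan0 2>/dev/null | grep -q "state UP"
}

startup_loop() {
    # Don't arm anything until WiFi is up, so a Pi that boots
    # without connectivity can't fall into a reboot loop.
    until wifi_up; do sleep 5; done
}

main_loop() {
    # Keeping /dev/watchdog open arms the hardware timer; each write
    # resets the countdown.  If this process (or the kernel) wedges,
    # the timer expires and the Pi hard-resets with no clean shutdown.
    exec 3> /dev/watchdog
    while :; do
        echo . >&3              # pat the watchdog
        if wifi_up; then
            sleep 20            # stay well inside the watchdog timeout
        else
            reboot              # soft recovery first: an orderly reboot
        fi
    done
}

# Only run when invoked with "run", so the file can be sourced safely.
if [ "${1-}" = "run" ]; then startup_loop; main_loop; fi
```

Note that once /dev/watchdog has been opened, simply killing this script will not disarm the timer on most configurations, so the hard-reset fallback still works even if the script itself dies.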