The VM mlat problem was definitely a USB issue not a UDP issue when I looked at it at the time. UDP is making it out fine. The VM is just dropping a lot of the raw SDR data on the floor (IIRC it was something like 10-15%, in bursts). Enough sample data gets through that reception “works”, but the frequent drops completely ruin mlat timing which relies on being able to reliably count the number of samples received, so you never get clock sync.
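A toy sketch (not mlat-client's actual code, just an illustration) of why this is fatal: the receiver's clock is effectively "samples received so far ÷ sample rate", so every silently dropped burst shifts all later timestamps.

```python
SAMPLE_RATE = 2_400_000  # 2.4 MS/s, a common RTL-SDR rate (assumed for illustration)

def message_time(samples_seen: int) -> float:
    """Receiver-side timestamp: position in the sample stream, in seconds."""
    return samples_seen / SAMPLE_RATE

# Ideal receiver: a message arriving 1 s after start sits at sample 2,400,000.
ideal = message_time(2_400_000)

# Same message after the USB layer silently dropped a 10 ms burst
# (24,000 samples) earlier in the stream: the counter is short,
# so the timestamp comes out 10 ms early.
lossy = message_time(2_400_000 - 24_000)

skew_us = (ideal - lossy) * 1e6
print(f"timestamp skew: {skew_us:.0f} µs")
```

Multilateration needs timestamps good to roughly a microsecond to be useful, so a skew four orders of magnitude larger than that, arriving in unpredictable bursts, makes clock sync hopeless.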
Not sure how much has changed in the past few years, but when I was processing live .ts/MPEG streams from satellite through USB in a VM, Oracle's VirtualBox made a complete mess of the USB stack; it was totally unusable for what I needed it for (Win7 host). I moved to VMware Workstation (still use it) and it does a much better job of not randomly dropping packets. I have not tried running FA through VMware, but it may be worth a shot. FWIW anyhow.
I put the whole project aside for a few days because (a) I had a lot of other things to do and (b) the piaware package was giving me a lot of grief. Yesterday I took it up again and by now all the rpms build cleanly. This doesn’t mean that they work; of course they don’t and now I’m in the debugging phase. Things like
```
openat(AT_FDCWD, "/usr/lib64/tcl8.6/tclx8.4/autoload.tcl", O_RDONLY) = 5
fcntl(5, F_SETFD, FD_CLOEXEC)           = 0
ioctl(5, TCGETS, 0x7fff1c06d9d0)        = -1 ENOTTY (Inappropriate ioctl for device)
read(5, "#\n# Modified version of the stan"..., 4096) = 2272
read(5, "", 4096)                       = 0
close(5)                                = 0
access("/usr/share/tcl/piaware/main.tcl", R_OK) = -1 ENOENT (No such file or directory)
write(2, "piaware: can't read '/usr/share/"..., 67piaware: can't read '/usr/share/tcl/piaware/main.tcl' (tcllauncher)) = 67
write(2, "\r\n", 2)                     = 2
close(4)                                = 0
exit_group(254)                         = ?
+++ exited with 254 +++
```
because that tcllauncher specfile that I swiped from SuSE moved tcllauncher’s basedir from /usr/lib to /usr/share. Anyway, all the FA packages and their external dependencies are here and anyone who wants to help debug is most welcome. All you need is a CentOS 8 VM. Run this as root
```
dnf install epel-release
cat << EOF > /etc/yum.repos.d/provocation.repo
[provocation]
name=provocation-$releasever
baseurl=http://www.provocation.net/rpms/el8/rpms
gpgcheck=0
enabled=0
EOF
```
and then install stuff with
```
dnf --enablerepo=provocation --enablerepo=epel install <package name>
```
Do not enable the provocation repo by default, because it contains other stuff that conflicts with epel.
The first thing is to iron out pure packaging bugs like the one above. No radio hardware is needed for that. When all the basic applications (tcllauncher, piaware, dump1090, mlat-client) can actually start, run and communicate with each other, we’ll see about them also doing some actual work. I’ve got a new SDR dongle on its way from the UK for that purpose.
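For anyone joining the debugging, a couple of commands make this class of path bug quick to spot (the paths are taken from the strace output above; the package name is whatever the repo installs, so treat the exact invocations as illustrative):

```shell
# Where did the rpm actually put main.tcl, vs. where tcllauncher looks for it?
rpm -ql piaware | grep main.tcl

# Watch piaware's file lookups fail in real time
strace -f -e trace=openat,access piaware 2>&1 | grep -E 'main\.tcl|ENOENT'
```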
This thread is meant for general discussion, not as a bug tracker, so I set up a proper bug tracker here. With a bit of luck and a bit of help we might soon have a nicely packaged x86_64 FA application family.
So this means we can’t run ARM binaries on an x86 VM host unless this problem is overcome. How long ago did you do those tests? Did you test any other virtualisation environments, say KVM/qemu or VMware? Do you have any hunch whether the host OS could have anything to do with the packet loss, i.e. whether, all other things being equal, running the VM on OS/3 might have worked out better than running it on OS/1?
Well… you wouldn’t want to do that anyway (at least for dump1090/dump978); you would need a beefy CPU to emulate ARM at the speed needed to keep up with real time.
A year ago, maybe? It’ll be on these forums somewhere.
Wasn’t my test or VM, I just diagnosed the problem from the mlat server side.
The above is outdated. Here is the latest (Piaware 4.0).
The known VM USB issues are not related to the piaware version but connected to the VM / hypervisor settings / version.
So it’s pretty safe to say that upgrading to the newest piaware version won’t change whether MLAT works.
I guess my comments are off topic.
I’ll just delete them.
Your comments were very relevant and informative. I’m sorry you deleted them.
I mentioned the link to the latest howto so that, in case the OP sees your success and wants to install piaware on VMware, he can use the updated howto.
It’s fun to see in these discussions how we constantly come back to the subject of technology vs psychology. I think: running in a VM makes it possible to test and fool around before committing money. You think: but it won’t work properly anyway, so what’s the point? We’re both right, but we’re not on the same page.
I started by pulling an old RPi 1B from under a thick cover of dust, used an SD card that was lying around and only bought a stick and the absolutely cheapest antenna I could find (€30 together). I set it up and it worked, but not well; I had bad reception, MLAT didn’t work (“unstable clock”, which I have now gleaned can mean dropped data), and the RPi ran hot at 100% CPU. In short: a lousy result.
What happens when someone with an interest in technology is confronted with technology that doesn’t work well? He gets an urge to fix it. That’s what happened to me, so I went out and got a new RPi 4B, a long length of good quality coax, an assortment of connectors, a 4.5m antenna mast, a soldering iron and solder, clamps, bolts, a cutter, a Stanley knife, a wrench, mains cable and plugs… all in all pretty close to €200, if not over. What gets the credit (or more accurately: the blame) for this expenditure? The technology that didn’t work properly. I certainly wouldn’t have spent this money up front for just an experiment; it was the experiment that went almost well, but not well enough, that pushed me further.
So you can read this train of thought between the lines when I speak of VMs and x86 rpms and all that stuff. Anything that can allow people to play and fool around is good, even if it doesn’t work perfectly. Getting ADS-B data is a Good Thing™, even if MLAT doesn’t work. Getting some data out of an area without any coverage is always better than getting no data at all. I look at FA’s coverage map and see a saturation of piaware receivers in northern Europe and the US, such that it might even reach a negative cost:benefit ratio in terms of the infrastructure needed to process all that data, and then huge black patches in Africa, Asia, Canada and South America, only dotted occasionally by some lonely flightfeeders. Those are areas where, already because of bad internet connections and the absence of nearby receivers, MLAT will never work. So why brush off a lousy VM half-measure, when a proper rig wouldn’t work better anyway? And if it doesn’t keep up with real time and starts dropping data, it will still get enough ID and lat/long data to produce a dotted track, which is better than no track at all. And that might then trigger the purchase of an RPi or a request for a flightfeeder, at which point the bad experiment will have fully served its purpose. Or so I tend to think.
My point was more “why not just use x86 binaries on an x86 VM?”
Given a Debian install on an x86 VM, it is very straightforward to build x86 binaries.
You certainly could run ARM binaries on an emulating VM, but… why do that when, for about the same effort, you can run it natively? Not going to stop you from doing silly things, but that doesn’t make them any less silly.
Ah, in that case I misread you, sorry. Even though I also think that ‘apt-get install piaware’ would offer a much lower threshold than ‘git clone piaware; make etc’.
I didn’t even get to read them; they were gone before I arrived here today. Would you please consider flagging them for resurrection?
Is there something like mock for .deb-based systems? If there is, it would be easy to produce clean debs for different systems and make a repo of all of them.
For the benefit of those too lazy to follow and read the link: mock creates a minimal system in a chroot, and uses it to build packages in a fully automated way. The chroot system does not need to be the same as its host system, so you can build clean fedora32 rpms on centos5 and vice versa (in fact you can also use it to build fedora and centos rpms on debian). The chroot is cleaned after each build, so there can be no unintended linking by Makefiles to stuff that was not foreseen by the build’s specfile. But mock is yum/dnf-based and 100% rpm-oriented.
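For anyone who hasn’t used mock, a typical invocation looks something like this (the available chroot config names vary with the mock version installed, so treat the config name here as illustrative):

```shell
# Rebuild a source rpm inside a clean, throwaway CentOS 8 chroot,
# regardless of what distro the host is running
mock -r centos-stream-8-x86_64 --rebuild dump1090-fa-*.src.rpm

# The resulting binary rpms and build logs end up under
#   /var/lib/mock/centos-stream-8-x86_64/result/
```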
cowbuilder is what we use. There are probably others.
This is indeed a big advantage of building within a clean environment. However the problem that abcd has is not due to an unclean environment - it’s because the dependent packages that were used to do the build on one distro variant are not available on another distro variant.
Just run the following two bash commands and they will do everything (i.e. clone the source, install the dependencies, build the packages and install them). Copy-pasting a bash command is about as easy as it gets.
Successfully tested on:
- Debian 10.6 amd64
- Ubuntu 20.04 amd64
- Kali 2020 amd64
```
sudo bash -c "$(wget -O - https://raw.githubusercontent.com/abcd567a/piaware-ubuntu20-amd64/master/install-dump1090-fa.sh)"
sudo bash -c "$(wget -O - https://raw.githubusercontent.com/abcd567a/piaware-ubuntu20-amd64/master/install-piaware.sh)"
```
My dump1090-fa specfile says ‘BuildRequires: ncurses-devel’, FWIW.
There should be a premium for harmonising package names across distros (and some mild punishment for not doing so; quartering by horses comes to mind).
Just messing around in my VMware and got the following. I initially did it manually as I always do, but figured I had missed something silly, so I went the script route; same error:
I have also tried building libusb and librtlsdr from source (as I always do) and got the same error on those attempts. FYI, latest Debian 10.6.0 amd64
Underlying libc and libusb are installed
I first tested the bash script 18 days ago on Debian 10.6, then again last week when I made a fresh install of Debian10.6, and it worked ok both times.
Ok, tonight I will make a fresh install of Debian 10.6 on Oracle VM, run the bash script, and let you know the outcome.
Am I allowed to enjoy this? I consider myself old, but my workplace really wants to move everything into Docker because it’s en vogue, whether the app is written as a microservice or not. (Mostly not, of course.) Docker isn’t for sandboxing but a logical extension of chroot. I “docked” my then employer’s Java applications into multiple chroots in the early 2000s myself and saved them millions, as well as the disgrace of running server-side Java on Windows. Then Sun (remember Scott McNealy?) wrapped chroot into Solaris 9 and tried to sell it as alternative virtualization, before officially calling the technology “containers” in Solaris 10.
More seriously, there are discussions about the existing docker image (I think for multi-feeder) being problematic, too. I will study the USB issues more. (I’ve never used anything over USB in a VM.)