Greetings:
This has been discussed on a couple of other threads, but I wanted to bring it up again on its own thread.
Is anyone here using ASL3 for large-scale deployment, i.e., single nodes holding 100 or more connections without issue?
In September of 2024, I did a test on a system with plenty of resources and bandwidth, and due to issues with how telemetry was being handled, it pretty much bombed out after about 85 connections.
Since then, there have been lots of code optimizations to app_rpt, but I have not had the opportunity to do that sort of test again.
I know that some of the big networks (East Coast Reflector, Fireside, and others) are still using HamVoIP, which truncates some of the telemetry it passes on to all the other connected nodes.
To increase the load capacity of a single HamVoIP node a little, I have used multiple internal nodes connected together, with some dial plan logic that does round-robin-style routing for new connections. That helped some, but for several reasons that system needs to be migrated off HamVoIP.
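For anyone curious, the internal load balancing looks roughly like the sketch below, written as a least-loaded variant of the round-robin idea using Asterisk's GROUP()/GROUP_COUNT() channel-group functions. The node numbers (1999, 2999) and the context name are placeholders for illustration, not my actual configuration, and the exact Rpt() invocation varies between installs:

```
; extensions.conf (sketch) -- steer each new inbound connection to
; whichever of two permanently-linked internal nodes currently has
; fewer channels. Node numbers 1999/2999 are hypothetical.
[radio-secure]
exten => _X!,1,Set(TARGET=${IF($[${GROUP_COUNT(n1999)} <= ${GROUP_COUNT(n2999)}]?1999:2999)})
 same => n,Set(GROUP()=n${TARGET})
 same => n,Rpt(${TARGET})
```

Because the internal nodes stay linked to each other, a caller lands on either node and still hears the same system; the dial plan just spreads the per-connection work across them.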
It hosts a daily net that regularly gets well over 100 direct connections to that one node, plus more from nodes hanging off of the connected nodes.
Previously, I couldn’t do as well with a single ASL3 node running on an older quad-core Xeon workstation with 64GB RAM on a multi-gigabit pipe as I could with HamVoIP on a Raspberry Pi 4 with some internal load balancing. Hopefully, things have improved since then.
Prior experimentation shows that a Raspberry Pi 4 running HamVoIP chokes right around 141 node connections. At that point, it’s CPU-bottlenecked and just can’t go any further.
Has anyone here actually figured out what the practical maximum connection count is with ASL3 on a decent x86-64 system with plenty of headroom, before running into a bunch of packet loss? I know this will depend on a few factors, mostly CPU and network, but are there any general benchmarks on this anywhere?
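On the network side, at least, a rough floor is easy to estimate. Assuming ULAW audio at 20 ms framing over IAX2 (160-byte payload, about 4 bytes of IAX2 mini-frame header plus 8 bytes UDP and 20 bytes IPv4 per packet), each outbound stream costs roughly 77 kbps, so a node feeding 100 direct connections is pushing somewhere around 7.7 Mbps out, plus inbound. These are my own back-of-envelope assumptions, not measured numbers:

```python
# Back-of-envelope bandwidth estimate for one app_rpt node.
# Assumptions (not measured): ULAW @ 8 kHz, 20 ms packets, IAX2
# mini-frame header ~4 bytes, UDP 8 bytes, IPv4 20 bytes. Real
# overhead varies (full frames, retransmits), so treat as a floor.

def stream_kbps(payload_bytes=160, overhead_bytes=4 + 8 + 20, pps=50):
    """One-way bandwidth of a single audio stream, in kbps."""
    return (payload_bytes + overhead_bytes) * pps * 8 / 1000

def node_outbound_mbps(connections):
    """Outbound bandwidth when the node repeats audio to every connection."""
    return connections * stream_kbps() / 1000

print(stream_kbps())            # ~76.8 kbps per stream
print(node_outbound_mbps(100))  # ~7.68 Mbps for 100 direct connections
```

Even at 141 connections that is only about 11 Mbps of outbound audio, which suggests the bottleneck in my tests was per-connection CPU work (telemetry handling included), not raw bandwidth.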