A Minimal ASL Node Without Asterisk Dependency (R&D)

Sharing something I’ve been working on in case it helps anyone. I recognize that this is a bit strange.

My goal has been to create a minimal implementation of the IAX2 protocol that will enable an ASL node that has no dependency on Asterisk, or even Linux for that matter. Ultimately, I’d like to put ASL into a microcontroller in a similar way to my EchoLink project.

I’ve got a working Python script that will accept a call from the AllStarLink Telephone Portal and will play a brief audio announcement. Creating this script has provided a lot of useful insight into how AllStarLink actually works under the covers. When you strip away all of the Asterisk stuff, it’s really not that complicated.
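To give a flavor of what “not that complicated” means: the heart of the protocol is a fixed 12-byte full-frame header on every UDP datagram. Here’s a minimal decoding sketch based on my reading of RFC 5456 (illustrative only - the field names are mine, and it ignores mini frames and meta frames):

```python
import struct

def parse_full_frame(packet: bytes):
    """Decode the fixed 12-byte IAX2 full-frame header (RFC 5456, sec. 8.1.1)."""
    scall, dcall, timestamp, oseq, iseq, ftype, subclass = \
        struct.unpack("!HHIBBBB", packet[:12])
    if not (scall & 0x8000):        # 'F' bit clear -> mini frame, not handled here
        return None
    return {
        "src_call":   scall & 0x7FFF,   # caller's call number
        "dst_call":   dcall & 0x7FFF,   # callee's call number
        "retrans":    bool(dcall & 0x8000),
        "timestamp":  timestamp,        # ms since start of call
        "oseqno":     oseq,             # outbound sequence number
        "iseqno":     iseq,             # inbound sequence number
        "frame_type": ftype,            # e.g. 2 = voice, 6 = IAX control
        "subclass":   subclass & 0x7F,  # 'C' bit (0x80) not expanded here
        "payload":    packet[12:],      # IEs or media, depending on frame type
    }
```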

The project is here for anyone who wants to review: GitHub - brucemack/hello-asl: A minimal implementation of an AllStarLink (ASL) server in Python to support ASL research and development.

3 Likes

Nice to see the start of something like this, and I look forward to seeing where it goes. In general I believe Linux is a very important foundation for a node, and that a Linux SoM platform (such as the one used in the M17 LinHT) will be the future of nodes, but I’m not convinced that Asterisk needs to be in the picture for most use cases.

Asterisk and its config files are a mess, as is much of the legacy stuff in app_rpt. It takes most people more time to figure out how to do something within those constructs than it would to just call a PHP, Python, or shell script. Duplicating all the needed functionality of Asterisk may not be easy, but in Asterisk that functionality seems to be opaque: buried in endless layers of packages, poorly documented, and next to impossible to change. ASL doesn’t use 95%+ of the features of Asterisk, and the majority of the features it does use, such as the channel drivers, could be ported to a new system fairly easily. We just need to make and authenticate some IAX connections, move some audio between USB and IAX with a few common codecs, and support monitoring and control.

ASL + Asterisk is probably millions of lines of code, and it seems to be a full-time job for multiple developers to maintain it. Maybe fixing the existing wheel would be a more practical use of everyone’s time than inventing a new one, but if you look at the size of the codebase and the time and work put into ASL, it kind of seems like an overly complex system. With that said, it does work well, and properly supporting all the required features in a new system could take years of work.

1 Like

Hi David,

Thanks for reading my stuff and thanks for the comments.

To avoid any confusion, I’ve got nothing against Asterisk (or Linux). Asterisk is an impressive piece of work that solves a lot of problems. The fact that clever hams have been able to build on top of Asterisk to make something as cool as AllStarLink shows that Asterisk is a solid foundation.

But I think we’re on the same page. I agree with your assessment about the tangle of configuration files and your guess that we (hams) aren’t using 95% of the Asterisk capability. From my R&D so far, it looks to me like most of the value in the ASL code is (a) the nice central registration/authentication/portal infrastructure and (b) the parts of app_rpt that deal with the radio interface (or maybe that’s the channel drivers - not sure of the right terminology). All of the stuff that negotiates IAX2 connections and passes VoIP packets around is actually pretty straightforward when you boil it down to the UDP packets that need to be created and interpreted.
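For example, acknowledging a received full frame just means packing those same 12 bytes and sending them back out of the socket. A rough sketch (constants per RFC 5456; real code needs proper call and sequence-number state tracking):

```python
import socket
import struct

AST_FRAME_IAX = 6    # frame type: IAX control
IAX_COMMAND_ACK = 4  # subclass: ACK

def send_ack(sock: socket.socket, peer, my_call: int, their_call: int,
             timestamp: int, oseqno: int, iseqno: int) -> None:
    """Reply to a received full frame with an IAX2 ACK.

    The ACK echoes the timestamp of the frame being acknowledged and
    carries no information elements, so the whole datagram is just the
    12-byte full-frame header.
    """
    header = struct.pack(
        "!HHIBBBB",
        0x8000 | (my_call & 0x7FFF),  # 'F' bit set + our call number
        their_call & 0x7FFF,          # peer's call number, 'R' bit clear
        timestamp,
        oseqno,
        iseqno,
        AST_FRAME_IAX,
        IAX_COMMAND_ACK,
    )
    sock.sendto(header, peer)
```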

My project at the moment is to fully understand ASL at the nuts and bolts level, mostly motivated by the enjoyment of learning. I’m not setting out to replicate Asterisk.

Today I got a C++ version of my protocol demo running on a $7 microcontroller (RP2040 Pico W) and I was able to host two calls in a conference - one IAX2 via the AllStar Telephone Portal and one SIP via a Polycom VVX410 handset. That gives me some confidence that this can work without Asterisk or Linux. Next I’m going to spend some time integrating a radio to see how that works out. I’ll reach out when it’s working and perhaps we can have a QSO. :slightly_smiling_face:

1 Like

I love to see stuff like this.

The SharkRF M1KE uses an ESP32-S3-WROOM-1 and has IAX2 support for AllStarLink. Of course, that code isn’t open-source, as it’s a commercial product.

Ultimately, I’d like to do fun things, like using higher sampling rates (not currently possible with the DAHDI dependency) to link nodes together. 16 kHz would make a noticeable difference with good hardware. Any more than that might be overkill for RF purposes, but why not? IAX2 is certainly capable of it, and even existing nodes can load the codecs, though everything is still limited to 8 kHz for now.

Asterisk, I’m sure, is great if you have a complex office phone system. Jim Dixon came from a telecom background and was looking to create a low-cost open-source telecom system 20 years ago, when just a phone-line interface card for a PC cost $500+, so it made sense at the time, and Asterisk will remain a powerful and flexible option for more complex use cases, e.g. if you’re running a repeater system with autopatch, voter inputs, etc. But ASL has since transitioned to being used primarily in small personal node devices, and 99% of those devices are just used to connect to a few other nodes and a few other basic functions.

Running cloc on an ASL build dir shows over 2 million LOC:

allscan@node571570:~/asl3-asterisk-22.4.1+asl3-3.5.5$ cloc .
    7297 text files.
    5587 unique files.
    3516 files ignored.

github.com/AlDanial/cloc v 1.96  T=86.60 s (64.5 files/s, 34944.7 lines/s)

Language         files          blank        comment           code
C                 1665         220320         226650        1006366
D                  866              0              0         398548
C/C++ Header      1279          44495         182803         194040
MSBuild script      49             51             36         155063
C++                403          18467          16620         119535
Bourne Shell        89          20582           9721         107491
XML                317           1931           1932          95261
JSON                24              5              0          36725
C# Designer          1           5641           5593          18441
Python             424           4928           6613          16831
m4                  32           1924            468          16554
Text                25           3612              0           8884
Objective-C         19           1823           1299           8316
make                68           1635           1110           6521
Bourne Again Shell  17            840            696           6337
Visual Studio Soln   4              4              4           6085
Markdown            27           1478             12           4666
C#                  46            617            542           3251
Perl                18            447            548           3064
JavaScript           2            473            213           2862
Java                12            505            386           2522
SQL                  8           1343            268           2227
yacc                 2            278            131           2160
diff                37            127            916           2025
HTML                17             61             38           1769
Windows Module Def  12              0              0           1316
YAML                23             70             61           1213
SWIG                 4            144             34           1190
Groovy               5            110            292            900
Mustache            13             82             90            845
Swift               11            176            279            592
XSLT                 3             64             59            497
PO File              3            172            179            393
Kotlin               1             77             47            348
CSS                  2              7             17            333
XAML                11             46              3            315
Assembly             4             85            184            288
vim script           3             22             40            169
Gradle               8             23              2            142
DTD                  1             46              0            128
Tcl/Tk               1             14             20            114
Rust                 3             36              5            108
DOS Batch            5             40             16            107
PHP                  1              9              0             64
MATLAB               1              8              7             57
Objective-C++        1             18             68             57
SVG                  1              0              0             56
Windows Res File     2             29             34             55
ProGuard             3             15             25             49
TNSDL                2             16              6             39
QML                  1              2              2             37
Mako                 2             14              0             32
awk                  1              0              0             24
Dockerfile           2              3              3             22
Qt Linguist          1              0              0             22
TOML                 2              4              2             14
Properties           2              0              2             10
IDL                  1              0              0              1

SUM:              5587         332919         458076        2235081

That’s a lot of files and code, and I’ve never seen any good documentation of the overall structure. If you wanted to do something “simple” like add user callsign metadata to outgoing IAX packets, it would probably take you 6 months to get PRs through all the various Asterisk and ASL modules involved, whereas in your “hello-asl” code you could probably do it in about 6 minutes. Supporting 16 kHz sample rates in ASL would likewise probably take 6 months, vs. about 6 minutes in this code. Adding IPv6 support to ASL could take months, but it probably already works just fine in this code.

And ASL is really not very portable; it seems to be stuck on an outdated audio system that doesn’t work on most other Linux distributions, whereas a new approach would not have that baggage and could easily run on WSL or a Linux SoM.

I think the more modern approach is to keep things as simple and modular as possible, like Linux itself, which has 1,000+ different utilities you can call as needed: use C/C++ in the performance-critical areas, scripts elsewhere (very easy for anyone to extend), and add a web management interface and an SQLite DB. As Patrick mentioned, some commercial closed-source vendors have already made this happen on embedded platforms, and it would greatly benefit AllStar for this to be open-source.

1 Like

Thanks Patrick, I wasn’t aware of M1KE. But that’s further proof that a “lightweight” (i.e. Asterisk-free) implementation could work.

Regarding 16 kHz audio, it looks like the IAX2 protocol supports CODEC negotiation, so I don’t see why it wouldn’t work. Is there a “standard” 16 kHz CODEC that hams normally use? Looking at the packets my SIP handset sends out, I see a “G.722 16kHz ADPCM” CODEC in the list. Is that a good one? Or maybe G.711.1? I don’t know a lot about VoIP.
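From what I can piece together from RFC 5456, the negotiation itself is just 32-bit codec bitmasks carried in CAPABILITY and FORMAT information elements during the NEW exchange. Here’s a sketch of my understanding (IE numbers and bit values as I read them from the RFC and the legacy Asterisk bitmask - worth double-checking):

```python
import struct

# Codec bits per RFC 5456 sec. 8.7 (these match the legacy Asterisk bitmask)
FORMAT_ULAW = 0x00000004   # G.711 mu-law, 8 kHz
FORMAT_ALAW = 0x00000008   # G.711 A-law, 8 kHz
FORMAT_G722 = 0x00001000   # G.722 wideband, 16 kHz sampling

IE_CAPABILITY = 0x08   # everything we can do
IE_FORMAT = 0x09       # what we'd prefer

def codec_ies(capability: int, preferred: int) -> bytes:
    """Build the CAPABILITY and FORMAT IEs for a NEW request.

    Each IE is a 1-byte type, a 1-byte data length, then the data
    (here a 4-byte big-endian codec bitmask).
    """
    out = struct.pack("!BBI", IE_CAPABILITY, 4, capability)
    out += struct.pack("!BBI", IE_FORMAT, 4, preferred)
    return out

# Example: offer uLaw and G.722, preferring G.722 for 16 kHz audio
ies = codec_ies(FORMAT_ULAW | FORMAT_G722, FORMAT_G722)
```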

I will test out 16k and make sure it works.

Hi David.

Rather than implementing my radio demo on a microcontroller, I think I’m going to do it on Linux first since it will make it easier for others to run it using standard USB audio interfaces (like your AllScan boxes). I don’t know much about Linux audio. What’s the best/modern way to talk to audio hardware from a Linux process? Is PipeWire the way to go?

Thanks!

G722 is probably the easiest to implement. Asterisk has long supported that codec, though app_rpt won’t currently give you the benefit of the increased sampling rate - it will still translate to 8 kHz, even with two nodes connected using G722. However, I think two compatible non-app_rpt G722 clients connected to each other through a system running app_rpt with G722 enabled would get a straight passthrough. I don’t currently have a good way to test that. Certainly, though, testing between your device and a G722 phone, without involving Asterisk, should be easier to deal with.

1 Like

Hi Bruce, I haven’t written any Linux audio apps, but according to Google AI, PipeWire “is the recommended and increasingly default audio server in modern Debian versions (Debian 12 and newer). PipeWire aims to unify audio and video handling, offering low-latency performance and compatibility with ALSA, PulseAudio, and JACK applications. It provides a flexible stream/filter API and is ideal for new projects seeking modern functionality and broad compatibility.” So that sounds good, and any AllStar URI with a CM1xx IC should be plug-and-play with any Win/Mac/Linux PC, with the exception that the standard OS drivers will probably not support reading COS/PTT state directly, which use the CM1xx VolDn HID input and GPIO3.

It may be just as easy to compile the ASL channel driver code into a standalone program that reads and writes to the URI directly and provides a socket for the PCM I/O. I could help with that if needed, as I am now in the process of making some other enhancements/cleanups to the USBRadio channel driver. Ideally PipeWire could provide a way to directly read and write the CM1xx registers, and everything could be done there (i.e. setting the CM1xx mixer levels), but the channel drivers can then also be useful for filtering, DTMF, CTCSS, squelch decode, and timing logic. I also have some newer URI models that support UART PTT/COS for apps that don’t have integrated CM1xx PTT/COS support. I’ll msg you with more details.

1 Like

Thanks David, let me look through the ASL channel driver code and get more familiar with USB audio. I’ve done a lot of work with audio CODECs on I2S/I2C interfaces but I’ve never worked with USB. Another good learning opportunity.

And thanks for pointing out the significance of the GPIO part. It’s very convenient that the ADC/DAC chip designers put those GPIO pins in there. From what I can tell, PipeWire doesn’t know anything about GPIO - it handles strictly the audio flow. That’s not too different from the way the ASL channel driver appears to work: a separate paradigm (HID GET/SET) is used to talk to the GPIOs.
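In case it helps anyone else, here’s my current understanding of that HID paradigm, sketched with the Python hidapi binding. The output-report layout (GPIO values in one byte, a direction mask in the next) is what I’ve seen described for CM108-family PTT in projects like hamlib and Direwolf - I haven’t verified it against a datasheet, and the VID/PID below are just the common C-Media values:

```python
import hid  # pip install hidapi

CM108_VID = 0x0D8C  # C-Media vendor ID
CM108_PID = 0x0008  # varies by chip/board - check lsusb
PTT_GPIO = 3        # URI-style boards key PTT on GPIO3

def set_ptt(keyed: bool) -> None:
    """Key/unkey PTT by writing a HID output report to the CM1xx GPIOs.

    Report layout (as used by hamlib/Direwolf): report ID, reserved byte,
    GPIO output values, GPIO direction mask (1 = output), padding.
    """
    bit = 1 << (PTT_GPIO - 1)
    dev = hid.device()
    dev.open(CM108_VID, CM108_PID)
    try:
        dev.write([0x00, 0x00, bit if keyed else 0x00, bit, 0x00])
    finally:
        dev.close()
```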

Another option, rather than reinventing the wheel, would be to help out on the project itself. Work is being started on both removing the DAHDI module dependency and building a generic ALSA/PipeWire + GPIO channel driver. There's also always the opportunity to clean up the legacy configuration layouts and streamline things. A lot of stuff is still "as it has been" for lack of labor to get all these things taken care of. We'd have a much more robust and user-friendly ecosystem with DAHDI removed and the ALSA channel driver in place.

1 Like

Hi Jason:

Point taken. I’m a complete novice when it comes to AllStarLink and especially USB audio (as you can probably tell from some of these noob questions) so I’m not sure you all want me touching your code just yet. :slight_smile: Right now I’m on a deep-dive to understand how AllStarLink works from the bottom up with the hopes of putting it into a microcontroller.

As for re-inventing the wheel, I’m a homebrewer so that’s the fun of it for me. The ultimate test of whether you really understand how something works is to be able to replicate it. Check out my QRZ page for much worse examples of wheels being reinvented. HIHI.

But I would welcome the chance to contribute once I get more educated. Is there a place where the developers talk about the ongoing work on the code? You mentioned that work has been started on a generic PipeWire driver, but when I searched the forum yesterday I only saw one (unrelated) reference to PipeWire in response to a question someone asked. Thanks!!

If you are interested in getting involved in the code, then just get involved. No better way to learn than by doing. We work out of GitHub (all code goes through a review process) and Slack. If you're interested, PM me your GitHub ID and an email address for Slack.

FYI, while fixing a bug and doing some general testing I observed some issues with the filtering in USBRadio (see Channel drivers, tune utilities: add Tx Audio Stats, fix issue 791 by davidgsd · Pull Request #793 · AllStarLink/app_rpt · GitHub) and am now starting on some further testing of that, and on moving some filtering functionality to scipy.signal - initially supporting (in a tune menu utility) calculating FIR coefficients for some of the xpmr.c filters in a Python script and loading them into a .conf file.

I’ll also be adding a configurable AGC function for local Tx audio, initially just trying to use the AGC function in func_speex.c, which should already be linked into ASL, but which apparently cannot be called from the dialplan for outbound connects due to ASL not supporting extensions.conf contexts for outbound connects. This should be easy according to Google AI ( Google Search ). Also see Support Asterisk AGC() Automatic Gain Control function · Issue #695 · AllStarLink/app_rpt · GitHub.
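To give an idea of what the scipy.signal piece looks like: a few lines of firwin() produce FIR low-pass coefficients in a .conf-friendly form. (The tap count and cutoff below are illustrative only, not the actual xpmr.c filter specs.)

```python
import numpy as np
from scipy import signal

FS = 8000      # sample rate, Hz
CUTOFF = 3000  # LPF corner, Hz (illustrative)
NTAPS = 65     # odd tap count -> linear phase, integer group delay

# Windowed-sinc FIR low-pass; returns NTAPS coefficients
taps = signal.firwin(NTAPS, CUTOFF, fs=FS, window="hamming")

# Print in a comma-separated form suitable for a .conf file
print("lpf_taps=" + ",".join(f"{t:.8f}" for t in taps))

# Sanity check: passband response at 1 kHz should be ~0 dB
w, h = signal.freqz(taps, worN=2048, fs=FS)
print(f"Response at 1 kHz: {20 * np.log10(abs(h[np.argmin(abs(w - 1000))])):.1f} dB")
```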

There is also a known bug where ASL seems to track USB devices by devstr only (which is subject to change at the whims of the OS enumeration) rather than by physical port. See GitHub · Where software is built. I proposed a fix there and am awaiting further feedback. Bruce, that could be a fun and simple project aligned with what you’re now working on. The process of doing a PR was not well documented (Allan is always very helpful with any questions on that), but I recently documented it here: GitHub - davidgsd/app_rpt at doc. And then there is also Slack and this forum, so I think you’d find that it’s easy to get things done (other than, of course, the inherent inefficiencies of working in a large legacy codebase), and you’ll have no shortage of support.

BTW, app_rpt itself is only 50K LOC, which isn’t unreasonable. Asterisk may be 2M LOC, but most of that is libraries, codecs, and various PBX stuff that’s not used by AllStar. I’d say some of the stumbling blocks with ASL+Asterisk are that there’s no good documentation of the SW architecture, and that it’s limited by Asterisk’s convoluted and sparsely documented .conf file formats.

allscan@node571570:~/app_rpt$ cloc .
     108 text files.
      81 unique files.
      28 files ignored.

github.com/AlDanial/cloc v 1.96  T=1.18 s (68.8 files/s, 54108.7 lines/s)

Language     files          blank        comment           code
C               37           5407           6938          45694
C/C++ Header    30            690           1217           3032
YAML             5             14              9            316
Bourne Shell     2             27             23             89
Markdown         1             48              0             56
JSON             1              0              0             40
diff             5              7             33             25
SUM:            81           6193           8220          49252

And a last comment: there are likely some ways that ASL could be made more modular, i.e. more loosely coupled where practical, which would better support code reusability and make some of the various ASL modules/features/functions less dependent on Asterisk. Just a thought for eventually supporting an open-source simplified version on other platforms. That might involve some significant code changes to abstract some Asterisk config structures/typedefs, for example, but that could probably all be preprocessor stuff that would not affect the compiled code.

It's not a known bug; it's an extremely unfortunate legacy of using the beyond-legacy OSS sound system. Your explanation is an over-simplification of how the device IDs work. Depending on the hardware and the vagaries of kernel startup times, it's entirely possible for the USB device ports to move between reboots with no hardware changes. We see that in a minority of cases, but a statistically significant one. That's why, again, we want to move to an ALSA/PipeWire-based audio system: so we can use the robust logic those systems already provide to identify and "latch on to" the right device.

People are free to do whatever they want with their time, including coding replacements for Asterisk. However, I still posit that more would be achieved for the community as a whole if that labor were dedicated to fixing known weaknesses in app_rpt rather than reinventing the wheel.

If ASL loses track of interfaces, that’s definitely a bug - one that causes nodes to fail and users to have to intervene manually to get things working again. Tracking by physical port seems like a simple, reliable solution that has no dependencies on the audio system used. (It’s simple to translate the physical port to a device string or whatever else the audio system might need to know.)

There are ways ASL can be made more modular and reusable as enhancements and bug fixes are made, which benefits all use cases, with or without Asterisk. Code modularity and reusability are important and common goals of SW development; they might seem like a new idea within a large legacy FOSS app, but they should be goals for AllStarLink.

I’ve published a new version (asl-hub-server-2.py) in the same repo that fills out some missing parts of the protocol and adds audio support using a Python ALSA binding. Surprisingly, Python is able to keep up with all of this, and audio delivery sounds smooth in both directions. I’m running on an RPi 5 with the stock kernel.

I tested it with a radio and a USB audio dongle that reports as “CM106 like” and it sounds fine to me. I’ve used the FIR LPF parameters out of chan_simpleusb.c. I don’t have GPIO so PTT/COR isn’t there yet.
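In case anyone wants to try something similar, the ALSA capture side is only a few lines. This sketch assumes the pyalsaaudio package (one Python ALSA binding among several) and typical 8 kHz framing - it isn’t lifted verbatim from the repo:

```python
import alsaaudio  # pip install pyalsaaudio

RATE = 8000          # ASL audio is 8 kHz for now
FRAME_SAMPLES = 160  # 20 ms at 8 kHz - one IAX2 voice frame

# Capture device: mono, 16-bit signed little-endian PCM
pcm = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL,
                    device="default")
pcm.setchannels(1)
pcm.setrate(RATE)
pcm.setformat(alsaaudio.PCM_FORMAT_S16_LE)
pcm.setperiodsize(FRAME_SAMPLES)

for _ in range(500):             # ~10 seconds of audio
    nframes, data = pcm.read()   # blocks until a 20 ms period is ready
    if nframes > 0:
        pass  # encode to uLaw and hand off as an IAX2 voice frame here
```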

Still learning …

Update on this work. I’ve ported my Python demo to C++ and built out the rest of the basics needed to conduct AllStarLink QSOs. I made three contacts today using my software running on the stock Raspberry Pi 5 OS, an AllScan UCI90, and the Node Remote iPhone app. I received good signal reports. Thanks to David NR9V for providing a lot of technical help.

So far it takes ~2,200 lines of code to reproduce the minimal Asterisk behavior needed to make and receive calls using the G.711 uLaw CODEC. Running on a microcontroller should not be a problem.
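As an aside, uLaw is part of why the footprint stays so small: the entire encoder is a sign bit, a segment search, and a 4-bit mantissa. Here’s a sketch of the textbook algorithm (not lifted from my code):

```python
ULAW_BIAS = 0x84   # 132, added before the segment search
ULAW_CLIP = 32635  # largest magnitude the encoder can represent

def linear_to_ulaw(sample: int) -> int:
    """Encode one 16-bit signed PCM sample as an 8-bit G.711 uLaw byte."""
    sign = 0x80 if sample < 0 else 0x00
    magnitude = min(abs(sample), ULAW_CLIP) + ULAW_BIAS

    # Segment search: position of the highest set bit above bit 7
    exponent = 7
    mask = 0x4000
    while exponent > 0 and not (magnitude & mask):
        exponent -= 1
        mask >>= 1

    mantissa = (magnitude >> (exponent + 3)) & 0x0F
    return ~(sign | (exponent << 4) | mantissa) & 0xFF  # uLaw bytes are inverted

def encode_frame(pcm_samples) -> bytes:
    """20 ms of PCM (160 samples) -> 160 uLaw bytes for an IAX2 voice frame."""
    return bytes(linear_to_ulaw(s) for s in pcm_samples)
```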

Next I’m going to build out some other CODECs and get the DTMF decoder working.
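For the DTMF decoder, the plan is the classic Goertzel recurrence, which measures energy at the eight DTMF component tones without computing a full FFT. Roughly like this (the threshold and framing below are placeholders, not tuned values):

```python
import math

DTMF_FREQS = [697, 770, 852, 941, 1209, 1336, 1477, 1633]  # row + column tones

def goertzel_power(samples, freq: float, rate: int = 8000) -> float:
    """Return the signal power at one frequency via the Goertzel recurrence."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_tones(samples, rate: int = 8000, threshold: float = 1e9):
    """List the DTMF component tones whose power exceeds a crude threshold."""
    return [f for f in DTMF_FREQS
            if goertzel_power(samples, f, rate) > threshold]
```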

I’ll make the code available to build/try once the docs are done. If you’re interested in testing early software, drop me an e-mail directly.

2 Likes