Other implementations welcome on AllStar network?

I’m currently developing an embedded RoIP gateway device. As it stands now, it’ll work with EchoLink, or it can be used independently. It doesn’t run Linux and doesn’t have an SD card or anything; it’s just a dual-core Cortex-M33 with an all-new implementation written in C. One of the big attractions is that it draws less than a watt, and of course it doesn’t have any of the administration or security concerns of a Pi.

I’ve had a few people say that ASL is their main interest in linking these days, but from what I can tell there’s no distinction made between the network and the software package that implements it, and I don’t see any other implementations out there.

I’ve only just started to dig into the technical feasibility. There’s no point in putting hours into reading through code if the political climate makes it moot, though. One issue with any system that doesn’t have alternative implementations is that there’s generally no comprehensive list of technical standards to be met, since the system is entirely homogeneous. That looks like it might be the case here, and I can understand if it’d be considered too much of a burden to try to produce such a standard, or to take a risk on permitting interoperability with a new implementation that doesn’t have exactly the same feature set.

So that’s my first question: Is ASL open to other implementations?

The answer might also depend on how ASL is defined. I know it can interoperate with EchoLink and other systems. Would it be more appropriate to define the new device not as a piece of ASL itself, but as a sort of peripheral to the system? I’m not after the ability to take a pile of my EchoBridge devices and set up an independent network that would also be AllStar, like you could with the Asterisk-based system. I’m just interested in producing low-power, easy-to-use devices that can take the place of a node based on a PC or SBC and participate in ASL.

Thanks and 73,

Scott
N1VG

but from what I can tell there’s no distinction made between the network and the software package

Think of AllStar as the network and app_rpt as the software. App_rpt is open-source software that anyone can use, but only hams can use AllStar. One example of this distinction is GMRS: they use app_rpt but have formed their own network.

there’s generally no comprehensive list of technical standards

The system is based on the Asterisk PBX registration of connected servers. That’s a well-understood architecture. AllStar merely adds a directory, a list mapping node numbers to IP addresses, which app_rpt uses to verify and find nodes. The node list is distributed both as a (diffed) file and as DNS records.
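To make the DNS side concrete, here is a minimal C sketch of a TXT lookup for a node number using the standard resolver library. The zone name, node number, and record format below are placeholders, not the real AllStarLink scheme; check the ASL documentation for the actual zone and record layout.

```c
/* Minimal sketch: resolve an AllStar node number via a DNS TXT lookup.
 * Zone name and record contents here are placeholders for illustration.
 * Build with: cc dns.c -lresolv */
#include <stdio.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>

static int lookup_node(const char *node)
{
    unsigned char answer[1024];
    char name[256];
    ns_msg msg;
    ns_rr rr;

    /* hypothetical zone name for illustration only */
    snprintf(name, sizeof(name), "%s.nodes.example.org", node);

    int len = res_query(name, ns_c_in, ns_t_txt, answer, sizeof(answer));
    if (len < 0 || ns_initparse(answer, len, &msg) < 0)
        return -1;

    for (int i = 0; i < ns_msg_count(msg, ns_s_an); i++) {
        if (ns_parserr(&msg, ns_s_an, i, &rr) < 0)
            continue;
        const unsigned char *rd = ns_rr_rdata(rr);
        int rdlen = ns_rr_rdlen(rr);
        if (rdlen < 1)
            continue;
        int txtlen = rd[0];              /* TXT strings are length-prefixed */
        if (txtlen > rdlen - 1)
            txtlen = rdlen - 1;
        printf("node %s -> %.*s\n", node, txtlen, rd + 1);
    }
    return 0;
}

int main(void)
{
    return lookup_node("2000") == 0 ? 0 : 1;  /* "2000" is just an example */
}
```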

I’m just interested in producing low-power, easy-to-use devices that can take the place of a node based on a PC or SBC and participate in ASL

That would be a welcome addition to the family of AllStar clients.

The client in the AllStar network is actually an Asterisk server, which provides communication between nodes via the IAX2 protocol, while connections between nodes and radio interface control are managed by the app_rpt application included with Asterisk. It would probably be very difficult to create a node without using Asterisk while retaining the rich functionality of app_rpt.
A simpler solution would be to use such a client as a remote radio interface: it would receive the audio signal from the radio and transmit it over IP to a host that controls this air interface using app_rpt and manages connections to other ASL nodes over IAX2, both of which are part of Asterisk.
The radio interfaces on an ASL node are controlled by the channel drivers chan_usbradio, chan_simpleusb, chan_voter, and several others that are used much less frequently. These drivers, together with the radio interfaces, handle receiving and transmitting the audio signal through the radio and keying PTT.
It would be more reasonable to consider your RoIP device a radio interface. The Voter radio interface (VOTER - AllStarLink Wiki) works on exactly this principle, but it is significantly outdated and designed for very specific functionality. It would be reasonable to simplify the task somewhat and use your device as a remote radio interface that connects to the ASL node over IP, but without the complicated chan_voter specifics. It’s easier to use the chan_usbradio logic, replacing the USB radio interface with your device; a config sketch follows below.
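For illustration, the channel driver a node uses is chosen in rpt.conf. A rough sketch of the two options above; the node number, device name, address, and ports are all made up, and the USRP argument order is worth double-checking against the wiki:

```ini
[1999]   ; hypothetical node number
; local USB radio interface via chan_simpleusb:
rxchannel = SimpleUSB/usb_1999
; or a networked radio interface speaking the USRP protocol
; (peer address plus the two UDP ports -- verify the order in the wiki):
;rxchannel = USRP/192.168.1.50:34001:32001
```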

The possibilities for building a RoIP network would be much wider if your device could connect to the ASL node not only via IP but also via an FLRC modem, which is part of the Semtech SX128x single-chip transceivers. Based on these chips, NiceRF makes 2.4 GHz radio modules with 27 dBm of output power that allow you to build direct wireless links between a radio interface and a node located a considerable distance apart.

Anything that does not harm the integrity of the NETWORK, and that ensures only HAMS may access it,
should always be welcome.
But one must consider the stupid things some users will try, even if not by design.
With that in mind, my only advice is to take a goof-proof approach and keep network integrity.
Develop away. Everyone likes new toys.

Having been an EE and embedded C developer since the ’90s for satellite modem OEMs, and having spent many years on simple platforms such as ARM7 with only 1 MB of flash, I can say that now, 25 years later, there is no reason to go with these platforms for ham applications. Running Linux does not require a significant amount of CPU power by today’s standards (if you’re not running a desktop and 10 GUI apps), and 8 GB of flash/eMMC is more than enough storage for Debian + ASL. On a bare-bones MCU that’s not capable of running Linux, you will spend half your time reinventing the wheel instead of innovating. (For that reason, in 2010 I moved to Linux, PHP, SQL, and JS development and now use C only when necessary.)

Asterisk + ASL is a lot of code that’s heavily tied into Linux, and it wouldn’t make much sense to me to try to reinvent all that from scratch, even if a platform only uses 1 W; that in fact isn’t much lower than an RPi or micro PC, which typically idle at under 3 W. I would bet that your products will be more innovative and more future-proof if you support Linux from the start, in which case you can run ASL off the bat with no limitations, no hacks needed, and none of the issues that inevitably occur when trying to duplicate such a large and complex system. You could then spend your time improving and innovating on ASL rather than spending years recreating it.

If my goal were only to produce a device to run EchoLink or ASL as quickly as possible, then I’d definitely be going with a Linux system. But unless I just wanted to run theBridge and shoehorn on a new interface (and deal with license restrictions that might be present for non-amateur use), I’d have to implement that part myself anyway.

What I’ve got instead is a framework with an RTOS, an object-oriented audio routing engine, and well-tested input and output subsystems. This platform evolved from a convergence of APRS tracker and repeater projects that really didn’t warrant a full OS, and now the groundwork is already laid for other applications. Once I had the conference functionality implemented, which wouldn’t have been much different on a PC, I just hooked that object into the routing framework and it worked with an actual receiver and transmitter, with all of the timeouts and safety checks and everything already implemented.

I could take all of those classes like the ones for EchoLink and the store-and-forward repeater and create a similar audio routing system on Linux, but it’d take some considerable effort to get everything perfect. Being close to the hardware makes it a lot easier to, for example, ensure that your PTT timing is precise relative to when your last audio samples actually make it out of the DAC. It’s not a big deal for voice, but this framework was designed for data modes as well.
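As a generic illustration of that timing concern (this is not Scott’s actual framework; the 8 kHz rate and FIFO interface are assumptions), the trick is to hold PTT until the samples still queued in the DAC have actually played out:

```c
#include <stdint.h>

#define SAMPLE_RATE_HZ 8000u        /* assumed audio sample rate */

/* Microseconds until the DAC runs dry, given its FIFO fill level.
 * After queueing the final sample, schedule the PTT release for
 * now + dac_drain_us(fifo_count()) instead of dropping it at once. */
static inline uint32_t dac_drain_us(uint32_t samples_queued)
{
    return (uint32_t)((uint64_t)samples_queued * 1000000u / SAMPLE_RATE_HZ);
}
```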

Parts of the framework (including a lot of the network stuff) came from other projects where running embedded Linux is really not an option - with 12 to 15 mm wide PCBs that have to tolerate board flex and can’t safely use BGAs, for example.

So, as a small company producing a variety of low-volume niche products, there’s an advantage to being able to build low-complexity boards in house, or to keep costs low for short runs through a contract manufacturer. The MCU used is a 64-pin LQFP, and the only other major IC is an 8-pin SOIC for flash. It’s a 2-layer board.

We could probably get away with a $25 Linux SoM instead of $8 of MCU and flash, but we’d still be building carrier boards with all of the radio interface hardware, indicators, connectors, and so on.

For this project in particular, once we’ve ruled out just packaging up theBridge again, what does Linux get us? The in-house-developed Cobalt framework that EchoBridge runs on already has a shell (with a text editor, scripting system, and telnet access), network management, a TCP/IP stack, an HTTP server with WebSockets, a firmware update management system that allows safe, monolithic updates of the whole system in about 15 seconds, WAV file handling, a configuration system, a storage system with deterministic timing behavior, and lots of other bits and pieces.

It’s had years of ongoing development so the hard parts are already done and there are thousands of devices in the field with some version of the framework. Linux would just add complexity.

And besides, I like the challenge of building lean and mean solutions that just work.

Scott

Linux can definitely be a PITA, sure, but Asterisk, drivers, a web server, PHP, SQLite, etc. represent millions of lines of code that would need to be implemented on an MCU before it could offer even half the features of a typical ASL node. Nodes do more than just IAX and SIP; they also usually include web monitoring/dashboard apps, support all the Asterisk features, scripting, macros, and the ability to execute shell scripts (AGI), and can run thousands of other apps.

Linux indeed has a learning curve, but once someone learns it they have a stable, long-term, limitless platform that can do anything, all with a consistent API that millions of programmers are already very familiar with, supporting hundreds of languages, desktop GUIs, RDP, VPN, etc. This is a big difference between AllStar and other modes: EchoLink is just a basic app, whereas most AllStar nodes are a complete Asterisk phone-switch system and Linux server. Thus a low-cost MCU could implement only a small subset of ASL features. That may indeed be good for some use cases, but in general it seems to me an unnecessary duplication of a much more powerful, mature, and reliable system that already works great on small, low-power, roughly $25 hardware platforms.

If you look at ASL only from the perspective of some IAX data going back and forth, it seems simple, but people also use numerous web apps and phone apps with it, which require a standard Linux web server stack. You can do all sorts of other stuff on ASL nodes too: WiFi, mesh networking, home automation, etc. Anyhow, I’m not trying to convince anyone of anything; I just wanted to highlight some of the features that ASL nodes provide. $8 MCU systems will never be able to provide even half of those features in a consistent, fully compatible, and reliable way, and you’d then be tasking yourself with creating a whole new ecosystem.

The easiest, most universal, and most efficient way to interface to an ASL node is using USRP.
The channel driver interface is documented in the wiki.
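For a feel of what that involves, here is a minimal sketch of sending one voice frame in the USRP protocol, based on my reading of the chan_usrp header: a 32-byte header (magic "USRP", sequence counter, PTT state, frame type) followed by 160 samples of 8 kHz signed 16-bit PCM. Treat the field layout and byte ordering as assumptions to verify against chan_usrp.c before building on them.

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define USRP_VOICE_SAMPLES 160      /* 20 ms at 8 kHz */

struct usrp_hdr {
    char     eye[4];                /* magic: "USRP" */
    uint32_t seq;                   /* frame sequence counter */
    uint32_t memory;                /* unused here */
    uint32_t keyup;                 /* 1 = PTT asserted, 0 = released */
    uint32_t talkgroup;             /* unused for plain analog audio */
    uint32_t type;                  /* 0 = voice */
    uint32_t mpxid;                 /* unused */
    uint32_t reserved;
};

/* Send one 20 ms voice frame to the node's USRP port. */
static int usrp_send_voice(int sock, const struct sockaddr_in *dst,
                           uint32_t *seq, int ptt,
                           const int16_t pcm[USRP_VOICE_SAMPLES])
{
    uint8_t pkt[sizeof(struct usrp_hdr) + USRP_VOICE_SAMPLES * 2];
    struct usrp_hdr h;

    memset(&h, 0, sizeof(h));
    memcpy(h.eye, "USRP", 4);
    h.seq   = htonl((*seq)++);      /* header fields in network byte order */
    h.keyup = htonl(ptt ? 1 : 0);
    h.type  = htonl(0);             /* voice frame */

    memcpy(pkt, &h, sizeof(h));
    /* chan_usrp appears to copy samples without byte swapping, i.e. host
     * order on the sending machine -- confirm this for your target. */
    memcpy(pkt + sizeof(h), pcm, USRP_VOICE_SAMPLES * 2);

    return (int)sendto(sock, pkt, sizeof(pkt), 0,
                       (const struct sockaddr *)dst, sizeof(*dst));
}
```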

I’m not so interested in implementing a full ASL node in an embedded device. It sounds like USRP is more likely to be what I want to pursue. AllStar is never going to be the main focus of this device; it’s just one thing people are going to ask for compatibility with. My main intention in coming here was to feel out the political situation. There have been some notable amateur radio projects/networks that have been very monolithic and resistant to interoperability or even transparency, which has kind of soured me on parts of the hobby over the years. I’m glad to hear that doesn’t seem to be the case with ASL.

I won’t have a chance to research USRP and whether its scope covers everything I hope to do until I get back from vacation in a couple of weeks, but it’s encouraging that this system seems to be fairly open.

Scott
N1VG

I totally understand the desire to do this. That’s why I use RTCMs exclusively for my nodes. People think they are for voting and simulcast operation only, but that’s not the case. They allow me to put a low-power IoT device, in a voting or non-voting application, at my remote and sometimes inaccessible repeater sites. They also make a great mobile or radio-less device. And they can interface to a repeater controller port.

My suggestion would be to build a peripheral device that uses the USRP or chan_voter protocol, and to run Asterisk/app_rpt on a PC or cloud server.