Creating a requested feature list

I am compiling a wish list of new features for Allstar and/or the ASL3 client.
If anyone has any features they would like added, please comment.

’73

KM6RPT
Todd

Greetings:

Here are a few I can think of off the top of my head.

  1. Add a way to prioritize the audio of specific nodes, i.e. the way ducking works in monitor mode. Let’s say your repeater is controlled by ASL3, and you want local repeater traffic to duck the audio coming in from the outside by a few dB, or to do so only for a specific list of nodes. This is a feature requested by a club I am helping set up a multi-receiver RTCM repeater. I thought of a way this could possibly be implemented without modifying the code, but it would be messy. (See the first sketch after this list.)
  2. Mute transmitted audio to the network when the command character (* by default) is detected, but continue to pass the command through locally. This is a feature Echolink has had for years to avoid noisy transmissions as people fumble with DTMF entry while trying to disconnect from a node. This is especially annoying when the majority of nodes on your system are half duplex.
    I brought this up once, and someone suggested this could create false detections. I would think that would be extremely unlikely if it were only triggered by one specific frequency pair.
  3. Add an equivalent to usbradio’s rxsquelchdelay in simpleusb. I think there is already an issue on GitHub about this.
  4. This is probably way out there, but for systems with enough processing power that it wouldn’t be slow and awkward: send spoken telemetry text through an external TTS rather than static files, with cached audio to avoid unnecessary regeneration, and with the ability to parse node numbers and call signs in a way appropriate for TTS processing, e.g. node 508429 = “node 5 0 8 4 2 9”, or N2DYI = “N 2 D Y I”. Add a switch to announce call signs rather than node numbers in the text sent for TTS processing, similar to what Echolink is supposed to do. Done properly, this could significantly improve spoken repeater telemetry without having to concatenate a large number of static files. The node could say something like “node 5 0 8 4 2 9, connected to node 5 0 8 4 2 0” in one fluid utterance. Or, with calls enabled, if node 508429 connects to node 51018, it could say “W6EK, connected to N2DYI”, again in one fluid utterance rather than a string of individual prerecorded files. For probably obvious reasons, this shouldn’t be the default behavior, as Piper, for example, takes quite a while to start on a Raspberry Pi but is very fast on x86_64. (See the second sketch after this list.)
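
A minimal sketch of the ducking idea in item 1, purely to illustrate the intent. It assumes access to per-node 16-bit PCM frames before mixing; the node list, attenuation value, and frame format are all hypothetical, not existing app_rpt options.

```python
# Hypothetical sketch: duck audio from non-priority nodes while a
# priority (local) source is active. Frames are lists of 16-bit samples.
import array

PRIORITY_NODES = {"508429"}   # hypothetical: the local repeater's receiver node(s)
DUCK_DB = 6                   # attenuate everyone else's audio by this many dB

def duck_factor(db):
    """Convert an attenuation in dB to a linear multiplier (6 dB -> ~0.5)."""
    return 10 ** (-db / 20.0)

def mix(frames, priority_active):
    """frames: dict of node id -> list of 16-bit samples for one audio frame."""
    length = max(len(f) for f in frames.values())
    out = [0.0] * length
    for node, samples in frames.items():
        gain = duck_factor(DUCK_DB) if priority_active and node not in PRIORITY_NODES else 1.0
        for i, s in enumerate(samples):
            out[i] += s * gain
    # clamp the summed result back into the 16-bit range
    return array.array("h", (int(max(-32768, min(32767, s))) for s in out))
```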
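
And a sketch of the formatting/caching idea in item 4. The cache location and the Piper invocation are placeholders (a real Piper call typically needs a model argument as well); only the text formatting and the cache-on-hash logic are the point here.

```python
# Hypothetical sketch: format node numbers / call signs for TTS and cache
# the rendered audio so repeated announcements are not re-synthesized.
import hashlib, os, subprocess

CACHE_DIR = "/tmp/tts-cache"            # placeholder location
TTS_CMD = ["piper", "--output_file"]    # placeholder invocation; adjust for your TTS

def spell_out(token):
    """Space out characters so TTS reads '508429' as '5 0 8 4 2 9'."""
    return " ".join(token)

def telemetry_text(local, remote, use_calls=False, calls=None):
    """Build one fluid utterance, optionally substituting call signs."""
    calls = calls or {}
    fmt = lambda n: spell_out(calls[n]) if use_calls and n in calls else "node " + spell_out(n)
    return f"{fmt(local)}, connected to {fmt(remote)}"

def render(text):
    """Synthesize text to a WAV file, regenerating only on a cache miss."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    wav = os.path.join(CACHE_DIR, hashlib.sha1(text.encode()).hexdigest() + ".wav")
    if not os.path.exists(wav):
        subprocess.run(TTS_CMD + [wav], input=text.encode(), check=True)
    return wav

# telemetry_text("508429", "51018")
#   -> "node 5 0 8 4 2 9, connected to node 5 1 0 1 8"
# telemetry_text("508429", "51018", True, {"508429": "W6EK", "51018": "N2DYI"})
#   -> "W 6 E K, connected to N 2 D Y I"
```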

Just some thoughts.

73
N2DYI

Todd, is this in reference to the ASL3 .deb packages, the app_rpt module, Allstarlink, Inc.-provided services (stats server, registration, etc.), or all of the above?

A couple of things off the top of my head…

  • app_rpt
    • Implementation of the K: Key IAX Text (or similar feature)
      • Would enable numerous development opportunities by allowing visibility into the traffic source through unlimited layers of Allstar connections, rather than only being able to show the keyed status of direct/adjacent nodes.
      • Could, for example, show a keyed node 3 “hops” away on an Allmon display
    • DCS encode / decode support
    • Better structure / customization of the way telemetry messages are handled / generated.
      • I have always found it extremely silly that the message is “node [localnode] connected to node [remotenode]”, even when the connection direction was actually remote >>> local.
      • In all of my use cases, telemetry is far too descriptive; a simple “[remotenode] connected” would be more than sufficient. (A tiny template sketch follows this list.)
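
Purely to illustrate the direction and verbosity points above, a tiny sketch; the template names and the style setting are invented for illustration, not existing app_rpt configuration.

```python
# Hypothetical sketch: choose telemetry wording based on connection
# direction and a configurable verbosity/template setting.
TEMPLATES = {
    "terse":   "{remote} connected",
    "verbose": "node {initiator} connected to node {target}",
}

def connect_message(local, remote, outbound, style="terse"):
    """outbound=True means the local node initiated the connection."""
    initiator, target = (local, remote) if outbound else (remote, local)
    return TEMPLATES[style].format(remote=remote, initiator=initiator, target=target)

# connect_message("508429", "51018", outbound=False) -> "51018 connected"
# connect_message("508429", "51018", outbound=False, style="verbose")
#   -> "node 51018 connected to node 508429"
```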

The asl3-tts package already covers this except for the cached sound part.

Sorry, I should have been more specific. What I mean is the telemetry sent out by the app_rpt module itself. In other words, redirect what app_rpt would otherwise do with static sound files into a pipe, sending a textual representation to asl3-tts, or even to some other TTS system if you have one installed. I use DECtalk on my systems, and I made a modified version of asl3-tts so Piper speaks just a little faster.
This would allow for future dynamic handling of telemetry verbosity, order of spoken text, easier language localization, customizable text blocks, and the like. You could theoretically do some of this now with connpgm and discpgm, just as two examples, but there are lots of other messages sent from app_rpt itself. (A rough connpgm-style sketch follows.)
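
As a rough illustration of the connpgm/discpgm route mentioned above, something like the script below could turn a connect event into text for an external TTS. The argument order app_rpt passes to connpgm should be verified against your install, and the "my-tts-wrapper" command and its flags are placeholders, not a real program.

```python
#!/usr/bin/env python3
# Hypothetical connpgm handler: announce a connect event via an external
# TTS instead of concatenated static sound files.
# Assumptions: app_rpt passes the local and remote node numbers as
# arguments (verify the order), and some TTS wrapper (asl3-tts, DECtalk,
# Piper, ...) is installed to speak/play the text on the local node.
import subprocess, sys

def main():
    local, remote = sys.argv[1], sys.argv[2]     # assumed argument order
    text = f"{' '.join(remote)} connected"       # e.g. "5 1 0 1 8 connected"
    # Placeholder invocation; the wrapper name and flags are not real.
    subprocess.run(["my-tts-wrapper", "--node", local, "--text", text], check=True)

if __name__ == "__main__":
    main()
```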

Improved logging (I can’t seem to find a log that shows EchoLink connections)

SNMP support for remote monitoring (transmit time, etc.)

Perhaps my biggest wish: native digital-to-analog bridges (multiple modes into a single node).

Make “permanent” connections actually permanent, e.g. reconnect automatically after network interruptions. (A minimal watchdog sketch follows.)
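
A minimal watchdog sketch of the idea, assuming the stock *3 connect prefix and that the "rpt fun" Asterisk CLI command is available on your build; the link-status check is left as a placeholder to fill in from whatever status source you trust.

```python
# Hypothetical watchdog: periodically re-issue the connect command for
# links that are supposed to be permanent.
import subprocess, time

NODE = "508429"              # local node (example)
PERMANENT_LINKS = ["51018"]  # nodes that should always stay connected

def link_is_up(remote):
    # Placeholder: check link state here (e.g. by parsing node status
    # from the Asterisk CLI, AMI, or your Allmon/Supermon data source).
    return False

while True:
    for remote in PERMANENT_LINKS:
        if not link_is_up(remote):
            # *3<node> is the default connect function in rpt.conf
            subprocess.run(["asterisk", "-rx", f"rpt fun {NODE} *3{remote}"])
    time.sleep(60)
```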

David WD5M

I would like to see:

  1. A time-out timer on INPUT signals in usbradio.conf and simpleusb.conf. With a node connected to a hub with MANY repeaters, one radio can fail, send a false COR signal to the computer, and time out the rest of the system, and it remains timed out until the faulty node is located and disabled. This has happened to me on a couple of occasions where maybe 10 repeaters and a handful of nodes are permanently connected: one radio dies, sending a true COR signal (simpleusb) to the node until the node is disconnected. (A rough sketch of the idea follows this list.)

  2. Echolink on a simplex node will not send a broadcast (for example, Echolink node 826983, the ARRL Newsline) until a radio first keys into the node. When a scheduled connection is made, the node just sits there, connected but not sending, until a user keys their mic.
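
A rough sketch of the receiver time-out idea in item 1, independent of how app_rpt would actually hook it in; the threshold and the calling convention are illustrative only.

```python
# Hypothetical receiver time-out: if COR stays asserted longer than
# RX_TIMEOUT seconds, stop treating that receiver as active until COR drops.
import time

RX_TIMEOUT = 180  # seconds a receiver may stay keyed before being ignored

class RxTimeout:
    def __init__(self):
        self.keyed_since = None
        self.timed_out = False

    def update(self, cor_active):
        """Return True if the receiver should currently be passed to the node."""
        now = time.monotonic()
        if not cor_active:
            self.keyed_since, self.timed_out = None, False  # reset on unkey
            return False
        if self.keyed_since is None:
            self.keyed_since = now
        if now - self.keyed_since > RX_TIMEOUT:
            self.timed_out = True                           # stuck COR: ignore it
        return not self.timed_out
```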

The work and performance of Allstarlink 3.0 is AWESOME. Really great work, and a HUGE thank you for all the hard work!!! de NU5D

First, thanks so much for the ASL3 system. It is impressive after using earlier versions and HamVoip. Awesome!

A CM108/CM119 sound fob is the default DIY sound card, but it requires rather tedious modifications to connect COS/PTT. Since a Raspberry Pi has many I/O pins, using a couple of those for the external connections would be extremely convenient.

At this point, leave the sound fob to handle the audio but make the other physical connections through Pi pins. (See the GPIO sketch below.)
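
A small sketch of what the GPIO side could look like using the gpiozero library. The BCM pin numbers and active levels are arbitrary examples, and app_rpt itself would still need channel-driver support to actually use these pins; this only illustrates the I/O.

```python
# Hypothetical GPIO wiring for COS/PTT on a Raspberry Pi, using gpiozero.
from gpiozero import Button, DigitalOutputDevice
from signal import pause

cos = Button(17, pull_up=True)     # COS input from the radio (BCM 17, example)
ptt = DigitalOutputDevice(27)      # PTT output to the radio (BCM 27, example)

def cor_on():
    print("COS asserted - receiver active")  # the node software would act here

def cor_off():
    print("COS dropped")

cos.when_pressed = cor_on
cos.when_released = cor_off

# ptt.on() / ptt.off() would be driven by the node software when it transmits.
pause()
```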

I appreciate your efforts.

'73
NM0D

“Mute transmitted audio to the network when the command character (* by default) is detected, but continue to pass the command through locally. This is a feature Echolink has had for years to avoid noisy transmissions as people fumble with DTMF entry while trying to disconnect from a node. This is especially annoying when the majority of nodes on your system are half duplex.”

This suggestion would solve my biggest problem as a brand-new user without a lot of Allstar or Linux seat time. I don’t want to have to buy a $7 Android app just to have network control of my node, and after an hour or two of searching so far, I can’t find any other easy way to disconnect besides fumbling around with *10 on my HT and keying up the entire 20+ repeater network I’m connected to. It’s insane to me that I can’t disconnect without having to identify or do a “kerchunk and run”. I admit there could be an option out there I’ve missed; my searching skills have been greatly hindered in recent years, since search engines and their AI counterparts seem to be in a race to the bottom over who can provide the worse results.

FYI, there is a wide variety of high-quality USB radio interfaces available from various makers for as little as $29 that require no soldering or mods, have much better RFI filtering than a fob, and also provide status LEDs and other nice features.

Also, RPis are a somewhat outdated way to build a node, IMO. There are now many mini PCs that are less expensive, more powerful, and more reliable, and that have more features and better RFI, EMC, and thermal performance.

FYI there are several free web apps (Allmon, AllScan, Supermon, etc.) that support various node and connection management functions in any mobile or desktop browser.

There is little to no information pointing you to do that. As an inexperienced person approaching Allstar nodes and Pis for the first time, it took me about ten hours to set up, troubleshoot, fail, and give up on HamVoip, and then try the ASL3 install about three times to finally get configured and on the air. The last thing I wanted to do at that point was dive into learning some whole new thing that nobody explained anywhere in the eight thousand pages of information I had already gone through over the past couple of days just to get functional. Not to mention the comical number of passwords and usernames I have already had to create. I just thought it would be a nice feature if it were native in one of the many things I’ve already had to go out of my way to learn, as my brain is quite overloaded from the process already.

What are you asking here, that DTMF tones not be repeated either locally or over links? They are not now by default.

This is actually my request.
If I wasn’t clear enough originally, what I’m asking for is not just a mute for the duration of the tone, as happens now, but for the audio portion of the transmission to stop being sent to all nodes on the network once the * DTMF is detected. That would let you keep your radio keyed and enter as many DTMF commands as you want without continuing to transmit toward other nodes and potentially holding things up, especially with half-duplex systems, while the command continues to be processed locally. This is how Echolink works.
As soon as the system detects the start of a command, other nodes won’t hear audio from the command-sending node until the next transmission.
With ASL, audio is muted for the duration of each decoded tone (except, sometimes, a bit around the edges before the tone is detected), but even the muted audio continues to be transmitted down the line. (A small state-machine sketch of the intended behavior follows.)
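
A small state-machine sketch of the behavior being described, purely to illustrate the intent; DTMF detection, keying, and frame handling are abstracted away.

```python
# Hypothetical sketch: once the command character is seen, keep processing
# DTMF locally but stop forwarding the transmission's audio to linked nodes
# until the next unkey.
COMMAND_CHAR = "*"

class TxGate:
    def __init__(self):
        self.mute_to_network = False

    def on_dtmf(self, digit):
        if digit == COMMAND_CHAR:
            self.mute_to_network = True   # command started: go local-only

    def on_unkey(self):
        self.mute_to_network = False      # next transmission is normal again

    def route_audio(self, frame):
        local_out = frame                 # the local node still hears/handles it
        network_out = None if self.mute_to_network else frame
        return local_out, network_out
```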

This would break other commands that expect audio to follow a DTMF command, such as cop,55.

Would it? As far as I understand, cop,55 is only local anyway. If the audio continued locally, but not to every other connected node, it would still theoretically work as expected, I would think, with the possible complication that no one on the other side of the network (if any other nodes are connected) would have a clue what’s going on.
As it is, they wouldn’t hear the locally parroted audio anyway.

I’m envisioning the audio path working as it always has locally, but not to connected nodes. To be fair, I don’t know if that’s technically possible the way things are now.

Thank you for your response and commentary.
Unfortunately, it missed four points about the question. The Raspberry Pi was used as a generic representation of access to COS/PTT.

  1. The response requires buying another piece of dedicated hardware.
  2. There are many options in computers. However, the Raspberry Pi is well established, well recognized, and widely used for numerous other purposes, and there are plenty of them available, often already in a parts box.
  3. The Raspberry Pi, with Allstar, has numerous supportive uses. For example, the Pi easily aids security, such as unlocking doors to repeaters and noting changes in conditions. I can sit miles away, detect problems, and activate corrections.
  4. If capabilities are available for the Pi to control PTT/COS, those capabilities are obviously available to other computer platforms.

We are looking at more ways to use Allstar, rather than limiting it to arbitrary equipment.

I appreciate all the effort put in by the group to move Allstar toward being more open.