ASL questions, documentation resources, Asterisk, IAX?

There does not seem to be much documentation on the fundamental software architecture, protocol layering, and operation of ASL, Asterisk, IAX, etc. Can anyone recommend any websites, books, etc. that give a more detailed picture of how these tie together, specifically in contexts such as repeater control, interlinking, and various full-duplex (FDX) and simplex node use cases?

I am particularly interested in the full-duplex aspects of what ASL supports, and some of the subtle details of what happens in some common scenarios:

  1. The use case of a simplex node seems simple enough: it's either idle, receiving audio from connected remote node(s) and sending it to your local radio (or speaker/headphones in the case of a radioless node), or, if the COS line is asserted, transmitting your local Rx audio to the connected remote node(s).
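To picture those three states, here is an illustrative Python sketch of my mental model (nothing to do with how app_rpt is actually written):

from enum import Enum, auto

class SimplexNodeState(Enum):
    IDLE = auto()
    RX_FROM_NETWORK = auto()  # playing remote audio out the local radio/speaker
    TX_TO_NETWORK = auto()    # COS asserted: sending local Rx audio to remote node(s)

def simplex_state(cos_asserted: bool, remote_audio: bool) -> SimplexNodeState:
    # As described above: while the local receiver is keyed (COS), the node
    # streams your audio to the network, and a simplex radio cannot also be
    # transmitting remote audio back to you at the same time.
    if cos_asserted:
        return SimplexNodeState.TX_TO_NETWORK
    if remote_audio:
        return SimplexNodeState.RX_FROM_NETWORK
    return SimplexNodeState.IDLE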

Even this use case has some things that are not clear, however. What if you and a remote node key up at the same time? If both are simplex nodes, then neither person will hear the other. When the first node stops transmitting it will go back to Rx, but how would the user know there was a double? Depending on what the other node was saying, it might not be clear whether the other station had just keyed up or had keyed up much earlier. If a simplex ASL node starts transmitting your local audio to remote connected node(s) but at the same time starts receiving audio from remote node(s), can it, for example, play a specific courtesy tone after your Tx to indicate there was a double?

With one repeater system I use with ASL, I notice that after some transmissions there is a sequence of 3 short tones (beep beep beep), whereas after other transmissions there is just the default courtesy tone (which in my case is no tone, since I have the default set to none; the end of Tx is already explicit from the audio and squelch tail). What kinds of different (event-based) courtesy tone types does ASL support, and how could I find out why I sometimes get non-default tones? Is there any documentation on what connection/link status events ASL can detect and how it can notify the node user about them?

Another question: why do I often hear echoing on ASL-linked repeaters when more than one user keys up at the same time? Everything works fine if only one user keys up at a time, so the node connections appear valid, i.e. no loops in the network graph. I would therefore expect that if 2 nodes key at the same time, it should work like a VoIP conference call and you should hear all talkers just fine, without any loops/echoes. I have heard this on multiple ASL-linked repeaters. The cause may not be ASL; it may have something to do with EchoLink or with how the repeater is set up. Probably a lot of repeater users don't check for this kind of thing, since it only happens when there are doubles, which are supposedly rare. But in reality they're not that rare, and happen on a daily or hourly basis anytime you have more than a couple of people in a QSO or there's a net. What would be the best way to find the cause of this kind of issue?

  2. Now on to the case of full-duplex nodes, specifically a full-duplex hotspot (NOT a repeater). This would be, for example, an ASL node that uses a radio supporting cross-band full-duplex, or 2 different radios (e.g. one receiving on VHF and one transmitting on UHF). Full-duplex nodes are useful for a number of reasons, the first being that you can hear what's on the remote node(s) while you are transmitting; if there's a double you will hear it right away and can unkey. If everyone did this it would make large nets much more efficient, because people would know if and how well they were getting through and that they weren't doubling with anyone. This applies to plain-old RF repeaters as well as to ASL or any other linking protocol. I know at least a few 2m repeaters that have additional receivers on 70cm and combine all receive signals, so multiple people can talk at the same time with no issues. Works great for trivia nets or for when people want to throw in a quick comment. Of course this requires the users to have a radio that supports cross-band full-duplex, or 2 different radios, and to use a headset or headphones, but that's easy to do: you can find used radios any day for not much over $150 that do cross-band duplex, or just get a couple of ~$50 HTs/mobiles, one on VHF and the other on UHF.

So in this case it works the same as a simplex node, except that the node will always be looking for COS and will transmit your audio to the remote nodes whenever you key up. Because this node is not a repeater, however, it will NOT combine or mix your Tx audio into your Rx audio.

As a side note, this also brings up some terminology questions. The terms Tx and Rx audio can get confusing. We might have a network architecture like the following:

User A portable radio
    ^    |
    RF   RF
    |    v
User A node radio
    ^    |
   Audio I/O
    |    v
User A node
    ^    |
      IAX
    |    v
User B node
    ^    |
   Audio I/O
    |    v
User B node radio
    ^    |
    RF   RF
    |    v
User B portable radio

From user A's perspective the left signal flows are "Rx" and the right are "Tx", but in any case, if one of the nodes supports full-duplex and neither node is operating as a repeater, Rx audio should never be routed into Tx audio; they are separate streams. Other than a couple of duplex config settings, there does not seem to be much documentation about when these 2 streams might be mixed or combined. (I'm now setting up some full-duplex nodes, so I want to be sure I'm not missing anything important.)
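The couple of config settings I mean are primarily the duplex parameter in rpt.conf. My hedged reading of its values, paraphrased from the ASL wiki samples (the node number is hypothetical, and exact wording varies by version):

[1999]        ; hypothetical node stanza
duplex = 3    ; full-duplex hotspot: both streams active, never mixed

; duplex = 0  half duplex, no telemetry tones or hang time (bare simplex node)
; duplex = 1  half duplex with telemetry and hang time, does not repeat audio (typical hotspot)
; duplex = 2  full duplex with telemetry and hang time, repeats (mixes) Rx into Tx (a real repeater)
; duplex = 3  full duplex with telemetry and hang time, does NOT mix Rx into Tx

If that reading is right, duplex=3 is the value for the full-duplex hotspot case, since the two streams stay separate.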

However, it gets more complicated if you have 3+ nodes that support full-duplex, because, for example, the Tx audio from nodes A and B has to be combined and sent as Rx audio to node C, the Tx audio from nodes B and C has to be combined and sent as Rx audio to node A, etc. There is no central server doing this, however; the nodes connect directly to each other, i.e. node A has connections to B and C, but B and C do not have a direct connection. So a 3-node architecture showing only the IAX connections might look like:

User C node
    ^    |
      IAX
    |    v
User A node
    ^    |
      IAX
    |    v
User B node

BTW, a question about IAX: is it the only protocol used between nodes? Is the above diagram accurate in that it shows 2 IAX connections between nodes, one each for Tx and Rx audio, or is it considered just one control connection that uses something like UDP for the Rx and Tx audio streams?
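From my read of RFC 5456 so far, at least between 2 nodes the answer appears to be: one bidirectional UDP flow (default port 4569) that multiplexes signaling and media. Reliable "full frames" carry control messages (and can carry voice), while lightweight "mini frames" carry the steady-state audio in each direction. A toy classifier written from the RFC's header layout (illustrative Python, not ASL code):

import struct

def classify_iax2_frame(packet: bytes) -> dict:
    """Classify a raw IAX2 packet per RFC 5456. Full frames set the high
    'F' bit of the source call number and are delivered reliably; mini
    frames (F=0) carry voice with only a truncated 16-bit timestamp."""
    (scn,) = struct.unpack_from("!H", packet, 0)
    if scn & 0x8000:  # F bit set: full frame (signaling, or reliably sent media)
        dcn, ts, _oseq, _iseq, ftype, _subclass = struct.unpack_from("!HIBBBB", packet, 2)
        return {"kind": "full", "src_call": scn & 0x7FFF, "dst_call": dcn & 0x7FFF,
                "timestamp": ts, "frame_type": ftype}
    # F bit clear: mini frame, just a short timestamp plus the audio payload
    (mini_ts,) = struct.unpack_from("!H", packet, 2)
    return {"kind": "mini", "src_call": scn, "timestamp": mini_ts,
            "payload": packet[4:]}

So the 2 arrows in my diagrams are really one connection; the Tx and Rx audio just ride as separate frames within it. What I still can't tell from the RFC is what happens with 3+ nodes.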

Thus in this case, with node A connected to 2 other nodes, you can't just call the left stream "Tx" and the right stream "Rx". Node A's Tx audio will go out to both B and C, whereas if B is transmitting at the same time, that audio will come into A and need to be combined into the Tx stream going to C. So the ASL software in each node might have to do the following:
A. Combine (mix) the Tx audio from all other nodes (but not the local audio in), and route this to the node's local audio output.
B. For EACH connected node, combine the Tx audio from all OTHER nodes with the local node's Tx audio, and forward it to that remote node (without mixing in the Tx audio from that remote node).

This would imply that if there were 5 nodes connected, any of which might be transmitting at the same time (not an unusual case, e.g. for a VoIP conference call), each node would have to do a separate Tx audio mix for each of the other connected nodes, which does not seem scalable to larger network topologies. It seems like it would be simpler (but potentially more bandwidth-intensive) for each node to forward its Tx audio as a separate UDP stream to each connected node, and for other nodes to then forward those individual streams unmodified to the other connected nodes. In that case, if you have 5 FDX nodes that are all keyed, each of them would be receiving individual Tx audio streams from all connected nodes, each of which would then be forwarded to all other directly connected nodes. Each node then mixes its incoming Tx streams and sends the result to the local audio output. Is this how ASL, Asterisk and IAX actually work? Is there any documentation online that talks about this in more detail? (And maybe even has a few nice graphs and diagrams to go along with it?) I would imagine there are probably some books on Asterisk and/or IAX that cover this well, though they might not cover the ASL-related aspects and use cases. This would also imply that for large full-duplex conference calls, a lot of audio streams would have to be forwarded around, since the mix of streams has to be unique for each node, i.e. include all streams from all nodes while excluding the local Tx audio. There could potentially be some hybrid approach, though, depending on the connection topology. Are there any cases where audio streams would be combined before being forwarded to other nodes?
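Incidentally, (A) and (B) together are the classic "mix-minus" pattern from conference bridging: each participant hears the sum of every source except its own. A toy sketch of just the arithmetic (illustrative Python, not app_rpt code; I don't yet know whether ASL mixes like this or forwards the individual streams):

# Toy mix-minus: each node gets the sum of all Tx audio except its own.
# 'frames' maps node IDs to their current Tx audio frame (integer samples).

def mix_minus(frames: dict[str, list[int]], frame_len: int) -> dict[str, list[int]]:
    total = [0] * frame_len
    for samples in frames.values():
        for i, s in enumerate(samples):
            total[i] += s
    # One summation plus one subtraction per node, so the per-destination
    # mixes are cheap; clipping/limiting is ignored in this sketch.
    return {node: [t - s for t, s in zip(total, samples)]
            for node, samples in frames.items()}

frames = {"A": [100] * 4, "B": [10] * 4, "C": [1] * 4}
print(mix_minus(frames, 4))  # A hears B+C, B hears A+C, C hears A+B

If something like this is what happens, the expensive part is not the mixing arithmetic but getting every stream to every node in the first place, which is exactly the forwarding question above.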

  3. Other use cases would be repeaters and other links. Repeaters would be basically the same as (2.) above, but would also mix their RF Rx audio into the RF Tx audio; in some configurations it might be preferable for the repeater hardware itself to do that instead of the ASL node. And there might be other requirements. For example, some repeaters want analog RF to take precedence over ASL/IRLP/EchoLink/digital, so if there is an incoming RF signal, audio from ASL etc. is ignored. Others may do it the other way around and give ASL etc. precedence over analog RF. (The latter can be advantageous in the case of jammers. If someone is jamming a repeater by RF there's not much anyone can do (short of a foxhunt over 100's of square miles), and ASL users may not even be able to key the repeater. Whereas if ASL audio is given priority, the jammer can't do anything. And jamming from ASL itself is not an issue, because all the repeater admin has to do is block the specific node.) So this brings up some interesting configuration questions for repeaters, i.e. how do you configure a repeater node to allow audio from one set of nodes to take priority over another set of nodes? E.g. let's say you want to combine (mix) audio from all nodes, but only allow audio from a second set of nodes to be mixed in when the first set of nodes is not transmitting?

As for links, a repeater might have additional receivers connected via ASL over the internet or over point-to-point RF IP links, which it then either combines with its local RF Rx, or feeds to a voting system that selects the stronger signal. A link might be a simple one-directional simplex connection, or it could be a 2-way system combining multiple repeaters, i.e. a hub.

Thanks in advance to anyone who can provide specific answers, further insights or clarifications on any of these questions.

https://wiki.allstarlink.org/wiki/Special:AllPages

"What if you and a remote node key up at the same time?" Then you have keyed up at the same, or at overlapping, times. Both audio paths pass through the network. No audio is blocked.

"no loops in the network graph" - Loop protection is provided by the current software by checking the connection paths of the connectee and the connector before making the connection. If either has the same node numbers in its list of connections, the connection is refused/rejected.

I might suggest not writing a book of questions in one post, but making a single topic of directly related questions per post.

These posts are forever, and searching them may get you many answers to your questions. But not if so many different questions are in a single post that has a title different from the content.

The search feature of the AllStarLink Discussion Groups is at the top right corner.

Thanks for the reply. I took a look at those few links and none of them seem to have answers to my questions. Although my post was a few pages of text, I do think the questions are all on a similar topic, relating to the lower-level networking details of ASL. There are indeed many hundreds of posts on fixing small specific issues, but I figure if I can get a better understanding of the fundamentals, I'll be more likely to get it right the first time. Hopefully someone who has a broad fundamental understanding of how audio actually moves around at the IP-packet level, and is mixed/combined at various nodes, will see this post and can share some insights. Until then, I have still seen no documentation that covers these details, and there may be no way to figure it out other than experimentation. All the documentation out there seems to cover only specific pieces of ASL or Asterisk, but not the low-level details of how audio actually moves between nodes over IAX and UDP. I reviewed the IAX RFC and it covers how the control and audio streams move between 2 nodes, but doesn't go into any detail on what happens when 3+ nodes are connected.

To summarize my questions more succinctly:

  1. If 2 simplex nodes double, is there a way ASL can detect that and play a specific courtesy tone?
  2. Why with some ASL-linked repeaters does my node play a non-default courtesy tone sometimes?
  3. Why on some ASL-linked repeaters is there a bunch of echoing when 2 users double? What would be the best way to debug that?
  4. If 3 or more full-duplex nodes (not repeaters) are connected, does every node forward its Tx audio to all other connected nodes, or are there cases when audio streams from different nodes will be combined (mixed together) by a node before forwarding to another node?
  5. Is there a way to configure a repeater node to have audio from one set of nodes take priority over another set, e.g. only allow audio from one node if another node is not keyed up?

I suspect these are subtle but fundamental questions that probably only one of the main software devs at ASL/HamVOIP would know, or that I would have to read 100K lines of source code to figure out. If no one answers in the next few days, I'll try breaking this up into separate posts.

1 - No detection. Audio doubles and is not blocked, as I said.

2 - There are settings for local and remote CTs, and that may just be confusing you, based on how they are set when linked vs. not linked, and on the path a Tx comes from.
https://wiki.allstarlink.org/wiki/Courtesy_Tones
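For example, a hedged sketch in rpt.conf style - the parameter names are on that wiki page, but the tone strings and node number here are placeholders:

[telemetry]
ct1 = |t(350,350,50,2048)     ; tone segments are (freq1,freq2,length_ms,amplitude)
ct8 = |t(350,500,30,2048)(0,0,30,0)(350,500,30,2048)

[1999]                 ; your node stanza
unlinkedct = ct1       ; tone after unkey when no other nodes are connected
remotect = ct1         ; tone used for remote base operation
linkunkeyct = ct8      ; sent when a linked remote node unkeys - a multi-beep tone
                       ; here is a common reason you hear "extra" beeps only after
                       ; transmissions that came in over the link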

3 - Someone has a misconfigured node in the connection path. Likely EchoLink-related.

4 - All nodes connected in transceive mode will receive audio from any Tx. No audio is blocked. But obviously, if a connected simplex node is TXing, it would be hard for that user to hear, though the audio is sent to it just the same, because the network does not know the local config.

5 - The only priority mechanism that I know of is reducing the audio level of 'monitored' (Rx-only) nodes when a user is TXing:

linkmongain = -20
; Link Monitor Gain adjusts the audio level of monitored nodes when a signal
; from another node or the local receiver is received.
; If linkmongain is set to a negative number, the monitored audio decreases
; by the set amount in dB; if set to a positive number, it increases by the
; set amount in dB. The value is in dB. The default value is 0 dB.
However, it may be possible to create your own scheme using shell scripts and 'onevent' programming, with helpers like tonemacro etc.
https://wiki.allstarlink.org/wiki/Event_Management
Not for beginners but don’t let that discourage you.
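To make that concrete, here is a very rough sketch of the kind of glue one might try for the priority question. Every detail is an assumption to verify: the node numbers are hypothetical, *1/*3 assume the default [functions] ilink mappings, and you will need to adapt the parsing to whatever your version's 'rpt lstats' actually prints:

import re, subprocess, time

LOCAL = "1999"       # hypothetical local repeater node
PRIORITY = "2001"    # hypothetical node whose audio should win
SECONDARY = "2002"   # hypothetical node to drop while the priority node is keyed

def rpt(cmd: str) -> str:
    # Run an Asterisk CLI command, e.g. "rpt lstats 1999" or "rpt fun 1999 *12002"
    return subprocess.run(["asterisk", "-rx", cmd],
                          capture_output=True, text=True).stdout

def priority_keyed() -> bool:
    # ASSUMPTION: the link stats output marks which links are currently keyed;
    # check what your version prints and adjust this pattern accordingly.
    out = rpt(f"rpt lstats {LOCAL}")
    return bool(re.search(rf"{PRIORITY}.*KEYED", out, re.IGNORECASE))

linked = True
while True:
    if priority_keyed() and linked:
        rpt(f"rpt fun {LOCAL} *1{SECONDARY}")   # *1<node> = ilink,1 disconnect (default map)
        linked = False
    elif not priority_keyed() and not linked:
        rpt(f"rpt fun {LOCAL} *3{SECONDARY}")   # *3<node> = ilink,3 connect transceive
        linked = True
    time.sleep(0.5)

A cleaner version would hook the event management variables (RPT_RXKEYED etc.) instead of polling, but the idea is the same: demote or drop the lower-priority connection while the priority source is active.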

I might mention that the general theme of the network is that all nodes are equal: personal stations, repeaters, or anything new to come. A node is a node. All points equal.
What you do within your subnet portion (downstream) is up to you, as long as you do not interfere with the general network and adhere to some basic universal command truths.
