Calibrating audio levels

There does not seem to be much, if any, info on how to precisely set Rx and Tx audio levels on nodes (at least for those of us who do not have high-$ service monitors, which is probably 98% of AllStar users).

My thought (as an audio engineer and EE) is that fundamentally what we want the calibration to do is maximize dynamic range, i.e., ensure that (1) the output from the receiver does not clip the ADC in the radio interface, and that (2) the audio going to the transmitter does not clip or result in over-compression in the transmitter. This may be an oversimplification that will not achieve results as precise as using a service monitor and verifying actual deviation levels, but it’s probably good enough to get consistent levels through a wide variety of nodes, +/- a couple dB.

On the receive side this seems easy to adjust with the tune menu utility’s “Set Rx Voice Level (using display)” option: adjust the rxmixerset value until the level meter shows 5 kHz deviation with squelch open on the receiver – which in my observations is the loudest signal the receiver will output, and thus represents a full-scale signal. This seems to work very well and I’ve never had any issue with Rx levels being too low or high with this approach.

On the transmit side it’s a little more complicated, though, because most node transmitters don’t have a level meter for the audio input. So I do a parrot test: open the squelch on the receiver for a couple seconds, then check that the parroted signal is close to full scale. I do that by looking at the receive audio from another (“monitoring”) radio going into an audio recording interface and wave editor. I first adjust the volume on the monitoring radio so that I get a 0 to -1 dB (near full-scale) signal in the wave editor, with no clipping, when squelch is open on the monitoring radio. If I then do a parrot test, I can adjust the txmixer levels until the squelch burst peaks around -1 dB.
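For anyone who wants to script that check instead of eyeballing the wave editor, the -1 dB target is just a peak measurement relative to 16-bit full scale. A minimal sketch, assuming you’ve extracted the monitoring radio’s recording as raw 16-bit sample values (the function name is mine, not anything from ASL):

```python
import math

FULL_SCALE = 32767  # maximum positive value of a signed 16-bit sample

def peak_dbfs(samples):
    """Peak level of 16-bit PCM samples, in dB relative to full scale."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")
    return 20.0 * math.log10(peak / FULL_SCALE)

# A half-scale peak sits about 6 dB below full scale:
print(round(peak_dbfs([0, 16384, -12000]), 2))  # -> -6.02
```

Run that over the recorded squelch burst and adjust txmixer until the result lands around -1 dB.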

It occurred to me that this might be more precise if I could just look at a test tone, e.g. using the “Flash (Toggle PTT and Tone output several times)” option, or at the courtesy tone or Morse telemetry levels. Problem is, I can’t find any reference on those values.

Having calibrated a node to where a parrot of a squelch burst (white noise) is around -1 dB, the test tones now show up at exactly -6 dB, which seems like a sensible level for a test tone. Does anyone know what level those “Flash” test tones actually are? (i.e. relative to the max output of a 16-bit DAC, where 0 dBFS = 32767).

Similarly, in rpt.conf, courtesy tones are listed with amplitudes of around 2048 to 4096, but it’s not clear what the max amplitude value is. I experimented with some courtesy tones, e.g. ct2 = |t(660,880,150,2048)(660,880,150,4096)(660,880,150,8192)(660,880,150,16384)(660,880,150,32768)
and with my txmix gain set very low I do see a 6 dB jump between 2048, 4096, and 8192, but higher values result in distorted tones that are not much louder than the 8192-amplitude tone. So the next question is: what amplitude setting would actually result in 0 dB (full-scale) output from the DAC? It would seem it’s 8192, as I get a fairly clear tone there that’s 6 dB louder than at 4096, but if so, why was the arbitrary value of 8192 chosen as the max amplitude?

With txmix set higher the tones are louder, and I only see a 3 dB instead of 6 dB increase between amplitudes of 4096 and 8192, which indicates the transmitter is now doing some limiting - which is good, as you wouldn’t want to set your levels so that there is never any limiting. A reasonable amount of limiting increases intelligibility for listeners in a higher-noise environment (e.g. driving) and gives more consistent levels. To put this question another way: what max number of dB of limiting should be happening in a node transmitter for optimum audio quality across the network? I would guess 6-10 dB, but the actual numbers across a wide range of users may be way different.

Thus on the receive side things are simple - there is no limiter ahead of the ADC, so you just set rxmix for a “5 kHz” reading at full-scale input and you should be good. On the transmit side, though, you have limiting in the transmitter and possibly in app_rpt itself, so there is more room for adjustment.

To summarize all the above more succinctly: Is there a better/simpler way to optimize Rx and Tx audio levels for those who do not have a service monitor but would like their audio levels to be as consistent as possible with other AllStar users? There are some parrot tests that look at overall audio, but those depend on the mic gain of the radio you’re going into the node with, and on overall voice levels, which at any time could vary by +/- 6 dB or more. Thus any testing would need to look at just the node itself and use consistent test tones or noise, whether that’s open-squelch white noise, or tones/noise that could easily be provided from any PC audio out or from ASL itself. The Calibrating Audio Levels - AllStarLink Wiki page has some good insights, but it is targeted more toward repeater systems and high-$ test equipment, so it would be nice if that page were adapted to personal AllStar nodes and tests that can be done with nothing more than a computer audio interface and wave editor (which can do a lot). Those may not be accurate to <1 dB but might help ensure more consistent levels for everyone.

I should also mention that this question relates specifically to low-cost personal nodes using basic radios that have a mic in and speaker out, such that ASL is processing bandwidth-limited speaker audio and not doing any CTCSS encode or decode - e.g. a node using the simpleusb driver with default settings.

In the meantime, I suspect the majority of DIY node builders just set the levels where they sound good, and with a bit of fine-tuning over time that should get you within 3 dB of the optimal levels. I’ve also seen some mentions of being able to calculate deviation by looking at the Tx waveform in an SDR, so that might be interesting and I may give it a try, though most users probably don’t have an SDR, so a more general approach looking just at audio levels and dynamic range at various points might be more useful. Thanks, David

Hi David.

Thank you for a most comprehensive description of your audio calibration journey. AllStar is capable of such good quality audio, yet I hear many people with very low audio levels on the South West Net that links many nodes and repeaters together in Western Australia. Are you aware that there’s now a Parrot Node 55553 that not only responds with your own audio, but also provides a simple audio level report? It’s a beauty!

I would like to build an analogue audio level meter for my Node to help others with both transmit and receive levels… but haven’t worked out how as yet.


Mark Bosma
Beautiful South Bowning NSW
non impediti ratione cogitationis

Hi Mark, Thanks, and yes, I’m aware of that parrot audio test node. However, because it only looks at your outgoing audio, it can only tell you if the audio level going into your radio interface ADC is generally too low, or possibly overcompressed/distorted. Ahead of the ADC, however, there are 2 radios - the node receiver, and the radio you’re talking into. The various radios you talk into the node with could all have different mic gains or do different amounts of limiting, and your voice energy levels could be much different from other hams’. So while that test can give you useful results for a specific radio and specific voice, it cannot ensure the node itself is precisely (or even reasonably well) calibrated.

Thus, ideally, a more general calibration process could be defined. For example, if I put white noise into a radio at different levels using a test .wav file that started quiet and then went up in 2 dB steps each second, the output could be recorded into the node, and a script could look at the levels and determine how much gain adjustment was needed to bring the audio to the optimum point, with no clipping and no more than a specific amount of limiting. (This is just an example, which might be more complicated than necessary.)
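As a sketch of that idea, here’s one way the stepped-noise test file could be generated. The sample rate, step count, peak level, and filename are arbitrary choices of mine, not anything ASL prescribes:

```python
import random
import struct
import wave

RATE = 8000      # 8 kHz mono, matching narrowband voice paths
STEP_DB = 2.0    # level increase per one-second step
STEPS = 12       # quietest step starts 22 dB below the loudest
PEAK = 32000     # loudest step peaks just under 16-bit full scale (32767)

with wave.open("level_steps.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit samples
    w.setframerate(RATE)
    for step in range(STEPS):
        # Gain relative to the final (loudest) step, in linear terms.
        gain = 10.0 ** ((step - (STEPS - 1)) * STEP_DB / 20.0)
        amp = PEAK * gain
        frames = b"".join(
            struct.pack("<h", int(random.uniform(-amp, amp)))
            for _ in range(RATE)
        )
        w.writeframes(frames)
```

Play that into the node’s user radio, record the parrot, and the analysis script just has to find which step first clips or stops rising by 2 dB.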

In reality the node’s Rx level is not hard to set just by calibrating open squelch to “5 kHz” deviation - or maybe full scale - on the tune utility meter, which is the most important calibration step since that’s the source of the outgoing audio. However, the tune utility’s level meter is not measuring actual deviation of anything, so the “5 kHz” reference seems somewhat arbitrary in this context; it might be more helpful if it showed the actual % or dB value of the input signal relative to full scale. For my node builds I may be better off adjusting the max known input signal to the max level on the tune utility level meter. (On an RT85 HT, open squelch is quite loud and works well for this, even though it ends up with more of a pink-noise spectral distribution due to deemphasis.)

Ultimately it seems you now just have to go listen on a bunch of different nodes and make sure your parroted audio (when not connected) is about the same level as the average (and good-sounding) levels you hear on other nodes, and that the node’s local Tx audio is not too quiet or too loud compared to local analog RF repeaters. As I continue to build nodes and do more of this testing, I suppose a pattern will emerge as to what the optimum settings are, and these settings are easy enough to adjust as needed over time. But I do believe it should not be hard to define a simple but fairly precise guide on how to calibrate the levels of a node without needing to make any subjective comparisons.

The node’s Tx level is not as important because only you are going to hear that, so that can be adjusted any time as needed to personal preference.

Or maybe there are now cheap service monitors/deviation meters available. The NanoVNAs are great and brought the cost of a VNA/antenna analyzer from $500+ down to more like $100; maybe there’s something similar that can measure FM deviation?

Since I do have an SDR, the quickest and most precise thing for me to do might be to just go with that and do the bit of math as described here: According to that, all you have to do is put a tone into the radio you want to measure deviation on, and vary the tone frequency until you see a null on the RF fundamental (i.e. a Bessel zero). Then multiply the tone frequency by 2.4048 and voila, you have the deviation. I’m going to try that out soon and see how it goes. My Qu-16 mixer has an audio function generator built in, so this should be easy to do. This would tell me exactly what the input sensitivity of the node transmit radio is, i.e. in units of kHz/volt (prior to any limiting, i.e. at lower signal levels). I could then very easily adjust the Tx DAC gain to put out the exact voltage needed to get the desired deviation. This would not be as helpful on the Rx side, but that’s not an issue since ASL does have the tune utility level meter.
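The arithmetic of the Bessel-null method is trivial once the null is found: the carrier first disappears at a modulation index of about 2.4048 (the first zero of the Bessel function J0), so deviation is just that constant times the tone frequency. A quick sketch (function name is mine):

```python
# First zero of the Bessel function J0 is at modulation index ~2.40483;
# at a carrier null, peak deviation = 2.40483 * tone frequency.
BESSEL_J0_FIRST_ZERO = 2.40483

def deviation_hz(tone_hz):
    """Peak deviation implied by a carrier null at the given tone frequency."""
    return BESSEL_J0_FIRST_ZERO * tone_hz

# Example: a carrier null with a ~2079 Hz tone means ~5 kHz of deviation.
print(round(deviation_hz(2079.2)))  # -> 5000
```

So for a 5 kHz deviation target, you raise the tone level until the carrier nulls with the tone frequency set near 2.08 kHz.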

Thanks David.

I do vaguely recall the Bessel zero method from the late 70s when I studied this sort of thing… but couldn’t now recall it as an option for measuring deviation - memory’s faded due to a lack of use!

I agree with you about the NanoVNA - isn’t it an amazing bit of kit for $100! I’ve had so much fun doing antenna and filter building with one.


Mark Bosma
Beautiful South Bowning NSW
non impediti ratione cogitationis


The only adjustment that concerns anyone besides yourself is RX level. That is how much audio you are giving the network.

TX level is how much audio you perceive coming from the network (other users) or your node itself (announcements).

If you are lacking a service monitor, you are lacking the tool needed to set your TX level (and RX level) absolutely properly; however, you can do a surprisingly good job without one.

To set RX level “close enough”, look at the virtual display in SimpleUSB tune as you speak robustly. It should bounce between 3-5. You are correct that it is not measuring deviation. It is measuring the level sent to the network for the deviation you are transmitting, assuming a match between the receiver’s and the user’s narrow/wide setting. If for some reason you are using narrowband radios on each end, still use the “3-5 on voice peaks” method.

“B” (Boost) is almost always a bad idea, and it’s easily set on, as it’s the default in the HamVOIP setup.

If you have a service monitor, then set the generator output to 3.7 kHz deviation (including 700 Hz of CTCSS deviation) and set RX level to show “3”. You always have to be aware of the CTCSS level you are sending. Check again at 5.7 kHz to see “5”. The reason for this is that the AllStar system is not hearing the CTCSS you are sending (it’s already gone from the audio), but your service monitor is showing the composite level.
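In other words, the generator level is just the target voice deviation plus the CTCSS deviation the monitor also sees. The bookkeeping is trivial but worth making explicit (the function name and the 0.7 kHz default are mine, matching the figures above):

```python
def generator_deviation_khz(voice_khz, ctcss_khz=0.7):
    """Composite deviation a service monitor must generate so the node
    sees the intended voice deviation after CTCSS is filtered out."""
    return voice_khz + ctcss_khz

# The "3" and "5" meter points correspond to 3 and 5 kHz of voice
# deviation; with 0.7 kHz of CTCSS the generator is set 0.7 kHz higher.
print(generator_deviation_khz(3.0))  # -> 3.7
print(generator_deviation_khz(5.0))  # -> 5.7
```

If your CTCSS deviation differs from 700 Hz, substitute your actual value.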

Speaking of which, I am concerned that you mentioned open squelch as a level reference. You are not running carrier squelch on your node, are you? If so, you can easily put noise or trash out to the network and barely be aware of it. Always use CTCSS on your node receiver.

If you don’t have a service monitor but insist on an instrument-based approach, use the Bessel-function method to send a calibrated tone level to the node (again factoring in the CTCSS level). Basically, modulate your user-TX with a fixed tone, measure its deviation using the SDR or whatever, then get the node’s RX level to agree.

You can play with your TX level to your heart’s content; we won’t hear it on the network. However, the “toggle test tone” function will play a test tone at 3 kHz deviation, which you can measure locally using the SDR (if you encode CTCSS on your TX, remember to keep track of that value).

Pres W2PW

Hi all,

Interesting discussion.

Just to clarify, what parameters determine the audio level sent from a local node to the other nodes via the network?

Is it the rxmixerset setting? It controls the audio level coming from the radio receiver to the input of the A/D converter.
Is it ALSO setting the audio level sent to the network, or is there a fixed level somewhere in the software?

Thanks for any clarifications.

73 Patrick TK5EP.

Hi Pres, Thanks for the good overview of the process; that does clarify things, and it sounds like adjusting for voice peaks between 3-5 kHz on the tune utility meter should get things to within a few dB of where they need to be. As a node builder (see my design at this link) I would like to be sure all nodes I build, or that others build with my general [low-cost yet high-quality] design, are as precisely optimized as possible. Though I suspect that in any case the end user should probably always run the tune menu and make sure that with their voice and their other radio(s) they get the 3-5 kHz levels on the meter, and have a simple guideline on how to properly adjust the Rx and Tx levels.

To answer your question, yes, the node radio does indeed have Tone Squelch enabled, and across many node builds and months of use and testing that has always worked very well. BTW, I do sometimes hear it suggested that nodes should use raw discriminator audio and do CTCSS decode and squelch within the usbradio driver. That is indeed highly preferable for repeater applications, as the unfiltered audio can then be repeated to the transmitter output with no loss of quality (preventing extra deemphasis and then preemphasis of the repeated signal), and will likely provide better squelch performance than many receivers do. But for the case of a simple, low-cost, portable personal node with good-quality HTs and the simpleusb driver (or the usbradio driver with no CTCSS, DSP, etc. settings enabled), excellent quality can be obtained with no disadvantages or loss in audio quality. I’ve done lots of testing of both drivers and found no difference in frequency response, SNR, or audio quality. And the Tone Squelch in the RT85 HTs is quite good, resulting in short, fairly quiet squelch tails.

And to Patrick, yes the rxmixerset setting is the one and only setting that sets the gain of the CM1xx IC input ahead of its ADC. That audio (in a non-repeater app ie. personal node) is then routed only to other connected nodes / IAX/SIP apps. You would only hear that audio on your local RF if you do a parrot test.

This is not quite correct, as the rxboost parameter ALSO sets the audio level going into the ADC.

There are other parameters adjusting the levels sent to the transmitter and received from the receiver:
txmixaset, txmixbset, txctcssadj, rxvoiceadj, rxctcssadj.

But I don’t know how the level sent to the network is adjusted.

I will make some tests to understand this.

73 Patrick.

rxboost would only need to be enabled for radios with extremely quiet audio outputs, which I have not run into - which is why I did not mention it; it’s rarely needed and should generally be left off. And if it were needed for a particular radio, it probably would have been enabled when the node was first set up, and shouldn’t need to be toggled on/off when making finer adjustments to Rx audio levels. So, as relates to general fine-tuning of the Rx audio levels sent to other nodes, rxmixerset is the only config needing to be adjusted.

txmix[a/b]set controls the transmit gain of the 2 CM1xx DAC channels.

I don’t use txctcssadj, rxvoiceadj, or rxctcssadj in the nodes I build, so I can’t say much about those, other than that they are more applicable to repeaters and the use of flat discriminator audio than to low-cost personal nodes. I would encourage you to find out more about any of these settings, though, and post what you find. There are indeed very many settings in the app_rpt code that are poorly documented or not documented at all. But in the context of this thread, rxmixerset and txmix[a/b]set are all that’s needed to properly calibrate the Rx and Tx voice levels. In the context of a repeater, where you would also be doing CTCSS and squelch in usbradio, you probably should be using a service monitor and following the “Calibrating Audio Levels” page on the ASL wiki.

Yes, this is the main complaint about the ASL site. :grimacing:
There is no “in depth” manual or documentation that could help us understand what’s under the hood, so we can only speculate and/or test.

You probably are not modifying the txctcssadj, rxvoiceadj, and rxctcssadj parameters, as they are “under the hood”.
But if you use the radio tune rxnoise, rxvoice and rxtone commands, then you’re acting on these parameters, which are saved in a .conf file.

I’m lucky enough to have all the equipment needed to measure what’s going on on the radio side.
I’m currently testing on a UHF repeater using a homemade URI, and a duplex radio that has both a flat RX output and a flat TX input.
All the audio is handled by the usbradio module.

The auto calibration procedures (radio tune rxnoise, etc…) do give very good results.
Setting rxmixerset to different values and repeating these procedures always gives the same deviation on the local repeater.
So now, I need to see what happens on a remote node/repeater.

From what I have already noticed, this parameter is not linear at all. I.e., with my original setting of 125 I get a deviation of 3 kHz, and with a setting of 1, still 2.2 kHz.
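To put that non-linearity in dB terms (deviation is proportional to the audio level, so the ratio converts directly), a setting swing from 125 all the way down to 1 produced under 3 dB of level change. A quick check (function name is mine):

```python
import math

def level_change_db(dev1_khz, dev2_khz):
    """Audio level difference in dB implied by two deviation readings."""
    return 20.0 * math.log10(dev1_khz / dev2_khz)

# rxmixerset went from 125 down to 1, yet deviation only dropped from
# 3.0 to 2.2 kHz - under 3 dB across nearly the whole control range.
print(round(level_change_db(3.0, 2.2), 2))  # -> 2.69
```

That strongly suggests the 0-999 setting does not map to gain in any linear or simple-log fashion.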

But I have to find a way to monitor a distant repeater’s TX frequency deviation while I make more tests on my local node, to see how the rxmixerset parameter (and other settings) acts on the audio level at distant nodes.

So, the question is: is the auto calibration procedure correcting not only the local audio levels but also the network levels?

Maybe someone already has the answers?

73 Patrick TK5EP.

It would be a great senior/thesis project for an EE/DSP student to go through all the usbradio driver code and fully document how it works and what all the possible settings are, but that would definitely be a time-consuming project.

And in defense of ASL, they actually have very good documentation covering a huge range of topics and all the main configs most people will ever need, but given the power and flexibility of the ASL code it would be unreasonable to expect perfect documentation. It is a free and open-source project, after all, that works way better and does way more than any alternative. So it’s up to us as users, as we run into questions or issues, to document what we find and contribute it to the ASL wiki or the forum. With any project, the code itself is always the best documentation, and it’s not hard to compile ASL, add debug print outputs, and figure out what it’s doing (if you have a software background, or sufficient time and patience to figure it out).

The rxmixerset is indeed not very linear. I have to set it to vastly different values with usbradio vs. simpleusb, and with CM108AH / CM119A / CM108B interfaces. And sometimes a small change in the value makes a big difference, while larger changes make very little difference. It would be great to have a document explaining everything about these settings: what the tune menu does, what those values you mentioned do, and how they are calculated. As an audio engineer I can say that having all parameters use real physical units (e.g. actual dB values instead of arbitrary 0-1000 numbers) would be much clearer, but that would require all possible I/Os to be defined in advance and a more structured software approach, which is not easy to do and would require major code changes. Ultimately you just have to do an end-to-end calibration, which is always a good idea anyway. Anyhow, by the time you put in gain-structure diagrams of the whole system covering every audio interface variant, and then properly covered the usbradio and tune menu DSP settings, a proper manual could easily be the size of a book.

Anyhow, to answer your question: the tune menu, as you know, has options for both Rx and Tx level settings, and rxmixerset is indeed the only level parameter that needs to be adjusted to control the audio levels sent to remote nodes. It’s possible some other parameter could also affect that to some small degree in some cases, but rxmixerset does in fact give you full control over the Rx audio levels, and thus the outgoing network audio levels.

Very interesting discussion, but what about the case where the radio is a remote base (duplex=0)? In this case, the TX setting determines what goes out on the remote base radio’s RF TX. For example, if the remote base is set to the frequency of an analog FM repeater, then I’d like the audio to be set such that when using AllStar, the audio level is comparable to any other (non-AllStar) ham rigs that might be using the analog FM repeater.

If performed correctly, the audio will be on the standard, making no difference what the duplex mode is, or whether it is a remote base, repeater, or just a link radio - as it would be (or should be) with any controller.

Have you considered adjusting the level in the radio to make it correct?
It sounds like you want to adjust the software for inadequacies of the radio, as many of the modern cheap radios have. You might be able to get away with it if it is not far off.

If you look in the file that is config’d from the process, you will see numbers that can be adjusted manually if you wish.

But note, adjusting these to fit improper audio at the radio will likely mal-adjust your network audio.
Then you may be working on a gopher game.

Everything has consequences, which is why I would suggest fixing the radio audio and being correct for everything - as long as your audio tune was performed correctly…

And there are ducking controls you can read up on in the wiki.