PL detect with CM108 or CM119 fobs?

For those who have built their own audio interface out of a CM108 or CM119 fob: is there a way to have PL detect? I need both COS and PL detect.
Or is there a workaround that uses a pin on the Pi?

Yes, but it depends on the device type. External CTCSS is either GPIO2 or VOLUP.

I am assuming you want internal detection…

https://wiki.allstarlink.org/wiki/USBRadio_Channel_Driver

part of that wiki…
txctcssdefault = 100.0 ; Default TX CTCSS frequency, any frequency permitted
rxctcssfreqs = 100.0 ; RX CTCSS frequencies in floating point. This list is comma delimited. Frequency must be in table.
txctcssfreqs = 100.0 ; TX CTCSS frequencies in floating point, any frequency permitted. This list is comma delimited. Will follow the RX CTCSS Frequency.
;rxctcssoverride = 0 ; Set to 1 or yes to start out in carrier squelch mode

carrierfrom = dsp ; no,usb,usbinvert,dsp,vox
; no - no carrier detection at all
; usb - from the COR line on the USB sound fob (Active High)
; usbinvert - from the inverted COR line on the USB sound (Active Low)
; dsp - from RX noise using DSP techniques
; vox - voice activated from RX audio

ctcssfrom = dsp ; no,usb,dsp
; no - no CTCSS decoding, system will be carrier squelch
; usb - from the CTCSS line on the USB sound fob (Active high)
; usbinvert - from the inverted CTCSS line on the USB sound fob (Active low)
; dsp - CTCSS decoding using RX audio in DSP.
; rxdemod option must be set to flat for this to work.
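Pulling those options together for the original question: if your fob's CTCSS detect line is wired to the GPIO2/VOLUP pin you can use ctcssfrom=usb; otherwise DSP decode does it internally. A minimal sketch of the relevant lines might look like the following (the pin wiring and active levels here are assumptions; check your own interface):

carrierfrom = usbinvert ; COS from the fob's COR input (active low in this example)
ctcssfrom = usb ; hardware PL detect wired to the fob's CTCSS input (GPIO2/VOLUP)
; ...or, for internal DSP decode instead of a hardware detect line:
; carrierfrom = dsp
; ctcssfrom = dsp
; rxdemod = flat ; required for DSP CTCSS decode
rxctcssfreqs = 100.0
txctcssfreqs = 100.0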

There has been discussion about sensing and controlling PTT/COS from alternate devices. With a short Python script I have been able to detect these on different computers and bring the results into the rpt.conf section; a minimal sketch of that kind of script appears after the list below. This requires no separate interface board.

  1. The AllStarLink wiki has a good write-up on Event_Management. Following this, I can trigger event variables and show their state.

  2. A couple of problems persist. First, which variable is associated with COS and which with PTT? It appears that it may be RPT_ETXKEYED = COS and RPT_TXKEYED = PTT.

  3. Following KK9ROB's nice write-up, the transmitter does not activate when RPT_KEYED is set.

  4. I am using SimpleUSB, and that is likely where the breakdown occurs.

  5. Is there a different driver, or what needs to be done so that rpt.conf can set the variables to activate the equivalent of COS/PTT?

  6. Timing and other issues can be handled in software, if they are adequately described.

With this simple technique, AllStarLink is totally separated from the hardware. This is only the control lines, not the audio, which is a different critter.
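For reference, here is a minimal sketch of the kind of COS-monitoring script described above. It assumes a Raspberry Pi with the COS line wired to BCM GPIO 17 (active low) and the RPi.GPIO library; the pin number, active level, and the hand-off comment are placeholders for illustration only:

#!/usr/bin/env python3
# Minimal COS monitor sketch: watch a GPIO pin and report state changes.
# Assumptions: Raspberry Pi (pre-Pi 5), COS wired to BCM GPIO 17, active low.
import time

import RPi.GPIO as GPIO

COS_PIN = 17        # BCM numbering; placeholder, use whatever pin you wired
ACTIVE_LOW = True   # many COS lines pull to ground when the receiver is open

GPIO.setmode(GPIO.BCM)
GPIO.setup(COS_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def cos_active() -> bool:
    level = GPIO.input(COS_PIN)
    return (level == GPIO.LOW) if ACTIVE_LOW else (level == GPIO.HIGH)

last = None
try:
    while True:
        state = cos_active()
        if state != last:
            print("COS", "ACTIVE" if state else "CLEAR")
            # Hand-off to app_rpt would go here; as noted later in the thread,
            # the RPT_* variables are status readouts, not control points, so a
            # real hook needs channel-driver support.
            last = state
        time.sleep(0.02)  # 20 ms poll; edge detection could be used instead
finally:
    GPIO.cleanup()

Note that RPi.GPIO does not support the Raspberry Pi 5, which ties into the GPIO-mapping changes mentioned later in the thread; python3-libgpiod is the usual replacement there.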

I appreciate the fabulous work on ASL3 and am looking to expand its application with more devices.

This would require a different and new channel driver that does not exist today. chan_simpleusb and chan_usbradio are tightly coupled with the hardware. The variables you’re referring to are status variables, not control points.

The topic of using ALSA/PipeWire plus general GPIO comes up frequently. It’s possible, but that code would need to be prioritized and developed.

I believe that is a status readout and not a settable item (no soft toggle).
Otherwise this would have been solved a long time ago.

The way to solve this may best lie with an I2C device, as was intended for a Pi channel driver that never made it to production.

Raspberry Pi and the DMK Engineering PITA board

But something should be done soon, as we will run out of options eventually.
We have been using the same URI scheme, based on a sound fob, for more than 15 years, and that sound fob was not a new thing even then.
I use a PC with a parallel-port interface for signalling, so I can use more sound fobs than those using a URI.
But even PC motherboards with parallel-port interfaces have dried up; I keep a supply of them for myself, so I’m likely good for as long as I do this. ‘USB printer port’ devices will not work, as they cannot be bit-banged.
But there is this as well.

Or to allow

RPT_KEYED

to be an addressable, settable item. Then we can write our own external triggers for it.
I prefer the latter; then the base code is not so ‘dated’.

But to use that event, you have to code a phone TX event (*99) to engage the TXer (COS), and I don’t think you get a CT with that method, but it’s been a long time since I tried it.
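For anyone following along, the *99 phone-TX function Mike mentions is normally just a [functions] entry in rpt.conf mapped to a control operator (COP) command; a typical sketch is below. The COP number shown (6, PTT in phone mode) is from memory, so verify it against your own rpt.conf and the app_rpt function documentation:

[functions]
99 = cop,6 ; PTT (phone/phone-portal mode only)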

Mike,
Completely agree. N8EI just responded with what I had surmised about channel drivers.

I too looked at the PITA and similar options. I have used a different I2C device on other projects. It is very doable, but it requires another piece of hardware, which would itself become age-limited, and most importantly we are still back to a driver problem: a new driver is needed.

Bit-banging is also doable, but again needs an updated driver. After head-banging rather than bit-banging, it appears that a software solution with a new driver, as pointed out by N8EI, is the long-term answer. That is non-trivial and also outside my wheelhouse.

If that does become a project, I would be willing to work on Python or Bash script for the interface back to rpt.conf. I have several iterations that I have already experimented with.

The team has brought along a superb package with ASL3. A software solution would be the ribbon that ties together a long-term resolution and opens a myriad of expansion opportunities.

Thanks everyone for your observations, inputs, and information.

Not needing to use the OSS module that provides direct-to-device access of /dev/dsp is on the planning list. In fact, it’s rapidly moving UP the list given the kernel problems we’re having with stuttering audio inside snd_pcm_oss. If we use ALSA/PulseAudio/PipeWire, though, there are more dependencies (which is just a thing to consider, not necessarily bad), and there are questions about whether we’d retain the full control of the device that we have now. You’d also need to integrate a way to set things like alsamixer levels in a way that doesn’t drive users insane. Finally, if you want a flexible GPIO system, a whole framework would need to be built to match functions up with general-purpose GPIO, which is done very differently from implementation to implementation, and even within one implementation. See the mess the Raspberry Pi 5 unleashed on the formerly very stable GPIO mappings as an example.
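To make the mixer-levels point concrete, here is roughly the kind of thing a level-setting helper has to do today from outside the driver. This is only a sketch; the ALSA card index (1) and the control names "Speaker" and "Mic" are assumptions that vary by device (check `amixer -c <card> scontrols` on your own fob):

#!/usr/bin/env python3
# Sketch: set playback/capture levels on a USB fob by shelling out to amixer.
# Assumptions: ALSA card index 1 and simple controls named "Speaker"/"Mic",
# which C-Media fobs commonly (but not always) expose.
import subprocess

CARD = "1"

def set_level(control: str, percent: int) -> None:
    # e.g. runs: amixer -c 1 sset Speaker 90%
    subprocess.run(
        ["amixer", "-c", CARD, "sset", control, f"{percent}%"],
        check=True,
        capture_output=True,
    )

if __name__ == "__main__":
    set_level("Speaker", 90)  # TX audio toward the radio
    set_level("Mic", 20)      # RX audio from the radio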

It’s all very complicated to get right in a way that’s clean and usable by the vast majority of people. Making shell-outs to scripts and programs would not be the way to do that sustainably.

Well, the channel driver for I2C is there in that PITA board package; it’s called chan_pi, I think.
But it did not work. I only had the prototype board a short while and quickly concluded that the I2C libraries were not present, or were in the wrong location, when it was compiled. But with all the errors on the audio side of it, it was hard to tell.
It could also have been missing the pull-up voltage on the I2C bus; I was not measuring any change under many configurations…

Eventually, I will look at it again, but that will likely be more than a year out.
Channel drivers in Asterisk have a particular format and rules, and the latest Asterisk takes a lot of study. But the source code is out there if you think you can adapt it.
A challenge for someone who does not do this type of thing all the time.

chan_pi was removed from the code base a while back.

And it is still posted on git for reference.
It is also on the v1.01 image.
And if all else fails, I have a copy if someone wants to tackle this.

And for proper reference, I am not suggesting anyone make that driver work.
It will not. Not worth the time.
But it is a guide to making the I2C work, and if you replace the audio section with something updated, you have a modern, workable driver that does not require hacking a sound fob. Instead it uses an I2C interface, which gives you more signaling in/out than even a parallel port presents, and it can be done with a PC or a Pi; it’s just that the Pi has built-in I2C if enabled.
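As a rough illustration of how little glue the I2C signaling side needs, here is a sketch that reads COS/CTCSS inputs and drives a PTT output through an I2C GPIO expander. The MCP23017 part, its 0x20 address, the bit assignments, and the smbus2 Python library are all assumptions chosen for the example, not anything chan_pi or the PITA board specifically used:

#!/usr/bin/env python3
# Sketch: COS/CTCSS in, PTT out, via an I2C GPIO expander (assumed MCP23017
# at address 0x20 on /dev/i2c-1). Register addresses are the MCP23017
# BANK=0 map; adapt for whatever expander you actually use.
from smbus2 import SMBus

I2C_BUS = 1
ADDR = 0x20

IODIRA = 0x00   # port A direction register (1 = input)
IODIRB = 0x01   # port B direction register (0 = output)
GPPUA  = 0x0C   # port A pull-up enables
GPIOA  = 0x12   # port A input register
OLATB  = 0x15   # port B output latch

COS_BIT, CTCSS_BIT = 0x01, 0x02   # port A bits (wiring assumption)
PTT_BIT = 0x01                    # port B bit (wiring assumption)

def read_inputs(bus: SMBus):
    """Return (cos_active, ctcss_active); lines are active low with pull-ups."""
    porta = bus.read_byte_data(ADDR, GPIOA)
    return (not (porta & COS_BIT), not (porta & CTCSS_BIT))

def set_ptt(bus: SMBus, keyed: bool) -> None:
    """Drive the PTT output bit on port B."""
    bus.write_byte_data(ADDR, OLATB, PTT_BIT if keyed else 0x00)

with SMBus(I2C_BUS) as bus:
    bus.write_byte_data(ADDR, IODIRA, 0xFF)   # port A: inputs (COS/CTCSS)
    bus.write_byte_data(ADDR, GPPUA, 0xFF)    # enable pull-ups on port A
    bus.write_byte_data(ADDR, IODIRB, 0x00)   # port B: outputs (PTT)

    cos, ctcss = read_inputs(bus)
    print(f"COS={cos} CTCSS={ctcss}")
    set_ptt(bus, False)  # the decision to key belongs to the channel driver

The hardware access really is that small; as discussed above, the hard part is the channel driver that would own these lines and the audio path.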

Definitely not in my wheelhouse.

Mike and N8EI,
I agree on that driver. I looked at PITA a couple of years ago.

A couple of days ago there was discussion about a “software driver” that allows COS/PTT to be set from rpt.conf.
On thinking more about it, that may be the ultimate solution. It would decouple the ASL3 controls from the hardware, and hardware manipulation would then be much more flexible.

For example, the I2C code just to drive the chip is not overwhelming and would not require understanding the deep, dark secrets of Asterisk, just passing a variable. Similarly, the ASL3 / Asterisk gurus could massage that code without having to get up to speed on different hardware.
It is never as simple as it sounds, but it seems a way to overcome this long-standing problem with ASL interfaces. I appreciate all the continuing efforts.