DIAL 8.5 and VM's on ProxMox

For those of you who are into this sort of thing, DIAL 8.5 has been
installed, configured, and running successfully with no apparent lag
on the latest stable version of Proxmox. Currently, it is a radio-less
node.

The test environment is a cluster of three Dell R310 servers (nodes),
each with 16GB+ RAM, RAID 1 system drives, some additional volume drives,
10Gb fiber storage network links, and 1Gb network connections.

Each of the three nodes has other VMs running on it as well, running
things like BOINC and other silly things for 'stress' testing, as the
system is being evaluated for a production environment.

To my understanding, Proxmox is not a load-sharing/proxy/cloud-computing
network: each VM is hosted/homed on a single node, but has "live" copies
on the other nodes in the cluster. Should a node suffer both power
supplies failing, the CPU fan squealing to a stop, or the RAID controller
dying, within a minute or so another node will spin up the live copies
of the VMs that were on the now-dead node.

So far, there has not been any noticeable lag, jitter, delay, or
anything else negative, much to my surprise. This is VM 101 for me; I
had never messed with virtualization until now.

If anyone wants to assist in testing, you are invited to connect to node
29567 on Tuesday night, 7PM Central/8PM Eastern, for our weekly AllStar
Technical Net.

I am calling the net tomorrow night; the topics are advanced ASL node
configurations, and some other stuff I have to be reminded of.

There is chatter throughout the day, more so at night, so anyone is
welcome to connect anytime!

Don't be a square, connect to there!

~Benjamin, KB9LFZ

To REALLY tell how well any environment is working, you need to check the
timing quality as reported by the DAHDI kernel drivers. Many environments
(particularly VPSes!) do rather poorly in this area. Poor results
typically mean audio choppiness and poor telemetry timing (e.g., bad CW
or tone timing), particularly where the server is used as a hub with many
users connecting, needing to mix many audio streams. Note that this is an
Asterisk thing, not specifically AllStar. Many messages have been written
about this in other Asterisk-related forums; Googling will find many
results.

To test the timing quality, use the dahdi_test command. Jitter in the
timing results, or accuracy below about 99.8%, means less-than-perfect
performance and potentially mediocre results.
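As a rough illustration, the per-pass percentages that dahdi_test prints can be parsed and checked against that 99.8% rule of thumb with a short script (a hypothetical helper written for this post, not part of the DAHDI tools):

```python
import re

def check_dahdi_output(text, threshold=99.8):
    """Parse the per-pass percentage lines from dahdi_test output and
    return (worst, average, bad_passes), where bad_passes lists every
    pass that fell below the threshold. Feed it only the per-pass lines,
    not the Best/Worst summary lines."""
    passes = [float(m) for m in re.findall(r"(\d+\.\d+)%", text)]
    worst = min(passes)
    average = sum(passes) / len(passes)
    bad = [p for p in passes if p < threshold]
    return worst, average, bad
```

A non-empty `bad` list means at least one pass dipped into the territory where choppy audio and bad telemetry timing become likely.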

Here is a sample run from my dev RPi3 system with 3 nodes (2 USB audio, 1
pseudo) active:

[root@alarmpi-kb4fxc asterisk]# dahdi_test -c 100
Opened pseudo dahdi interface, measuring accuracy...
99.992% 99.990% 99.994% 99.994% 99.995% 99.996% 99.994% 99.994%
99.995% 99.993% 99.996% 99.994% 99.994% 99.994% 99.995% 99.996%
99.993% 99.993% 99.994% 99.995% 99.995% 99.994% 99.994% 99.994%
99.994% 99.995% 99.993% 99.994% 99.994% 99.994% 99.995% 99.993%
99.994% 99.993% 99.994% 99.995% 99.993% 99.994% 99.994% 99.994%
99.996% 99.993% 99.994% 99.994% 99.995% 99.996% 99.994% 99.994%
99.994% 99.994% 99.996% 99.994% 99.994% 99.993% 99.994% 99.995%
99.995% 99.993% 99.994% 99.995% 99.995% 99.995% 99.994% 99.994%
99.994% 99.994% 99.994% 99.994% 99.994% 99.995% 99.995% 99.994%
99.994% 99.993% 99.994% 99.996% 99.993% 99.995% 99.994% 99.995%
99.996% 99.993% 99.994% 99.994% 99.995% 99.996% 99.994% 99.994%
99.994% 99.994% 99.996% 99.994% 99.994% 99.995% 99.994% 99.996%
99.994% 99.993%
--- Results after 98 passes ---
Best: 99.996% -- Worst: 99.990% -- Average: 99.994247%
Cummulative Accuracy (not per pass): 99.994

73, David KB4FXC


···

On Mon, 27 Nov 2017, Benjamin Naber wrote:

_______________________________________________
App_rpt-users mailing list
App_rpt-users@lists.allstarlink.org
http://lists.allstarlink.org/cgi-bin/mailman/listinfo/app_rpt-users

To unsubscribe from this list please visit http://lists.allstarlink.org/cgi-bin/mailman/listinfo/app_rpt-users and scroll down to the bottom of the page. Enter your email address and press the "Unsubscribe or edit options button"
You do not need a password to unsubscribe, you can do it via email confirmation. If you have trouble unsubscribing, please send a message to the list detailing the problem.

Hi Benjamin,

I'm not sure of the specifics of your environment, of course, but if there are multiple live copies of a virtual machine, then the redundancy/failover features are typically within the guest virtual machine or the specific services it offers, such as primary and secondary DNS servers, for example.

In other cases, where the redundancy/failover features are within the hypervisor (whether it be Proxmox, ESXi, Hyper-V, etc.), the key is shared storage. In the event of a critical failure of an individual host (an individual server or a server blade in a chassis), the hypervisor detects the failure and automatically restarts the single instance of the running virtual machine on another host, with minimal downtime.

Running virtual machines can also be migrated, manually or automatically, from one host to another to distribute the load more evenly across the cluster, with virtually (haha) zero downtime. I've pinged virtual machines continuously while they were being migrated and never lost a ping!

More info:

Maybe this explanation helps someone...

73
Will, KE4IAJ
TARG AEC

···

-----Original Message-----
From: App_rpt-users [mailto:app_rpt-users-bounces@lists.allstarlink.org] On Behalf Of David McGough
Sent: Monday, November 27, 2017 10:07 PM
To: Users of Asterisk app_rpt <app_rpt-users@lists.allstarlink.org>
Subject: Re: [App_rpt-users] DIAL 8.5 and VM's on ProxMox


David,

your message was put in my "Worthy Keeping" folder!

While on this subject, are there any other tests?

These were the results of: dahdi_test -c 100

me@KB9LFZ-2:~# dahdi_test -c 100
Opened pseudo dahdi interface, measuring accuracy...
99.999% 99.997% 99.609% 99.997% 99.998% 99.615% 99.995% 99.608%
99.999% 99.615% 99.608% 99.613% 99.608% 99.999% 99.994% 99.970%
99.645% 99.998% 99.608% 99.998% 99.996% 99.996% 99.998% 99.996%
99.998% 99.996% 99.999% 99.615% 99.611% 99.613% 99.995% 99.608%
99.997% 99.612% 99.608% 99.614% 99.608% 99.615% 99.997% 100.000%
99.605% 99.612% 99.989% 99.976% 99.957% 99.998% 99.996% 99.997%
100.000% 99.993% 99.998% 99.997% 99.999% 99.997% 99.997% 99.997%
99.997% 99.998% 99.998% 99.996% 99.999% 99.993% 99.999% 99.999%
99.993% 99.998% 99.998% 99.998% 99.997% 99.994% 99.999% 99.997%
99.999% 99.998% 99.993% 100.000% 99.997% 100.000% 99.996% 99.997%
99.998% 99.995% 99.998% 99.999% 99.992% 99.999% 99.997% 99.998%
99.998% 99.996% 99.997% 99.999% 99.996% 99.999% 99.996% 99.997%
99.997% 99.995%
--- Results after 98 passes ---
Best: 100.000% -- Worst: 99.605% -- Average: 99.917708%
Cummulative Accuracy (not per pass): 99.997

What is interesting about this mess is that Windows 10 does not do very
well in this environment, while the Linux machines just seem to work as
if the OS were installed on real hardware.

~Benjamin, KB9LFZ


Hi,

You should strive for better timing numbers for serious use. AllStar uses
the DAHDI bridge/conference software as the mechanism to mix all the
received audio for a given node into the combined transmit audio. Every
time you link nodes together, you're dynamically creating a DAHDI
multi-party bridge, which is similar to an Asterisk conference call
"underneath the covers." The software timing must be spot-on to
accomplish this task without glitches.
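To sketch what such a multi-party bridge does (an illustrative toy, not the actual DAHDI implementation): each participant receives the sum of everyone else's samples, saturated to the 16-bit signed PCM range.

```python
def clip16(x):
    """Saturate a sample to the signed 16-bit PCM range."""
    return max(-32768, min(32767, x))

def mix_conference(streams):
    """streams: one equal-length list of 16-bit PCM samples per participant.
    Returns one output stream per participant: the mix of all the OTHER
    participants' audio (a participant doesn't hear their own audio back)."""
    n = len(streams[0])
    # Full mix of every participant, sample by sample.
    total = [sum(s[i] for s in streams) for i in range(n)]
    # Each participant's output is the full mix minus their own contribution.
    return [[clip16(total[i] - s[i]) for i in range(n)] for s in streams]
```

Every frame of this mixing has to happen on a strict clock (a fixed interval per frame); if the timer driving the loop jitters, frames arrive late and the audio gets choppy, which is exactly the quality dahdi_test is measuring.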

Here is a link that may be of interest:

https://wiki.asterisk.org/wiki/display/AST/Bridges

73, David KB4FXC


···

On Tue, 28 Nov 2017, Benjamin Naber wrote:

