
[LDM #OCI-316545]: bandwidth throttling



Hi Daniel,

re:
> say I have upstream LDM box A , and downstream LDM box B.   Using nc or
> ftp, I can xfr  from A-> B at 8MB/s
> 
> When I pqinsert a product in A, it only gets transferred to B @ ~600
> KB/s constant.  if I pqinsert another product during this transfer,
> then I see the  A ->B traffic  double to ~1200 KB/s .
> 
> I am curious as of whether LDM has some sort of per session traffic
> shaping enabled by default. I googled and googled but I can't find a
> definitive answer.

No, there is no shaping done by the LDM.

re:
> My only other recourse is to review the source code, so any help would
> be appreciated

I believe that what you are seeing is a result of the pqinsert process
not running as part of the LDM process group.  When you insert a product into
an LDM queue "out of band" (i.e., not from a member of the LDM process group),
there is nothing to tell the LDM's routines that a new product is available
for transfer.  The strategy implemented is that, when there is no notification
of the availability of a new product, the server process (ldmd in LDM v6.9.x
and rpc.ldmd in previous versions) will "wake up" and check for new products
every 30 seconds.  Depending on where in that 30-second sleep period your
pqinsert lands, there can be up to a 30-second wait.  If you insert a second
product while the first is being transferred, the second one will be sent
immediately because the server process is already active.
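
If you want to confirm that the delay comes from this queue polling rather
than from the network, one way (a rough sketch; the host name, queue path,
and product name below are placeholders for your setup) is to watch
notifications from box A with notifyme while timing a pqinsert:

# On box B (or anywhere): ask A's LDM to report products as it notices them.
notifyme -v -l - -h boxA -f ANY -p ".*"

# On box A: note the wall-clock time of the out-of-band insert.
date; pqinsert -q /usr/local/ldm/var/queues/ldm.pq myproduct

The gap between the insert time and the notifyme report should show the
up-to-30-second wait described above.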

You can alert the LDM server processes to the availability of new products
in the queue by sending a CONT signal to the LDM process group leader.  The
part that may be unfamiliar is specifying the negative of the process group
leader's process ID in the kill invocation; the negative value addresses the
signal to the entire process group.

Example:

ps -eaf | grep ldmd
ldm      27496     1  0 Jul12 ?        00:00:02 ldmd -I 0.0.0.0 -P 388 -M 256 
-m 3600 -o 3600 -q /usr/local/ldm/var/queues/ldm.pq /usr/local/ldm/etc/ldmd.conf
ldm      27504 27496  0 Jul12 ?        00:01:02 ldmd -I 0.0.0.0 -P 388 -M 256 
-m 3600 -o 3600 -q /usr/local/ldm/var/queues/ldm.pq /usr/local/ldm/etc/ldmd.conf
ldm      27505 27496  0 Jul12 ?        01:00:56 ldmd -I 0.0.0.0 -P 388 -M 256 
-m 3600 -o 3600 -q /usr/local/ldm/var/queues/ldm.pq /usr/local/ldm/etc/ldmd.conf
ldm      27506 27496  0 Jul12 ?        00:00:15 ldmd -I 0.0.0.0 -P 388 -M 256 
-m 3600 -o 3600 -q /usr/local/ldm/var/queues/ldm.pq /usr/local/ldm/etc/ldmd.conf
ldm      27507 27496  0 Jul12 ?        00:00:16 ldmd -I 0.0.0.0 -P 388 -M 256 
-m 3600 -o 3600 -q /usr/local/ldm/var/queues/ldm.pq /usr/local/ldm/etc/ldmd.conf

This listing shows us that the ldmd with process ID 27496 is the process group
leader.  Sending a CONT signal to the negative of that process ID, as follows,
delivers the signal to every member of the group:

kill -CONT -27496

As soon as the CONT signal is delivered, all of the LDM server processes
(again named ldmd in LDM v6.9.x and rpc.ldmd in previous versions of the LDM)
will wake up and start processing the new data.
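
If you do this kind of out-of-band insertion regularly, you could wrap the two
steps in a small script.  This is only a sketch under a couple of assumptions
(that the process group leader is the ldmd whose parent PID is 1, as in the
listing above, and that the queue path matches your installation):

#!/bin/sh
# pqinsert_and_wake: insert a product out of band, then wake the LDM
# process group so the product is sent immediately instead of waiting
# for the 30-second queue poll.

product="$1"
queue=/usr/local/ldm/var/queues/ldm.pq    # adjust to your -q setting

# Insert the product into the LDM product queue.
pqinsert -q "$queue" "$product" || exit 1

# Find the process group leader: the ldmd whose parent process is PID 1.
leader=$(ps -eo pid,ppid,comm | awk '$3 == "ldmd" && $2 == 1 { print $1 }')

# Send CONT to the whole process group (note the negative process ID).
[ -n "$leader" ] && kill -s CONT -- "-$leader"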

Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: OCI-316545
Department: Support LDM
Priority: Normal
Status: Closed