
[IDD #YYF-400085]: ldm files size



Hi Jordi,

re:
> - what is the size of your LDM queue?
> 
> I usually check it like this:
> 
> eady:~> regutil /queue/size
> 500M

OK, this is the default sent out with the LDM.  It might be advisable
to increase the queue size to 1 or 2 GB _if_ your machine has enough
RAM.
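
If you do decide to grow the queue, here is a sketch of the usual
sequence (the commands are the standard LDM 6.x ones and the 2G value
is only an example -- verify paths and syntax against your
installation's regutil(1) and ldmadmin(1) documentation before
running anything):

```shell
# Sketch only: the queue must be deleted and remade
# for a new size to take effect.
ldmadmin stop               # stop the LDM first
regutil -s 2G /queue/size   # record the new size in the registry
ldmadmin delqueue           # remove the old 500M queue
ldmadmin mkqueue            # recreate it at the new size
ldmadmin start              # restart the LDM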

re:
> 
> - which version of the LDM are you running?
> 
> I copy the output of ldmadmin config:
> 
> hostname:              eady.uib.es
> os:                    Linux
> release:               3.2.0-4-amd64
> ldmhome:               /usr/local/ldm
> LDM version:           6.10.1
> PATH:
> /usr/local/ldm/ldm-6.10.1/bin:/usr/local/ldm/scripts:/usr/local/ldm/decoders:/usr/local/ldm/util:/usr/local/ldm/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:;:/usr/NX/bin
> LDM conf file:         /usr/local/ldm/etc/ldmd.conf
> pqact(1) conf file:    /usr/local/ldm/etc/pqact.conf
> scour(1) conf file:    /usr/local/ldm/etc/scour.conf
> product queue:         /usr/local/ldm/var/queues/ldm.pq
> queue size:            500M bytes
> queue slots:           default
> reconciliation mode:   do nothing
> pqsurf(1) path:        /usr/local/ldm/var/queues/pqsurf.pq
> pqsurf(1) size:        2M
> IP address:            0.0.0.0
> port:                  388
> PID file:              /usr/local/ldm/ldmd.pid
> Lock file:             /usr/local/ldm/.ldmadmin.lck
> maximum clients:       256
> maximum latency:       3600
> time offset:           3600
> log file:              /usr/local/ldm/var/logs/ldmd.log
> numlogs:               7
> log_rotate:            1
> netstat:               /bin/netstat -A inet -t -n
> top:                   /usr/bin/top -b -n 1
> metrics file:          /usr/local/ldm/var/logs/metrics.txt
> metrics files:         /usr/local/ldm/var/logs/metrics.txt*
> num_metrics:           4
> check time:            1
> delete info files:     0
> ntpdate(1):            /usr/sbin/ntpdate
> ntpdate(1) timeout:    5
> time servers:          ntp.ucsd.edu ntp1.cs.wisc.edu ntppub.tamu.edu otc1.psu.edu timeserver.unidata.ucar.edu
> time-offset limit:     10

OK.

re:
> Is it possible that the problem is because of using an old version?

No, the LDM is forward compatible: older versions interoperate with
newer ones.

It still might be a good idea to upgrade to the latest LDM (v6.13.3),
but upgrading is not critical.

The other possible cause of the problem you have been experiencing
may have been resolved on our end: we just increased the LDM queue
size on all real-server backends of our IDD top-level relay cluster,
idd.unidata.ucar.edu.  The queues were grown from 55 GB to 75 GB
because the residency time of products in them had become quite
small, meaning that sites with feed latencies greater than that
residency time would not be sent data before it was overwritten in
the queues.  Please keep an eye on your output file sizes over the
next few days to see if things look the same, better, or worse.
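
A back-of-the-envelope way to see the residency argument (all
numbers here are assumed for illustration, not measured cluster
values):

```shell
# Illustrative only: assume a 75 GB queue and a 20 MB/s
# aggregate ingest rate.
queue_bytes=$((75 * 1024 * 1024 * 1024))
rate_bytes_per_s=$((20 * 1024 * 1024))
# Residency: roughly how long a product survives in the queue
# before being overwritten by newer data.
residency_s=$((queue_bytes / rate_bytes_per_s))
echo "queue residency: ${residency_s} s"   # 3840 s
```

Any downstream site whose feed latency exceeds that residency would
miss products, which is why growing the queue helps high-latency
sites.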

re:
> Many thanks for your help,

No worries.
Cheers,

Tom
--
****************************************************************************
Unidata User Support                                    UCAR Unidata Program
(303) 497-8642                                                 P.O. Box 3000
address@hidden                                   Boulder, CO 80307
----------------------------------------------------------------------------
Unidata HomePage                       http://www.unidata.ucar.edu
****************************************************************************


Ticket Details
===================
Ticket ID: YYF-400085
Department: Support IDD
Priority: Normal
Status: Closed
===================
NOTE: All email exchanges with Unidata User Support are recorded in the Unidata 
inquiry tracking system and then made publicly available through the web.  If 
you do not want to have your interactions made available in this way, you must 
let us know in each email you send to us.