Thanks for all the responses, folks;
I may end up piping to another process, but I would like to avoid that if
at all possible...I would like to handle it through existing LDM methods.
Another process may introduce another opportunity for failure and
additional support responsibility.
RLB
Christian Pagé wrote:
You could also pipe to a shell script that would do further processing,
write the data to a specified filename, and/or perform other actions.
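For example, here is a minimal sketch of that approach, assuming a PIPE
action in pqact.conf and a small helper script (the script's name, its
location, and the use of mktemp(1) are illustrative assumptions, not
anything given in this thread):

    EXP    .*(transfile.dat)    PIPE    -close
        /usr/local/ldm/bin/file_report.sh

    #!/bin/sh
    # file_report.sh -- hypothetical helper for pqact's PIPE action,
    # which writes the product data to this script's standard input
    # (-close makes pqact close the pipe after each product, so the
    # script sees EOF and runs once per report).
    # The directory and NEWFILE.dat prefix come from the original
    # entry; everything else here is an assumption.
    dir=/some/file/path
    # mktemp creates the file atomically, appending a random suffix
    # to NEWFILE.dat, so two reports arriving in the same second
    # cannot collide.
    out=`mktemp $dir/NEWFILE.dat.XXXXXX` || exit 1
    # Capture the product from standard input into the unique file.
    cat > "$out"

Because mktemp both names and creates the file atomically, this sidesteps
the timestamp-resolution problem entirely rather than just shrinking the
collision window.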
On Friday, Jan 17, 2003, at 09:24 America/Montreal, Randy Breeser
wrote:
Thanks for the quick response, Steve;
Actually, I would not have a problem with a lot of files in the
target directory; that directory is a queue for products that will
be swept into another system. There will never be a large volume
of products. They will be very small text files, but they may stay
in the queue for up to 15 seconds. These will be severe weather
reports from the public via the web, so I need to account for the
possibility of two reports hitting the system at the same time.
Also, I cannot change the filename in any other way than to append
something on the end.
Thanks...RLB
Steve Emmerson wrote:
Randy,
Date: Tue, 14 Jan 2003 16:23:15 -0600
From: "Randy Breeser" <Randy.Breeser@xxxxxxxx>
Organization: NWS La Crosse Wisconsin
To: ldm-users@xxxxxxxxxxxxxxxx
Subject: Unique Filename
The above message contained the following:
First let me say that I am pretty new to LDM, and I hope that I am
posting this question to the right list...if not, maybe someone will
point me to a beginners' list.
I need to generate a unique filename in pqact.conf so that files will
not overwrite those already in a queue. Below is what I have done so
far.
I added "%M%S" here but this will only resolve to one second. Not
quite
good enough...it is possible that 2 files could be processed
during the
same second. The "NEWFILE.dat" part of the filename cannot change nor
can the path.
EXP    .*(transfile.dat)    FILE    -overwrite
    /some/file/path/NEWFILE.dat%M%S
Any help would be greatly appreciated.
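One partial mitigation, sketched here on the assumption that the
strftime-style codes already used for %M%S extend to the date fields as
well, would be to widen the timestamp; this confines collisions to
reports arriving within the same second but does not eliminate them:

    EXP    .*(transfile.dat)    FILE    -overwrite
        /some/file/path/NEWFILE.dat%Y%m%d%H%M%S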
If you're worried about multiple files being processed in the same
second, then it seems to me that you're in a bad situation for the
following reasons:
1. You could end up with thousands upon thousands of files in a
single directory.
2. The scour(1) facility might not be sufficient to keep the number
of files down to a manageable level.
It could be that, with a little thought, a solution could be found that
obviates the need for sub-second resolution. What are these files, and
what are you trying to do with them? Does their product-ID contain
nothing that could be useful?
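For instance, a sketch only, with an invented product-ID layout: if each
report's product ID already ends in a unique token, a parenthesized
capture in the pattern can be referenced as \1 in the action and
appended to the fixed filename:

    # Hypothetical: product IDs of the form "transfile.dat 162315.0427",
    # where the trailing token is unique per report.
    EXP    transfile\.dat (.*)    FILE    -overwrite
        /some/file/path/NEWFILE.dat.\1

This keeps the required NEWFILE.dat prefix and path unchanged and only
appends to the end, which fits the constraint mentioned earlier in the
thread.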
Regards,
Steve Emmerson <http://www.unidata.ucar.edu>
Christian Pagé
page@xxxxxxxxxxx
http://meteocentre.com/ http://meteoalerte.com/
Doctoral student in Environmental Sciences, UQAM
+1 514 987 3000 ext. 2376