Tom,

I'm attaching the pertinent bit from the pqact file. I cut out the unrelated entries but left this one, which was still giving me those errors. I made sure there are tabs and no spaces where you pointed out, and I have been using pqactcheck/pqactHUP this whole time; I learned my lesson there long ago ;) It took me a while to write back because I wanted to do more troubleshooting first. Here are some additional details I've found...

I see those LDM log errors regardless of whether I include "-e /home/scripts/sat/goes_mcidas/manager.py -p" or not. In other words, goes-restitch.py has problems even just saving the data when I cut out all of my own processing after the fact, so whatever I run after goes-restitch.py appears to be uninvolved.

The same thing is happening on a second machine of mine. To narrow it down further, on that second test machine I'm having LDM request from only one of our NP ingest servers, and I remade the queue on both machines in case some corrupted data was stuck somewhere. None of that seemed to matter; I still see those errors in LDM's log file, while the NP server's log file is clean. FWIW, although this second machine has been powered on for some time, LDM had not been running on it, so it started with a fresh queue and all. Also, the attached pqact snippet was the only pqact entry running on the second machine, and it was still giving me errors.

I don't see any issues with disk space, partitions (including /tmp), i-nodes, or anything else I can think to poke at. Nothing has changed on either of these servers in some time, and the environment (so far) appears nominal.

I'm saving some data using "FILE -metadata /path/to/file" (with tabs) and piping that to goes-restitch.py manually using "cat datafile.nc | /home/scripts/ldm-alchemy/goes-restitch.py -v -d /home/data/goes". I can't seem to get that to produce any errors! Regardless of what options I try with goes-restitch.py, it all works when run manually. Well, now I'm really confused... The error LDM gives suggests the data cannot be piped to the script, yet I can pipe it all manually.

I'm having a hard time telling for sure, but it seems that when that pqact entry is running, some of the data is saved to the output directory, though not always 100% of it. At least some of it gets saved even while I'm watching those errors scroll by in the log, but because the errors don't say which ABI band the data came from, it's hard to correlate them with what was saved.

While this is a bit of a head-scratcher, if I'm still the only one with these symptoms then I'll accept it's just me. I'm curious to see whether it goes away when the sector moves again, though I agree with you that doesn't make a ton of sense either.

Thanks for taking the time,

-Mike

======================
Mike Zuranski
Meteorology Support Analyst
College of DuPage - Nexlab
Weather.cod.edu
======================


On Tue, Jul 21, 2020 at 2:42 PM Tom Yoksas <yoksas@xxxxxxxx> wrote:
> Hi Mike,
>
> On 7/21/20 1:16 PM, Mike Zuranski wrote:
> > I just double-checked, and there is a tab there.
>
> But, is there also a space? The only separator allowed between, for
> instance, PIPE and the flags used is a tab. The reason for this is
> that the parser stops at non-tab characters and treats the rest as
> the "decoder" to run. The snippet you sent from your LDM log file
> showed that 'pqact' thought that the decoder was:
>
> '-metadata ...'
>
> re:
> > Full pqact entry is below; I see now any tabs I had were replaced
> > with spaces when I copied it to the email.
> >
> > NOTHER ^TI(S.).. KNES ([0-9][0-9][0-9][0-9][0-9][0-9]) ...
> >     PIPE -metadata /home/scripts/ldm-alchemy/goes-restitch.py
> >     -q -d /home/data/goes -t 60
> >     -e /home/scripts/sat/goes_mcidas/manager.py -p
> >     -l /home/ldm/var/logs/goes-restitch/\1.log \1 \2
>
> It is hard to determine whether pattern-action file actions are
> formatted correctly from cut-and-paste listings. Can you send your
> pattern-action file as an attachment? As we get down into the weeds
> with this problem, it would probably be better to have the exchanges
> logged in our inquiry tracking system.
>
> re:
> > As for this not reaching the ldm-users list, that was my mistake. I
> > forgot I wasn't subbed with the email I sent it from, so it got
> > flagged as a non-member.
>
> OK, that makes sense now :-)
>
> re:
> > What's confusing about this is everything else continues to work
> > fine, and that entry too was working fine, until that sector change.
> > That tells me something with the data changed, but the LDM config
> > should otherwise be okay. I'm starting to save tiles directly and
> > will keep digging.
>
> We did not see any interruption in restitching the NOAAPort-delivered
> tiles into full scenes and inserting them into the IDD NIMAGE feed, so
> a change in location for one of the Mesoscale sectors doesn't make
> sense as the cause (to me, at least) at this point.
>
> Cheers,
>
> Tom
>
> > On Tue, Jul 21, 2020 at 1:59 PM Tom Yoksas <yoksas@xxxxxxxx> wrote:
> >
> > Hi Mike and Ryan,
> >
> > I just took a closer look at the LDM log messages that were included
> > in the original message by Mike (to python-users? I never saw a post
> > to ldm-users). I think what is going on is a typo in the
> > pattern-action file action that is being run. Here is the log output
> > I am referring to:
> >
> > 20200721T120016.942335Z pqact[59355] filel.c:pipe_prodput:2178 ERROR Couldn't pipe product to decoder "-metadata /home/scripts/ldm-alchemy/goes-restitch.py -q -d /home/data/goes -t 60 -e /home/scripts/sat/goes_mcidas/manager.py -p -l /home/ldm/var/logs/goes-restitch/SU.log SU 211200"
> >
> > The lack of the decoder name in the output strongly suggests that
> > there is not a tab between '-metadata' and the part immediately
> > before it. Please check your pattern-action file action and correct
> > this typo if, of course, it exists, and then do the following:
> >
> > <as 'ldm'>
> > ldmadmin pqactcheck
> >
> > If this reports no errors:
> >
> > ldmadmin pqactHUP
> >
> > Cheers,
> >
> > Tom
> >
> > On 7/21/20 12:49 PM, Ryan May wrote:
> > > Hi Mike,
> > >
> > > We're not seeing any issues with our feed or execution of
> > > goes-restitch. I'm actually seeing TISU as Mesoscale-2 right now,
> > > with Mesoscale-1 under TISH. Regardless, no anomalous behavior with
> > > what we're running.
> > >
> > > Are you seeing anything in the logs from the script itself? (i.e.
> > > /home/ldm/var/logs/goes-restitch/SU.log)
> > >
> > > For the Mesoscale sectors, goes-restitch isn't doing much, since the
> > > sectors are generally only one tile (they are today). All that's
> > > happening is removing WMO header/footer and adjusting a bit of
> > > netCDF metadata.
> > >
> > > The first place I'd look, then, is to see if something's going wrong
> > > with /home/scripts/sat/goes_mcidas/manager.py.
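
(A note on reproducing this outside of pqact: the manual "cat datafile.nc | goes-restitch.py ..." test described near the top of this message goes through a shell pipeline, while pqact spawns the decoder itself and writes the raw product to its stdin. A rough Python stand-in for that PIPE behavior, useful for checking whether the decoder really does exit early the way the "Broken pipe" / "Decoder terminated prematurely" log entries suggest, is sketched below. It is illustrative only, not how pqact is implemented; the command line is copied from the pqact entry in this thread, minus the '-e' option, and "sample_tile.nc" is a hypothetical product saved with a FILE action.)

#!/usr/bin/env python
"""Rough stand-in for what pqact's PIPE action does: spawn the decoder,
write one product to its stdin, and report how the child exited.
Illustrative sketch only; adjust paths and arguments for your setup."""
import subprocess
import sys

product = sys.argv[1] if len(sys.argv) > 1 else "sample_tile.nc"  # hypothetical saved product

cmd = [
    "/home/scripts/ldm-alchemy/goes-restitch.py",
    "-v",                                            # verbose instead of -q
    "-d", "/home/data/goes",
    "-t", "60",
    "-l", "/home/ldm/var/logs/goes-restitch/SU.log",
    "SU", "211200",                                  # stands in for pqact's \1 \2
]

with open(product, "rb") as f:
    data = f.read()

proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
try:
    proc.stdin.write(data)   # raises BrokenPipeError if the decoder has already exited
    proc.stdin.close()
except BrokenPipeError:
    print("decoder closed stdin early (the symptom pqact logs as 'Broken pipe')")
print("decoder exit status:", proc.wait())

(If this reproduces the early exit, the decoder's own '-l' log or its stderr should say why. Keep in mind that pqact's '-metadata' option also writes the product's metadata ahead of the data, which neither a bare cat pipeline nor this sketch reproduces, so the decoder may still behave differently under pqact.)
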
> > > Ryan
> > >
> > > On Tue, Jul 21, 2020 at 10:34 AM Mike Zuranski <zuranski@xxxxxxxxxxxxxxx> wrote:
> > >
> > > Greetings,
> > >
> > > I'm using Unidata's goes-restitch.py to merge the GOES-R tiles
> > > coming from NOAAPort. It's been working well for years with nary an
> > > issue, until this morning...
> > >
> > > From what I can tell, GOES-16 Mesoscale-1 was moved over the
> > > Atlantic (9.5N/41W, header of TISU..) at around 12Z, and almost
> > > immediately my ldmd.log began filling up with what I'm pasting
> > > below. After about an hour of that, it began to impact the rest of
> > > our satellite processing operation. I've cut out G16 Meso1 from
> > > processing at the pqact, remade the queue & restarted LDM, and
> > > everything has been fine since. But if I try to re-enable that, my
> > > LDM log starts filling up with those errors right away again. I
> > > tried to set the goes-restitch.py logging to verbose, but no errors
> > > were being reported in that log file, only the ldmd.log file.
> > >
> > > If anyone else uses goes-restitch.py with NOAAPort tiles, are you
> > > seeing the same thing? I don't store the tiles locally so I haven't
> > > interrogated them yet to see if anything has changed, but it sure
> > > seems like this script doesn't like something about these SU tiles
> > > coming in.
> > >
> > > Note: I am using a slightly modified version of goes-restitch.py to
> > > execute another script once a full set of tiles has been stitched
> > > together so it can be acted upon. If you're wondering about the "-e
> > > /home/scripts/sat/goes_mcidas/manager.py -p" bit below, that's all
> > > it's doing. I submitted a pull-request for this a while back, so
> > > the code changes can be seen here:
> > > https://github.com/Unidata/ldm-alchemy/pull/4. Again, this has
> > > never been an issue for the several years I've been using it.
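
(For readers unfamiliar with the modification described above: the '-e ... -p' option is Mike's own addition, and the actual changes are in the pull request linked in the previous paragraph. Purely as an illustration of the idea, not the code from that PR, the post-processing hook amounts to something like the sketch below; the manager.py path and the '-p' flag come from the pqact entry in this thread, and passing the stitched file's path as the argument is an assumption made for the sake of the example.)

#!/usr/bin/env python
"""Sketch of the idea behind the '-e ... -p' modification: once a full
scene has been stitched and written, hand the finished file to an external
command.  This is NOT the code from the linked pull request; the manager.py
path and '-p' flag are taken from the pqact entry in this thread, and
passing the output path as an argument is an assumption."""
import subprocess

def run_post_process(output_path,
                     exec_script="/home/scripts/sat/goes_mcidas/manager.py"):
    """Run the follow-on script against a freshly stitched scene."""
    cmd = [exec_script, "-p", output_path]   # '-p' is manager.py's own flag
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Make post-processing failures visible instead of letting them
        # masquerade as decoder problems in the LDM log.
        print("post-process failed:", result.stderr.strip())
    return result.returncode

(As noted at the top of this message, the LDM errors appear with or without the '-e' option, which points at something upstream of any hook like this.)
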
> > > ldmd.log output example:
> > > 20200721T120016.942166Z pqact[59355] pbuf.c:pbuf_flush:113 ERROR Broken pipe
> > > 20200721T120016.942279Z pqact[59355] pbuf.c:pbuf_flush:113 ERROR Couldn't write to pipe: fd=1021, len=4096
> > > 20200721T120016.942298Z pqact[59355] filel.c:pipe_put:1930 ERROR Couldn't write 307601-byte product to pipe
> > > 20200721T120016.942317Z pqact[59355] filel.c:pipe_out:2115 ERROR Couldn't write product data to pipe
> > > 20200721T120016.942335Z pqact[59355] filel.c:pipe_prodput:2178 ERROR Couldn't pipe product to decoder "-metadata /home/scripts/ldm-alchemy/goes-restitch.py -q -d /home/data/goes -t 60 -e /home/scripts/sat/goes_mcidas/manager.py -p -l /home/ldm/var/logs/goes-restitch/SU.log SU 211200"
> > > 20200721T120016.942353Z pqact[59355] filel.c:pipe_prodput:2187 ERROR Decoder terminated prematurely
> > > 20200721T120016.942375Z pqact[59355] filel.c:fl_removeAndFree:425 ERROR Deleting failed PIPE entry: pid=66861, cmd="-metadata /home/scripts/ldm-alchemy/goes-restitch.py -q -d /home/data/goes -t 60 -e /home/scripts/sat/goes_mcidas/manager.py -p -l /home/ldm/var/logs/goes-restitch/SU.log SU 211200"
> > >
> > > Thanks,
> > >
> > > -Mike
> > >
> > > ======================
> > > Mike Zuranski
> > > Meteorology Support Analyst
> > > College of DuPage - Nexlab
> > > Weather.cod.edu
> > > ======================
> > >
> > > --
> > > Ryan May, Ph.D.
> > > Software Engineer
> > > UCAR/Unidata
> > > Boulder, CO
> >
> > --
> > +----------------------------------------------------------------------+
> > * Tom Yoksas                                       UCAR Unidata Program *
> > * (303) 497-8642 (last resort)                            P.O. Box 3000 *
> > * yoksas@xxxxxxxx                                     Boulder, CO 80307 *
> > * Unidata WWW Service                     http://www.unidata.ucar.edu/  *
> > +----------------------------------------------------------------------+
>
> --
> +----------------------------------------------------------------------+
> * Tom Yoksas                                       UCAR Unidata Program *
> * (303) 497-8642 (last resort)                            P.O. Box 3000 *
> * yoksas@xxxxxxxx                                     Boulder, CO 80307 *
> * Unidata WWW Service                     http://www.unidata.ucar.edu/  *
> +----------------------------------------------------------------------+
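
(One more illustration tied to Tom's point about separators: pqact only accepts a tab between an action keyword such as PIPE and its flags, and the copy-and-paste in this thread silently turned tabs into spaces. A small script like the sketch below makes the whitespace visible and flags suspicious lines; it is a crude heuristic and no substitute for 'ldmadmin pqactcheck', and the default filename simply mirrors the attachment name in this thread.)

#!/usr/bin/env python
"""Make tab-vs-space problems in a pattern-action file visible: print each
line with tabs marked and flag lines where PIPE/FILE/EXEC is followed by a
space rather than a tab.  Crude heuristic, not a replacement for
'ldmadmin pqactcheck'."""
import sys

def show_separators(path):
    with open(path) as f:
        for num, line in enumerate(f, start=1):
            line = line.rstrip("\n")
            marked = line.replace("\t", "<TAB>")
            # Per Tom's note above, only a tab may separate the action
            # keyword from its flags; with a space there, pqact treats the
            # rest of the line (e.g. "-metadata ...") as the decoder to run.
            suspect = any(kw + " " in line for kw in ("PIPE", "FILE", "EXEC"))
            flag = "    <-- space after action keyword?" if suspect else ""
            print(f"{num:4d}: {marked}{flag}")

if __name__ == "__main__":
    show_separators(sys.argv[1] if len(sys.argv) > 1 else "pqact.split.goes")
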
Attachment: pqact.split.goes
Description: Binary data