
Re: [Pan-users] Command line use; download of nzb files does not stop


From: Duncan
Subject: Re: [Pan-users] Command line use; download of nzb files does not stop
Date: Thu, 3 Nov 2011 22:26:33 +0000 (UTC)
User-agent: Pan/0.135 (Tomorrow I'll Wake Up and Scald Myself with Tea; GIT bb16cbd /st/portage/src/egit-src/pan2)

Graham Lawrence posted on Thu, 03 Nov 2011 08:17:44 -0700 as excerpted:

> Duncan, thank you for pointing out that && is a conditional test.  I had
> understood && simply as "wait until previous instruction completes
> before proceeding", because that is the question I sought to answer when
> I first came across it via google; hence my seemingly contradictory
> logic.  If pan fails I need to run dem (display error messages), a
> function I've put in .bashrc.  Its essential feature is that it throws
> up an xterm and blinks an appropriate error message at me whenever a
> script running in background fails for some reason.

[ Please don't top-post.  There's a reason for pan's top-posting 
warnings. ]

OK, that clears up the intent of the script logic a /lot/! =:^)

In terms of &&, note that bash's default behavior is to wait for a 
command to terminate (and return a result) before continuing.  However, 
if a command forks and the fork actually does the work, with the original 
process simply terminating (the behavior of many daemons unless run in 
foreground mode, for instance), then when the originally invoked process 
terminates, bash continues.  That's one reason the sleep (N seconds) 
command is often used: to wait some time after the original command 
returns, giving the forked process, the hardware state, or whatever else 
a chance to settle.
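
A minimal sketch of that pattern; somedaemon and the follow-up command 
are only stand-in names here:

somedaemon          # parent returns at once, the fork does the real work
sleep 2             # give the forked worker a couple of seconds to settle
something-that-needs-the-daemon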

If the converse is desired, that is, NOT waiting for completion, there's 
the & (single &) backgrounding directive.  Much like redirection, this is 
tacked on to the tail end of the shell command.  However, the invoking 
bash script instance still owns the backgrounded job, which can still be 
killed off when that script's shell goes away, thus the wait builtin 
(which blocks until background jobs complete) as well as the disown 
builtin (which detaches a job from the shell so it isn't signalled when 
the shell exits).
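
A minimal sketch of both, with longjob standing in for whatever you'd 
background:

longjob &           # backgrounded: bash moves on immediately
other-work          # placeholder for whatever runs meanwhile
wait                # block until all background jobs finish

longjob &
disown              # drop it from the job table so it outlives the shell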

So && doesn't add the wait for termination -- that's the default -- 
instead, it's a conditional, only executing what follows if the previous 
command succeeded (returned 0 exit status).

Or more precisely, && is a logical "AND" (thus the use of the & symbol), 
but since both sides of a logical AND must be true, if the left side is 
false the outcome is already known and bash shortcuts things by not even 
attempting execution of the right side, thus making it an effective exit-
conditional, only executing what's on the right if the left-hand side 
succeeds. =:^)
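
A familiar example of that shortcutting:

mkdir somedir && cd somedir    # cd runs only if mkdir returned 0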

As alluded to earlier, || serves the converse logical OR function; after 
the same shortcutting (if the left-hand side succeeds, the outcome is 
already known and the right side is skipped), it effectively becomes an 
exit-conditional testing for failure of the left-hand side, only 
executing what's on the right if the left-hand side fails.

So that part of your script can simply use...

|| { dem 1; exit 1; }

... to execute the compound command (the call to dem and the call to 
exit) on failure.

By the way, since you mentioned that dem is a function in your bashrc, 
you can simplify the logic even further, if desired.

1) Add the call to exit to the dem function itself, presumably as the 
last command executed in the function.

2) Instead of using the compound dem 1; exit 1 structure every time you'd 
invoke it, simply use:

dem 1

Of course, if you're already passing "1" as a status code to dem, you're 
probably already using it in the function, and can simply invoke exit 
with the same variable ($1 positional or whatever other variable it's 
assigned to).
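
Something along these lines; the echo is only a stand-in, since I don't 
know what your dem actually does to display the message:

dem () {
  echo "script failed with status $1" >&2   # stand-in for the real display
  exit "$1"                                 # then exit with that same status
}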

With that modification to your dem function, the compound on the right-
hand side of the || above reduces to a single command:

|| dem 1

Alternatively, if in your existing usage you sometimes call dem but do 
NOT exit, then you can keep it as-is, and add a second function 
"deme" (dem with exit), defined like so:

deme () { dem $1; exit $1; }

Then you can call dem as normal if you don't want to exit, or deme if you 
do want to exit, passing the status codes as you are currently doing.
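
For example, with some-command as a placeholder:

some-command || deme 1    # display the message, then exit with status 1
some-command || dem 1     # display the message but keep going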

> But there does seem to be an anomaly here.  Whatever follows the && can
> not take effect until pan exits.

True.  As mentioned above, you could change that by using the & (single) 
to invoke pan as a background process, but that wouldn't seem to be your 
intent, either.

> That it happens to be a test that must always fail is irrelevant.
> That test cannot be tried until the pan instruction has terminated. 
> And that does _not_ happen.  In fact, after Ctrl-c-ing to terminate
> pan, when restarted (after reboot) with just

> pan &

> it went right back to downloading those duplicates.

What about without the & at all?  It seems to me that it's redundant.

> Apparently Task Manager is not removing completed items from the list in
> command line mode,  So when it reaches the end of the list it just
> returns to the beginning of it again.  The only way I was able to
> completely shut it up was to select all the items in Task Manager and
> delete them, when in gui mode.

If the script isn't looping, thus calling pan repeatedly, then you're 
correct, the problem would seem to be in pan.  But as I don't know what 
the rest of the script looks like, I don't know if it's invoking pan 
repeatedly, thus causing the dups, or if pan itself is causing the dups.

> As for pan.debug, when I could not find it in /home/g I ran find /home
> -name pan.debug which I believe searches every subdirectory under /home.
>  It returned nothing.  Possibly I deleted it in some way.

Yes.  That's still something of a mystery.  I don't have an explanation 
for that at all, at this point.  The path thing was simply grasping at 
straws, particularly as you'd used the absolute path in the redirect.  
The only other thing I could think of would be some sort of permissions 
related problem -- if for some reason the script was run as a different 
user, perhaps SETUID or some such, without permissions to write to that 
dir.  But that seems quite unlikely indeed.

Perhaps the partition on which you have /home/g is full?  Equally 
unlikely.  Quota issue?  If anything, even more unlikely.  Depending on 
your distro, maybe SELinux or similar security issue?  Possible, 
particularly as I don't run SELinux myself and thus am unfamiliar with 
its restrictions, but that seems just as unlikely as the other 
possibilities.

So... I have no clue at all what's going on there.  If it were happening 
on my machine, I could probably figure it out, but the turn-around 
latency for troubleshooting it via list thread is simply prohibitive; we 
could easily still be working on it at this time next year!

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



