A colleague of mine came across a problem that developed into an interesting solution, which I decided to share with the world. Actually, I think the world is already well aware of this solution, but just in case I am ever looking for it again, I’ll have it handy here.
The task at hand was to do some processing of the logs on the fly. The syslog daemon was configured to filter the appropriate logs into a named pipe, and a Perl script was written to read from that pipe and do all the processing.
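For reference, the syslog side of the setup looked roughly like this, assuming a classic sysklogd-style syslog.conf (the facility selector and the pipe path here are illustrative, not the exact ones we used):

    # /etc/syslog.conf: send mail facility messages to a named pipe.
    # The FIFO has to exist before syslogd starts: mkfifo /var/run/mail.pipe
    mail.*    |/var/run/mail.pipe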
The original piece of code looked something like this:
    open (SYSLOG, "<$named_pipe") or die "Couldn't open $named_pipe: $!\n";
    while (<SYSLOG>) {
        do_processing($_);
    }
    close(SYSLOG);
The problem came with syslog daemon restarts. Every time the syslog daemon was stopped, it closed its end of the pipe, the script got an EOF, and it stopped reading.
The first approach was to nest the pipe reading inside an endless loop, like this:
    while (1) {
        open (SYSLOG, "<$named_pipe") or die "Couldn't open $named_pipe: $!\n";
        while (<SYSLOG>) {
            do_processing($_);
        }
        close(SYSLOG);
        print "Syslog was restarted\n" if ($debug);
    }
While it worked, it wasn’t a very nice-looking solution. A couple of alternative ideas involving signal handling and kill -HUP came up, but they were disregarded as well.
A much better-looking approach was found in W. Richard Stevens’ book “UNIX Network Programming, Volume 2, Second Edition: Interprocess Communications”. The idea is simple: the same script that opens the named pipe for reading should also open the same pipe for writing. This way, the pipe will stay open for as long as the script is running.
The following code works:
    open (SYSLOG, "<$named_pipe") or die "Couldn't open $named_pipe for reading: $!\n";
    open (SYSLOG_W, ">$named_pipe") or die "Couldn't open $named_pipe for writing: $!\n";
    while (<SYSLOG>) {
        do_processing($_);
    }
    close(SYSLOG);
A minor improvement was made after reading perldoc -f open, which suggests that the same file can be opened for both reading and writing with a single open call. Here is the changed code:
    open (SYSLOG, "+<$named_pipe") or die "Couldn't open $named_pipe: $!\n";
    while (<SYSLOG>) {
        do_processing($_);
    }
    close(SYSLOG);
Simple and elegant – just as code should be.
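For the curious, here is a slightly more defensive, self-contained variant of the same idea. It is just a sketch: the pipe path is made up, and I use sysopen with O_RDWR instead of the “+<” mode string, which amounts to the same thing:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Fcntl;    # exports O_RDWR

    my $named_pipe = '/var/run/mail.pipe';    # adjust to your setup

    # Opening the FIFO read-write makes this process a writer as well,
    # so the read loop never sees EOF when syslog restarts.
    sysopen(my $syslog, $named_pipe, O_RDWR)
        or die "Couldn't open $named_pipe: $!\n";

    while (my $line = <$syslog>) {
        do_processing($line);
    }
    close($syslog);

    sub do_processing {
        my ($line) = @_;
        print "got: $line";    # stand-in for the real work
    }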
Drop me a line if you know of any other ways to improve the snippet above.
Nice tip! However, is this doable using bash?
I.e. I want to be able to type in console 1:
$ mkfifo thenamedpipe
$ cat .bashrc > thenamedpipe
$ cat .bashrc > thenamedpipe
And in console 2:
$ cat < thenamedpipe
Yes, this should work from bash too.
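Off the top of my head, the same trick would look something like this in bash (a rough sketch; the descriptor number 3 and the loop body are just illustrative):

    # Console 2: open the FIFO read-write on fd 3, so this shell is also
    # a writer and the reads never hit EOF between writers.
    exec 3<> thenamedpipe
    while IFS= read -r line <&3; do
        echo "got: $line"    # replace with real processing
    done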
Tried this out to email myself important syslog messages and it works very nicely.
Thanks
Peter
Good to hear that it worked for you, Peter. Thanks for stopping by.
Hi,
Bumped into your site while researching named pipes.
Wanted to know if there was a way to read/write to a pipe depending on the read/write operation at the other end of the pipe.
Let’s say I have a named pipe, myNamedPipe. If I do echo “hello” > myNamedPipe, since I am writing to the pipe, I want my program to read the data. Instead, if I do vi myNamedPipe, since that involves a read, I want my program to generate data! Is there a way to do this?
Hi Veek.
How about using two named pipes? Or even more? If you want two or more processes (say A and B) to communicate back and forth, I guess the easiest way is to create two named pipes (say AtoB.pipe and BtoA.pipe). Each process would then write to its own pipe and read from the pipe of the other process. A quick sketch of this is below.
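Something like this, from process A’s point of view (a minimal sketch with the pipe names from above and no real error handling):

    # Process A: writes requests to AtoB.pipe, reads replies from BtoA.pipe.
    # Process B does the mirror image. Create both FIFOs up front:
    #   mkfifo AtoB.pipe BtoA.pipe
    # Note: the two processes must open the pipes in complementary order,
    # otherwise both will block in open().
    use IO::Handle;

    open (my $to_b,   '>', 'AtoB.pipe') or die "AtoB.pipe: $!\n";
    open (my $from_b, '<', 'BtoA.pipe') or die "BtoA.pipe: $!\n";
    $to_b->autoflush(1);    # don't let the request sit in a buffer

    print {$to_b} "hello from A\n";
    my $reply = <$from_b>;
    print "A received: $reply";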
I ran into the same problem in a small Perl daemon I am writing. My problem was not related to the beauty of the code, but to the fact that the while (1) { $data = <PIPE>; # …etc } loop keeps running once EOF is hit. This eats CPU, and obviously it is not nice for a daemon to behave like this, especially when it clearly should be waiting.
Your solution is working.
Thanks,
Val