Archive of UserLand's first discussion group, started October 5, 1998.

Re: tail -f in a web page

Author: Dan Lyke
Posted: 11/5/1999; 3:43:03 PM
Topic: A question for Apache gurus
Msg #: 12765 (In response to 12764)
Prev/Next: 12764 / 12766

Brent, memory is cheap (or at least cheap enough that caching the last few lines in it rather than rereading the document is probably a good idea). I'm not a Perl god, and there's probably some obscure primitive or idiom that does this in a line and a half, and with three extra punctuation characters will floss your cat and put away your ironed clothing too, but why not:

#!/usr/bin/perl -w
use strict;

my @lines;
my $linecount = 4;    # how many trailing lines to keep
my $currline  = 0;

# keep only the last $linecount lines in a ring buffer
while (<>) {
    $lines[$currline] = $_;
    $currline = ($currline + 1) % $linecount;
}

# walk the ring starting from the oldest slot; the lines still carry
# their own newlines, and slots stay undef when the input was shorter
for my $i (0 .. $linecount - 1) {
    my $line = $lines[($i + $currline) % $linecount];
    print $line if defined $line;
}

Or, if you've got huge lines and buffering the line is too expensive for you, you could use "seek" and "tell" in a ring buffer in conjunction with your reads.

Another possibility would be to use that seek and tell in the refresh URL, so that you don't have to reread the file every time through. Be sure to timestamp your refresh URLs: if someone truncates the file between refreshes and it then grows past the size it had when it was truncated, the timestamp gives you a chance of detecting this and dealing with it appropriately.

Also (at this point Dave is groaning and saying "No, not the protocol rant again!"), telnet is a perfectly acceptable and much lower-overhead protocol for doing tail -f. Any particular reason you're not just using that? Or even server-push HTTP?




This page was archived on 6/13/2001; 4:53:22 PM.

© Copyright 1998-2001 UserLand Software, Inc.