PBrooks at mhg.co.za
Mon May 5 13:50:05 CEST 2003
> The program I've been developing obviously has to be 100% stable :)
> You can't be 100% stable on a multi-user system unless you use a write-through:
> you flush the buffer to the disc and then do the error checking - even after
> you have written your buffer, another program of higher priority might fill
> the disk so your buffer can't get to it, and your error will only come at
> your next write.
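The write-through-and-check approach above can be sketched as follows. This is a minimal sketch, assuming a POSIX system (`os.fsync`); the file name "app.log" is illustrative and not from the original mail:

```python
# Minimal sketch of a durable log append: write, then force the data
# through the kernel buffers and check for errors immediately.
import os

def durable_append(path: str, line: str) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        data = line.encode()
        written = os.write(fd, data)
        if written != len(data):
            # A short write is the first sign the disk may be full.
            raise OSError("short write: disk may be full")
        # fsync forces the buffer to disc, so a full disk surfaces as
        # an error here rather than at some later write.
        os.fsync(fd)
    finally:
        os.close(fd)

durable_append("app.log", "event: something happened\n")
```

Without the `fsync`, the `write` can succeed while the data still sits in kernel buffers, and the failure only shows up later - which is exactly the problem described above.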
> The safest writes are to space that you already have allocated - random
> writes. That is what databases do. In the case of a log-file, make your
> file circular and pre-allocate it to a certain size; then you will overwrite old
> messages, but you will never fill up the disk.
> This is a bit more work for people who read the log-file later, but you can
> write a small reader to do it for them. There are lots of ways of doing
> this, but to be safe it is best to write your new record to the circular
> file - making sure it writes through any system buffers - then write
> the record number to the start of the file. That way your
> log-file reader can easily show the last n records which are usually the
> ones of interest.
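A sketch of such a circular log, with a small header holding the next record number and a reader that shows the last n records. The mail doesn't specify a layout, so the fixed record size, record count, and all names here are illustrative assumptions:

```python
# Circular log: pre-allocated fixed-size file, so writes can never
# grow it and can never fill the disk. An 8-byte header at offset 0
# holds the count of records written so far.
import os
import struct

RECORD_SIZE = 64              # fixed-size records (assumption)
NUM_RECORDS = 1000            # history length "x" (assumption)
HEADER = struct.Struct("<Q")  # next record number, 8 bytes

def create_log(path: str) -> None:
    with open(path, "wb") as f:
        f.write(HEADER.pack(0))
        f.write(b"\0" * RECORD_SIZE * NUM_RECORDS)  # pre-allocate

def append(path: str, message: str) -> None:
    fd = os.open(path, os.O_RDWR)
    try:
        n, = HEADER.unpack(os.pread(fd, HEADER.size, 0))
        slot = n % NUM_RECORDS
        rec = message.encode()[:RECORD_SIZE].ljust(RECORD_SIZE, b"\0")
        os.pwrite(fd, rec, HEADER.size + slot * RECORD_SIZE)
        os.fsync(fd)                        # record on disc first...
        os.pwrite(fd, HEADER.pack(n + 1), 0)
        os.fsync(fd)                        # ...then the header
    finally:
        os.close(fd)

def last_records(path: str, count: int) -> list:
    """The small reader: return the last `count` messages."""
    with open(path, "rb") as f:
        n, = HEADER.unpack(f.read(HEADER.size))
        out = []
        for i in range(max(0, n - count), n):
            f.seek(HEADER.size + (i % NUM_RECORDS) * RECORD_SIZE)
            out.append(f.read(RECORD_SIZE).rstrip(b"\0").decode())
        return out
```

Syncing the record before updating the header means a crash leaves either the old or the new count on disc, never a header pointing at a half-written record.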
> On an msdos system you can make it fairly friendly by giving your
> file an extension that you map to your reader - then people using it don't
> need to be aware of how it's done.
Thanks for that response, I guess it isn't possible to do on a
multi-user system as you said. However, a circular file would mean you'd
only have a log-file history of length x ... Interesting way of doing
it - it would mean you'd never be able to view any logs beyond x, since they
would have been overwritten.
Not necessarily. Good practice, say on a security logging system, is to use
two circular log files. When one fills up you switch to the other - then you
compress and back up the full log-file to tape/CD ROM/ftp server and send an
alarm if you find that the log-files are filling up too quickly. That way one
security denial-of-service attack (causing log-files to fill up discs or wrap
around) is avoided. Over time you can size your log files so that they
automatically back up and switch every day or two.
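The two-file scheme might look like the sketch below. All names and the size limit are hypothetical, and "backing up to tape/CD ROM/ftp server" is stood in for by a local gzip copy:

```python
# Two circular/rotating log files: when the active one reaches its
# size limit, back it up (here: compress locally), truncate it for
# reuse, switch to the other file, and warn if switches come too often.
import gzip
import os
import time

LIMIT = 4096  # bytes per log file; illustrative

class DualLog:
    def __init__(self, path_a: str, path_b: str):
        self.paths = [path_a, path_b]
        self.active = 0
        self.last_switch = time.time()

    def write(self, line: str) -> None:
        path = self.paths[self.active]
        size = os.path.getsize(path) if os.path.exists(path) else 0
        if size + len(line) > LIMIT:
            self._switch()
            path = self.paths[self.active]
        with open(path, "a") as f:
            f.write(line)
            f.flush()
            os.fsync(f.fileno())  # write through, as discussed above

    def _switch(self) -> None:
        full = self.paths[self.active]
        # Back up the full file; shipping to tape/ftp would go here.
        with open(full, "rb") as src, gzip.open(full + ".gz", "wb") as dst:
            dst.write(src.read())
        open(full, "w").close()   # truncate for later reuse
        self.active = 1 - self.active
        now = time.time()
        if now - self.last_switch < 60:
            print("warning: log files filling up too quickly")
        self.last_switch = now
```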
A really sensible approach on a unix system is to treat syslog this way and
use syslog calls for all your messages. That means that they are interleaved
with system and other messages (easy to disentangle with greps) so that you
can get the full context of an error. If everybody did that then there
wouldn't be all those hundreds of irritating log files all over the place!
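As a sketch of the syslog approach, assuming a Unix system: Python's standard syslog module wraps the same C syslog(3) calls a Pascal program would use, and the ident "myapp" is illustrative.

```python
# Route application messages through syslog so they interleave with
# system messages; later they can be pulled back out with
#   grep myapp /var/log/syslog
import syslog

syslog.openlog("myapp", syslog.LOG_PID, syslog.LOG_USER)
syslog.syslog(syslog.LOG_INFO, "service started")
syslog.syslog(syslog.LOG_ERR, "could not open data file")
syslog.closelog()
```

The ident string ("myapp") is what makes your messages easy to disentangle with greps while keeping the surrounding system context.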
More information about the fpc-pascal mailing list