The Ghost In The Machine
 
Subject: Are PC surge protectors needed in the UK?

In sci.physics, on Sun, 11 Jul 04 09:29:22 GMT, it was written:

In article, "John Gilmer" wrote:
snip-valiantly refraining from comment

- support for running a defragmenter while the volume is mounted.
(Don't ask.)


Well, I understand what the defragmenter does in a FAT system, but
since I still don't understand how files are stored, I can't
understand how they get either fragmented or defragmented.


Consider a file system that writes empty blocks in numerical sequential
order. Now think of a file that's deleted. This leaves an empty
"hole" in the filled blocks. Now make a file whose size is less
than the "hole". Now you have a smaller hole that will be filled
with the next file that is written. That file is larger than the
hole so the hole gets filled, then the next block that isn't filled
is found and written into. Over time, all files, when viewed from
the geometry of the physical disk, look like Swiss cheese.

A defragmenter takes the whole file system and rewrites each file
such that all its block numbers are monotonically increasing.
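The hole-filling and rewriting behavior described above can be modeled
with a toy first-fit allocator. This is an illustrative sketch, not any
real filesystem's API; `allocate`, `delete`, and `defragment` are made-up
names operating on a plain list of blocks:

```python
# Toy first-fit allocator: new files fill the first free blocks found,
# so a file larger than a freed "hole" gets split across non-adjacent
# blocks. A disk is just a list; None marks a free block.

def allocate(disk, name, nblocks):
    """Place nblocks of `name` into the first free slots (first-fit)."""
    placed = 0
    for i, slot in enumerate(disk):
        if slot is None:
            disk[i] = name
            placed += 1
            if placed == nblocks:
                return

def delete(disk, name):
    """Free every block belonging to `name`, leaving a hole."""
    for i, slot in enumerate(disk):
        if slot == name:
            disk[i] = None

def defragment(disk):
    """Rewrite each file so its blocks are contiguous and ascending."""
    order = []
    for slot in disk:
        if slot is not None and slot not in order:
            order.append(slot)
    packed = []
    for name in order:
        packed.extend([name] * disk.count(name))
    return packed + [None] * (len(disk) - len(packed))

disk = [None] * 10
allocate(disk, "A", 4)   # A A A A . . . . . .
allocate(disk, "B", 2)   # A A A A B B . . . .
delete(disk, "A")        # . . . . B B . . . .  <- a 4-block hole
allocate(disk, "C", 6)   # C fills the hole, skips past B, continues
print(disk)              # ['C','C','C','C','B','B','C','C',None,None]
print(defragment(disk))  # ['C','C','C','C','C','C','B','B',None,None]
```

After the last `allocate`, file C sits in blocks 0-3 and 6-7 with B in
between: the Swiss-cheese effect. The defragment pass rewrites each file
into a single run of monotonically increasing block numbers.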

Now, where this gets really, really ****ed up is when the defragger
program "forgets" which should be the next block (real easy to do
with off-by-one bugs) or has to call its error handling when it
can't do a fit or the block chain pointers become broken. The
last one is a feature of all Misoft OSes because of memory
management problems--but that's another nightmare in the not-an-OS
biz.


Indeed. In Linux, there's no defragger[*], because the filesystem
code in Linux is a little smarter. I'd admittedly have
to look up the details, though, and ext2's organization
is quite different from FAT's or NTFS's. FAT in particular
is terrible, basically every file is a single chain --
but you probably knew that already. NTFS is more or less
as I've described it in my prior post, at a high level,
and it feels like an engineered solution, whereas Linux's
ext2 is more elegant, even if it's still engineered.
But there's no perfect solution anyway; as you've described
the problem, there's always going to be a hole or two,
and a determined program can probably fragment any file
system if it does something like the following:

open big file
write block to big file
open little file
write block to little file
close it
write block to big file
open little file
write block to little file
close it
write block to big file
...

(It's a good thing the trend is towards centralized dedicated-machine
syslog-type logging. :-) )

I'll admit to wondering whether NT had the rather interesting
capability of "let's just write it here". I base
this hypothesis on observations using DiskKeeper Lite, a copy
of which I had at the time on a machine at my then-employer.
Basically, the notion is to simply write the new block at
an open sector in the cylinder over which the head is
currently flying.

Of course this would fragment things terribly, and I have no proof.
But things did fragment pretty badly when I used such tools
as Visual C++.



snip

/BAH

Subtract a hundred and four for e-mail.

[*] actually, there is, but it's very rarely used.

--
#191,

It's still legal to go .sigless.