Posted to uk.d-i-y
Rod Speed
Defragging Linux (was should DIY be a green cause)



"polygonum" wrote in message
...
On 25/03/2016 21:20, Vir Campestris wrote:

If I put 5000 files on my disk, and delete every alternate one, how can
it not be fragmented?


If by "It" you mean your free space, yes, of course. But not one single
file is fragmented by doing that.

The worst aspect of fragmentation under Windows is when a file keeps
growing. The classic example is a log file, but plenty of applications keep
adding to the end of a big file.

If you create a new file, Windows will try to find a space on the disc
which is large enough to allow that file to grow to some pre-determined
value (can't remember - maybe it was 4 MB? - it could be changed). So long
as it is written once, and no larger than that size, it will not fragment
in most cases. If you defrag, then any spare space is likely to be
coalesced.

Something like this:
Create a file and it is placed in a space large enough for 4 MB. Create
another file and that 4 MB of space remains used only by the first file,
plus the free space after it. If that first file is 1 MB and you then
defrag, the other 3 MB will no longer be there to allow growth of the first
file. If the file is a write-once file, that's not much of an issue. If it
is written in numerous small lumps, then having done a defrag removes the
possibility of it growing without fragmenting.
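
To put that in code, here is a rough Python sketch of the two write
patterns (the paths and sizes are invented for the example, and it assumes
an ordinary NTFS drive):

import os

# Pattern 1: written once and no bigger than the space Windows set aside
# for it, so in most cases it ends up in a single contiguous run.
with open(r"C:\temp\write_once.bin", "wb") as f:
    f.write(os.urandom(1024 * 1024))          # 1 MB written in one go

# Pattern 2: the classic log-file pattern - lots of small appends spread
# over time. Once a defrag has coalesced the free space that used to sit
# behind the file, each new extension gets allocated wherever free space
# happens to be, which is how the file picks up fragments.
for _ in range(1000):
    with open(r"C:\temp\growing.log", "ab") as f:
        f.write(b"another small log record\n")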

If you copy an existing file to another drive, Windows will attempt to
find a non-fragmented space large enough for the whole file. Memory
suggests it will use the smallest non-fragmented area large enough for the
file - if it can. But, so far as I know, there is no ready technique
available for telling Windows "create a new file and I want it to be able
to grow to 50 MB without fragmenting".
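
One workaround an application can use - a minimal sketch only, with an
invented path, and Windows does not promise the allocation will be
contiguous - is to set the file to its eventual size up front, so the space
is grabbed in one go rather than in dribbles:

# Pre-extend the file to the size it is expected to reach; the application
# then overwrites the reserved region from the start and has to keep track
# of how much of it is really in use. 50 MB is just the figure from above.
TARGET_SIZE = 50 * 1024 * 1024

with open(r"C:\temp\will_grow.dat", "wb") as f:
    f.truncate(TARGET_SIZE)          # reserve the full size immediately
    f.seek(0)
    f.write(b"first real data goes here\n")
    # ... later writes land inside the space that was reserved up front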

On some systems I used to deal with, you could see one particular file
would fragment severely. Simply copying it, preferably to another drive
and back, would reduce fragmentation dramatically.
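
In script form the trick looks something like this (all paths invented for
the example, and the file obviously needs to not be in use at the time):

import os
import shutil

# Copy the badly fragmented file off to another drive, copy it back, and
# swap the fresh copy in for the original. The copy written back gets
# freshly allocated space, which is what cuts the fragmentation down.
src     = r"C:\data\big_growing_file.dat"
staging = r"D:\staging\big_growing_file.dat"
fresh   = src + ".fresh"

shutil.copy2(src, staging)       # copy off to the other drive
shutil.copy2(staging, fresh)     # copy back onto the original drive
os.replace(fresh, src)           # swap the fresh copy in for the original
os.remove(staging)               # tidy up the staging copy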


But the point is that with log files it doesn't really matter if they are
severely fragmented, because you don't normally care how long it
takes to move thru the entire file linearly. At most you just browse it
occasionally, and so the speed at which you move thru the log file
is determined by how fast you can read it yourself, not the speed at which
the heads can move between fragments.

And with the other main situation where you do see very large
files increase in size over time, database files, you don't normally
move thru them linearly either.

The main time you do move thru very large files which get added to over
time linearly is when you are backing them up, but again that normally
happens in the background, so you don't care if it takes a bit longer when
it's fragmented.