Posted to uk.d-i-y
From: The Natural Philosopher
Subject: Defragging Linux (was: should DIY be a green cause)

On 28/03/16 06:00, Johnny B Good wrote:
On Sat, 26 Mar 2016 09:28:35 +0000, The Natural Philosopher wrote:

On 25/03/16 21:20, Vir Campestris wrote:
On 24/03/2016 22:43, The Natural Philosopher wrote:
Oh the joys of Linux, and no defragging ever unless the disk is 100%
full

I've heard this said, and I never can work out how.

If I put 5000 files on my disk, and delete every alternate one, how can
it not be fragmented?

Well, of course it is somewhat, but the point is that new files tend
to be written in the middle of the biggest free space, depending on
the actual filesystem in use, so they tend to simply grow linearly.

Fragmentation isn't a file in a random place, it's a file in dozens of
random places, so reading the entire contents takes many seeks.

http://www.howtogeek.com/115229/htg-...x-doesnt-need-defragmenting/
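
You can see this directly on Linux: filefrag (from e2fsprogs) reports
how many extents a file occupies. Here's a rough Python sketch along
those lines - you feed it whatever paths you like, and it may need
root for some files:

    #!/usr/bin/env python3
    # Rough sketch: count extents per file using filefrag
    # (e2fsprogs). One extent = contiguous; dozens of extents
    # = dozens of seeks on spinning rust.
    import re
    import subprocess
    import sys

    def extent_count(path):
        # filefrag prints e.g. "/var/log/syslog: 3 extents found"
        out = subprocess.run(["filefrag", path],
                             capture_output=True, text=True).stdout
        m = re.search(r"(\d+) extents? found", out)
        return int(m.group(1)) if m else None

    for path in sys.argv[1:]:
        print(path, extent_count(path))

Run it over a well-used ext4 volume and most files come back with a
single extent.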

I was quite surprised at the explanation. The strategy of scattering
files across the disk volume 'in order to allow them space into which
to grow' seemed so bogus, seeing as most files are edited by writing a
complete new copy and then renaming the original as a backup, either
temporarily, simply to avoid the naming conflict, or as a safety
rollback copy.


Where are you going to put that new file then? Into all the tiny
little gaps left by previously deleted ones, when none of those spaces
is big enough to hold the new, bigger file?

If you delete a single file bang in the middle of a huge gap, you get
the huge gap back, with no clutter.
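
A toy model makes the difference obvious. This little Python sketch
(block counts invented for illustration) models the 'delete every
alternate file' case: first-fit scatters a new file across the small
holes, while aiming at the largest gap keeps it in one piece:

    # Toy model of two allocation policies. A "disk" is a list of
    # free gaps (lengths in blocks); a new file of n blocks goes
    # either first-fit across small gaps, or whole into the
    # largest gap.

    def first_fit(gaps, n):
        """Fill gaps in order; return number of fragments used."""
        frags = 0
        for i, g in enumerate(gaps):
            if n <= 0:
                break
            take = min(g, n)
            if take:
                gaps[i] -= take
                n -= take
                frags += 1
        return frags

    def largest_gap(gaps, n):
        """Put the whole file in the biggest gap if it fits."""
        i = max(range(len(gaps)), key=gaps.__getitem__)
        if gaps[i] >= n:
            gaps[i] -= n
            return 1                   # one contiguous extent
        return first_fit(gaps, n)      # fall back if disk is tight

    # Fifty 4-block holes left by deletions, plus one big
    # 1000-block never-used gap at the end.
    holes = [4] * 50 + [1000]
    print(first_fit(list(holes), 64))    # 16 fragments
    print(largest_gap(list(holes), 64))  # 1 fragment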

snip interesting design of tape FAT

With my strategy of minimising fragmentation of free space being *such*
a no-brainer, I've always assumed MSFT's FAT-based FSes used a similar
strategy, BICBW.


The words in that paragraph are in the wrong order.

I've always assumed MSFT's FAT-based FSes were the result of no brain
and no strategy.

And in fact the FAT structure doesn't necessarily determine the usage
strategy.

That's done by the OS's disk-writing utility libraries.
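
To illustrate the point: the FAT itself is just a linked list of
cluster numbers, and nothing in it dictates which free cluster the
allocator hands out next - that's a policy decision in the driver
code. A minimal sketch, with invented cluster numbers:

    # The FAT is only a linked list: entry i holds the number of
    # the next cluster of the file, or an end-of-chain marker.
    # Which free cluster gets allocated next is a policy choice
    # in the driver, not in the table format.
    EOC = 0xFFF  # end-of-chain marker (FAT12-style)

    fat = {2: 3, 3: 9, 9: 10, 10: EOC}  # file in clusters 2,3,9,10

    def chain(fat, start):
        clusters = []
        c = start
        while c != EOC:
            clusters.append(c)
            c = fat[c]
        return clusters

    print(chain(fat, 2))  # [2, 3, 9, 10] - two extents, one seek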



...

Incidentally (aside from the use of SSDs), the key to minimising
fragmentation-induced performance loss in an MSFT FS is optimised
partitioning of the HDD into three partition spaces (OS, Apps and data
partitions).

Frankly, I don't see much difference between OS and Apps.

In Linux, you tend to have the OS and apps all in one place - that's
read-only data - the user data in another, and, if you are in server
land, the /var subtree, which by default contains system-wide 'moving'
data like mail, log files and databases.
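
If you want to see how such a split comes out on a given box,
something like this Python snippet does it - assuming the conventional
/, /home and /var mount points:

    # Report size and free space per mount point; the mount list
    # assumes a conventional /, /home, /var split.
    import shutil

    for mnt in ("/", "/home", "/var"):
        total, used, free = shutil.disk_usage(mnt)
        print(f"{mnt:8} {total >> 30:4d} GiB total, "
              f"{free >> 30:4d} GiB free")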

Performance-wise, boot time is affected by having the OS on SSD, and
general program startup by having the apps on SSD (/usr/bin and
friends), but the user data as such doesn't need to be so speedy, and
as far as system log files go, they are written in the background, so
you won't notice any speed differential.

What is very important is to have swap on SSD if you can, because it's
much faster at paging in and out.

Assuming you need swap at all.
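
Whether you have swap at all, where it lives, and how eagerly the
kernel uses it are all visible from /proc; a quick peek (Linux-only,
obviously):

    # Show active swap devices and the kernel's eagerness to use
    # them (vm.swappiness, 0-100; higher = swaps sooner).
    with open("/proc/swaps") as f:
        print(f.read().rstrip())
    with open("/proc/sys/vm/swappiness") as f:
        print("vm.swappiness =", f.read().strip())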



snip defrag Windows stuff

--
If you tell a lie big enough and keep repeating it, people will
eventually come to believe it. The lie can be maintained only for such
time as the State can shield the people from the political, economic
and/or military consequences of the lie. It thus becomes vitally
important for the State to use all of its powers to repress dissent, for
the truth is the mortal enemy of the lie, and thus by extension, the
truth is the greatest enemy of the State.

Joseph Goebbels