#121
Posted to uk.d-i-y
should DIY be a green cause
On 26/03/16 02:04, Andy Burns wrote:
> Johnny B Good wrote:
>> It's the home computer equivalent of Microsoft's Cockamaimee Pagefile default settings in windows, designed to accelerate "System Senility" by aggravating the effects of file system fragmentation due to normal file writing activities, further aggravated by the endless file churn from the never-ending stream of windows updates and fixes.
>
> Pagefiles were a necessary evil when you could only afford half as much RAM as you needed; now I just buy twice the RAM I think I'll need and run without a pagefile.

That may or may not work depending on what the system is doing. If you have a system that is running many, many processes, a lot of which are dormant, paging is a necessary evil.

> Outside of datacentres (and gaming?) demand for "MORE POWER!" seems to have died out a few years before Moore's Law hits the buffers (Intel are giving up on tick-tock).

Mmm. I think once it's fast enough to run realistic 3D games at 50 fps, it's fast enough for almost everything bar massive numerical computation. Or it's a problem that can be solved by massive parallelism anyway. Computers are actually 'good enough' now for nearly everything. The only thing that takes time on mine is video editing/processing.

--
A lie can travel halfway around the world while the truth is putting on its shoes.
#122
Posted to uk.d-i-y
should DIY be a green cause
On 26/03/16 09:18, Andrew Gabriel wrote:
> In article , John Rumm writes:
>> On 23/03/2016 03:39, Bill Wright wrote:
>>> In the interests of the conservation of materials and energy, should not DIY, especially the repair of goods, be a priority for the environmental movement?
>>
>> It must present an interesting conflict for many a green... on the one hand it is a very good fit with the political ideology, and yet on the other the knowledge requirements and attention to detail required are likely to be counter to the (lack of) thought process that makes much green policy seem even plausible in the first place.
>
> If there was a green movement based on sound science and economics, I would join it. Unfortunately, most green activists don't know anything about either of them, and worse still, don't even realise they don't know anything about them.

The Green movement got taken over by the marketing departments of major corporates years ago. It is now reactionary, illiberal and dedicated to its own existence and its sponsors' profits.

--
If you tell a lie big enough and keep repeating it, people will eventually come to believe it. The lie can be maintained only for such time as the State can shield the people from the political, economic and/or military consequences of the lie. It thus becomes vitally important for the State to use all of its powers to repress dissent, for the truth is the mortal enemy of the lie, and thus by extension, the truth is the greatest enemy of the State.

Joseph Goebbels
#123
Posted to uk.d-i-y
should DIY be a green cause
#124
Posted to uk.d-i-y
should DIY be a green cause
"Andrew Gabriel" wrote in message
...
> In article , "ARW" writes:
>> I know but Doug looks well. http://wiki.diyfaq.org.uk/index.php/File:Mini.jpg
>
> I had one a bit like that (PKX 886M). Sold it in 1985 IIRC. Used to see it for a couple of years afterwards, gradually growing extra fog lights, walnut dashboard, coach lines, etc. Owner probably got up one morning to find a pile of rust on the driveway, topped off with walnut panel and chrome fog lights... Parents had one many years before in British Racing Green, with sliding windows and pull-cord door openers. (ABL 270B)

My Mum had LCP140N in lemon yellow. My brother and I crashed it (don't ask, as we were both under 17 years old). And need I say more? http://wiki.diyfaq.org.uk/images/4/42/Letherdrive.jpg

Doug has had more paint spray jobs than an Essex girl has had fake spray tans.

--
Adam
#125
Posted to uk.d-i-y
should DIY be a green cause
"Huge" wrote in message
...
> On 2016-03-26, Andrew Gabriel wrote:
>> In article , John Rumm writes:
>>> On 23/03/2016 03:39, Bill Wright wrote:
>>>> In the interests of the conservation of materials and energy, should not DIY, especially the repair of goods, be a priority for the environmental movement?
>>>
>>> It must present an interesting conflict for many a green... on the one hand it is a very good fit with the political ideology, and yet on the other the knowledge requirements and attention to detail required are likely to be counter to the (lack of) thought process that makes much green policy seem even plausible in the first place.
>>
>> If there was a green movement based on sound science and economics, I would join it. Unfortunately, most green activists don't know anything about either of them, and worse still, don't even realise they don't know anything about them.
>
> *applause*

+1

--
Adam
#126
Posted to uk.d-i-y
should DIY be a green cause
On 24/03/2016 22:43, The Natural Philosopher wrote:
> On 24/03/16 21:26, Chris French wrote:
>> Funny they should introduce automatic defragmentation in Windows 7 then (if not before, can't remember now), which would rather seem to defeat this cunning plan. It seems to work well enough, as if I ever check (rarely) one of my machines it seems to be only slightly fragmented.
>
> Oh the joys of Linux, and no defragging ever unless the disk is 100% full

The same as windows then.
#127
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/2016 09:33, The Natural Philosopher wrote:
> On 25/03/16 22:15, Tim Streater wrote:
>> In article , Vir Campestris wrote:
>>> On 24/03/2016 22:43, The Natural Philosopher wrote:
>>>> Oh the joys of Linux, and no defragging ever unless the disk is 100% full
>>>
>>> I've heard this said, and I never can work out how. If I put 5000 files on my disk, and delete every alternate one, how can it not be fragmented?
>>
>> OS X defrags in the background for files up to 10MB in size, AFAIK. But I've never known it to be an issue anyway and it's never discussed on Mac NGs because it isn't an issue.
>
> All *nix derived kernels have some sort of 'auto-defrag' going on, but the key point is that by using the disk layout more intelligently, there is less need for it as well. Once again the legacy of Windows - a single user system with its roots back in floppy disks - and Unix - a multi-user system designed to work with a very busy disk from the outset - show up.

What a load of cr@p!! If unix was designed so well, why doesn't it still use the original file systems? After all, you are comparing the original windows file systems with unix ones! But then you always compare old windows stuff with unix as you have no idea about windows!

Just for the other people that don't remember the original unix: the file systems were no better than dos and got fragmented in multiple ways, and the only way to defrag them was to take a tape image and reformat and reload the image. This needed to be done frequently or the system would slow to a crawl (about 30% of normal).

> Windows was engineered to sell to unsophisticated users. Unix had to sell to very critical industrial and commercial users, and was engineered to work. "All chrome and tailfins". Mind, with SSDs who cares anyway?

Well quite.
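Vir Campestris's 5000-files thought experiment can be made concrete with a toy block map. This is a sketch only: the "disk" and allocator here are invented for illustration and stand in for no real filesystem, FAT, ext or otherwise.

```python
# Toy block map: write 5000 single-block files, delete every alternate
# one, then count contiguous runs. Each *surviving file* is still in
# one piece; only the free space is shattered into holes.

def file_fragments(disk, fid):
    """Number of contiguous runs of blocks belonging to fid."""
    runs, in_run = 0, False
    for block in disk:
        if block == fid and not in_run:
            runs, in_run = runs + 1, True
        elif block != fid:
            in_run = False
    return runs

disk = [i for i in range(5000)]                         # 5000 one-block files
disk = [fid if fid % 2 == 0 else None for fid in disk]  # delete the odd ones

# every surviving file occupies exactly one contiguous run...
assert all(file_fragments(disk, fid) == 1 for fid in range(0, 5000, 2))
# ...but the free space is now 2500 separate holes
print(file_fragments(disk, None))   # 2500
```

Which is the distinction polygonum draws later in the thread: the deletion pattern fragments the free space, not any file.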
#128
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/2016 09:28, The Natural Philosopher wrote:
> On 25/03/16 21:20, Vir Campestris wrote:
>> On 24/03/2016 22:43, The Natural Philosopher wrote:
>>> Oh the joys of Linux, and no defragging ever unless the disk is 100% full
>>
>> I've heard this said, and I never can work out how. If I put 5000 files on my disk, and delete every alternate one, how can it not be fragmented?
>
> Well of course it is somewhat, but the point is that new files tend to be written in the middle of the biggest free space, depending on the actual disk format in use, so they tend to simply grow linearly. Fragmentation isn't a file in a random place, it's a file in dozens of random places, so to get the entire contents takes many seeks.
> http://www.howtogeek.com/115229/htg-...defragmenting/

That link includes: "Because of the way this approach works, you will start to see fragmentation if your file system fills up. If it's 95% (or even 80%) full, you'll start to see some fragmentation. However, the file system is designed to avoid fragmentation in normal use. If you do have problems with fragmentation on Linux, you probably need a larger hard disk. If you actually need to defragment a file system, the simplest way is probably the most reliable: Copy all the files off the partition, erase the files from the partition, then copy the files back onto the partition. The file system will intelligently allocate the files as you copy them back onto the disk."

So the linux solution to fragmentation is to waste up to 20% of disk space and to do manual disk defragmenting, as was the norm for unix in the seventies.

> Also the way Linux aggressively caches the disk means that such fragmentation as there is tends not to be such a performance hit.

Windows caches a lot, so much so that *you* complain that it uses more memory than a linux machine. Linux must be wasting the memory if it isn't using it for cache like windows does.

> Mind, with SSDs who cares anyway?

Indeed.

Andy
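The "middle of the biggest free space" idea can be sketched with a toy allocator. Everything here (`place`, `grow`, the 64-block disk, the file sizes) is invented purely to illustrate the trade-off and models no real filesystem: first-fit packs files end to end, so later growth fragments them, while placing each file mid-gap leaves headroom so growth stays linear.

```python
# Compare two placement policies: "first fit" packs files end to end;
# "mid-gap" drops each new file into the middle of the biggest free
# gap. Then grow every file and count how many ended up fragmented.

def gaps_of(disk):
    """Free runs as (start, length); the sentinel flushes a trailing run."""
    out, start = [], None
    for i, b in enumerate(disk + ["END"]):
        if b is None and start is None:
            start = i
        elif b is not None and start is not None:
            out.append((start, i - start))
            start = None
    return out

def place(disk, fid, size, mid_gap):
    if mid_gap:
        s, ln = max(gaps_of(disk), key=lambda g: g[1])
        pos = s + (ln - size) // 2                            # middle of biggest gap
    else:
        pos = next(s for s, ln in gaps_of(disk) if ln >= size)  # first fit
    disk[pos:pos + size] = [fid] * size

def grow(disk, fid, size):
    # extend in place if the blocks right after the file are free,
    # otherwise fall back to a first-fit extent somewhere else
    last = max(i for i, b in enumerate(disk) if b == fid)
    ext = range(last + 1, last + 1 + size)
    if ext.stop <= len(disk) and all(disk[i] is None for i in ext):
        disk[ext.start:ext.stop] = [fid] * size
    else:
        place(disk, fid, size, mid_gap=False)

def fragments(disk, fid):
    runs, in_run = 0, False
    for b in disk:
        if b == fid and not in_run:
            runs, in_run = runs + 1, True
        elif b != fid:
            in_run = False
    return runs

results = {}
for mid_gap in (False, True):
    disk = [None] * 64
    for fid in range(3):
        place(disk, fid, 4, mid_gap)   # three 4-block files
    for fid in range(3):
        grow(disk, fid, 4)             # each later doubles in size
    results[mid_gap] = sum(1 for fid in range(3) if fragments(disk, fid) > 1)

print(results)   # {False: 3, True: 0}
```

With first-fit all three files fragment when they grow; with mid-gap placement none do, at the price of scattering the free space, which is why this strategy degrades once the disk fills up.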
#129
Posted to uk.d-i-y
should DIY be a green cause
On 26/03/2016 05:15, Bill Wright wrote:
> On 24/03/2016 03:47, Johnny B Good wrote:
>> The original repair, just over a year ago, involved gluing the wishbone-shaped plastic operating lever back together using thin paxolin splints with a 2-part epoxy resin glue. It just seemed a disgraceful failure mode for the sake of not paying the attention to the detail it deserved in its design (sharp 45 degree bends instead of organic curves to avoid stress-concentration mediated failure - it was just begging to fail). This time I used a half-mm drill and a couple of 8mm lengths of stainless steel wire to beef up the second glue repair. Unfortunately, this started acting up again just a few days ago and before I could have yet another go, SWMBI decided it was time for a new kettle, hence the visit to Argos today (actually, yesterday as I type this).
>
> Are you short of something to ****ing do? If so come round here.
>
> Bill

Sometimes it's easy to fix and saves a lot. I dropped the drawer from the freezer and cracked it at the front where you pulled; at £40+ for a new one it made sense to cut up an old credit card and glue it over the cracks to effect a repair. It also only took a few minutes and has lasted over a year now.
#130
Posted to uk.d-i-y
should DIY be a green cause
In article ,
Bill Wright wrote:
>> And the self employed don't get wages. A fundamental principle of being self employed.
>
> Many self-employed people set themselves up as a company and pay themselves a wage, and this wage is not directly related to profits.

Then they are no longer self employed. I'd have thought you'd know the difference.

--
*Great groups from little icons grow*

Dave Plowman London SW
To e-mail, change noise into sound.
#131
Posted to uk.d-i-y
should DIY be a green cause
In article ,
Andrew Gabriel wrote:
> If there was a green movement based on sound science and economics, I would join it. Unfortunately, most green activists don't know anything about either of them, and worse still, don't even realise they don't know anything about them.

Sounds pretty much like any politician.

--
*Arkansas State Motto: Don't Ask, Don't Tell, Don't Laugh.*

Dave Plowman London SW
To e-mail, change noise into sound.
#132
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/16 13:01, dennis@home wrote:
> On 26/03/2016 09:33, The Natural Philosopher wrote:
> <snip>
> What a load of cr@p!! If unix was designed so well, why doesn't it still use the original file systems? After all, you are comparing the original windows file systems with unix ones! But then you always compare old windows stuff with unix as you have no idea about windows!
>
> Just for the other people that don't remember the original unix: the file systems were no better than dos and got fragmented in multiple ways, and the only way to defrag them was to take a tape image and reformat and reload the image. This needed to be done frequently or the system would slow to a crawl (about 30% of normal).

... But then you always compare old unix stuff with windows as you have no idea about unix!

I can't remember anything as bad on Unix as FAT since I first touched a Unix system in about 1984. In fact Unix has NEVER had such a bad file system as FAT. Not from the word go. And they have always been at least ten years ahead of MSDOS/Windows.

--
The theory of Communism may be summed up in one sentence: Abolish all private property.

Karl Marx
#133
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/16 13:09, dennis@home wrote:
> Windows caches a lot, so much so that *you* complain that it uses more memory than a linux machine.

I have never complained about it using more memory than anything, dear.

> Linux must be wasting the memory if it isn't using it for cache like windows does.

I think you mean 'Windows must be wasting the memory if it isn't using it for cache like Linux does'. It's rare to see any genuinely 'free' memory of any size on a linux machine unless it's just been booted:

$ free
             total       used       free     shared    buffers     cached
Mem:       8108788    7631804     476984      53852     559556    4654828

8GB of RAM, of which at the time of writing less than half a gig is 'free' and everything else is buffers, cache or in use...

--
"What do you think about Gay Marriage?" "I don't." "Don't what?" "Think about Gay Marriage."
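The `free` figures quoted above make the point arithmetically: old-style `free` (procps before it grew the "available" column) counts page cache as "used", yet the kernel hands those pages back on demand, so the memory realistically available to programs is free + buffers + cached. A small sketch, with the column layout assumed from the output shown:

```python
# Sum the reclaimable columns of an old-style `free` "Mem:" row
# (figures in KiB, taken from the post above) to get the memory
# effectively available to applications.

def effectively_free(total, used, free, shared, buffers, cached):
    # buffers and page cache are dropped by the kernel under pressure
    return free + buffers + cached

avail = effectively_free(8108788, 7631804, 476984, 53852, 559556, 4654828)
print(avail)                        # 5691368 KiB
print(round(avail / 1024 ** 2, 1))  # about 5.4 of the 8 GB is reclaimable
```

So "less than half a gig free" overstates the pressure: roughly 5.4 GB of that machine's RAM could be reclaimed for applications.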
#134
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/2016 15:19, The Natural Philosopher wrote:
> On 26/03/16 13:01, dennis@home wrote:
> <snip>
> ... But then you always compare old unix stuff with windows as you have no idea about unix!

I have more idea than you.

> I can't remember anything as bad on Unix as FAT since I first touched a Unix system in about 1984

Well that's because the likes of me had been using unix for years before you and had identified the problems and got them sorted. I was using unix even before they had introduced paging rather than swapping. You do know they are different?

> In fact Unix has NEVER had such a bad file system as FAT. Not from the word go. And they have always been at least ten years ahead of MSDOS/Windows.
#135
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/2016 15:26, The Natural Philosopher wrote:
> On 26/03/16 13:09, dennis@home wrote:
> <snip>
> It's rare to see any genuinely 'free' memory of any size on a linux machine unless it's just been booted:
>
> $ free
>              total       used       free     shared    buffers     cached
> Mem:       8108788    7631804     476984      53852     559556    4654828
>
> 8GB of RAM, of which at the time of writing less than half a gig is 'free' and everything else is buffers, cache or in use...

That is 1/2 a gig of wasted RAM. It could be caching a filesystem.
#136
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/16 16:40, dennis@home wrote:
> On 26/03/2016 15:19, The Natural Philosopher wrote:
> <snip>
>> ... But then you always compare old unix stuff with windows as you have no idea about unix!
>
> I have more idea than you.
>
>> I can't remember anything as bad on Unix as FAT since I first touched a Unix system in about 1984
>
> Well that's because the likes of me had been using unix for years before you and had identified the problems and got them sorted. I was using unix even before they had introduced paging rather than swapping. You do know they are different?

Yes precious, I do.

So you must have been using unix before MSDOS was even invented then? Hardly jibes with your claim that Unix was worse than MSDOS! Hard to be worse than a system that doesn't exist! Still, you always were a prat.

> In fact Unix has NEVER had such a bad file system as FAT. Not from the word go. And they have always been at least ten years ahead of MSDOS/Windows.

--
"What do you think about Gay Marriage?" "I don't." "Don't what?" "Think about Gay Marriage."
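The paging-versus-swapping distinction being argued over can be shown in a few lines. Everything here (the helper names, the page size, the victim-process numbers) is invented purely to illustrate it: a classical swapper writes out a victim process's entire image to satisfy any shortfall, while a demand pager evicts only as many fixed-size pages as it needs.

```python
# Toy contrast: under memory pressure, whole-process swapping moves
# the victim's full image to disk; demand paging evicts just enough
# individual pages to cover the shortfall.

PAGE_KIB = 4

def swap_out(victim_kib, shortfall_kib):
    # swapping: all of the victim process's memory hits the disk
    return victim_kib

def page_out(victim_kib, shortfall_kib):
    # paging: evict just enough whole pages to cover the shortfall
    needed_pages = -(-shortfall_kib // PAGE_KIB)   # ceiling division
    return min(victim_kib, needed_pages * PAGE_KIB)

victim, shortfall = 2048, 64     # 2 MiB process, 64 KiB short
print(swap_out(victim, shortfall))   # 2048 KiB of I/O
print(page_out(victim, shortfall))   # 64 KiB of I/O
```

The 32-fold difference in I/O for the same shortfall is why paging displaced swapping: dormant processes cost almost nothing to keep partially resident.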
#137
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/16 16:42, dennis@home wrote:
> On 26/03/2016 15:26, The Natural Philosopher wrote:
> <snip>
>> 8GB of RAM, of which at the time of writing less than half a gig is 'free' and everything else is buffers, cache or in use...
>
> That is 1/2 a gig of wasted RAM. It could be caching a filesystem.

It couldn't be, because I haven't got anything that needs caching.

--
Future generations will wonder in bemused amazement that the early twenty-first century's developed world went into hysterical panic over a globally averaged temperature increase of a few tenths of a degree, and, on the basis of gross exaggerations of highly uncertain computer projections combined into implausible chains of inference, proceeded to contemplate a rollback of the industrial age.

Richard Lindzen
#138
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/2016 16:44, The Natural Philosopher wrote:
> On 26/03/16 16:40, dennis@home wrote:
> <snip>
>> Well that's because the likes of me had been using unix for years before you and had identified the problems and got them sorted. I was using unix even before they had introduced paging rather than swapping. You do know they are different?
>
> Yes precious, I do. So you must have been using unix before MSDOS was even invented then? Hardly jibes with your claim that Unix was worse than MSDOS!

I see you are resorting to lies. I have never said MSDOS was better than unix; you are just imagining stuff again.

> Hard to be worse than a system that doesn't exist! Still, you always were a prat.

You are in need of therapy. Have you had the test done yet?

> In fact Unix has NEVER had such a bad file system as FAT. Not from the word go. And they have always been at least ten years ahead of MSDOS/Windows.
#139
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/16 17:00, dennis@home wrote:
> On 26/03/2016 16:44, The Natural Philosopher wrote:
> <snip>
>> So you must have been using unix before MSDOS was even invented then? Hardly jibes with your claim that Unix was worse than MSDOS!
>
> I see you are resorting to lies.

No, that would be you:

"Just for the other people that don't remember the original unix the file systems were no better than dos"

Except that at the time, DOS didn't exist.

> I have never said MSDOS was better than unix; you are just imagining stuff again.

See above.

> You are in need of therapy. Have you had the test done yet?

--
...it should be clear by now to everyone that activist environmentalism (or environmental activism) is becoming a general ideology about humans, about their freedom, about the relationship between the individual and the state, and about the manipulation of people under the guise of a 'noble' idea. It is not an honest pursuit of 'sustainable development,' a matter of elementary environmental protection, or a search for rational mechanisms designed to achieve a healthy environment. Yet things do occur that make you shake your head and remind yourself that you live neither in Joseph Stalin's Communist era, nor in the Orwellian utopia of 1984.

Vaclav Klaus
#140
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 25/03/2016 21:20, Vir Campestris wrote:
> If I put 5000 files on my disk, and delete every alternate one, how can it not be fragmented?

If by "it" you mean your free space, yes, of course. But not one single file is fragmented by doing that.

The worst aspect of fragmentation under Windows is when a file keeps growing. Classic is a log file, but plenty of applications keep adding to the end of a big file. If you create a new file, Windows will try to find a space on the disc which is large enough to allow that file to grow to some pre-determined value (can't remember - maybe it was 4 MB? - it could be changed). So long as it is written once, and no larger than that size, it will not fragment in most cases.

If you defrag, then any spare space is likely to be coalesced. Something like this: create a file and it is placed in a space large enough for 4 MB. Create another file and that 4 MB of space remains used only by the first file and free space after it. If that first file is 1 MB, then you defrag, the other 3 MB will no longer be there to allow growth of the first file. If the file is a write-once file, then not much of an issue. If it is written in numerous small lumps, then having done a defrag removes the possibility of it growing without fragmenting.

If you copy an existing file to another drive, Windows will attempt to find a non-fragmented space large enough for the whole file. Memory suggests it will use the smallest non-fragmented area large enough for the file - if it can. But, so far as I know, there is no ready technique available for telling Windows "create a new file and I want it to be able to grow to 50 MB without fragmenting".

On some systems I used to deal with, you could see one particular file would fragment severely. Simply copying it, preferably to another drive and back, would reduce fragmentation dramatically.

--
Rod
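As a footnote to the "grow to 50 MB without fragmenting" wish: the usual workaround is to pre-size the file, which is expressible from portable Python. Treat this as a sketch: `truncate()` only sets the logical size, and whether the blocks end up contiguous (or allocated at all, on sparse-capable filesystems) is still up to the filesystem.

```python
# Pre-size a log file so the allocator can try to place it in one run
# before the data trickles in. The file is created in a temporary
# directory purely for demonstration.
import os
import tempfile

def preallocate(path, size_bytes):
    with open(path, "wb") as f:
        f.truncate(size_bytes)   # extend the logical size in one operation

with tempfile.TemporaryDirectory() as d:
    log = os.path.join(d, "app.log")
    preallocate(log, 50 * 1024 * 1024)   # reserve a 50 MB logical size
    print(os.path.getsize(log))          # 52428800
```

On Linux, `os.posix_fallocate(fd, 0, size)` goes a step further and asks the filesystem to actually reserve the blocks; on Windows the nearest native calls are `SetEndOfFile` and `SetFileValidData`. Neither guarantees a single contiguous extent, only a much better chance of one.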
#141
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/2016 17:11, The Natural Philosopher wrote:
> On 26/03/16 17:00, dennis@home wrote:
> <snip>
>> I see you are resorting to lies.
>
> No, that would be you

Still trying, I see.

> "Just for the other people that don't remember the original unix the file systems were no better than dos"
>
> Except that at the time, DOS didn't exist.

And just what difference does that make to the statement I made? It's quite simple to understand.

>> I have never said MSDOS was better than unix; you are just imagining stuff again.
>
> See above.

Where did I say unix was worse than dos? Have you had the test yet?
#142
Posted to uk.d-i-y
should DIY be a green cause
Dave Plowman (News) wrote
> Bill Wright wrote
>> Dave Plowman (News) wrote
>>> And the self employed don't get wages. A fundamental principle of being self employed.
>> Many self-employed people set themselves up as a company and pay themselves a wage, and this wage is not directly related to profits.
> Then they are no longer self employed.

Course they are when it is their company.
#143
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
"polygonum" wrote in message ...
> On 25/03/2016 21:20, Vir Campestris wrote:
>> If I put 5000 files on my disk, and delete every alternate one, how can it not be fragmented?
> If by "It" you mean your free space, yes, of course. But not one single file is fragmented by doing that.
>
> The worst aspect of fragmentation under Windows is when a file keeps growing. Classic is a log file, but plenty of applications keep adding to the end of a big file. If you create a new file, Windows will try to find a space on the disc which is large enough to allow that file to grow to some pre-determined value (can't remember - maybe it was 4 MB? - it could be changed). So long as it is written once, and no larger than that size, it will not fragment in most cases.
>
> If you defrag, then any spare space is likely to be coalesced. Something like this: create a file and it is placed in a space large enough for 4 MB. Create another file and that 4 MB of space remains used only by the first file, with free space after it. If that first file is 1 MB, then you defrag, the other 3 MB will no longer be there to allow growth of the first file. If the file is a write-once file, then not much of an issue. If it is written in numerous small lumps, then having done a defrag removes the possibility of it growing without fragmenting.
>
> If you copy an existing file to another drive, Windows will attempt to find a non-fragmented space large enough for the whole file. Memory suggests it will use the smallest non-fragmented area large enough for the file - if it can. But, so far as I know, there is no ready technique available for telling Windows "create a new file and I want it to be able to grow to 50 MB without fragmenting".
>
> On some systems I used to deal with, you could see one particular file would fragment severely. Simply copying it, preferably to another drive and back, would reduce fragmentation dramatically.

But the point is that with log files it doesn't really matter if they are severely fragmented, because you don't normally care how long it takes to move through the entire file linearly. At most you just browse it occasionally, and so the speed at which you move through the log file is determined by how fast you can read it, not the speed at which the heads can move between fragments. And with the other main situation where you do see very large files increase in size over time, database files, you don't normally move through them linearly either. The main time you do move linearly through very large files which get added to over time is when you are backing them up, but again that normally happens in the background, so you don't care if it takes a bit longer when it's fragmented.
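polygonum's growing-file scenario is easy to demonstrate with a toy block allocator. The sketch below is purely illustrative (a naive first-fit model, not how NTFS actually allocates): two log files appended to alternately end up interleaved block by block, so each becomes heavily fragmented.

```python
def append_block(disk, name):
    """Append one block to `name` using naive first-fit: take the first free slot."""
    i = disk.index(None)
    disk[i] = name
    return i

def fragments(disk, name):
    """Count runs of contiguous blocks belonging to `name`."""
    runs, prev = 0, False
    for block in disk:
        cur = (block == name)
        if cur and not prev:
            runs += 1
        prev = cur
    return runs

disk = [None] * 20
# Two log files growing in small lumps, interleaved with each other:
for _ in range(5):
    append_block(disk, "log_a")
    append_block(disk, "log_b")

# Each 5-block file now sits in 5 separate fragments.
```

As Rod notes, for a log that is only browsed occasionally those extra seeks barely matter; the cost only shows up when a fragmented file is read end to end.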
#144
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/2016 09:33, The Natural Philosopher wrote:
> Once again the legacy of Windows - a single user system with its roots back in floppy disks - and Unix - a multi-user system designed to work with a very busy disk from the outset - show up.

You do talk a load of crap at times... Windows 3.1, sure a shell on top of DOS. Win NT onward, basically a re-engineering of VMS. Hardly what anyone would describe as a single-user floppy-based system.

--
Cheers,

John.

/=================================================================\
| Internode Ltd - http://www.internode.co.uk |
|-----------------------------------------------------------------|
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/
#145
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/2016 15:26, The Natural Philosopher wrote:
> 8GB of RAM of which at the time of writing less than half a gig is 'free' and everything else is buffers, cache or in use...

My Windows system is reporting 84Mb free out of 3GB. Most of the rest is cache.

Andy
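The 'free' figures both posters quote understate usable memory, since page cache is reclaimed on demand. A rough sketch of the distinction, parsing a hard-coded /proc/meminfo-style sample (field names as on Linux; the numbers here are made up):

```python
SAMPLE = """\
MemTotal:        8388608 kB
MemFree:          524288 kB
Buffers:          262144 kB
Cached:          6291456 kB
"""

def meminfo(text):
    """Parse 'Key:  value kB' lines into a dict of integer kB values."""
    out = {}
    for line in text.splitlines():
        key, rest = line.split(":")
        out[key] = int(rest.split()[0])
    return out

m = meminfo(SAMPLE)
# Memory that is trivially reclaimable counts as effectively free:
effectively_free = m["MemFree"] + m["Buffers"] + m["Cached"]
```

On the hypothetical 8 GiB box above, only ~512 MB is nominally 'free', but roughly 6.75 GiB is actually available once buffers and cache are counted, which is why a nearly-zero free figure is nothing to worry about.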
#146
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 26/03/2016 09:28, The Natural Philosopher wrote:
> Well of course it is somewhat, but the point is that new files tend to be written in the middle of the biggest free space, depending on the actual disk format in use, so they tend to simply grow linearly. Fragmentation isn't a file in a random place, it's a file in dozens of random places, so to get the entire contents takes many seeks. http://www.howtogeek.com/115229/htg-...defragmenting/

Good link. Thanks.

The Windows guys chose to put all their files near the same end of the disk to reduce seeks when the disk is nearly empty. Linux has a different approach - they are scattered all over the place.

Andy
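The placement difference under discussion can be sketched with a toy model of the policy the howtogeek article describes: drop each new file into the middle of the largest free gap so it has room to grow in place. This is an illustrative model only, not the real ext4 allocator.

```python
def largest_gap(disk):
    """Return (start, length) of the longest run of free (None) blocks."""
    best = (0, 0)
    start = None
    for i, block in enumerate(list(disk) + [True]):   # sentinel closes a trailing gap
        if block is None and start is None:
            start = i
        elif block is not None and start is not None:
            if i - start > best[1]:
                best = (start, i - start)
            start = None
    return best

def place_mid_gap(disk, name, size):
    """Place a new file in the middle of the largest free gap (assumes it fits)."""
    start, length = largest_gap(disk)
    at = start + (length - size) // 2
    for i in range(at, at + size):
        disk[i] = name
    return at

disk = [None] * 16
place_mid_gap(disk, "f1", 2)   # lands mid-disk, in blocks 7-8
place_mid_gap(disk, "f2", 2)   # lands mid-way into the remaining biggest gap
```

Each file ends up surrounded by free space, so appends extend it contiguously instead of fragmenting - at the cost of the scattered layout Vir mentions.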
#147
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On Sunday, 27 March 2016 22:57:37 UTC+1, Vir Campestris wrote:
> The Windows guys chose to put all their files near the same end of the disk to reduce seeks when the disk is nearly empty. Linux has a different approach - they are scattered all over the place.

Longer seeks but fewer of them, due to less fragmentation. Total wait time is thus less.

I wonder how Linux handles writing FAT32.

NT
#148
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
wrote
> Vir Campestris wrote
>> The Windows guys chose to put all their files near the same end of the disk to reduce seeks when the disk is nearly empty. Linux has a different approach - they are scattered all over the place.
> Longer seeks but fewer of them, due to less fragmentation.

But when very few very large files are accessed serially now, except with media files where the access speed is entirely determined by the media play speed, it is very far from clear that less fragmentation actually matters much anymore. And since any modern system does a hell of a lot of accessing of all sorts of small files at quite a high rate, even if that is only the internet cache files and cookies etc, it makes a lot more sense to minimise the time that takes.

> Total wait time is thus less.

Not necessarily.

> I wonder how Linux handles writing FAT32.
#149
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 27/03/16 22:23, John Rumm wrote:
> On 26/03/2016 09:33, The Natural Philosopher wrote:
>> Once again the legacy of Windows - a single user system with its roots back in floppy disks - and Unix - a multi-user system designed to work with a very busy disk from the outset - show up.
> You do talk a load of crap at times... Windows 3.1, sure a shell on top of DOS. Win NT onward, basically a re-engineering of VMS. Hardly what anyone would describe as a single-user floppy-based system.

Why do you feel the need to pretend I wrote something I didn't, in order to score ego-points? I said that Windows had its *legacy* back in single-user, single-tasking, floppy-based operating systems. Not that it still was.

Of course Win NT, written in 1993, some 20!! YEARS after Unix was ALREADY a multi-user multi-tasking OS, had to try and drag Windows into the 20th century. But of course it still had to maintain backwards compatibility with older Windows programs, and it still relied on pretty GUIs to make administration accessible to the most complete moron of a user, and that resulted in exactly the sort of compromises I documented. Form over function, designed to sell rather than work.

And that is really the complete answer in a nutshell. *nix systems were designed to work in professional applications; Windows has always been first and foremost a consumer product, not an industrial one. It's tried and succeeded in getting sold into those markets, but that is more in spite of its engineering than because of it.

Rough diamond, or polished turd. Your choice.

--
If you tell a lie big enough and keep repeating it, people will eventually come to believe it. The lie can be maintained only for such time as the State can shield the people from the political, economic and/or military consequences of the lie. It thus becomes vitally important for the State to use all of its powers to repress dissent, for the truth is the mortal enemy of the lie, and thus by extension, the truth is the greatest enemy of the State.

Joseph Goebbels
#150
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 27/03/16 22:55, Vir Campestris wrote:
> On 26/03/2016 15:26, The Natural Philosopher wrote:
>> 8GB of RAM of which at the time of writing less than half a gig is 'free' and everything else is buffers, cache or in use...
> My Windows system is reporting 84Mb free out of 3GB. Most of the rest is cache.
>
> Andy

Well you only have 3GB. I've gotta bigger cache than you have!

--
Karl Marx said religion is the opium of the people. But Marxism is the crack cocaine.
#152
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
"Rod Speed" wrote in message ...
> wrote
>> Vir Campestris wrote
>>> The Windows guys chose to put all their files near the same end of the disk to reduce seeks when the disk is nearly empty. Linux has a different approach - they are scattered all over the place.
>> Longer seeks but fewer of them, due to less fragmentation.
> But when very few very large files are accessed serially now except with media files... [snip]
>> Total wait time is thus less.
> Not necessarily.

Ignore the prick FFS. Ohhh, I'm typing to the prick. But why am I killfiled?
#153
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On Sat, 26 Mar 2016 09:28:35 +0000, The Natural Philosopher wrote:
> On 25/03/16 21:20, Vir Campestris wrote:
>> On 24/03/2016 22:43, The Natural Philosopher wrote:
>>> Oh the joys of Linux, and no de fragging ever unless the disk is 100% full
>> I've heard this said, and I never can work out how. If I put 5000 files on my disk, and delete every alternate one, how can it not be fragmented?
> Well of course it is somewhat, but the point is that new files tend to be written in the middle of the biggest free space, depending on the actual disk format in use, so they tend to simply grow linearly. Fragmentation isn't a file in a random place, it's a file in dozens of random places, so to get the entire contents takes many seeks. http://www.howtogeek.com/115229/htg-...x-doesnt-need-defragmenting/

I was quite surprised at the explanation. The strategy of scattering files across the disk volume 'in order to allow them space into which to grow' seemed so bogus, seeing as how most files are edited by writing a complete new copy before renaming the original as a backup file, either temporarily simply to avoid the naming conflict or as a safety rollback copy.

This was actually the complete opposite of the scheme I designed for a solenoid operated data cassette deck, where I formatted each side of a C60 sized tape into 168 by 2048 byte blocks (a max formatted capacity of 328KB per side of each tape after using the middle two blocks for a duplicated directory and LNKTBL, an 8 bit version of what I later discovered was called a "File Allocation Table" (FAT) in MSDOS terminology). Here, the big idea was to preserve the larger sequences of free blocks as much as possible when writing new data to a part-used tape that had randomly scattered free space blocks as a result of previous file deletions. I didn't want to use the next available 3 block space just to save a 2 block file into without first searching the LNKTBL (FAT) for the existence of a conveniently 2 block sized chunk of space.

The tape, being effectively a single huge track of 168 blocks requiring fast forward/rewind operations, could well do without unnecessary fragmentation, particularly of the larger files (at 5 seconds to read an 8K chunk of tape and 14 seconds end to end search time from first to last block), so it was important to preserve the larger blocks of free space "for better things" than a mere 1 to 3 block chunk of data.

Since it's rather unusual for data processing software to write changes *directly* into the one and only copy of a working file (text editors, word processors and so on), risking all in the event of a system crash or a power outage, virtually every app wrote the changes as a completely new file (usually after renaming the original to avoid the naming conflict). That way, should the worst happen, you only risked losing the changes rather than the whole lot. The FAT was duplicated to avoid the same risk (I duplicated both the directory and the LNKTBL, which neatly fitted into just a single 2K block, hence the use of two of them in my tape filing system).

With my strategy of minimising fragmentation of free space being *such* a "No Brainer", I've always assumed MSFT's FAT based FSes used a similar strategy, BICBW. However, I suppose the extra breathing space at the end of each file's data block allocation can still prove useful even when the files aren't being directly edited but rather deleted after making sure the updated replacement was successfully written to disk, ensuring another larger chunk of free space would be available for yet a later 'edit', whether of the same file or a completely different one. On an HDD, there's no great detriment in that strategy, unlike on a linear tape where such a strategy would 'Suck Big Time'(tm).

Incidentally (aside from the use of SSDs), the key to minimising fragmentation induced performance loss in a MSFT FS is optimised partitioning of the HDD into 3 partition spaces (OS, apps and data partitions). For a win7 setup, you'd probably need a 40GB "drive C" for the OS and pagefile, perhaps another 30 or 40GB drive D for the common or garden apps, with the remaining 850/1780 GB space on your 1 or 2 TB drive allocated to a general purpose drive E data volume.

This splitting of an HDD's space avoids the OS file activities poisoning the other disk volumes with fragmentation activity (similarly, but to a much lesser degree, the apps volume), and it makes it impossible for windows to thinly spread its 60 odd thousand system files right across the whole of the disk platters to "Mix it" with the rest of your apps and data files (the "drive C" and "drive D" partitions become effectively short-stroked 40GB portions of a humongous 931 or 1862 GB HDD, reducing seek times whilst enjoying the fastest SDTR region of the disk).

On those rare occasions when you think it *just* might be worth spending a few minutes defragging drive volumes C and D, that's just about the size of the defrag job: minutes[1] rather than hours/overnight in the case of the classic lazy OEM *******'s trick of 'everything into a single huge disk partition'[2].

[1] In the case of the 8GB win2k partition on the 1TB drive, it literally was just a couple of minutes to completely defrag drive C (and similarly for the 20GB apps volume - less file churn for a kick off). I wasn't worried about fragmentation on the large data volumes which, after several months, could take anywhere from 1 to 3 hours. Most of the data were large GB sized media files which didn't suffer performance issues on playback or copying/moving, although intense video processing of a badly fragmented movie file would suffer a modest performance drop, largely mitigated by my arranging the source and destination folders to be on different physical disk drives to eliminate head contention, which neatly negated the worst effects of file fragmentation anyway.

[2] Even when the ******* OEMs seemingly started to split the huge disk drives into a couple of partition spaces (plus maintenance/repair and recovery partitions), they often used a ridiculously large drive C volume with a small 100 to 200GB drive D volume on a 1TB drive (or at best, a "50/50 split" - still too damn big to mitigate the "Fragmentation Hell" effect of lumping all the OS, apps and user data into a single 500GB disk volume).

--
Johnny B Good
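Johnny's rule of not burning a 3-block hole on a 2-block file is classic best-fit allocation. A minimal sketch of that search over a free-block table (hypothetical code; it models the idea, not the actual LNKTBL layout):

```python
def free_runs(table):
    """Yield (start, length) for each run of free (0) entries in the block table."""
    start = None
    for i, used in enumerate(list(table) + [1]):   # sentinel closes a trailing run
        if not used and start is None:
            start = i
        elif used and start is not None:
            yield (start, i - start)
            start = None

def best_fit(table, size):
    """Start of the smallest free run that fits `size` blocks, or None.

    Preferring the tightest fit preserves the big runs for big files.
    """
    candidates = [(length, start) for start, length in free_runs(table) if length >= size]
    if not candidates:
        return None
    length, start = min(candidates)
    return start

# 0 = free, 1 = used: holes of length 3 (blocks 1-3) and length 2 (blocks 7-8)
table = [1, 0, 0, 0, 1, 1, 1, 0, 0, 1]
best_fit(table, 2)   # picks the exact 2-block hole at block 7, not the 3-block one
```

Given holes of 3 and 2 blocks, a 2-block file goes into the exact-fit hole, leaving the 3-block run intact for larger files - precisely the free-space preservation the post describes.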
#154
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
"Johnny B Good" wrote in message
...
> On Sat, 26 Mar 2016 09:28:35 +0000, The Natural Philosopher wrote:
> [snip quoted thread]
> I was quite surprised at the explanation. The strategy of scattering files across the disk volume 'in order to allow them space into which to grow' seemed so bogus, seeing as how most files are edited by writing a complete new copy before renaming the original as a backup file, either temporarily simply to avoid the naming conflict or as a safety rollback copy.

That isn't true of log files or database files. So if you do want to minimise the fragmentation of files, that approach does make sense. On the other hand, you can make an even better case for minimising the head movement you get with the wealth of small files any modern system produces with internet caches and cookies etc, rather than minimising the movement of the heads between fragments, given that there isn't a lot of serial movement through the whole of very large files now except with media files, which don't matter if there are some extra head movements when playing them.

[snip tape filing system description]

> This splitting of an HDD's space avoids the OS file activities poisoning the other disk volumes with fragmentation activity.

In practice that doesn't happen with win7 and later even if you do use a single partition for everything.

> Similarly, but to a much lesser degree, the apps volume and it makes it impossible for windows to thinly spread its 60 odd thousand system files right across the whole of the disk platters to "Mix it" with the rest of your apps and data files

That doesn't happen with current versions of Win even if you do have just one partition for everything.

> (the "Drive C" and "Drive D" partitions becoming effectively short-stroked 40GB disk portions of a humongous 931 or 1862 GB HDD, reducing seek times whilst enjoying the fastest SDTR region of the disk)

Current Wins do that even with a single partition for everything. So you only get the long seeks when accessing app data files.

[snip defrag timings and OEM partitioning footnotes]
#155
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 28/03/2016 06:00, Johnny B Good wrote:
> On Sat, 26 Mar 2016 09:28:35 +0000, The Natural Philosopher wrote:
> [snip quoted thread]
> I was quite surprised at the explanation. The strategy of scattering files across the disk volume 'in order to allow them space into which to grow' seemed so bogus...
> [snip rest of quoted post]

Windows allows a certain amount of space for each new file created. I can't remember how much, and it might vary by version, but it has a setting - probably in the registry. If your files are always less than this size, and you have lots of space, you will get no fragmentation, even if the file starts very small and grows to this size.

If you defrag your drive, the files are put cheek by jowl against each other. Any growth of a file will result in fragmentation.

If you copy an existing file, Windows will attempt to place it in the smallest contiguous space large enough for the file. Simply copying a large, heavily fragmented file will defragment it if you have enough contiguous free space.

The files that fragment badly are those that start small, and grow allocation unit by allocation unit - interspersed with other files also being written. Classics are various log files.

Using an appropriate allocation unit size can have a huge impact. Few people ever seem to set up dedicated volumes with sensibly chosen AU sizes for particular purposes. Of course, managing multiple volumes can be a right pain...

--
Rod
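The allocation-unit trade-off in the last paragraph is easy to quantify: larger AUs reduce fragment counts but waste more slack space per file. A small sketch (illustrative sizes, not measurements):

```python
def slack(file_sizes, au):
    """Total bytes wasted when each file is rounded up to whole allocation units."""
    total = 0
    for size in file_sizes:
        units = -(-size // au)        # ceiling division
        total += units * au - size
    return total

files = [100, 5000, 70000]            # file sizes in bytes
small = slack(files, 4096)            # 4 KB clusters: ~11 KB of slack
big = slack(files, 65536)             # 64 KB clusters: ~183 KB of slack
```

Hence the point about dedicated volumes: a volume holding a few huge files can take a big AU cheaply, while one full of tiny files wants a small AU.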
#156
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On Monday, 28 March 2016 00:27:56 UTC+1, Rod Speed wrote:
tabbypurr wrote: Vir Campestris wrote: The Windows guys chose to put all their files near the same end of the disk to reduce seeks when the disk is nearly empty. Linux has a different approach - they are scattered all over the place. Longer seeks but less of them, due to less fragmentation. But when very few very large files are accessed serially now except with media files where the access speed is entirely determined by the media play speed, it is very far from clear that less fragmentation actually matters much anymore, and it is clear that any modern system does a hell of a lot more accessing of all sorts of files at quite a high rate, even if that is only the internet cache files and cookies etc etc etc, so it makes a lot more sense to minimise the time that takes. Total wait time is thus less. Not necessarily. The retard strikes again.
#157
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On Monday, 28 March 2016 01:07:49 UTC+1, The Natural Philosopher wrote:
On 27/03/16 23:49, tabbypurr wrote: On Sunday, 27 March 2016 22:57:37 UTC+1, Vir Campestris wrote: The Windows guys chose to put all their files near the same end of the disk to reduce seeks when the disk is nearly empty. Linux has a different approach - they are scattered all over the place. Longer seeks but less of them, due to less fragmentation. Total wait time is thus less. I wonder how linux handles writing FAT32. Perfectly well. Mangles the file names though an avoidant answer there |
#158
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
"polygonum" wrote in message ... On 28/03/2016 06:00, Johnny B Good wrote: On Sat, 26 Mar 2016 09:28:35 +0000, The Natural Philosopher wrote: On 25/03/16 21:20, Vir Campestris wrote: On 24/03/2016 22:43, The Natural Philosopher wrote: Oh the joys of Linux, and no defragging ever unless the disk is 100% full. I've heard this said, and I never can work out how. If I put 5000 files on my disk, and delete every alternate one, how can it not be fragmented? Well of course it is somewhat, but the point is that new files tend to be written in the middle of the biggest free space, depending on the actual disk format in use, so they tend to simply grow linearly. Fragmentation isn't a file in a random place, it's a file in dozens of random places, so getting the entire contents takes many seeks. http://www.howtogeek.com/115229/htg-...x-doesnt-need-defragmenting/ I was quite surprised at the explanation. The strategy of scattering files across the disk volume 'in order to allow them space into which to grow' seemed so bogus, seeing as how most files are edited by writing a complete new copy before renaming the original as a backup file, either temporarily simply to avoid the naming conflict or as a safety rollback copy. This was actually the complete opposite of the strategy I designed for a solenoid operated data cassette deck, where I formatted each side of a C60 sized tape into 168 by 2048 byte blocks (a max formatted capacity of 328KB per side of each tape after using the middle two blocks for a duplicated directory and LNKTBL (an 8 bit version of what I later discovered was called a "File Allocation Table" (FAT) in MSDOS terminology). Here, the big idea was to preserve the larger sequences of free blocks as much as possible when writing new data to a part used tape that had randomly scattered free space blocks as a result of previous file deletions.
I didn't want to use the next available 3 block space just to save a 2 block file into without searching the LNKTBL (FAT) for the existence of such a conveniently 2 block sized chunk of space. The tape, being effectively a single huge track of 168 blocks requiring fast forward/rewind operations could well do without unnecessary fragmentation, particularly of the larger files (at 5 seconds to read an 8K chunk of tape and 14 seconds end to end search time from first to last block) so it was important to preserve the larger blocks of free space "for better things" than a mere 1 to 3 block chunk of data. Since it's rather unusual for data processing software to write changes *directly* into the one and only copy of a working file (text editors, word processors and so on) risking all in the event of a system crash or a power outage, virtually every app wrote the changes as a completely new file (usually after renaming the original to avoid the naming conflict). That way, should the worst happen, you only risked losing the changes rather than the whole lot. The FAT was duplicated to avoid the same risk (I duplicated both the directory and the LNKTBL which neatly fitted into just a single 2K block, hence the use of two of them in my Tape filing system). With my strategy of minimising fragmentation of free space being *such* a "No Brainer", I've always assumed MSFT's FAT based FSes used a similar strategy, BICBW. However, I suppose the extra breathing space at the end of each file's data block allocation can still prove useful even when the files aren't being directly edited but rather deleted after making sure its updated replacement was successfully written to disk ensuring another larger chunk of free space would be available for yet an even later 'edit' whether of the same file or a completely different one. On an HDD, there's no great detriment in that strategy, unlike on a linear tape where such a strategy would 'Suck Big Time'(tm). 
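The best-fit search described above - hunting the LNKTBL for the smallest free run that still fits, rather than grabbing the first run big enough, so that the larger runs are preserved "for better things" - can be sketched like this. A toy model with invented names, not the actual tape filing system:

```python
# Best-fit contiguous allocation over a block table, as described for
# the 168-block tape: prefer the smallest free run that fits, so large
# free runs survive for large files.

def free_runs(table):
    """Yield (start, length) for each contiguous run of free blocks
    (None entries) in the block allocation table."""
    start = None
    for i, owner in enumerate(table):
        if owner is None and start is None:
            start = i
        elif owner is not None and start is not None:
            yield start, i - start
            start = None
    if start is not None:
        yield start, len(table) - start

def best_fit(table, nblocks):
    """Return the start block of the smallest free run >= nblocks,
    or None if no single contiguous run is big enough."""
    candidates = [(length, start) for start, length in free_runs(table)
                  if length >= nblocks]
    return min(candidates)[1] if candidates else None

# 13-block tape: 'X' = in use, None = free.
# Free runs: 3 blocks at 1, 2 blocks at 6, 4 blocks at 9.
table = ['X', None, None, None, 'X', 'X', None, None,
         'X', None, None, None, None]
print(best_fit(table, 2))   # 6 -- takes the 2-block run, sparing 3 and 4
print(best_fit(table, 4))   # 9
print(best_fit(table, 5))   # None -- would need fragmenting (or fail)
```

On a linear tape the payoff is avoiding long fast-forward/rewind seeks between fragments; on an HDD the same strategy mainly keeps large free runs available, as the post notes.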
Incidently (aside from the use of SSDs), the key to minimising fragmentation induced performance loss in a MSFT FS is optimised partitioning of the HDD into 3 partition spaces (OS, Apps and data partitions). [snip - quoted in full upthread] The files that fragment badly are those that start small, and grow allocation unit by allocation unit - interspersed with other files also being written. Classics are various log files.
And it doesn't really matter if those log files do get quite fragmented, because they are hardly ever read from end to end except when browsing them, when you're reading much more slowly than the file can be read anyway, so extra seeks between fragments don't matter at all. Using an appropriate allocation unit size can have a huge impact. Few people ever seem to set up dedicated volumes with sensibly chosen AU sizes for particular purposes. Of course, managing multiple volumes can be a right pain... -- Rod
#159
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
wrote
Rod Speed wrote: tabbypurr wrote: Vir Campestris wrote: The Windows guys chose to put all their files near the same end of the disk to reduce seeks when the disk is nearly empty. Linux has a different approach - they are scattered all over the place. Longer seeks but less of them, due to less fragmentation. But when very few very large files are accessed serially now except with media files where the access speed is entirely determined by the media play speed, it is very far from clear that less fragmentation actually matters much anymore and it is clear that any modern system does a hell of a lot more accessing all sorts of files at quite a high rate, even if that is only the internet cache files and cookies etc etc etc, it makes a lot more sense to minimise the time that happens. Total wait time is thus less. Not necessarily. The retard strikes again. You never could bull**** your way out of a wet paper bag.
#160
Posted to uk.d-i-y
Defraggin LInux (was should DIY be a green cause)
On 28/03/2016 10:02, 764hho wrote:
And it doesn't really matter if those log files do get quite fragmented, because they are hardly ever read from end to end except when browsing them, when you're reading much more slowly than the file can be read anyway, so extra seeks between fragments don't matter at all. Unfortunately, in my experience, it can matter. One particular application I used to deal with had such files, and they were regularly accessed. The difference achieved from a simple move of such a file was often very obvious to a user. It can also matter because, in time, as other files are created and deleted, the fragmented log file can mean that any free space is fragmented. One area of NTFS I have either forgotten or never read up on is how it knows where fragments of files reside. Based on another file system which I did know well, there was a file which contained lots of records, something like: File number ! Fragment number ! Starts at block ! For so many blocks. In that old system, locating a block required shuffling through this file and counting. The amount of work this required was very closely related to the number of fragments and hardly at all to the absolute size of the file. To find the last block of a severely fragmented file would require reading through lots of these small records. (Of course, some or all of this might be cached - though probably not then. This file would be a prime candidate for holding in memory.) -- Rod
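The "shuffle through the records and count" lookup described in that post can be sketched as follows. This is an illustration of the general extent-list idea (field names invented), not the actual format of NTFS or the older file system mentioned:

```python
# Resolving a file's logical block via an extent list: each record maps
# a run of logical blocks to a physical start block. The cost of the
# walk is proportional to the number of fragments, not the file size -
# which is why the last block of a severely fragmented file is slow to
# locate.

def logical_to_physical(extents, logical_block):
    """extents: list of (physical_start, length) pairs in logical
    order. Walk the fragments, counting logical blocks, until the
    extent covering logical_block is found."""
    offset = 0
    for physical_start, length in extents:
        if logical_block < offset + length:
            return physical_start + (logical_block - offset)
        offset += length
    raise ValueError("logical block beyond end of file")

# A 3-fragment file: logical blocks 0-9 at physical 500, 10-14 at 120,
# 15-19 at 900.
extents = [(500, 10), (120, 5), (900, 5)]
print(logical_to_physical(extents, 0))    # 500
print(logical_to_physical(extents, 12))   # 122
print(logical_to_physical(extents, 19))   # 904
```

A heavily fragmented file has thousands of such records, so even with the table cached in memory the walk gets longer with every fragment - consistent with the observable slowdown the post describes.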