UK diy (uk.d-i-y) For the discussion of all topics related to DIY (do-it-yourself) in the UK. All levels of experience and proficiency are welcome to join in to ask questions or offer solutions.

#161   Posted to uk.d-i-y
Posts: 9,369
UPS server wiring no-no



"The Natural Philosopher" wrote in message
...
Andy Champ wrote:
On 10/05/2012 16:32, The Natural Philosopher wrote:

Exactly so. Those of us who have worked on the DESIGN of such systems
know that it is impossible to solve the problem *entirely in software*.


This turns out not to be the case.


It turns out to be exactly the case.


So explain why a file system that writes its intent to a file and then
updates the file and then records that in the log doesn't work.
Like how, exactly, it corrupts the file system, let alone the disk, which
power failures don't corrupt in the first place.
You might design a disk system that doesn't detect a power failure and
corrupts itself, but nobody else does.
You can turn hard drives off as often as you like and they do not fail to
detect the power failing, nor do they corrupt themselves.
Once you know this it is easy to understand why power failures don't have
to corrupt the disk, or the file system, or even a correctly designed
application.
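
(For illustration, a minimal Python sketch of the intent-logging scheme
being described here. The names and the in-memory "disk" are invented for
the example, and a one-block write is taken to be atomic, as argued above;
this is a sketch of the idea, not any real file system's code.)

import json

LOG = []     # stands in for the on-disk log area
DATA = {}    # stands in for the on-disk data blocks

def write_block(block_no, payload):
    # 1. Record the intent in the log *before* touching the data area.
    LOG.append(json.dumps({"state": "intent", "block": block_no,
                           "payload": payload}))
    # 2. Do the data write itself. A power cut here leaves this block
    #    half-written, but the intent record tells the replay code
    #    exactly which block to check and redo.
    DATA[block_no] = payload
    # 3. Mark the transaction complete. A power cut before this point
    #    means the replay code simply redoes the write from the log.
    LOG.append(json.dumps({"state": "done", "block": block_no}))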

#162   Posted to uk.d-i-y
Posts: 1,146
UPS server wiring no-no


"dennis@home" wrote in message
...


"The Natural Philosopher" wrote in message
...

Dennis. You wouldn't understand the correct explanation so why not?

Why don't you stop wriggling and admit that you don't have a clue how file
systems or disks actually work.
Almost nothing you have written about file systems is actually true.


Hey Dennis, where does your experience of them originate?


#163   Posted to uk.d-i-y
Posts: 1,146
UPS server wiring no-no


"dennis@home" wrote in message
...


"The Natural Philosopher" wrote in message
...

Why don't you go and read up on how file systems like Veritas work and stop
wriggling!


Is that what you've just done, Den?


#164   Posted to uk.d-i-y
Posts: 3,819
UPS server wiring no-no

In message , brass
monkey writes

"dennis@home" wrote in message
...


"The Natural Philosopher" wrote in message
...

Dennis. You wouldn't understand the correct explanation so why not?

Why don't you stop wriggling and admit that you don't have a clue how file
systems or disks actually work.
Almost nothing you have written about file systems is actually true.


Hey Dennis, where does your experience of them originate?


Bunty


--
geoff
#165   Posted to uk.d-i-y
Posts: 39,563
UPS server wiring no-no

dennis@home wrote:


"The Natural Philosopher" wrote in message
...

Why don't you go and read up on how file systems like Veritas work and stop
wriggling!


I have, dennis.


--
To people who know nothing, anything is possible.
To people who know too much, it is a sad fact
that they know how little is really possible -
and how hard it is to achieve it.


#166   Posted to uk.d-i-y
Posts: 39,563
UPS server wiring no-no

dennis@home wrote:


"The Natural Philosopher" wrote in message
...

Dennis. You wouldn't understand the correct explanation so why not?

Why don't you stop wriggling and admit that you don't have a clue how
file systems or disks actually work.
Almost nothing you have written about file systems is actually true.


Oh dear. Well you had better tell that to the people whose documentation
I read, who wrote those file systems.

But thanks for coming out on the opposing side: that guarantees I am right.


--
To people who know nothing, anything is possible.
To people who know too much, it is a sad fact
that they know how little is really possible -
and how hard it is to achieve it.
#167   Posted to uk.d-i-y
Posts: 39,563
UPS server wiring no-no

brass monkey wrote:
"dennis@home" wrote in message
...

"The Natural Philosopher" wrote in message
...

Dennis. You wouldn't understand the correct explanation so why not?

Why don't you stop wriggling and admit that you don't have a clue how file
systems or disks actually work.
Almost nothing you have written about file systems is actually true.


Hey Dennis, where does your experience of them originate?


He was sired by a filing cabinet dontcha know.


--
To people who know nothing, anything is possible.
To people who know too much, it is a sad fact
that they know how little is really possible -
and how hard it is to achieve it.
#168   Posted to uk.d-i-y
Posts: 39,563
UPS server wiring no-no

dennis@home wrote:


"The Natural Philosopher" wrote in message
...
Andy Champ wrote:
On 10/05/2012 16:32, The Natural Philosopher wrote:

Exactly so. Those of us who have worked on the DESIGN of such systems
know that it is impossible to solve the problem *entirely in software*.

This turns out not to be the case.


It turns out to be exactly the case.


So explain why a file system that writes its intent to a file and then
updates the file and then records that in the log doesn't work.


I have explained all that, dennis.

I never said it 'didn't work', only that to be proof against power cuts it
needs a hardware solution in addition.

Which is why you see those systems entirely associated with HARDWARE
manufacturers.




--
To people who know nothing, anything is possible.
To people who know too much, it is a sad fact
that they know how little is really possible -
and how hard it is to achieve it.
#169   Posted to uk.d-i-y
Posts: 9,369
UPS server wiring no-no



"The Natural Philosopher" wrote in message
...
dennis@home wrote:


"The Natural Philosopher" wrote in message
...
Andy Champ wrote:
On 10/05/2012 16:32, The Natural Philosopher wrote:

Exactly so. Those of us who have worked on the DESIGN of such systems
know that it is impossible to solve the problem *entirely in
software*.

This turns out not to be the case.


It turns out to be exactly the case.


So explain why a file system that writes its intent to a file and then
updates the file and then records that in the log doesn't work.


I have explained all that, dennis.

I never said it 'didn't work', only that to be proof against power cuts it
needs a hardware solution in addition.

Which is why you see those systems entirely associated with HARDWARE
manufacturers.


There is no requirement for any extra hardware that isn't already in the
disk.
Are you really stupid or just pretending?

Journaling file systems work in a similar way to databases like Oracle, and
they do not require any extra hardware to ensure their integrity either.


#170   Posted to uk.d-i-y
Posts: 11,175
UPS server wiring no-no

In article ,
The Natural Philosopher writes:
'nt understand what it does at a brute hardware level enough for me to
realise that there's not much point in me actually trying to explain;
nevertheless I will (try)

1/. It is physically impossible for a disk to write to two sectors
simultaneously,


Correct.

so a data + metadata transaction is never actually
written as an atomic action.


Wrong (see below).

2/. ZFS simply writes new data to an unused sector and when it's
complete,


Correct.

it then updates the file metadata to point to it.


Wrong, it writes the metadata to a new block too, it never _updates_
anything. It only ever writes to free blocks.

3/. A crash during the new-sector write means 'file as it was' + a corrupted
sector no one will ever see, because it's still 'spare'.


Correct.

4/. A crash during a metadata update won't destroy the actual data, but
will totally **** up the entire file, IF it's beyond the checksumming to
sort out.


Wrong. The metadata is unchanged, in exactly the same way the file
is unchanged in the case above.

5/. IF and ONLY IF the correct write data is in NVRAM, because the disk
itself has that facility, then it will indeed be able to say 'ah well,
that's what that sector should have been' and correct it.


ZFS needs no NVRAM for correctness, so assume there isn't any,
and I'll skip past your following assumptions of how NVRAM works,
which are not correct...

6/. If it crashes while writing to NVRAM, with luck the whole transaction
will be lost entirely.

HOWEVER
(a) this merely preserves transactions as either fully complete, or not
complete. That's fine for a database app that uses the system in that
way. Not fine for systems that may use the file system in other ways.

(b) It CRITICALLY depends on the disk having NVRAM.

(c) and the controller accurately reporting the state of the disk.

So in the end it is down to HARDWARE to make sure the 'atomic
transaction' metaphor is actually preserved (and hardware is NEVER
directly accessed by the OS anyway, so the myth that 'hardware does what
you tell it' is just that, a myth. With LUCK it will do what you tell
it; sometimes it doesn't).





That's what it means to be a transactional filesystem.

No, it isn't. That's what the glossy sales brochures tell you.

You haven't understood at all.

To get to that level - and it is a good level - requires a bit more
than a random motherboard and disk controller coupled to random disks.

The disks have to be equipped with NVRAM, and the controller has to not
say 'done it' just because it's passed the data to the disk; it has to
relay a proper 'write completed' signal from the disk back to the OS.


OK, let's explain in more detail...

I think you do understand what a transactional filesystem is.
You do understand that it is physically impossible for a disk
to write to two sectors simultaneously (and that applies across
multiple disks in an array too). So I presume you can follow on
from that and understand that committing a transaction has to
depend on something atomic such as a single write, and that's
exactly how ZFS works.

As I said before, ZFS only ever writes to free disk space - it
never updates any block that's in use. So if an application does
update a block in a file, ZFS writes that updated block to a free
block on disk, and doesn't touch the original disk block that
file was using. Then the file metadata is updated, and exactly
the same happens - the metadata is written out to a free block,
and the original is untouched. A transaction is built up
entirely in free space which contains all the changes which
need to be applied to the disk. All the changes associated with
any one operation (a write to a file, a file create, a file
delete, a file rename, etc) are contained in the same transaction.
Any power loss (or other cause of unexpected interruption to the
system) will result in the filesystem appearing to be unchanged,
as this is all still just in free blocks, and so the filesystem
is still completely consistent. When the disk(s) have written back
all these changed blocks, ZFS finally writes back the uberblock
(its name for the superblock) which points to the new metadata,
and hence all the updates. This is an atomic single sector write,
which thus commits the transaction. It either happens, or it
doesn't happen. It can't half happen, so there's no window when
the filesystem is inconsistent - it's always consistent on disk,
so it's always safe from ZFS's point of view to power off the
system at any time.

I've skipped a ton of detail (I usually teach this as a half
day class), but that should be enough to understand the main
principles.
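
(A toy Python sketch of the copy-on-write scheme just described - names
invented, and a single 'sector' write taken to be atomic as per the
explanation; not actual ZFS code.)

disk = {}          # block address -> contents
next_free = [1]    # naive bump allocator for free blocks
uberblocks = {}    # slot -> (txg, root_address), written round-robin

def alloc_write(contents):
    # Writes only ever go to a free block; nothing in use is overwritten.
    addr = next_free[0]
    next_free[0] += 1
    disk[addr] = contents
    return addr

def commit(new_root_tree, txg):
    # Build the whole transaction in free space first...
    root_addr = alloc_write(new_root_tree)
    # ...then commit it with one atomic write: a new uberblock in a
    # different slot from the previous one, so the old uberblock (and
    # hence the old consistent state) is never touched. Until this
    # line runs, a power cut leaves the filesystem looking unchanged.
    uberblocks[txg % 2] = (txg, root_addr)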

And NONE of this will help an app that is doing something in a
non-transactional way either.


I already said that. If someone writes an app wrongly, all bets
are off. In the Enterprise space (where I work), most developers
know how to do this right though - it's key to large sectors of
industry. However, even if the developer did it all wrong, you
still won't ever get a corrupted filesystem.

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]


#171   Posted to uk.d-i-y
Posts: 39,563
UPS server wiring no-no

Andrew Gabriel wrote:
In article ,
The Natural Philosopher writes:
'nt understand what it does at a brute hardware level enough for me to
realise that there's not much point in me actually trying to explain;
nevertheless I will (try)

1/. It is physically impossible for a disk to write to two sectors
simultaneously,


Correct.

so a data + metadata transaction is never actually
written as an atomic action.


Wrong (see below).

2/. ZFS simply writes new data to an unused sector and when it's
complete,


Correct.

it then updates the file metadata to point to it.


Wrong, it writes the metadata to a new block too, it never _updates_
anything. It only ever writes to free blocks.

If you think about that you will realise it's actually ********. At some
point it has to have a start for its B-tree, if that is what it uses.

That has to be updated. Otherwise it can't ever find out where to find
the metadata.


3/. A crash during the new-sector write means 'file as it was' + a corrupted
sector no one will ever see, because it's still 'spare'.


Correct.

4/. A crash during a metadata update won't destroy the actual data, but
will totally **** up the entire file, IF it's beyond the checksumming to
sort out.


Wrong. The metadata is unchanged, in exactly the same way the file
is unchanged in the case above.


Correct - see above. At some level it has to know where the new blocks
are; that has to be in a fixed-location table.

5/. IF and ONLY IF the correct write data is in NVRAM, because the disk
itself has that facility, then it will indeed be able to say 'ah well,
that's what that sector should have been' and correct it.


ZFS needs no NVRAM for correctness, so assume there isn't any,
and I'll skip past your following assumptions of how NVRAM works,
which are not correct...

6/. If it crashes while writing to NVRAM, with luck the whole transaction
will be lost entirely.

HOWEVER
(a) this merely preserves transactions as either fully complete, or not
complete. That's fine for a database app that uses the system in that
way. Not fine for systems that may use the file system in other ways.

(b) It CRITICALLY depends on the disk having NVRAM.

(c) and the controller accurately reporting the state of the disk.

So in the end it is down to HARDWARE to make sure the 'atomic
transaction' metaphor is actually preserved (and hardware is NEVER
directly accessed by the OS anyway, so the myth that 'hardware does what
you tell it' is just that, a myth. With LUCK it will do what you tell
it; sometimes it doesn't).





That's what it means to be a transactional filesystem.

No, it isn't. That's what the glossy sales brochures tell you.

You haven't understood at all.

To get to that level - and it is a good level - requires a bit more
than a random motherboard and disk controller coupled to random disks.

The disks have to be equipped with NVRAM, and the controller has to not
say 'done it' just because it's passed the data to the disk; it has to
relay a proper 'write completed' signal from the disk back to the OS.


OK, let's explain in more detail...

I think you do understand what a transactional filesystem is.
You do understand that it is physically impossible for a disk
to write to two sectors simultaneously


That is the point.

(and that applies across
multiple disks in an array too). So I presume you can follow on
from that and understand that committing a transaction has to
depend on something atomic such as a single write, and that's
exactly how ZFS works.


But it can't be an atomic write at the physical disk level. It has to be
a series of writes.



As I said before, ZFS only ever writes to free disk space - it
never updates any block that's in use. So if an application does
update a block in a file, ZFS writes that updated block to a free
block on disk, and doesn't touch the original disk block that
file was using. Then the file metadata is updated, and exactly
the same happens - the metadata is written out to a free block,
and the original is untouched. A transaction is built up
entirely in free space which contains all the changes which
need to be applied to the disk. All the changes associated with
any one operation (a write to a file, a file create, a file
delete, a file rename, etc) are contained in the same transaction.
Any power loss (or other cause of unexpected interruption to the
system) will result in the filesystem appearing to be unchanged,
as this is all still just in free blocks, and so the filesystem
is still completely consistent. When the disk(s) have written back
all these changed blocks, ZFS finally writes back the uberblock
(its name for the superblock) which points to the new metadata,
and hence all the updates. This is an atomic single sector write,
which thus commits the transaction. It either happens, or it
doesn't happen. It can't half happen, so there's no window when
the filesystem is inconsistent - it's always consistent on disk,
so it's always safe from ZFS's point of view to power off the
system at any time.


But if you are using new blocks *entirely*, how do you know where they are?

"ZFS finally writes back the uberblock(its name for the superblock)!"

So it DIES update at least ONE block. It has to.


I understand what you are saying, but there has to be at some point an
UPDATE of that master block to reflect the fact that the file is using
different block locations: that is *vulnerable*.

Otherwise when the file system starts how does ZFS know where to start
looking for its files and directories?

I agree it is a good system and it reduces the probability of errors
considerably, but it cannot eliminate them. Not without hardware assistance.

The ONLY way round that one is to e.g. hold the uberblock in NVRAM




I've skipped a ton of detail (I usually teach this as a half
day class), but that should be enough to understand the main
principles.

And NONE of this will help an app that is doing something in a
non-transactional way either.


I already said that. If someone writes an app wrongly, all bets
are off. In the Enterprise space (where I work), most developers
know how to do this right though - it's key to large sectors of
industry. However, even if the developer did it all wrong, you
still won't ever get a corrupted filesystem.



--
To people who know nothing, anything is possible.
To people who know too much, it is a sad fact
that they know how little is really possible -
and how hard it is to achieve it.
#172   Posted to uk.d-i-y
Posts: 11,175
UPS server wiring no-no

In article ,
The Natural Philosopher writes:
Andrew Gabriel wrote:
In article ,
The Natural Philosopher writes:
'nt understand what it does at a brute hardware level enough for me to
realise that there's not much point in me actually trying to explain;
nevertheless I will (try)

1/. It is physically impossible for a disk to write to two sectors
simultaneously,


Correct.

so a data + metadata transaction is never actually
written as an atomic action.


Wrong (see below).

2/. ZFS simply writes new data to an unused sector and when it's
complete,


Correct.

it then updates the file metadata to point to it.


Wrong, it writes the metadata to a new block too, it never _updates_
anything. It only ever writes to free blocks.

If you think about that you will realise it's actually ********. At some
point it has to have a start for its B-tree, if that is what it uses.

That has to be updated. Otherwise it can't ever find out where to find
the metadata.


Yes, exactly.
That's updated with every transaction commit.
That's what the new uberblock points to.

3/. A crash during the new-sector write means 'file as it was' + a corrupted
sector no one will ever see, because it's still 'spare'.


Correct.

4/. A crash during a metadata update won't destroy the actual data, but
will totally **** up the entire file, IF it's beyond the checksumming to
sort out.


Wrong. The metadata is unchanged, in exactly the same way the file
is unchanged in the case above.


Correct - see above. At some level it has to know where the new blocks
are,


That's part of the metadata.

that has to be in a fixed location table.


Why?
You have to know how to find it. It doesn't have to be in a fixed place
(and it isn't).

5/. IF and ONLY IF the correct write data is in NVRAM, because the disk
itself has that facility, then it will indeed be able to say 'ah well,
that's what that sector should have been' and correct it.


ZFS needs no NVRAM for correctness, so assume there isn't any,
and I'll skip past your following assumptions of how NVRAM works,
which are not correct...

6/. If it crashes while writing to NVRAM, with luck the whole transaction
will be lost entirely.

HOWEVER
(a) this merely preserves transactions as either fully complete, or not
complete. That's fine for a database app that uses the system in that
way. Not fine for systems that may use the file system in other ways.

(b) It CRITICALLY depends on the disk having NVRAM.

(c) and the controller accurately reporting the state of the disk.

So in the end it is down to HARDWARE to make sure the 'atomic
transaction' metaphor is actually preserved (and hardware is NEVER
directly accessed by the OS anyway, so the myth that 'hardware does what
you tell it' is just that, a myth. With LUCK it will do what you tell
it; sometimes it doesn't).





That's what it means to be a transactional filesystem.

No, it isn't. That's what the glossy sales brochures tell you.

You haven't understood at all.

To get to that level - and it is a good level - requires a bit more
than a random motherboard and disk controller coupled to random disks.

The disks have to be equipped with NVRAM, and the controller has to not
say 'done it' just because it's passed the data to the disk; it has to
relay a proper 'write completed' signal from the disk back to the OS.


OK, let's explain in more detail...

I think you do understand what a transactional filesystem is.
You do understand that it is physically impossible for a disk
to write to two sectors simultaneously


That is the point.

(and that applies across
multiple disks in an array too). So I presume you can follow on
from that and understand that committing a transaction has to
depend on something atomic such as a single write, and that's
exactly how ZFS works.


But it can't be an atomic write at the physical disk level. It has to be
a series of writes.


The transaction commit is a single write, as I described below;
writing out a new uberblock.

As I said before, ZFS only ever writes to free disk space - it
never updates any block that's in use. So if an application does
update a block in a file, ZFS writes that updated block to a free
block on disk, and doesn't touch the original disk block that
file was using. Then the file metadata is updated, and exactly
the same happens - the metadata is written out to a free block,
and the original is untouched. A transaction is built up
entirely in free space which contains all the changes which
need to be applied to the disk. All the changes associated with
any one operation (a write to a file, a file create, a file
delete, a file rename, etc) are contained in the same transaction.
Any power loss (or other cause of unexpected interruption to the
system) will result in the filesystem appearing to be unchanged,
as this is all still just in free blocks, and so the filesystem
is still completely consistent. When the disk(s) have written back
all these changed blocks, ZFS finally writes back the uberblock
(its name for the superblock) which points to the new metadata,
and hence all the updates. This is an atomic single sector write,
which thus commits the transaction. It either happens, or it
doesn't happen. It can't half happen, so there's no window when
the filesystem is inconsistent - it's always consistent on disk,
so it's always safe from ZFS's point of view to power off the
system at any time.


But if you are using new blocks *entirely*, how do you know where they are?

"ZFS finally writes back the uberblock(its name for the superblock)!"

So it DIES update at least ONE block. It has to.


No - remember, it never overwrites a block that's in use, and that
includes the uberblock. It writes a new one in a different place.

(Uberblock updates are implemented differently from other block
updates, but at this level, exactly the same principles apply.)

I understand what you are saying, but there has to be at some point an
UPDATE of that master block to reflect the fact that the file is using
different block locations: that is *vulnerable*.


*has to be vulnerable*? No it isn't.

Otherwise when the file system starts how does ZFS know where to start
looking for its files and directories?


It looks for the most recent uberblock, which points to the most recent
metadata block tree, which is the most recent transaction on the disk.
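
(A hypothetical Python sketch of that mount-time search - the slot layout
and CRC here are made up for the example, not ZFS's actual on-disk format:)

import zlib

def pick_uberblock(slots):
    # Each slot is (txg, root_addr, checksum). A slot whose checksum
    # doesn't verify - e.g. a write torn by a power cut - is ignored,
    # which leaves the previous commit in force.
    best = None
    for txg, root_addr, checksum in slots:
        if zlib.crc32(repr((txg, root_addr)).encode()) != checksum:
            continue
        if best is None or txg > best[0]:
            best = (txg, root_addr)
    return best    # the newest valid commit, or None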

I agree it is a good system and it reduces the probability of errors
considerably, but it cannot eliminate them. Not without hardware assistance.


It's specifically designed not to need hardware assist - it's designed
to work with cheap disks, so you can build Enterprise grade storage
cheaply.

The ONLY way round that one is to e.g. hold the uberblock in NVRAM


You haven't said why you think that, but it's not the case.

I think you accept that a single sector write is atomic.
I hope you can see how the single sector write commits the transaction.
That makes the whole transaction atomic.

I've skipped a ton of detail (I usually teach this as a half
day class), but that should be enough to understand the main
principles.

And NONE of this will help an app that is doing something in a
non-transactional way either.


I already said that. If someone writes an app wrongly, all bets
are off. In the Enterprise space (where I work), most developers
know how to do this right though - it's key to large sectors of
industry. However, even if the developer did it all wrong, you
still won't ever get a corrupted filesystem.


--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#173   Posted to uk.d-i-y
Posts: 2,580
UPS server wiring no-no

On 11/05/2012 15:54, Andrew Gabriel wrote:

The ONLY way round that one is to e.g. hold the uberblock in NVRAM


You haven't said why you think that, but it's not the case.


I'll take the blame for confusing him with NVRAM. It's how you make
things go fast and still cope with power loss, but obviously that's not
relevant to what you're talking about.

#174   Posted to uk.d-i-y
Posts: 9,369
UPS server wiring no-no



"The Natural Philosopher" wrote in message
...

I agree it is a good system and it reduces the probability of errors
considerably, but it cannot eliminate them. Not without hardware
assistance.

The ONLY way round that one is to e.g. hold the uberblock in NVRAM


Totally untrue.
When you issue a write command to a disk, it will either complete it
successfully or leave its sectors unchanged in the event of a power failure.
Once you work this out you will realise how it works.

So yes, the hardware is designed to cope with a power failure, the hardware
being the disk.

If you didn't design the disk to work this way you would get thousands of
corruptions every time you power it off.
It is plain to see that this does not happen: the disks detect the power
loss and complete any write to the platter that is in progress, then they
shut down.


So now you can write the data and metadata to the log on the disk, then
write the data etc. to the disk proper, then mark it finished in the log
after you have updated the "uberblocks". On a restart after a power
failure you just replay the logs to ensure everything is OK.

If you don't worry about the data you can just write a list of the inodes
you are writing and do a quick rebuild on startup after a power failure.

#175   Posted to uk.d-i-y
Posts: 39,563
UPS server wiring no-no

dennis@home wrote:


"The Natural Philosopher" wrote in message
...

I agree it is a good system and it reduces the probability of errors
considerably, but it cannot eliminate them. Not without hardware
assistance.

The ONLY way round that one is to e.g. hold the uberblock in NVRAM


Totally untrue.
When you issue a write command to a disk, it will either complete it
successfully or leave its sectors unchanged in the event of a power failure.


No it won't.


It takes some microseconds to write. If a power failure happens then, it's
a toasted sector.

plonker

Once you work this out you will realise how it works.


Once you realise you are as usual talking through your arse, you will
realise that most of what you think is about as useful as a stale fart.

So yes, the hardware is designed to cope with a power failure, the hardware
being the disk.


Except by and large they aren't.


If you didn't design the disk to work this way you would get thousands
of corruptions every time you power it off.


No you wouldn't. *Most* disks spend a vanishingly small amount of time
WRITING.
And that's why a normal shutdown syncs the disks, flushes the caches and
ceases all disk activity before halting the processor, and why most
modern PCs do NOT have a true on/off switch. Too many people like you
****ed their disks.



It is plain to see that this does not happen: the disks detect the
power loss and complete any write to the platter that is in progress,
then they shut down.


No they don't.

Some advanced ones may, but the normal run-of-the-mill disk doesn't.


So now you can write the data and metadata to the log on the disk, then
write the data etc. to the disk proper, then mark it finished in the log
after you have updated the "uberblocks". On a restart after a power
failure you just replay the logs to ensure everything is OK.

If you don't worry about the data you can just write a list of the inodes
you are writing and do a quick rebuild on startup after a power failure.


I love the way you just make stuff up to avoid looking like a total
plonker, and then end up looking even more like one.


--
To people who know nothing, anything is possible.
To people who know too much, it is a sad fact
that they know how little is really possible -
and how hard it is to achieve it.


#176   Posted to uk.d-i-y
Posts: 2,076
UPS server wiring no-no

On Fri, 11 May 2012 20:08:11 +0100, The Natural Philosopher wrote:

dennis@home wrote:


"The Natural Philosopher" wrote in message
...

I agree it is a good system and it reduces the probability of errors
considerably, but it cannot eliminate them. Not without hardware
assistance.

The ONLY way round that one is to e.g. hold the uberblock in NVRAM


Totally untrue.
When you issue a write command to a disk, it will either complete it
successfully or leave its sectors unchanged in the event of a power failure.


No it won't.


It takes some microseconds to write. If a power failure happens then, it's
a toasted sector.


Never heard of capacitors?

--
Use the BIG mirror service in the UK:
http://www.mirrorservice.org

*lightning protection* - a w_tom conductor
#177   Posted to uk.d-i-y
Posts: 39,563
UPS server wiring no-no

Bob Eager wrote:
On Fri, 11 May 2012 20:08:11 +0100, The Natural Philosopher wrote:

dennis@home wrote:

"The Natural Philosopher" wrote in message
...

I agree it is a good system and it reduces the probability of errors
considerably, but it cannot eliminate them. Not without hardware
assistance.

The ONLY way round that one is to e.g. hold the uberblock in NVRAM
Totally untrue.
When you issue a write command to a disk, it will either complete it
successfully or leave its sectors unchanged in the event of a power failure.

No it won't.


It takes some microseconds to write. If a power failure happens then, it's
a toasted sector.


Never heard of capacitors?

Never heard of a capacitor discharging?


--
To people who know nothing, anything is possible.
To people who know too much, it is a sad fact
that they know how little is really possible -
and how hard it is to achieve it.
#178   Posted to uk.d-i-y
Posts: 9,369
UPS server wiring no-no



"The Natural Philosopher" wrote in message
...
dennis@home wrote:


"The Natural Philosopher" wrote in message
...

I agree it is a good system and it reduces the probability of errors
considerably, but it cannot eliminate them. Not without hardware
assistance.

The ONLY way round that one is to e.g. hold the uberblock in NVRAM


Totally untrue.
When you issue a write command to a disk, it will either complete it
successfully or leave its sectors unchanged in the event of a power failure.


No it won't.


It takes some microseconds to write. If a power failure happens then, it's
a toasted sector.

plonker


There is no point in trying to discuss this with you; you can't even get the
basics correct.
Power failures do not corrupt disk sectors. Until you get this through your
skull, everything else about file systems is pointless.



#179   Posted to uk.d-i-y
Posts: 11,175
UPS server wiring no-no

In article ,
The Natural Philosopher writes:
dennis@home wrote:


"The Natural Philosopher" wrote in message
...

I agree it is a good system and it reduces the probability of errors
considerably, but it cannot eliminate them. Not without hardware
assistance.

The ONLY way round that one is to e.g. hold the uberblock in NVRAM


Totally untrue.
When you issue a write command to a disk, it will either complete it
successfully or leave its sectors unchanged in the event of a power failure.


No it won't.


It takes some microseconds to write. If a power failure happens then, it's
a toasted sector.


Disks are designed to get a whole sector written, even with a
power failure at the beginning of the write. Some disks can even
get their whole write cache flushed to disk after the power
fails.

However, ZFS makes no assumption that either of these happen.
ZFS survives without corruption even if the write of the new
uberblock leaves the sector trashed, because it is not writing
it to the sector where the previous uberblock is stored, and
that still remains safe. (There are also other measures it takes,
so even if the previous uberblock is trashed too, it can still
get that transaction back, as applications have been assured
that data is safely committed to disk.)
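
(A toy demonstration of that point, reusing the invented CRC scheme from
the sketch earlier in the thread: a torn write of the newest uberblock
fails verification, and recovery falls back to the previous commit:)

import zlib

def cksum(txg, root):
    return zlib.crc32(repr((txg, root)).encode())

# Two uberblock slots written round-robin. txg 7 committed cleanly;
# the txg 8 commit was torn by a power cut, so its checksum is garbage.
slots = [
    (7, 101, cksum(7, 101)),   # previous commit, untouched on disk
    (8, 202, 0xDEADBEEF),      # torn write: checksum won't verify
]

valid = [(txg, root) for txg, root, c in slots if cksum(txg, root) == c]
print(max(valid))              # -> (7, 101): the old consistent state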

--
Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
#180   Posted to uk.d-i-y
Posts: 2,397
UPS server wiring no-no

On 10/05/2012 21:36, The Natural Philosopher wrote:
Andy Champ wrote:
On 10/05/2012 16:32, The Natural Philosopher wrote:

Exactly so. Those of us who have worked on the DESIGN of such systems
know that it is impossible to solve the problem *entirely in software*.


This turns out not to be the case.

Just so as I know to avoid them, which systems did you work on the
design of?


Modus minicomputer


Fortunately that won't be hard to avoid. I've never heard of it, and
nor has Google.

Andy