Thread: OT Fahrenheit
Posted to alt.engineering.electrical,alt.home.repair,rec.food.cooking

In article Tsn6h.1215$8u1.207@trndny04, says...
krw wrote:
In article ,
says...

On Mon, 13 Nov 2006 14:19:07 -0500, krw wrote:


Mainframes are *not* specified for office environment (rather
"Class A") though. There is a difference between a "departmental
server" and a data center mainframe.

I am not sure what machines you are talking about but 4300s and AS400s
were office space rated. These were around before most people had ever
heard of a server or a LAN.


Ok, let me try again, slower. AS/400 and 4300s are/were what we
now call "departmental servers". /370, ES/9000s were relegated to
data centers and are rated for a "class-A" environment only. Note
that "office space" rating isn't exactly harsh either.


You guys are in semi-violent agreement.
Keith's first response was:
"Not true at all. A high RH contributes to failures in electronics
as well. Even recent equipment is specified from 40-60% RH, over a
fairly narrow temperature range."

I call the "not true at all" part complete bull****.
What Greg said was 100% true. And the gratuitous
"let me try again, slower" is another detractor.

Bottom line: human comfort and "equipment comfort"
are roughly the same, with the "equipment comfort"
range being wider than the human comfort range.
Think about it - humans operate the equipment, and
would not be willing to work in the thousands upon
thousands of "normal" datacenters if the machinery
could not function in office-like temperature and
humidity. (Sorry - if you're in the military, you
work where they tell you - but even then, if it's
in a datacenter, it's likely to be comfortable.)
In fact, humans usually get uncomfortable outside
the 68-72 range, on average. Datacenter machinery
functions well outside of that range. The farther
you depart from that 68-72, the more extensive the
steps a human needs to take. Machines can't take
those steps, so they will fail when the conditions
are too far from nominal. What would be interesting
is some real discussion of the specific numbers.

I'll give you five examples:
1) Peat Marwick Mitchell datacenter, early 70's: an air
conditioner failure caused DASD (2314) data errors at exactly
94 degrees on their wall thermometer. It ran fine at 93.
2) Manufacturers Hanover Trust datacenter began losing
equipment (powering down) when the temperature went above 90
during a blackout (early 80's). They had emergency power to
keep the data processing equipment running, but nothing to
power the air conditioners.
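For what it's worth, the "equipment band is wider than the human band" point is easy to sketch numerically. The bands below are assumptions for illustration only: the 40-60% RH figure comes from Keith's quote above, but the machine band is a guess standing in for a real vendor spec, not one.

```python
# Hypothetical comfort bands (assumed for illustration, not vendor specs).
HUMAN_TEMP_F = (68, 72)     # the human comfort range mentioned above
HUMAN_RH_PCT = (40, 60)     # the 40-60% RH figure from Keith's quote
MACHINE_TEMP_F = (60, 90)   # assumed wider equipment tolerance
MACHINE_RH_PCT = (20, 80)   # assumed wider equipment tolerance

def in_range(value, lo_hi):
    lo, hi = lo_hi
    return lo <= value <= hi

def room_status(temp_f, rh_pct):
    """Classify a room reading against both comfort bands."""
    human_ok = in_range(temp_f, HUMAN_TEMP_F) and in_range(rh_pct, HUMAN_RH_PCT)
    machine_ok = in_range(temp_f, MACHINE_TEMP_F) and in_range(rh_pct, MACHINE_RH_PCT)
    if human_ok:
        return "comfortable for humans and machines"
    if machine_ok:
        return "humans unhappy, machines fine"
    return "out of spec for the machines too"

print(room_status(70, 50))   # inside both bands
print(room_status(85, 50))   # humans sweating, machines still fine
print(room_status(94, 50))   # past the (assumed) machine limit
```

With these made-up bands, 94 degrees lands outside the machine limit, which at least matches the Peat Marwick story above, where errors started at exactly 94.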


Ah - we run something comparatively smaller in our office, with a pretty
even mix of *nix and Windows servers. In total there are roughly 50
servers.

The room is supplied with power by an APC Symmetra that gives us
nominally 15 minutes of backup power. That Symmetra also has an
emergency kill switch, and it's wired into the fire alarm system so
that when the sprinklers go off, all power to the room is cut.

The Symmetra also powers the cubes in the IT space. Right now we get 40
minutes out of it, but that's only because two of our employees
like to have their heaters going full tilt. Otherwise it's over an hour.
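The way the heaters eat into that runtime can be back-of-the-envelope'd. All the numbers below are made up to roughly match the 40-minutes-with-heaters / hour-without figures above; the Symmetra's actual watt-hour capacity isn't given here, and real UPSes also derate nonlinearly at high load, which this ignores.

```python
# Toy UPS runtime model: runtime (min) = usable energy / load.
# Capacity and loads are assumptions chosen to mimic the figures above.
USABLE_WH = 6000.0   # assumed usable watt-hours in the battery string

def runtime_minutes(load_watts):
    """Ideal (linear) runtime estimate for a given load."""
    return USABLE_WH / load_watts * 60

base_load = 9000.0        # assumed room load with the heaters running
heaters = 2 * 1500.0      # two space heaters at full tilt

print(round(runtime_minutes(base_load)))            # with heaters: 40 min
print(round(runtime_minutes(base_load - heaters)))  # without: 60 min
```

Two 1.5 kW heaters are a third of the assumed load here, which is why switching them off buys a disproportionate chunk of runtime.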

Overhead lighting and air conditioning are not on the UPS. However,
there is a 125 kW natural-gas-fired generator out back that backs up
the UPS and also supplies power not only to the overheads but to the
HVAC system. We even ran a line out to the MDF in the building so Cox
could take advantage of our generator in the event of a building-wide
power failure. We weren't being altruistic; we just wanted to make sure
our network connection stays up.

We also do quarterly tests of the power system, and the system is set
to do regular exercise runs on the generator.

That data center was my baby. And the redundancy built in shows it.