[Discuss] Server Room Power

Jack Coats jack at coats.org
Wed Oct 12 16:40:07 EDT 2011


Back in the days of mainframes, 208V three-phase was a normal choice,
especially for large disk drives (large in capacity back then; today
they would just be large physically).

If you are doing a mainframe data center, yes, 208V three-phase is
still a reasonable standard, and APCC sells UPSes that deliver it (as
do several other vendors).
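
As a rough illustration of why three-phase circuits are attractive for
big loads, here is a back-of-envelope comparison (the circuit sizes,
power factor, and 80% continuous derating are my own illustrative
assumptions, not from any particular installation):

    # Rough usable power per circuit (illustrative numbers only).
    import math

    def single_phase_watts(volts, amps, power_factor=1.0, derate=0.8):
        # 80% continuous-load derating assumed
        return volts * amps * power_factor * derate

    def three_phase_watts(volts_line_to_line, amps, power_factor=1.0, derate=0.8):
        # P = sqrt(3) * V_LL * I * PF for a balanced three-phase load
        return math.sqrt(3) * volts_line_to_line * amps * power_factor * derate

    print(single_phase_watts(120, 20))   # ~1920 W on a 120V/20A branch
    print(three_phase_watts(208, 30))    # ~8646 W on a 208V/30A three-phase branch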

If you are doing a telco-type data center, 48VDC is the norm.  For
years Sun sold servers with power supplies built especially for the
telcos.  This is because the telcos had an SLA to provide service for
at least 8 hours after power fails, and the most reliable way to do
that is to run off of batteries all the time, so the battery bank was
sized appropriately.  Western Electric built some awesome 48VDC
equipment.
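
The sizing math for that kind of battery bank is simple; here is a
sketch (the load, depth-of-discharge, and efficiency numbers are
made-up assumptions for illustration):

    # Back-of-envelope 48VDC battery bank sizing for an 8-hour runtime SLA.
    def battery_bank_amp_hours(load_watts, hours, bus_volts=48.0,
                               depth_of_discharge=0.8, efficiency=0.9):
        # Ah needed = load current * hours, padded for usable DoD and losses
        load_amps = load_watts / bus_volts
        return load_amps * hours / (depth_of_discharge * efficiency)

    # Example: holding a 5 kW telco load for 8 hours
    print(round(battery_bank_amp_hours(5000, 8)))   # roughly 1157 Ah at 48V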

Other companies did telco-rated equipment too.  It was priced more
like mil-spec equipment because it was considered 'specialty', and it
was normally a higher-reliability design.

When I helped build a small 20k sqft data center, we put in three 1MW
diesel generators.  They fed power to an automatic transfer switch
that then ran power into two 1MW APCC UPSes.  The UPSes fed two
separate power buses (120V single-phase, one bus from each UPS).  We
installed two power drops above each rack, 20A to each, with
twist-lock plugs.  From the twist-lock plugs we ran large power
strips, two per rack.
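
A quick sketch of the per-rack power budget that layout implies (the
80% continuous derating is my assumption; your electrician and local
code may say otherwise):

    # Usable power per rack from two 120V/20A drops, one on each UPS bus.
    def drop_watts(volts=120, amps=20, derate=0.8):
        return volts * amps * derate      # ~1920 W continuous per drop

    def rack_budget_watts(drops_per_rack=2):
        # With dual-corded gear, either bus should be able to carry the
        # whole rack alone, so the safe planning number is one drop,
        # not the sum of both.
        per_drop = drop_watts()
        return {"per_drop": per_drop,
                "plan_to": per_drop,                   # survive loss of one bus
                "absolute_max": per_drop * drops_per_rack}

    print(rack_budget_watts())
    # {'per_drop': 1920.0, 'plan_to': 1920.0, 'absolute_max': 3840.0}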

For computers with dual power supplies, we plugged one supply into
each of the two separate power strips.  For ones that had only one
power supply, we found transfer-switch devices that would plug into
both strips and 'fail over' if their primary failed.  We made sure the
primary feeds were balanced within each rack.
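
Balancing the primaries is just a small bin-packing exercise.  A
minimal greedy sketch (the server names and wattages here are
hypothetical):

    # Greedy balancing of single-corded loads across the two strips in a rack.
    def balance_primaries(loads):
        """loads: list of (name, watts); returns strip assignments and totals."""
        strips = {"A": [], "B": []}
        totals = {"A": 0.0, "B": 0.0}
        for name, watts in sorted(loads, key=lambda x: -x[1]):  # biggest first
            target = min(totals, key=totals.get)                # lighter strip
            strips[target].append(name)
            totals[target] += watts
        return strips, totals

    # Hypothetical single-PSU boxes in one rack
    boxes = [("web1", 350), ("web2", 350), ("db1", 600), ("tape", 200)]
    print(balance_primaries(boxes))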

Emergency lighting, backbone networking, etc. all ran off separate
UPS-backed power circuits.  Even the HVAC ran off of UPS power.

...

When I worked for a small regional bank, they used the idea of
separate UPSes in the base of each rack, typically 1500VA UPSes or
equivalent.  We did monitor the UPSes centrally and replaced, on
average, one or two batteries a week.  I thought we should have kept
spares, but we never did.  The air conditioning was not backed up, so
if we had a power outage we still had to shut down (think Houston, TX,
in August, with rolling power outages).  Bad design for a data center
that was mission critical for the business, IMHO.
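
We used the vendor tools of the day for the central monitoring; if the
rack UPSes were APC units managed by apcupsd today, a minimal polling
sketch might look like the following (the apcaccess tool, the host
list, and the thresholds are my assumptions, not what the bank
actually ran):

    # Minimal central poll of per-rack UPSes, assuming apcupsd's apcaccess tool.
    import subprocess

    RACK_UPS_HOSTS = ["rack01:3551", "rack02:3551"]   # hypothetical hosts

    def ups_status(host):
        # apcaccess prints "KEY : VALUE" lines (STATUS, BCHARGE, TIMELEFT, ...)
        out = subprocess.run(["apcaccess", "status", host],
                             capture_output=True, text=True, check=True).stdout
        return {k.strip(): v.strip()
                for k, _, v in (line.partition(":") for line in out.splitlines())}

    for host in RACK_UPS_HOSTS:
        s = ups_status(host)
        charge = float(s.get("BCHARGE", "0").split()[0])
        if "ONBATT" in s.get("STATUS", "") or charge < 80.0:
            print(f"{host}: needs attention ({s.get('STATUS')}, {charge}% charge)")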

At that bank we ran lots of Dell and IBM Intel-architecture servers,
Cisco routers, a couple of IBM AIX boxes, and the like.  We tried to
spec everything to be 120VAC.  Someone found a 'deal' on some old
equipment we had to have either 220VAC or 208VAC installed for, but we
had those wired separately; they were not on UPSes.  We had a larger
(but old) UPS that we had serviced annually (I don't remember the
brand) that mainly powered the IBM System/32 (or whatever their
mid-range of PowerPC-type low-end 'mainframes' was called).  Not the
big ones.  But it was on this separate, fairly large UPS, along with
its direct peripherals.

...

If you need some special power or data center design stuff (raised
flooring, environmental monitors, etc.), Liebert has been in the
business 'forever'.  I think they are now part of Emerson at
emersonnetworkpower.com

If you want some good information about UPSes, go to the apcc.com web
site and read till your eyes cry for mercy.  You will understand the
Watts vs. VA difference, at least enough to make practical decisions
(not to engineer a system, but to be a knowledgeable user).
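
The short version of the Watts-vs-VA distinction, as arithmetic (the
load and the UPS ratings below are invented for illustration):

    # A UPS must be sized for both the real power (W) and the apparent
    # power (VA); the ratio between them is the load's power factor.
    def ups_big_enough(load_watts, load_va, ups_watts, ups_va):
        return load_watts <= ups_watts and load_va <= ups_va

    load_w, pf = 900, 0.7        # e.g. older PC supplies with a poor power factor
    load_va = load_w / pf        # ~1286 VA of apparent power

    # Checking against a hypothetical "1500VA / 980W" class unit
    print(ups_big_enough(load_w, load_va, ups_watts=980, ups_va=1500))   # True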

---
When working for an oil company, we were putting in a new Unix-based
data center.  We were just starting to use hardware RAID controllers
and putting 'huge' 8GB disk drives on them (five per controller).
Starting power on those drives was significant, but they had an option
for 'delayed startup'.  If you turned it on, on each drive, the drive
would delay spin-up by its SCSI ID number times 10 seconds.  This
allowed us to recover after a power failure without killing our
data-center-wide UPS.  All that to say: understand the power options
on the equipment you put in the data center.  Without the delay we
would have needed a much larger UPS to handle the startup currents.
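
The delayed-startup trick is easy to see in numbers.  Here is a sketch
of the peak spin-up current with and without the stagger (the per-drive
currents and spin-up time are made-up; the 10-seconds-per-SCSI-ID rule
is as described above):

    # Peak startup current with and without SCSI-ID staggered spin-up.
    SPINUP_AMPS = 2.5    # assumed surge per drive while spinning up
    IDLE_AMPS   = 0.8    # assumed steady-state current per drive
    SPINUP_SECS = 8      # assumed time a drive stays in its surge state
    SCSI_IDS    = [0, 1, 2, 3, 4]   # five drives per controller

    def peak_amps(staggered):
        peak = 0.0
        for t in range(60):
            amps = 0.0
            for scsi_id in SCSI_IDS:
                start = scsi_id * 10 if staggered else 0   # 10 s per SCSI ID
                if start <= t < start + SPINUP_SECS:
                    amps += SPINUP_AMPS
                elif t >= start + SPINUP_SECS:
                    amps += IDLE_AMPS
            peak = max(peak, amps)
        return peak

    print(peak_amps(staggered=False))  # all five surge at once: 12.5 A
    print(peak_amps(staggered=True))   # one surging while the rest idle: 5.7 A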

---

I was installing a computer on a ship.  We had a problem because,
without a UPS, it was doing corrupt writes on occasion.  It turned out
that they happened when the power sagged whenever someone used the
elevator (it was on the same circuit, even though I was told it was
not).  My suggestion was to put a UPS on the computer as a power
conditioner.  The vendor said it was not necessary. ... The problem
was resolved after I left the project, by putting a UPS on the
computer. ... Life goes on.


Sorry for the LONG response, but I hope it helps in understanding
some real-world data center power issues.


