# Dell PERC H200 spec sheet
The LSIs are notorious for kinda-working but horribly corrupting data randomly when they're cooking, so in my view this is a Really Bad Fix, because once it starts puking corrupted blocks into your pool, that could be a pool-killing event. Yes, it works great right UP to that point, no doubt. But so does riding in your car without a seatbelt, right up to the crash.

Anyways, the only "real" fix I've found for normal PC cases is to make sure that there's airflow over everything important, that things are actually cool, and that failure of a single fan doesn't compromise that. A much better fix would have been to replace the heatsink, sacrificing the next slot (which the fan does anyways), and then make sure of "good airflow" over EACH COMPONENT that needs to be cooled. This is WICKED HARD, and he's right that this isn't JUST about "oh, I have twenty 120MM fans, therefore I must be OK"; it is also about localized airflow management, and about making sure that a single fan's failure doesn't result in the airflow stopping.

So a handheld anemometer, temperature probes, and a handheld IR thermometer are good tools to have. I can't justify the cost of a FLIR camera, although they've admittedly come down in price quite a bit; the shop here just doesn't build enough stuff in normal PC cases to warrant that.
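Handheld spot checks are great at build time, but the LSI-cooking failure mode argues for ongoing monitoring too. As a rough sketch (not from the original post), a cron'd script that polls the temperature of drives hanging off the HBA can flag a cooling regression before anything starts corrupting data. This assumes smartmontools is installed; the device paths and the 45C limit are placeholder examples:

```python
#!/usr/bin/env python3
"""Rough spot-check of drive temperatures behind an HBA.

Assumes smartmontools is installed; device paths and the
temperature limit below are examples, not recommendations.
"""
import re
import shutil
import subprocess

TEMP_LIMIT_C = 45  # pick a limit appropriate for your drives


def parse_temp(smartctl_output: str):
    """Pull a temperature out of `smartctl -A` output, if present."""
    # Common ATA attribute line, e.g.:
    # 194 Temperature_Celsius 0x0022 119 105 000 Old_age Always - 31
    m = re.search(r"Temperature_Celsius.*?-\s+(\d+)", smartctl_output)
    if m:
        return int(m.group(1))
    # SAS-style line, e.g.: "Current Drive Temperature:     31 C"
    m = re.search(r"Temperature:\s+(\d+)\s+C", smartctl_output)
    return int(m.group(1)) if m else None


def check(devices):
    for dev in devices:
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        temp = parse_temp(out)
        if temp is None:
            print(f"{dev}: no temperature reported")
        elif temp > TEMP_LIMIT_C:
            print(f"{dev}: {temp}C -- HOT, check airflow")
        else:
            print(f"{dev}: {temp}C ok")


if __name__ == "__main__":
    if shutil.which("smartctl"):
        check(["/dev/sda", "/dev/sdb"])  # example device names
    else:
        print("smartctl not found; install smartmontools")
```

Drive temperature is only a proxy for how hot the controller itself is running, but a sudden jump across every drive on one card is a decent hint that airflow over that slot has died.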
![Dell PERC H200 spec sheet](https://practicalsbs.files.wordpress.com/2014/08/20140806_131338767_ios.jpg)
The high-flow blank creates some nice airflow over the RAID controller heatsink, sourced from more than a single fan, and the fans are all monitored by IPMI, so a cooling fan failing does not turn into an immediate overheat and crisis.

Along those lines, I cringe every time I see someone proudly describing how he tacked a crappy 40MM fan onto the top of an HBA heatsink, because when that fan invariably fails in a few years, it will actually act as an airblock/insulator and cause his HBA to cook much worse.
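Since the chassis fans are IPMI-monitored, you can poll the BMC and catch a dead fan before it becomes a thermal event. A minimal sketch, assuming `ipmitool` is installed and the BMC reports fans in the usual `sdr type Fan` pipe-delimited format (sensor names and exact formatting vary by board):

```python
#!/usr/bin/env python3
"""Flag dead or missing chassis fans via IPMI.

Assumes ipmitool is installed and the BMC exposes fan sensors;
sensor names and output formatting vary by motherboard.
"""
import shutil
import subprocess


def failed_fans(sdr_output: str):
    """Return names of fans that are absent, faulted, or at 0 RPM.

    Expects `ipmitool sdr type Fan` lines shaped like:
    FAN1 | 41h | ok | 29.1 | 4200 RPM
    """
    bad = []
    for line in sdr_output.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 5:
            continue
        name, _, status, _, reading = fields[:5]
        # "ns" status or "No Reading" means the fan is missing/dead;
        # "0 RPM" means it is present but not spinning.
        if status != "ok" or not reading.endswith("RPM") or reading.startswith("0 "):
            bad.append(name)
    return bad


if __name__ == "__main__":
    if shutil.which("ipmitool"):
        out = subprocess.run(["ipmitool", "sdr", "type", "Fan"],
                             capture_output=True, text=True).stdout
        for fan in failed_fans(out):
            print(f"WARNING: {fan} not spinning -- fix before something cooks")
    else:
        print("ipmitool not found")
```

Run from cron or a monitoring agent, this turns "a fan died" into an email instead of a corrupted pool.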
Inside servers, you do need to manage airflow. It's a 1U UIO chassis from Supermicro, a clever but horrible idea that involved reversing the component side of one of the PCIe cards, so that the back of the bottom card faces the bottom of the chassis, kinda like a "mirror-mirror PCIe". Of course, years later, UIO cards are nearly unobtainium and unsupported anyways, so I've been ripping out the UIO risers and putting in a WIO riser instead, which gives two standard PCIe slots, with the bottom card's PCB roughly in the middle of the chassis.

For wiring reasons, I like to put RAID cards in the bottom slot, leaving the top accessible for ethernet/10G/etc., but a combination of issues means that you have to carefully engineer airflow. First, the chassis comes with only four of six possible fans, and the two missing fans are over in the area of the expansion slots. Then, we install a "low-flow" slot blank in the top slot and a "high-flow" in the bottom, and don't worry about plating the card itself, but rather just make sure the card is cabled and seated so that there is airflow on all sides, because the RAID controller and cabling are such that the card is not removable without pulling it along with the riser.