3

After exhaustively searching the internet and here at Super User, I couldn't find a definitive answer to this. I've heard that some devices will work with less PCI-E bandwidth than they are rated for. Without further delay, here is my dilemma.

I'm looking for a hardware RAID solution, as I cannot afford the downtime that relying on backups alone would incur. (Yes, I will still be backing up my data; I'm well aware RAID is not a backup.) I also need good performance, in either RAID 5 or RAID 6 mode.

I've been having a bit of a problem with the performance of FakeRAID, which, as I've learned, isn't much of a shocker. My dilemma exists because my motherboard has two PCI-E x16 slots, except one runs in x4 mode, so it's x16 only in a mechanical sense. See more here: http://www.newegg.com/Product/Product.aspx?Item=N82E16813130608

I've been shopping around and determined that the LSI MegaRAID SATA/SAS 9260-4i with BBU is my best option; the trouble is that it, like every hardware RAID card worth mentioning, is an x8 card.

I'm somewhat under the impression that x8 is required only because of the potentially high bandwidth these cards can handle. As I'll be using a maximum of 4 hard drives in RAID 5 or 6 mode, should the x4 electrical/x16 mechanical slot work?
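
Here's the back-of-envelope math behind that impression; the per-drive figure is just my assumption for a typical mechanical drive, and real numbers will vary with the drives and controller overhead:

    # Rough headroom check (assumed figures: ~150 MB/s sustained per
    # mechanical drive, PCIe 2.0 at ~500 MB/s per lane).
    drives, per_drive_mb_s = 4, 150
    lanes, per_lane_mb_s = 4, 500

    array_peak = drives * per_drive_mb_s    # ~600 MB/s from the spindles
    slot_ceiling = lanes * per_lane_mb_s    # ~2000 MB/s through an x4 link
    print(f"array ~{array_peak} MB/s vs. slot ~{slot_ceiling} MB/s")

If those assumptions hold, an x4 link leaves roughly 3x headroom over what four mechanical drives can deliver.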

Thank you for your time and Best Regards, Howard

4 Answers

4

In theory, so long as the PCI-E card fits in the slot, the card and host should negotiate the number of lanes to be used. From Wikipedia:

  • A PCIe card physically fits (and works correctly) in any slot that is at least as large as it is (e.g., an ×1 sized card will work in any sized slot);
  • A slot of a large physical size (e.g., ×16) can be wired electrically with fewer lanes (e.g., ×1, ×4, ×8, or ×12) as long as it provides the ground connections required by the larger physical slot size.

In both cases, PCIe negotiates the highest mutually supported number of lanes. Many graphics cards, motherboards and BIOS versions are verified to support ×1, ×4, ×8 and ×16 connectivity on the same connection.
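
If you want to verify what the card and slot actually negotiated, on Linux something along these lines should work. This is only a sketch: the device selector 01:00.0 is a placeholder for your card's address (find it with lspci), and lspci -vv generally needs root to dump the link capability:

    # Sketch: compare the card's advertised link width (LnkCap) with the
    # width that was actually negotiated (LnkSta), as reported by lspci.
    import re
    import subprocess

    out = subprocess.run(
        ["lspci", "-vv", "-s", "01:00.0"],   # placeholder PCI address
        capture_output=True, text=True, check=True,
    ).stdout

    for label in ("LnkCap", "LnkSta"):       # capability vs. current status
        match = re.search(rf"{label}:.*?Width x(\d+)", out)
        if match:
            print(f"{label}: Width x{match.group(1)}")

An x8 card in your x4 slot should report LnkCap at x8 but LnkSta at x4.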

5
  • Thanks, that's pretty much what I was expecting. At 500 MB/s per lane, I don't see how 4 hard drives could saturate the bandwidth. My only hang-up was that upon contacting LSI, they insisted that x8 was required and that I should get a new motherboard. Commented Sep 10, 2011 at 10:39
  • As I mentioned, my answer is based on theory; it's possible that if they don't fully support the standard, the card may not work with fewer lanes, but I would be surprised if that were the case.
    – Mokubai
    Commented Sep 10, 2011 at 10:42
  • I'm ordering the part and will report back when it does or doesn't work. Thanks for the information! Commented Sep 14, 2011 at 16:01
  • @Howard - did it work OK? Commented Dec 8, 2012 at 17:29
  • @JustinGrant This was a while back; I don't know why, but I did not order the part. There's a new submission in which Steve S. actually tried this and succeeded. I realize this is a necro post, but hopefully it's useful to someone in the future. Commented Nov 2, 2014 at 16:06
2

I've done it with x8 physical/x8 electrical 3ware cards plugged into x16 physical/x4 electrical slots on motherboards. They work just fine. In my case, the 3ware GUI also confirmed the card was only seeing 4 electrical lanes, as expected.

In a SOHO environment, I doubt you would ever notice the difference. In a server environment where you wanted to maximize throughput and I/Os, you probably would. You might also notice it if you were running solid-state drives rather than mechanical drives, or if you were chasing maximum sequential throughput.

I have never tried plugging an x8 card into an open-ended slot with a lower physical lane count.
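
For anyone without a vendor GUI, the same information is exposed through sysfs on Linux. A minimal sketch, with the PCI address again a placeholder you'd look up via lspci:

    # Sketch: read negotiated vs. maximum link width straight from sysfs.
    from pathlib import Path

    dev = Path("/sys/bus/pci/devices/0000:01:00.0")   # placeholder address
    cur = (dev / "current_link_width").read_text().strip()
    cap = (dev / "max_link_width").read_text().strip()
    print(f"running at x{cur} of a possible x{cap}")  # e.g. x4 of x8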

0

I don't really want to run backups either, hence I run a RAID 5, and every so often, as an extra precaution, I use a single large drive to back up the entire RAID 5.

However, something didn't seem to get covered clearly. For gamers, you want the fastest speeds possible for your graphics card, right? That means x16 mechanical and x16 lanes. However, many motherboards do NOT give you that if you add another card (depending on where you stick it). For instance, the manual will say something like, "x16 will drop to x8 if a card is inserted in the x8 slot, and the x8 slot will run at x4 because the bandwidth is shared," which is stupid as heck.

Anyway, the x4 slot in many cases does NOT share bandwidth with the x16 slots and is therefore SAFE! Instead (in the case of my motherboard, for instance), it takes up all the bandwidth of the x1 slots, making them unusable, which I couldn't care less about. The problem I ran into was that my RAID card needs x8 MECHANICAL, and the x4 slot, while long enough to take an x8-sized card, has metal contacts for only x4, NOT x8, so I HAVE TO USE THE x8 SLOT! So lame!

Oddly enough, the card did work in the x4 slot, but only 50% of the time, dropping off and vanishing the other 50%, which is what got me looking into why this card, rock solid in my last machine, was unstable as heck in my new build. So now my card is in the x8 slot, FULLY stable, and my gaming rig has dropped 50% in bandwidth... not happy, but I understand the problem now. I wish I had an x4 SATA RAID card, but I haven't found any in existence from Areca.

-2

I just noticed your MB from the Newegg link. Are you running 2 GPUs in the two x16 slots, or in the x4 (blue) slots? As on most boards, you can go into the UEFI and set the width of the PCI-E lanes for the corresponding slot, i.e. x16 or x4; the card must be in the slot to effect this change.

1
  • Welcome to SuperUser. Please be aware of the age of questions, and avoid providing new answers to old questions. Links age and the contents of the linked pages change, PCI versions change, color assumptions about slots change, UEFI wasn't always included on PCs back when the question was asked, etc. Commented Sep 24, 2020 at 23:14
