
I have just finished my (first ever) server build. Everything seems to work OK, except that I only see one of the two NVMe M.2 SSDs that I installed via a PCIe adapter. (At least, the capacity shows as 500 GB rather than 1 TB.)

  • Are the NVMe SSDs faulty?
  • Or is the adapter automatically generating some sort of RAID array?
  • If so, how can I modify this?
  • Or am I missing some setting somewhere?

The hardware (amongst others):

The following image shows a screenshot of the BIOS - the highlighted drive is the one in the M.2 PCIe adapter. The same goes for /dev/disks.

Additional info:

  • The 250 GB SSD has ESXi installed - there, too, the SSD in the adapter shows as 500 GB, rather than 2x 500 GB
  • The status LEDs on the ASUS adapter are lit on both slots that hold an SSD
  • The manual mentions “For Intel® motherboards, go to Advanced > CPU Storage Configuration, then set the PCIE slot(s) that you have installed the Hyper M.2 x16 card(s) to [Hyper M.2 X16].” But I cannot find this option.
  •
    The ASUS website (and a number of shop websites that sell these) specifically lists this card as only compatible with a small number of Asus mainboards. Are you sure it is supposed to work with your Fujitsu system?
    – Tonny
    Commented Dec 28, 2018 at 19:34
  • @Tonny, according to my reseller it is. Let me double check with them.
    – 3JL
    Commented Dec 28, 2018 at 19:45
  • @K7AAY, thanks for the edits. Re configuring: maybe that’s the issue - I do not know how/where to configure the device. There is nothing/not much about this in the manual. It does say: “For Intel® motherboards, go to Advanced > CPU Storage Configuration, then set the PCIE slot(s) that you have installed the Hyper M.2 x16 card(s) to [Hyper M.2 X16].” But I cannot find this option.
    – 3JL
    Commented Dec 29, 2018 at 1:10
  • @KamilMaciorowski, Super User is not a discussion forum, but it is, by definition, a Q&A forum because anybody is allowed to ask or answer questions.
    – Ron Maupin
    Commented Dec 29, 2018 at 1:20

5 Answers


The PCIe adapter in the original poster's case requires a mainboard with bifurcation support. The adapter was confirmed incompatible, but the reason why was never addressed.

Bifurcation splits an x16 or x8 electrical slot into a division of that slot's maximum lane count; in some cases support can be modded in using BIOS/UEFI editors.

Not all motherboards that can be modded support a full x4/x4/x4/x4 bifurcation; some can only address 3 of the possible 4 devices.

Since the original poster's mainboard did not support this, only the NVMe drive on the first 4 lanes was visible to the machine.
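If you want to confirm from software how many NVMe controllers the machine actually sees, a minimal sketch along the following lines works on a Linux host (the OP runs ESXi, which has its own tooling, so treat this purely as an illustration):

    # Minimal sketch (Linux only): list the NVMe controllers the kernel enumerated.
    # With a bifurcation-only card in a slot that is not bifurcated, only the
    # drive on the first four lanes will show up here.
    from pathlib import Path

    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        pci_addr = (ctrl / "device").resolve().name   # e.g. 0000:03:00.0
        model = (ctrl / "model").read_text().strip()
        print(f"{ctrl.name}: {model} ({pci_addr})")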


Check the manual for the PCIe card carefully - some of the cheaper ones actually consist of one PCIe M.2 slot and one SATA M.2 slot, with the SATA slot needing to be cabled to the motherboard to work.

For example - this model: https://www.silverstonetek.com/product.php?pid=575&area=en

(I bought one of these and was disappointed to discover that limitation).

I'd check the link you've posted, but the manual for the ASUS card you've linked deals exclusively with the RAID setup process (it doesn't discuss the hardware at all).

  • Thanks @Adam, I have also seen the cards you mention. This one seems different. I have added more hardware info in the original post.
    – 3JL
    Commented Dec 29, 2018 at 12:49

After some back and forth with the retailer, they finally confirmed the MB and adapter are indeed not compatible. Thanks!


A PCIe interface is made up from one or more "lanes". Each lane is made up of a pair (one in each direction) of high speed serial transceivers. More lanes give you more bandwidth.

Some but not all PCIe interfaces support "bifurcation". The same transceivers can be used either to support a single large link, or multiple smaller links. At least some ICs that support bifurcation require the bifurcation to be configured explicitly and the implementers of boards with those ICs on may or may not expose said configuration to the user.

There are two ways to design a PCIe x16 to 4x PCIe M.2 card*. The cheap way is to just bifurcate the lanes coming from the host. The problem is this only works in some systems: the slot must have all 16 lanes connected, the hardware driving the slot must support bifurcation, and the firmware must allow the user to enable that bifurcation. If bifurcation is not supported or enabled, then you will only see the M.2 device that is connected to the first four lanes.

That appears to be the situation you are in. You have a bifurcation based card, but either your motherboard doesn't support bifurcation, or you have not managed to find the option to enable it (unfortunately motherboard manufacturers are not consistent in their terminology for this stuff).

The other way to design such a card is to include a bridge chip on the card. The bridge chip presents a proper x16 interface to the host and four separate x4 interfaces to the downstream devices. No special configuration is needed on the host computer, and the card can still operate if the upstream interface is narrower than the full x16.

The problem is that bridge chips big enough for the job (32 lanes total, 16 upstream to the host and 4x4 downstream to the SSDs) are expensive and the cards aren't exactly massive sellers with huge economies of scale. The end result is that bridge-based cards are rare and quite expensive. On a quick search the only vendor I could turn up was "sonnet" (a manufacturer of professional AV gear) with their cards costing hundreds of pounds. I'm sure I've seen others in the past but I can't turn them up right now.

It's usually fairly easy to tell the difference between the two visually. A bifurcation-based card will have tracks running directly from the M.2 slots to the PCIe edge connector and will generally only have relatively small ICs. On a bridge-based card the tracks from the edge connector and M.2 slots will instead run to a large bridge IC.

* To further confuse matters there also exist PCIe cards that are designed for SATA M.2 drives. You can usually identify these as they tend to be x1 cards and they tend to have B key rather than M key slots.
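Beyond the visual check, a rough software-side heuristic (Linux/sysfs only, and only an approximation) is to look at the PCI topology: behind a plain bifurcated slot the NVMe controllers sit directly under a root port, whereas a bridge-based card adds extra switch hops in between:

    # Rough heuristic (Linux): count the PCI hops between the root complex and
    # each NVMe controller. Extra hops suggest a switch/bridge chip on the card.
    import re
    from pathlib import Path

    PCI_ADDR = re.compile(r"^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$")

    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        devpath = (ctrl / "device").resolve()
        hops = [p for p in devpath.parts if PCI_ADDR.match(p)]
        # hops[-1] is the controller itself; everything before it is a port/bridge
        print(f"{ctrl.name}: {' -> '.join(hops)} ({len(hops) - 1} hop(s) upstream)")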


A review of the ASUS HYPER M.2 X16 PCIe RAID card manual shows it cannot be configured except as RAID. If you want to have 1,000 GB of storage using the two 500 GB M.2 drives, you must configure it for RAID 0, which is a more fragile system than having each M.2 drive assigned to its own drive letter; if either drive fails, the content of both drives is lost.

The Fujitsu motherboard uses an Intel CPU, so you would follow Chapter One of the Asus instructions. You cannot use RAID 1, RAID 5, or RAID 10; you must use RAID 0.
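To make the capacity trade-off concrete, here is a back-of-the-envelope sketch using the standard RAID capacity formulas (nothing in it is specific to the ASUS card):

    # Usable capacity with two 500 GB M.2 drives under the standard RAID formulas.
    def usable(level: str, drive_gb: int, n: int) -> str:
        rules = {
            "RAID 0":  (2, n * drive_gb),        # striping, no redundancy
            "RAID 1":  (2, drive_gb),            # mirroring
            "RAID 5":  (3, (n - 1) * drive_gb),  # single parity
            "RAID 10": (4, n * drive_gb // 2),   # striped mirrors
        }
        min_drives, capacity = rules[level]
        if n < min_drives:
            return f"{level}: needs at least {min_drives} drives"
        return f"{level}: {capacity} GB usable"

    for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 10"):
        print(usable(level, drive_gb=500, n=2))
    # RAID 0: 1000 GB usable
    # RAID 1: 500 GB usable
    # RAID 5: needs at least 3 drives
    # RAID 10: needs at least 4 drives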

Furthermore, section 1.1.3 Paragraph 2 notes

Due to chipset limitation, when SATA ports are set to RAID mode, all SATA ports run at RAID mode together.

If so, this may require you to configure your SATA HDDs for RAID, which you may not want.

I would suggest you consider returning this to the vendor if you don't want that.
