I'm looking to upgrade some of my internal systems to 10 gigabit, and I'm seeing patchy/conflicting/outdated info. Does anyone have experience with local fiber? This would be entirely isolated within my LAN, to enable faster access to my file server.
Current hardware:
- MikroTik CSS326-24G-2S+RM, featuring 2 SFP+ ports capable of 10GbE
- File server with a consumer-grade desktop PC motherboard. I have multiple options for this one going forward, but all will have at least 1 open PCIe x4+ slot
- This file server already has an LSI SAS x8 card connected to an external DAS
- Additional consumer-grade desktop PC, also featuring an open PCIe x4 slot.
- Physical access to run a fiber cable through the ceiling/walls
My primary goal is to connect these two machines to each other as fast as possible, while still allowing access to the rest of the LAN. I'm reluctant to keep using Cat6a (which is what these run on now) due to reports of excessive heat and instability from 10GBASE-T SFP+ modules.
As such, I'm willing to run some fiber cables. Here is my current plan, mostly sourced from FS:
- 2x Supermicro AOC-STGN-i2S / AOC-STGN-i1S (sourced from eBay)
- 2x Intel E10GSFPSR Compatible 10GBASE-SR SFP+ 850nm 300m DOM Duplex LC/UPC MMF Optical Transceiver Module (FS P/N: SFP-10GSR-85 for the NIC side)
- 2x Ubiquiti UF-MM-10G Compatible 10GBASE-SR SFP+ 850nm 300m DOM Duplex LC/UPC MMF Optical Transceiver Module (FS P/N: SFP-10GSR-85, for the switch side)
- 2x 15m (49ft) Fiber Patch Cable, LC UPC to LC UPC, Duplex, 2 Fibers, Multimode (OM4), Riser (OFNR), 2.0mm, Tight-Buffered, Aqua (FS P/N: OM4LCDX)
I know the cards are x8, but it seems that's only needed to max out both ports. I will only be using one port on each card.
Are fiber keystone jacks/couplers (FS P/N: KJ-OM4LCDX) a bad idea?
Am I missing something completely? Are these even compatible with each other? I chose Ubiquiti for the switch SFP+ since MikroTik doesn't vendor-lock, AFAICT.
Location: US
I'll have to review your post in greater detail in a bit, but some initial comments: cross-vendor compatibility of xcvrs was a laudable goal, defeated only by protectionist business interests, and the result is that the only real way to validate compatibility is to try it.
Regarding your x4 slot and the NICs being x8: does your mobo have the slot cut in such a way that it can accept a physical x8 card even though only the x4 lanes are electrically connected?
For keystone jacks, I personally use them but I try not to go wild with them, since just like with electrical or RF connectors, each one adds some amount of loss, however minor. Having one keystone jack at each end of the fibre seems like it shouldn't be an issue at all.
Final observation for now: this plan sets up a 10 Gb network with fibre, but your use-case for now is just for a bigger pipe to your file server. Are you expecting to expand your use-cases in future? If not, the same benefit can be had by a direct fibre run from your single machine to your file server. Still 10 Gbps but no switch needed in the middle, and you have less risk of cross vendor incompatibility.
I'm short on time rn, but I'll circle back with more thoughts soon.
Thanks for the quick reply. The available x4 slots are all physically x16, but electrically x4.
While my use case today is pretty narrow, I'd rather not mess with custom network settings to make a direct link cooperate with an otherwise completely flat network. The file server is running Ubuntu, and the desktop is currently running VMware ESXi; in the future I expect to replace it with something else. I did verify that the Intel network chipset is listed on the HCL.
Ok, I'm back. I did some quick research, and it looks like that MikroTik switch should be able to do line rate between the SFP+ ports. That's important, because if it were somehow switching in software, the performance would be awful. That said, my personal opinion is that MikroTik products are rather unintuitive to use. My experience has been with older Ubiquiti gear and even older HP ProCurve enterprise switches. To be fair, prosumer products like MikroTik's have to make some tradeoffs compared to the money-is-no-object enterprise space. But I wasn't thrilled with the CLI on their routers; maybe the switches are better?
Moving on, that NIC appears to be equivalent to an Intel X520, so drivers and support should exist for any mainline OS you're running. For 10 Gbps and beyond, I agree that you want to go with pluggable modules when possible, unless you absolutely know that the installation will never run fibre.
I will note that 10 Gbps over Cat 5e -- while not mentioned in the standard, and thus officially undefined behavior -- has been reported to work over short distances, in the range of 15-30 meters by some accounts. The twisted-pair Ethernet specs only call out the supported wire types by their category designation, but ultimately it's the signal integrity of the differential signals that matters. Cat 3, 5, 5e, 6, etc. are just increasingly better at maintaining a signal over distance. Being officially undefined just means that if it doesn't work, the manufacturer told no lie.
But you're right to avoid 10 Gbps twisted pair, as the xcvrs are expensive, thermally ridiculous, power hungry, and themselves have length limits shorter than what the spec allows, because it's hard to stuff all the hardware into an SFP+ pluggable module. Whereas -SR optics are cheap and DACs even cheaper (when the distance is short enough). No real reason to adopt twisted pair 10 Gbps if fibre is an option.
That said, I didn't check the compatibility of your selected SR transceiver against your NICs and switch, so I'll presume you've done your homework for that.
Going back to the x8 card in an electrically x4 slot: there's a thing in the PCIe spec where the only two widths that are mandatory to support are 1) the card's full physical width, and 2) the x1 width. No other widths are necessarily supported. So there's a small possibility that the NIC will only connect at x1, which would severely limit your performance. But this is kinda pathological, and 9 out of 10 PCIe cards will do graceful width reduction beyond what the PCIe spec demands. And being an X520 variant, I would expect the driver to have no issue with it; crummy PCIe drivers can break when their bad assumptions fall through.
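If you want to verify what the card actually trained at, Linux exposes the negotiated link width in sysfs. A rough sketch (the PCI address is just a placeholder; find your NIC's real address with `lspci`):

```python
# Rough sketch: compare negotiated vs. maximum PCIe link width via sysfs.
# The address below is a placeholder; find yours with `lspci | grep -i eth`.
from pathlib import Path

PCI_ADDR = "0000:03:00.0"  # placeholder slot address for the NIC

dev = Path("/sys/bus/pci/devices") / PCI_ADDR

def read_attr(name: str) -> str:
    # sysfs attributes are single-line text files
    return (dev / name).read_text().strip()

current = read_attr("current_link_width")  # width the link actually trained at
maximum = read_attr("max_link_width")      # width the card is capable of
speed = read_attr("current_link_speed")    # e.g. "5.0 GT/s PCIe"

print(f"link: x{current} of x{maximum} at {speed}")
if current == "1" and maximum != "1":
    print("warning: card fell back to x1 -- expect throttled 10 GbE throughput")
```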
Overall, I don't see any burning red flags with your plan. I hope you'll update us with new posts as things progress!
This is the part that's making me the most nervous. Even when I can find a compatibility list, it only names obscure first-party transceivers that I can't find (or that cost an absurd amount). But from what I've gathered, SFP+ is perfectly standardized, and it's only the lockout code preventing you from using any transceiver on the market.
I couldn't even tell if there's a difference (beyond basic spec compatibility, like 1G vs 10G, SR MMF vs LR, etc.) between the expensive ones and the cheap generics. There must be some differences, because I see multiple models that look otherwise identical. Unless they're just cosmetic or date-code variations, which is always a possibility.
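Once I have modules in hand, I figure the closest I'll get to an answer is diffing what they report about themselves. Something like this, using ethtool's module EEPROM dump (the interface name is just an example, it needs root, and the exact field labels can vary by ethtool version):

```python
# Rough sketch: dump the SFF-8472 EEPROM fields that `ethtool -m` decodes
# for an SFP+ module (vendor, part number, wavelength, etc.) so two
# otherwise-identical-looking modules can be compared.
import subprocess

def module_info(iface: str) -> dict[str, str]:
    # `ethtool -m` reads the pluggable module's EEPROM; requires root
    out = subprocess.run(
        ["ethtool", "-m", iface],
        capture_output=True, text=True, check=True,
    ).stdout
    info = {}
    for line in out.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

a = module_info("enp3s0f0")  # example interface name
for key in ("Vendor name", "Vendor PN", "Vendor SN",
            "Transceiver type", "Laser wavelength"):
    print(f"{key:20} {a.get(key, '?')}")
```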
Unfortunately, the situation is not so simple. Even if the various vendor locks weren't a thing, the fact is that the testing matrix of xcvrs on the market, crossed with the number of switch manufacturers and all their models, is ginormous, and it would be a herculean effort to acquire, let alone validate, even a subset of all combinations.
While SFP is defined in a standard, the allowable variances -- due to things like manufacturing capabilities and the realities of environmental influences -- mean that it's possible for two compliant transceivers to just not link up. It's unfortunate, but interoperability with so many players and at such cut-throat margins leads to this reality.
And since it's a chain of components, any incompatibility of switch, xcvr, or fibre can wreck a link, and then the blame game hot-potato gets tossed around since no vendor wants to investigate a link issue if it might not be their fault.
In my experience, though, the initial link negotiation is the most problematic part of building a link that isn't all supplied by one vendor. Once past this, I find that a link rarely has issues thereafter. So it helps if you're able to return xcvrs that don't work for your setup.
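And once it does link up, iperf3 between the two machines is the usual way to confirm you're actually getting something near line rate. A rough sketch of the client side (the server address is a placeholder; run `iperf3 -s` on the file server first):

```python
# Rough sketch: run a 10-second iperf3 TCP test and report throughput.
# Start `iperf3 -s` on the other machine first; the address is a placeholder.
import json
import subprocess

SERVER = "192.168.1.10"  # placeholder: the file server's address

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "-J"],  # -J = JSON output
    capture_output=True, text=True, check=True,
)
data = json.loads(result.stdout)
gbps = data["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"measured throughput: {gbps:.2f} Gbps")
if gbps < 9.0:
    print("well below 10 GbE line rate -- check PCIe width, MTU, and CPU load")
```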
This is just anecdotal, but I have never once experienced an issue with SFP+ vendor lock. I have mixed and matched transceivers between MikroTik, D-Link, TP-Link, Dell Enterprise, and Zyxel switches, as well as both Mellanox and Intel NICs. The only issue I can recall is some auto-negotiation trouble using 1G modules in a Mellanox switch; manually setting the link rate fixed it. I use a combination of 10Gb fiber, 10Gb copper, and 1Gb copper modules, as well as DACs, depending on the situation.
I know that vendor lock does exist, but it's not as widespread with modern hardware.
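For reference, the manual fix on a Linux box boils down to pinning the speed with ethtool. A rough sketch (the interface name is an example; needs root):

```python
# Rough sketch of the manual workaround: pin the link speed and disable
# autonegotiation. Equivalent to:
#   ethtool -s enp3s0f0 speed 1000 duplex full autoneg off
import subprocess

IFACE = "enp3s0f0"  # example interface name

subprocess.run(
    ["ethtool", "-s", IFACE,
     "speed", "1000", "duplex", "full", "autoneg", "off"],
    check=True,
)
print(f"pinned {IFACE} to 1000 Mb/s full duplex with autoneg off")
```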
I think you're right, as prosumer and low-end enterprise switch vendors have less incentive to bundle first-party xcvrs with switch sales. However, the ISP and large-enterprise market segments still have vendor locks, although many switches have an "allow unsupported xcvr" mode, which makes a best-effort attempt to operate a third-party xcvr, but the warranty won't be honored while such a xcvr is installed.
The likes of Cisco and HPE do things like this, but given that the target customers for such switches buy them in the hundreds to thousands, and each switch already costs thousands of dollars, the cost of first-party pluggables is just part of the deal. Such customers also value reliability to a greater degree, so even a minuscule prospect of incompatibility will be avoided.
Insofar as it pertains to this community, the ability to enable the unsupported-xcvr mode means old high-end equipment gets a second life in someone's homelab, since warranties stop mattering there.