I am building my personal private cloud. I am considering using second-hand Dell OptiPlexes as worker nodes, but they only have 1 NIC, and I’d need a contraption like this for my redundant network.

Then this wish came to my mind. Theoretically, such a one-box solution could be faster than gigabit too.

  • Bitswap@lemmy.world · 1 year ago

    That product will never exist, as there are only a handful of customers who would want it and even fewer who would pay for it.

    Also, look up the MTBF reports. It’s more likely that all your client systems will fail before a switch does.

  • SheeEttin@lemmy.world · 1 year ago

    If you have a bunch of nodes, what do you need redundant NICs for? The other nodes should pick up the slack.

    It’s unlikely for the NIC or cable to suddenly go bad. If you only have one switch, you’re not protected against its failure, either.

    • akash_rawal@lemmy.world (OP) · 1 year ago

      I plan to have 2 switches.

      Of course, if a switch fails, client devices connected to the switch would drop out, but any computer connected to both switches should have link redundancy.
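
      For reference, here’s a minimal sketch of what that link redundancy could look like on the node itself, assuming each box ends up with two interfaces one way or another (names like eno1 and enp2s0 are just placeholders). An active-backup bond needs no cooperation from either switch, so each leg can go to a different switch:

      ```
      # create an active-backup bond; miimon polls link state every 100 ms
      ip link add bond0 type bond mode active-backup miimon 100
      # enslave one interface per switch (interfaces must be down to enslave)
      ip link set eno1 down
      ip link set enp2s0 down
      ip link set eno1 master bond0
      ip link set enp2s0 master bond0
      # bring everything up and put the address on the bond, not the members
      ip link set eno1 up
      ip link set enp2s0 up
      ip link set bond0 up
      ip addr add 192.168.10.21/24 dev bond0
      ```

      Since active-backup only keeps one leg live at a time, a node never exceeds a single link’s bandwidth, but it survives a switch, cable, or NIC failure without the switches knowing anything about it.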

    • computergeek125@lemmy.world · 1 year ago (edited)

      There are still tons of reasons to have redundant data paths down to the switch level.

      At the enterprise level, we assume even the switch can fail. As an additional note, only some smart/managed switches (typically the ones with removable modules that cost five to six figures USD per chassis) can run a firmware upgrade without interrupting network traffic.

      So both for the failure case and for staying online during an upgrade procedure, you absolutely want two switches if that’s your jam.

      On my home system, I actually have four core switches: a Catalyst 3750X stack of two nodes for L3 and 1Gb/s switching, and then all my “fast stuff” is connected to a pair of ES-16-XG, each of which has a port channel of two 10G DACs back to the Catalyst stack, with one leg to each stack member.
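
      If anyone wants the host-side analogue of that port channel, a rough sketch with Linux bonding would look like the following (interface names are placeholders, and unlike active-backup this only works when both legs terminate on the same logical switch, i.e. a stack or MLAG pair):

      ```
      # LACP (802.3ad) bond; both links carry traffic, hashed per flow
      ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast xmit_hash_policy layer3+4
      ip link set eno1 down
      ip link set enp2s0 down
      ip link set eno1 master bond0
      ip link set enp2s0 master bond0
      ip link set eno1 up
      ip link set enp2s0 up
      ip link set bond0 up
      # verify the LACP partner negotiated on both legs
      cat /proc/net/bonding/bond0
      ```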

      To the point about NICs going bad - you’re right, it’s infrequent but can happen, especially with consumer hardware rather than enterprise hardware. Also, at the 10G fiber level, though infrequent, you still see SFPs and DACs go bad at a higher rate than NICs.

  • Concave1142@lemmy.world · 1 year ago

    I’m going to go a different route than your question. If you have a spare M.2 slot and room in your PC, you can install an M.2 network adapter. I recently installed an M.2-to-2.5GbE adapter in a Dell 3060 SFF as a proof of concept at home for getting a Proxmox Ceph cluster working over 2.5GbE.

    I used this adapter. https://www.ebay.com/itm/256214788974?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=96RQC3CqQ_u&sssrc=4429486&ssuid=9BfwgvpgRMG&var=&widget_ver=artemis&media=COPY
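
    If it helps anyone copying this, here’s a minimal sketch of dedicating that 2.5GbE port to Ceph traffic on Proxmox; the interface name (enp2s0) and the 10.10.10.0/24 subnet are just placeholders for whatever your adapter shows up as:

    ```
    # /etc/network/interfaces (excerpt) - dedicated Ceph network on the 2.5GbE NIC
    auto enp2s0
    iface enp2s0 inet static
        address 10.10.10.11/24

    # then point Ceph at it in /etc/pve/ceph.conf:
    #   cluster_network = 10.10.10.0/24
    ```

    A quick `ethtool enp2s0 | grep Speed` is handy for confirming the adapter actually negotiated 2500Mb/s.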

    • krolden@lemmy.ml · 1 year ago

      This is the way to do it for mini PCs, in my experience. Unless for some reason the box you’re using only allows a whitelist of WLAN cards, but I haven’t run into any that do that yet.

  • snekerpimp@lemmy.world · 1 year ago

    If those are gigabit, I think I have that exact adapter. I have never used it in production, but I have not run into any issues using it with a laptop when diagnosing. Theoretically you can connect hosts directly to each other via USB3, a la Level1, and get really fast throughput, but I have not even started investigating this.

    • akash_rawal@lemmy.world (OP) · 1 year ago

      The Level1 video shows Thunderbolt networking, though. It is an interesting concept, but it requires nodes with at least 2 Thunderbolt ports in order to have more than 2 nodes.
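
      For anyone curious, my rough understanding of a two-node Thunderbolt link on Linux looks like this; the thunderbolt-net module and the thunderbolt0 interface name are assumptions on my part, so treat it as a sketch:

      ```
      # on both hosts: load the Thunderbolt networking driver
      modprobe thunderbolt-net
      # a thunderbolt0 interface should appear once the cable is connected
      ip link set thunderbolt0 up
      # host A
      ip addr add 10.99.0.1/30 dev thunderbolt0
      # host B
      ip addr add 10.99.0.2/30 dev thunderbolt0
      # sanity check from host A
      ping -c 3 10.99.0.2
      ```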

    • pezhore@lemmy.ml · 1 year ago

      Adding to this - I have those adapters too, and FYI they don’t support jumbo frames.
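
      Easy to check on any adapter, by the way; a quick sketch (interface name is a placeholder):

      ```
      # try to raise the MTU; the kernel refuses if the driver can't do jumbo frames
      ip link set dev eth0 mtu 9000
      # confirm what actually took effect
      ip link show dev eth0 | grep mtu
      # end-to-end test with no fragmentation (8972 = 9000 minus IP/ICMP headers)
      ping -M do -s 8972 10.10.10.12
      ```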

  • themoonisacheese@sh.itjust.works · 1 year ago

    Nice idea if you actually have the rest of the redundant network, uplink and all that jazz (otherwise you’re wasting time and money).

    the reason this won’t ever be a product is that if you’re serious about your redundancy, you’re installing extra NICs inside the servers, which are ideally not second-hand. The only person in the target market for such a product is, basically, you.

    also: do these servers not have PCIe slots inside? Is there truly no way of adding NICs internally?

    • akash_rawal@lemmy.world (OP) · 1 year ago

      Yes, the entire network is supposed to be redundant and load-balanced, except for some clients that can only connect to one switch (but if a switch fails it should be trivial to just move them to another switch.)

      I am choosing Dell OptiPlex boxes because they are the smallest x86 nodes I can get my hands on. There is no PCIe slot in them other than the M.2 slot, which will be used for the SSD.

  • PeachMan@lemmy.world · 1 year ago

    Why not just use a separate switch and wireless AP for redundancy? Wi-Fi can be your backup if your wired switch goes down. Assuming your Dells have Wi-Fi cards, that is.
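
    On Linux that could be a hedged sketch like the one below: an active-backup bond with the wired NIC as the active leg and the Wi-Fi card as standby (interface names are placeholders, and the Wi-Fi must already be associated). Bonding over Wi-Fi is finicky because the AP only accepts frames from the card’s own MAC, which is what fail_over_mac=active works around:

    ```
    # wired primary, Wi-Fi standby; fail_over_mac=active keeps the Wi-Fi MAC valid
    ip link add bond0 type bond mode active-backup miimon 100 fail_over_mac active
    ip link set eno1 down
    ip link set wlan0 down
    ip link set eno1 master bond0      # enslaved first, so it starts as the active leg
    ip link set wlan0 master bond0
    ip link set eno1 up
    ip link set wlan0 up
    ip link set bond0 up
    ip addr add 192.168.10.23/24 dev bond0
    # optionally pin the wired NIC as primary so traffic fails back to it
    echo eno1 > /sys/class/net/bond0/bonding/primary
    ```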