  • In days past, some drive vendors used different sector layouts, which could cause issues with RAID. Most drives nowadays share the same layout, so you're unlikely to run into problems, but I still try to get the same drive model anyway just to be perfectly sure.

    Even then you may run into weird issues. For example, one of my 1.2 TB enterprise SSDs was reporting 1.12 TiB rather than the 1.09 TiB the other 7 drives had. TrueNAS refused to build a vdev with that drive and I had to return it for a new one.
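
    For reference, the mismatch is mostly a decimal (TB) vs. binary (TiB) units thing. A quick sketch of the math with my drive sizes plugged in:

    ```python
    # Vendors label drives in decimal TB; TrueNAS reports binary TiB.
    TB = 10**12
    TiB = 2**40

    labeled = 1.2 * TB
    print(labeled / TiB)                  # ~1.09 TiB, what 7 of the 8 drives showed
    odd_drive = 1.12 * TiB                # the outlier's reported size
    print((odd_drive - labeled) / 10**9)  # ~31 GB of unexplained extra capacity
    ```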


  • Typically a fiber ISP will run fiber optics only to your demarcation point (DEMARC). This is usually where your main cable line (before any splits) or DSL line used to come in (in the US they've been using orange conduit to mark it, and it will usually run to a panel in a closet or laundry room). At the DEMARC they'll install one of two things: a basic fiber-to-ethernet converter, which gives you a single ethernet port and a pure tap to the internet, or a gateway device that converts the fiber to multiple ethernet ports with NAT (usually providing other capabilities like TV, phone, etc.).

    If you have the latter, you may not get much say in what you can do with your connection and may be limited to a DMZ mode configured on the gateway. What you put behind the converter or gateway is up to you.
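
    If you're not sure which setup you ended up with, one quick check is whether the WAN address on the gateway's status page is a real public IP or a private/CGNAT one. A rough sketch (the address below is just an example):

    ```python
    import ipaddress

    # Ranges that mean your "WAN" side is still behind someone else's NAT
    NAT_RANGES = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
        ipaddress.ip_network("100.64.0.0/10"),  # carrier-grade NAT (RFC 6598)
    ]

    def behind_nat(wan_ip: str) -> bool:
        addr = ipaddress.ip_address(wan_ip)
        return any(addr in net for net in NAT_RANGES)

    # WAN IP copied off the gateway's status page
    print(behind_nat("100.72.13.5"))  # True -> you don't have a real public IP
    ```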


  • Make or find yourself a cart to drag around (g or G to drag it). If it doesn't have wheels it'll be quite loud, and sound = attraction = death in most cases.

    Don’t bother with cars for a long while, even one that actually runs. They take a lot to maintain and make a lot of noise (see above). You’re better off starting with a bike for midrange transportation (or, if using mods, a foldable bike).

    When you start building or find a nice base area, make a crafting nook and drop all your items next to it. When crafting, you can pull ingredients from tiles 1-2 squares away.


  • You seem to be misinformed about how the internet works. Nothing is “free”: ISPs have to buy equipment, pay for expensive physical connectivity (without disturbing existing infrastructure), and deal with constant, ever-increasing bandwidth requirements.

    I’m all for a bit of net neutrality, but ISPs tend to get a lot of flak for policies like this, often for no good reason. For example, say ISP A and Upstream B have a mutual bandwidth-sharing arrangement (called peering) where both sides benefit equally from the connectivity, and ISP A determines that N is using all the bandwidth to Upstream B. ISP A has three options:

    - N gets all the bandwidth to Upstream B, disturbing other traffic to/from that network;
    - N gets throttled so that all traffic shares the link equally; or
    - ISP A and Upstream B expand their network again (new equipment, new physical links), which costs a lot of money.

    N doesn’t even pay ISP A or Upstream B; they just pay their own ISP C. In the end, ISP A has to throttle N, and N is the one who had to expand and change their business model to deliver content to their customers: they went out and bought transit from many upstream providers to even out the load, and designed caching boxes to install inside each ISP’s datacenter so their traffic could reach end users without going upstream.
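
    To make the “throttle N so everyone gets an equal share” option concrete, here’s a toy max-min fairness calculation (made-up numbers, nothing like real peering math):

    ```python
    # Toy max-min fair allocation on a saturated peering link.
    def fair_shares(link_capacity: float, demands: dict[str, float]) -> dict[str, float]:
        remaining = link_capacity
        alloc = {}
        # Satisfy the smallest demands first; split what's left evenly.
        for name, want in sorted(demands.items(), key=lambda kv: kv[1]):
            left = len(demands) - len(alloc)
            share = remaining / left
            alloc[name] = min(want, share)
            remaining -= alloc[name]
        return alloc

    # 100 Gbps link: N wants nearly all of it, everyone else wants a little.
    print(fair_shares(100, {"N": 90, "other1": 10, "other2": 10}))
    # -> {'other1': 10, 'other2': 10, 'N': 80}: N gets throttled to the leftover
    ```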



  • For the disks, you may have a small issue putting multiple disk models in a single RAID10, as those disks might have slightly different physical attributes. ZFS is an option here: you can create two mirror vdevs, one per drive type, and add them to the same zpool, which effectively creates the RAID10 you’re looking for. You would typically not use LVM on top of ZFS, but if you go with a traditional RAID10 instead, LVM would let you create logical volumes that can be expanded easily later.

    Another ZFS option is RAIDZ1 with the 4 disks in a single vdev. The vdev gives up one disk’s worth of space for parity, so you get 12 TB usable out of your 16 TB raw storage, and you can lose any one drive with no data loss.
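
    Back-of-the-envelope numbers for the two layouts with your 4 × 4 TB disks:

    ```python
    disks, size_tb = 4, 4.0

    # Striped mirrors ("RAID10"): two 2-way mirror vdevs in one pool
    raid10_usable = (disks // 2) * size_tb   # 8.0 TB; survives 1 failure per mirror

    # RAIDZ1: one 4-disk vdev, one disk's worth of parity
    raidz1_usable = (disks - 1) * size_tb    # 12.0 TB; survives any 1 failure

    print(raid10_usable, raidz1_usable)
    ```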


  • Since we don’t know what server or VM tech you’re using, the advice will be pretty generic. For self-hosting you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you’ll want iSCSI-optimized NICs and jumbo frames turned on (an MTU of 9000 is the standard here). This requires a switch that supports jumbo frames as well.
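
    If you want to sanity-check that jumbo frames actually took effect on the NIC, something like this works on Linux (the interface name is just an example):

    ```python
    import fcntl, socket, struct

    SIOCGIFMTU = 0x8921  # Linux ioctl: read an interface's MTU

    def get_mtu(ifname: str) -> int:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            ifr = struct.pack("16si", ifname.encode("ascii"), 0)
            res = fcntl.ioctl(s.fileno(), SIOCGIFMTU, ifr)
        return struct.unpack("16si", res)[1]

    print(get_mtu("eth0"))  # expect 9000 on the iSCSI-facing NIC
    ```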

    For Windows, I find the iSCSI support to be lacking. Every time I’ve used it I’ve had sporadic loss of connectivity, failures to mount on boot, and other issues. I would avoid it.

    For ESXi you can map an iSCSI LUN as a datastore and create VMDKs on top. This functions the same as actual FC LUNs or NFS mounts, and I’ve had no reliability issues. There’s also RDM (Raw Device Mapping), which mounts the iSCSI LUN directly as a disk of the VM. If you’re using vSphere I would advise against that, as you lose the ability to vMotion or use DRS.