this post was submitted on 21 Sep 2023
25 points (96.3% liked)

Selfhosted


I am trying to figure out the optimal way to connect an 8 bay drive enclosure to a Dell Optiplex 7040 Micro. The end goal is to have the drives made available to a Proxmox cluster and kubernetes cluster. This is all for learning experience as well as to run services for personal use.

The cluster will be made up of 2x Optiplex 7040 and 2x Optiplex 3040. All have i7-6700T CPUs; the 3040s have 16GB DDR3 and a 1TB SATA SSD each, and the 7040s each have 32GB DDR4 and a 2TB NVMe drive, with an additional empty SATA port on the motherboard. The enclosure is a MediaSonic ProBox with USB 3.0 and eSATA interfaces available.

I have heard that you shouldn't use USB to connect to storage, so I have been trying to figure out a way to use eSATA, even though the Optiplex does not have an eSATA port. I found some SATA-to-eSATA cables on eBay; would one of those let me connect the enclosure directly to the free SATA port on the Optiplex?
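One way to answer the "will each drive show up individually" question empirically: plug the enclosure in and inspect what the kernel sees. This is a sketch of checks you could run on the Proxmox host; whether a port multiplier works at all depends on the motherboard's SATA controller (many onboard AHCI controllers don't support it), and the exact dmesg wording varies by kernel.

```shell
# List block devices with their transport type; with a working port
# multiplier (or the USB enclosure in individual-drive mode) you should
# see one /dev/sdX entry per bay, not one big combined device:
lsblk -o NAME,SIZE,MODEL,TRAN

# Check the kernel's view of the SATA link; "PMP" in AHCI-related dmesg
# lines indicates port-multiplier support was detected on the controller:
dmesg | grep -i -e ahci -e pmp
```

If `lsblk` shows only a single large device, the enclosure is presenting the drives behind its own controller and Proxmox can't manage them individually.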

Would this setup work? Is it worth it to sacrifice the additional SATA port on one of the 7040s in order to avoid using USB? I would like to maximize stability and speed.

I have not yet decided how I want to configure the drives but was planning to look into either a ZFS pool or ceph. All drives in the enclosure will be for media storage (movies/tv/music, was planning to keep pictures and documents elsewhere) and passed to LXCs and a kubernetes cluster I plan to run on Proxmox.
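If the drives do end up visible individually, the ZFS option described above could look something like the sketch below. This is a minimal example, not a recommendation of a specific layout: the device names (`/dev/sd[b-i]`), pool/dataset names (`tank`, `tank/media`), container ID (101), and mount paths are all placeholders.

```shell
# raidz2 tolerates two simultaneous drive failures; a common choice for
# an 8-disk media pool (raidz1 or mirrors are alternatives):
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde \
                         /dev/sdf /dev/sdg /dev/sdh /dev/sdi
zfs create tank/media

# Bind-mount the dataset into an LXC container so the media apps can
# see it (container ID and mount-point index are placeholders):
pct set 101 -mp0 /tank/media,mp=/mnt/media
```

For the Kubernetes side you would still need something (NFS, or a CSI driver) to expose the dataset to pods, since the pool lives on one Proxmox node.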

Any guidance on the connection setup, storage configuration, or my plans in general would be appreciated. Thanks in advance!

[–] [email protected] 8 points 1 year ago (3 children)

8 drives over USB 3.0 won't perform well, though a single SATA connection isn't without issues either: either way, that one cable becomes the I/O bottleneck for the whole enclosure. I'd go for the eSATA converter option, but neither is ideal. A big question is how the drives will be seen by Proxmox. If they show up as one big drive, you're out of luck for safe storage options; in that case I wouldn't store anything you don't want to lose on there, and would just make a ZFS storage pool of it in Proxmox.
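To put a rough number on the bottleneck: a 5 Gbit/s USB 3.0 link yields on the order of 500 MB/s usable after encoding and protocol overhead (a common rule of thumb, not a measured figure), and that budget is shared across every drive in the enclosure.

```shell
# Back-of-envelope: usable USB 3.0 bandwidth divided across 8 drives.
# The 500 MB/s figure is an assumption, not a measurement.
usable_mb_s=500
drives=8
per_drive=$((usable_mb_s / drives))
echo "~${per_drive} MB/s per drive when all ${drives} drives are busy"
```

That's well below what a single modern spinning disk can stream sequentially, which is why scrubs, rebuilds, and Ceph rebalancing all suffer on a shared link.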

As for future plans, I would look at Ceph down the line, because it does really well with cheap hardware; there's no need for a competent RAID controller, etc.

[–] [email protected] 2 points 1 year ago (2 children)

Yeah, I realize this isn't a great way to go about storage, but I already have the enclosure, so I might as well use it for now. At some point down the line I'll build something that works better.

If I connect it using USB, I am able to see each drive individually in Proxmox. I am unsure whether the same will be true over eSATA. The manual says the eSATA interface card needs to support Port Multiplier, which I fear means the eSATA-to-SATA option may not work, but I was hoping someone here might know more about that.

If I have to go the USB route and I am able to use each drive individually, would you recommend going with a ZFS pool or ceph?

[–] [email protected] 2 points 1 year ago (1 children)

I'd go with ZFS, because rebalancing between the disks in a Ceph pool will consume I/O, and if a disk fails, the rebuild over that shared link could take days if not weeks. If you want to try Ceph, don't include all 8 drives initially; see how the performance is with the minimum of 3 drives.
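A sketch of what a minimal 3-drive Ceph trial could look like using Proxmox's built-in `pveceph` tooling. Network and device names are placeholders; also note that Ceph's default CRUSH rule replicates across *hosts*, so if all three OSDs sit in one enclosure on one node, you'd have to relax that failure domain (which further weakens the safety story).

```shell
# On each participating node:
pveceph install

# Once, on one node (cluster network is an example value):
pveceph init --network 192.168.1.0/24

# A monitor on each of three nodes for quorum:
pveceph mon create

# One OSD per drive; start with three drives to gauge performance:
pveceph osd create /dev/sdb
```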

[–] [email protected] 1 points 1 year ago

Thanks, I may hold off on Ceph for now in that case.