What are you talking about? Containers make it way easier to set up and operate services, especially multi-component services like Immich. I just tried Immich and it took me several minutes to get it running. If I wanted to give it permanent storage, I'd have to spend several more making a directory, adding a line to a file, and restarting it. I've been setting up services both before Linux containers became a thing and after. I'd never go back to the pre-container days if I had the choice.
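To give a sense of how little work that is, here's a minimal sketch assuming a stock Immich docker-compose.yml. The host path is made up, and the container path is what recent Immich images use as far as I know, so verify against your own compose file:

```sh
# Hypothetical host directory for the photo library.
mkdir -p /srv/immich/library

# In docker-compose.yml, under the immich-server service, add a bind mount:
#   volumes:
#     - /srv/immich/library:/usr/src/app/upload

# Recreate the container so it picks up the new mount.
docker compose up -d
```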
Status page now:

> ✗ Signal is experiencing technical difficulties. We are working hard to restore service as quickly as possible.
Textbook case of late-stage capitalism, and a resounding success for Boeing's major shareholders.
You don't migrate the data from the existing z1. It keeps running and stays in use. You add another z1 or z2 vdev to the pool.
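A minimal sketch of what that looks like, with a made-up pool name and device paths:

```sh
# Add a second 3-disk raidz1 vdev to the existing pool.
# Existing data stays where it is; new writes spread across both vdevs.
zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf
```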
If the vdevs are not all the same redundancy level, am I right that there's no guarantee which level of redundancy any particular file is getting?
This is a real caveat. You don't know which file ends up on which vdev. If you only use mirror vdevs, you can remove vdevs you no longer want to use and ZFS will transfer their data to the remaining vdevs, assuming there's space. As far as I know you can't remove vdevs from pools that contain RAIDz vdevs; you can only add vdevs. So if you want guaranteed 2-drive failure tolerance for every file, then yes, you'd have to create a new pool with RAIDz2 and move the data to it. After that you could add your existing drives to it as another RAIDz2 vdev.
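For the mirror-only case, removal is a single command. The pool and vdev names here are placeholders; get the real ones from zpool status:

```sh
# Evacuate the vdev's data onto the remaining vdevs, then detach it.
# Only works on pools made of mirrors and/or single disks.
zpool remove tank mirror-1
```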
Removing RAIDz vdevs might become possible in the future. There's already a feature that allows expanding existing RAIDz vdevs, but it's fairly new, so I'm personally not considering it in my expansion plans.
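If you want to play with it anyway, the expansion itself is a single attach per disk. A hedged sketch: the feature landed in OpenZFS 2.3 as far as I know, and all names here are placeholders:

```sh
# Widen the existing raidz1 vdev by one disk.
# The vdev name (raidz1-0) comes from zpool status.
zpool attach tank raidz1-0 /dev/sdg
```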
What you lose in space, you gain in redundancy. As long as you're not after the absolute least redundant setup, it's not a bad tradeoff. Typically, running a large striped array with a single parity disk isn't a great idea anyway. And if you're running mirrors anyway, you don't lose any additional space to redundancy.
Adding new disks to an existing ZFS pool is as easy as deciding what redundancy scheme you want for them, then adding them to the pool with that scheme. E.g. you have an existing pool with a RAIDz1 vdev of three 4TB disks. You find some cheap recertified disks and want to expand with more redundancy to mitigate the risk. You buy four 16TB disks, create a RAIDz2 vdev, and add it to the existing pool. The pool grows by whatever space the new vdev provides.

Critically, pools are JBODs of vdevs. You can add any number and type of vdevs to a pool, and redundancy is handled at the vdev level, so a pool can hold a mix of any RAIDzN and/or mirrors. You don't create a new pool and transition to it; you add another vdev with whatever redundancy topology you want to the existing pool and keep writing data to it. You don't even have to take the pool offline. If you add a second RAIDz1 to an existing RAIDz1, you get similar redundancy to moving from RAIDz1 to RAIDz2.
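In commands, that scenario looks roughly like this; the pool name and device paths are made up:

```sh
# Existing pool: one raidz1 vdev of three 4TB disks.
zpool status tank

# Add the four 16TB disks as a second, raidz2 vdev.
# -f is needed because the new vdev's redundancy level
# differs from the existing raidz1.
zpool add -f tank raidz2 /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Capacity grows immediately; no offlining, no data migration.
zpool list tank
```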
Finally, if you have some even stranger hardware lying around, you can combine it into appropriately sized volumes via LVM and give those to ZFS, as someone already suggested. I used to have a mirror with one real 8TB disk and one 8TB LVM volume made of a 1TB, a 3TB, and a 4TB disk. Worked like a charm.
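A sketch of that frankendisk, with placeholder device names:

```sh
# Concatenate the three small disks into one ~8TB logical volume.
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate franken /dev/sdb /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n disk8tb franken

# Mirror the real 8TB disk against the LVM volume.
zpool create tank mirror /dev/sda /dev/franken/disk8tb
```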
They better increase quickly. Apple's bank account is large.
And OpenStack is a mature open source project: tried, true, used in all sorts of data centers large and small, and supported by multiple vendors. I'd take it any day over a project by 10 VC-funded guys. I mean, good for them for skinning that cat, and if it gains a real community, I might bite.
Don't we already have OpenStack? Something, something, OpenStack is complicated, something, something. Sounds a bit like a raison d'être.
A wiki sounds like the right thing, since you want to be able to see the current and previous versions of things. It's a bit easier to edit than straight Markdown in git, which is the other option I'd consider.

Ticketing systems like OpenProject are more useful for tracking many different pieces of work simultaneously, including future work. The process of changing your current networking setup from A to B would be tracked in OpenProject: new equipment to buy, cabling to do, software to install, describing it in your wiki, and the progress on each of those. Your wiki would be in state A before you begin the ticket. Once you finish it, your wiki will be in state B. While in progress, the wiki would be somewhere between A and B.

You could of course use just the wiki, but it's nice to have a place to track all the other things, including leaving comments that provide context so you can resume at a later point in time. At several workplaces, the standard setup that always gets entrenched is a ticketing system, a wiki, and version control. Version control is only needed for tasks that involve code, so the absolute core is the other two. If I had to reduce it to a single tool, I'd choose a wiki, since I could use separate wiki pages to track my progress as I go from A to B.
Dave Calhoun, is this you?
Get out of the anti-container mindset. Getting started with Docker takes half an hour. You need to learn three or four commands to use other people's services, something like the ones below. Everything is easier than RPMs after that.
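Roughly these, assuming the service ships a docker-compose.yml, which most self-hosted projects do:

```sh
docker compose pull     # fetch the images
docker compose up -d    # start the service in the background
docker compose logs -f  # watch what it's doing
docker compose down     # stop and remove the containers
```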