Multiple Drive Configuration

So up until now I've only ever played with a server on an old machine with a single hard drive, which was very simple and straightforward. I've never worked with actual proper server hardware.

I now have a Dell R520 (2x Intel E5-2430 6 Core 2.5GHz, 48GB RAM) that I want to use as a dedicated server.

I'd like to install CentOS with Webmin/Virtualmin. I'm starting with 15 or so sites, mostly for friends, family, and personal projects, with low to moderate traffic.

My issue is disk configuration and management. I'm trying to wrap my head around how that will work with multiple drives available. Here is what I have to work with:

1x 500GB SSD added in the DVD bay (thinking of installing the OS here?)
1x 2TB HDD (came with the server)
1x 10TB HDD (had it sitting around unused, so I threw it in)
6x 8TB HDDs
PERC H310 Mini RAID controller

I know it's a ton of storage, but I've got it, so I may as well play with it.

I've been searching for tutorials and how-tos, but most of what I'm finding relates to building a NAS or virtualization servers.

I've come across LVM, which seems promising from an expansion standpoint, but I'm not seeing much about redundancy (unless I'm missing something there). Then there is RAID, both hardware and software, which can address redundancy. I don't know whether ZFS fits into this application.

There is a brief section about partitioning here: https://www.virtualmin.com/documentation/installation/automated. Am I right that I should be separating /home and /var onto their own partitions?

Any best practices recommendations for the drive configuration in this use scenario?

You're correct that LVM doesn't provide redundancy on its own. You'd need an LVM pool built on top of RAID-redundant disks: set up the RAID first, then LVM on top of it. Or use ZFS, which handles pooling and redundancy all in one. Much simpler.
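
A minimal sketch of that stacking order, if it helps. Device names /dev/sdb and /dev/sdc are placeholders; adjust for your actual disks:

```
# Mirror two disks with mdadm, then layer LVM on top of the array.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md0                    # make the array an LVM physical volume
vgcreate vg_data /dev/md0            # volume group on the mirrored array
lvcreate -L 100G -n lv_home vg_data  # carve out a logical volume
mkfs.xfs /dev/vg_data/lv_home        # format and mount as usual
```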

This is just me…
I would use two of the 8TB disks in RAID 1 and install the OS there. Maybe partition out 500GB for the OS and leave 7.5TB to use for something else; all 8TB would be a waste for the OS alone.
Then put 4x 8TB drives in a ZFS raidz of your choice (raidz1 is roughly analogous to RAID 5). Then attach the SSD as a read cache for the ZFS pool, and finally move /home to ZFS.
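
Roughly, the ZFS side of that would look like this. A sketch only, assuming the four drives show up as sdc through sdf and the spare SSD partition is sdb2; adjust device names for your system, and /dev/disk/by-id/ paths are safer in practice:

```
# Create a raidz pool from four drives, add the SSD partition as read cache.
zpool create tank raidz sdc sdd sde sdf
zpool add tank cache sdb2    # L2ARC read cache on the leftover SSD space
zfs create tank/home         # filesystem that /home will eventually live on
```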

The 10TB and 2TB drives are oddballs, but they can be added to ZFS as well, since the vdevs in a pool don't all have to be the same size.

If OS redundancy isn't important, install the OS to a 200GB partition on the SSD, throw all the rotating disks into ZFS, and mount /home (or any other mount point) from ZFS. Then use the rest of the SSD as a ZFS read cache. Back up /root, /etc, and so on to the ZFS pool somewhere, so that if the SSD craps out you can at least get your settings back. For a home setup, either approach would be fairly fail-safe.
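
For the settings backup, something as simple as a dated tarball on the pool would do. Example paths only; the tank/backups dataset is just a name I picked:

```
# Create a dataset for backups and stash a dated copy of /etc and /root there.
zfs create tank/backups
tar -czf /tank/backups/etc-$(date +%F).tar.gz /etc /root
```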

In my servers, I always RAID 1 the OS across two disks; speed isn't really important there, and it doesn't need a lot of space. Then I put the rotating rust in a ZFS raidz with an SSD or NVMe cache. Lots of space with nearly SSD-ish speeds once the data has been cached. If you only have a few users, a 4- or 5-disk ZFS raidz will more than keep up without the cache. But why waste an SSD. lol.

There used to be a time when separating /home, /var, /var/log, and so on was the norm, mainly for speed and redundancy. But disks and CPUs are so fast now that there's really no need anymore, except for partitions where you want lots of space, like /home. So to me it makes sense to put the OS and / on the smallest disks; /home can then be put elsewhere and resized to almost anything afterwards.


Thanks, that all makes a lot of sense.

I'm thinking the OS on the SSD will be best for me, then ZFS for the HDDs, with the ZFS cache on the remaining partition of the SSD. I opted to ignore the oddball drives for now; I have plenty of space, and there's no sense making it more complex.

So I have moved forward: I installed CentOS 7 on the SSD, created a raidz2 pool from the 6x 8TB drives, and installed Virtualmin.

So now I suppose I need to move /home to the ZFS pool, but I'm also wondering about /var, since that is where the database files are stored. I'm working on a rather large database project that will potentially host several million records; the CSV files I'm importing total over 1GB.

At this stage I'm not sure about the procedure for moving the directories to the ZFS pool. Is there a way to do this in Webmin/Virtualmin, or is there a command-line procedure?

Moving /home is not done in Virtualmin; you'll need the command line. I know nothing about CentOS, so this will be in generic Linux terms as far as the process goes.
Assume you have a ZFS pool of disks named tank.
In the pool, create a filesystem for home. Call it whatever you want; tank/home would be fine, and its mount point will be /tank/home by default. Then copy the current files in /home into /tank/home/. You'll have the original /home still mounted and everything in it copied to /tank/home, so you now have two copies of /home.
Then, as root on the command line, rename the original /home folder to /home-orig.
Create a new /home folder, which will be empty.
Now it's a matter of changing the mount point of tank/home so it mounts at /home (which is now empty) instead of /tank/home. There's a zfs command to do that: zfs set mountpoint=/home tank/home
The new /home should be active immediately, and you should be able to ls and see all the files there, but reboot to be sure the boot process picks up the new /home.
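
Putting the whole procedure together, it looks something like this (a sketch, assuming the pool really is named tank):

```
zfs create tank/home                 # mounts at /tank/home by default
rsync -aHAX /home/ /tank/home/       # copy files, preserving perms and ACLs
mv /home /home-orig                  # keep the original as a fallback
mkdir /home                          # new, empty mount point
zfs set mountpoint=/home tank/home   # tank/home now mounts at /home
ls /home                             # verify, then reboot to be sure
```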

I would not move /var just for the sake of the DB. Just move MySQL's default data location to a new filesystem on tank, like /tank/mysql; that can be changed in Webmin's MySQL module. NOTE on databases on ZFS: you need to make the filesystem synchronous, so for the /tank/mysql filesystem, set the sync property to 'always'. Otherwise you might get corruption, since the default is to cache writes, which doesn't play nicely with databases.
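
In ZFS terms that would be something like the following; the dataset name tank/mysql is just an example, and the recordsize line is optional tuning that is commonly suggested for InnoDB rather than anything required:

```
zfs create tank/mysql
zfs set sync=always tank/mysql    # force synchronous writes for DB safety
# Optional: match InnoDB's 16K page size. Set this before loading data,
# since it only affects newly written files.
zfs set recordsize=16k tank/mysql
```

Then point MySQL's data directory at /tank/mysql in Webmin's MySQL module.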

Once you're comfortable and it all works, you can delete /home-orig. If it doesn't work, delete the ZFS filesystems, rename /home-orig back to /home, and you're back where you started. Then try again.

