Henrick - Yet another ZFS question

First of all, thank you for a great show. I'm a long-time listener, but a first-time writer. It's actually because of BSD Now that I started playing around with OpenBSD in the first place, after 20 years or so in the Linux world, and now almost all of the infrastructure I manage runs on BSD.
The only thing I have a hard time replacing is my day-to-day Lenovo T431s running Debian...
I know there have already been a lot of questions about ZFS, and here comes yet another one. I don't really know if this is a question or just a need for confirmation that what I've planned is the right way to do things. But here it comes anyway...

I have a small home server running Debian with ZFS on Linux (ZoL), and a number of KVM guests running OpenBSD plus a few Debian servers doing various tasks.
The pools I have today are a small SSD for rpool and one built from four old 1TB disks in RAIDZ1 that I just had lying around when I first installed ZFS on the server and didn't know anything, with a Samsung 512GB SSD for L2ARC and ZIL. At first I just used it for playing around and learning ZFS, since I had used LVM for everything for the past 10+ years, but now I find myself in love with ZFS and would never (with today's technology) change back.

Since my confidence in my own ZFS skills (a big thanks to Michael W. Lucas, his books, and the internet) and in ZFS in general has only grown, the server now has nothing left on the old LVM partitions and all data is on ZFS.
I plan to move away from the RAIDZ1 to a mirror of two 12TB disks, but what is the best way to do this, preferably with no downtime on the KVM guests, or at least very little?

I was thinking:

1. Create a new pool as a mirror of the two 12TB disks (and remember to set dedup and compression from the beginning this time)
2. Take a snapshot of the old pool
3. Send and receive the snapshot to the new pool - can I just take a snapshot of "container", or do I have to snapshot each child dataset and send/receive them separately?
4. Test that I have all the data and that at least one of the KVM guests can start
5. Shut down all KVM guests, create a new snapshot, and repeat step 3
6. Rename the old pool to something else, rename the new pool to the name the old one had, and restore the mount points
7. Add a new small SSD as ZIL (SLOG) and the big old SSD as L2ARC to the new pool
8. Start all KVM guests again
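The plan above can be sketched as commands. This is a dry-run sketch, not a tested procedure: "tank2" and every /dev/disk/by-id path are hypothetical placeholders, and the run wrapper only prints each command instead of executing it.

```shell
#!/bin/sh
# Dry-run sketch of the migration plan; the wrapper prints each command
# instead of executing it. Remove it once the device names are real.
run() { echo "+ $*"; }

# 1. New pool as a mirror of the two 12TB disks, with the properties set
#    from the start. lz4 compression is cheap; dedup needs a lot of RAM,
#    so it is worth double-checking that trade-off before enabling it.
run zpool create -o ashift=12 \
    -O compression=lz4 -O dedup=on \
    tank2 mirror /dev/disk/by-id/ata-NEW_12TB_A /dev/disk/by-id/ata-NEW_12TB_B

# 2./3. One recursive snapshot (-r) of the top-level dataset covers every
#    child dataset, and a replicated send (-R) carries all children, their
#    properties and snapshots in a single stream - no per-dataset sends.
run zfs snapshot -r container@migrate1
run zfs send -R container@migrate1 '|' zfs recv -F tank2

# 5. After shutting down the guests: a second recursive snapshot and an
#    incremental send (-i) that transfers only what changed since step 3.
run zfs snapshot -r container@migrate2
run zfs send -R -i container@migrate1 container@migrate2 '|' zfs recv -F tank2

# 6. There is no "zpool rename"; the rename is done by exporting each pool
#    and importing it under the new name.
run zpool export container
run zpool import container container-old
run zpool export tank2
run zpool import tank2 container

# 7. Log (SLOG) and cache (L2ARC) devices are added to the new pool last.
run zpool add container log /dev/disk/by-id/ata-NEW_SMALL_SSD
run zpool add container cache /dev/disk/by-id/ata-OLD_SAMSUNG_SSD-part2
```

Since `zfs send -R` replicates properties, the mount points should come across with the stream; anything set locally on the old pool can be checked afterwards with `zfs get -s local all`.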

Is there something I have missed, or is there a better way to do this?

In the future I can always add more space by adding yet another mirror of 12TB drives, because let's face it, space gets used faster and faster nowadays :-P
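Growing the pool later would be a single command; the device names here are placeholders, and the wrapper again just prints the command as a dry run.

```shell
# Dry-run sketch: growing the pool by adding a second mirror vdev.
run() { echo "+ $*"; }

# zpool add attaches a whole new top-level vdev; ZFS then stripes new
# writes across both mirrors, so capacity and throughput both grow.
# Note that a vdev cannot easily be removed again, so worth triple-checking
# the device names before running this for real.
run zpool add container mirror \
    /dev/disk/by-id/ata-NEW_12TB_C /dev/disk/by-id/ata-NEW_12TB_D
```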

The next project is a FreeBSD box with bhyve to run the guests, and a couple of storage servers running GlusterFS on top of ZFS for better redundancy and speed (already tested on my KVM setup), but that might be a mail for another time and a project for another day.

Pool information:

# zfs list
NAME                          MOUNTPOINT
container                     /container
container/backup              /media/backup
container/home                /media/home
container/vm                  /VM
container/vm/ISO              /VM/ISO
container/vm/guests           /var/lib/libvirt/images
container/vm/guests/media     /var/lib/libvirt/images/media
rpool                         /rpool
rpool/ROOT                    none
rpool/ROOT/debian-1           /
rpool/ROOT/debian-1/var       /var
rpool/ROOT/debian-1/var/log   /var/log
rpool/ROOT/debian-1/var/tmp   /var/tmp
rpool/home                    /home
rpool/swap                    -
rpool/tmp                     legacy

# zpool list -v
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
container  3,62T   537G  3,10T         -    12%    14%  1.01x  ONLINE  -
  raidz1  3,62T   537G  3,10T         -    12%    14%
    scsi-1ATA_ST1000DM003-1CH162_Z1D84GCZ      -      -      -         -      -      -
    scsi-1ATA_Hitachi_HDT721010SLA360_STF607MS0RL0PK      -      -      -         -      -      -
    scsi-1ATA_WDC_WD1003FBYX-01Y7B1_WD-WCAW36894273      -      -      -         -      -      -
    scsi-1ATA_ST1000DM003-1CH162_Z1D83WLS      -      -      -         -      -      -
log      -      -      -         -      -      -
  ata-Samsung_SSD_840_PRO_Series_S1AXNSADB24515R-part1   142G  6,14M   142G         -     1%     0%
cache      -      -      -         -      -      -
  ata-Samsung_SSD_840_PRO_Series_S1AXNSADB24515R-part2   250G  20,4G   230G         -     0%     8%
rpool  63,5G  6,27G  57,2G       20G    58%     9%  1.00x  ONLINE  -
  ata-Samsung_SSD_840_PRO_Series_S1AXNSADB24515R              63,5G  6,27G  57,2G       20G    58%     9%
URL: http://dpaste.com/21D1KWA