Marshall - ZFS question

Thank you SO much for the ZFS talk! 4 years, 3 dead disks, 3 different operating systems, same pool, ZERO data loss. I finally created a new pool and would like confirmation (or an explanation) of the way the output appears.
Original: 2x1TB mirrored VDEV (total 1TB)
Goal: 2x1TB striped, mirrored to 2TB single disk (total 2TB)
Current OS: Kubuntu 18.04
Disks:
2TB Toshiba
1TB WD
1TB Seagate
I turned on autoexpand, started with the two 1TB disks in a mirror VDEV, attached the 2TB as a third mirror device, and allowed it to finish resilvering. I then detached both 1TB disks, leaving a single disk (no redundancy made me nervous, but it gets worse). Next I created a new striped pool with the two 1TB disks, created snapshots of each dataset on the 2TB, and used zfs send piped to zfs receive to copy the datasets from the 2TB to the 2x1TB pool. Finally I destroyed the 2TB pool and attached the 2TB disk as a mirror to the 2x1TB pool.
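
For clarity, here is roughly that sequence as commands ("oldpool" is a placeholder for whatever the original mirror pool was called, the disk names are shortened, and I am assuming the old dataset names matched the new ones; the exact create/receive/attach commands for the new pool are in the zpool history below):

zpool set autoexpand=on oldpool                # let the pool grow to 2TB later
zpool attach oldpool 1TB-seagate 2TB-toshiba   # now a 3-way mirror; wait for resilver
zpool detach oldpool 1TB-seagate               # drop both 1TB disks...
zpool detach oldpool 1TB-wd                    # ...leaving the lone 2TB, no redundancy
zpool create -m none -o ashift=12 storage 1TB-seagate 1TB-wd  # new 2x1TB striped pool
zfs snapshot -r oldpool@copy26052018           # snapshot every dataset on the 2TB
zfs send oldpool/Personal@copy26052018 | zfs receive -o compression=lz4 storage/Personal
zfs send oldpool/storage@copy26052018 | zfs receive -o compression=lz4 storage/storage
zpool destroy oldpool                          # frees the 2TB disk
zpool attach storage 1TB-seagate 2TB-toshiba   # attach the 2TB back as a mirror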
The output of zpool status looks odd to me now. The data and filesystem sizes look fine, so I THINK I did everything correctly. I tried adding the 2x1TB as a striped VDEV to mirror the single 2TB disk, but could not find a way to do so, hence all the copying. Is that possible? It would have saved some time; if not, maybe it's a feature worth adding?
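
As far as I can tell from the zpool man page, attach only ever accepts a single new device, so there is no way to name a two-disk stripe as the thing being attached:

# The only attach form I could find; <new_device> must be one disk:
#   zpool attach [-f] [-o property=value] <pool> <device> <new_device>
# e.g. (placeholder names as above) this mirrors one disk against one disk:
zpool attach oldpool 2TB-toshiba 1TB-seagate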
All of my data is backed up remotely, but it would be a pain to re-download. On the plus side, fragmentation was at 12% (now gone, of course), and I moved from lzjb to lz4, which saved a bit more space. Also, the resilvering time was cut in half from the first pass to the second (mirror->2TB vs. stripe->2TB).
Also, can I delete the existing snapshots now? They are the ones I originally copied with zfs send/receive.
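
If they are safe to drop (assuming nothing else, like a replication job, still depends on them), this is what I would run, dry-run first:

zfs destroy -nv storage/Personal@copy26052018   # -n = dry run, -v = print what would go
zfs destroy -nv storage/storage@copy26052018
# ...then the same two commands without -n, if the output looks right.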
Thanks!

root@ubuwks:~# zpool history storage
History for 'storage':
2018-05-26.19:27:22 zpool create -m none -o ashift=12 storage ata-ST31000524AS_6VPHV2BA ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J0140674
2018-05-26.19:30:43 zfs receive -o compression=lz4 storage/Personal
2018-05-26.20:56:59 zfs receive -o compression=lz4 storage/storage
2018-05-26.21:00:41 zfs set mountpoint=/storage/Personal storage/Personal
2018-05-26.21:04:37 zfs set mountpoint=/storage storage/storage
2018-05-26.21:10:11 zpool attach storage ata-ST31000524AS_6VPHV2BA /dev/disk/by-id/ata-TOSHIBA_DT01ACA200_Z38XUL1GS

Note here that the 2TB disk and one of the 1TB disks are listed under mirror-0, while the second 1TB disk is listed at the same level as mirror-0 itself. Is this right?
root@ubuwks:~# zpool status
  pool: storage
 state: ONLINE
  scan: resilvered 278G in 0h56m with 0 errors on Sat May 26 22:06:11 2018
config:

        NAME                                        STATE     READ WRITE CKSUM
        storage                                     ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            ata-ST31000524AS_6VPHV2BA               ONLINE       0     0     0
            ata-TOSHIBA_DT01ACA200_Z38XUL1GS        ONLINE       0     0     0
          ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J0140674  ONLINE       0     0     0

errors: No known data errors

root@ubuwks:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
storage            576G  1.19T    96K  none
storage/Personal  2.53G  1.19T  2.53G  /storage/Personal
storage/storage    573G  1.19T   573G  /storage

root@ubuwks:~# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
storage  1.81T   576G  1.25T         -     0%    31%  1.00x  ONLINE  -

root@ubuwks:~# zfs list -t snapshot
NAME                            USED  AVAIL  REFER  MOUNTPOINT
storage/Personal@copy26052018   144K      -  2.53G  -
storage/storage@copy26052018   43.9M      -   573G  -