Bostjan - ZFS Record Size and my mistakes

Hi,

Some time ago I wrote in asking about the benefits of the ZFS record size. As I understood it, it is best to use 1 MiB blocks for large files in a dataset; among other things, this should reduce the total size of the dataset by about 10%.

I used the "zfs get" and "zfs set" commands to check and set the recordsize. After setting it I checked the record size again, and it had changed from 128K to 1M.
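For reference, the commands I used looked roughly like this (tank/datasetA is a placeholder for the real pool/dataset path):

    # check the current record size
    zfs get recordsize tank/datasetA

    # raise it to 1M; this only affects blocks written from now on
    zfs set recordsize=1M tank/datasetA

    # verify the change
    zfs get recordsize tank/datasetA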

Here are my mistakes.
Let’s call this dataset “datasetA”. I wanted to quickly get the dataset’s size in the pool, so I checked it in the FreeNAS GUI under the Storage tab; it said 1 TiB used. Then I moved (not copied) all the data from datasetA to another dataset, datasetB, using a file manager in Linux. After that I copied all the data back to the original datasetA and checked its used size in FreeNAS again: now it said 1023 GiB (previously 1 TiB). At that point I also discovered that I had snapshots on datasetA, so I ended up with 3 TiB of data in my pool. wa wa waaaaaa
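If it helps to see what I am looking at now, this is how I have since been checking the space accounting (again, tank/datasetA is a placeholder; the USEDSNAP column shows how much space the snapshots are pinning):

    # break down used space: dataset itself vs. snapshots vs. children
    zfs list -o space tank/datasetA

    # list the snapshots that still reference the old blocks
    zfs list -t snapshot -r tank/datasetA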

Here is my question: how do I do all this correctly?
How do I get the actual size of the dataset (there are no hard or soft links in it)? Is the size of a file the same as the space it takes on disk (the files are incompressible)? And how do I properly move/copy files out of and back into such a dataset so that the new record size takes effect?
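For the size question, this is what I have been comparing so far, and the last command is my guess at how the files have to be rewritten for the new record size to apply (tank/datasetA and datasetB are placeholders; on FreeNAS the datasets are mounted under /mnt):

    # space actually allocated on disk vs. logical size of the data
    zfs get used,referenced,logicalused,logicalreferenced tank/datasetA

    # on-disk usage vs. apparent file sizes (FreeBSD du: -A = apparent size)
    du -sh /mnt/tank/datasetA
    du -sAh /mnt/tank/datasetA

    # my guess: files must be rewritten after the recordsize change,
    # e.g. copied to a scratch dataset and back, so new writes use 1M blocks
    rsync -a /mnt/tank/datasetA/ /mnt/tank/datasetB/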

Why is the free space in the pool reduced if I copied exactly the same data back to datasetA (with the previous snapshots still present)? Shouldn’t ZFS have recognised it as the same data?
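My current guess is that the snapshots still reference the old blocks, so after the copy the same data exists twice and ZFS (without dedup) has no reason to notice it is identical. If that is right, something like this should release the space once the snapshots are no longer needed (the snapshot name here is made up):

    # show how much space the snapshots alone are pinning
    zfs get usedbysnapshots tank/datasetA

    # destroy a snapshot to release the blocks only it references
    zfs destroy tank/datasetA@manual-2019-06-01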

Thank you for correcting my mistakes. Much appreciated.

Best regards,
Bostjan