I create a ZFS volume:
zfs create -V 1G trunk/funk
To turn this into a PV, evidently one cannot simply pvcreate either /dev/trunk/funk or /dev/zd0 (in this case). LVM complains that it cannot find the device or that it was filtered out. Without digging through LVM's options, I chose what feels like a very dirty but successful approach - loopback devices:
losetup /dev/loop0 /dev/trunk/funk
pvcreate /dev/loop0
Voilà! Now I have a ZFS backing store for my LVM, meaning I can pvmove all sorts of interesting things into ZFS and then back out, without invoking a single ZFS command. Not that I have anything against ZFS commands, mind you.
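From here the rest of the stack is plain LVM. A minimal sketch of finishing the job, assuming a volume group named funkvg and a mount point of /mnt/vol2 (both invented for illustration):
# build a VG and LV on top of the loopback-backed PV, then put a filesystem on it
vgcreate funkvg /dev/loop0
lvcreate -n test -l 100%FREE funkvg
mkfs.xfs /dev/funkvg/test
mount /dev/funkvg/test /mnt/vol2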
The Good
You can do what I mentioned above with regard to LVM's logical extents. You get to use familiar tools, and can migrate between two different volume managers... sort of.
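For instance, shuffling an existing volume group's extents onto the ZFS-backed PV and back out again is just ordinary pvmove traffic. A rough sketch, assuming an existing volume group called sysvg whose data currently lives on /dev/sda2 (names invented for illustration):
# pull the extents onto the ZFS-backed PV...
vgextend sysvg /dev/loop0
pvmove /dev/sda2 /dev/loop0
# ...and push them back out later
pvmove /dev/loop0 /dev/sda2
vgreduce sysvg /dev/loop0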
The Bad
The loopback device does not survive a reboot; you have to losetup it again and run pvscan to get your volumes back. Thus, it's not a transparent solution for things like moving your root partition, or possibly even your /usr folder. Since you're cramming data through three virtual devices instead of one, you also necessarily take a performance hit. I figured this would be the case going in, but wanted to see what could be done.
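If you use this anyway, the reassembly can at least be scripted. A rough sketch of an early-boot snippet (rc.local or the equivalent), reusing the hypothetical names from above and assuming the ZFS volumes are already available by the time it runs:
# recreate the loopback device, then have LVM rediscover and activate what's on it
losetup /dev/loop0 /dev/trunk/funk
pvscan
vgchange -ay funkvg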
The Ugly
Here are some results from two tests. In both tests, dbench was run for 120 seconds with 50 clients.
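For reference, such a run would look something like the following (the mount point is a placeholder; -t sets the run time in seconds and -D the working directory):
dbench -t 120 -D /mnt/vol1 50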
Vol-1 here is a direct ZFS volume, 2G in size, formatted with XFS and mounted locally:
Operation Count AvgLat MaxLat
----------------------------------------
NTCreateX 3550706 0.084 184.590
Close 2607715 0.027 114.792
Rename 150360 0.049 14.309
Unlink 717466 0.338 180.865
Deltree 100 12.302 106.897
Mkdir 50 0.003 0.012
Qpathinfo 3218035 0.005 25.145
Qfileinfo 564208 0.001 7.601
Qfsinfo 590419 0.003 6.306
Sfileinfo 289141 0.042 14.303
Find 1244465 0.014 18.228
WriteX 1772158 0.026 17.727
ReadX 5566389 0.006 19.958
LockX 11566 0.004 2.074
UnlockX 11566 0.003 5.996
Flush 248977 20.776 264.706
Throughput 931.291 MB/sec 50 clients 50 procs max_latency=264.710 ms
Vol-2 was my ZFS -> losetup -> LVM volume, also roughly 2G in size and formatted with XFS (and mounted locally):
Operation Count AvgLat MaxLat
----------------------------------------
NTCreateX 2019488 0.112 346.872
Close 1481790 0.032 246.645
Rename 85494 0.064 23.981
Unlink 409039 0.385 324.935
Qpathinfo 1830417 0.007 90.319
Qfileinfo 318618 0.001 8.176
Qfsinfo 335946 0.004 8.134
Sfileinfo 164346 0.035 22.344
Find 707310 0.019 67.141
WriteX 996612 0.035 134.271
ReadX 3163203 0.008 19.165
LockX 6556 0.004 0.117
UnlockX 6556 0.003 0.423
Flush 141610 38.198 420.011
Throughput 524.834 MB/sec 50 clients 50 procs max_latency=420.017 ms
Other Thoughts
It's possible that LVM is not treating the device very nicely, writing in 512-byte sectors instead of the 4K sectors my ZFS pool is configured to use. If that were fixed, or if there were a way to get around using a loopback device, we might see better performance. Maybe.
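One untested avenue: tell LVM to accept zvol block devices directly and to align its data to the pool's block size, which would drop the loopback layer entirely. Roughly, in the devices section of /etc/lvm/lvm.conf (the type name here is a guess - it should match whatever /proc/devices reports for your zvols):
# devices { ... } in /etc/lvm/lvm.conf: accept zvol devices as PV candidates
types = [ "zvol", 16 ]
Then the PV could, in theory, be created against the zvol itself with 4K alignment:
pvcreate --dataalignment 4k /dev/zd0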
Conclusion
The moral of this story is: You can do it, but it'll perform like shit.