Bounding Unicorns

ZFS Impressions

I spent a couple of years running ZFS on one of my disks. This page summarizes my impressions.

Usage

I put ZFS on a disk used for various workloads: small files, large files, bulk I/O, persistent background I/O, interactive I/O. A little bit of everything.

There was a single physical disk under ZFS, no RAID.

Memory

FreeBSD went through a buffer cache unification sometime back in the 4.x or 5.0 era. My understanding is that this left fewer copies of data read from disk between the disk itself and the program actually operating on it, which means better performance and better memory utilization.

ZFS has its own cache, the ARC, and as far as I know it does not use FreeBSD's unified cache. Using ZFS in any capacity therefore means splitting memory between the ARC and FreeBSD's other caches. On a ZFS-only system this may not be so bad, but on a mixed system it is problematic: the RAM held by the ARC is effectively fixed, and I cannot temporarily repurpose it to, say, compile a web browser in memory.
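One partial workaround, assuming the vfs.zfs.arc_max tunable that FreeBSD exposes for ZFS, is to cap the ARC so the rest of RAM stays available to everything else. A minimal sketch (the 2 GB figure is arbitrary):

    # /boot/loader.conf: cap the ARC at 2 GB (value is in bytes)
    vfs.zfs.arc_max="2147483648"

    # On more recent FreeBSD versions the limit can also be adjusted at runtime:
    sysctl vfs.zfs.arc_max=2147483648

This only bounds the split; it does not let the ARC shrink on demand the way the unified cache would.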

Writes block the entire system

On UFS, writes start going out to disk almost immediately, and while they are happening the system remains usable and fairly responsive, just slower. On ZFS, writes are nearly instantaneous as long as they fit into the write cache, but once the cache fills up, the entire system seems to pause while the cache is flushed.

This is extremely noticeable during interactive work if there is a bulk transfer happening in the background. The terminal simply hangs if it has anything at all to do with the ZFS disk.

These seemingly blocking flushes mean that work which cannot tolerate being blocked, such as saving a real-time stream to disk, cannot be done on a system using ZFS, even when the stream is saved to a non-ZFS disk and its bandwidth is not particularly high.

This bug report for FreeNAS describes behavior even more extreme than what I was seeing, but the underlying issue appears to be the same.
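If I understand the mechanism correctly, the pauses line up with ZFS committing a transaction group, so one mitigation is to make those commits smaller and more frequent. A rough sketch, assuming the OpenZFS tunables that FreeBSD exposes under vfs.zfs (names and defaults vary between versions, so treat the values as illustrative):

    # Commit a transaction group every 2 seconds instead of the default 5
    sysctl vfs.zfs.txg.timeout=2

    # Cap how much dirty data can accumulate before a commit (bytes)
    sysctl vfs.zfs.dirty_data_max=268435456

Smaller, more frequent commits should at least shorten each individual pause, at some cost in throughput.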

Fragmentation

When multiple files are written to a ZFS partition concurrently, they appear to end up severely fragmented. I can tell because reading these files back takes much longer than normal.

UFS seems to be much more suitable for concurrent write workloads.

This article describes someone else's experience with fragmentation on ZFS.
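A crude way to see the effect is to time a sequential read of a file that was written alone against one that was written alongside other files. A minimal sketch using dd (the file names are placeholders; the files must not already be cached, so read them after a reboot or use files larger than RAM):

    # A heavily fragmented file reads back noticeably slower despite the same size
    dd if=/tank/written-alone.bin of=/dev/null bs=1m
    dd if=/tank/written-concurrently.bin of=/dev/null bs=1m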

Performance

I could not tell that ZFS was faster than UFS in any way; perhaps with RAID the situation would have been different. As for data safety, my preferred approach to backups is to copy important files to a separate disk, ideally one with different access patterns. Where it is feasible (git repositories), I back up off-site to my servers.

Conclusions

I consider UFS (FreeBSD's default file system) to be superior to ZFS for general-purpose use.

As FreeBSD has its own (non-ZFS) tools for managing RAID configurations and the like, "general purpose" does not mean "single disk"; that said, I definitely do not believe ZFS offers any benefit over UFS on a single disk.
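For example, a simple two-disk mirror under UFS can be built with GEOM's gmirror. A minimal sketch (the device names ada1 and ada2 are placeholders):

    # Load the mirror GEOM class and create a mirror named gm0
    gmirror load
    gmirror label -v gm0 /dev/ada1 /dev/ada2

    # Put UFS (with soft updates) on the mirror and mount it
    newfs -U /dev/mirror/gm0
    mount /dev/mirror/gm0 /mnt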

Added memory consumption, unpredictable performance under writes, and heavy fragmentation are all significant issues.

I imagine ZFS offers features that can outweigh these issues (snapshots, a large read cache, using an SSD as a second-level read cache?), but those features, and whether they are worth living with ZFS's problems, need to be evaluated on a case-by-case basis for each application.