Note that the download is not an archive, but a single 2000+ line shellscript. Just chmod +x zrep, and you are ready to go!
Zrep has been reported to run on multiple OSes that provide ZFS, including Solaris, illumos, Linux, and BSD (including FreeNAS and NAS4Free).
Compatibility issues
I chose ksh as the interpreter because it offers extra efficiency, both through
its builtin functions and in how it handles user-written functions.
Just be sure to run zrep with real ksh (AT&T ksh93), not an impostor such as
pdksh, or it may not work properly.
Similarly, there may be a bug with Gentoo's "improved" ksh, which carries a
non-official patch. I have had a report that the standard 2012 ksh works, but
that the Gentoo app-shells/ksh-93.20140625 package may have a problem.
FreeNAS may work best using #!/usr/local/bin/ksh93
as first line for zrep (or just symlink that to /bin/ksh !)
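If you would rather not hand-edit the script, a throwaway sketch like the following rewrites zrep's first line. The paths are examples, and the stand-in file it creates is purely for demonstration; in real use you would operate on the actual downloaded zrep.

```shell
# Demo stand-in for the downloaded zrep script (so this sketch is
# self-contained; substitute the real file in practice):
printf '#!/bin/ksh\necho zrep placeholder\n' > zrep
chmod +x zrep

# Rewrite the shebang to point at ksh93 in its usual FreeBSD/FreeNAS
# location, writing a new file rather than editing in place:
sed '1s|^#!.*|#!/usr/local/bin/ksh93|' zrep > zrep.fixed
chmod +x zrep.fixed

head -1 zrep.fixed    # now reads: #!/usr/local/bin/ksh93
```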
If for some reason you want ksh source code, the best place seems to be
https://github.com/att/ast
License
The license for zrep is available here.
The short summary is that you are free to use it as much as you like, as
long as you don't sue me for anything that goes wrong :-)
If you are really bored, you may also read
the CHANGELOG
For historians, some older versions are still available:
zrep version 0.8.4 , Oct 17th, 2012 /
zrep version 0.7 , June 29th, 2012
It also handles 'failover', as simply as "zrep failover datapool/yourfs". This will conveniently handle all the details of swapping the master and slave roles: the formerly writable side is marked read-only, and the other side becomes the new writable master.
Checking replication status is just as simple:
    # zrep status
    scratch/datasrc   synced as of  Mon Mar 12 13:23 2012
Zrep is designed to be simple to set up and use. If you are in a hurry to just try it out, a super-trivialized version of how to use it would be:
    zrep init pool/fs desthost destpool/fs   # (will create the destination fs!)
    # Initialize additional fs's with zrep if you wish. Then...
    while true; do zrep sync all; done

After the initial full sync, this will do incremental zfs sends, back to back, "forever" (or at least until you hit an error :)
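In practice you may prefer a pause between passes rather than a back-to-back loop. Here is a small sketch of that pattern; the function name, the interval, and the "echo pass" placeholder standing in for "zrep sync all" are all illustrative, not part of zrep itself.

```shell
#!/bin/sh
# Run a sync command repeatedly, with a pause between passes.
# In real use CMD would be "zrep sync all"; it is a parameter here
# so the pattern can be demonstrated stand-alone.
sync_loop() {
    CMD=$1; INTERVAL=$2; PASSES=$3
    i=0
    while [ "$i" -lt "$PASSES" ]; do
        $CMD || echo "sync failed, will retry" >&2
        sleep "$INTERVAL"
        i=$((i + 1))
    done
}

# Example: three passes of a harmless placeholder, with no delay:
sync_loop "echo pass" 0 3
```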
For some amount of greater detail, please see the usage message, via "zrep -h"
The one "undocumented feature" you may care about is that the property zrep:savecount controls the number of recent snapshots preserved. To change from the default (currently 5), use
zfs set zrep:savecount=NEWVAL your/fs/here
There is also a separate troubleshooting page
However, it is probably a bad idea to try it out of the box on BOTH of /pool/fs/here and /pool/fs/here/too. If you really wish to sync a bunch of ZFS filesystems nested under a master filesystem, zrep now supports a recursive flag. See the documentation for more details.
For faster transfers over a fast network, you may want to look at the HPN-SSH patches: http://www.psc.edu/networking/projects/hpn-ssh/
Some speed results, from local-host testing:
    regular scp to regular sshd:  about 20 MB/sec
    regular scp to hpn-sshd:      about 30 MB/sec
    hpn-scp to hpn-sshd:          about 150 MB/sec
Additionally, zrep supports integrating with "mbuffer" or "bbcp", either of which can improve transfer speeds.
It should be noted, however, that generating the data for a "zfs send"
has speed limits of its own.
You may want to first run "time zfs send your@snapshot >/dev/null", to see
whether optimizing your network throughput is going to be significant.
Bottom line: unless you're sending from an SSD, and/or sending Terabytes of data,
it may be best to just stick with SSH.
If you are a shellscript writer, this may interest you. Feel free to browse around the source directory, which I have now moved to a GitHub repository for zrep