The system being tested is a 15-drive raidz3 setup with 4TB drives, running kernel 3.13.2 (non-hardened, for testing).
The encryption algorithm
I was previously using cbc-essiv; I found (via a simple scrub test) that it made my performance about 10% lower than xts-plain64.
cbc-essiv was slow because it needed to read the previous block in order to write the current one.
I chose xts-plain64 even though there are known attacks against it.
XTS mode is susceptible to data manipulation and tampering, and applications must employ measures to detect modifications of data if manipulation and tampering is a concern: "...since there are no authentication tags then any ciphertext (original or modified by attacker) will be decrypted as some plaintext and there is no built-in mechanism to detect alterations. The best that can be done is to ensure that any alteration of the ciphertext will completely randomize the plaintext, and rely on the application that uses this transform to include sufficient redundancy in its plaintext to detect and discard such random plaintexts." The mode is also susceptible to traffic analysis, replay, and sector randomization attacks.
- I believe tampering would be caught, since the data is effectively checksummed twice: once by LUKS and once by ZFS.
- I am not sure about the other issues (traffic analysis, replay, and sector randomization attacks).
- I do not currently consider myself up against state actors.
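For reference, creating and opening a LUKS container with xts-plain64 looks roughly like this. The device name, mapping name, key size, and hash below are illustrative placeholders, not necessarily the exact options I used:

```shell
# WARNING: luksFormat destroys all data on the target device.
# /dev/sdX and "cryptdisk0" are placeholders for one member disk.
cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 --hash sha256 /dev/sdX

# Open the container; the pool is then built on /dev/mapper/cryptdisk0.
cryptsetup luksOpen /dev/sdX cryptdisk0
```

Note that with XTS the 512-bit key is split into two 256-bit halves, so this is still AES-256.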
Setting the Elevator
I was unknowingly using cfq, which caused more load spikes, longer scrub times, and higher load in general.
ZFSonLinux does set the elevator to noop when it sits directly on a whole disk, but putting LUKS in between interferes with this.
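Since LUKS hides the underlying disks from ZFS, the scheduler has to be set by hand on each physical drive. A minimal sketch (sda/sdb/sdc stand in for the 15 member disks):

```shell
#!/bin/sh
# Set the noop elevator on each underlying disk.
# sda sdb sdc are placeholders; list all pool members in practice.
for dev in sda sdb sdc; do
    sched="/sys/block/$dev/queue/scheduler"
    # Requires root; silently skipped if the disk is absent.
    if [ -w "$sched" ]; then
        echo noop > "$sched"
    fi
done
```

The change does not survive a reboot, so a udev rule or an init script is needed to reapply it.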
Here are the results of my scrub tests (each run only once, from a fresh boot, but good enough for me...).
| Elevator | Scrub Time | Average Load (15 min) | Link to load |
|----------|------------|-----------------------|--------------|
| NOOP     | 10h42m     | 5.1 (ish)             | NOOP Graph   |
| CFQ      | 11h47m     | 6.5 (ish)             | CFQ Graph    |
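The timings above can be reproduced by starting a scrub and watching the scan line and the 15-minute load average (the pool name "tank" is a placeholder):

```shell
# Start a scrub on the pool (placeholder name).
zpool scrub tank
# The "scan:" line shows elapsed time and throughput.
zpool status tank | grep scan
# Third field of /proc/loadavg is the 15-minute load average.
awk '{print $3}' /proc/loadavg
```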
By changing all drives to xts-plain64 and the elevators to noop, my scrubs went from 360MB/s to 450MB/s (a 25% gain).