Using rollsums for chunking is a little different to how rsync/librsync uses them, and they have slightly different requirements.
Rsync uses rollsums as a fast hash of a whole fixed sized block for comparison against other blocks. This means it needs a large rolling "window", and a good hash with low collisions and little clustering.
Chunking only needs a small rolling window with enough bytes (32, 48 or 64 bytes is common) to give a hash with enough meaningful bits to compare against a target value to identify the cut points. This means collisions and clustering probably don't matter as much, and pretty much the only thing that really matters is that the target hash value has a good random distribution with the right frequency across all the possible input-window values. Note that a generally good hash would also meet this requirement, but it is a weaker requirement than a good hash.
The FastCDC paper highlighted that typical chunking algorithms use N bits of the hash to get an average chunk size of 2^N, but the distribution is not a normal distribution around the target chunk size, but an exponential distribution with the most common block size being the smallest. Also, skipping over a min-block-size effectively shifts this distribution by the min size, which changes the average chunk size to min+2^N. We will call 2^N the "target-size", so the average-size = min-size + target-size. There is a >22% chance of a chunk-boundary within 1/4 of the target-size, the median chunk size is about 70% of the target-size, and <13% of chunks are greater than 2x the target-size.
The min chunk size also causes it to take longer for chunks to "re-align" after differences because it hides chunk-boundaries that are in the skipped data. The exponential distribution of chunk-sizes means that many chunk boundaries will fall within the min-chunk-size, and skipping them can skip over boundaries used on previous versions, delaying re-alignment. This means you really want your min-size to be small to avoid hurting the chunk re-alignment and thus de-duplication. I'd suggest 1/4 of the "target chunk size", which also ensures the average chunk size is only 25% larger than the target chunk size.
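To make these numbers concrete, here is a minimal Python sketch (my addition, not from the original analysis) that models chunk sizes as min-size plus a geometric/exponential tail with a per-byte boundary probability of 2^-N; the target_bits and min_frac values are just illustrative::

    import math

    def chunk_size_stats(target_bits=13, min_frac=0.25):
        """Model chunk sizes as min_size + a geometric number of bytes, where each
        byte past min_size is a boundary with probability 2^-target_bits."""
        target = 2 ** target_bits              # target-size = 2^N
        min_size = int(target * min_frac)      # eg target-size/4
        p = 2.0 ** -target_bits                # per-byte chance of a boundary
        mean = min_size + 1 / p                # average-size = min-size + target-size
        median = min_size + math.log(2) / p    # min-size + ~0.69*target-size
        within_quarter = 1 - (1 - p) ** (target // 4)  # boundary within target/4 bytes
        over_double = (1 - p) ** (2 * target)          # tail beyond 2x target (min ignored)
        return mean, median, within_quarter, over_double

    print(chunk_size_stats())   # ~(10240, 7726, 0.221, 0.135) for an 8KiB target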
The window size used needs to give you N bits of entropy in the N bits of the hash used for the chunking hash judgement. Compression gives a rough estimate of the bits of entropy per byte: 4:1 compression is common for text files, and for things like html it reportedly can go to 10:1. Conservatively we could assume we get 1 bit of entropy per byte in the window, so for N bits in the hash, we need a window with at least N bytes. If we are selecting only N bits from a 32bit hash generated from the window bytes, then provided the hash is good, any N bits from that hash should include N bits of entropy from the N byte window. However, if the hash is not good, and the entropy is not evenly distributed across all the hash bits, and in particular over the N hash bits selected, we would need more than N bytes in the window. For a full 32bits of entropy in the 32bit hash we'd need at least 32bytes in the window.
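As a rough sanity check (again my addition), you can estimate bits of entropy per byte by simply compressing a sample of the data; zlib here is just a convenient stand-in for "a decent compressor"::

    import zlib

    def entropy_bits_per_byte(data: bytes) -> float:
        """Crude estimate: compressed size in bits divided by original size in bytes."""
        if not data:
            return 0.0
        return 8.0 * len(zlib.compress(data, 9)) / len(data)

    # English text typically lands around 2-3 bits/byte with zlib, so assuming
    # only 1 bit of entropy per byte when sizing the window is conservative.
    print(entropy_bits_per_byte(open(__file__, "rb").read()))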
If you map bytes to a 32bit value, you are smearing that byte's entropy over 32bits. If you then select a subset of that 32bit value, are you selecting only a subset of the initial byte's entropy? I think that depends on the mapping and how you select bits from it. For example, simply repeating the byte 4x to get 32bits, and then selecting any contiguous 8bits, you get all the original bits and thus all the entropy in the original byte. However, I'm pretty sure if you select less than 8 bits, or select 8 bits that are not impacted by all 8 original bits (ie, selecting multiple copies of some bits), then you must be getting only a fraction of the original entropy.
If we assume the mapping is a "good hash", the entropy of the original byte is spread over all 32bits in a way that if you select any 8bits you get all the byte's original entropy, but if you select <8 bits you get a corresponding fraction of the entropy. However, we must assume the entropy within the original byte is not spread evenly over all 8 bits (so not a good hash), so selecting less than 8 bits gives you that fraction of the original entropy. So if we use only 3bits of a 32bit word mapped from one byte that contained only 2bits of entropy, we would get 2*3/8 bits of entropy. It would be interesting to figure out a formula for approximating the entropy you get from selecting n bits expanded to 32bits from m bits that contained e bits of entropy.
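One way to explore this empirically (a sketch under my own assumptions, not a derivation): build a random byte-to-32bit mapping table, feed it source bytes with a known entropy (here only 16 equally likely values, so ~4 bits/byte), and measure the Shannon entropy of just the selected low n bits::

    import random
    from collections import Counter
    from math import log2

    random.seed(42)
    MAP = [random.getrandbits(32) for _ in range(256)]   # byte -> 32bit mapping table

    def entropy(counts):
        total = sum(counts.values())
        return -sum(c / total * log2(c / total) for c in counts.values())

    # Source bytes with limited entropy: 16 equally likely values (~4 bits/byte).
    source = [random.randrange(16) for _ in range(100000)]

    for n in (2, 4, 8, 16):
        mask = (1 << n) - 1
        selected = Counter(MAP[b] & mask for b in source)
        print(n, round(entropy(selected), 2))
    # Selecting n < 4 bits caps the measured entropy at n bits; selecting n >= 4
    # bits recovers (at most) the ~4 bits the source bytes actually had.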
Algorithms
==========

PolyHash
--------

This is the Polynomial Hash AKA RabinKarp (not Rabin Fingerprint) mentioned at https://en.wikipedia.org/wiki/Rolling_hash, and used by librsync. It gives a very good hash (low collisions, good distribution) without needing a byte-mapping table, which saves cache memory. However, it does use 2 multiplies per rotation, which makes it slower than rollsums that only use addition or xor like CyclicPoly/Buzhash, Rollsum/Adler32/Bup, or Gear. Also unlike Gear, it does need to keep the sliding window of data.
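For reference, a minimal Python sketch of this kind of polynomial rolling hash (the multiplier and 32bit truncation are illustrative choices, not necessarily librsync's exact parameters)::

    class PolyHash:
        """Polynomial rolling hash: h = b[0]*M^(W-1) + ... + b[W-1], mod 2^32."""

        def __init__(self, data, mult=0x01000193):   # arbitrary odd multiplier
            self.mult = mult
            self.window = list(data)
            # mult^(W-1) is needed to remove the outgoing byte
            self.mult_pow = pow(mult, len(data) - 1, 1 << 32)
            self.hash = 0
            for b in data:
                self.hash = (self.hash * mult + b) & 0xFFFFFFFF

        def roll(self, new_byte):
            old_byte = self.window.pop(0)
            self.window.append(new_byte)
            # remove the oldest byte, then shift in the new one: 2 multiplies per rotation
            self.hash = (self.hash - old_byte * self.mult_pow) & 0xFFFFFFFF
            self.hash = (self.hash * self.mult + new_byte) & 0xFFFFFFFF
            return self.hash

    h = PolyHash(b"0123456789abcdef0123456789abcdef")   # 32 byte window
    print(hex(h.roll(ord("x"))))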
Gear
----

The gear rollsum has a really neat feature for chunking; there is no need to keep a sliding window because old bytes "expire" out of the hash as they get shifted out. It uses a lookup table to map bytes to the full hash width before adding them to the hash, and they slowly get shifted out with each byte added. This means that the window is limited to the size of the hash, with older bytes only represented in the more significant bits of the hash. This makes it completely useless as a fast rolling hash for rsync, but ideal for chunking.
This means the normal technique of selecting the N least-significant bits with a mask also prunes the window size to N bytes. Importantly, it not only prunes the window size to N bytes, but it also selects only part of the entropy of the last 8 bytes of that window (assuming the original byte's entropy is perfectly spread across the hash by the mapping function). If we assume the entropy linearly decays for the last 8 bytes, it means we lose 4 bytes worth of entropy. So for N bits of entropy, assuming 1 bit of entropy per byte, we need at least N+4 bytes in the window.
The oldest bytes are included in fewer and fewer of the most-significant bits the older they get. So even though a 32bit hash includes 32bytes worth of window, the oldest byte is only represented in the single most-significant bit.
For windows larger than 32 bytes you need a larger hash (64bits), which means you need a larger byte-mapping table. These tables eat CPU memory cache, which can hurt the overall program even if the chunking bit is fast.
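Below is a minimal Gear-based chunker sketch (my illustration, with an arbitrary random byte-mapping table and a most-significant-bits judgement, which is discussed further in the next section)::

    import random

    random.seed(0)
    GEAR = [random.getrandbits(32) for _ in range(256)]   # byte -> 32bit mapping table

    def gear_chunks(data, mask_bits=13):
        """Split data using a Gear rollsum, judging on the N most-significant bits."""
        shift = 32 - mask_bits            # "not (h >> shift)" tests the top N bits
        h = 0
        start = 0
        for i, b in enumerate(data):
            # no sliding window needed: old bytes shift out of the 32bit hash
            h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF
            if not (h >> shift):
                yield data[start:i + 1]
                start = i + 1
                h = 0
        if start < len(data):
            yield data[start:]

    sizes = [len(c) for c in gear_chunks(random.randbytes(1 << 20))]
    print(len(sizes), sum(sizes) // len(sizes))   # expect ~128 chunks averaging ~8KiB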
FastCDC
-------

FastCDC uses a Gear rollsum, but adds 3 things:

1. A hash judgement of `!(h&mask)` (comparing against a zero target) instead of `(h&mask)==r`.

2. A mask with the selected bits shifted up by 5 places, instead of just using the N least-significant bits.

3. "Normalized chunking": a harder hash judgement (more mask bits) before the target-size and an easier one (fewer mask bits) after it, to tighten the chunk-size distribution.
Option 1. seems pretty minor, but they claim a noticeable speedup from `!(h&mask)` vs `(h&mask)==r`, presumably because the assembler & operation gives you a zero-check for free. Using a zero target is also more likely to hit degenerate cases if you are not careful with your byte-mapping. For example, if zero is mapped to zero (as it would be using eg murmurhash's mix32 finalizer), then you always match on runs of zeros. Also, `h <= r` is probably just as cheap and can give you arbitrary target block sizes, with arbitrary probabilities of matching.
For 2. it seems strange that they didn't seem to realize that just using the most significant bits would give you the largest window size. Shifting the mask by 5 seems very arbitrary, and less effective than always using the most significant N bits. Also I'm not sure, but `!(h>>S)` where S=32-N might be just as fast as `!(h&M)` where `M=((~0) << (32-N))`. However, it's also interesting to note that in the Gear entropy analysis above we need at least N+4 bytes in the window, so adding 5 bytes by shifting the mask 5 places should be sufficient to get your N bits of entropy. However, using the most significant bits would mean you are using a full window size of 32-4 = 28 bytes, which is probably still better.
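For comparison, here are the different judgement forms side by side (a small illustrative snippet with N chosen arbitrarily); all of them fire with probability 2^-N on a good 32bit hash, but only the threshold form extends to arbitrary probabilities::

    N = 13                                   # bits of hash used for the judgement
    MASK_LSB = (1 << N) - 1                  # (h & MASK_LSB) == r, for any target r
    MASK_MSB = MASK_LSB << (32 - N)          # not (h & MASK_MSB): top N bits zero
    SHIFT = 32 - N                           # not (h >> SHIFT): same test as MASK_MSB
    THRESHOLD = 1 << (32 - N)                # h < THRESHOLD: same 2^-N probability, but
                                             # THRESHOLD can be any value, giving
                                             # arbitrary target sizes/probabilities

    h = 0x0004C0DE                           # example hash with its top 13 bits zero
    assert (not (h & MASK_MSB)) == (not (h >> SHIFT)) == (h < THRESHOLD)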
The idea of 3. is interesting, and they claim it speeds up re-syncing chunk boundaries after skipping over min-size bytes, giving better deduplication. It's not clear in the paper if the de-duplication wins are really because of this, or because the way they use it gives smaller chunks. There are several ways to think of what this does, and how it relates to min-size handling.
The "hard cliff" at the target-size feels a little ugly, but it does avoid complicating things. An incremental adjustment to the probabilities to give a more normal distribution would be interesting to try/analyze. This together with h <= r hash judgment can give a continuously variable match probability.
Better Normalized Chunking
==========================

We can use a hash judgement of `h <= p` where p is a fixed point probability of being a chunk boundary between 0.0 -> 1.0 scaled into a 32bit number. This lets us use arbitrary chunking probabilities based on the size to give whatever chunk-size distribution we want.
Using the following gives a pretty good approximation of a normal distribution centered on the target_size::
    min_size = target_size/4
    max_size = target_size*2
    p = ((size - min_size)/(max_size - min_size))^2
You can see this graphed here;
https://docs.google.com/spreadsheets/d/1ofg3-ZX04mbGA-9HZAcu5shfkbPYEO_PGW_4UuKFQpc/edit?usp=sharing
Note that p as a fixed point unsigned 32bit number can be calculated efficiently and incrementally like this::
    p = 0
    step = 2^32 / (max_size - min_size)^2
    incr = step * 2
    for each iteration:
        h = <update rollsum>
        if h < p: <chunk breakpoint>
        p += step
        step += incr
Note that provided (max_size - min_size) is not a power of 2, the division will be rounded down and p will not quite reach 1.0 (which would overflow p anyway) by the time you reach the max size. If (max_size - min_size) is close to or larger than 2^16 then incr will be too small to be accurate enough (for example, with max_size - min_size = 100000, 2^32 / 100000^2 rounds down to 0, so p would never increase). If you want to calculate it more accurately you'd need integer fractions like this::
    p = 0
    denom = (max_size - min_size)^2
    step = 2^32 / denom
    step_frac = 2^32 % denom
    incr = step * 2
    incr_frac = step_frac * 2
    if incr_frac >= denom:
        incr += 1
        incr_frac -= denom
    for each iteration:
        h = <update rollsum>
        if h < p: <chunk breakpoint>
        p += step
        step += incr
        step_frac += incr_frac
        if step_frac >= denom:
            step += 1
            step_frac -= denom
or (probably better) update p and step at a coarser granularity, like this every 1024 bytes::
    p = 0
    step = 2^32 / ((max_size - min_size)/1024)^2
    incr = step * 2
    for each iteration:
        h = <update rollsum>
        if h < p: <chunk breakpoint>
        if size % 1024 == 0:
            p += step
            step += incr
It's not entirely clear that doing this actually does improve re-syncing breakpoints, which is necessary to maximize duplicate detection, but it will give a much better chunk size distribution.
Comparing the hash to a probability like this probably requires that the hash used is fairly good, which is a stronger requirement than for a `(h&mask) == r` style hash judgement. Things like Gear are probably sufficient provided the byte-mapping table is good. Buzhash and Rollsum/Adler/Bup without a decent byte-mapping table are not.
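Putting it together, a runnable Python sketch of this scheme (my illustration: a Gear rollsum with a random byte-mapping table, the `h < p` judgement, and the simple incremental update from above; all constants are illustrative)::

    import random

    random.seed(0)
    GEAR = [random.getrandbits(32) for _ in range(256)]     # byte -> 32bit mapping

    def chunks(data, target_size=8192):
        """Chunk data using `h < p` where p ramps up as ((size-min)/(max-min))^2."""
        min_size = target_size // 4
        max_size = target_size * 2
        step0 = (1 << 32) // (max_size - min_size) ** 2     # p after x bytes = step0*x^2
        incr = step0 * 2
        start, h, p, step, size = 0, 0, 0, step0, 0
        for i, b in enumerate(data):
            h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF
            size += 1
            if size <= min_size:
                continue                                    # skip over min-size bytes
            if h < p or size >= max_size:
                yield data[start:i + 1]
                start, h, p, step, size = i + 1, 0, 0, step0, 0
                continue
            p += step                                       # p follows the probability curve
            step += incr
        if start < len(data):
            yield data[start:]                              # whatever is left at the end

    sizes = [len(c) for c in chunks(random.randbytes(1 << 22))]
    print(min(sizes), sum(sizes) // len(sizes), max(sizes)) # sizes cluster around target_size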