author    Peter Dillinger <peterd@fb.com>  2020-06-22 13:30:57 -0700
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>  2020-06-22 13:32:07 -0700
commit    5b2bbacb6fd2b49606ef2cd24e3bb2d2061239b4 (patch)
tree      b70b7197f741619de99a73a6510dd0e8e6de5748 /table
parent    1092f19d95095c6639ec678eacf7b6212fa170e5 (diff)
Minimize memory internal fragmentation for Bloom filters (#6427)
Summary:
New experimental option BBTO::optimize_filters_for_memory builds filters that maximize their use of "usable size" from malloc_usable_size, which is also used to compute block cache charges. Rather than always "rounding up," we track state in the BloomFilterPolicy object to mix essentially "rounding down" and "rounding up" so that the average FP rate of all generated filters is the same as without the option. (YMMV as heavily accessed filters might be unluckily lower accuracy.) Thus, the option near-minimizes what the block cache considers as "memory used" for a given target Bloom filter false positive rate and Bloom filter implementation.

There are no forward or backward compatibility issues with this change, though it only works on the format_version=5 Bloom filter. With Jemalloc, we see about 10% reduction in memory footprint (and block cache charge) for Bloom filters, but 1-2% increase in storage footprint, due to encoding efficiency losses (FP rate is non-linear with bits/key).

Why not weighted random round up/down rather than state tracking? By only requiring malloc_usable_size, we don't actually know what the next larger and next smaller usable sizes for the allocator are. We pick a requested size, accept and use whatever usable size it has, and use the difference to inform our next choice. This allows us to narrow in on the right balance without tracking/predicting usable sizes.

Why not weight history of generated filter false positive rates by number of keys? This could lead to excess skew in small filters after generating a large filter.
Results from filter_bench with jemalloc (irrelevant details omitted):

(normal keys/filter, but high variance)
$ ./filter_bench -quick -impl=2 -average_keys_per_filter=30000 -vary_key_count_ratio=0.9
Build avg ns/key: 29.6278
Number of filters: 5516
Total size (MB): 200.046
Reported total allocated memory (MB): 220.597
Reported internal fragmentation: 10.2732%
Bits/key stored: 10.0097
Average FP rate %: 0.965228

$ ./filter_bench -quick -impl=2 -average_keys_per_filter=30000 -vary_key_count_ratio=0.9 -optimize_filters_for_memory
Build avg ns/key: 30.5104
Number of filters: 5464
Total size (MB): 200.015
Reported total allocated memory (MB): 200.322
Reported internal fragmentation: 0.153709%
Bits/key stored: 10.1011
Average FP rate %: 0.966313

(very few keys / filter, optimization not as effective due to ~59 byte internal fragmentation in blocked Bloom filter representation)
$ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000 -vary_key_count_ratio=0.9
Build avg ns/key: 29.5649
Number of filters: 162950
Total size (MB): 200.001
Reported total allocated memory (MB): 224.624
Reported internal fragmentation: 12.3117%
Bits/key stored: 10.2951
Average FP rate %: 0.821534

$ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000 -vary_key_count_ratio=0.9 -optimize_filters_for_memory
Build avg ns/key: 31.8057
Number of filters: 159849
Total size (MB): 200
Reported total allocated memory (MB): 208.846
Reported internal fragmentation: 4.42297%
Bits/key stored: 10.4948
Average FP rate %: 0.811006

(high keys/filter)
$ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000000 -vary_key_count_ratio=0.9
Build avg ns/key: 29.7017
Number of filters: 164
Total size (MB): 200.352
Reported total allocated memory (MB): 221.5
Reported internal fragmentation: 10.5552%
Bits/key stored: 10.0003
Average FP rate %: 0.969358

$ ./filter_bench -quick -impl=2 -average_keys_per_filter=1000000 -vary_key_count_ratio=0.9 -optimize_filters_for_memory
Build avg ns/key: 30.7131
Number of filters: 160
Total size (MB): 200.928
Reported total allocated memory (MB): 200.938
Reported internal fragmentation: 0.00448054%
Bits/key stored: 10.1852
Average FP rate %: 0.963387

And from db_bench (block cache) with jemalloc:

$ ./db_bench -db=/dev/shm/dbbench.no_optimize -benchmarks=fillrandom -format_version=5 -value_size=90 -bloom_bits=10 -num=2000000 -threads=8 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=false
$ ./db_bench -db=/dev/shm/dbbench -benchmarks=fillrandom -format_version=5 -value_size=90 -bloom_bits=10 -num=2000000 -threads=8 -optimize_filters_for_memory -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=false
$ (for FILE in /dev/shm/dbbench.no_optimize/*.sst; do ./sst_dump --file=$FILE --show_properties | grep 'filter block' ; done) | awk '{ t += $4; } END { print t; }'
17063835
$ (for FILE in /dev/shm/dbbench/*.sst; do ./sst_dump --file=$FILE --show_properties | grep 'filter block' ; done) | awk '{ t += $4; } END { print t; }'
17430747
$ #^ 2.1% additional filter storage

$ ./db_bench -db=/dev/shm/dbbench.no_optimize -use_existing_db -benchmarks=readrandom,stats -statistics -bloom_bits=10 -num=2000000 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=false -duration=10 -cache_index_and_filter_blocks -cache_size=1000000000
rocksdb.block.cache.index.add COUNT : 33
rocksdb.block.cache.index.bytes.insert COUNT : 8440400
rocksdb.block.cache.filter.add COUNT : 33
rocksdb.block.cache.filter.bytes.insert COUNT : 21087528
rocksdb.bloom.filter.useful COUNT : 4963889
rocksdb.bloom.filter.full.positive COUNT : 1214081
rocksdb.bloom.filter.full.true.positive COUNT : 1161999
$ #^ 1.04 % observed FP rate

$ ./db_bench -db=/dev/shm/dbbench -use_existing_db -benchmarks=readrandom,stats -statistics -bloom_bits=10 -num=2000000 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=false -optimize_filters_for_memory -duration=10 -cache_index_and_filter_blocks -cache_size=1000000000
rocksdb.block.cache.index.add COUNT : 33
rocksdb.block.cache.index.bytes.insert COUNT : 8448592
rocksdb.block.cache.filter.add COUNT : 33
rocksdb.block.cache.filter.bytes.insert COUNT : 18220328
rocksdb.bloom.filter.useful COUNT : 5360933
rocksdb.bloom.filter.full.positive COUNT : 1321315
rocksdb.bloom.filter.full.true.positive COUNT : 1262999
$ #^ 1.08 % observed FP rate, 13.6% less memory usage for filters

(Due to specific key density, this example tends to generate filters that are "worse than average" for internal fragmentation. "Better than average" cases can show little or no improvement.)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/6427

Test Plan: unit test added, 'make check' with gcc, clang and valgrind

Reviewed By: siying

Differential Revision: D22124374

Pulled By: pdillinger

fbshipit-source-id: f3e3aa152f9043ddf4fae25799e76341d0d8714e
Diffstat (limited to 'table')
-rw-r--r--  table/block_based/block_based_table_factory.cc |   4
-rw-r--r--  table/block_based/filter_policy.cc             | 208
-rw-r--r--  table/block_based/filter_policy_internal.h     |  10
3 files changed, 192 insertions, 30 deletions
diff --git a/table/block_based/block_based_table_factory.cc b/table/block_based/block_based_table_factory.cc
index ad1d20ca9..18dddaf3c 100644
--- a/table/block_based/block_based_table_factory.cc
+++ b/table/block_based/block_based_table_factory.cc
@@ -268,6 +268,10 @@ static std::unordered_map<std::string, OptionTypeInfo>
{offsetof(struct BlockBasedTableOptions, partition_filters),
OptionType::kBoolean, OptionVerificationType::kNormal,
OptionTypeFlags::kNone, 0}},
+ {"optimize_filters_for_memory",
+ {offsetof(struct BlockBasedTableOptions, optimize_filters_for_memory),
+ OptionType::kBoolean, OptionVerificationType::kNormal,
+ OptionTypeFlags::kNone, 0}},
{"filter_policy",
{offsetof(struct BlockBasedTableOptions, filter_policy),
OptionType::kUnknown, OptionVerificationType::kByNameAllowFromNull,
diff --git a/table/block_based/filter_policy.cc b/table/block_based/filter_policy.cc
index 5fc63fd88..07fbf25fb 100644
--- a/table/block_based/filter_policy.cc
+++ b/table/block_based/filter_policy.cc
@@ -28,9 +28,12 @@ namespace {
// See description in FastLocalBloomImpl
class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
public:
- explicit FastLocalBloomBitsBuilder(const int millibits_per_key)
+ // Non-null aggregate_rounding_balance implies optimize_filters_for_memory
+ explicit FastLocalBloomBitsBuilder(
+ const int millibits_per_key,
+ std::atomic<int64_t>* aggregate_rounding_balance)
: millibits_per_key_(millibits_per_key),
- num_probes_(FastLocalBloomImpl::ChooseNumProbes(millibits_per_key_)) {
+ aggregate_rounding_balance_(aggregate_rounding_balance) {
assert(millibits_per_key >= 1000);
}
@@ -48,33 +51,36 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
}
virtual Slice Finish(std::unique_ptr<const char[]>* buf) override {
+ size_t num_entry = hash_entries_.size();
+ std::unique_ptr<char[]> mutable_buf;
uint32_t len_with_metadata =
- CalculateSpace(static_cast<uint32_t>(hash_entries_.size()));
- char* data = new char[len_with_metadata];
- memset(data, 0, len_with_metadata);
+ CalculateAndAllocate(num_entry, &mutable_buf, /*update_balance*/ true);
- assert(data);
+ assert(mutable_buf);
assert(len_with_metadata >= 5);
+ // Compute num_probes after any rounding / adjustments
+ int num_probes = GetNumProbes(num_entry, len_with_metadata);
+
uint32_t len = len_with_metadata - 5;
if (len > 0) {
- AddAllEntries(data, len);
+ AddAllEntries(mutable_buf.get(), len, num_probes);
}
+ assert(hash_entries_.empty());
+
// See BloomFilterPolicy::GetBloomBitsReader re: metadata
// -1 = Marker for newer Bloom implementations
- data[len] = static_cast<char>(-1);
+ mutable_buf[len] = static_cast<char>(-1);
// 0 = Marker for this sub-implementation
- data[len + 1] = static_cast<char>(0);
+ mutable_buf[len + 1] = static_cast<char>(0);
// num_probes (and 0 in upper bits for 64-byte block size)
- data[len + 2] = static_cast<char>(num_probes_);
+ mutable_buf[len + 2] = static_cast<char>(num_probes);
// rest of metadata stays zero
- const char* const_data = data;
- buf->reset(const_data);
- assert(hash_entries_.empty());
-
- return Slice(data, len_with_metadata);
+ Slice rv(mutable_buf.get(), len_with_metadata);
+ *buf = std::move(mutable_buf);
+ return rv;
}
int CalculateNumEntry(const uint32_t bytes) override {
@@ -84,26 +90,163 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
}
uint32_t CalculateSpace(const int num_entry) override {
- uint32_t num_cache_lines = 0;
- if (millibits_per_key_ > 0 && num_entry > 0) {
- num_cache_lines = static_cast<uint32_t>(
- (int64_t{num_entry} * millibits_per_key_ + 511999) / 512000);
+ // NB: the BuiltinFilterBitsBuilder API presumes len fits in uint32_t.
+ return static_cast<uint32_t>(
+ CalculateAndAllocate(static_cast<size_t>(num_entry),
+ /* buf */ nullptr,
+ /*update_balance*/ false));
+ }
+
+ // To choose size using malloc_usable_size, we have to actually allocate.
+ uint32_t CalculateAndAllocate(size_t num_entry, std::unique_ptr<char[]>* buf,
+ bool update_balance) {
+ std::unique_ptr<char[]> tmpbuf;
+
+ // If not for cache line blocks in the filter, what would the target
+ // length in bytes be?
+ size_t raw_target_len = static_cast<size_t>(
+ (uint64_t{num_entry} * millibits_per_key_ + 7999) / 8000);
+
+ if (raw_target_len >= size_t{0xffffffc0}) {
+ // Max supported for this data structure implementation
+ raw_target_len = size_t{0xffffffc0};
+ }
+
+ // Round up to nearest multiple of 64 (block size). This adjustment is
+ // used for target FP rate only so that we don't receive complaints about
+ // lower FP rate vs. historic Bloom filter behavior.
+ uint32_t target_len =
+ static_cast<uint32_t>(raw_target_len + 63) & ~uint32_t{63};
+
+ // Return value set to a default; overwritten in some cases
+ uint32_t rv = target_len + /* metadata */ 5;
+#ifdef ROCKSDB_MALLOC_USABLE_SIZE
+ if (aggregate_rounding_balance_ != nullptr) {
+ // Do optimize_filters_for_memory, using malloc_usable_size.
+ // Approach: try to keep FP rate balance better than or on
+ // target (negative aggregate_rounding_balance_). We can then select a
+ // lower bound filter size (within reasonable limits) that gets us as
+ // close to on target as possible. We request allocation for that filter
+ // size and use malloc_usable_size to "round up" to the actual
+ // allocation size.
+
+ // Although it can be considered bad practice to use malloc_usable_size
+ // to access an object beyond its original size, this approach should
+ // be quite general: working for all allocators that properly support
+ // malloc_usable_size.
+
+ // Race condition on balance is OK because it can only cause temporary
+ // skew in rounding up vs. rounding down, as long as updates are atomic
+ // and relative.
+ int64_t balance = aggregate_rounding_balance_->load();
+
+ double target_fp_rate = EstimatedFpRate(num_entry, target_len + 5);
+ double rv_fp_rate = target_fp_rate;
+
+ if (balance < 0) {
+ // See formula for BloomFilterPolicy::aggregate_rounding_balance_
+ double for_balance_fp_rate =
+ -balance / double{0x100000000} + target_fp_rate;
+
+ // To simplify, we just try a few modified smaller sizes. This also
+ // caps how much we vary filter size vs. target, to avoid outlier
+ // behavior from excessive variance.
+ for (uint64_t maybe_len64 :
+ {uint64_t{3} * target_len / 4, uint64_t{13} * target_len / 16,
+ uint64_t{7} * target_len / 8, uint64_t{15} * target_len / 16}) {
+ uint32_t maybe_len =
+ static_cast<uint32_t>(maybe_len64) & ~uint32_t{63};
+ double maybe_fp_rate = EstimatedFpRate(num_entry, maybe_len + 5);
+ if (maybe_fp_rate <= for_balance_fp_rate) {
+ rv = maybe_len + /* metadata */ 5;
+ rv_fp_rate = maybe_fp_rate;
+ break;
+ }
+ }
+ }
+
+ // Filter blocks are loaded into block cache with their block trailer.
+ // We need to make sure that's accounted for in choosing a
+ // fragmentation-friendly size.
+ const uint32_t kExtraPadding = kBlockTrailerSize;
+ size_t requested = rv + kExtraPadding;
+
+ // Allocate and get usable size
+ tmpbuf.reset(new char[requested]);
+ size_t usable = malloc_usable_size(tmpbuf.get());
+
+ if (usable - usable / 4 > requested) {
+ // Ratio greater than 4/3 is too much for utilizing, if it's
+ // not a buggy or mislinked malloc_usable_size implementation.
+ // Non-linearity of FP rates with bits/key means rapidly
+ // diminishing returns in overall accuracy for additional
+ // storage on disk.
+ // Nothing to do, except assert that the result is accurate about
+ // the usable size. (Assignment never used.)
+ assert(tmpbuf[usable - 1] = 'x');
+ } else if (usable > requested) {
+ // Adjust for reasonably larger usable size
+ size_t usable_len = (usable - kExtraPadding - /* metadata */ 5);
+ if (usable_len >= size_t{0xffffffc0}) {
+ // Max supported for this data structure implementation
+ usable_len = size_t{0xffffffc0};
+ }
+
+ rv = (static_cast<uint32_t>(usable_len) & ~uint32_t{63}) +
+ /* metadata */ 5;
+ rv_fp_rate = EstimatedFpRate(num_entry, rv);
+ } else {
+ // Too small means bad malloc_usable_size
+ assert(usable == requested);
+ }
+ memset(tmpbuf.get(), 0, rv);
+
+ if (update_balance) {
+ int64_t diff = static_cast<int64_t>((rv_fp_rate - target_fp_rate) *
+ double{0x100000000});
+ *aggregate_rounding_balance_ += diff;
+ }
+ }
+#else
+ (void)update_balance;
+#endif // ROCKSDB_MALLOC_USABLE_SIZE
+ if (buf) {
+ if (tmpbuf) {
+ *buf = std::move(tmpbuf);
+ } else {
+ buf->reset(new char[rv]());
+ }
}
- return num_cache_lines * 64 + /*metadata*/ 5;
+ return rv;
}
- double EstimatedFpRate(size_t keys, size_t bytes) override {
- return FastLocalBloomImpl::EstimatedFpRate(keys, bytes - /*metadata*/ 5,
- num_probes_, /*hash bits*/ 64);
+ double EstimatedFpRate(size_t keys, size_t len_with_metadata) override {
+ int num_probes = GetNumProbes(keys, len_with_metadata);
+ return FastLocalBloomImpl::EstimatedFpRate(
+ keys, len_with_metadata - /*metadata*/ 5, num_probes, /*hash bits*/ 64);
}
private:
- void AddAllEntries(char* data, uint32_t len) {
+ // Compute num_probes after any rounding / adjustments
+ int GetNumProbes(size_t keys, size_t len_with_metadata) {
+ uint64_t millibits = uint64_t{len_with_metadata - 5} * 8000;
+ int actual_millibits_per_key =
+ static_cast<int>(millibits / std::max(keys, size_t{1}));
+ // BEGIN XXX/TODO(peterd): preserving old/default behavior for now to
+ // minimize unit test churn. Remove this some time.
+ if (!aggregate_rounding_balance_) {
+ actual_millibits_per_key = millibits_per_key_;
+ }
+ // END XXX/TODO
+ return FastLocalBloomImpl::ChooseNumProbes(actual_millibits_per_key);
+ }
+
+ void AddAllEntries(char* data, uint32_t len, int num_probes) {
// Simple version without prefetching:
//
// for (auto h : hash_entries_) {
// FastLocalBloomImpl::AddHash(Lower32of64(h), Upper32of64(h), len,
- // num_probes_, data);
+ // num_probes, data);
// }
const size_t num_entries = hash_entries_.size();
@@ -129,7 +272,7 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
uint32_t& hash_ref = hashes[i & kBufferMask];
uint32_t& byte_offset_ref = byte_offsets[i & kBufferMask];
// Process (add)
- FastLocalBloomImpl::AddHashPrepared(hash_ref, num_probes_,
+ FastLocalBloomImpl::AddHashPrepared(hash_ref, num_probes,
data + byte_offset_ref);
// And buffer
uint64_t h = hash_entries_.front();
@@ -141,13 +284,16 @@ class FastLocalBloomBitsBuilder : public BuiltinFilterBitsBuilder {
// Finish processing
for (i = 0; i <= kBufferMask && i < num_entries; ++i) {
- FastLocalBloomImpl::AddHashPrepared(hashes[i], num_probes_,
+ FastLocalBloomImpl::AddHashPrepared(hashes[i], num_probes,
data + byte_offsets[i]);
}
}
+ // Target allocation per added key, in thousandths of a bit.
int millibits_per_key_;
- int num_probes_;
+ // See BloomFilterPolicy::aggregate_rounding_balance_. If nullptr,
+ // always "round up" like historic behavior.
+ std::atomic<int64_t>* aggregate_rounding_balance_;
// A deque avoids unnecessary copying of already-saved values
// and has near-minimal peak memory use.
std::deque<uint64_t> hash_entries_;
@@ -457,7 +603,7 @@ const std::vector<BloomFilterPolicy::Mode> BloomFilterPolicy::kAllUserModes = {
};
BloomFilterPolicy::BloomFilterPolicy(double bits_per_key, Mode mode)
- : mode_(mode), warned_(false) {
+ : mode_(mode), warned_(false), aggregate_rounding_balance_(0) {
// Sanitize bits_per_key
if (bits_per_key < 1.0) {
bits_per_key = 1.0;
@@ -549,6 +695,7 @@ FilterBitsBuilder* BloomFilterPolicy::GetFilterBitsBuilder() const {
FilterBitsBuilder* BloomFilterPolicy::GetBuilderWithContext(
const FilterBuildingContext& context) const {
Mode cur = mode_;
+ bool offm = context.table_options.optimize_filters_for_memory;
// Unusual code construction so that we can have just
// one exhaustive switch without (risky) recursion
for (int i = 0; i < 2; ++i) {
@@ -563,7 +710,8 @@ FilterBitsBuilder* BloomFilterPolicy::GetBuilderWithContext(
case kDeprecatedBlock:
return nullptr;
case kFastLocalBloom:
- return new FastLocalBloomBitsBuilder(millibits_per_key_);
+ return new FastLocalBloomBitsBuilder(
+ millibits_per_key_, offm ? &aggregate_rounding_balance_ : nullptr);
case kLegacyBloom:
if (whole_bits_per_key_ >= 14 && context.info_log &&
!warned_.load(std::memory_order_relaxed)) {
diff --git a/table/block_based/filter_policy_internal.h b/table/block_based/filter_policy_internal.h
index 2ca9dc859..783373b26 100644
--- a/table/block_based/filter_policy_internal.h
+++ b/table/block_based/filter_policy_internal.h
@@ -135,6 +135,16 @@ class BloomFilterPolicy : public FilterPolicy {
// only report once per BloomFilterPolicy instance, to keep the noise down.)
mutable std::atomic<bool> warned_;
+ // State for implementing optimize_filters_for_memory. Essentially, this
+ // tracks a surplus or deficit in total FP rate of filters generated by
+ // builders under this policy vs. what would have been generated without
+ // optimize_filters_for_memory.
+ //
+ // To avoid floating point weirdness, the actual value is
+ // Sum over all generated filters f:
+ // (predicted_fp_rate(f) - predicted_fp_rate(f|o_f_f_m=false)) * 2^32
+ mutable std::atomic<int64_t> aggregate_rounding_balance_;
+
// For newer Bloom filter implementation(s)
FilterBitsReader* GetBloomBitsReader(const Slice& contents) const;
};