path: root/table/scoped_arena_iterator.h
author     Andrew Kryczka <andrewkr@fb.com>  2016-10-18 12:04:56 -0700
committer  Andrew Kryczka <andrewkr@fb.com>  2016-10-18 12:04:56 -0700
commit     6fbe96baf8a4539d800e7c124ca85443a0ca7876 (patch)
tree       7feed6387f5e28be5686d9f2c165fd066344c3a6 /table/scoped_arena_iterator.h
parent     257de78d9b56f1c5a2d30ec942617275a96b9dc3 (diff)
Compaction Support for Range Deletion
Summary: This diff introduces RangeDelAggregator, which takes ownership of iterators provided to it via AddTombstones(). The tombstones are organized in a two-level map (snapshot stripe -> begin key -> tombstone). Tombstone creation avoids data copies by holding Slices returned by the iterator, which remain valid thanks to pinning.

For compaction, we create a hierarchical range tombstone iterator whose structure matches the iterator over the compaction input data. An aggregator based on that iterator is used by CompactionIterator to determine which keys are covered by range tombstones. For merge operands, the same aggregator is used by MergeHelper. Upon finishing each file in the compaction, the relevant range tombstones are added to the output file's range tombstone metablock and the file's boundaries are updated accordingly.

To check whether a key is covered by a range tombstone, RangeDelAggregator::ShouldDelete() considers tombstones in the key's snapshot stripe. When this function is used outside of compaction, it also checks newer stripes, which can contain covering tombstones. Currently the intra-stripe check involves a linear scan; in the future we plan to collapse ranges within a stripe so that binary search can be used (see the first sketch below).

RangeDelAggregator::AddToBuilder() adds all range tombstones in the table's key-range to a new table's range tombstone meta-block. Since range tombstones may fall in the gaps between files, we may need to extend some files' key-ranges. The strategy is: (1) the first file extends as far left as possible and other files do not extend left; (2) all files extend right until either the start of the next file or the end of the last range tombstone in the gap, whichever comes first (see the second sketch below).

One other notable change is adding release/move semantics to ScopedArenaIterator so that it can be used to transfer ownership of an arena-allocated iterator, similar to how unique_ptr is used for malloc'd data.

Depends on D61473

Test Plan: compaction_iterator_test, mock_table, end-to-end tests in D63927

Reviewers: sdong, IslamAbdelRahman, wanning, yhchiang, lightmark

Reviewed By: lightmark

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D62205
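Below is a minimal, hypothetical C++ sketch of the two-level tombstone map and the intra-stripe linear scan described above. All names and types here (RangeTombstone, TombstoneMap, the ShouldDelete signature) are illustrative stand-ins, not the actual RocksDB declarations; in particular, the real type holds pinned Slices rather than owning strings.

#include <cstdint>
#include <map>
#include <string>

// Illustrative stand-in for a range tombstone.
struct RangeTombstone {
  std::string begin_key;  // inclusive
  std::string end_key;    // exclusive
  uint64_t seq;           // sequence number at which the range was deleted
};

// Two-level map: snapshot stripe -> begin key -> tombstone. A stripe is
// keyed here by its upper-bound snapshot sequence number (an assumption
// made for this sketch).
using StripeMap = std::map<std::string, RangeTombstone>;
using TombstoneMap = std::map<uint64_t, StripeMap>;

// Linear scan within the key's stripe, matching the current behavior noted
// above; collapsing ranges so binary search can be used is future work.
bool ShouldDelete(const TombstoneMap& tombstones, const std::string& key,
                  uint64_t key_seq, uint64_t stripe_upper_bound) {
  auto stripe = tombstones.find(stripe_upper_bound);
  if (stripe == tombstones.end()) {
    return false;
  }
  for (const auto& entry : stripe->second) {
    const RangeTombstone& t = entry.second;
    if (key >= t.begin_key && key < t.end_key && t.seq > key_seq) {
      return true;  // a newer range tombstone covers this key
    }
  }
  return false;
}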
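And a companion sketch of key-range extension rules (1) and (2). FileMeta and the gap_tombstone_end representation are assumptions made for illustration, not RocksDB internals:

#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct FileMeta {
  std::string smallest;  // smallest key in the output file
  std::string largest;   // largest key in the output file
};

// files are ordered by key. gap_tombstone_end[i] holds the end key of the
// last range tombstone in the gap after files[i], or "" if no tombstone
// falls in that gap (hypothetical representation).
void ExtendKeyRanges(std::vector<FileMeta>* files,
                     const std::string& leftmost_tombstone_begin,
                     const std::vector<std::string>& gap_tombstone_end) {
  if (files->empty()) return;
  assert(gap_tombstone_end.size() == files->size());
  // (1) Only the first file extends left, as far as the earliest tombstone.
  (*files)[0].smallest =
      std::min((*files)[0].smallest, leftmost_tombstone_begin);
  // (2) Each file extends right until the start of the next file or the end
  // of the last tombstone in the gap, whichever comes first.
  for (size_t i = 0; i < files->size(); ++i) {
    const std::string& end = gap_tombstone_end[i];
    if (end.empty()) continue;
    std::string limit = (i + 1 < files->size())
                            ? std::min((*files)[i + 1].smallest, end)
                            : end;
    (*files)[i].largest = std::max((*files)[i].largest, limit);
  }
}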
Diffstat (limited to 'table/scoped_arena_iterator.h')
-rw-r--r--   table/scoped_arena_iterator.h   8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/table/scoped_arena_iterator.h b/table/scoped_arena_iterator.h
index 68cc144ec..d6183b0c0 100644
--- a/table/scoped_arena_iterator.h
+++ b/table/scoped_arena_iterator.h
@@ -40,10 +40,16 @@ class ScopedArenaIterator {
   }
   InternalIterator* operator->() { return iter_; }
+  InternalIterator* get() { return iter_; }
   void set(InternalIterator* iter) { reset(iter); }
-  InternalIterator* get() { return iter_; }
+  InternalIterator* release() {
+    assert(iter_ != nullptr);
+    auto* res = iter_;
+    iter_ = nullptr;
+    return res;
+  }
   ~ScopedArenaIterator() {
     reset(nullptr);
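For context, a small usage sketch of the release() method added in this hunk, mirroring how unique_ptr::release() transfers ownership of heap data. The Arena setup and MakeIterator() helper are hypothetical stand-ins for real call sites, and the fragment omits includes:

// Hypothetical call site: MakeIterator() stands in for whatever
// placement-constructs an InternalIterator in the arena.
Arena arena;
ScopedArenaIterator scoped(MakeIterator(&arena));  // scoped owns the iterator
InternalIterator* raw = scoped.release();          // ownership transferred
// The new owner must destroy it in place: the iterator lives in the arena,
// so its memory is reclaimed with the arena, never via delete.
raw->~InternalIterator();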