author    Andrew Kryczka <andrew.kryczka2@gmail.com>    2019-04-22 11:48:45 -0700
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>    2019-04-22 11:51:39 -0700
commit    8272a6de57ed701fb25bb660e074cab703ed3fe7 (patch)
tree      7409a51071b965455309caf98801eb809642fa84 /port
parent    df38c1ce660628f05b4686eeaf0b548295ce7967 (diff)
Optionally wait on bytes_per_sync to smooth I/O (#5183)
Summary:
The existing implementation does not guarantee bytes reach disk every `bytes_per_sync` when writing SST files, or every `wal_bytes_per_sync` when writing WALs. This can cause confusing behavior for users who enable this feature to avoid large syncs during flush and compaction, but then end up hitting them anyway.

My understanding of the existing behavior is we used `sync_file_range` with `SYNC_FILE_RANGE_WRITE` to submit ranges for async writeback, such that we could continue processing the next range of bytes while that I/O is happening. I believe we can preserve that benefit while also limiting how far the processing can get ahead of the I/O, which prevents huge syncs from happening when the file finishes.

Consider this `sync_file_range` usage: `sync_file_range(fd_, 0, static_cast<off_t>(offset + nbytes), SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE)`. Expanding the range to start at 0 and adding the `SYNC_FILE_RANGE_WAIT_BEFORE` flag causes any pending writeback (like from a previous call to `sync_file_range`) to finish before it proceeds to submit the latest `nbytes` for writeback. The latest `nbytes` are still written back asynchronously, unless processing exceeds I/O speed, in which case the following `sync_file_range` will need to wait on it.

There is a second change in this PR to use `fdatasync` when `sync_file_range` is unavailable (determined statically) or has some known problem with the underlying filesystem (determined dynamically).

The above two changes only apply when the user enables a new option, `strict_bytes_per_sync`.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5183

Differential Revision: D14953553

Pulled By: siying

fbshipit-source-id: 445c3862e019fb7b470f9c7f314fc231b62706e9
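To make the mechanism concrete, below is a minimal sketch of the strict "wait on prior writeback, then submit the new range" pattern described above. It is not RocksDB's actual `WritableFileWriter`; `StrictSyncWriter` and its members are hypothetical names, and `sync_file_range` is Linux-specific, with the `fdatasync` branch modeling the fallback the PR mentions.

// Hedged illustration of the write path described in the summary; names
// are hypothetical and buffering/error handling are elided.
#define _GNU_SOURCE 1  // for sync_file_range on glibc
#include <fcntl.h>
#include <unistd.h>
#include <cassert>
#include <cstdint>

class StrictSyncWriter {
 public:
  StrictSyncWriter(int fd, uint64_t bytes_per_sync)
      : fd_(fd), bytes_per_sync_(bytes_per_sync) {}

  // Every bytes_per_sync_ bytes, submit the newly written range for
  // writeback, first waiting on the range submitted by the previous call.
  void Append(const char* data, size_t n) {
    ssize_t written = write(fd_, data, n);
    assert(written == static_cast<ssize_t>(n));
    offset_ += static_cast<uint64_t>(n);
    if (offset_ - last_sync_offset_ >= bytes_per_sync_) {
      RangeSync(last_sync_offset_, offset_ - last_sync_offset_);
      last_sync_offset_ = offset_;
    }
  }

 private:
  void RangeSync(uint64_t offset, uint64_t nbytes) {
#ifdef SYNC_FILE_RANGE_WRITE
    // The range starts at 0 and SYNC_FILE_RANGE_WAIT_BEFORE is set, so this
    // call blocks until writeback submitted by the previous call completes,
    // then submits the latest nbytes asynchronously. Processing therefore
    // never gets more than one range ahead of the I/O.
    sync_file_range(fd_, 0, static_cast<off_t>(offset + nbytes),
                    SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE);
#else
    // Fallback when sync_file_range is unavailable: fdatasync synchronously
    // flushes all of the file's dirty data.
    (void)offset;
    (void)nbytes;
    fdatasync(fd_);
#endif
  }

  int fd_;
  uint64_t bytes_per_sync_;
  uint64_t offset_ = 0;
  uint64_t last_sync_offset_ = 0;
};

As the summary notes, this behavior is opt-in: it takes effect only when `bytes_per_sync` (or `wal_bytes_per_sync`) is set together with the new `strict_bytes_per_sync` option; otherwise `sync_file_range` remains purely asynchronous.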
Diffstat (limited to 'port')
-rw-r--r--port/win/io_win.cc35
1 file changed, 19 insertions, 16 deletions
diff --git a/port/win/io_win.cc b/port/win/io_win.cc
index 128cb60b9..64ded8465 100644
--- a/port/win/io_win.cc
+++ b/port/win/io_win.cc
@@ -383,21 +383,23 @@ Status WinMmapFile::PreallocateInternal(uint64_t spaceToReserve) {
return fallocate(filename_, hFile_, spaceToReserve);
}
-WinMmapFile::WinMmapFile(const std::string& fname, HANDLE hFile, size_t page_size,
- size_t allocation_granularity, const EnvOptions& options)
- : WinFileData(fname, hFile, false),
- hMap_(NULL),
- page_size_(page_size),
- allocation_granularity_(allocation_granularity),
- reserved_size_(0),
- mapping_size_(0),
- view_size_(0),
- mapped_begin_(nullptr),
- mapped_end_(nullptr),
- dst_(nullptr),
- last_sync_(nullptr),
- file_offset_(0),
- pending_sync_(false) {
+WinMmapFile::WinMmapFile(const std::string& fname, HANDLE hFile,
+ size_t page_size, size_t allocation_granularity,
+ const EnvOptions& options)
+ : WinFileData(fname, hFile, false),
+ WritableFile(options),
+ hMap_(NULL),
+ page_size_(page_size),
+ allocation_granularity_(allocation_granularity),
+ reserved_size_(0),
+ mapping_size_(0),
+ view_size_(0),
+ mapped_begin_(nullptr),
+ mapped_end_(nullptr),
+ dst_(nullptr),
+ last_sync_(nullptr),
+ file_offset_(0),
+ pending_sync_(false) {
// Allocation granularity must be obtained from GetSystemInfo() and must be
// a power of two.
assert(allocation_granularity > 0);
@@ -966,7 +968,8 @@ WinWritableFile::WinWritableFile(const std::string& fname, HANDLE hFile,
size_t alignment, size_t /* capacity */,
const EnvOptions& options)
: WinFileData(fname, hFile, options.use_direct_writes),
- WinWritableImpl(this, alignment) {
+ WinWritableImpl(this, alignment),
+ WritableFile(options) {
assert(!options.use_mmap_writes);
}
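The Windows changes above simply thread `EnvOptions` through to the `WritableFile` base class: since `WritableFile` (per this PR) gains a constructor that captures the sync-related settings, each most-derived class such as `WinMmapFile` and `WinWritableFile` must now initialize that base explicitly. A minimal sketch of the pattern follows; the `EnvOptions` fields shown and the exact constructor shape are simplified assumptions, not the real rocksdb declarations.

#include <cstdint>

// Simplified stand-ins for the real rocksdb types.
struct EnvOptions {
  uint64_t bytes_per_sync = 0;
  bool strict_bytes_per_sync = false;  // the new option from this PR
};

class WritableFile {
 public:
  WritableFile() = default;
  // Assumed shape of the constructor this PR adds: record the user's
  // sync preference so the write path can consult it later.
  explicit WritableFile(const EnvOptions& options)
      : strict_bytes_per_sync_(options.strict_bytes_per_sync) {}
  virtual ~WritableFile() = default;

 protected:
  bool strict_bytes_per_sync_ = false;
};

// Mirrors the diff: the platform-specific file class forwards the
// EnvOptions it already receives to the WritableFile base.
class WinWritableFileSketch : public WritableFile {
 public:
  explicit WinWritableFileSketch(const EnvOptions& options)
      : WritableFile(options) {}
};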