Executive Summary
A community video by Pixel Operative
flagged Windrose as writing 15–30 MB/s to the SSD
during normal gameplay, with sailing producing a
near-flat sustained 30 MB/s. The claim was based on Task
Manager disk activity graphs from a single client. This
report reaches the same underlying observation through a
different path: direct analysis of the RocksDB save folder
plus the Unreal Engine 5 client log
(R5.log), which captures the game's own
internal instrumentation.
The evidence is consistent with a persistence layer tuned for maximum durability at the cost of write volume, not a corruption risk. The game runs multiple RocksDB instances concurrently, each containing many column families behind a shared 1 MB write-ahead log. That combination forces frequent memtable flushes across column families, amplifying a modest stream of logical writes into a much larger stream of physical disk activity. Whether the tuning is a deliberate design choice or an unintentionally conservative default cannot be determined from this data alone — only the outcome is observable.
The observable trade-off is sustained background disk traffic that strains shared server hosts, may shorten endurance on hot-loop SSD hardware, and inflates perceived I/O relative to logical gameplay state changes.
Analysis inputs
- Live RocksDB save directory (Worlds/<world-id-a>…), 9 days of play, 66 live SST files plus MANIFEST, OPTIONS, and three rotated LOG files.
- Matching UE5 client log R5.log covering a single 4 h 8 min session (2026-04-22 22:17 → 2026-04-23 02:25 local).
- The session window is fully contained in the save's latest SST timestamp range, allowing direct correlation between log events and file-system artifacts.
Persistence Architecture
The game does not run a single save database. From the
R5LogBLDalAQ category in the engine log,
Windrose opens at least three concurrent RocksDB
instances, each behind its own async write queue and
worker thread.
| Instance | Path | Worker thread |
|---|---|---|
| Players | 0.10.0/Players/<player-id>… | 43252 → 2652 → 44492 |
| Accounts | 0.10.0/Accounts/<account-id>… | 28396 |
| Worlds | 0.10.0/Worlds/<world-id-a>… | (per-world worker) |
Each database runs through the same abstraction stack:
R5BLDalAsyncQueue enqueues work items, and
R5BLRocksAsyncQueue drains them into the
underlying RocksDB. The queues are independent, so disk
traffic observed in a single save folder represents only
one of multiple concurrent I/O sources. A player with
multiple islands adds one more Worlds instance per
island; the profile in this capture has two
(<world-id-a>… and <world-id-b>…).
RocksDB file numbers are a reasonable activity proxy rather than an exact flush/compaction count — the same allocator is also used for WAL files, MANIFEST rotations, and other internal artifacts. Even taken as a proxy, the numbers are informative: the Worlds DB alone allocated roughly 1,065,000 file numbers over the 9-day span, while only 66 SSTs remain on disk. That ratio implies the large majority of emitted files have already been rewritten and deleted by compaction. During the measured 136-minute active session the allocation rate stabilized at roughly 8.6 events per second for the Worlds DB instance.
The 1 MB WAL Cap
The single most consequential tuning decision is the
maximum write-ahead log size. Every one of the
instances above is configured identically, via an
OPTIONS file written by RocksDB at DB
creation.
DBOptions (from OPTIONS-982899)
```
max_total_wal_size = 1048576                      // 1 MB
bytes_per_sync = 0                                // no OS-level pacing
wal_bytes_per_sync = 0
max_background_jobs = 2                           // flush + compaction combined
use_direct_io_for_flush_and_compaction = false
use_fsync = false                                 // fdatasync instead
wal_recovery_mode = kPointInTimeRecovery
paranoid_checks = true
verify_sst_unique_id_in_manifest = true
compaction_verify_record_count = true
flush_verify_memtable_count = true
ttl = 2592000                                     // 30 days
rocksdb_version = 10.4.2
```
A 1 MB total WAL budget is extreme by any reasonable
baseline. Typical RocksDB deployments size the WAL
between 64 MB and 1 GB. Per the RocksDB documentation,
max_total_wal_size only takes effect when
the DB has more than one column family — otherwise WAL
sizing is dictated by write_buffer_size
alone. The Worlds DB in this capture has
22 column families, so the option is
empirically active:
```
default                      R5LargeObjects             R5BLIsland
R5BLBuilding                 R5BLIslandChest            R5BLCrop
R5BLActor_DamageableFoliage  R5BLActor_DialogueActor    R5BLActor_Drop
R5BLActor_ExplodingBarrel    R5BLDynamicGenericActor    R5BLStaticGenericActor
R5BLIslandShipDock           R5BLActor_PickupResource   R5BLPlayerInWorld
R5BLActor_DigNode            R5BLActor_DigVolume        R5BLActor_MineralNode
R5BLResourceSpawnPoint       R5BLGameplaySpawner        R5BLActorScenarioSave
R5BLActor_BuildingBlock
```
Every column family has its own memtable and its own
write_buffer_size of 64 MB, but they share
a single WAL capped at 1 MB. When accumulated writes
across any column families push the WAL past
the cap, RocksDB force-flushes the memtables of the
column families whose data is present in the oldest
live WAL. With 22 column families sharing a 1 MB
budget, this condition is reached frequently, and each
forced flush can emit multiple L0 SSTs — one per
flushed column family.
The L0 outputs then feed the compaction pipeline. Leveled compaction in RocksDB typically shows a write-amplification factor in the range of 20–30× user data in published studies, which aligns with the order-of-magnitude gap between plausible in-game logical write rates and the 15–30 MB/s physical rates the community video captured.
Flush cascade
At a modest 1 MB/s of logical writes, this configuration forces approximately one flush per second. The flush emits an L0 SST that must eventually be merged into L1, then L2, then L3 — each level rewriting every byte it touches. With the LSM-tree's standard 10× per-level amplification, a 1 MB/s logical stream produces tens of MB/s of physical writes, which matches the disk activity the community video captured.
With the WAL capped at 1 MB and no bytes_per_sync throttling, the engine cannot batch writes into fewer, larger flushes. Each MB of game state produces its own flush, its own SST, and its own downstream compaction work, per database instance.
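The cascade can be put into numbers. A back-of-envelope model of the estimate above, with the WAL cap, level count, and per-level amplification all assumed for illustration rather than measured:

```python
def lsm_write_model(logical_mb_s: float,
                    wal_cap_mb: float = 1.0,
                    levels: int = 3,
                    amp_per_level: float = 10.0) -> tuple[float, float]:
    """Rough flush frequency and physical write rate for a leveled LSM.

    Every parameter here is an assumption for illustration, not a value
    read from the game or from RocksDB statistics.
    """
    # The shared WAL cap forces a flush cycle roughly every wal_cap_mb
    # of accumulated writes across all column families.
    flushes_per_sec = logical_mb_s / wal_cap_mb
    # Each byte is written once to the WAL, once at flush (L0), and then
    # rewritten up to amp_per_level times per level by compaction.
    write_amp = 1 + 1 + levels * amp_per_level
    return flushes_per_sec, logical_mb_s * write_amp

flushes, physical = lsm_write_model(1.0)   # 1 MB/s of logical writes
# -> ~1 flush/s and ~32 MB/s physical, the same order of magnitude
#    as the 15-30 MB/s the community video captured
```

The point of the sketch is the shape, not the exact constant: the WAL cap sets the flush frequency, and the level count multiplies every logical byte into tens of physical ones.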
Several other flags reinforce the durability bias:
paranoid_checks,
verify_sst_unique_id_in_manifest,
compaction_verify_record_count, and
flush_verify_memtable_count are all on. The
WAL recovery mode is
kPointInTimeRecovery, the strictest option.
The configuration is internally consistent with a
single goal: minimize the window of possible data loss,
at any disk cost.
Write Amplification on Disk
Write amplification is usually inferred from RocksDB's internal statistics. In this save, those statistics are not directly available (see § 06 · Backup Procedure for why), so the argument rests on three indirect indicators: the gap between file numbers allocated and files still live on disk, the size of the MANIFEST relative to the live data, and the distribution of SST sizes across compaction levels.
Live vs. allocated
The ratio of allocated-to-surviving file numbers exceeds 16,000:1. Even treating file numbers as a proxy rather than an exact flush/compaction count (see § 02), the gap is large enough to imply that the large majority of emitted files have been superseded by compaction. The MANIFEST itself, which records every file edit, has grown to 5.9 MB — unusually large for a live database holding fewer than 20 MB of actual data.
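The allocated-versus-live gap is cheap to recompute on any save. A sketch, assuming only RocksDB's standard `<number>.sst` naming; on the Worlds DB in this capture it reports 66 live files and a span close to the 1,065,667 quoted above (the exact published figure also counts non-SST file numbers, so treat the result as a proxy):

```python
import re
from pathlib import Path

def allocation_ratio(db_dir: str) -> tuple[int, int, float]:
    """Live SST count vs. the file-number span RocksDB has allocated.

    The span is a proxy for files ever emitted, not an exact
    flush/compaction count: the same allocator also numbers WAL files,
    MANIFEST rotations, and other internal artifacts.
    """
    nums = [int(p.stem) for p in Path(db_dir).glob("*.sst")
            if re.fullmatch(r"\d+", p.stem)]
    if not nums:
        return 0, 0, 0.0
    live = len(nums)
    span = max(nums) - min(nums)
    return live, span, span / live
```

A ratio in the thousands, as here, means the overwhelming majority of files the database ever wrote have already been compacted away.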
The Write Queue During Play
The RocksDB LOG files on disk capture only
DB open/shutdown banners (see
§ 06 · Backup Procedure), but the
UE5 client log preserves the DAL layer's view of the
same workload. The
R5BLDalAsyncQueue::DetectProblems handler
fires whenever a task exceeds its latency threshold.
Write queue growth (Players DB worker, single session)
| Timestamp | Elapsed | Queued items | Total task # | Avg rate |
|---|---|---|---|---|
| 02:53:05 | 35 min | 480 | 1,456 | ~42 / min |
| 03:37:53 | 80 min | 1,522 | 4,615 | ~70 / min |
| 04:07:56 | 110 min | 2,330 | 7,039 | ~81 / min |
Two things stand out. First, the rate of committed tasks climbs across the session — from 42/min to 81/min — so activity accelerates rather than stabilizing. Second, the queue depth nearly quintuples (480 → 2,330) over the same interval. The write pipeline is accepting work faster than it can drain, which is the structural condition that produces sustained background I/O long after an individual gameplay event has concluded.
Slow-commit detections in the same session
```
02:53:05 R5BLDalAQ: Quite slow task. Task was finished in 320 ms. commitT
03:37:53 R5BLDalAQ: Slow task. Task was finished in 690 ms. commitT  (Warning)
04:07:56 R5BLDalAQ: Quite slow task. Task was finished in 292 ms. commitT
```
Commit latencies of 290–690 ms on a local SSD indicate
the RocksDB instance is periodically stalling — likely
during L0-to-L1 compaction bursts when
max_background_jobs=2 leaves no headroom
for concurrent flush and compaction. The devs have
built threshold-based detection for exactly this, and
it fired three times in four hours during the
captured session.
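Excerpts like these can be tallied mechanically from a full log. A sketch, with the regex shaped to the excerpt shown in this section; the real R5 log format may differ in detail:

```python
import re

# Assumed line shape, modeled on the slow-task excerpt above.
SLOW_RE = re.compile(r"finished in (\d+) ms\.\s+(\w+)")

def parse_slow_tasks(lines):
    """Extract (latency_ms, task_type) pairs from R5BLDalAQ slow-task lines."""
    out = []
    for line in lines:
        m = SLOW_RE.search(line)
        if m:
            out.append((int(m.group(1)), m.group(2)))
    return out

sample = [
    "02:53:05 R5BLDalAQ: Quite slow task. Task was finished in 320 ms. commitT",
    "03:37:53 R5BLDalAQ: Slow task. Task was finished in 690 ms. commitT",
]
# parse_slow_tasks(sample) -> [(320, 'commitT'), (690, 'commitT')]
```

Run across a whole session log, the same pattern gives a latency histogram instead of three hand-picked data points.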
Co-op context caveat
The R5.log capture comes from a co-op session, not a
pure solo session. Log evidence:
R5LogIceProtocol creating STUN/TURN P2P
connections at session start, the co-op map
GenlandiaMulty loaded,
LogNet: Welcomed by server confirming a
peer-to-peer join, and 218 LogNet: Warning
entries over four hours showing co-op replication
churn (repeated
No owning connection for actor warnings
on ship and player-state actors). There was one
P2P handshake at session start and no additional
full-peer joins visible in the captured window.
Two implications for the numbers above: (1) the initial join burst is inside the first 35-minute sample, so the 35 → 110 min growth from 42 to 81 commits/min is not join-driven — it reflects steady-state co-op activity; but (2) absolute rates are co-op-inflated relative to a true solo session. A solo capture taken for comparison would let us separate "co-op replication overhead" from "base game persistence." That capture was not taken for this report.
Memory pressure concurrent with the I/O
In the first ten minutes of gameplay, the in-game
R5ResourcesMemoryLeakDetector fired five
times, flagging average memory growth rates above the
5 MB/s threshold:
```
02:25:15  growth 13.33 MB/s
02:32:26  growth  6.37 MB/s
02:33:27  growth 13.09 MB/s
02:35:21  growth  5.03 MB/s
02:36:22  growth 13.19 MB/s
```
Most of this is UE5 asset streaming on the
GenlandiaMulty map, not the RocksDB
queue. But the coexistence is worth noting:
unbounded in-memory growth plus a growing DAL queue is
the pattern that eventually produces the sustained
30 MB/s disk rate observed externally — the game
continuously generates more state to persist than the
tuned-for-durability write pipeline can drain in a
single pass.
Backup Procedure
The rotated RocksDB LOG files in the save
folder have misled at least one observer (including this
one, on first pass) into suspecting frequent DB
reopens during gameplay. The client log clarifies the
actual behavior.
R5LogCoopProxy at session start
```
02:17:17.518 UR5CoopProxy::RollBackups        Existing backups num: 5
02:17:17.518 UR5CoopProxyClient::MakeBackup   /_Backups/20260422_221717
02:17:17.518 MakeBackup R5BLIsland
02:17:17.584 Backup record R5BLIsland[<world-id-b>…]
02:17:17.585 MakeBackup R5BLAccount
02:17:17.593 MakeAccountDescriptionBackup
02:17:17.594 Backup record R5BLPlayer[<player-id>…]
02:17:17.602 Backup successfully created
```
- Backups run on each game launch, not on a wall-clock timer. The game keeps a rolling window of 5 previous backups.
- Each backup iterates every root collection (R5BLIsland, R5BLAccount, R5BLPlayer) and opens each underlying RocksDB briefly. This is what creates the rapid open/shutdown cycles visible in the rotated LOG.old.* files (three rotations in 325 ms).
- A complete backup finishes in ~84 ms for the profile sampled. The operation is cheap; the side-effect on the RocksDB info log is what makes post-hoc analysis of the info log ineffective.
The evidence points to aggressive durability-oriented RocksDB tuning rather than corruption or a runaway write loop. The 1 MB shared WAL against 22 column families, paranoid verification flags, strict point-in-time recovery mode, and concurrent per-collection databases are all consistent with keeping the crash-recovery replay window small. Whether that tuning was a deliberate product choice or an accidental legacy default cannot be settled without developer comment; the observable outcome is the same either way.
What this means for players
- No corruption risk. The configuration is internally coherent with durability-first intent, and the video's own creator reports 60 hours of corruption-free play.
- SSD endurance is the real concern. Consumer TLC SSDs are rated for hundreds of TBW; sustained 15–30 MB/s during gameplay is well within normal consumer-drive budgets over typical ownership periods, but is meaningful on QLC drives or older drives near end of life.
- Shared hosting will struggle. The queue-growth pattern means cheap VPS and shared dedicated hosts with contended I/O are genuinely unsuitable for hosting this game. Self-hosting, or providers with dedicated uncontended I/O, are the viable options.
Tuning dials the devs could turn
- max_total_wal_size: 1 MB → 64 MB would reduce flush frequency by ~64×, with only a 64 MB worst-case recovery replay.
- bytes_per_sync: 0 → 1 MB would pace writes to the OS instead of bursting, smoothing disk utilization without affecting durability.
- max_background_jobs: 2 → 4 would eliminate most commit-latency stalls by letting flush and compaction proceed concurrently.
- level0_file_num_compaction_trigger: raised from 4, this would absorb more flushes before kicking off L0→L1 compaction work.
None of these changes require code work — only tuning. Each reduces disk pressure without weakening the durability model in any game-relevant way. The worst-case exposure after a 64 MB WAL switch is still measured in milliseconds of replayed writes.
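Applied to the OPTIONS layout reproduced in Appendix A, the four dials would read as follows. Illustrative values only, not a tested configuration; the trigger value of 8 in particular is a guess at a reasonable step up from the default 4:

```
[DBOptions]
max_total_wal_size = 67108864            // 64 MB: flushes ~64x less often
bytes_per_sync = 1048576                 // pace writeback in 1 MB increments
max_background_jobs = 4                  // flush and compaction can overlap

[CFOptions "default"]
level0_file_num_compaction_trigger = 8   // absorb more L0 files per compaction
```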
Can players tune this themselves?
Likely no, based on the evidence available. The
OPTIONS-<N> files visible in the
save folder are written by RocksDB on
every DB open, not read; editing them in place
does not persist because the next open overwrites
with the values the game supplies in code. The
game's external Config directory on disk is
effectively empty — the R5 log shows
pakchunk3-release-steam-game-coop.pak
mounted at R5/Config/, so any INI
configuration ships inside the pak rather than as
loose files, and pak-packed config is not
user-editable without repack tooling.
A definitive test (not run for this report) takes
about a minute: edit
max_total_wal_size in the current
OPTIONS file to a larger value (for example
67108864 for 64 MB), launch the game
briefly and exit cleanly, then inspect the newly
written OPTIONS-<higher-number>
file. If the edited value persists, the game
respects external OPTIONS-file loading — rare but
possible. If it reverts to
1048576, the configuration is set
in code and user-side tuning is not available
without a mod. Harmless either way: RocksDB
writes a fresh OPTIONS file on every open, so the
experiment does not mutate live state.
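The before/after comparison in that test can be scripted. A sketch, assuming only the OPTIONS-&lt;N&gt; naming visible in the save folder; run it before the edit and again after the relaunch:

```python
import re
from pathlib import Path

def current_wal_cap(db_dir: str):
    """Read max_total_wal_size from the newest OPTIONS-<N> file in db_dir.

    Returns the integer value, or None if no OPTIONS file or no such
    key is found. If the value reverts to 1048576 after a relaunch, the
    game sets it in code and user-side tuning is unavailable.
    """
    opts = [p for p in Path(db_dir).glob("OPTIONS-*")
            if p.name.split("-", 1)[1].isdigit()]
    if not opts:
        return None
    latest = max(opts, key=lambda p: int(p.name.split("-", 1)[1]))
    m = re.search(r"max_total_wal_size\s*=\s*(\d+)", latest.read_text())
    return int(m.group(1)) if m else None
```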
Methodology note
This analysis was performed against a live save
snapshot plus a matching engine log from a single
session. The RocksDB info LOG files
on disk capture only the shutdown-time open/close
cycles — the game reopens the Players DB during
shutdown (confirmed by
R5BLRocksAsyncQueue close/reopen
events in the R5 log), rotating the gameplay
session's LOG out of the
keep_log_file_num = 3 buffer before
the folder can be copied. Both surviving
LOG.old.* rotation timestamps in this
capture land within a 325 ms window at session
shutdown, confirming the effect. There is no
copy-after-exit window that would recover the
gameplay LOG.
Per-event byte counts and exact compaction-level
statistics therefore require live capture during
gameplay rather than post-hoc file copying.
Practical options: procmon or ETW
traces on the RocksDB folder, a sidecar file
watcher (FileSystemWatcher,
watchdog) logging size deltas, or —
with enough persistence — raising
keep_log_file_num in OPTIONS and
checking whether the game preserves the edit
across launches. None of those were run for this
report.
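Of those options, the sidecar watcher is the least invasive. A stdlib polling sketch, standing in for the watchdog/FileSystemWatcher approach mentioned above; the folder path is whatever RocksDB directory is under test:

```python
import os
import time

def snapshot(folder: str) -> dict:
    """Map file name -> size for every regular file in the folder."""
    return {e.name: e.stat().st_size
            for e in os.scandir(folder) if e.is_file()}

def bytes_written(prev: dict, cur: dict) -> int:
    """New bytes since the last snapshot: growth of surviving files plus
    the full size of newly created ones. Deletions are ignored, because
    compaction deletes SSTs without un-writing them."""
    return sum(max(size - prev.get(name, 0), 0) for name, size in cur.items())

def watch(folder: str, interval: float = 1.0, rounds: int = 10) -> None:
    """Print an approximate write rate for `rounds` polling intervals."""
    prev = snapshot(folder)
    for _ in range(rounds):
        time.sleep(interval)
        cur = snapshot(folder)
        print(f"{bytes_written(prev, cur) / interval / 1e6:6.2f} MB/s")
        prev = cur
```

Polling undercounts writes that land and are compacted away inside one interval, so treat the output as a floor on the true physical rate, not an exact figure.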
Evidence Appendix
The claims in this report rest on four artifact classes:
the live RocksDB save folder, the embedded
OPTIONS and MANIFEST files,
the rotated RocksDB info LOG files, and
the Unreal Engine 5 R5.log client log for
the correlated session. This appendix reproduces the
specific command outputs and excerpts the narrative
sections rely on.
A · RocksDB configuration (OPTIONS excerpt)
Values drawn verbatim from
OPTIONS-982899, the current OPTIONS file
for the Worlds DB.
```
[Version]
rocksdb_version = 10.4.2

[DBOptions]
max_total_wal_size = 1048576              // 1 MB
bytes_per_sync = 0
wal_bytes_per_sync = 0
max_background_jobs = 2
max_background_compactions = -1
max_background_flushes = -1
use_direct_io_for_flush_and_compaction = false
use_fsync = false
wal_recovery_mode = kPointInTimeRecovery
wal_compression = kNoCompression
paranoid_checks = true
verify_sst_unique_id_in_manifest = true
compaction_verify_record_count = true
flush_verify_memtable_count = true
allow_concurrent_memtable_write = true

[CFOptions "default"]    // representative; same values repeat across all 22 CFs
write_buffer_size = 67108864              // 64 MB per CF
max_write_buffer_number = 2
min_write_buffer_number_to_merge = 1
level0_file_num_compaction_trigger = 4
max_bytes_for_level_base = 268435456      // 256 MB
max_bytes_for_level_multiplier = 10
ttl = 2592000                             // 30 days
soft_pending_compaction_bytes_limit = 68719476736     // 64 GB
hard_pending_compaction_bytes_limit = 274877906944    // 256 GB
```
B · Column families (Worlds DB)
22 column families are defined in the OPTIONS file, one
[CFOptions "<name>"] section each.
This count satisfies RocksDB's documented requirement
for max_total_wal_size to be an effective
flush trigger.
```
default                      R5LargeObjects             R5BLIsland
R5BLBuilding                 R5BLIslandChest            R5BLCrop
R5BLActor_DamageableFoliage  R5BLActor_DialogueActor    R5BLActor_Drop
R5BLActor_ExplodingBarrel    R5BLDynamicGenericActor    R5BLStaticGenericActor
R5BLIslandShipDock           R5BLActor_PickupResource   R5BLPlayerInWorld
R5BLActor_DigNode            R5BLActor_DigVolume        R5BLActor_MineralNode
R5BLResourceSpawnPoint       R5BLGameplaySpawner        R5BLActorScenarioSave
R5BLActor_BuildingBlock
```
C · SHA-256 verification of same-size SSTs
Three groups of surviving SSTs share identical byte sizes within the group. SHA-256 hashes are distinct within every group — same size, different content. Reported in full for reproducibility.
```
Group 1 — 1,661,300 bytes
1b3f0a014237442eb7ee6ae1d62958d2b77eb8104afec17d783f74f633cd8185  1086472.sst
5b89fd401530c2e00ded09bc9c7909e2e60f32f0513ce181a5d65817fc4adef1  1086495.sst
ea1923ca6def5d0b2ebc93705cd531a0074a64a6aa36b441a91961fcf34272e7  1086518.sst

Group 2 — 18,070 bytes
641ba328d51247e6b3b3ebce10cec507f74671c73c71e01777b2d8d9a97226fe  1086387.sst
f6f49fe6251fcf8ab74df5a4ba74e1ae575f712c3c049ef19d998f762054bd08  1086483.sst
17227ef2c26b4aa52cd93cc0e9717d0f6d309c67642444eafb6235a641a4724f  1086527.sst

Group 3 — 1,218 bytes
b04c1064cd1a08664d2cb9f7d881c4c12b1d44d8d5fa2d8fee6236a4cfbb01e5  020909.sst
f2320ac90378bfffeeaa838453e8d6de16311f3fb2d5ec791a030919e6fb37fd  300578.sst
b8682c3251cd61689f68ba9663a8df8eb965d3fdcb3b049582ced40aa1f974a0  441321.sst
343c987639f3d1678acccab6c719a04440be97f03eae493d0dfc2b607fb961c6  1077894.sst
3a5206d6c1f7b30e4915abd42f3d6a48e5bc11be231fc25672bf1e1f8a38abf4  1078215.sst
```
The non-matching hashes rule out the initial hypothesis
that these were the same payload rewritten across
compaction levels. A plausible alternative is that
compaction is targeting consistent output sizes per
level — which is consistent with RocksDB's default
target-file-size behavior — and is unrelated to
repeated rewrites of the same key range. Settling the
question would require either sst_dump
output or key-range / sequence-number overlap analysis
from the MANIFEST.
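The verification in this appendix reduces to group-by-size, then hash. A sketch of the equivalent check, stdlib only; point it at any RocksDB folder:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def same_size_groups(db_dir: str, min_group: int = 2) -> dict:
    """Group live SSTs by byte size, then SHA-256 each group member.

    Identical hashes inside a group would mean byte-identical SSTs;
    distinct hashes (the result in this capture) mean same size but
    different content. Returns {size: {filename: hexdigest}}.
    """
    by_size = defaultdict(list)
    for p in Path(db_dir).glob("*.sst"):
        by_size[p.stat().st_size].append(p)
    return {
        size: {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
               for p in paths}
        for size, paths in by_size.items() if len(paths) >= min_group
    }
```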
D · File-system summary of the Worlds DB
Live SSTs on disk: 66
Total live SST bytes: 19,470,508 (18.57 MiB)
MANIFEST size: 5,945,606 (5.67 MiB)
File-number range: 020,861 → 1,086,528
File-number span: 1,065,667
SST timestamps (active window): 2026-04-23 00:09 → 02:25 local
Active-session allocations: 70,106 file numbers in 136 min
≈ 515 / min ≈ 8.6 / sec
E · DAL async-queue growth (Players DB, single session)
Extracted from R5LogBLDalAQ::DetectProblems
entries. Format in the log is
[s: queued: totalTaskID: taskType].
```
elapsed   queued   total-task#   latency   type
───────   ──────   ───────────   ───────   ─────
 35 min      480         1,456    320 ms   commitT
 80 min    1,522         4,615    690 ms   commitT (Warning)
110 min    2,330         7,039    292 ms   commitT

Rate between samples:
  0 →  35 min:  1,456 tasks in 35 min = ~42 / min
 35 →  80 min:  3,159 tasks in 45 min = ~70 / min
 80 → 110 min:  2,424 tasks in 30 min = ~81 / min
```
F · Memory-leak detector alerts (first 10 min of gameplay)
```
02:25:15 R5ResourcesMemoryLeakDetector growth 13.33 MB/s  (threshold 5.00)
02:32:26 R5ResourcesMemoryLeakDetector growth  6.37 MB/s
02:33:27 R5ResourcesMemoryLeakDetector growth 13.09 MB/s
02:35:21 R5ResourcesMemoryLeakDetector growth  5.03 MB/s
02:36:22 R5ResourcesMemoryLeakDetector growth 13.19 MB/s
World: Client -1 (/Game/Maps/GYM/Genlandia/GenlandiaMulty.GenlandiaMulty)
```
G · What this appendix does not contain
- Per-event compaction byte counts. The RocksDB info LOG rotated out during the launch backup procedure, leaving only DB-open/shutdown banners. A clean session exit followed by an immediate save-folder copy is the capture path for that data.
- Key-level content comparison of the same-size SSTs. sst_dump --output=compressed would surface key ranges and sequence numbers; it was not run here.
- A solo-session baseline. The R5.log capture is from a co-op session (client joined a remote P2P host at 02:23 into GenlandiaMulty). Rates are therefore co-op-inflated relative to a true solo session, and the delta cannot be quantified without a matching solo capture.
- Server-side traces. All data in this report is from a client installation; shared-host behavior on rented dedicated servers is inferred, not measured.