I recently needed to read and write a 64,000 x 64,000 array of u64 values to disk.
64,000 * 64,000 = 4.1 billion entries * 8 bytes = 32.8 GB loaded on every test run,
well within the range where micro-optimization pays off.
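As a quick sanity check, the size arithmetic works out like this (a minimal sketch; the grid side length is the 64,000 from above):

```rust
fn main() {
    // 64,000 x 64,000 grid of u64 cells.
    let side: u64 = 64_000;
    let entries = side * side; // 4,096,000,000 entries
    let bytes = entries * 8;   // 8 bytes per u64
    println!("{entries} entries, {:.1} GB", bytes as f64 / 1e9); // 32.8 GB
}
```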
Achieving superfast load times at these sizes in Rust turns out to be a surprisingly hard puzzle, though.
The file content is a complete xy array of Entity IDs, stored as usize array indexes.
I will use u64 and usize interchangeably (they are the same size on 64-bit targets).
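The u64/usize equivalence can be checked directly; this is a small sketch assuming a 64-bit target, where the cast in both directions is lossless:

```rust
use std::mem::size_of;

fn main() {
    // On 64-bit targets, usize and u64 have identical size, so Entity IDs
    // can be stored on disk as u64 and used directly as array indexes.
    assert_eq!(size_of::<usize>(), size_of::<u64>());

    let id: u64 = 123;
    let index = id as usize; // lossless when usize is 64 bits
    assert_eq!(index as u64, id);
}
```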
The use case is pathfinding in my facto-loop-miner Factorio mod.
Sure, this problem has other solutions. But it's 2023: we have
fast SK hynix P41 NVMe drives rated at 7 GB/s and 256 GB of RAM!
We have AWS m7gd.2xlarge instances!
"Write memory as-is to disk" should still be an easy, practical solution.
This post documents the increasing complexity required to obtain better performance.
It includes cargo bench benchmarks for my full file read/write use case.
It excludes page cache concerns and higher-level Rust IO crates.
I will explore the built-in low-level std Rust and libc Linux APIs
to understand how they work, from safe copy to mmap to io_uring.
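As a starting point, the simplest "safe copy" approach reads the whole file into a byte buffer and copies it into a `Vec<u64>`. This is a minimal sketch, not the post's final code; the file path, tiny grid, and `load_grid` helper are placeholders for illustration:

```rust
use std::fs;

// Baseline safe-copy load: read all bytes, then copy into a Vec<u64>.
// The on-disk format is raw native-endian u64 values, back to back.
fn load_grid(path: &str) -> std::io::Result<Vec<u64>> {
    let bytes = fs::read(path)?;
    let mut grid = Vec::with_capacity(bytes.len() / 8);
    for chunk in bytes.chunks_exact(8) {
        grid.push(u64::from_ne_bytes(chunk.try_into().unwrap()));
    }
    Ok(grid)
}

fn main() -> std::io::Result<()> {
    // Round-trip a tiny placeholder grid to show the format.
    let data: Vec<u64> = vec![1, 2, 3, 42];
    let raw: Vec<u8> = data.iter().flat_map(|v| v.to_ne_bytes()).collect();
    fs::write("/tmp/grid.bin", &raw)?;

    let loaded = load_grid("/tmp/grid.bin")?;
    assert_eq!(loaded, data);
    Ok(())
}
```

Every byte here is copied at least twice (kernel to buffer, buffer to `Vec`), which is exactly the overhead the later approaches try to eliminate.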