Rust build caching is usually discussed as a speed problem. In practice, it is also a storage problem.
Once a Rust codebase gets large enough, you end up recompiling a big dependency tree in CI and across branches — but you also end up storing the same artifacts again and again across target/ directories. Git worktrees make that especially visible: each worktree gets its own build output, so a few active branches can consume a surprising amount of disk space without doing anything unusual.
What felt missing to us was something like a next-generation sccache: still a drop-in RUSTC_WRAPPER, but built to do more than skip compiles. We wanted one cache story that reduces duplicate artifacts across worktrees, behaves sanely on macOS, and can extend into CI without changing how people use Cargo.
That was the problem that led us to build kache.
Highlights:
- Keep using normal Cargo workflows through a drop-in RUSTC_WRAPPER
- Store artifacts by blake3 content hash instead of duplicating them across worktrees
- Restore hits with hardlinks, so repeated .rlib outputs take one physical copy on disk
- See cache health, misses, and deduplicated bytes in a config and monitoring TUI
- Get sensible macOS defaults with Spotlight and Time Machine exclusions for the local store
- Carry the same cache into CI with S3-compatible storage or kache-action
Why we built it
We use Rust heavily, we use git worktrees a lot, and we got tired of watching target/ directories grow everywhere. Existing cache setups helped with some rebuilds, but they did not solve the part that bothered us most locally: equivalent artifacts still ended up duplicated across worktrees and build directories.
So we built a cache with a narrower goal:
- Skip recompiling dependency crates when nothing meaningful changed
- Keep one local copy of equivalent artifacts when possible
- Reuse artifacts across target/ directories without spraying copies around
- Make the same cache usable in CI when needed
What kache is
kache is a RUSTC_WRAPPER for Rust.
You install it, set it as your wrapper, and keep using Cargo normally.
You can set it via an environment variable:
export RUSTC_WRAPPER=kache
Or persist it in ~/.cargo/config.toml:
[build]
rustc-wrapper = "kache"
For each crate compilation, kache computes a deterministic cache key from the build inputs, checks whether it already has the corresponding artifacts, and either restores them or lets the compile proceed and stores the result afterward.
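That per-crate decision can be sketched roughly as follows. This is a stand-in, not kache's actual code: the real key is a blake3 hash over the full set of build inputs, while here a few representative fields are hashed with std's DefaultHasher and the store is an in-memory map.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Stand-in cache key: kache hashes the real build inputs with blake3;
// here we hash a few representative fields with std's DefaultHasher.
fn cache_key(rustc_version: &str, crate_args: &[&str], source_digest: &str) -> u64 {
    let mut h = DefaultHasher::new();
    rustc_version.hash(&mut h);
    crate_args.hash(&mut h);
    source_digest.hash(&mut h);
    h.finish()
}

// Hit: restore the stored artifact. Miss: run the compile, then store
// the result. Returns the artifact and whether it was a hit.
fn compile_or_restore(
    store: &mut HashMap<u64, Vec<u8>>,
    key: u64,
    compile: impl FnOnce() -> Vec<u8>,
) -> (Vec<u8>, bool) {
    if let Some(artifact) = store.get(&key) {
        (artifact.clone(), true) // cache hit, skip the compile
    } else {
        let artifact = compile();
        store.insert(key, artifact.clone()); // cache miss, store for next time
        (artifact, false)
    }
}

fn main() {
    let mut store = HashMap::new();
    let key = cache_key("rustc 1.80.0", &["--crate-name", "foo"], "abc123");
    let (_, hit1) = compile_or_restore(&mut store, key, || b"libfoo.rlib".to_vec());
    let (_, hit2) = compile_or_restore(&mut store, key, || unreachable!());
    println!("first build hit: {hit1}, second build hit: {hit2}");
}
```

Because the key is derived only from the inputs, any worktree that compiles the same crate with the same compiler and flags lands on the same entry.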
At a high level:
- One content-addressed local store — artifacts keyed deterministically with blake3
- Hardlink-based restore — fast hits without duplicating files across worktrees
- Built-in visibility — monitoring, config, cleanup, and diagnostics instead of blind cache behavior
- Optional S3 sync — share artifacts across machines and CI when local-only is no longer enough
The important part is that it stays close to the normal Cargo workflow. No new build system, no config files to maintain — just a wrapper.
Why hardlinks matter
The main design choice in kache is the use of hardlinks for local cache hits.
The local problem is not just "can we avoid recompiling this crate?" — it is also "why do we have several copies of the same .rlib spread across worktrees?"
With hardlinks, kache keeps one physical copy in the local store and links it into multiple target/ directories. For teams working with several branches at once, that changes the cost model quite a bit. Instead of materializing the same dependency artifacts as separate files per worktree, you store them once.
~/.cache/kache/store/
└── ab/cd1234... ← single copy on disk
project/
├── main/target/debug/deps/libfoo.rlib → hardlink
├── feature-a/target/debug/deps/libfoo.rlib → hardlink
└── feature-b/target/debug/deps/libfoo.rlib → hardlink
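A restore under this layout can be sketched in a few lines. This is a minimal stand-in, not kache's implementation: it links one stored file into a destination path and falls back to a plain copy if the hardlink fails, which is what happens when store and destination sit on different filesystems.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Link a stored artifact into a target/ directory. Hardlinks cannot cross
// filesystems, so fall back to a copy when linking fails.
// Returns true when a hardlink was used.
fn restore(store_path: &Path, dest: &Path) -> io::Result<bool> {
    if dest.exists() {
        fs::remove_file(dest)?; // hard_link refuses to overwrite
    }
    match fs::hard_link(store_path, dest) {
        Ok(()) => Ok(true),
        Err(_) => {
            fs::copy(store_path, dest)?;
            Ok(false)
        }
    }
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("kache-demo");
    fs::create_dir_all(&dir)?;
    let store = dir.join("store-libfoo.rlib");
    fs::write(&store, b"artifact")?;
    let linked = restore(&store, &dir.join("worktree-libfoo.rlib"))?;
    println!("hardlinked: {linked}");
    Ok(())
}
```

Each additional worktree that restores the same artifact costs a directory entry and an inode reference, not another copy of the file's bytes.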
That is the part we cared about most. S3 support is useful. The TUI is useful. But the hardlink-based local layout is the thing that makes kache feel different in day-to-day Rust work.
What kache adds
If you already use sccache, the interesting question is not "can kache also hit a cache?" It can. The more useful question is what kache adds around that basic idea.
The main difference is that kache treats local build artifacts as something to manage properly, not just something to reuse opportunistically. It gives you one content-addressed local store, restores hits with hardlinks across worktrees, avoids a class of macOS-specific pain points, and makes the cache inspectable through a monitor, config UI, diagnostics, and cleanup commands.
So this is not really "sccache, but again." It is a cache built around local deduplication, observability, and practical team workflows, while still fitting in the same RUSTC_WRAPPER slot.
If you are already on sccache, kache also has an explicit migration path: kache doctor --fix can update the wrapper setup, and kache doctor --fix --purge-sccache can additionally remove the old cache and binary.
Local first, shared when needed
kache is useful as a local-only cache even if you never touch remote storage. That is the simplest setup, and for many Rust engineers it is the right place to start: install it, set RUSTC_WRAPPER, build as usual, and let the local store absorb repeated dependency builds.
If you want to share the cache across machines or CI, kache can sync to any S3-compatible backend — AWS S3, MinIO, Ceph, Cloudflare R2, and others.
This is where the tool becomes useful beyond one laptop:
- CI populates common dependency artifacts on each run
- Developers pull those artifacts instead of rebuilding from scratch
- Teams share one remote cache without changing how they run Cargo
There is also a GitHub Action for this, so GitHub Actions users do not need to wire the wrapper, restore, and save steps by hand.
macOS notes
macOS was part of the motivation too.
Rust build directories can get especially awkward there for two reasons:
- Incremental compilation and git worktrees can interact badly on APFS
- Spotlight and Time Machine happily spend effort on build output you do not actually care about
kache handles the first one by disabling incremental compilation while it is active, and it still strips incremental flags even when caching is disabled. That is partly a cache-design choice, and partly a way to avoid APFS-related worktree corruption.
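Stripping those flags before forwarding to rustc looks roughly like this. The exact flag handling in kache is an assumption here; the sketch covers the two shapes the option can take on a rustc command line, a single "-Cincremental=dir" token or "-C" followed by "incremental=dir".

```rust
// Drop incremental-compilation flags from a rustc argument list before
// forwarding it. Hypothetical sketch of what a wrapper like kache does;
// not its actual code.
fn strip_incremental(args: &[String]) -> Vec<String> {
    let mut out = Vec::new();
    let mut iter = args.iter().peekable();
    while let Some(arg) = iter.next() {
        if arg.starts_with("-Cincremental=") {
            continue; // fused form: one token carries flag and value
        }
        if arg == "-C" {
            if let Some(next) = iter.peek() {
                if next.starts_with("incremental=") {
                    iter.next(); // split form: drop the value token too
                    continue;
                }
            }
        }
        out.push(arg.clone());
    }
    out
}

fn main() {
    let args: Vec<String> = ["--edition=2021", "-C", "incremental=target/inc", "-O"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    println!("{:?}", strip_incremental(&args));
}
```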
kache also excludes its own store (~/.cache/kache) from Time Machine and Spotlight automatically on macOS. If you keep many worktrees around, it is often worth doing the same for each worktree's target/ directory:
tmutil addexclusion path/to/worktree/target
touch path/to/worktree/target/.metadata_never_index
The first line tells Time Machine not to back up throwaway build artifacts. The second drops the same Spotlight sentinel that kache uses for its own store, which keeps indexing from burning cycles on directories that will be blown away anyway.
One more practical detail: hardlinks only work within one filesystem. If your worktrees live on a different APFS volume than ~/.cache/kache, kache can still restore artifacts, but it has to copy instead of hardlinking. On macOS setups with external volumes or split workspaces, it is worth keeping that in mind.
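You can check this constraint directly: two paths allow hardlinks between them only if they report the same device ID. A small Unix-only probe (not part of kache):

```rust
use std::fs;
use std::io;
use std::os::unix::fs::MetadataExt;
use std::path::Path;

// Hardlinks only work within one filesystem: compare st_dev for the cache
// store and a target/ directory to predict whether restores will hardlink
// or fall back to copying. Unix-only (uses MetadataExt::dev).
fn same_filesystem(a: &Path, b: &Path) -> io::Result<bool> {
    Ok(fs::metadata(a)?.dev() == fs::metadata(b)?.dev())
}

fn main() -> io::Result<()> {
    let tmp = std::env::temp_dir();
    println!("tmp vs tmp: {}", same_filesystem(&tmp, &tmp)?);
    Ok(())
}
```

Run it against ~/.cache/kache and a worktree's target/ to see which case you are in.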
CI with kache-action
The action is intentionally small from the user's point of view, but it does a bit more than a one-line example suggests: it installs kache, sets RUSTC_WRAPPER, restores a cache backend, starts the daemon early for warming, saves the cache after the build, and writes a job summary. On pull requests it can also keep a sticky comment updated with hit rate and cache misses.
For a team setup using S3-compatible storage, a workflow can look like this:
permissions:
contents: read
pull-requests: write
jobs:
rust:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: zondax/kache-action@v1
with:
s3-bucket: rust-build-cache
s3-endpoint: https://minio.internal:9000
s3-access-key-id: ${{ secrets.S3_ACCESS_KEY_ID }}
s3-secret-access-key: ${{ secrets.S3_SECRET_ACCESS_KEY }}
manifest-key: release-${{ runner.os }}-${{ runner.arch }}
warm: "true"
min-compile-ms: "1500"
- run: cargo build --release --locked
- run: cargo test --locked
The useful detail here is not just "use the action." manifest-key scopes warmed artifacts, so release, test, and clippy jobs can share one bucket without stepping on each other, and min-compile-ms keeps the action from prefetching crates that are cheaper to rebuild than to download.
If you do not have S3 set up yet, the same action can fall back to GitHub Actions cache. That is a good on-ramp for open-source repos, while S3-compatible storage is usually the better fit for larger caches, faster restores, and sharing across multiple machines or repositories.
Tooling around the cache
kache is not just a wrapper binary. The surrounding tooling is part of what makes it usable day to day:
Two commands matter more than their names suggest: kache monitor makes cache behavior visible in real time, including hits, misses, and deduplicated bytes, and kache config gives you an interactive way to manage store size, remote settings, and other options without hand-editing config files.
The command surface stays compact:
$ kache --help
Commands:
monitor Live TUI dashboard for hits, misses, and dedup stats
config Interactive TUI for cache and remote configuration
list Inspect cached crates and cache entries
sync Pull and push artifacts to remote storage
doctor Diagnose setup and migrate from sccache
clean Find and remove target/ directories
gc Garbage-collect the local store
purge Remove cache entries or wipe the store
daemon Manage the background service
kache clean deserves a mention on its own — it comes from the same real problem as the cache itself. Rust build output accumulates quietly, and at some point you want a fast way to see where the space went and reclaim it.
Getting started
Install kache with whichever path fits your environment:
# mise
mise use -g github:kunobi-ninja/kache@latest
# cargo-binstall
cargo binstall kache
# from source
cargo install --git https://github.com/kunobi-ninja/kache
Then set it as your wrapper:
export RUSTC_WRAPPER=kache
Or persist it in ~/.cargo/config.toml:
[build]
rustc-wrapper = "kache"
For remote sharing, configure an S3-compatible backend and use kache sync. Details are in the README.
Why we are open-sourcing it
We built kache for our own development and CI workflows at Zondax. Over time it stopped feeling like an internal utility and started feeling like a tool other Rust teams might also find useful — especially teams working with large dependency trees, multiple worktrees, and shared CI pipelines.
So we are publishing it as open source. Not because Rust needs a grand theory of build caching, but because this specific combination has been useful for us:
- Content-addressed artifacts
- Hardlinks for local deduplication
- Optional S3 for team sharing
- Zero change to the Cargo workflow
If that matches the problems you are dealing with, kache may be worth a look.
And if you want to help shape it, contributions are welcome: bug reports, edge cases, performance traces, workflow feedback, docs improvements, and PRs to both the core cache and the GitHub Action are all useful.
GitHub: kunobi-ninja/kache
GitHub Action: kunobi-ninja/kache-action