Hey HN! We’ve just open-sourced model2vec-rs, a Rust crate for loading and running Model2Vec static embedding models with zero Python dependency. It lets you embed text at very high throughput, for example in a Rust-based microservice or CLI tool, for semantic search, retrieval, RAG, or any other text embedding use case.
Main Features:
- Rust-native inference: Load any Model2Vec model from Hugging Face or a local path with StaticModel::from_pretrained(...); see the sketch after this list.
- Tiny footprint: The crate itself is only ~1.7 MB, with embedding models between 7 and 30 MB.
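Loading and encoding is a two-call affair. Here's a minimal sketch (simplified; the model id is just one example, and the README has the authoritative signature and full set of options):

    use model2vec_rs::model::StaticModel;

    fn main() {
        // Load a Model2Vec model from the Hugging Face Hub or a local path.
        // The trailing arguments are optional settings (HF token,
        // normalization override, subfolder); None uses the defaults.
        let model = StaticModel::from_pretrained(
            "minishlab/potion-base-8M",
            None,
            None,
            None,
        )
        .expect("failed to load model");

        // Encode a batch of sentences into Vec<Vec<f32>> embeddings.
        let sentences = vec!["Hello world".to_string()];
        let embeddings = model.encode(&sentences);
        println!("embedding dimension: {}", embeddings[0].len());
    }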
Performance:
We benchmarked single-threaded encoding throughput on CPU:
- Python: ~4650 embeddings/sec
- Rust: ~8000 embeddings/sec (~1.7× speedup)
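If you want a rough number on your own machine, a quick timing loop like the one below will do (illustrative only: the model id is an example, this is not the exact harness behind the figures above, and throughput depends heavily on hardware, model size, and sentence length):

    use std::time::Instant;
    use model2vec_rs::model::StaticModel;

    fn main() {
        let model = StaticModel::from_pretrained("minishlab/potion-base-8M", None, None, None)
            .expect("failed to load model");

        // Synthetic corpus of 10k short sentences.
        let texts: Vec<String> = (0..10_000)
            .map(|i| format!("sample sentence number {i}"))
            .collect();

        // Time a single encode pass and report embeddings/sec.
        let start = Instant::now();
        let embeddings = model.encode(&texts);
        let secs = start.elapsed().as_secs_f64();
        println!("{:.0} embeddings/sec", embeddings.len() as f64 / secs);
    }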
This is our first open-source Rust project, so we'd love to get some feedback!
Comments URL: https://news.ycombinator.com/item?id=44021883