We wrote our inference engine in Rust; it is faster than llama.cpp in all of our use cases. Your feedback is very welcome. It was written from scratch with the idea that you can add support for any kernel and platform.
Comments URL: https://news.ycombinator.com/item?id=44570048
Points: 72
# Comments: 23
Created 9h ago | Jul 15, 2025, 16:50:31

Article URL: https://mistral.ai/news/voxtral
Comments URL: https://news.ycombinator.com
Here's the loop from today
https://atlas.niu.edu/analysis/radar/midwest/midwest_radar_b...

Article URL: https://go.dev/blog/fips140
Comments URL: https://news.ycombinator.com/item?id


Article URL: https://the-open-source-ward.ghost.io