We wrote our inference engine in Rust; it is faster than llama.cpp in all of our use cases. Your feedback is very welcome. It is written from scratch with the idea that you can add support for any kernel and platform.
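The "any kernel and platform" design can be pictured as a trait that every backend implements, with the engine dispatching to whichever backend is available. This is a minimal hypothetical sketch, not the project's actual API; the names `Kernel`, `CpuKernel`, and `select_kernel` are illustrative assumptions:

```rust
// Hypothetical sketch of a pluggable-kernel design (not the real API):
// each backend implements the same ops behind one trait.
trait Kernel {
    fn name(&self) -> &'static str;
    /// y = W x for a row-major m-by-n matrix W and a length-n vector x.
    fn matvec(&self, w: &[f32], x: &[f32], m: usize, n: usize) -> Vec<f32>;
}

struct CpuKernel;

impl Kernel for CpuKernel {
    fn name(&self) -> &'static str {
        "cpu"
    }
    fn matvec(&self, w: &[f32], x: &[f32], m: usize, n: usize) -> Vec<f32> {
        (0..m)
            .map(|i| (0..n).map(|j| w[i * n + j] * x[j]).sum())
            .collect()
    }
}

/// Pick a kernel for the current platform; new backends
/// (SIMD, GPU, etc.) would simply be added to this selection.
fn select_kernel() -> Box<dyn Kernel> {
    Box::new(CpuKernel)
}

fn main() {
    let k = select_kernel();
    // 2x2 identity matrix times [3, 4] -> [3, 4]
    let y = k.matvec(&[1.0, 0.0, 0.0, 1.0], &[3.0, 4.0], 2, 2);
    println!("{} -> {:?}", k.name(), y); // prints: cpu -> [3.0, 4.0]
}
```

Adding a new platform then means implementing the trait once, with no changes to the engine's call sites.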
Comments URL: https://news.ycombinator.com/item?id=44570048
Points: 72
# Comments: 23
Posted 1 month ago | 15 Jul 2025, 16:50:31