Article URL: https://plainframework.com/
Comments URL: https://news.ycombinator.com/item?id=43512589
Points: 49
# Comments: 32
Created: Mar 29, 2025, 05:40:03 (2 months ago)

I discovered that in LLM inference, keys and values in the KV cache have very different quantization sensitivities. Keys need higher precision than values to maintain quality.
I patched llama.cpp
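The claimed asymmetry (keys tolerate less quantization error than values) can be illustrated with a toy uniform quantizer. This is a minimal, hypothetical sketch in plain Python, not llama.cpp's actual quantization kernels; all names here are invented for illustration:

```python
# Toy illustration: round-trip a vector through uniform symmetric
# quantization at different bit widths and compare reconstruction error.

def quantize_dequantize(values, bits):
    """Quantize to `bits`-bit signed integers with a per-tensor scale,
    then dequantize back to floats."""
    levels = 2 ** (bits - 1) - 1               # e.g. 127 for 8-bit
    scale = max(abs(v) for v in values) / levels
    return [round(v / scale) * scale for v in values]

def mean_abs_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# A toy "key" vector. Key error perturbs every attention logit it is
# dotted against, which is the intuition for keeping keys at higher
# precision than values.
keys = [0.9, -0.3, 0.45, -0.8, 0.12, 0.67, -0.55, 0.21]

err_8bit = mean_abs_error(keys, quantize_dequantize(keys, 8))
err_4bit = mean_abs_error(keys, quantize_dequantize(keys, 4))
print(err_8bit < err_4bit)  # coarser quantization gives larger error
```

In practice, llama.cpp exposes separate cache types for K and V, so a mixed setup (e.g. higher-precision keys, lower-precision values) is expressible without code changes.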
Article URL: https://clojurescript.org/news/2025-05-16-release

Article URL: https://bobacollection.staxmuseum.org/
Comments URL: https://news.y