Show HN: Speeding up LLM inference 2x (possibly)

Here's a project I've been working on for the last few months.

It's a new (I think) algorithm that lets you smoothly adjust, in real time, how many calculations you'd like to do during inference of an LLM.

It seems that it's possible to do just 20-25% of the weight multiplications instead of all of them, and still get good inference results.
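
To give a rough idea of the kind of approximation involved - this is a simplified numpy sketch, with names and a ranking heuristic of my own choosing for illustration, not the actual implementation described in the write-up linked below:

    import numpy as np

    def approx_matvec(W, x, effort=0.25):
        # Approximate W @ x using only a fraction of the columns.
        # Columns are ranked by |x_i| * ||W[:, i]||, so the multiplications
        # that get skipped are the ones expected to contribute the least.
        col_norms = np.linalg.norm(W, axis=0)    # can be precomputed once per matrix
        scores = np.abs(x) * col_norms           # expected size of each column's contribution
        k = max(1, int(effort * x.size))         # how many columns to keep
        keep = np.argpartition(scores, -k)[-k:]  # indices of the top-k columns
        return W[:, keep] @ x[keep]              # only k * rows multiplications

    W = np.random.randn(4096, 4096).astype(np.float32)
    x = np.random.randn(4096).astype(np.float32)
    approx = approx_matvec(W, x, effort=0.25)    # ~25% of the multiplications
    exact = W @ x
    print(np.corrcoef(exact, approx)[0, 1])      # how close we stayed to the full result

The column selection has to be redone for every input vector, but that's a cheap top-k over a single vector compared to the 75-80% of multiplications it lets you skip.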

I implemented it to run on Apple M1/M2/M3 GPUs. The matrix multiplication approximation itself can be pushed to run about 2x faster before the output quality collapses.

The overall inference speed is just a bit faster than Llama.cpp's, because the rest of the implementation could be better optimized, but with further development I think it can become a new method to speed up inference - in addition to quantization.

You could call it ad-hoc model distillation :)

You can change a model's speed/accuracy trade-off at will, in real time.
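
In terms of the sketch above (and continuing it - this reuses approx_matvec and W from there, and is still only an illustration), nothing about the weights changes when the dial moves; the effort value is just an argument, so it can differ on every call:

    # The effort value can change between calls, with no re-loading,
    # re-sorting, or re-quantizing of W in between.
    for effort in (1.0, 0.5, 0.25, 0.5):
        x = np.random.randn(4096).astype(np.float32)
        y = approx_matvec(W, x, effort=effort)   # dial moved mid-run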

Oh, and as a side effect, the data format also lets you choose how much of the model to load into memory. You can decide to skip, say, 10%, 20%, or 40% of the least important weights.
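
Roughly speaking - and this is a simplified in-memory stand-in, not the real loader or file format - if weight entries are stored sorted by magnitude, loading can just stop early after the fraction you asked for. Something like:

    import numpy as np

    def drop_least_important(W, keep=0.8):
        # Zero out the (1 - keep) fraction of entries with the smallest magnitude.
        # A real loader would store entries pre-sorted by |value| and simply stop
        # reading after keep * W.size of them; this only mimics the end result.
        k = int(keep * W.size)
        threshold = np.partition(np.abs(W).ravel(), W.size - k)[W.size - k]
        return np.where(np.abs(W) >= threshold, W, 0.0).astype(W.dtype)

    W = np.random.randn(1024, 1024).astype(np.float32)
    W80 = drop_least_important(W, keep=0.8)      # ~20% of the weights skipped
    print((W80 == 0).mean())                     # fraction of weights not "loaded"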

It's implemented for Mistral, and it has also been lightly tested on Mixtral and Llama. It's FP16-only for now, but Q8 support is in the works.

The algorithm is described here, and the implementation is open source.

https://kolinko.github.io/effort/

I know these are bold claims, but I hope they survive the scrutiny :)


Here's a demo: https://asciinema.org/a/piP22yYwcaohu5cA2gyuv1W61
