We built any-llm because we needed a lightweight router for LLM providers with minimal overhead. Switching between models is just a string change: update "openai/gpt-4" to "anthropic/claude-3" and you're done.
It uses official provider SDKs when available, so each provider handles its own compatibility updates. There's no proxy or gateway service to run either, so getting started is straightforward: pip install and import.
Currently supports 20+ providers including OpenAI, Anthropic, Google, Mistral, and AWS Bedrock. Would love to hear what you think!
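The provider-prefix routing idea can be sketched roughly like this. This is an illustrative Python sketch of how a "provider/model" string can dispatch to a per-provider handler; the names (`split_model_id`, `PROVIDERS`, the handler signatures) are hypothetical and not any-llm's actual API:

```python
# Illustrative sketch of string-keyed provider routing; not any-llm's real API.

def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider/model' string into (provider, model)."""
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model

# Hypothetical registry mapping provider prefixes to SDK call wrappers.
# In a real router each entry would wrap the official provider SDK.
PROVIDERS = {
    "openai": lambda model, messages: f"[openai:{model}] ...",
    "anthropic": lambda model, messages: f"[anthropic:{model}] ...",
}

def completion(model_id: str, messages):
    """Route a chat completion request based on the model string's prefix."""
    provider, model = split_model_id(model_id)
    try:
        handler = PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}") from None
    return handler(model, messages)
```

With this shape, swapping "openai/gpt-4" for "anthropic/claude-3" changes only the dispatch target; the calling code stays identical.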
Comments URL: https://news.ycombinator.com/item?id=44650567
Points: 56
# Comments: 43