Show HN: Any-LLM – Lightweight router to access any LLM Provider

We built any-llm because we needed a lightweight router for LLM providers with minimal overhead. Switching between models is just a string change: update "openai/gpt-4" to "anthropic/claude-3" and you're done.

It uses official provider SDKs when available, so compatibility updates are handled by the providers themselves. There's no proxy or gateway service to run either, so getting started is straightforward - just pip install and import.
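Usage looks roughly like this (a rough sketch: the exact package name, import path, and completion() helper are assumptions on my part - check the repo README for the real API):

    # pip install any-llm   (package name may differ; see the repo)
    from any_llm import completion  # assumed import path

    messages = [{"role": "user", "content": "Say hello in one sentence."}]

    # Call OpenAI
    resp = completion(model="openai/gpt-4", messages=messages)
    print(resp.choices[0].message.content)  # assumes an OpenAI-style response object

    # Switch providers by changing only the model string
    resp = completion(model="anthropic/claude-3", messages=messages)
    print(resp.choices[0].message.content)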

Currently supports 20+ providers including OpenAI, Anthropic, Google, Mistral, and AWS Bedrock. Would love to hear what you think!


Comments URL: https://news.ycombinator.com/item?id=44650567

Points: 56

# Comments: 43

https://github.com/mozilla-ai/any-llm

Created 17h ago | 22 Jul 2025, 20:20:22

