Nowadays, a common AI tech stack has hundreds of different prompts running across different LLMs.
Three key problems:
- Choice: picking the best LLM for a single prompt out of hundreds is challenging, and you're probably not running the most optimized model for the prompt you wrote.
- Scaling/upgrading: similar to choice, but you want your output to stay consistent even when models are deprecated or configurations change.
- Prompt management is scary: if something works, you never want to touch it, but you should be able to change it without fear of everything breaking.
So we launched Prompt Engine, which automatically runs each of your prompts on the best LLM every time, with tools like internet access built in. You can also store prompts for reuse and caching, which improves performance on every run.
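To make that concrete, here's a rough sketch of what storing a prompt once and then running it through an engine like this could look like. The client shape, method names, and fields below are assumptions for illustration, not the actual SDK surface:

```typescript
// Hypothetical sketch only: the client type, methods, and parameters are
// assumptions, not the real Prompt Engine API.
type PromptEngineClient = {
  create: (p: { prompt: string; inputs: string[] }) => Promise<{ id: string }>;
  run: (p: { id: string; inputValues: Record<string, string> }) => Promise<{ result: string }>;
};

async function summarize(client: PromptEngineClient, articleUrl: string): Promise<string> {
  // Store the prompt once; the engine can reuse and cache it on later runs.
  const { id } = await client.create({
    prompt: "Summarize the page at {url} in three bullet points.",
    inputs: ["url"],
  });

  // Each run is routed to whichever LLM the engine picks for this prompt.
  const { result } = await client.run({ id, inputValues: { url: articleUrl } });
  return result;
}
```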
How does it work?
TL;DR: we built a very small model, trained on datasets comparing hundreds of LLMs, that automatically picks a model based on your prompt.
Here's an article explaining the details: https://jigsawstack.com/blog/jigsawstack-mixture-of-agents-m...
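As a toy illustration of the routing idea (not the actual trained model), you can think of it as scoring candidate models against features of the prompt and picking the highest scorer. The features, weights, and model names here are made up:

```typescript
// Toy router: extract crude prompt features and pick the model whose
// hand-written weights score highest. The real system learns this from
// comparison data across many LLMs.
type Candidate = {
  model: string;
  weights: { length: number; code: number; reasoning: number };
};

const candidates: Candidate[] = [
  { model: "small-fast-model", weights: { length: -0.5, code: 0.1, reasoning: -0.3 } },
  { model: "code-tuned-model", weights: { length: 0.0, code: 0.9, reasoning: 0.2 } },
  { model: "large-reasoning-model", weights: { length: 0.4, code: 0.2, reasoning: 0.9 } },
];

function routePrompt(prompt: string): string {
  // Crude feature extraction standing in for the trained router's learned signals.
  const features = {
    length: Math.min(prompt.length / 2000, 1),
    code: /function\s|class\s|def\s|=>/.test(prompt) ? 1 : 0,
    reasoning: /why|explain|step by step|prove/i.test(prompt) ? 1 : 0,
  };

  let best = candidates[0].model;
  let bestScore = -Infinity;
  for (const c of candidates) {
    const score =
      c.weights.length * features.length +
      c.weights.code * features.code +
      c.weights.reasoning * features.reasoning;
    if (score > bestScore) {
      bestScore = score;
      best = c.model;
    }
  }
  return best;
}
```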
Comments URL: https://news.ycombinator.com/item?id=42339302
Points: 5
# Comments: 0