Nowadays, a common AI tech stack runs hundreds of different prompts across multiple LLMs.
Three key problems:
- Choice: picking the best LLM for a single prompt out of hundreds is hard, and you're probably not running each prompt on the model best suited to it.
- Scaling and upgrading: similar to choice, but you want your output to stay consistent even as models get deprecated or configurations change.
- Prompt management is scary: once something works, you never want to touch it, but you should be able to change it without fear of everything breaking.
So we launched Prompt Engine, which automatically runs each of your prompts on the best LLM for it every single time, with tools like internet access built in. You can also store prompts for reusability and caching, which speeds up every subsequent run.
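To make the store-and-reuse idea concrete, here is a minimal sketch of a prompt store with result caching. The names (`PromptStore`, `create`, `run`) are illustrative assumptions, not the actual JigsawStack API; the point is only how caching lets repeated runs with the same inputs skip the model call entirely.

```python
# Hypothetical sketch of prompt storage + caching.
# PromptStore, create, and run are made-up names for illustration.
import hashlib
import json


class PromptStore:
    def __init__(self):
        self._prompts = {}  # prompt_id -> template string
        self._cache = {}    # (prompt_id, input hash) -> cached result

    def create(self, prompt_id, template):
        """Store a reusable prompt template, e.g. 'Summarize: {text}'."""
        self._prompts[prompt_id] = template

    def run(self, prompt_id, inputs, llm_call):
        """Run a stored prompt; identical inputs hit the cache instead of the LLM."""
        key = (prompt_id, hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest())
        if key in self._cache:
            return self._cache[key]          # cache hit: no model call
        rendered = self._prompts[prompt_id].format(**inputs)
        result = llm_call(rendered)          # cache miss: actually run it
        self._cache[key] = result
        return result
```

In this sketch, the second run with identical inputs returns instantly from the cache, which is the performance gain the post describes.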
How does it work?
tl;dr: we built a really small model, trained on datasets comparing hundreds of LLMs, that automatically picks a model based on your prompt.
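As a rough intuition for that routing step, here is a toy router that scores candidate models against simple prompt features. The model names, features, and weights are all invented for illustration; the real system uses a small trained model over comparison datasets, not hand-written rules.

```python
# Toy illustration of per-prompt model routing.
# Model names and feature weights are assumptions, as if distilled
# from head-to-head comparisons; the real router is a trained model.

def extract_features(prompt):
    p = prompt.lower()
    return {
        "code": any(w in p for w in ("code", "function", "debug", "python")),
        "math": any(w in p for w in ("solve", "equation", "calculate")),
        "long": len(p.split()) > 200,
    }

# How strongly each (hypothetical) model benefits from each feature.
MODEL_WEIGHTS = {
    "code-specialist":    {"code": 2.0, "math": 0.5, "long": 0.2},
    "reasoning-model":    {"code": 0.5, "math": 2.0, "long": 0.5},
    "long-context-model": {"code": 0.1, "math": 0.1, "long": 2.0},
    "general-model":      {"code": 0.3, "math": 0.3, "long": 0.3},
}
BASE_SCORE = {"general-model": 0.5}  # fall back to the generalist


def pick_model(prompt):
    """Return the highest-scoring model for this prompt's features."""
    feats = extract_features(prompt)
    def score(model):
        w = MODEL_WEIGHTS[model]
        return BASE_SCORE.get(model, 0.0) + sum(
            w[f] for f, on in feats.items() if on)
    return max(MODEL_WEIGHTS, key=score)
```

A coding prompt routes to the code specialist, a math prompt to the reasoning model, and anything featureless falls back to the generalist; a trained router does the same thing, but learned from benchmark comparisons rather than keyword rules.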
Here's an article explaining the details: https://jigsawstack.com/blog/jigsawstack-mixture-of-agents-m...
Comments URL: https://news.ycombinator.com/item?id=42339302