I've spent the past few months designing a framework for orchestrating multiple large language models in parallel: not to pick the "best" one, but to let them argue, mix their outputs, and preserve dissent structurally.
It's called Maestro; here's the whitepaper: https://github.com/d3fq0n1/maestro-orchestrator (narrative version here: https://defqon1.substack.com/p/maestro-a-framework-for-coher...)
Core ideas:
Prompts are dispatched to multiple LLMs (e.g., GPT-4, Claude, open-source models)
The system compares their outputs and synthesizes them
It never resolves into a single voice: each run ends with a 66% rule, where 2 of 3 votes select a primary output and the 1 dissenting output is preserved alongside it (see the sketch after this list)
Human critics and analog verifiers can be triggered for physical-world confirmation (when claims demand grounding)
The feedback loop learns not only from right/wrong outputs, but from which kinds of disagreement lead to deeper truth
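
To make the 66% rule concrete, here is a minimal Python sketch of a three-model dispatch-and-vote step. It is an illustration under my own assumptions, not code from the repo: the `synthesize` function, the `Verdict` type, and the use of `difflib.SequenceMatcher` as the agreement heuristic are hypothetical stand-ins for whatever comparison and synthesis Maestro actually performs.

```python
# Hypothetical sketch of a 2-of-3 synthesis step. The model callables,
# the Verdict shape, and the similarity heuristic are assumptions.
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import Callable

@dataclass
class Verdict:
    primary: str      # output backed by at least 2 of the 3 models
    dissent: str      # minority output, preserved rather than discarded
    agreement: float  # similarity score between the two concurring outputs

def synthesize(prompt: str, models: list[Callable[[str], str]]) -> Verdict:
    """Dispatch one prompt to three models and apply the 66% rule."""
    assert len(models) == 3, "the 2-votes / 1-dissent rule assumes a trio"
    outputs = [m(prompt) for m in models]   # would run in parallel in practice

    # Treat the closest pair of outputs as the 2 concurring "votes".
    def sim(a: str, b: str) -> float:
        return SequenceMatcher(None, a, b).ratio()

    pairs = [(0, 1), (0, 2), (1, 2)]
    i, j = max(pairs, key=lambda p: sim(outputs[p[0]], outputs[p[1]]))
    k = ({0, 1, 2} - {i, j}).pop()          # index of the dissenting model

    return Verdict(
        primary=outputs[i],                 # or a merge of outputs[i] and outputs[j]
        dissent=outputs[k],
        agreement=sim(outputs[i], outputs[j]),
    )

# Usage with stub callables standing in for GPT-4, Claude, and a local model.
if __name__ == "__main__":
    stub = lambda reply: (lambda prompt: reply)
    verdict = synthesize(
        "Why is the sky blue?",
        [stub("Rayleigh scattering of shorter wavelengths."),
         stub("Shorter (blue) wavelengths scatter more: Rayleigh scattering."),
         stub("Because it reflects the ocean.")],
    )
    print("Primary:", verdict.primary)
    print("Dissent preserved:", verdict.dissent)
```

The structural point is that the dissent is a first-class field of the result, not something logged and thrown away.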
Maestro isn’t a product or API — it’s a proposal for an open, civic layer of synthetic intelligence. It’s designed for epistemic integrity and resistance to centralized control.
Would love thoughts, critiques, or collaborators.
Comments URL: https://news.ycombinator.com/item?id=44109664