Hello HN,
I'm Ghita, co-founder of ZeroEntropy (YC W25). We build high-accuracy search infrastructure for RAG and AI agents.
We just released two new state-of-the-art rerankers, zerank-1 and zerank-1-small. One of them is fully open-source under Apache 2.0.
We trained these models using a novel Elo-score-inspired pipeline, which we describe in detail in the blog post linked at the end. In a nutshell, the training steps are:

* Collect soft preferences between pairs of documents using an ensemble of LLMs.
* Fit an Elo-style rating system (Bradley-Terry) to turn pairwise comparisons into absolute per-document scores (see the sketch below).
* Normalize relevance scores across queries with a bias-correction step, modeled using cross-query comparisons and solved with maximum-likelihood estimation (MLE).
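To make the Bradley-Terry step concrete, here is a minimal sketch of fitting per-document scores to soft pairwise preferences by gradient descent on the binary cross-entropy. The function name, data, and hyperparameters are illustrative, not ZeroEntropy's actual pipeline:

```python
import numpy as np

def fit_bradley_terry(pairs, n_docs, lr=0.1, epochs=500):
    """Fit Elo-style per-document scores from soft pairwise preferences.

    pairs: list of (i, j, p) where p is the soft probability (e.g. from
    an LLM ensemble) that document i is more relevant than document j.
    Returns a score vector s with sigmoid(s[i] - s[j]) ~= p.
    """
    s = np.zeros(n_docs)
    for _ in range(epochs):
        grad = np.zeros(n_docs)
        for i, j, p in pairs:
            q = 1.0 / (1.0 + np.exp(-(s[i] - s[j])))  # model win prob
            grad[i] += q - p   # d(cross-entropy) / d s[i]
            grad[j] -= q - p   # d(cross-entropy) / d s[j]
        s -= lr * grad
        s -= s.mean()  # scores are shift-invariant; pin the mean at 0
    return s

# Hypothetical example: ensemble prefers doc 0 over 1, and 1 over 2.
pairs = [(0, 1, 0.9), (1, 2, 0.65), (0, 2, 0.95)]
print(fit_bradley_terry(pairs, n_docs=3))
```

Because the loss only depends on score differences, the mean-centering step fixes the gauge freedom; the blog post describes the additional cross-query normalization that turns these per-query scores into comparable absolute relevance values.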
You can try the models either through our API (https://docs.zeroentropy.dev/models) or via HuggingFace (https://huggingface.co/zeroentropy/zerank-1-small).
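For the HuggingFace route, here is a minimal usage sketch assuming the model exposes the standard sentence-transformers cross-encoder interface; check the model card for the exact loading code:

```python
from sentence_transformers import CrossEncoder

# Assumption: the checkpoint loads as a cross-encoder reranker.
model = CrossEncoder("zeroentropy/zerank-1-small", trust_remote_code=True)

query = "What is the capital of France?"
documents = [
    "Paris is the capital and largest city of France.",
    "Berlin is the capital of Germany.",
]

# Each input is a (query, document) pair; higher score = more relevant.
scores = model.predict([(query, doc) for doc in documents])
print(scores)
```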
We would love this community's feedback on the models and the training approach. A full technical report is coming soon.
Thank you!
Blog post: https://www.zeroentropy.dev/blog/improving-rag-with-elo-scores