I work on research at Chroma, and I just published our latest technical report on context rot.
TLDR: Model performance is non-uniform across context lengths, even for state-of-the-art models such as GPT-4.1, Claude 4, Gemini 2.5, and Qwen3.
This highlights the need for context engineering. Whether relevant information is present in a model’s context is not all that matters; what matters more is how that information is presented.
Here is the complete open-source codebase to replicate our results: https://github.com/chroma-core/context-rot
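To make the length-sensitivity claim concrete, here is a minimal needle-in-a-haystack style sketch: the same fact is embedded at different positions inside filler text of increasing length, and a model is asked to recover it. The filler text, needle, and ask_model callable are hypothetical placeholders for illustration, not the benchmark or code used in the report.

    # Minimal sketch of a context-length sensitivity probe (illustrative only).
    from typing import Callable

    NEEDLE = "The access code for the vault is 7421."
    QUESTION = "What is the access code for the vault?"
    FILLER = "The committee reviewed the quarterly logistics summary. "

    def build_prompt(total_words: int, needle_position: float) -> str:
        """Pad the needle with filler to roughly `total_words` words,
        placing the needle at a relative position in [0, 1]."""
        filler_words = FILLER.split()
        n_pad = max(total_words - len(NEEDLE.split()), 0)
        pad = [filler_words[i % len(filler_words)] for i in range(n_pad)]
        cut = int(len(pad) * needle_position)
        context = " ".join(pad[:cut] + NEEDLE.split() + pad[cut:])
        return f"{context}\n\nQuestion: {QUESTION}\nAnswer:"

    def run_probe(ask_model: Callable[[str], str]) -> None:
        """Sweep context length and needle position; report whether the
        model still recovers the needle as the context grows."""
        for total_words in (100, 1_000, 10_000):
            for pos in (0.0, 0.5, 1.0):
                answer = ask_model(build_prompt(total_words, pos))
                print(f"words={total_words:>6} pos={pos:.1f} correct={'7421' in answer}")

    if __name__ == "__main__":
        # Stub model that "reads" the prompt perfectly; swap in a real API
        # client to observe how accuracy degrades with context length.
        run_probe(lambda p: "7421" if "7421" in p else "unknown")

If performance were uniform across context lengths, accuracy on a probe like this would not depend on the amount of filler or where the needle sits; the report's finding is that, in practice, it does.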
Comments URL: https://news.ycombinator.com/item?id=44564248
Points: 24
# Comments: 1