Hi HN,
I built a CLI for uploading documents and querying them with an LLM agent that uses search tools rather than stuffing everything into the context window. I recorded a demo using the CrossFit 2025 rulebook that shows how this approach compares to traditional RAG and direct context injection[1].
The core insight is that LLMs running in loops with tool access are unreasonably effective at this kind of knowledge retrieval task[2]. Instead of hoping the right chunks make it into your context, the agent can iteratively search, refine queries, and reason about what it finds.
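To make the loop concrete, here's a toy sketch of the idea (hypothetical, not Trieve's actual code): the agent calls a search tool, and if results look thin it refines the query and retries before answering. The chunks, the keyword scorer, and the hard-coded refinement are all stand-ins; a real agent would let the LLM pick the queries.

```typescript
type Chunk = { id: number; text: string };

// Toy in-memory index standing in for the uploaded document's chunks.
const chunks: Chunk[] = [
  { id: 1, text: "Athletes must submit scores by Monday 5pm PT." },
  { id: 2, text: "The 2025 season has three stages: Open, Semifinals, Games." },
  { id: 3, text: "Score submissions require a video for top finishers." },
];

// Search tool: naive keyword overlap scoring over the chunks.
function search(query: string): Chunk[] {
  const terms = query.toLowerCase().split(/\s+/);
  return chunks
    .map(c => ({ c, hits: terms.filter(t => c.text.toLowerCase().includes(t)).length }))
    .filter(({ hits }) => hits > 0)
    .sort((a, b) => b.hits - a.hits)
    .map(({ c }) => c);
}

// Agent loop: query once, and if fewer than two chunks match, refine and retry.
// The refinement is hard-coded here; in practice the LLM would choose it.
function ask(question: string): { answer: string; sources: number[] } {
  let results = search(question);
  if (results.length < 2) {
    results = search(question + " score submission"); // refined query
  }
  const top = results.slice(0, 2);
  return {
    answer: top.map(c => c.text).join(" "),
    sources: top.map(c => c.id),
  };
}

console.log(ask("When are scores due?"));
```

The point of the loop is that a bad first query isn't fatal: the agent observes the result set and tries again, rather than hoping a single retrieval pass surfaced the right chunks.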
The CLI handles the full workflow:
```bash
trieve upload ./document.pdf
trieve ask "What are the key findings?"
```
You can customize the RAG behavior and check upload status, and responses stream back with expandable source references. I really enjoy having this workflow in the terminal, and I'm curious whether others find the paradigm as compelling as I do.
I'm considering adding more commands and customization options if there's interest. The tool is free for up to 1k document chunks.
Source code is on GitHub[3] and available via npm[4].
Would love any feedback on the approach or CLI design!
Comments URL: https://news.ycombinator.com/item?id=44290207