Hi HN,
I built a CLI for uploading documents and querying them with an LLM agent that uses search tools rather than stuffing everything into the context window. I recorded a demo using the CrossFit 2025 rulebook that shows how this approach compares to traditional RAG and direct context injection[1].
The core insight is that LLMs running in loops with tool access are unreasonably effective at this kind of knowledge retrieval task[2]. Instead of hoping the right chunks make it into your context, the agent can iteratively search, refine queries, and reason about what it finds.
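To make that loop concrete, here is a minimal, hypothetical sketch — not the Trieve implementation; the corpus, the search tool, and the query-refinement stub are all invented for illustration (a real agent would let the LLM decide the next query):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    id: str
    text: str

# Toy corpus standing in for uploaded document chunks.
CHUNKS = [
    Chunk("c1", "Athletes must submit scores by Monday."),
    Chunk("c2", "The 2025 season includes three online stages."),
    Chunk("c3", "Score submission requires video for top finishers."),
]

def search(query: str) -> list[Chunk]:
    """Tool exposed to the agent: naive keyword match over chunks."""
    terms = query.lower().split()
    return [c for c in CHUNKS if any(t in c.text.lower() for t in terms)]

def agent(question: str, max_steps: int = 3) -> list[Chunk]:
    """Loop: search, inspect the results, refine the query, repeat.
    The refinement step here just drops the last term -- a stand-in
    for the LLM reasoning about what to try next."""
    query = question
    for _ in range(max_steps):
        hits = search(query)
        if hits:
            return hits
        terms = query.split()
        if len(terms) <= 1:
            break
        query = " ".join(terms[:-1])
    return []

print([c.id for c in agent("score submission deadline")])
```

The point of the loop is that a miss isn't fatal: the agent gets another turn to reformulate, which is exactly what static chunk retrieval can't do.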
The CLI handles the full workflow:
```bash
trieve upload ./document.pdf
trieve ask "What are the key findings?"
```
You can customize the RAG behavior and check upload status, and responses stream back with expandable source references. I really enjoy having this workflow available in the terminal, and I'm curious whether others find the paradigm as compelling as I do.
I'm considering adding more commands and customization options if there's interest. The tool is free for up to 1k document chunks.
Source code is on GitHub[3] and available via npm[4].
Would love any feedback on the approach or CLI design!
[1]: https://www.youtube.com/watch?v=SAV-esDsRUk
[2]: https://news.ycombinator.com/item?id=43998472
[3]: https://github.com/devflowinc/trieve/blob/main/clients/cli/i...
[4]: https://www.npmjs.com/package/trieve-cli