Show HN: Autofix Bot – Hybrid static analysis and AI code review agent
Hi there, HN! We’re Jai and Sanket from DeepSource (YC W20), and today we’re launching Autofix Bot, a hybrid static analysis + AI agent purpose-built for in-the-loop use with AI coding agents.
AI coding agents have made code generation nearly free, shifting the bottleneck to code review. Static-only analysis with a fixed set of checkers isn’t enough, and LLM-only review has its own limitations: it’s non-deterministic across runs, has low recall on security issues, is expensive at scale, and tends to get ‘distracted’.
We spent the last six years building a deterministic, static-analysis-only code review product. Earlier this year, we started thinking about this problem from the ground up and realized that static analysis covers the key blind spots of LLM-only review. Over the past six months, we built a new ‘hybrid’ agent loop that uses static analysis and frontier AI agents together to outperform both static-only and LLM-only tools at finding and fixing code quality and security issues. Today, we’re opening it up publicly.
Here’s how the hybrid architecture works:
- Static pass: 5,000+ deterministic checkers (code quality, security, performance) establish a high-precision baseline. A sub-agent suppresses context-specific false positives.
- AI review: The agent reviews the code with static findings as anchors. It has ASTs, data-flow graphs, control-flow graphs, and import graphs available as tools, not just grep and the usual shell commands.
- Remediation: Sub-agents generate fixes. A static harness validates every edit before emitting a clean git patch. (A simplified sketch of this loop follows below.)
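To make the shape of the loop concrete, here’s a minimal sketch in Python. Every name below is a hypothetical stand-in, not our real internal API, and the production pipeline has many more moving parts:

    # Sketch of the hybrid review loop. All helpers are stubs with
    # hypothetical names; the real pipeline is much richer.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Finding:
        checker: str
        path: str
        line: int
        message: str

    def run_checkers(diff: str) -> list[Finding]:
        # Stand-in for the 5,000+ deterministic checkers.
        return [Finding("secrets/aws-key", "config.ts", 12, "hardcoded key")]

    def suppress_false_positives(findings: list[Finding]) -> list[Finding]:
        # Stand-in for the sub-agent that drops context-specific FPs.
        return findings

    def llm_review(diff: str, anchors: list[Finding]) -> list[Finding]:
        # Stand-in for the AI review pass: anchors keep the model focused,
        # and it can call AST / data-flow / control-flow / import-graph tools.
        return anchors

    def llm_fix(diff: str, findings: list[Finding]) -> str:
        # Stand-in for the remediation sub-agents: returns a git patch.
        return "--- a/config.ts\n+++ b/config.ts\n(patch body elided)"

    def review(diff: str) -> str:
        baseline = suppress_false_positives(run_checkers(diff))  # static pass
        findings = llm_review(diff, anchors=baseline)            # AI review
        patch = llm_fix(diff, findings)                          # remediation
        # The static harness would re-validate every edit here
        # before emitting the final patch.
        return patch

    if __name__ == "__main__":
        print(review("(diff elided)"))

The point of this shape: the cheap, deterministic pass always runs first, so the LLM only ever reasons over a narrowed, pre-annotated diff.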
To recap why the static layer matters: it restores determinism across runs, it catches the security issues LLMs miss when they get distracted by style, and narrowing what the model sees reduces prompt size and tool calls, which keeps cost down.
On the OpenSSF CVE Benchmark [1] (200+ real JS/TS vulnerabilities), we hit 81.2% accuracy and 80.0% F1, versus Cursor Bugbot (74.5% accuracy, 77.42% F1), Claude Code (71.5% accuracy, 62.99% F1), CodeRabbit (59.4% accuracy, 36.19% F1), and Semgrep CE (56.9% accuracy, 38.26% F1). On secrets detection, we score 92.8% F1, versus Gitleaks (75.6%), detect-secrets (64.1%), and TruffleHog (41.2%); for this we use our open-source classification model [2].
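For anyone who wants the metrics spelled out: accuracy is correct verdicts over all cases, and F1 is the harmonic mean of precision and recall. A quick worked example with invented counts (not the benchmark’s actual confusion matrix):

    # F1 from an illustrative confusion matrix -- these counts are
    # made up for the arithmetic, not taken from the benchmark.
    tp, fp, fn = 80, 17, 23
    precision = tp / (tp + fp)   # 80/97  ~= 0.825
    recall    = tp / (tp + fn)   # 80/103 ~= 0.777
    f1 = 2 * precision * recall / (precision + recall)   # ~= 0.800
    print(f"P={precision:.3f}  R={recall:.3f}  F1={f1:.3f}")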
Full methodology and how we evaluated each tool: https://autofix.bot/benchmarks
You can use Autofix Bot interactively on any repository through our TUI, as a plugin in Claude Code, or via our MCP server from any compatible AI client (like OpenAI Codex) [3]. We’re building specifically for agent-first workflows, so you can ask your AI coding agent to run Autofix Bot on every checkpoint autonomously.
Give us a shot today: https://autofix.bot. We’d love to hear any feedback!
---
[1] https://github.com/ossf-cve-benchmark/ossf-cve-benchmark
[2] https://huggingface.co/deepsource/Narada-3.2-3B-v1
[3] https://autofix.bot/manual/#terminal-ui