
Show HN: Morphik – Open-source RAG that understands PDF images, runs locally


9 comments · April 22, 2025

Hey HN, we’re Adi and Arnav. A few months ago, we hit a wall trying to get LLMs to answer questions over research papers and instruction manuals. Everything worked fine until the answer lived inside an image or diagram embedded in the PDF. Even GPT‑4o flubbed it (we recently tried o3 with the same question, and surprisingly it flubbed it too). Naive RAG pipelines just pulled in some text chunks and ignored the rest.

We took an invention disclosure PDF (https://drive.google.com/file/d/1ySzQgbNZkC5dPLtE3pnnVL2rW_9...) containing an IRR‑vs‑frequency graph and asked GPT “From the graph, at what frequency is the IRR maximized?”. We originally tried this on GPT‑4o, but while writing this post we used the newer natively multimodal model o4‑mini‑high. After a 30‑second thinking pause, it asked for clarifications, then churned out buggy code, pulled data from the wrong page, and still couldn’t answer the question. We wrote up the full story with screenshots here: https://docs.morphik.ai/blogs/gpt-vs-morphik-multimodal.

We got frustrated enough to try fixing it ourselves.

We built Morphik to do multimodal retrieval over documents like PDFs, where images and diagrams matter as much as the text.

To do this, we use ColPali-style embeddings, which treat each document page as an image and generate multi-vector representations. These embeddings capture layout, typography, and visual context, allowing retrieval to pull in a whole table or schematic, not just nearby tokens. Combined with vector search, this lets us retrieve the exact pages containing the relevant diagrams and pass them as images to the LLM. With that, an 8B Llama 3.1 vision model running locally can answer the question!
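
To make the retrieval idea concrete, here's a minimal sketch of ColPali-style late-interaction scoring (illustrative only, not Morphik's actual code): each page image becomes a bag of patch vectors, each query a bag of token vectors, and the page score is the sum over query tokens of the best-matching patch (MaxSim), so a single table cell or axis label can dominate.

```python
# Illustrative MaxSim late-interaction scoring, assuming precomputed embeddings.
import torch

def maxsim_score(query_emb: torch.Tensor, page_emb: torch.Tensor) -> float:
    """query_emb: (num_query_tokens, dim); page_emb: (num_patches, dim)."""
    q = torch.nn.functional.normalize(query_emb, dim=-1)
    p = torch.nn.functional.normalize(page_emb, dim=-1)
    sim = q @ p.T                               # token-vs-patch cosine similarities
    return sim.max(dim=1).values.sum().item()   # best patch per query token, summed

def retrieve(query_emb, page_embs, k=3):
    # Rank pages by MaxSim and return the indices of the top-k pages,
    # which are then passed to the vision LLM as images.
    scores = [maxsim_score(query_emb, p) for p in page_embs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```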

Early pharma testers hit our system with queries like "Which EGFR inhibitors at 50 mg showed ≥ 30% tumor reduction?" We returned the right tables and plots, but still hit a bottleneck: we weren’t able to join the dots across multiple reports. So we built a knowledge graph: we tag entities in both text and images, normalize synonyms (Erlotinib → EGFR inhibitor), infer relations (e.g. administered_at, yields_reduction), and stitch everything into a graph. Now a single query can traverse that graph across documents and surface a coherent, cross‑document answer along with the correct pages as images.
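
A rough sketch of that graph-building step (entity names, relations, and helpers here are illustrative, not Morphik's API): extracted entities are normalized to canonical forms, linked with typed relations, and each node remembers which chunks/pages mention it so a graph hit can be traced back to the exact page image.

```python
import networkx as nx

SYNONYMS = {"erlotinib": "EGFR inhibitor"}  # toy normalization table

def normalize(entity: str) -> str:
    return SYNONYMS.get(entity.lower(), entity)

graph = nx.MultiDiGraph()

def add_fact(doc_id, chunk_id, subj, relation, obj):
    s, o = normalize(subj), normalize(obj)
    for node in (s, o):
        graph.add_node(node)
        # Track which chunks/pages mention this entity.
        graph.nodes[node].setdefault("chunks", set()).add((doc_id, chunk_id))
    graph.add_edge(s, o, relation=relation)

add_fact("report_12", "p4_tbl2", "Erlotinib", "administered_at", "50 mg")
add_fact("report_12", "p4_tbl2", "Erlotinib", "yields_reduction", "32% tumor reduction")
```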

To illustrate that, and just for fun, we built a graph of 100 of Paul Graham’s essays here: https://pggraph.streamlit.app/ You can search for various nodes (e.g. startup, Sam Altman, Paul Graham) and see the corresponding connections. In our system, we create graphs and store the relevant text chunks along with the entities, so at query time we can extract the relevant entity, search the graph, and pull in the text chunks of all connected nodes, improving cross-document queries.
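
Continuing the toy graph sketch above, the query path might look roughly like this (again illustrative, not the actual implementation): extract an entity from the query, walk its neighborhood, and collect the chunks attached to every connected node.

```python
def chunks_for_query(entity: str, hops: int = 1):
    start = normalize(entity)
    if start not in graph:
        return set()
    nodes, frontier = {start}, {start}
    for _ in range(hops):
        nxt = set()
        for n in frontier:
            nxt |= set(graph.successors(n)) | set(graph.predecessors(n))
        frontier = nxt - nodes
        nodes |= nxt
    chunks = set()
    for n in nodes:
        chunks |= graph.nodes[n].get("chunks", set())
    return chunks  # (doc_id, chunk_id) pairs to rerank and feed to the LLM

print(chunks_for_query("EGFR inhibitor"))
```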

For longer or multi-turn queries, we added persistent KV caching, which stores the intermediate key-value states from the transformer’s attention layers. Instead of recomputing attention over the context from scratch every time, we reuse those cached states, speeding up repeated queries and letting us handle much longer context windows.
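
For readers unfamiliar with KV caching, here's a minimal sketch of the underlying idea using HuggingFace transformers (gpt2 as a small stand-in model; Morphik's persistent cache adds serialization and cache management on top of this, and real usage would also handle attention masks and positions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 1) Pay the attention cost of the long shared context once.
context_ids = tok("<long document context here>", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(context_ids, use_cache=True)
cached_kv = out.past_key_values  # keep (or persist) these key/value tensors

# 2) Each follow-up question reuses the cached keys/values instead of
#    re-encoding the whole context from scratch.
question_ids = tok(" At what frequency is the IRR maximized?", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(question_ids, past_key_values=cached_kv, use_cache=True)
```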

We’re open‑source under the MIT (Expat) license: https://github.com/morphik-org/morphik-core

Would love to hear your RAG horror stories: what worked, what didn’t, and any feedback on Morphik. We’re here for it.

MitPitt

Should I use this if I don't plan on working with pdfs? What's the best RAG currently?

DavidPP

I'm currently building an internal tool using SurrealDB directly, but I'm curious to try Morphik since it implements features I haven't had time to figure out yet (for example, I started with hardcoded schemas, and I like how you support both).

Minor nitpick, but the README for your ui-component project under ee says:

"License This project is part of Morphik and is licensed under the MIT License."

However, your ee folder has an "enterprise" license, not the MIT license.

Adityav369

Thanks for pointing that out! Fixed it.

For the metadata extraction, we save these as Column(JSONB) for each document, which allows them to be changed on the fly.
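
Roughly what that looks like in SQLAlchemy (a sketch with illustrative model and field names, not the actual schema): a JSONB column lets each document carry an arbitrary, queryable metadata dict without schema migrations.

```python
from sqlalchemy import Column, String
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Document(Base):
    __tablename__ = "documents"
    id = Column(String, primary_key=True)
    # Attribute named metadata_ to avoid clashing with Base.metadata;
    # the underlying column is free-form and changeable on the fly.
    metadata_ = Column("metadata", JSONB, default=dict)

# Querying into the JSONB payload with Postgres operators, e.g.:
#   session.query(Document).filter(Document.metadata_["compound"].astext == "Erlotinib")
```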

Although I keep wondering if it would have been better to use something like MongoDB for this part, just because it's more natural.

Please let me know if you have questions and how it works out for you.


Imanari

Looks really nice! How does it handle tables?

Adityav369

We have two ingestion pathways: 1. regular OCR + text embeddings; 2. ColPali. We've observed that ColPali does a much better job with tables since it can encode positional information and layout as well.

th0ma5

Whenever I ask people wanting to use such features at scale which figure could be out of place or have a transposed digit, it generally makes the project evaporate.

trollbridge

If it’s MIT open source, what does the paid part apply to?

Adityav369

The paid part applies to the ui-component, which provides a chat user interface. The core code, SDK, and API are all under the MIT license.