Parsing PDFs (and more) in Elixir using Rust
15 comments · January 29, 2025
cpursley
I've been thinking a lot about how to accomplish various RAG things in Elixir (for LLM applications). PDF is one of the missing pieces, so glad to see work here. The really tricky part is not just parsing out the text (you can just call the pdftotext Unix command-line utility for that), but accurately pulling out things like complex tables, etc., in a way that could be chunked/post-processed usefully. I'd love to see something like Unstructured or Marker but in Rust (i.e., fast) that Elixir could NIF out to. And maybe some kind of hybrid system that uses open LLM models with vision capabilities. Ref:
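For reference, the plain-text baseline mentioned here is a one-liner with poppler-utils' pdftotext (filenames below are placeholders):

```shell
# Dump a PDF to plain text; -layout tries to preserve the visual column
# arrangement, which helps a little with tables but does not recover
# real table structure -- which is exactly the gap described above.
pdftotext -layout input.pdf output.txt

# Or write to stdout for piping into a chunking/post-processing step:
pdftotext -layout input.pdf -
```

This is the easy 90%; the remaining 10% (tables, figures, reading order) is what tools like Marker and extractous are for.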
cpursley
Well derp, I should have read the linked extractous repo. This looks like the extract solution I've been after (see what I did there).
bustylasercanon
Yeah I could maybe highlight how good that library is in here
vikp
Hey, I'm the author of marker - thanks for sharing. Most of the processing time is model inference right now. I've been retraining some models lately onto new architectures to improve speed (layout, tables, LaTeX OCR).
We recently integrated gemini flash (via the --use_llm flag), which maybe moves us towards the "hybrid system" you mentioned. Hoping to add support for other APIs soon, but focusing on improving quality/speed now.
Happy to chat if anyone wants to talk about the difficulties of parsing PDFs, or has feedback - email in profile.
cpursley
Very cool, any plans for a dockerized API of marker similar to what Unstructured released? I know you have a very attractively priced serverless offering (https://www.datalab.to) but having something to develop against locally would be great (for those of us not in the Python world).
vikp
It's on the list to build - been focusing on quality pretty heavily lately.
conradfr
Maybe just using pdftohtml instead of pdftotext.
cpursley
I experimented with it, it generates way too much noise. Cool utility, though!
constantinum
For instance, Llamaparse (https://docs.llamaindex.ai/en/stable/llama_cloud/llama_parse...) uses LLMs for PDF text extraction, but the problem is hallucination, e.g. https://github.com/run-llama/llama_parse/issues/420
There is also LLMWhisperer, which preserves the layout (tables, checkboxes, forms) and hence the context: https://pg.llmwhisperer.unstract.com/
cpursley
Is this open source? Is it slow Python? That's where I'm stuck.
joshchernoff
FYI: your preview image from the HTML header meta tag is broken.
bustylasercanon
Thanks! I need to fix that
The Achilles' heel of the BEAM is that if it crashes in native code, it has no way to recover, and its much-vaunted robustness goes out the window. So writing native hooks in Rust makes it a bit harder to crash the whole VM.
On the plus side, the BEAM makes IPC pretty straightforward, so you can move the processes that need the native code (NIFs) to a separate VM if you're feeling paranoid.
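One reason Rust NIFs are safer: a Rust panic can be caught at the native boundary and surfaced as an ordinary error instead of taking down the host VM. This is a rough standalone sketch of that idea (not rustler's actual internals, though rustler wraps NIF calls in a similar way); `risky` and `call_nif_like` are made-up names for illustration:

```rust
use std::panic;

// A native function that can blow up, like a buggy NIF body.
fn risky(divisor: i32) -> i32 {
    100 / divisor // panics on divide-by-zero
}

// Contain the panic at the "boundary" and turn it into a Result,
// the way a NIF wrapper can raise an Erlang exception instead of aborting.
fn call_nif_like(divisor: i32) -> Result<i32, String> {
    panic::catch_unwind(|| risky(divisor))
        .map_err(|_| "native code panicked; reported as an error, not a VM crash".to_string())
}

fn main() {
    assert_eq!(call_nif_like(4), Ok(25));
    assert!(call_nif_like(0).is_err());
    println!("still running after a contained panic");
}
```

Note this only works when the crate is compiled with the default `panic = "unwind"` strategy; a segfault in C, by contrast, has no such recovery point, which is the crash mode the comment above is worried about.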