
Hard problems that reduce to document ranking

obblekk

The open source ranking library is really interesting. It's using a type of merge sort where the comparator function is an LLM (but batching more than two items per call to reduce the number of calls).
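
Roughly this shape, if I'm reading it right (a minimal sketch, not raink's actual internals; llm_order stands in for a single ranking call, and the batch size is my assumption):

    # Minimal sketch of merge sort with an LLM as the comparator.
    # Instead of comparing two items per call, each call orders a
    # small batch, so far fewer calls are needed overall.

    BATCH = 8  # items per call; my assumption, not raink's value

    def llm_order(items: list[str], query: str) -> list[str]:
        """Placeholder: one LLM call that returns `items` ranked
        best-first with respect to `query`."""
        raise NotImplementedError

    def merge(a: list[str], b: list[str], query: str) -> list[str]:
        # Merge two ranked lists by batching the heads of both
        # lists into a single LLM call and popping the winner.
        out = []
        while a and b:
            heads = a[:BATCH // 2] + b[:BATCH // 2]
            best = llm_order(heads, query)[0]
            (a if best in a else b).remove(best)
            out.append(best)
        return out + a + b

    def rank(docs: list[str], query: str) -> list[str]:
        if len(docs) <= BATCH:
            return llm_order(docs, query)  # base case: one call
        mid = len(docs) // 2
        return merge(rank(docs[:mid], query),
                     rank(docs[mid:], query), query)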

Reducing problems to document ranking is effectively a type of test-time search - also very interesting!

I wonder if this approach could be combined with GRPO to create more efficient chain-of-thought search...

https://github.com/BishopFox/raink?tab=readme-ov-file#descri...

rahimnathwani

The article introducing the library explains that pairwise comparisons are the most reliable approach (for each pair of items, you ask an LLM which one it prefers) but computationally expensive, while a single LLM call ("rank these items in order") is much less reliable. So they do something in between that yields enough pairwise comparisons to produce a more reliable list.
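
Something like this, as I understand it (a sketch of the shuffled-batch idea only; llm_rank_batch and the parameter defaults are illustrative, not the library's real API):

    # Rough sketch of the in-between approach: shuffle the items
    # into small batches over several passes, rank each batch with
    # one listwise LLM call, and accumulate position-based scores.

    import random
    from collections import defaultdict

    def llm_rank_batch(batch: list[str], query: str) -> list[str]:
        """Placeholder: one listwise LLM call, best item first."""
        raise NotImplementedError

    def rank(items: list[str], query: str,
             batch_size: int = 10, passes: int = 10) -> list[str]:
        scores: dict[str, float] = defaultdict(float)
        for _ in range(passes):
            shuffled = random.sample(items, len(items))
            for i in range(0, len(shuffled), batch_size):
                batch = shuffled[i:i + batch_size]
                ranked = llm_rank_batch(batch, query)
                for pos, item in enumerate(ranked):
                    # normalize so partial batches aren't penalized
                    scores[item] += pos / max(len(batch) - 1, 1)
        # lower score = placed near the top more consistently
        return sorted(items, key=lambda x: scores[x])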

https://news.ycombinator.com/item?id=43175658

rfurmani

Very cool! This is also one of my beliefs in building tools for research: if you can solve the problem of predicting and ranking the top references for a given idea, then you've learned a lot about problem solving and about decomposing problems into their ingredients. I've been pleasantly surprised by how well LLMs can rank relevance compared to supervised training of a relevance score. I'll read the linked paper (shameless plug: here it is on my research tools site: https://sugaku.net/oa/W4401043313/).

noperator

A concept that I've been thinking about a lot lately: transforming complex problems into document ranking problems to make them easier to solve. LLMs can assist greatly here, as I demonstrated at the inaugural DistrictCon this past weekend.

lifeisstillgood

So would this be 1600 commits, one of which fixes the bug (which might be easier with commit messages?), or a diff between two revisions with 1600 chunks, each chunk a “document”?

I am trying to grok why we want to find the fix - is it to understand what was done so we can exploit unpatched instances in the wild?

Also: “identifying candidate functions for fuzzing targets” - if every function is a document, I can see where the list of documents comes from, but what is the query? How do I say “find me the function most suitable for fuzzing”?
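
I imagine it looks something like this, reusing a rank() like the ones sketched upthread (pure guesswork on my part, not the tool's actual interface):

    # Guess at the shape: every extracted function body is a
    # "document", and the query is just a prompt describing what
    # a good candidate looks like.

    query = ("Which of these functions is the most promising fuzzing "
             "target? Favor functions that parse attacker-controlled "
             "input, do manual memory management, or handle lengths "
             "and offsets.")

    # hypothetical corpus: one extracted source body per function
    documents = [
        "int parse_header(const uint8_t *buf, size_t len) { ... }",
        "void init_logging(const char *path) { ... }",
    ]

    top_candidates = rank(documents, query)[:20]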

Apologies if that’s brusque - trying to fit new concepts in my brain :-)

Everdred2dx

Very interesting application of LLMs. Thanks for sharing!

m3kw9

That title hurts my head to read