End of Japanese community
support.mozilla.org
Solarpunk is happening in Africa
climatedrift.substack.com
Dillo, a multi-platform graphical web browser
github.com
ChatGPT terms disallow its use in providing legal and medical advice to others
ctvnews.ca
I may have found a way to spot U.S. at-sea strikes before they're announced
old.reddit.com
Recursive macros in C, demystified (once the ugly crying stops)
h4x0r.org
Firefox profiles: Private, focused spaces for all the ways you browse
blog.mozilla.org
The state of SIMD in Rust in 2025
shnatsel.medium.com
Why aren't smart people happier?
theseedsofscience.pub
New gel restores dental enamel and could revolutionise tooth repair
nottingham.ac.uk
Ruby and Its Neighbors: Smalltalk
noelrappin.com
NY school phone ban has made lunch loud again
gothamist.com
Brain-IT: Image Reconstruction from fMRI via Brain-Interaction Transformer
amitzalcher.github.io
The MDL ("Muddle") Programming Language (1979) [pdf]
bitsavers.informatik.uni-stuttgart.de
The Basic Laws of Human Stupidity (1987) [pdf]
gandalf.fee.urv.cat
Carice TC2 – A non-digital electric car
caricecars.com
Vacuum bricked after user blocks data collection – user mods it to run anyway
tomshardware.com
Scientists Growing Colour Without Chemicals
forbes.com
The shadows lurking in the equations
gods.art
I want a good parallel language [video]
youtube.com
I was right about dishwasher pods and now I can prove it [video]
youtube.com
A Lost IBM PC/AT Model? Analyzing a Newfound Old BIOS
int10h.org
Hey HN,
I'm excited to share the Massive Legal Embedding Benchmark (MLEB) — the first comprehensive benchmark for legal embedding models.
Unlike previous legal retrieval datasets, MLEB was created by someone with actual domain expertise (I have a law degree and previously led the AI team at the Attorney-General's Department of Australia).
I came up with MLEB while trying to train my own state-of-the-art legal embedding model. I found that there were no good benchmarks for legal information retrieval to evaluate my model on.
That led me, working alongside my brother, down a months-long process of identifying or, in many cases, building our own high-quality legal evaluation sets.
The final product is 10 datasets spanning multiple jurisdictions (the US, UK, Australia, Singapore, and Ireland), document types (cases, laws, regulations, contracts, and textbooks), and problem types (retrieval, zero-shot classification, and QA), all of which have been vetted for quality, diversity, and utility.
For a model to do well at MLEB, it needs to have both extensive legal domain knowledge and strong legal reasoning skills. That is deliberate — given just how important high-quality embeddings are to legal RAG (particularly for reducing hallucinations), we wanted our benchmark to correlate as strongly as possible with real-world usefulness.
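To make that retrieval step concrete, here is a minimal sketch of how an embedding model sits in front of a legal RAG pipeline. The model name, query, and passages below are illustrative placeholders, not anything taken from MLEB or our own models.

```python
# Minimal sketch of embedding-based retrieval for a legal RAG pipeline.
# The model name, query, and passages are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for a legal embedding model

passages = [
    "Capital gains tax applies when you dispose of a CGT asset such as shares or property.",
    "You may claim a deduction for expenses incurred in gaining or producing assessable income.",
    "A company must keep written financial records that correctly record its transactions.",
]
query = "Do I pay tax when I sell my shares?"

# Normalised embeddings make cosine similarity a simple dot product.
passage_embs = model.encode(passages, normalize_embeddings=True)
query_emb = model.encode(query, normalize_embeddings=True)

scores = util.cos_sim(query_emb, passage_embs)[0]
top_k = scores.argsort(descending=True)[:2].tolist()
context = "\n\n".join(passages[i] for i in top_k)
print(context)  # the passages that would be handed to the generator as grounding
```

If the embeddings rank the wrong passages here, everything downstream inherits the error, which is why the benchmark focuses on retrieval quality.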
The dataset we are most proud of is called Australian Tax Guidance Retrieval. It pairs real-life tax questions posed by Australian taxpayers with relevant Australian Government guidance and policy documents.
We constructed the dataset by sourcing questions from the Australian Taxation Office's community forum, where Australian taxpayers ask accountants and ATO officials their tax questions.
We found that, in most cases, such questions can be answered by reference to government web pages that, for whatever reason, users were unable to find themselves. Accordingly, we manually worked through a stratified sample of 112 challenging forum questions and, for each one, extracted the relevant portions of the government guidance materials that tax experts had linked to, verifying each to be correct.
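For illustration, you can think of each entry as a real question paired with the guidance passages that answer it. The field names and content below are hypothetical, not the actual MLEB schema.

```python
# Hypothetical shape of one Australian Tax Guidance Retrieval example.
# Field names and content are illustrative only, not the actual MLEB schema.
example = {
    "query": "I sold shares I held for two years. Do I get the CGT discount?",
    "positive_passages": [
        {
            "source": "ato.gov.au guidance page",  # placeholder provenance
            "text": (
                "Individuals who dispose of a CGT asset held for at least 12 months "
                "may be eligible to reduce their capital gain by 50%."
            ),
        }
    ],
}

# A retrieval evaluator embeds the query, ranks a corpus of guidance
# passages, and checks whether the positive passages appear near the top.
print(example["query"])
```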
What makes the dataset so valuable is that, unlike the vast majority of legal information retrieval evaluation sets currently available, it consists of genuinely challenging real-world user-created questions, rather than artificially constructed queries that, at times, diverge considerably from the types of tasks embedding models are actually used for.
Australian Tax Guidance Retrieval is just one of several evaluation sets that we painstakingly constructed ourselves simply because there weren't any other options.
We've contributed everything, including the code used to evaluate models on MLEB, back to the open-source community.
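As a rough sketch of what scoring a retrieval task involves (the actual harness is in our open-source code and may differ in detail, e.g. in the metric and cutoff used), each query's ranking can be scored with NDCG and the results averaged:

```python
# Hedged sketch of scoring a retrieval task with binary-relevance NDCG@10.
# The qrels and rankings below are toy data, not MLEB datasets.
import numpy as np

def ndcg_at_k(ranked_doc_ids, relevant_ids, k=10):
    """Binary-relevance NDCG@k for a single query."""
    gains = [1.0 if doc_id in relevant_ids else 0.0 for doc_id in ranked_doc_ids[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant_ids), k)))
    return dcg / ideal if ideal > 0 else 0.0

# Toy qrels: query id -> set of relevant document ids.
qrels = {"q1": {"d2"}, "q2": {"d1", "d3"}}
# Toy rankings produced by some embedding model.
rankings = {"q1": ["d2", "d1", "d3"], "q2": ["d3", "d2", "d1"]}

scores = [ndcg_at_k(rankings[q], qrels[q]) for q in qrels]
print(f"Mean NDCG@10: {np.mean(scores):.3f}")
```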
Our hope is that MLEB and the datasets within it will hold value long into the future so that others training legal information retrieval models won't have to detour into building their own "MTEB for law".
If you'd like to head straight to the leaderboard instead of reading our full announcement, you can find it here: https://isaacus.com/mleb
If you're interested in playing around with our model, which ranks first on MLEB as of 16 October 2025, check out our docs: https://docs.isaacus.com/quickstart