
Ask HN: What are you working on? (October 2025)

715 comments

October 12, 2025

What are you working on? Any new ideas that you're thinking about?

cjflog

Currently a one-man side project: https://laboratory.love

Last year, PlasticList found plastic chemicals in 86% of tested foods—including 100% of baby foods they tested. Around the same time, the EU lowered its “safe” BPA limit by 20,000×, while the FDA still allows levels roughly 100× higher than Europe’s new standard.

That seemed solvable.

Laboratory.love lets you crowdfund independent lab testing of the specific products you actually buy. Think Consumer Reports × Kickstarter, but focused on detecting endocrine disruptors in your yogurt, your kid’s snacks, or whatever you’re curious about.

Find a product (or suggest one), contribute to its testing fund, and get full lab results when testing completes. If a product doesn’t reach its goal within 365 days, you’re automatically refunded. All results are published publicly.

We use the same ISO 17025-accredited methodology as PlasticList.org, testing three separate production lots per product and detecting down to parts-per-billion. The entire protocol is open.

Since last month’s “What are you working on?” post:

- 4 more products have been fully funded (now 10 total!)

- That’s 30 individual samples (we do triplicate testing on different batches) and 60 total chemical panels (two separate tests for each sample, BPA/BPS/BPF and phthalates)

- 6 results published, 4 in progress

The goal is simple: make supply chains transparent enough that cleaner ones win. When consumers have real data, markets shift.

Browse funded tests, propose your own, or just follow along: https://laboratory.love

oniony

On https://laboratory.love/faq you say: "We never accept funding from companies whose products we might test. All our funding comes from individual contributors." On https://laboratory.love/blog you say: "If you're a product manufacturer interested in having your product tested, we welcome your participation in funding."

Bit confused as to your position on funding.

neilv

1. An example result is "https://laboratory.love/product/117", which is a list of chemicals and measurements. Is there a visualization of how these levels relate to regulations and expert recommendations? What about a visualization of how different products in the same category compare, so that consumers know which brand is supposedly "best"? Maybe a summary rating, as stars or color-coded threat level?

2. If you find regulation-violating (or otherwise serious) levels of undesirable chemicals, do you... (a) report it to FDA; (b) initiate a class-action lawsuit; (c) short the brand's stock and then news blitz; or (d) make a Web page with the test results for people to do with it what they will?

3. Is 3 tests enough? On the several product test results I clicked, there's often wide variation among the 3 samples. Or would the visualization/rating tell me that all 3 numbers are unacceptably bad, whether it's 635.8 or 6728.6?

4. If I know that plastic contamination is a widespread problem, can I secretly fund testing of my competitors' products, to generate bad press for them?

5. Could this project be shut down by a lawsuit? Could the labs be?

cjflog

Thank you for your questions!

1. I'm still working to make results more digestible and actionable. This will include the %TDI toggle (tolerable daily intake, for child vs adult and USA vs EU) as seen on PlasticList, but I'm also tinkering with an even more consumer-friendly 'chemical report card'. The final results page would have both the card and the detailed table of results.

2. I have not found any regulation-violating levels yet, so in some sense, I'll cross that bridge when I get there. Part of the issue here is that many believe the FDA levels are far too relaxed, which is part of why demand for a service like laboratory.love exists.

3. This is part of the challenge that PlasticList faced, and a lot of my thinking around the chemical report card is related to this. Some folks think a single test would be sufficient to catch major red flags. I think triplicate testing is a reasonable balance: statistically robust while not being completely cost-prohibitive.

4. Yes, I suppose one could do that, as long as the funded products can be acquired by laboratory.love anonymously through their normal consumer supply chains. Laboratory.love merely acquires three separate batches of a given product from different sources, tests them at an ISO/IEC 17025-accredited lab, and publishes the data.

5. I suppose any project can be shut down by a lawsuit, but laboratory.love is not currently breaking any laws as far as I'm aware.

ugh123

The UK levels are more strict and generally more up to date, which I personally follow rather than FDA. Could be nice to show those violations as a comparison to FDA.

Great site!

Timothy055

What a wonderful idea! I sincerely hope this moves the market. And I backed a study.

hxorr

This is actually a topic I'm interested in

What bugs me is that plastics manufacturers advertise "BPA-free", which is technically correct, but then add a very similar chemical from the same family that has the same effect on the plastic (which is good) but also the same effect on your endocrine system (which is not).

linsomniac

Where can I subscribe to pay $20/mo to whatever happens to be the current leading unfunded product?

cjflog

Thanks for your interest!

Here is a Stripe link: https://donate.stripe.com/9B614o4NWdhN83l9r06c001

I'll add subscriptions as a more formal option on laboratory.love soon!

Disclaimer: I don't think I can offer the 365-day refund with recurring donations like this. The financial infrastructure would add too much complexity.

ebbi

It's sad that it's come to this on needing to test these things, but amazing initiative! Would love something like this where I am.

gcanyon

Serious question: around 1900 meat was often preserved using formaldehyde, and milk was adulterated with water and chalk, and sometimes with pureed calf brains to simulate cream.

I hope we can agree that we are better off than that now.

What I'm curious about is whether you think it's been a steady stream of improvements, and we just need to improve further? Or if you think there was some point between 1900 and now where food health and safety was maximized, greater than either 1900 or now, and we've regressed since then?

abdullahkhalids

Trying to collapse high dimensional, complex phenomena onto a single axis usually gives one a fake sense of certainty. One should avoid it as much as possible.

cjflog

I don't know, but I do know there is room for improvement from where we are now, and I think we should strive to do better.

cjflog

Where are you? This project is not necessarily limited to products that are available in the United States. Anything that can be shipped to the United States is still testable.

ebbi

In New Zealand, but just thinking about some of the items that wouldn't be able to be shipped to the US.

ashdnazg

First of all, really cool initiative!

It's interesting that a bunch of the funded products have been funded by a single person.

Do you know if it's the producers themselves? Worried rich people?

cjflog

Given the current reach of the project (read: still small!), I suspect that for a while yet the majority of successfully funded testing will be by concerned individuals with expendable income. It is cheaper and much faster to go through laboratory.love than it would be to partner with a lab as an individual (plus the added bonus that all data is published openly).

I've yet to have any product funded by a manufacturer. I'm open to this, but I would only publish data for products that were acquired through normal consumer supply chains anonymously.

tribeca18

this looks so cool! I wish it told me if the levels found for tested products were good/bad - I have no prior reference so the numbers meant nothing to me

cjflog

Coming soon. Thanks for the feedback!

pbronez

I think this concept has legs to be much bigger than just foods. There are lots of influencer types who focus on testing.

For example, there are two individuals who own the same $100k machine for testing the performance of loudspeakers.

https://www.audiosciencereview.com/forum/index.php

https://www.erinsaudiocorner.com/

Both of them do measurements and YouTube videos. Neither one has a particularly good index of their completed reviews, let alone tools to compare the data.

I wish I could subscribe to support a domain like “loud speaker spin tests” and then have my donation paid out to these reviewers based on them publishing new high quality reviews with good data that is published to a common store.

azianmike

A couple of months ago, I saw a tweet from @awilkinson: “I just found out how much we pay for DocuSign and my jaw dropped. What's the best alternative?”

Me being naive, I thought "how hard would it actually be to build a free e-sign tool?"

Turns out not that hard.

In about a weekend, I built a UETA and ESIGN compliant tool. And it was free. And it cost me less than $50. Unlimited free e-sign. https://useinkless.com/

aslakhellesoy

FYI: DocuSign’s moat/USP is trust, not software.

DocuSign customers buy trust.

apples_oranges

Really? Trust to send an email with a link? What else is making it trustworthy?

MASNeo

You’d be surprised how much trust people place in legal departments, balance-sheet strength and talent capacity. All things for which I had to turn down superior technical proposals in the past. The old saying "Nobody gets fired for buying IBM" still runs strong.

Free e-signatures are a great idea. Have you considered getting a foundation to back the project, maybe taking out some indemnity insurance, or perhaps raising a dispute fund?

KaiserPro

That big companies use it for their important legal contracts.

It's a well-recognised tool for contract agreements, and you pay the money so that you are indemnified for any oopsies that might happen in transit.

alansaber

For a weekend, this is incredibly well done.

Onavo

agree.com is free though

rjh29

And Adobe Fill & Sign.

kacesensitive

Dude hell yeah

koeng

I am working on making ultra-low cost freeze-dried enzymes for synthetic biology.

For example, 1 PCR reaction (a common reaction used to amplify DNA) costs about $1 each, and we're doing tons every day. Since it is $1, nobody really tries to do anything about it - even if you do 20 PCRs in one day, eh it's not that expensive vs everything else you're doing in lab. But that calculus changes once you start scaling up with robots, and that's where I want to be.

Approximately $30 of culture media can produce >10,000,000 reactions worth of PCR enzyme, but you need the right strain and the right equipment. So, I'm producing the strain and I have the equipment! I'm working on automating the QC (usually very expensive if done by hand) and lyophilizing for super simple logistics.

My idea is that every day you can just put a tube on your robot and it can do however many PCR reactions you need that day, and then the next day, you just throw it out! Bring the price from $1 each to $0.01 and greatly simplify logistics!

Of course, you can't really make that much money off of this... but will still be fun and impactful :)

the__alchemist

As a bio hobbyist, this is fantastic! I don't do enough volume of PCR to think of it as expensive, but your use case of high-volume/automatic sounds fantastic! (And so many other types of reagents and equipment are very expensive).

Some things that would be cool

  - Along your lines: In general, cheap automated setups for PCR and gels
  - Cheap/automatic quantifiable gels. E.g. without needing a kV supply, capillary, expensive qPCR machines etc.
  - Cheaper enzymes in general
  - More options for -80 freezers
  - Cheaper/more automated DNA quantification. I got a v1 Qubit which gets the job done, but new ones are very expensive, and reagent costs add up.
  - Cheaper shaking incubator options. You can get cheap shakers and cheap incubators, but not cheap combined ones... which you need for pretty much everything. Placing one in the other can work, but is sub-optimal due to size and power-cord considerations.
  - More centrifuges that can do 10k g... this is the minimum for many protocols.
  - Ability to buy pure ethanol without outrageous prices or hazardous shipping fees.
  - Not sure if this is feasible but... reasonable-cost machines to synthesize oligos?

koeng

I've thought a lot about this! My main goal is to create a cloud lab that doesn't suck - i.e., a remote lab that is actually useful for people - and a lot of these are relevant things. Let me run down the ideas I have for each:

1. You can purchase gel boxes that do 48 to 96 lanes at once. I'd ideally have it on a robot whose only purpose is to load and run these once or twice a day. All the samples coming through get batched together and run

2. A Bioanalyzer seems nice for quantification of things like PCRs, to make sure you're getting the right size, but to be honest I haven't thought that much about it. qPCRs actually become very cheap if you can keep the machines full. You can also use something like a NanoDrop, which is much, much cheaper.

3. Pichia pastoris expression ^

4. You can use a plate reader (another thing that goes bulk nicely), but the reagents you can't really get around (but cheaper in bulk from China)

5. If you aggregate, these become really cheap. The complicated bits are getting the proper cytomat parts for shaking, as they are limited on the used market

6. These can't be automated well, so I honestly haven't thought too much about it.

7. Reagents are cheaper in bulk from China

8. ehhhh, maybe? But not really. But if you think about a scaled centralized system, you can get away with not using oligos for a lot of things

desireco42

That sounds really cool. I wouldn't agree that you can't make money off this; you can make money off anything, you just have to find people who need it, and it seems you have.

Anyhow good luck. Would love to follow if you do anything with this in the future. Do you have a blog or anything?

null

[deleted]

jesse__

I've been working on a 3D voxel-based game engine for about 10 years in my spare time. The most recent big job has been porting the world gen and editor to the GPU, which has had some pretty cute knock-on effects. The most interesting is that you can hot-reload the world-gen shaders and your changes pop out on the screen, like a voxel version of Shadertoy.

https://github.com/scallyw4g/bonsai

I also wrote a metaprogramming language which generates a lot of the editor UI for the engine. It's a bespoke C parser that supports a small subset of C++, which is exposed to the user through a 'scripting-like' language you embed directly in your source files. I wrote it as a replacement for C++ templates and in my completely unbiased opinion it is WAY better.

https://github.com/scallyw4g/poof

sonnig

Looks great!

Lucasoato

That’s so great, good luck with your project!

mikodin

A simple comment, but wow I really like the look of Bonsai! The lighting, shading and shapes are really beautiful, I think a game made in this would feel really unique

k0ns0l

wow, this is so cool Jesse :)

tomburgs

I have been building a workout tracking app specifically aimed at weightlifting/strength training.

My perfect user is someone who is either a bodybuilder, a powerlifter, or someone who just takes weightlifting seriously.

I've also been obsessed with making it iOS native and a one-time purchase.

Been trying to build in public on Bluesky: @tobu.bsky.social

Simple landing page with a waitlist: https://plates.framer.website/

ftomassetti

As of today I finished and published my book on migrating RPG code to modern languages: https://www.amazon.com/Migrating-RPG-Code-Modern-Languages-e...

So I will rest for a few days :D

rta5

I have been working on an open-source automotive controller that can run TockOS (embedded operating system written in Rust).

The rough overview is on my X post here: https://x.com/BobAdamsEE/status/1965573686884434278

It's a long-running process. The HW is mostly defined (but not laid out) and on pause while I work on porting TockOS to an ATSAMV71, to make sure I won't run into any project-ending issues with the SW before I build the hardware.

tamnd

I am working on the little book of algorithms: https://github.com/little-book-of/algorithms

A project to implement 1000 algorithms. I have finished around 400 so far and I am now focusing on adding test cases, writing implementations in Python and C, and creating formal proofs in Lean.

It has been a fun way to dive deeper into how algorithms work and to see the differences between practical coding and formal reasoning. The long-term goal is to make it a solid reference and learning resource that covers correctness, performance, and theory in one place.

The project is still in its draft phase and will be heavily edited over the next few months and years as it grows and improves.

If anyone has thoughts on how to structure the proofs or improve the testing setup, I would love to hear ideas or feedback.

0x00cl

Wow, that looks fun, and you probably get to learn a lot about algorithms.

I don't have any feedback, but rather a question. I've seen many repositories of people sharing their algorithms, at least on GitHub, in many different languages (e.g. https://github.com/TheAlgorithms). What did you find was missing or lacking from those repositories that made you want to write a book and implement hundreds of algorithms?

MASNeo

Great idea. I had been thinking about pretty much the same thing, but perhaps targeted at executives and perhaps including AI/cloud.

I usually feel too many people wildly throw around terms they hardly understand, in the belief that they cannot possibly understand them. That's so wrong; every executive should understand some of what determines the bottom line. It's not like people skip economics because it's hard.

Would love to perhaps contribute sometime next year. Starred, and until then good luck - perhaps add a donation link!

tamnd

Thanks! I completely agree. In more than ten years of consulting, training, and architecting systems for clients across government and enterprise, I have seen the same pattern. From "big data" and "cloud" through to "AI" and "GenAI" today, these buzzwords have often been misunderstood by most of the C-suite. In my entire career, explaining the basics and setting the right expectations has always been the hardest part.

I really like your idea of targeting executives and connecting it to real business outcomes. Getting decision makers to truly understand the fundamentals behind the technology would make a huge difference.

tamnd

I hope the next generation learns to love C and algorithms again. I have rediscovered my appreciation for C recently, even though Go is my main professional programming language.

kragen

That's a cool project!

I feel like the presentation of Lomuto's algorithm on p.110 would be improved by moving the i++ after the swap and making the corresponding adjustments to the accesses to i outside the loop. Also mentioning that it's Lomuto's algorithm.
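
A minimal sketch of the variant being suggested (Lomuto partition with `i` incremented after the swap, so `i` ends up at the pivot's final slot); the function name here is illustrative, not from the book:

```c
/* Lomuto partition: pivot = a[hi]; returns the pivot's final index.
   i is the next slot for an element smaller than the pivot and is
   incremented *after* each swap, so no +1/-1 adjustments are needed
   when placing the pivot at the end. */
int lomuto_partition(int a[], int lo, int hi) {
    int pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++;  /* increment after the swap */
        }
    }
    int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
    return i;
}
```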

These comments are probably too broad in scope to be useful this late in the project, so consider them a note to myself. C as the language for presenting the algorithms has the advantage of wide availability, not sweeping performance-relevant issues like GC under the rug, and stability, but it ends up making the implementations overly monomorphic. And some data visualizations as in Sedgewick's book would also be helpful.

tamnd

My biggest inspiration for this project, though, is The Art of Computer Programming (TAOCP); that level of depth and precision is the ultimate goal. I'm also planning to include formal proofs of all algorithms in Lean, though that could easily turn into a 10-year project.

tamnd

Sedgewick's Algorithms book is great for practical learning but too tied to Java and implementation details. It is a bit shallow on theory, though the community and resources for other languages help.

That said, I personally prefer Introduction to Algorithms (CLRS) for its formal rigor and clear proofs, and Grokking Algorithms for building intuition.

The broader goal of this project is to build a well tested, reference quality set of implementations in C, Python, and Go. That is the next milestone.

amelius

Sedgewick is available for C also.

tamnd

For visualization, I'm considering using the awesome Manim library by 3Blue1Brown (https://3b1b.github.io/manim/getting_started/quickstart.html). I'm not very good at visual design, but let's see what I can come up with.

tamnd

To reduce the current monomorphism, I might add a generic version using void* and a comparator, or generate code for a few key types, while keeping the simple monomorphic listings for readability. (Though this would make the code a bit more complex)
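
For the void*-and-comparator route, a qsort-style sketch (names and signatures here are illustrative, not from the book):

```c
#include <stddef.h>

/* Generic linear search: returns the index of the first element equal
   to *key, or -1 if absent. The element type is abstracted behind a
   size and a comparator, as in the standard library's qsort/bsearch. */
int find_index(const void *key, const void *base, size_t n, size_t size,
               int (*cmp)(const void *, const void *)) {
    const char *p = base;
    for (size_t i = 0; i < n; i++)
        if (cmp(key, p + i * size) == 0)
            return (int)i;
    return -1;
}

/* Comparator for int, usable with find_index and qsort alike. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}
```

The readability cost mentioned above is visible even in this tiny example: the generic version trades the simplicity of `a[i] == key` for casts and pointer arithmetic.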

teiferer

Nice to see that you are still around with this after your https://news.ycombinator.com/item?id=45448525 was flagged because of LLM slop issues of your work. How are you addressing those?

null

[deleted]

vivzkestrel

I am working on something pretty radical in this space: a book of algorithms that derives each algorithm without telling you what the algorithm is. For example, for binary search your book quickly went into the (low + high) / 2 = mid thing. My method is radically different: I take an even-sized array and try to actually find the element step by step, then take an odd-sized array and find it step by step, derive a general hypothesis, and then create the formula for that algorithm from it. This is going to be orders of magnitude above any data structures and algorithms books and courses when it comes out. Pinky promise.
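
For reference, the formula such a derivation arrives at is the standard binary-search midpoint; a minimal sketch in Python (in fixed-width-integer languages, low + (high - low) // 2 is preferred to avoid overflow):

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2  # the derived midpoint formula
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```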

jbarrow

Training ML models for PDF forms. You can try out what I’ve got so far with this service that automatically detects where fields should go and makes PDFs fillable: https://detect.semanticdocs.org/ Code and models are at: https://github.com/jbarrow/commonforms

That’s built on a dataset and paper I wrote called CommonForms, where I scraped CommonCrawl for hundreds of thousands of fillable form pages and used that as a training set:

https://arxiv.org/abs/2509.16506

Next step is training and releasing some DETRs, which I think will drive quality even higher. But the ultimate end goal is working on automatic form accessibility.

olooney

I found a neat way to do high-quality "semantic soft joins" using embedding vectors[1] and the Hungarian algorithm[2] and I'm turning it into an open source Python package:

https://github.com/olooney/jellyjoin

It hits a sweet spot by being easier to use than record linkage[3][4] while still giving really good matches, so I think there's something there that might gain traction.

[1]: https://platform.openai.com/docs/guides/embeddings

[2]: https://en.wikipedia.org/wiki/Hungarian_algorithm

[3]: https://en.wikipedia.org/wiki/Record_linkage

[4]: https://recordlinkage.readthedocs.io/en/latest/
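
The underlying idea can be sketched in a few lines. This is an illustration of the technique, not jellyjoin's actual API, and for small inputs brute force over permutations stands in for the Hungarian algorithm:

```python
import itertools
import math

def soft_join(left_vecs, right_vecs):
    """Match each left row to a distinct right row by embedding similarity."""
    def cos(a, b):  # cosine similarity of two vectors
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))

    sim = [[cos(a, b) for b in right_vecs] for a in left_vecs]
    # Optimal one-to-one assignment maximizing total similarity.
    # (The Hungarian algorithm solves this in polynomial time; brute
    # force is used here only to keep the sketch self-contained.)
    best = max(itertools.permutations(range(len(right_vecs)), len(left_vecs)),
               key=lambda p: sum(sim[i][j] for i, j in enumerate(p)))
    return [(i, j, sim[i][j]) for i, j in enumerate(best)]
```

In practice the vectors would come from an embedding model, and `scipy.optimize.linear_sum_assignment` would replace the brute-force search.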

mmaaz

I love this as someone who used to work on max-weight matchings and now works on LLMs :)

guskel

Very neat. As a heavy user of recordlinkage, this is definitely on my radar.

sbrother

This is very cool! Thanks for sharing.

pbronez

Cool project!

I see you saved a spot to show how to use it with an alternative embedding model. It would be nice to be able to use the library without an OpenAI api key. Might even make sense to vendor a basic open source model in your package so it can work out of the box without remote dependencies.

olooney

Yes, I'm planning out-of-the-box support for nomic[1] which can run in-process, and ollama which runs as a local server and supports many free embedding models[2].

[1]: https://www.nomic.ai/blog/posts/nomic-embed-text-v1

[2]: https://ollama.com/search?c=embedding

conditionnumber

Project is super cool.

If you're adding more LLM integration, a cool feature might be sending the results of allow_many="left" off to an LLM completions API that supports structured outputs. E.g. imagine N_left=1e5 and N_right=1e5, but they are different datasets. You could use jellyjoin to identify the top ~5 candidates in right for each left, reducing candidate matches from 1e10 to 5e5. Then you ship the 5e5 off to an LLM for final scoring/matching.

Michael9876

I'm a dev and also a private pilot. Currently I'm working on Pilot Kit: https://air.club/ , a mobile app born from my own frustration with the amount of tedious paperwork in aviation.

It's an all-in-one toolkit designed to automate the boring stuff so you can focus on flying. Core features include: automatic flight tracking that turns into a digital logbook entry, a full suite of E6B/conversion calculators, customizable checklists, and live weather decoding.

It’s definitely not a ForeFlight killer, but it's a passion project I'm hoping can be useful for other student and private pilots.

App Store: https://apps.apple.com/app/pilot-kit/id6749793975 Google Play: https://play.google.com/store/apps/details?id=club.air.pilot...

Any feedback is welcome!

asim

An X replacement that isn't Threads, Bluesky or Mastodon? https://micro.mu/blog/2025/10/13/more-on-mu.html