Launch HN: Karumi (YC F25) – Personalized, agentic product demos
8 comments
November 24, 2025
icyfox
Seems like the live demo is bear hugged - been waiting for ~5 minutes now. A bit ironic given their landing page: "Don't make your prospects wait–ever again"
In its current iteration, this demo might do more to discourage your future clients than to encourage them.
I like the idea in general as an alternative to needing to book with a BDE. I'd always prefer to just self serve for a new product; anything that gates my time (sales calls, popover walkthroughs, etc) is something I'd prefer to skip. But I know non-engineering customers really love these calls to see the power of a new platform. I wonder if they'll be as engaged during an AI walkthrough versus when there's a person on the other end of the phone.
tonilopezmr
Hi, thanks! The load was massive. Try now!
klaushougesen1
I was actually impressed with the voice and understanding - good work.
tekacs
Homepage:
> Don’t make your prospects wait–ever again
> [...] where prospects receive personalized demos, in a video call, instantly.
Demo:
> We are preparing the demo for you...
> Setting up your experience...
> We are experiencing very high demand
> Almost there...
It spins for... well, I don't know how long, because I gave up and navigated away after about 60 seconds.
I'm not super sure what architecture is in use such that 16 minutes of being on the HN frontpage leaves it stalling out and unable to respond to requests after 60 seconds, but... it doesn't feel connected with the homepage messaging.
I absolutely appreciate (and have been subject to!) the HN traffic influx before, but for the nature of the product, when doing an _intentional_ Launch HN (not posted by someone else), it's fairly confidence-eroding to see the architecture fail to handle it in this way.
Really hoping that it's something transient and one-time that can be fixed – but surprised that there exist loading screens for this situation.
tonilopezmr
Hi, thank you for testing! Try now! We have a massive load coming from Product Hunt!
throw03172019
You made me wait. I’d rather schedule a demo. Waste of time reading a bunch of “almost there” messages.
tonilopezmr
Hi, thank you! More than 80k demos in the same minute! Try now!
Hey HN! We're Toni and Pablo, and we're building Karumi (https://www.karumi.ai), a system that lets your users get instant, scalable, guided demos of your product, fully automated, zero human interaction. It works in any language.
Here's a demo video: https://www.loom.com/share/e7f7e00f2284478e8335f8f4d4dac6bd. There's also a live demo at https://www.karumi.ai/meet/start/phlz.
Karumi is an AI agent that operates a real web app in a shared browser session and talks the user through it. Instead of a human giving a screen-share demo, the agent opens your product, clicks around, fills forms, and explains what it's doing.
We started building this as an internal tool at our previous company. As the product grew, people kept asking "what's the right way to demo feature X?". Docs and scripts became outdated quickly, and the quality of demos depended too much on who was presenting. We wanted something closer to a repeatable program: an agent that knows the main flows, understands who it's talking to, and can walk through the product without getting lost.
Over time this turned into three main components:
Planning/control layer
A loop that decides the next step: ask something, click, navigate, reset, etc. It uses a reasoning model, but only within a fixed set of allowed actions with guards (timeouts, depth limits, reset states). It never gets free control of the browser.
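To make that concrete, here is a minimal sketch in TypeScript of what a guarded loop like this could look like. The action set, limits, and decideNextAction interface are illustrative assumptions, not our exact implementation:

    // Illustrative sketch of a guarded planning loop - not our exact code.
    type Action =
      | { kind: 'ask'; question: string }
      | { kind: 'click'; target: string }
      | { kind: 'navigate'; url: string }
      | { kind: 'reset' }
      | { kind: 'done' };

    const MAX_STEPS = 25;           // depth limit
    const STEP_TIMEOUT_MS = 15_000; // per-action timeout

    async function runDemoLoop(
      decideNextAction: (history: Action[]) => Promise<Action>, // reasoning model behind a narrow interface
      execute: (action: Action) => Promise<void>,               // the only code allowed to touch the browser
      resetToKnownState: () => Promise<void>,
    ): Promise<void> {
      const history: Action[] = [];
      for (let step = 0; step < MAX_STEPS; step++) {
        const action = await decideNextAction(history);
        history.push(action);
        if (action.kind === 'done') break;
        if (action.kind === 'reset') {
          await resetToKnownState();
          continue;
        }
        try {
          await withTimeout(execute(action), STEP_TIMEOUT_MS);
        } catch {
          // Escape hatch: never leave the session in an unknown state.
          await resetToKnownState();
        }
      }
    }

    function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
      return Promise.race([
        p,
        new Promise<T>((_, reject) => setTimeout(() => reject(new Error('timeout')), ms)),
      ]);
    }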
Browser execution layer
A controlled browser session, streamed in a video call. The agent can only interact with the elements we want. We log each action with a timestamp and the agent’s “reason”, which helps debug odd behavior.
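As a rough sketch of this side, assuming Playwright as the browser driver (the allowlist, selectors, and log shape below are invented for illustration):

    // Illustrative only: assumes Playwright; the allowlist and log format are examples.
    import { chromium, Page } from 'playwright';

    const ALLOWED_SELECTORS = new Set(['#new-invoice', '#submit', 'nav >> text=Reports']);

    interface ActionLogEntry {
      ts: string;       // timestamp
      selector: string; // what the agent interacted with
      reason: string;   // the agent's stated reason, used to debug odd behavior
    }

    const actionLog: ActionLogEntry[] = [];

    async function guardedClick(page: Page, selector: string, reason: string): Promise<void> {
      if (!ALLOWED_SELECTORS.has(selector)) {
        throw new Error(`selector not on allowlist: ${selector}`);
      }
      actionLog.push({ ts: new Date().toISOString(), selector, reason });
      await page.click(selector, { timeout: 10_000 });
    }

    async function main() {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto('https://app.example.com');
      await guardedClick(page, '#new-invoice', 'Prospect asked how invoicing works');
      await browser.close();
    }

    main().catch(console.error);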
Product knowledge layer
We ingest docs, demo scripts and videos, and usage analytics to train the agent. At runtime, the agent uses this knowledge to decide which flow to show and how to explain it.
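At its simplest, you can think of the runtime choice as scoring ingested flows against what we know about the prospect; a toy sketch (the flow shape and scoring below are simplified stand-ins, and real retrieval is richer than tag matching):

    // Toy retrieval over ingested demo flows; the structure and scoring are illustrative.
    interface DemoFlow {
      name: string;
      tags: string[];   // derived from docs, demo scripts, and usage analytics
      script: string[]; // step-by-step talking points for the agent
    }

    function pickFlow(flows: DemoFlow[], prospectInterests: string[]): DemoFlow {
      const score = (f: DemoFlow) =>
        f.tags.filter(t => prospectInterests.includes(t)).length;
      return flows.reduce((best, f) => (score(f) > score(best) ? f : best));
    }

    const flow = pickFlow(
      [
        { name: 'Invoicing', tags: ['finance', 'billing'], script: ['Open Invoices', 'Create a draft'] },
        { name: 'Onboarding', tags: ['hr', 'contracts'], script: ['Open People', 'Add a new hire'] },
      ],
      ['billing'],
    );
    console.log(flow.name); // "Invoicing"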
Some practical details and limitations:
We only support web apps right now; desktop apps will come next. LLMs introduce non-determinism, so we bias toward safe, predictable behavior: checkpoints, conservative navigation, and “escape hatches” that reset to known states. If the agent doesn’t understand a UI state (unknown modal, layout shift, etc.), it asks the user instead of guessing (there's a rough sketch of this pattern after the pricing note below).
Regarding pricing, it’s still early and we tailor it to each customer based on their needs. Our current thinking is a platform fee plus a per-call charge for the agent; the platform fee varies depending on complexity, support requirements, and overall scope.
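The "ask instead of guess" behavior, as a rough sketch (the state check and checkpoint reset here are illustrative assumptions):

    // Illustrative checkpoint / escape-hatch pattern; the checks are assumptions.
    async function runStep(
      step: () => Promise<void>,
      uiLooksKnown: () => Promise<boolean>,   // e.g. an expected heading or selector is present
      askUser: (question: string) => Promise<void>,
      resetToCheckpoint: () => Promise<void>,
    ): Promise<void> {
      if (!(await uiLooksKnown())) {
        // Unknown modal, layout shift, etc.: ask the user rather than guess.
        await askUser('The screen looks different than I expected. Should I continue, or restart this flow?');
        await resetToCheckpoint();
        return;
      }
      try {
        await step();
      } catch {
        // Conservative navigation: fall back to a known state instead of improvising.
        await resetToCheckpoint();
      }
    }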
People currently use Karumi for inbound demos and internal demo environments. If you want to see it inside a real product, here’s Karumi running in Deel’s platform: https://www.loom.com/share/e7f7e00f2284478e8335f8f4d4dac6bd
We’ll be around to answer questions and look forward to your feedback!