Launch HN: Plexe (YC X25) – Build production-grade ML models from prompts
6 comments
November 4, 2025

brightstar18
Product seems cool. But can you help me understand if what you are doing is different from the following:

> you put a prompt
> Plexe glorifies that prompt into a bigger prompt with more specific instructions (augmented by schema definitions, intent, and whatnot)
> plug it into the provided model/LLM
> .predict() gives me the output (which was heavily guardrailed by the glorified prompt in step 2)
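In other words, something like this (toy code with made-up names, just to illustrate what I'm asking, not Plexe's actual API):

    # my guess at the flow -- stand-in code only
    def augment(prompt, schema, intent):
        # step 2: "glorify" the prompt with schema definitions and intent
        return f"{prompt}\nSchema: {schema}\nIntent: {intent}"

    class PromptWrappedModel:
        def __init__(self, llm, system_prompt):
            self.llm = llm
            self.system_prompt = system_prompt

        def predict(self, x):
            # step 4: .predict() is just an LLM call behind a guardrailed prompt
            return self.llm(f"{self.system_prompt}\nInput: {x}")

    llm = lambda p: "not_fraud"  # stand-in for the provided model/LLM
    model = PromptWrappedModel(llm, augment("fraud detection for transactions", "amount, merchant", "classify"))
    print(model.predict({"amount": 42}))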
tnt128
In the demo, you didn’t show the process of cleaning and labeling data. Does your product handle that somehow, or do you still expect the user to provide cleaned, labeled data after connecting the data source?
johnsillings
very cool – I like how opinionated the product approach is vs. a bunch of disconnected tools for specialists to use (which seems more common in this space).
marcellodb
Thanks, we're pretty opinionated on "this should make sense to non-ML practitioners" being a defining aspect of the product vision. Behind the scenes, we've had quite a few conversations specifically about how to avoid features feeling "disconnected", which is always challenging at an early stage when you're getting pulled in several directions by users with different use cases. Happy to hear it came across that way to you.
Hey HN! We're Vaibhav and Marcello, founders of Plexe (https://www.plexe.ai). We create production-ready ML models from natural language descriptions. Tell Plexe what ML problem you want to solve, point it at your data, and it handles the entire pipeline from feature engineering to deployment.
Here’s a walkthrough: https://www.youtube.com/watch?v=TbOfx6UPuX4.
ML teams waste too much time on generic heavy lifting. Every project follows the same pattern: 20% understanding objectives, 60% wrangling data and engineering features, 20% experimenting with models. Most of this is formulaic, but it burns months of engineering time. Throwing LLMs at the problem isn't the answer either: that just trades engineering time for compute costs and worse accuracy. Plexe automates this repetitive 80% so your team can move faster on the work that actually creates value.
You describe your problem in plain English ("fraud detection model for transactions" or "product embedding model for search"), connect your data (Postgres, Snowflake, S3, direct upload, etc.), and then Plexe:

- Analyzes your data and engineers features automatically
- Runs experiments across multiple architectures (from logistic regression to neural nets)
- Generates comprehensive evaluation reports with error analysis, robustness testing, and prioritized, actionable recommendations
- Deploys the best model with monitoring and automatic retraining
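To make that concrete, here's roughly what the same flow looks like with our open-source library (a sketch: see the GitHub README for the exact interface):

    import pandas as pd
    import plexe

    df = pd.read_csv("transactions.csv")  # your historical, labeled data

    # Describe the problem in plain English; Plexe figures out the rest
    model = plexe.Model(
        intent="Predict whether a transaction is fraudulent",
        input_schema={"amount": float, "merchant": str},
        output_schema={"is_fraud": bool},
    )

    model.build(datasets=[df])  # feature engineering + experiments across architectures
    print(model.predict({"amount": 42.0, "merchant": "acme"}))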
We did a Show HN for our open-source library five months ago (https://news.ycombinator.com/item?id=43906346). Since then, we've launched our commercial platform with interactive refinement, production-grade model evaluations, retraining pipeline, data connectors, analytics dashboards, and deployment for online and batch inference.
We use a multi-agent architecture where specialized agents handle different pipeline stages. Each agent focuses on its domain: data analysis, feature engineering, model selection, deployment, and so on. The platform tracks all experiments and generates exportable Python code.
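Conceptually (a toy illustration only, not our actual implementation), the orchestration looks like a chain of stage-specific agents passing a shared context along:

    # Illustrative sketch -- not Plexe's real internals
    def data_analysis_agent(ctx):
        ctx["profile"] = {"rows": len(ctx["data"])}
        return ctx

    def feature_engineering_agent(ctx):
        ctx["features"] = sorted(ctx["data"][0].keys())
        return ctx

    def model_selection_agent(ctx):
        ctx["experiments"] = ["logreg", "gbm", "mlp"]  # all runs are tracked
        ctx["best_model"] = "gbm"
        return ctx

    def deployment_agent(ctx):
        ctx["endpoint"] = "/models/" + ctx["best_model"]
        return ctx

    ctx = {"data": [{"amount": 42.0, "merchant": "acme"}]}
    for agent in (data_analysis_agent, feature_engineering_agent,
                  model_selection_agent, deployment_agent):
        ctx = agent(ctx)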
Our open-source core (https://github.com/plexe-ai/plexe, Apache 2.0) remains free for local development. For the paid product, pricing is usage-based with a minimum top-up of $10. Enterprises can self-host the entire platform. You can sign up at https://console.plexe.ai and use promo code `LAUNCHDAY20` to get $20 to try out the platform.
We’d love to hear your thoughts on the problem and feedback on the platform!