
Launch HN: InspectMind (YC W24) – AI agent for reviewing construction drawings

December 10, 2025

Hi HN, we're Aakash and Shuangling of InspectMind (https://www.inspectmind.ai/), an AI “plan checker” that finds issues in construction drawings, details, and specs.

Construction drawings quietly go out with lots of errors: dimension conflicts, coordination gaps, material mismatches, missing details, and more. These errors turn into delays and hundreds of thousands of dollars of rework during construction. InspectMind reviews the full drawing set of a construction project in minutes. It cross-checks architecture, engineering, and specifications to catch issues that cause rework before building begins.

Here’s a video with some examples: https://www.youtube.com/watch?v=Mvn1FyHRlLQ.

Before this, I (Aakash) built an engineering firm that worked on ~10,000 buildings across the US. One thing that always frustrated us: a lot of design coordination issues don’t show up until construction starts. By then, the cost of a mistake can be 10–100x higher, and everyone is scrambling to fix problems that could have been caught earlier.

We tried everything: checklists, overlay reviews, peer checks. But scrolling through 500–2,000 PDF sheets and remembering how every detail connects to every other sheet is a brittle process. City reviewers and GC pre-con teams try to catch issues too, yet errors still sneak through.

We thought: if models can parse code and generate working software, maybe they can also help reason about the built environment on paper. So we built something we wished we had!

You upload drawings and specs (PDFs). The system breaks them into disciplines and detail hierarchies, parses geometry and text, and looks for inconsistencies:

- Dimensions that don't reconcile across sheets
- Clearances blocked by mechanical/architectural elements
- Fire/safety details missing or mismatched
- Spec requirements that never made it into drawings
- Callouts referencing details that don't exist
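
To make one of these concrete, here's a toy sketch of the dimension-reconciliation idea. Everything here (the function name, the tuple shape, the tolerance) is invented for illustration; this is not our production code:

    # Toy sketch: flag dimensions that don't reconcile across sheets.
    from collections import defaultdict

    def find_dimension_conflicts(dims, tol=0.01):
        """dims: list of (element_id, sheet, value_in_feet) extracted upstream.
        Flags elements whose dimensions disagree across sheets beyond tol."""
        by_element = defaultdict(list)
        for element_id, sheet, value in dims:
            by_element[element_id].append((sheet, value))
        conflicts = []
        for element_id, readings in by_element.items():
            values = [v for _, v in readings]
            if max(values) - min(values) > tol:
                conflicts.append({"element": element_id, "readings": readings})
        return conflicts

    # A corridor dimensioned 6'-0" on the plan but 5'-10" on the enlarged plan:
    print(find_dimension_conflicts([
        ("corridor-2F", "A-101", 6.0),
        ("corridor-2F", "A-401", 5.83),
    ]))

The hard part in practice is the extraction upstream (getting reliable element identities and values out of PDFs), not the comparison itself.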

The output is a list of potential issues with sheet refs and locations for a human to review. We don't expect automation to replace design judgment, just to help AEC professionals not miss the obvious stuff. Current AI models are good at the obvious stuff and can process data at volumes far beyond what humans can accurately handle, so this is a good application for them.

Construction drawings aren't standardized, and every firm names things differently. Earlier "automated checking" tools relied heavily on manually written rules per customer and broke when naming conventions changed. Instead, we're using multimodal models for OCR + vector geometry, callout graphs across the entire set, constraint-based spatial checks, and retrieval-augmented code interpretation. No more hard-coded rules!
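
As a rough illustration of the callout-graph piece: every callout is an edge from the sheet where it appears to the detail it references, and dangling edges become findings. A minimal sketch, with names invented for this example:

    # Minimal sketch: detect callouts referencing details that don't exist.
    def dangling_callouts(callouts, defined_details):
        """callouts: list of (sheet, detail_ref); defined_details: set of refs."""
        return [(sheet, ref) for sheet, ref in callouts if ref not in defined_details]

    print(dangling_callouts(
        callouts=[("A-101", "3/A-501"), ("A-102", "7/A-502")],
        defined_details={"3/A-501"},  # say 7/A-502 was deleted in a late revision
    ))  # -> [('A-102', '7/A-502')]

The model-driven part is populating those inputs reliably; once the graph exists, the check itself is simple.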

We’re processing residential, commercial, and industrial projects today. Latency ranges from minutes to a few hours depending on sheet count. There’s no onboarding required; simply upload PDFs. There are still lots of edge cases (PDF extraction weirdness, inconsistent layering, industry jargon), so we’re learning a lot from failures, maybe more than from successes. But the tech is already delivering results that weren’t possible with previous tools.

Pricing is pay-as-you-go: we give an instant online quote per project after you upload the project drawings. It’s hard to do regular SaaS pricing since one project may be a home remodel and another may be a high-rise. We’re open to feedback on that too; we’re still figuring it out.

If you work with drawings (as an architect, engineer, MEP, GC preconstruction team, real estate developer, or plan reviewer), we’d love a chance to run a sample set and hear what breaks, what’s useful, and what’s missing!

We’ll be here all day to go into technical details about geometry parsing, clustering failures, and code-reasoning attempts, or real-world construction stories about how things go wrong. Thanks for reading! We’re happy to answer anything and look forward to your comments!

knollimar

What kind of system do you have for parsing symbology?

Do you check anything like cross-discipline coordination (e.g., searching online specification data for parts on drawings, like mechanical units, and detecting mismatches with the electrical spec), or is it wholly within one trade's code at a time?

edit: there's info that answers this on the website. It seems limited to the common ones (e.g. elec vs arch), which makes sense.

aakashprasad91

Symbol variation is a huge challenge across firms.

Our approach mixes OCR, vector geometry, and learned embeddings so the model can recognize a symbol plus its surrounding annotations (e.g., “6-15R,” “DIM,” “GFCI”).

When symbols differ by drafter, the system leans heavily on the textual/graph context so it still resolves meaning accurately. We’re actively expanding our electrical symbol library and would love sample sets from your workflow.
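
Roughly, the resolution logic works like the sketch below. The labels, hints, and thresholds are all made up for illustration; the real system uses learned embeddings rather than a lookup table:

    # Sketch: let nearby annotation text disambiguate an ambiguous glyph.
    ANNOTATION_HINTS = {
        "GFCI": "gfci_receptacle",
        "6-15R": "nema_6_15_receptacle",
        "WP": "weatherproof_receptacle",
    }

    def resolve_symbol(detector_guess, detector_confidence, nearby_text):
        for token in nearby_text:
            if token.upper() in ANNOTATION_HINTS:
                return ANNOTATION_HINTS[token.upper()]  # text beats the glyph
        if detector_confidence > 0.8:
            return detector_guess
        return "unresolved"  # flag for human review instead of guessing

    print(resolve_symbol("duplex_receptacle", 0.55, ["6-15R"]))
    # -> 'nema_6_15_receptacle'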

aakashprasad91

We parse symbols using a mix of vector geometry, OCR, and learned detection for common architectural/MEP symbols. Cross-discipline checks are a big focus as we already flag mismatches between architectural, structural, and MEP sheets, and we’re expanding into deeper electrical/mechanical spec alignment next. Would love to hear which symbols matter most in your workflow so we can improve coverage.

djprice1

What do you mean when you say "vector geometry"? Are you using the geometry extracted from PDFs directly? I'm curious how that interacts with the OCR and detection-model portion of what you're doing.

knollimar

I do electrical, so parsing lighting is often a big issue. (Subcontractor)

One big issue I've had is that drafters use the same symbol for different things, depending on the person. One person's GFCI is another's switched receptacle. People use the specialty outlet symbol sometimes very precisely and other times not, often accompanied by an annotation (e.g., 6-15R).

Dimmers being ambiguous is huge; avoiding dimming-type mismatches is basically 80% of the Lutron value-add.

oscarmcdougall

We're in a similar space doing machine-assisted lighting take-offs for contractors in AU/NZ, with bespoke models trained for identifying & measuring luminaires on construction plans.

Compliance is a space we've branched into recently. Would be super interested in seeing how you guys are currently approaching symbol detection.

aakashprasad91

Happy to swap notes. If you send a representative lighting plan set, we can run it and share how the detector clusters, resolves, and cross-references symbols across sheets. Always excited to compare approaches with teams solving adjacent problems.

sparselogic

This is fun to see. Some of my family are Division 10 contractors: their GCs love them because they spot design coordination and code issues early and keep the project from getting derailed. Bringing that to the entire project is a serious lifesaver.

aakashprasad91

Totally! Division 10 and specialty trades are often the first to see coordination issues show up in the field. We’re trying to bring that same early-warning benefit across the entire drawing set so errors never make it to construction. Would love to run a real project from your family’s world if they’re open to it!

knollimar

Maybe this is saying the quiet part out loud: how do you deal with bogus specs that designers end up not caring about since they're copy-pasted? Is it just mission accomplished when you point out a potential difficulty?

aakashprasad91

We see that a lot — specs that are clearly boilerplate or outdated relative to the drawings. Our goal isn’t to force a change, but to surface where the specs and drawings diverge so the designer can quickly decide what’s intentional vs what’s baggage. “Flag + context for fast human judgment” is the philosophy.

pondemic

I’m sure commissioning engineers would have a field day with this. Have you considered use cases on the larger owner’s side of things? As an owner’s rep I can definitely see value here at an SD and DD level, especially if the owner has a decently sized Facilities or commissioning team.

aakashprasad91

Great point! Owner’s reps and commissioning teams are becoming one of the fastest-growing user groups for us. At SD/DD we can surface coordination risks early, highlight spec–drawing mismatches, and give owners a clearer picture of design completeness before things get locked in. If you’re open to it, we’d love to run a sample SD/DD set from your world and see what’s most useful.

zodo123

How does your system do with hand-drawn plans from an old-school architect? Is reliable OCR and line reading dependent on CAD output plans?

Doerge

I love this!

Stupid question: Would BIM solve these issues? I know northern Europe is somewhat advanced in that direction. What kind of digitalization pace do you see in the US?

knollimar

BIM just shuffles the problem around. There are firms that do "one source of truth" BIM models but the real issue is conflicts and workflow buy in.

How do you get the architect to agree with the engineer, the lighting designer, and the lighting contractor when they all have different, non-overlapping deadlines, work periods, knowledge, and scope?

edit: if you don't work in the industry, BIM helps for "these two things are in the same spot", but not much for code unless it's about clearance or some spatial based calculation

aakashprasad91

100% agree the hardest problems are workflow and incentives, not file formats.

Even with a perfect BIM model, late changes and discipline silos mean drawings still diverge and coordination issues sneak through.

We’re trying to be the “safety net” that catches what falls through when teams are moving fast and not perfectly in sync.

aakashprasad91

BIM definitely helps, but most projects still rely heavily on 2D PDFs for coordination and permitting, especially in the US. Even when BIM exists, drawings often lag behind the model and changes don’t stay perfectly synced. We see AI plan checking as a bridge that helps teams catch what falls through the cracks in today’s workflows. And BIM only catches certain issues, not building-code compliance and the like.

testUser1228

The bathroom height example in your video is really interesting (checking the bathroom height above the toilet against building code), how does it know when to check drawings against code provisions and how does it know which code to look at?

aakashprasad91

We infer the applicable codes from the project metadata + the drawings themselves.

The location + occupancy/use type tells us the governing code families (e.g., IBC/IRC, ADA, NFPA, local amendments), and then we parse the sheets for callouts, annotations, assemblies, and spec sections to map them to the relevant provisions.

So the system knows when to check (e.g., plumbing fixture clearances) because of the objects it detects in the drawings, and it knows what code to check based on jurisdiction + building type + what’s being shown in that detail.

The model still flags with human-review intent so designer judgment stays in the loop.
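
In spirit it's a lookup like the sketch below; in reality the mapping is learned/retrieved rather than hard-coded, and the check and code names here are invented for illustration, not legal guidance:

    # Sketch: detected objects + occupancy -> applicable code checks.
    CHECKS_BY_OBJECT = {
        "toilet": ["fixture_side_clearance", "fixture_front_clearance"],
        "stair": ["riser_height", "tread_depth", "headroom"],
        "rated_wall": ["penetration_firestopping"],
    }

    def applicable_codes(occupancy):
        codes = ["IRC"] if occupancy == "single_family" else ["IBC"]
        if occupancy in ("assembly", "business", "mercantile"):
            codes.append("ADA")
        # jurisdiction-specific local amendments would layer on top here
        return codes

    def checks_for(detected_objects, occupancy):
        codes = applicable_codes(occupancy)
        return [(obj, check, codes)
                for obj in detected_objects
                for check in CHECKS_BY_OBJECT.get(obj, [])]

    print(checks_for(["toilet", "stair"], "business"))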

testUser1228

Gotcha, so the model is identifying elements on the sheets and determining when to run code checks? Is the model running thousands of code checks per drawing set? I would imagine there are lots of elements that could trigger that

aakashprasad91

Yep, the model identifies objects/conditions on sheets (fixtures, stairs, rated walls, landings, etc.) and triggers the relevant checks automatically. It can run thousands of checks per project, but we only surface high-confidence findings where the combination of geometry + annotations + code context points to a real risk. Humans stay in the loop to confirm what matters.
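
The surfacing filter is conceptually like this sketch; the signal names and thresholds are illustrative, not our actual values:

    # Sketch: run everything, surface only findings where independent signals agree.
    def surface(findings, min_signals=2, min_confidence=0.85):
        surfaced = []
        for f in findings:
            # signals per finding, e.g. {"geometry": 0.9, "annotation": 0.88, ...}
            agreeing = [s for s in f["signals"].values() if s >= min_confidence]
            if len(agreeing) >= min_signals:
                surfaced.append(f)
        return surfaced

    finding = {"signals": {"geometry": 0.9, "annotation": 0.88, "code_context": 0.6}}
    print(surface([finding]))  # geometry + annotation agree -> surfaced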

knollimar

Is the pay-as-you-go model %-based or sized to the project? I've had issues with conflicts of interest around being lean vs. not. It's hard to sell on %-based revenue.

Also, who is this targeted at? Subcontractors, GCs, design?

aakashprasad91

We price per project based on size/complexity, not a % of construction cost, so there’s no conflict of interest around bigger budgets. Today our main users are architects/engineers and GC pre-con teams, but subs who catch coordination issues early also get a ton of value.

knollimar

At what stage do you run this on plans? Like DD, or some % CD? What's the intended target timeframe?

I don't see how subs get much value unless they can use it on ~80% CD for bid phases

aakashprasad91

Most teams run us late DD through CD, anywhere the set is stable enough that coordination issues matter. Subs especially like running it pre-bid at ~80–100% CDs so they don’t inherit coordination risk. Earlier checks also help designers tighten the set before hand-offs, so value shows up at multiple stages. Eventually the goal is to be a continuous QA tool, including during construction, by pulling in field data and comparing it to drawings and specs, e.g., the drawings showed size X but field photos show size Y.

cannedbread

When I upload my drawing set, how often should I expect it to hallucinate? And how much of the real stuff does it flag?

aakashprasad91

Hallucinations still happen occasionally, but we bias heavily toward high-confidence findings so noise stays low. On typical projects we surface a few hundred coordination issues that are real, observable conflicts across sheets rather than speculative checks. We’re actively improving precision by learning from every false positive customers flag. We show you the drawings, specs, etc., so you can verify everything yourself rather than just trusting the AI.

shuangly

We do extensive preprocessing to ensure the AI receives accurate context, data, and documents for review, and we’re continuously refining this, so accuracy keeps improving every day. Accuracy isn't fully stable across projects yet, but we've had findings with >90% accuracy.

frogguy

Are you doing code checks for structural issues? If so, how do you deal with licensing on common code orgs, such as ASCE?

aakashprasad91

Great question. We currently focus primarily on coordination, dimension conflicts, missing details, and clear code-triggered checks that don’t require sealed structural judgment. For structural code references (e.g., ASCE-7), we infer applicable sections and surface potential issues for a licensed engineer to review. We don’t replace engineering judgment or sealed design accountability.

T1tt

"an AI “plan checker”" do you have some public benchmark for how many issues you can find?

how does this work behind the scenes?

aakashprasad91

Great questions. We’re working on a more formal public benchmark and will share results as our dataset grows. Today, we typically catch coordination issues like conflicting dimensions, missing callouts, building code and clearance violations that humans often miss in large sheet sets. Behind the scenes it’s a multimodal workflow: OCR + geometry parsing + cross-sheet callout graph + constraint checks vs. code/spec requirements.

BoorishBears

Not shade, and it's a small thing, but why do you list your investors as social proof here?

Isn't the target persona someone who'd be at best indifferent, and at worst distrustful, of a tech product that leads with how many people invested in it? Especially vs the explanation and actual testimonials you're pushing below the fold to show that?

aakashprasad91

Totally fair callout and appreciate the feedback. We’re already testing alternative hero layouts focused purely on real customer results and example issues caught. Our goal is to win trust by demonstrating usefulness/results, not who invested in us.

an_aparallel

Where would my firm's documents end up (on whose servers) to do this checking? I don't know how any firm would just hand out their CDs like that.

Or is being that lax normal these days?

Aside: this field is insanely frustrating; the chasm between clash detection and resolution is a right ball ache. Between ACC, Revizto, and Aconex clash detection (and the like), the de facto standard is pretty much telling me X is touching Y... great... can you group this crap intelligently to get my high-rise clashes per discipline from 2,000 down to 10? Can you navigate me there in Revit? (Yes, switchback in Revizto is great, but Revizto itself could improve.)

aakashprasad91

Yes one of the biggest values of our system is reducing “noise.” Instead of surfacing 2,000 micro-clashes, we cluster findings into higher-order issues (e.g., “all conflicts caused by this duct run” or “all lighting mismatches tied to this dimming spec”). We’re not a BIM viewer yet, but we do map issues back to sheet locations, callouts, and detail references so teams can navigate directly to the real source of the problem.
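
The clustering step is conceptually simple once each clash carries a root-cause attribution (which is the hard, model-driven part). A toy sketch, with invented field names:

    # Sketch: collapse raw clashes into reviewable issues by shared root cause.
    from collections import defaultdict

    def cluster_by_root_cause(clashes):
        clusters = defaultdict(list)
        for clash in clashes:
            clusters[clash["root_cause"]].append(clash)
        return clusters

    clashes = [
        {"id": 1, "root_cause": "duct_run_D-12"},
        {"id": 2, "root_cause": "duct_run_D-12"},
        {"id": 3, "root_cause": "dimming_spec_26-09-23"},
    ]
    for cause, group in cluster_by_root_cause(clashes).items():
        print(cause, len(group))  # thousands of clashes -> a handful of causes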

aakashprasad91

We store files securely on AWS with strict access controls, encryption in transit and at rest, and zero sharing outside the file owner’s account. Only our engineers can access a project for debugging and only if the customer explicitly allows it. We can also offer an enterprise option with private cloud/VPC deployment for firms that require even tighter controls. Users can delete all files permanently at any time.
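
For the storage side, the shape of it is roughly this sketch (bucket and key names are made up; this is illustrative, not our actual infrastructure code):

    # Sketch: encrypted-at-rest uploads and permanent per-project deletion on S3.
    import boto3

    s3 = boto3.client("s3")

    def store_project_file(bucket, project_id, filename, data):
        s3.put_object(
            Bucket=bucket,
            Key=f"{project_id}/{filename}",
            Body=data,
            ServerSideEncryption="aws:kms",  # encryption at rest
        )

    def purge_project(bucket, project_id):
        """Permanently delete every object under a project prefix."""
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=f"{project_id}/"):
            for obj in page.get("Contents", []):
                s3.delete_object(Bucket=bucket, Key=obj["Key"])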

shuangly

Documents are stored on AWS with strict access controls, meaning they are only accessible to the file owner and, if necessary, our engineers for debugging purposes. After the check, users can delete the project and optionally permanently delete the files from our S3 buckets on AWS.