
Ask HN: Anyone using augmented reality, VR, glasses, helmets etc. in industry?

62 comments

June 25, 2025

Since Google Glass made its debut in 2012, there's been a fair amount of hype around augmented reality and related tech coming into its own in industry, presumably enhancing worker productivity and capabilities.

But I've heard and seen so little use in any industries. I would have thought at a minimum that having access to hands-free information retrieval (e.g. blueprints, instructions, notes), video chat and calls for point-of-view sharing, etc. would be quite useful for a number of industries. There do seem to be interesting pilot trials involving Hololens in US defense (IVAS) as well as healthcare telemonitoring in Serbia.

Do you know of any relevant examples or use cases, or are you a user yourself? What do you think are the hurdles - actual usefulness, display quality, cost, something else?

isk517

Yes, the company I work for has started using Hololens 2. We have a program that can overlay the 3D models from our CAD program onto the physical steel assemblies for QC. When it works, it works well and enables our quality checkers to perform checks faster and more accurately than using tape measures while going back and forth looking at a 2D drawing printed on 11 x 17 paper.

The biggest hurdle is that none of the large companies think there is enough profit to be made from AR. The Hololens 2 is the only headset on the market both capable of running the program required while also being safe to use in an active shop environment (VR with passthrough is not suitable). Unfortunately the Hololens 2 is almost 6 years old and is being stretched to the absolute limits of its hardware capabilities. The technology is good but feels like it is only 90% of the way to where it needs to be. Even a simple revision with double the RAM and a faster, more power-efficient processor would alleviate many of the issues we've experienced.

Ultimately from what I've seen, AR is about making the human user better at their job and there are tons of industries where it could have many applications, but tech companies don't actually want to make things that could be directly useful to people that work with their hands, so instead we will just continue to toss more money at AI hoping to make ourselves obsolete.

NewUser76312

Thank you very much for sharing your experience.

Quick question about your use case - is the 3D overlay really that important, or would you get most of the value simply seeing the blueprints in your heads-up display, maybe doing a quick finger swipe or voice command to switch between pages/images?

isk517

Yes, the 3D overlay is the entire point. A heads-up display is just looking at the blueprint on a piece of paper with an additional layer of complexity; it wouldn't remove the need to manually measure, nor would it provide any assistance in spotting missing attached pieces (or extra pieces). Once the model is overlaid, QC goes from having to measure the placement of every piece and the location of every hole to just walking around the finished assembly and ensuring that everything conforms to the civil-engineer-approved model. A half-hour process can be done more precisely in 5 minutes; you notice very quickly when there is solid steel where the hologram has a hole, or thin air where the hologram shows that a plate was supposed to be welded on.
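
To make the kind of check the overlay replaces concrete, here is a minimal sketch on assumed data: the hole names, coordinates, and 2 mm tolerance are all made up, and this is not the commenter's software, just the underlying deviation check expressed in plain Python.

```python
import math

# Hypothetical CAD hole centers (from the engineer-approved model) and the
# as-built points measured on the steel assembly, in millimetres.
MODEL_HOLES = {
    "H1": (0.0, 0.0, 0.0),
    "H2": (450.0, 0.0, 0.0),
    "H3": (450.0, 300.0, 0.0),
}
AS_BUILT = {
    "H1": (0.4, -0.2, 0.1),
    "H2": (449.5, 0.3, 0.0),
    # "H3" missing: the hole was never drilled
}

TOLERANCE_MM = 2.0  # assumed tolerance, purely illustrative

def check_assembly(model, measured, tol):
    """Flag holes that are missing or out of tolerance versus the model."""
    issues = []
    for name, expected in model.items():
        actual = measured.get(name)
        if actual is None:
            issues.append(f"{name}: missing (expected at {expected})")
        elif (deviation := math.dist(expected, actual)) > tol:
            issues.append(f"{name}: off by {deviation:.1f} mm")
    return issues

if __name__ == "__main__":
    for issue in check_assembly(MODEL_HOLES, AS_BUILT, TOLERANCE_MM):
        print(issue)
```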

isk517

The biggest issue comes from area mapping. In order to keep the hologram steady and anchored, you need to perform a mapping process so that the helmet recognizes both the physical steel assembly and a bit of the surrounding area to keep it steady when moving around. The 8GB of RAM limits the amount of mapping data that can be stored, which in turn limits the size of assembly you can work with, and since the mapping process relies on the helmet's own software, which has not been worked on in years, it is extremely sensitive to any sort of background movement. That means it works best in as controlled an environment as possible.

Right now we are just using it for special projects that are complex and have little margin for error. We'd like to be able to use it for everything, but that isn't feasible with where the tech is currently stuck.

softfalcon

Very interesting, and I agree with your assessment of the difficulties using the aging HoloLens.

I am curious, what size of clients are you working with and how many contracts has it realistically turned into?

I also believe proper AR hardware/software can revolutionize the QA and inspections industry.

What I am noticing is a chicken/egg problem where companies want proof it works, while also being reluctant to put their money where their mouth is and invest in the R&D. Which then leads to Microsoft and similar companies refusing to fully invest in new AR tech.

As such, it all stays mostly in experimental and drawing board land, never quite fully reaching the market.

Thoughts?

isk517

We work with all of the large general contractors in the steel construction industry. Right now, it's turned into one contract, but we are the second company the client has employed, since the first company they hired failed to produce a single assembly that met their requirements. The client was the one originally using this tech, since they wanted a way to do their own QC after that first experience, and we decided it was worthwhile pursuing ourselves since successfully pulling this project off will be a HUGE boost to our reputation. The construction industry is all about your portfolio of past projects.

QA is the big sales point of the software we are using, but there are many other potential applications for the same product. It should be possible to overlay the model on the main assembly prefab and then use that to quickly mark where holes should be drilled and additional pieces attached. The other potential application being explored is using the holographic overlays to construct things out of the usual order: instead of building part 1 and then starting part 2, which needs to conform to the first part, you can build around the hologram so that you're not relying on the previously built parts to ensure your angles are correct.

I agree about the chicken/egg problem. It's an emerging technology where the payoff might be a decade away: customers need software that will actually benefit them, developers need reliable hardware capable of running software that has practical uses, and hardware companies want to know there is a customer base. The issue is AR falls under the category of product that the customer does not know they actually want, so the only way it is going to be developed is if one of the hardware manufacturers takes a leap of faith and makes the long-term investment. Sadly, I feel like AR is a million-dollar idea with practical uses that has to contend with a business climate where you can make billions making some doodad that collects private data and then displays ads to the masses.

isk517

An additional layer of insight to the chicken/egg problem: the developer of the software we are using was founded by someone in the construction industry, not software. I think one of the issues with the adoption of AR is that there is currently a disconnect between the people who have a problem and the people who could produce a solution. Compared to 'a solution in search of a problem', AR seems to be 'a solution that is failing to introduce itself to the problems it can solve'.

itake

There are thousands of ways companies can invest to make their employees more efficient. My guess is companies are choosing to invest in lower-hanging, fatter fruit.

Companies have put billions into R&D, but still haven't delivered a product that surpasses the hurdle rate.

edent

My MSc project was testing VR headsets with office-based employees. Even if you ignore the expense, the health risks of sharing headsets, the lack of decent tools, and the power requirements, you still have to contend with a large subset of people feeling sick when using VR.

I use VR for gaming. The headsets are uncomfortable after about 45 minutes, they're hot and sweaty, and they're incredibly isolating. All that's fine if you want to slay baddies while alone at home, but utterly repellent to most people.

idkwhattocallme

I've gone down this rabbit hole with customers as a PM still working in the space. Here is what I've learned. The past decade of devices (HoloLens, RealWear, Google Glass, Vuzix, etc.) were some combination of way too heavy, expensive, fragile, short battery life, no wifi connectivity, too much UI and too long to get to the point of value, and/or simply not useful. That, and most customers had a content problem.

The AR/VR use in the field typically came down to looking something up in a manual or calling someone. Both easily and perhaps more effectively done on a smartphone. There was an instance where I asked techs why they weren't using the headset for showing what they were seeing in real time, and they said it's easier to FaceTime (hard to argue with that). The cool AR 3-D demos or overlays rarely worked in the field on real equipment or didn't actually convey anything useful (everyone knows the basics of how the machine works). There are training VR use cases (like learning to operate a crane), but once again it's a nice-to-have supplement and not a replacement.

Recent advances with LLMs (specifically voice) plus Meta-type "glasses form factor" devices have created intrigue again with innovation centers at large companies. The use case we're currently working on is inspections or filling out forms with audio/videos.

NewUser76312

Thanks for your comprehensive response! I've also been watching the field for a while, have done some contracting for others trying to make their own AR devices, and dipped my toes in the water making some basic prototypes myself.

>some combination of way too heavy, expensive, fragile, short battery life, no wifi connectivity, too much UI and too long to get to the point of value, and/or simply not useful

Was the screen quality, resolution, visibility in brightness, etc also one of these limiting factors? Or would you say screen quality has gotten reasonable by now?

>The AR/VR use in the field typically came down to looking something up in a manual or calling someone.

That's good to hear as someone interested in the field, I've been skeptical of the fidelity and utility of the fancy augmented 3D overlays.

Ah, I see you realized something similar: >The cool AR 3-D demos or overlays rarely worked in the field on real equipment or didn't actually convey anything useful (everyone knows the basics of how the machine works).

>Both easily and perhaps more effectively done on a smartphone.

Surely there are some use cases where hands-free operation would be a game changer, but I don't know enough about potential industries where this would be the case.

>The use case we're currently working on is inspections or filling out forms with audio/videos.

That's pretty interesting. Do you even need a screen, or just voice? I would think a pretty quick-and-dirty way to do it is to take PDF forms, enumerate (put small numbers next to) every editable field, and then use voice commands like "write the following in field 3: ...". The purpose of having a screen would be to verify what the LLM + voice is inputting into the form. Then at the end you can tell it to save/submit or whatever.
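
As a rough sketch of the parsing half of that idea: the transcript lines, field numbers, and internal field names below are all hypothetical, and the actual PDF write is left to a form-filling library.

```python
import re

# Hypothetical speech-to-text output; the field numbers correspond to the
# small printed labels added next to each editable field on the form.
TRANSCRIPT = [
    "write the following in field 3: bearing plate misaligned by 5 millimeters",
    "write the following in field 7: re-inspect after rework",
]

# Assumed mapping from printed field number to the form's internal field name.
FIELD_NAMES = {3: "defect_description", 7: "follow_up_action"}

COMMAND = re.compile(r"write the following in field (\d+):\s*(.+)", re.IGNORECASE)

def parse_commands(lines):
    """Turn spoken commands into a {field_name: value} dict for a PDF library."""
    values = {}
    for line in lines:
        match = COMMAND.match(line.strip())
        if not match:
            continue  # ignore anything that isn't a recognized command
        number, text = int(match.group(1)), match.group(2).strip()
        if number in FIELD_NAMES:
            values[FIELD_NAMES[number]] = text
    return values

if __name__ == "__main__":
    # The result could be shown on the HUD for confirmation and then applied
    # to the PDF with a form-filling library before saving/submitting.
    print(parse_commands(TRANSCRIPT))
```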

gavinray

I own two AR devices:

- Viture Pro XR glasses

- Vuzix Z100 glasses (through Mentra)

The Vitures I use as a lightweight alternative to VR headsets like the Meta Quest. I lie down on the couch/in bed and watch videos while wearing them.

The Vuzix are meant to be daily-wear glasses with a HUD, have yet to break them in.

Later this year, Google/Samsung are due big AR releases, as is Meta, I think.

It'll be the debut of Android XR.

mentos

My bet is that having a physical monitor will always be the luxury option and that XR will never be able to get away from the annoyance of having something on your face. Curious if you agree or if maybe you prefer Vitures over a physical monitor?

gavinray

I think that _eventually_ VR/AR will be a superior screen-viewing experience, but from what I've tried it's not there yet.

It's good enough for watching videos, but for working and reading text, I personally haven't used a device with high enough text quality to prevent eye strain.

I'm very bullish on AR though, and I'm willing to bet that consumer grade devices which are genuinely comfortable to work in will become available within the next 2-3 years.

To me, AR is the next step in Human-Computer Interaction while we wait for full BCI (Brain-Computer Interface) devices.

mentos

Yea I guess my thought is even if they were light as a single feather they'd still tickle and annoy your face..

Happy to be proven wrong obviously but so far that's my outlook.

hamburglar

I find the quest 3 with virtual monitors actually pretty good from a text-reading perspective and I can use it for a long time, but that’s using a lower resolution than my native monitors. One thing I think is interesting about it is I don’t need my reading glasses, whereas I very much do when looking at a real monitor. I find the virtual display setup somewhat intolerable for other reasons, though, like the inflexibility about how the displays are arranged, and there’s the physical bit about having a bulky HMD on.

eska

For my bachelor's thesis I created a HoloLens 1 app that communicated with Rhino3D to place waypoints for a 5-axis robot arm to cut with a plasma cutter, after calculating, previewing, and confirming the robot arm movements.

In my day job I occasionally hear about some AR startup doing demos for training and parts setup in CNC machines but the value add seems to be too insignificant for the work required.

spookie

They are used for training; I'm personally developing something intended for firefighters.

Hurdles? Battery life, proper hardening against dust/water.

stuaxo

Have you seen the immersive environments by IglooVision?

It's the opposite: surround projection, so you can put a group in a room into a scenario.

jlarocco

I work at a company that does CAD data translation (multiple formats into 3D PDF), and a while back we had an internal hackathon where one of the projects added basic support for VR glasses to our desktop app. It was really neat, and there was some excitement about it, but there hasn't been much follow up. I think the key issue is whether or not it adds enough value to justify buying everybody a headset, and for our use case I don't know if it does, though I'm just a lowly programmer, I don't know what the customers think about it.

We have another product that's geared towards collaborating and sharing data between teams and vendors, and it seems better suited there, but that one is a web application, and I don't know how well VR glasses are supported there.

I think it'd be awesome in the CAD applications themselves but I don't know if any of them support it out of the box.

j-wags

I just got a pair of TCL Rayneo air 2 display glasses since I'm farsighted and my eyes become fatigued after a day of working on a conventional monitor. The increased focal distance seems to help, but the nose piece is weirdly designed and the pressure under the pads becomes a little painful after an hour or two. Also the field of view is too wide and so the edges are blurry (hard to see clock, corner buttons in fullscreen windows, health bar in video games, etc).

Worked great to avoid eye fatigue/posture issues on airplanes though. I'm happy I have them, but in hindsight I'd have gotten a Viture or something with a better nose bridge and a narrower field of view.

gs17

Viture still might be worth it for you, the built-in diopter adjustments might be enough depending on your vision.

geocrasher

I am actively looking into an Oculus 3 for a virtual desktop environment that I can use portably, such as in an Airbnb, instead of lugging around a 43" 4K monitor. I'm also looking at projectors for this purpose. The context is remote work.

aerostable_slug

Try to find a way to try various facial interfaces and head strap arrangements. The stock ones can be uncomfortable for longer-term usage on many people's faces. I don't find this to be much of an issue while playing games (I use Ace, the competition pistol shooting simulator), but it gets to me when viewing movies or reading text/code. Also, extended batteries mean you can't easily rest your head in a chair.

spookie

I would search for lighter options if you intend to use it for long periods of time or get a better head strap.

woadwarrior01

This is the only use case I have for the Apple Vision Pro, and it works quite well for that, paired with my 16" M3 Macbook Pro.

FredPret

- How is reading text for long periods?

- Does your neck get tired?

- Do you ever have to be on video calls? I can't talk to clients looking like a spaceman

crooked-v

Not them, but I have one. I love it for movies or gaming where I can lay down and easily readjust it, but it's just no good for sitting/standing over long periods.

It's not actually the weight. I have a Quest 3 with a BoboVR head strap, external battery, etc that all add up to be heavier than the AVP, but I can easily go for multi-hour social sessions with that on without any physical discomfort. You can put a ton of weight on your head with perfect comfort as long as it's balanced properly.

The AVP's real problem is that its ergonomics are just shit. As with a bunch of other things, they designed for the ads instead of actual usability, so it's significantly worse than headsets that are actually much heavier, and the earband design with the way-too-far-back connectors and no top connections makes it nigh impossible for third parties to improve on it.

The closest thing I've seen to making it comfortable is the third-party ResMed Kontor headstrap, and that's being produced in such low numbers that it's functionally impossible to actually buy.

nahuel0x

Saw some good reviews of the Viture Pro XR for this use case.

koakuma-chan

That's actually a great idea. Don't need multiple screens if you can just use AR/VR.

rightbyte

I think alt-tabbing or virtual desktops has less mental overhead and physical strain than using VR though.

jona777than

This is a good point. I thought about it, too.

I could see a frequent traveler using an AVP as a "full setup" on the go. In my experience, I can get away with a MacBook for most of it. Some projects really benefit from the extra screen real estate (and a mechanical keyboard).

jayd16

It's less mental strain to just lay out the windows spatially. Who doesn't like more monitor space? However, it's much more physical strain than multiple monitors.

ugolino91

I'm one of the founders @ resolvebim.com (YC W15). We built a VR and (some) AR platform to help engineering and construction companies use HMDs in their everyday workflows to enhance 3D BIM review.

We post case studies regularly on our blog, so you can read about real world deployments there: blog.resolvebim.com

From my experience the hardware is still a hurdle simply because it doesn’t completely replace all pc based workflows right now and therefore has to be used selectively at the right moments alongside 2D monitors.

NewUser76312

Thanks for your response, that's certainly an exciting use case, and what I'd hope AR headsets could help with in the future.

From your company's landing page, I saw the video and it looks like you're working with in-office project managers and similar white-collar types.

Do you work with any products in the field, like on the job sites? Is that something that would be interesting or valuable? Some examples: letting workers quickly share first-person recorded videos of issues, first-person video chat with supervisors, the ability to pull up blueprints and instructions in their heads-up displays, etc. Assuming perhaps a different platform than the Meta, as I don't think fully covered VR would be appropriate for a worksite.

ugolino91

Some of our clients do take the quest into the field, but we advise them about the safety hazards. There are use cases where you can use the HMD while standing still and not moving to eliminate the risk of trip hazards. You can see one of our clients doing it here: https://www.linkedin.com/posts/tony-duan-6085021a4_bim-resol...

You can see in that video that you can mark up the site virtually, and yes, you can record video, leave issue markers, and pull up 2D plans from other tools we integrate with like Procore, ACC, etc. However, it still is primarily a stationary tool on site because of the field-of-view limitations.

There are some rumors about next gen MR headsets allowing for a "full field of view" by basically removing the head gasket altogether. We'll see.

codybontecou

There was a recent Nvidia video (https://www.youtube.com/watch?v=_2NijXqBESI) that showcases some of the robotics problems Nvidia is working on.

They use the Apple Vision Pro headset fairly significantly in human interaction and data gathering that they then utilize for simulations.

quantumquetzal

Hello!

I spent a lot of time in graduate school researching AR/VR technology (specifically regarding its utility as an accessibility tool) and learning about barriers to adoption.

In my opinion, there are three major hurdles preventing widespread adoption of this modality:

1. *Weight*: To achieve powerful computation like that of the HoloLens, you need powerful processing. The simplest solution to this is to put the processing in the device, which adds weight to it. The HoloLens 2 weighs approximately 566g (or 1.24lb), which is a LOT of weight compared to a pair of traditional glasses, which weigh approximately 20-50g. Speaking as someone who developed with the HL2 for a few years, all-day wear with the device is uncomfortable and untenable. The weight of the device HAS to be comfortable for all-day use, otherwise it hinders adoption.

2. *Battery*: Ironically, making the device smaller to accommodate all-day wear means that you're simultaneously reducing its battery life, which reduces its utility as an all-day wearable: any onboard battery must be smaller, and thus store less energy. This is a problematic trade-off: you don't want the device to weigh so much that people can't wear it, but you also don't want it to weigh so little that it ceases to have function.

3. *Social Acceptability*: This is where I have some expertise, as it was the subject of my research. Simply put, if a wearer feels as though they stand out by wearing an XR device, they're hesitant to wear it at all when interacting with others. This means that an XR device must not be ostentatious, as the Apple Vision Pro, HoloLens, MagicLeap, and Google Glass all were.

In recent years, there have been a lot of strides in this space, but there's a long way to go.

Firstly, there is increasingly an understanding that the futuristic devices we see in sci-fi cannot be achieved with onboard computation (yet). That said, local, bidirectional, wireless streaming between a lightweight XR device (glasses) and a device with stronger processing power (a la smartphone) provides a potential way of offloading computation from the device itself, and simply displaying results onboard (a rough sketch of this offload pattern appears at the end of this comment).

Secondly, Li+ battery tech continues to improve, and there are now [simple head-worn displays capable of rendering text and bitmaps](https://www.vuzix.com/products/z100-smart-glasses) with a battery life of an entire day. There is also active development work by the folks at [Mentra (YC W25)](https://www.ycombinator.com/companies/mentra) on highlighting these devices' utility, even with their limited processing power.

Lastly, with the first two developments combined, social acceptability is improving dramatically! There are lots of new head-worn displays emerging with varying levels of ability. There was the recent [Android XR keynote](https://www.youtube.com/watch?v=7nv1snJRCEI), which shows some impressive spatial awareness, as well as the [Mentra Live](https://mentra.glass/pages/live) (an open-source Meta Raybans clone). In terms of limited displays with social acceptability, there are the [Vuzix Z100](https://www.vuzix.com/products/z100-smart-glasses), and [Even Realities G1](https://www.evenrealities.com/g1), which can display basic information (that still has a lot of utility!).

As an owner of the Vuzix Z100 and a former developer in the XR space, the progress is slow, but steady. The rapid improvements in machine learning (specifically in STT, TTS, and image understanding) indirectly improve the AR space as well.
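
To make the offloading idea above concrete, here is a minimal sketch of the phone-side half of such a setup, assuming a simple length-prefixed request/response over a local socket; the payload contents, the run_heavy_model stand-in, and port 9000 are all hypothetical, not any vendor's actual API.

```python
import asyncio

# Sketch of the glasses-to-phone offload pattern: the glasses send a small
# request (e.g. a JPEG frame or a voice-command transcript), the phone runs
# the heavy model, and only a short text result goes back to the display.

async def run_heavy_model(payload: bytes) -> bytes:
    """Stand-in for whatever inference the phone actually performs."""
    await asyncio.sleep(0.05)              # pretend to do real inference work
    return b"2 bolts missing on flange"    # tiny result for the HUD

async def handle_glasses(reader, writer):
    # Framing: 4-byte big-endian length prefix, then the payload.
    length = int.from_bytes(await reader.readexactly(4), "big")
    payload = await reader.readexactly(length)
    result = await run_heavy_model(payload)
    writer.write(len(result).to_bytes(4, "big") + result)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_glasses, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```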

ramses0

But why not some sort of hip-clip and cable for battery, or some sort of laser-tag vest with a Mac mini on the front and a battery on the back?

I mean- even the Sony Walkman started with audio streaming from a hip-mounted computer/power device, especially for work/industrial usage?