
Wired Called Our AirGradient Monitor 'Not Recommended' over a Broken Display

thomassmith65

This is an immature response which comes across as "Indoor Air Quality Monitor" populism.

A more professional response would just stick to the facts rather than trash the reviewer and pontificate on what is wrong with journalism today.

cinntaile

I checked out the Wired article. "Not recommended" seems to mostly be about the fact that it's a tiny display, which some people (like the reporter) will have trouble reading. That the screen degraded didn't help, of course. The reporter doesn't want to use a web dashboard to check the readings on his indoor air monitor. I think that's a fair comment, maybe just a bit harsh to put it in the not-recommended bucket. I understand that this can affect sales quite a bit for a small supplier like AirGradient.

It's linked in the article but here it is. https://www.wired.com/gallery/best-indoor-air-quality-monito...

floppyd

> However, the reviewer's logic is difficult to follow when you compare it across products:

> - Our monitor: Downgraded due to a faulty display (a warranty-covered hardware issue).

> - Another Monitor: Recommended, despite having no display at all.

> - Another Monitor: Also recommended, despite lacking a CO2 sensor—one of the most critical metrics for assessing indoor air quality and ventilation.

sjsdaiuasgdia

The problem with that "no display" example is that the monitor isn't failing at something it's trying to do, which AirGradient's was, from the reviewer's perspective.

It's not a failure that the one without a display doesn't have a display. It's a design choice. The AirGradient unit has a display, but it's tiny and hard to read. Scrolling through the article, all the other units with displays have much larger and more readable text. You can read the biggest data points from across a room. The AirGradient has a display, but it fails to be a good display, hence the reviewer's perspective - it's not living up to its goals.

bryant

This feels best summarized as:

• Product A has limited features but does them well. If the customer is okay with the features the product has, the reviewer can recommend it for this customer.

• Product B has more features but is impacted by QA issues as well as product design decisions that make those features harder to use. This impacts the customer's ability to use the features they paid for, and it may even impact features that other products treat as core. This potentially makes Product B less desirable for comparable use cases.

With this in mind, I'm inclined to agree with Wired's decision.

altf4-0

I have one of these. The unit has bright LEDs that display the levels at a glance. The key point is that you can actually program the LEDs to report the parameter you're most interested in, which is great. The LEDs are part of the "display" as well.

newsclues

I have an air gradient monitor.

There are three outputs: LEDs that go from green to yellow or red, the small display, and a webpage dashboard. Or you can plug the data into HA for whatever you want.

The only issue I have with the display is that it's monochrome, which prevents making trends easy to read by showing positive changes as green and negative ones as red.

If the display is too small, the LEDs are easily visible for quick information, and the dashboard is there for more data.

Reviewers often have trouble really understanding how people use products, often because rapidly churning through things to review doesn't allow them the time to truly use and understand a product.

jeroenhd

Looks like the AirGradient went up against a similar device with a huge colour screen that's easy to read. No wonder they preferred that, although I'm not sure if the $100 price difference is worth it for other people.

The AirGradient screen isn't even that small, but the UI could be much more user-friendly, IMO. There's a reason all the other meters with screens do HUGE NUMBER + tiny label.

I'm sure many people will prefer the AirGradient, but I don't think the reviewer is wrong for having different preferences.

Mashimo

But it also had an LED light to show the quality at a glance.

ahaucnx

Achim from AirGradient here. Good to see that my post has been submitted here. Happy to answer any questions you might have.

I spent quite a long time writing this post, and it actually helped me to see the bigger picture: how much can we actually trust tech reviews?

I am already getting very interesting results from the survey I posted and am planning to write a follow-up post.

edent

Ultimately, you shipped a broken product.

That points to a lack of QA on your part and, I think, it is fair for a reviewer to point out.

Even if you have an exemplary warranty process and easy instructions, that's still a hassle. Not everyone has the confidence or the time to repair simple things.

As for the objective/subjective nature of reviews. Are your customers buying air monitors for their 100% precision or for "entertainment" purposes / lifestyle factors?

I have a cheap Awair air monitor. I have no idea if it is accurate - but it charges by USB-C and has an inconspicuous display. That's what I wanted it for.

It is perfectly fair for a reviewer to point out their personal preferences on something like this. They aren't a government testing lab.

pwg

From your "pop out" in the article:

"is that this review is ... pretty much purely based on the personal preferences of the author."

You've found the core takeaway about nearly all "product reviews" in nearly all publications. They are almost all simply "the personal preferences of the author".

These authors have neither the time nor the science skills for anything even beginning to look like a rigorous scientific review, and so the "best" vs. "ok" vs. "not recommended" tags result because the author liked the particular shade of pink used on a trim piece on one, or liked that another one looks like the Apple computer they are using, and so forth.

But they are never based upon any objective criteria, and are never (nor ever were intended to be) reproducible in any scientific fashion.

Yet, as you say, they have "great power" to influence buying decisions on the part of folks who read their reviews.

bryant

> But they are never based upon any objective criteria, and are never (nor ever were intended to be) reproducible in any scientific fashion.

This is also why review aggregators exist: if I'm just getting into a thing, such as watching movies or buying appliances, I probably need a general sense of how people collectively feel about a thing. But if I'm keenly aware of my preferences, it helps me to find reviewers who align with how I think. People routinely seek out specific reviewers or specific publications for this reason.

For instance, someone reading this review might conclude "I really appreciate that ease of use is a topic that's front of mind with this reviewer." Another reviewer's focus might be customizability, and they might recommend AirGradient. And that reviewer's audience follows that person likely because those concerns are front of mind.

...to be honest, if AirGradient had responded more along those lines ("we prioritized X and Y. We understand if this isn't the best fit for customers looking for Z, but we're working on it"), it would've felt more empathetic and less combative to me.

jzellis

I think your concerns are legit, but it's not necessarily the reviewer's fault that they're on three deadlines and don't have the time to give your product the care and concern it deserves. It's probably the editor's or the publisher's.

I'm a world class writer but I stopped doing it for a living a long time ago. Why? Because as media moved from print to online, the work was devalued. I've worked for 25 cents a word sometimes, which was pretty decent when one 1200 word piece could pay rent back then. Nowadays, writers are offered $25 per article flat with no compensation for rewrites. Staff positions pay badly for too much work but are as coveted as C suite gigs are in the tech world. Maybe more so.

So if the reviewer is staff, they might be assigned three or four reviews in a given week on top of other work. If they're freelance, they might have to take on more just to make their rent. This is because your average magazine staffer who's not management pulls about as much as a Starbucks manager, and was ever thus, unless you got in at Vanity Fair or The Atlantic back in the Before Times.

It's like when I was reviewing albums for $50 a pop: I'd get a stack of them to review and cue up track one and if I didn't get hooked pretty quick, I'd just pop in the next one.

Your device arrived damaged, which is absolutely no one's fault, but your reviewer doesn't have the time or, honestly, the incentive to give it a second chance. Not for whatever they're getting paid for that review, which is not much at all.

It's just bad luck, is all. And yes, it's not fair and, yes, you're right to complain, but it's not as simple as "tech writer lazy".

(And if anyone's response is "They accepted the job, they should do their best at it no matter how little it pays", I'm guessing you've never had to duck your landlord to try not to get evicted before the freelance check you've been hunting up for three weeks arrives. There's a reason I'd rather make a living as a mediocre coder than a very good writer these days - at its worst, the tech industry is more remunerative and stable than the publishing industry is.)

tobr

> I'm a world class writer

This is awkward, but I think you mean “I'm a world-class writer”.

handoflixue

Be kind. Don't be snarky.

mind-blight

Super smart move. I hadn't heard of you folks before, but I'm interested in your product - open source and repairability are high on my list for home monitors. I'm lying in bed awake right now due to an air quality issue, so it's top of mind.

The only thing you're missing for me is radon detection. I just bought a house and tests came in below remediation levels, but the report showed a lot of spikes and variance. Do you have any plans for a model with radon detection in the future?

philipwhiuk

> Another Monitor: Recommended, despite having no display at all

Isn't this an outdoor one? Outdoor ones aren't expected to have a display, because you want to check them without going outdoors.

This seems reasonable.

ahaucnx

No, this is actually the PurpleAir indoor monitor, which has no display.

no_wizard

The fact you didn't include a Tired/Wired meme feels like a missed opportunity.

Havoc

Rotten luck, but at the same time, reviewers review the device in front of them. They can't really throw that experience out on the basis of some assumption that it's not representative.

nati0n

To anyone else reading, I would recommend it. AirGradient is the only real contender that checks all the boxes for an air monitor. Love the way they spun this narrative.

dgllghr

I agree. I love my AirGradient monitor. I’ve had it for years and it still works wonderfully.

burnt-resistor

Wired sold out to Condé Nast long ago. They're the tired ones.

This sounds like something Louis Rossmann should cover as a counter-example of manufacturers trying to do the right thing while fickle, corporate reviewers behave in a petty, unfair manner.

luma

The reviewer bought the product and received a broken unit. Why is it unfair to write about their actual experience with the product? Sure, warranty exists but none of the other products being tested needed a warranty cycle.

AirGradient (and several commenters here) feels like they're trying to spin their own QC problems as an indictment of modern journalism.

derbOac

What have we learned about AirGradient improving their product in response to the review?

That they "immediately sent replacement parts and a new unit, including repair instructions, as repairability is one of our core differentiators".

That's good; I genuinely respect that. But are there going to be improvements in the QC protocol? Consideration of a bigger display? Apparently not, or at least there's no mention of this.

Instead, they launch into a distracting and unproductive discussion about reviews in general, missing the entire point of the review's critiques and an opportunity to make a better product, or at least to clarify why they don't see a need for better QC or don't think a bigger display would be a good idea.

ahaucnx

Agree.

I actually tried to reach out to Louis Rossmann a few times but haven’t got a response (yet).

I think what's most interesting is that we figured out a business model based on open source hardware that's sustainable, and thus a win-win for the manufacturer and the customer.

Repairability was actually a feature we designed into the product from the start.

newsclues

Oh, you totally need to send him a DIY kit to assemble for a video!

justusthane

Not exactly on topic, but does anyone else feel that the bolded key phrases actually make it harder to read? I find my eyes jumping between them without absorbing the rest of the text.

jeroenhd

It's marketing, they need you to remember their key phrases more than they need you to read their full rebuttal.

They don't seem to be as interested in the fact that their outdoor monitor was the recommended outdoor solution, either.

philipwhiuk

> I find my eyes jumping between them without absorbing the rest of the text.

That's the idea. The caveats they don't want you to remember are left unbolded.

holografix

I wouldn't worry too much, tbh, if I were AirGradient. I don't think anyone trusts Wired for serious tech reviews, and the target audience would veer towards the plug-and-play crowd anyway.

My AirGradient monitor has been online for years, sending data to Prometheus reliably. I've been able to plot the air quality across a few climate events and the introduction of a Samsung air filter in my bedroom. It's a good little product.
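For anyone wanting to try a similar setup, here is a minimal sketch of the scraping side: poll the monitor's local JSON endpoint and render the readings in Prometheus's text exposition format. The URL and field names below are hypothetical placeholders - check your firmware's local API documentation for the real ones.

```python
import json
from urllib.request import urlopen

# Hypothetical local endpoint - substitute your monitor's actual address/path.
AIRGRADIENT_URL = "http://airgradient.local/measures/current"

def to_prometheus(readings: dict) -> str:
    """Render numeric sensor readings as Prometheus exposition-format lines.

    Non-numeric fields (serial numbers, model names, etc.) are skipped,
    since Prometheus metrics must be numbers.
    """
    lines = []
    for name, value in readings.items():
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            lines.append(f"airgradient_{name} {value}")
    return "\n".join(lines) + "\n"

def scrape() -> str:
    """Fetch the monitor's current readings and convert them for Prometheus."""
    with urlopen(AIRGRADIENT_URL, timeout=5) as resp:
        return to_prometheus(json.load(resp))
```

Serving the output of `scrape()` over HTTP (e.g. via a tiny WSGI handler) gives Prometheus a target it can poll on its normal scrape interval.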

rjkingan

I own several AirGradient monitors and have used other brands in the past. As far as I am concerned AirGradient is clearly superior, not only for ease of use, repairability and their open source approach, but also because of their tremendous enthusiasm for getting accurate data and being totally transparent about the strengths and weaknesses of the technology.

captainreynolds

I have one of these AirGradient indoor units. I also have a dedicated RadonEye Bluetooth device and an Airthings Wave Pro.

The OLED display is nice, but I rarely care in real time what the exact metrics are. I have them stored as time-series stats so I can see trends over time, exactly like I do for metrics of production systems in SRE life.

The unit also has a series of LEDs across the top, and I can read the actual status from 20' away (which is as far as I can get without going out a window or around a corner).

One green LED? Good. Two green LEDs? Meh. Three LEDs? They're red now, and that's not great.

Single red LED in the top left in addition to any on the right? Spectrum is having yet another outage (no internet).

No LEDs? Not powered on.

The reviewer was overly severe and did his readers a disservice.

It's better, IMHO, than my Airthings Wave Pro, and it lets me get awesome, actionable time-series data. It's sensitive enough to show air quality taking a dive overnight with two people and multiple pets breathing in the same room (one green during the day, three or four red at night), and also to show that adding a half dozen spider plants keeps CO2 well in check (consistently one green LED).

And I can read the air quality from across the room without getting out of bed.