Can we test it? Yes, we can [video]

cloogshicer

I think what people really mean when they say "This can't be tested" is:

"The cost of writing these tests outweighs the benefit", which often is a valid argument, especially if you have to do major refactors that make the system overall more difficult to understand.

I do not agree with test zealots that argue that a more testable system is always also easier to understand, my experience has been the opposite.

Of course there are cases where this is still worth the trade-off, but it requires careful consideration.

ChrisMarshallNY

This is the case.

I did a lot of work on hardware drivers and control software, and true testing would often require designing a mock that could cost a million, easy.

I've had issues, with "easy mocks"[0].

A good testing mock needs to be of at least the same Quality level as a shipping device.

[0] https://littlegreenviper.com/concrete-galoshes/#story_time

fleventynine

I've had a lot of success writing driver test cases against the hardware's RTL running in a simulation environment like verilator. Quick to set up and very accurate; the only downside is the time it takes to run.

And if you want to spend the time to write a faster "expensive mock" in software, you can run your tests in a "side-by-side" environment to fix any differences (including timing) between the implementations.
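The side-by-side idea can be sketched in a few lines of Python. This is a minimal differential-testing sketch, not the commenter's actual setup: `reference_model` stands in for the slow-but-trusted RTL simulation, `fast_mock` for the faster software mock, and the names are hypothetical.

```python
# Side-by-side (differential) testing: run the trusted reference and the
# fast mock on the same inputs and report any inputs where they diverge.

def reference_model(x: int) -> int:
    """Slow but trusted implementation (stand-in for the RTL simulation)."""
    return sum(range(x + 1))

def fast_mock(x: int) -> int:
    """Fast 'expensive mock' we want to validate against the reference."""
    return x * (x + 1) // 2

def side_by_side(inputs):
    """Return every input where the two implementations disagree."""
    return [x for x in inputs if reference_model(x) != fast_mock(x)]

print(side_by_side(range(100)))  # an empty list means the mock matches
```

In a real harness the "inputs" would be bus transactions or register reads/writes replayed into both environments, and timing would be compared as well as values.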

MoreQARespect

It's often shorthand for "this can't be unit tested" or "this isn't dependency injected", even though integration tests are perfectly capable of testing non-DI code.

The author's claims that we should isolate code under test better and rely more on snapshot testing are spot on.

rhizome31

Testing is a skill. The more you do it, the less expensive it becomes.

cloogshicer

The main cost isn't writing the tests themselves but the increased overall system complexity. And that never goes down.

j_w

My takeaway from this is that when you have a system or feature that "can't be tested" that you should try to isolate the "untestable" portions to increase what you can test.

The "untestable" portions of a code base often gobble up perfectly testable functionality, growing the problem. Write interfaces for those portions so you can mock them.
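A small sketch of that isolation pattern in Python, with all names invented for illustration: the hardware-facing class is the "untestable" portion, and the interface lets a fake stand in for it.

```python
# Isolating an "untestable" dependency behind a small interface so the
# surrounding logic can be tested with a mock.
from typing import Protocol

class Thermometer(Protocol):
    def read_celsius(self) -> float: ...

class HardwareThermometer:
    """The 'untestable' portion: talks to a physical device."""
    def read_celsius(self) -> float:
        raise NotImplementedError("requires real hardware")

class FakeThermometer:
    """Test double that satisfies the same interface."""
    def __init__(self, value: float) -> None:
        self.value = value
    def read_celsius(self) -> float:
        return self.value

def needs_cooling(sensor: Thermometer, limit: float = 80.0) -> bool:
    """Perfectly testable logic, once the device sits behind an interface."""
    return sensor.read_celsius() > limit

print(needs_cooling(FakeThermometer(95.0)))  # True
print(needs_cooling(FakeThermometer(20.0)))  # False
```

The point is that `needs_cooling` never knows whether it's talking to real hardware, so the "untestable" surface stays as thin as the interface itself.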

webdevver

I thought that's what customers were for?

diggan

Seems like blog spam, the actual content (presentation/talk) is at: https://www.youtube.com/watch?v=MqC3tudPH6w

aspenmayer

When this happens, how do you determine who gets the karma? Is it right and just and logical for OP to get karma for submitting a URL that HN readers didn't visit after being updated by mods, or for OP to get karma previously for a URL that was deemed lacking with regards to the guidelines? It seems like they should get one or the other, but not both.

Just some food for thought. I mention it because a user I've previously called out for using scripts submitted this before OP, and if precedent holds, they should get the karma, not OP. Mods have also commented on their use of scripts, yet somehow they haven't been banned, because dang has supposedly interacted with them/spoken with them, as if that could justify botting. But I digress.

To wit:

https://news.ycombinator.com/item?id=44449650

tomhow

It's an imperfect system and some human judgment about fairness applies.

In this case, the original URL submitted had the YouTube video prominently embedded, along with some commentary. It's no big deal to do that, as sometimes the commentary adds something. In this case nobody seems to think it does so I updated the URL to the primary source, but there's no need to penalize the submitter.

If the primary/best source for a topic has been submitted by multiple people, all being equal we'll promote the first-submitted one to the front page and mark later ones as dupes.

But things aren't always equal, and if the first submission was from a user who submits a lot and gets many big front page hits, we don't feel the need to just hand them more karma and will happily give the points to a less karma-rich user who submitted it later, especially if theirs is already on the front page.

sebastianmestre

They're fake internet points, it's no big deal

somewhereoutth

My feeling on testing:

- If it is used by a machine, then it can be tested by a machine.

- If it is used by a human, then it must be tested by a human.

__MatrixMan__

User acceptance testing is a good idea but doing it in cases where you can get away with cheaper testing is not.

alex_smart

But testing by humans is expensive

Anonbrit

and horribly unreliable even when done by competent and motivated humans, let alone most IT workers

karolinepauls

I've heard of good experiences using a 3rd party QA company (for frontend-heavy changes) and had pretty OK experiences doing it in-house with subject experts (though testing backend changes, so pretty much "either it works or it doesn't" in my case).

Retric

Expensive isn’t unnecessary.

Developer written tests can’t tell you if your UI is intuitive for novice users.

sfn42

They can however tell you whether the button/form/page/whatever is working and continues working.

sfn42

Tools like playwright allow pretty nice web UI testing. You can make sure things are working properly and continue working as things change.

Doesn't replace human testing but it does ease the human load and help catch problems and regressions before they get to the human testers.

Ygg2

> Tools like playwright allow pretty nice web UI testing.

Can they test for color blindness and myopia?

dylan604

“Doesn't replace human testing”

It’s like you stopped reading to try to score internet points or something. The answer to your question was one more sentence from where you stopped reading