Things we've learned about building products

fdlaks

Wow. 900 applications down to 10 "SuperDay" participants down to 4 hires. All to work at.... posthog. What a depressing statistic.

This felt like a humblebrag to help make their point about hiring good talent and how many people want to be a hogger (or whatever they call people who work there), but it really highlights how brutal the job market is. Yes, the market is also flooded with unqualified applicants and/or bots that will apply to any job listing that's posted, but still, this is ridiculous.

I really feel bad for the 6 people who had to endure the technical interview AND THEN were given the honor of attending the "SuperDay", which sounds like a full day of at least 5 interviews, 2-3 of them technical, and still got rejected. Not sure what the technical interview is like at posthog, but assuming it's just an hour-long phone screen, those 6 people each devoted more than 7 hours just to interviewing at this place, only to get rejected. That's not counting any time spent preparing for interviews or anything else either.

There must be a better way to do interviews. Posthog is not Google; Posthog (or any other startup) does not need to hire to the same standard that Google does.

Let me know when you're on par with Google in terms of revenue, benefits, prestige, or anything else Google offers; then, sure, I'll jump through as many hoops as you want for the interview. Until then, hard pass.

sibeliuss

Having attended a SuperDay, I can hands down state that their interview process is the best I've ever had (didn't get the job tho, which was probably for the best at this phase of life). Designed to perfectly lift signal and minimize noise, for what they're trying to achieve. Don't change a thing, PostHog.

fdlaks

I personally think there are more efficient ways to get a high signal-to-noise read on whether someone will be a good hire without having the candidate invest almost 9 hours in an interview process, but that's just me.

NotDEA

> Wow. 900 applications down to 10 "SuperDay" participants down to 4 hires.

You’re almost 10 times more likely to be accepted to Stanford’s undergraduate program than to ever work as a hogger

rbaudibert

SuperDay is a paid day of work, with a 30-minute talk with a founder plus a 30-minute review of the day with an engineer.

fdlaks

Ah ok, my mistake. So that's 8 hours including the review and discussion portions of the SuperDay; then let's say 45 minutes for the technical interview, so 8 hours and 45 minutes spent interviewing, at a minimum.

enraged_camel

>> Wow. 900 applications down to 10 "SuperDay" participants down to 4 hires. All to work at.... posthog. What a depressing statistic.

Yeah, at first I thought it was some kind of parody, then I realized it's a serious article and was astonished.

echelon

> “If you aren't excited about what you're working on, pivot. It's as simple as that. You'll achieve more if you're working on something that feels yours.”

I doubt the rank and file ICs feel this way at all. It's analytics plumbing, and it's all for the sake of the paycheck.

fdlaks

Ya, surprisingly I have yet to meet anyone who is passionate about analytics plumbing; I'm glad posthog has found the 4 people in the world who truly are.

What this really translates to is the founders saying “we think posthog is our golden ticket to becoming rich in an exit event someday, so don’t mess it up for us”. It’s just not politically correct to say that, so it’s expressed as being “passionate about the problems the company solves” or “working on something that feels yours”.

And if you’re not someone who wants to dance and clap along with the founders as they sing “I’ve got a golden ticket!” on the way to the chocolate factory, only to be left standing behind the gate as they enter, then ya go ahead and pivot because you’re killing the vibe here…

kevmo314

This list mentions A/B testing a few times and it's worth noting that A/B testing is great but it's not free.

I've seen a nontrivial number of smart engineers get so bogged down in wanting to A/B test everything that they spend more time building and maintaining the experiment framework than actually shipping product, only to realize the A/B testing was useless because they had just a few hundred data points. Data-driven decisions are definitely valuable, but you also have to recognize when you have no data to drive the decisions in the first place.

Overall, I agree with a lot of the list but I've seen that trap one too many times when people take the advice too superficially.

simonw

I think A/B testing is one of the most expensive ways of getting feedback on a product feature.

- You have to make good decisions about what you're going to test

- You have to build the feature twice

- You have to establish a statistically robust tracking mechanism. Using a vendor helps here, but you still need to correctly integrate with them.

- You have to test both versions of the feature AND the tracking and test selection mechanisms really well, because bugs in any of those invalidate the test

- You have to run it in production for several weeks (or you won't get statistically significant results) - and ensure it doesn't overlap with other tests in a way that could bias the results

- You'd better be good at statistics. I've seen plenty of A/B test results presented in ways that did not feel statistically sound to me.

... and after all of that, my experience is that a LOT of the tests you run don't show a statistically significant result one way or the other - so all of that effort really didn't teach you much that was useful.
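To make the "several weeks" point concrete, here's a rough back-of-envelope sample size calculation (a sketch in Python; all the numbers are invented):

    # Per-arm sample size for a two-proportion A/B test.
    # Invented scenario: 5% baseline conversion, trying to detect
    # a 10% relative lift (5.0% -> 5.5%), two-sided alpha = 0.05,
    # power = 0.8.
    z_alpha, z_beta = 1.96, 0.84   # standard normal quantiles
    p1, p2 = 0.050, 0.055
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    print(round(n))                # ~31,000 users per arm

At a couple of thousand eligible visitors a day, that's roughly a month of runtime before the result is readable at all, and halving the detectable lift quadruples the required sample.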

The problem is that talking people out of running an A/B test is really hard! No-one ever got fired for suggesting an A/B test - it feels like the "safe" option.

Want something much cheaper that yields a much higher level of information? Run usability tests. Recruit 3-5 testers, watch them use your new feature over screen sharing, and have them talk through what they're doing. This is an order of magnitude cheaper than A/B testing and will probably teach you a whole lot more.

light_triad

Some teams think they can A/B test their way to a great product. It can become a socially acceptable mechanism to avoid having opinions and reduce friction.

Steve Blank's quote about validating assumptions: "Lean was designed to inform the founders’ vision while they operated frugally at speed. It was not built as a focus group for consensus for those without deep convictions"

Is the Lean Startup Dead? (2018) https://medium.com/@sgblank/is-the-lean-startup-dead-71e0517...

Discussed on HN at the time: https://news.ycombinator.com/item?id=17917479

eddythompson80

Any sort of political/PR fallout in any organization can be greatly limited or eliminated if you just explain a change as an "experiment" rather than something deliberate.

"We were just running an experiment; we do lots of those. We'll stop that particular experiment. No harm no foul" is much more palatable than "We thought we'd make that change. We will revert it. Sorry about that".

With the former, people think: "Those guys are always experimenting with new stuff. With experimentation come hiccups, but experimentation is generally good"

With the latter, now people want to know more about your decision-making process. How and why was that decision made? What were the underlying reasons? What was your end goal with such a change? Do you actually have a plan, or are you just stumbling in the dark?

porridgeraisin

> You have to build the feature twice

Why though? Can't you have it dynamically look up whether the experiment is active for the current request and if so behave a certain way? And the place it looks up from can be updated however?

stetrain

But you have to implement and test both sides of that "if" statement, both behaviors. Thus "build the feature twice"
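A minimal sketch of what that looks like (Python; the flag helper and all names are invented):

    # Toy flag lookup, updated out of band (e.g. from a dashboard).
    ACTIVE_FLAGS = {"new-checkout": {"user_7", "user_42"}}

    def is_enabled(flag, user_id):
        return user_id in ACTIVE_FLAGS.get(flag, set())

    def checkout_page(user_id):
        # The lookup is one line, but both branches below are real
        # code paths that must each be built, tested, and maintained
        # until the experiment ends.
        if is_enabled("new-checkout", user_id):
            return "<new checkout>"   # variant B
        return "<old checkout>"       # variant A

The dynamic lookup is the cheap part; the two render paths are where the "build the feature twice" cost lives.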

bluGill

Most A/B tests should not be done in a production-like way. Grab some post-it notes and sketch out both A and B; then watch users work with them.

For a lot of what you want to know, the above will give better information than a 100% polished A/B test in production. When people see a polished product, they won't give you the same kind of feedback as they will for an obviously quick sketch. The UI industry has gone wrong by making A/B in production too easy, even though the above has been known for many decades.

(A/B in production isn't all bad, but it is the last step, and often not worth it)

marcosdumay

You're stretching the name a bit too far. The entire point of A/B tests is that you run them in the real world and gather how real users react.

Design-time user testing has been a thing for much longer than A/B tests. They are a different thing.

I mean, your point stands. But you can't do A/B tests on anything that is not production; the tests you're recommending are a different kind.

bluGill

I'll accept your definition correction. However, I think my point still stands: there are better things than A/B testing for getting the information you need.

nightpool

It probably helps that one of PostHog's core products is an A/B testing framework, so it's much easier for them to iterate on it internally to A/B test PostHog itself. Even when you already have a best-in-class A/B testing framework, though, I agree: A/B testing too much, or waiting too long for "more data" to make decisions, can badly slow down momentum on features that should be no-brainers.

pkaler

Agree!

Most orgs should just be shipping features. Before starting an experiment program, teams should brainstorm a portfolio of experiments. Just create a spreadsheet where the first column is a one-line hypothesis for each experiment, e.g. "Removing step X from the funnel will increase metric Y while reducing metric Z". Then RICE-score (Reach, Impact, Confidence, Effort) your portfolio.
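A spreadsheet is plenty; for illustration, the same scoring as a few lines of Python (hypotheses and numbers made up):

    # RICE score = reach * impact * confidence / effort (toy numbers).
    experiments = [
        # (hypothesis, reach/quarter, impact 0.25-3, confidence 0-1, effort in person-weeks)
        ("Remove step X from the funnel", 5000, 2.0, 0.8, 2.0),
        ("Reword CTA on pricing page",    8000, 0.5, 0.5, 1.0),
    ]
    for name, reach, impact, confidence, effort in experiments:
        print(f"{name}: {reach * impact * confidence / effort:,.0f}")

Rank by score, run the top few, and revisit the portfolio as readouts come in.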

If the team can't come up with a portfolio of 10s to 100s of experiments, then the team should just be shipping stuff.

And then experiment buildout should be standardized. Have a standardized XRD (Experiment Requirements Doc). Standardize eligibility and enrollment criteria. Which population sees this experiment? When do they see it? How do you test that bucketing is happening correctly? What events do analysts need? When do we do readouts?

That's just off the top of my head. Most orgs should just be shipping features.

abxyz

A/B testing is also high risk: a good test produces valuable data, a bad test produces harmful data. A company that does no testing is better off than a company that does bad testing. Many people treat product decisions made based on test results as unimpeachable, whereas they treat product decisions made on a hunch with a healthy skepticism that leads to better outcomes.

phillipcarter

I know this is not related to the article, which is great, but I am wondering how long "posthog" is going to be the name of this company given what "post hog" means.

somekyle2

I marvel at this every single time I see their billboards. It does mean I read all of their billboards, I guess.

drewbeck

I'm kind of dreading anywhere I work picking up the service, because of how often I'd have to say the name without laughing or making jokes about it.

Chyzwar

18. Instead of forcing PRs into day-of-work units, it's better to aim for the minimum testable increment. Some features just need more work before they can be tested. Forcing everything into tiny tickets makes planning tedious and often introduces bugs via half-finished features.

22. I've seen design systems fail at many companies. It's very hard to get the right people and budget for them to succeed. Most startups are better off picking an existing UI toolkit and doing some theming/tweaking.

27. I disagree. If you make the product manager a gatekeeper to users, you will turn the organization into a feature factory. Engineers should be engaged with users as much as possible.

scottishbee

27. I don't think you do disagree. Read point 29: hire and rely on product engineers. They have the full-stack technical skills needed to build a product, along with customer obsession. Yes, this means they need to talk to users, do user interviews, recruit testers for new features, collect feedback, do support, and respond to incidents.

intelVISA

> skills needed to build a product along with customer obsession

So a disempowered founder-lite? What's their incentive?

apsurd

These are ok. They're great for highlighting the surface area of product building. But the list is heavily biased toward an analytics-and-testing perspective, because PostHog's product is analytics and testing.

Capturing analytics is a no-brainer. However, most data in most products at most companies starting out just fundamentally does not matter. It's dangerous to get in the habit of "being data driven", because it becomes a false sense of security, and paradoxically data is extremely easy to bias. Even with more rigor, you too easily get trapped in local optima. Lastly, all things decay, but data and experimentation run as if a win lasts forever, until some other test beats it. It becomes exhausting to touch anything, and that's seen as a virtue. It's not.

Products need vision and taste.

the__alchemist

Thought on this one:

> Trust is also built with transparency. Work in public, have discussions in the open, and document what you’re working on. This gives everyone the context they need, and eliminates the political squabbles that plague many companies.

This seems prone to feedback loops; it can go both directions. If there are political squabbles, discussion may be driven private, to avoid it getting shut down or derailed by certain people.

MrLeap

I've seen this. It takes management vigilantly guarding the commons from excessive drive-bys and divebombs.

It takes a lot less energy to throw shit than to clean it up. There are infinite signals, and big egos take a lot of energy to repel and redirect in order to maintain it. I think it's absolutely worth it when it's possible, but yeah.

You wouldn't think so until you've done it, but it's really hard to get 6+ adults together where everyone's primary goal in that team is to make a thing. Seems like there's always one or more people who want to peck others, build fiefdoms, hold court.

cjs_ac

> Your product is downstream from your ideal customer profile (ICP).

Do not start with an idea. Start with a problem, and then construct a solution. The solution is the product. The problem implies someone who has that problem: this is the customer. How much of a problem it is tells you how much they want it and how much they will pay for it. Because an idea doesn't necessarily have a problem, it results in a product that doesn't necessarily have a customer.

> As 37signals' Jason Fried says, "You cannot validate an idea. It doesn't exist, you have to build the thing. The market will validate it."

Similarly, don't set out to change the world. Put your product into the world, and the world will decide whether to change as a consequence.

bilater

The problem with having such a specific, prescriptive formula for success is that it never actually works out that way. Sure, there are high-level principles, the PostHog team executes brilliantly, and I love the product, but I think we're really bad at connecting the dots on what actually made something successful. Instead, we assign credit to random things just to make it all make sense. A lot of times, it's the equivalent of saying, "Bill Gates eats this cereal for breakfast, so if I do that, I should become a billionaire too."

srameshc

I was always very passionate about programming and startups and small teams/companies, but I never even got to the first round because of my undergraduate degree. I think I would have tried hard given an opportunity and worked with a lot of discipline and passion. So now I have my own small team, and I try to give a chance to people who don't have the right background but are still willing to learn and are passionate about building stuff. It's probably not the author's idea, and he may be right in his approach since it's established, but I will test and see whether what I am trying works.

skyyler

In the first image in the article, what is a "SuperDay"?

Is this like a trial day where you're invited to do a day of work for free?

adamgordonbell

They pay you for it, but it is a trial work day.

Story time. I interviewed for a job at posthog. I knew that I really loved their communication style. But I hadn't used their product and didn't know a ton about them except that their writing is fantastic.

The 'product for engineers' focus that they have is cool, but when I had an interview, it was clear that I wasn't a 'product for engineers' person.

When they proposed the Super Day, I was like, I'm not sure, because it's awesome to get paid for a day, but it's also not an unstressful event. And I sort of said I had some more questions before we moved on to the Super Day.

And they basically just said: we don't think it's going to work out. It was actually a pretty positive experience. I think that they correctly assessed the situation pretty quickly and my hesitation was a real signal to them.

(This is my recollection - I could have the details wrong.)

But yeah, the Super Day is a day of very structured work in the role that they set up. And it's paid.

kayo_20211030

Getting paid is, at least, fair. Everyone has some skin in the game. Pretty impressive.

rapfaria

> And I sort of said I had some more questions before we moved on to the Super Day.

> And they basically just said: we don't think it's going to work out.

Ouch, so their tight-knit, no-shortcuts hiring process is only thorough for them, not for the engineer applying.

adamgordonbell

Perhaps, but it wasn't a bad experience. I've come to value hiring processes (and, I guess, employers) that know how to make decisions swiftly.

light_triad

> If you’re going to pivot, make it big.

This is a great point. I've seen teams apply lean startup by testing -> changing something -> testing -> changing something -> testing ...

The problem is that the changes are so small that statistically you end up testing the same thing over and over and expecting different results. You need to make a significant change in your Persona, Problem, Promise, or Product to (hopefully) see promising results.