How three years at McKinsey shaped my second startup
158 comments · May 4, 2025
burningChrome
>> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human.
Real world evidence supporting your argument:
United Health Group is currently embroiled in a class action lawsuit pertaining to using AI to auto-deny health care claims and procedures:
The plaintiffs are members who were denied benefit coverage. They claim in the lawsuit that the use of AI to evaluate claims for post-acute care resulted in denials, which in turn led to worsening health for the patients and in some cases resulted in death.
They said the AI program developed by UnitedHealth subsidiary naviHealth, nH Predict, would sometimes supersede physician judgement, and has a 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
https://www.healthcarefinancenews.com/news/class-action-laws...
blitzar
> has a 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
feature, not bug
thenewwazoo
> 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
This is a fantastic illustration of selection bias. It stands to reason that truly unjustified denials (some hidden variable) would be appealed at a higher rate, and therefore the true value is something less than 90%.
That's not to say UHG are without blame, I just thought this was really interesting.
tkluck
Your scientific take is useful in the case where selection bias is unavoidable and needs to be corrected for.
This case is not like that; if the insurance agency wants to dispute the 90% false denial rate, it would be trivial for them to take a random sample of _all_ cases, go through the appeal process for those, and publish the resulting number without selection bias.
As long as that doesn't happen, the most logical conclusion for us outside observers is: the number is probably not so much lower than 90% that it makes a difference.
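To make both points concrete, here's a toy simulation (all rates invented for illustration, not taken from the lawsuit): wrongly denied patients appeal more often, which inflates the overturn rate among appeals, while a uniform random audit of all denials estimates the true error rate without that bias.

```python
import random

random.seed(0)

# Invented rates for illustration -- none of these come from the lawsuit.
N_DENIALS = 100_000
TRUE_ERROR_RATE = 0.60    # fraction of all denials that are actually wrong
P_APPEAL_IF_WRONG = 0.50  # wrongly denied patients appeal more often...
P_APPEAL_IF_RIGHT = 0.05  # ...than correctly denied ones (selection bias)

denials = [random.random() < TRUE_ERROR_RATE for _ in range(N_DENIALS)]

# Observed process: only some denials get appealed, non-uniformly.
appealed = overturned = 0
for is_wrong in denials:
    if random.random() < (P_APPEAL_IF_WRONG if is_wrong else P_APPEAL_IF_RIGHT):
        appealed += 1
        overturned += is_wrong  # assume the appeal itself decides correctly

# Unbiased alternative: appeal a uniform random sample of ALL denials,
# regardless of whether anyone complained.
audit = random.sample(denials, 1_000)

print(f"true error rate:             {TRUE_ERROR_RATE:.0%}")
print(f"overturn rate among appeals: {overturned / appealed:.0%}")  # inflated
print(f"random-audit estimate:       {sum(audit) / len(audit):.0%}")
```

With these made-up rates, the appeal-only overturn rate lands around 94% even though the true error rate is 60%; the random audit recovers 60%.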
tbrownaw
Seems to me that the use of AI is irrelevant[1], and the real problem is the absurd error rate.
[1] In the sense of "it doesn't matter if it caused the problem", rather than "it probably didn't have any effect". Because after all, "to err is human, but to really foul things up takes a computer".
sergius
Not to mention the creation of a single point of failure for a critical service...
polynomial
Right, but in this case the critical service isn't providing "health" for users, it's extracting profit from them (from the transactions) for the shareholders. THAT'S the critical service this company cybernetically fulfills.
siliconc0w
AI adjudication of healthcare claims is fine, but there need to be extremely steep consequences for false negatives and a truly independent board of medical experts to appeal to. If a large panel agrees the denial was wrong, a penalty of 10-100x the cost of the procedure would be assessed, depending on the consequences of the denial.
throw10920
Yes, I agree. My point was contingent on the current state of affairs - until we can change that, AI remains a terrible idea.
vkou
Or, and here's the wild thing, put all these parasites, leeches, and other useless middle-men out of a job and just go single-payer.
reassess_blind
No one is going to accept a claim rejection from AI. Everyone will want to dispute, which will have to go to a human to review. At the end of the day I don’t see how 100 people is realistic.
bcyn
This reaction is primarily an emotional one. Why is a human rejecting a claim better than an AI rejecting a claim? Presumably the AI will one day -- if not today -- be more accurate in following decisioning logic than humans, who will continue to make human errors.
aianus
If that were true then they would also dispute every first-line human review. I don't think the average first-line human customer service rep is any better than AI even today.
mring33621
Fine! You win!
We'll send the appeals through Mechanical Turk.
Happy now?
DiggyJohnson
I don't think there's an ethical responsibility to worry about your competitor's labor. That would lead to stagnation and its own sort of ethical issues.
klank
I don't think it's as easy as hand-waving it away as "your competitor's labor". Your competitor's labor is your community; it's people. I believe we all have an ethical responsibility to that.
As for the points you brought up: why is stagnation for the purpose of upholding an ethical position a bad thing?
And yes, by definition, worrying about ethical responsibility would lead to ethical issues. That's the whole point.
vasco
So should we all be farming and collecting berries? Most advancements since then have put people out of jobs at "competitors" that didn't adapt. Still, the unemployment rate isn't 99.9%. Yet we've displaced whole industries many times over the centuries. Obviously people move to better jobs and find other things to do. There's nothing particularly good about sitting at a computer denying people insurance all day, so why not have a computer do it?
John23832
You misread. The poster is speaking about the ethical handling of customer service.
bee_rider
This comment is replying to the sentiment:
> Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, […]
I’m pretty sure. Although, the original comment was basically putting that issue aside, so I’m not sure what there is to say about it.
aprilthird2021
The whole ugly turn of AI hypemen claiming it's somehow morally okay for everyone to lose their jobs all at once makes me think the Luddites were right all along.
Y_Y
Can we imagine a world where the claims are adjudicated by a disinterested party (as far as possible)? I don't want the insurance company to decide a contractual issue; that's ridiculous. At the moment they're kept honest by the law and by public opinion (which varies by country), but the principal-agent problem is too big to ignore.
sokoloff
Life insurance claims seem fairly unambiguous to adjudicate.
lostlogin
I agree. And then I recall my last few interactions with insurance companies.
Dealing with a machine is unlikely to be worse.
alabastervlog
My knee-jerk reaction is to think that the prospect of an insurance company handing support over to machines is a terrible development.
But it was already the case that they just arbitrarily do WTF ever they want, that outside a small set of actions that "bots" can perhaps handle fine, they aren't going to do anything for you, and that the only way to get actual support for a real problem involves something being sent from a .gov email address or on frightening letterhead.
So... not really any different? You already basically have to threaten them (well, have someone scarier than you threaten them) to get any real support, this wouldn't be different.
Muromec
>the only way to get actual support for a real problem involves something being sent from a .gov email address or on frightening letterhead.
And then they will add a low-cost arbitration clause, where disputes are also handled by AI. Free market goes brrr.
throw10920
My last few interactions with an insurance company were moderately annoying but far from terrible - I would absolutely loathe having those replaced by a machine, given the terrible quality of every AI "assistant" I've ever used.
kurthr
Similarly, I was just forced to talk to an insurance company, and the only way I got any response was by talking to a human. The more robotic they are, the less likely they are to work around known issues and get us to a satisfactory solution (e.g. don't overcharge me and then do nothing about it).
RankingMember
While a human interaction can be awful, there's a special hellishness to trying to negotiate with a robot to get something related to your healthcare taken care of.
RealityVoid
It seems apparent to me that there needs to be some way to arbitrate claims outside the insurer itself. I'm... not sure that there is. But if there were, and there existed some sort of sanction or incentive for the insurer to get it right the first time, I'm confident that AI insurance companies could streamline the process. But you need this incentive mechanism, or else it's a recipe for dystopia. (The deeper thought is that you would shift a lot of work to the arbiter, but I won't touch that for now.)
guywithahat
I don't see it as inherently a problem; AI can (theoretically) be a lot more fair in dealing with claims, and responds a lot sooner.
That said, I suspect the founder is seriously overestimating the number of highly intelligent, competent people he can hire, and underestimating how much bureaucratic nonsense comes with insurance, but that's a problem he'll run into down the road. Sometimes you have to hire three people at mediocre salaries because the sort of highly motivated, competent person you want can't be found for the role.
alpha_squared
> AI can (theoretically) be a lot more fair in dealing with claims
Respectfully, no it can't. From a Western perspective, specifically American, and from an average middle-class person's perspective, specifically American, it only appears to be fair.
However, LLMs are a codification of internet and written content, largely by English speakers for English speakers. There are <400m people in the US and ~8b in the world. The bias tilt is insane. At the margins, weird things happen that you would be otherwise oblivious to unless you yourself come from the margins.
throw10920
I don't even think most Americans (except those trying to do the automating) would consider it to be fair.
AI is bias automation, and reflects the data it's trained on. The vast majority of training data is biased, even against different slices of Americans. The resulting AI will be biased.
bcyn
> LLMs are a codification of internet and written content
Only true for pre-trained foundational models without any domain-specific augmentations. A good AI tool in this space would be fine-tuned or have other mechanisms that overshadow the pre-training from internet content.
Muromec
On the other hand, once the claim is mishandled by AI, one can use the normal process to discover the juiced prompt and all the paper trail that comes with implementing it.
nradov
Nope. Claims adjudication LLMs aren't trained on random Internet content. If you're going to criticize then at least get your basic facts right.
QuercusMax
You're incredibly naive if you think AI will be used to pay out claims more fairly instead of being used as a deny-bot.
simianwords
Why? There are many tasks where AI beats humans. Humans are also prone to bias and fatigue etc.
Although I would still agree that there would need to be a mechanism for escalation to a human.
robertlagrant
You can use people to deny as well. Or non-AI automation; just some business rules in a normal system.
SoftTalker
This is life insurance specifically. It's not very hard to prove someone is dead, is there really much room for argument over paying out the policy benefit?
crazygringo
> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI. That, to me, is deeply disturbing, and very very difficult to justify.
I don't know. Given the human beings I've interacted with in customer support, and the number of times I've had to escalate because they were quite simply "intelligence-challenged" who couldn't even understand my issues, I'm not sure this is a bad thing.
In my limited experience with AI agents, they've been far more helpful and far faster, they actually seem to understand the issue immediately, and then either give me the solution (i.e. the obscure fact I needed in a support PDF that no regular rep would probably ever have known) or escalate me immediately to the actual right person who can help.
And regular humans will stonewall you anyway, if that's corporate policy. And then you go to the courts.
ben_w
While I get the vibes, and have had experience of human customer support being very weird on a few occasions, replacing mediocre humans with mediocre AI isn't a win for customers getting actual solutions.
And right now, the LLMs aren't really that smart; they're making up for low intelligence by being superhumanly fast and able to hold a lot of context at once. While this is better than every response coming from a randomly selected customer support agent (as I've experienced), one who doesn't even bother reading their own previous replies when the randomiser puts the same person in the chain more than once, it's not great.
LLM customer support can seem like a customer win to start with, when the AI is friendlier etc., but either the AI is just being more polite about the fixed corporate policy, or the LLM is making stuff up when it talks to you.
throw10920
I think there's an interesting implication here: that the actually good (for the customer) support experience is a real human who has access to a RAG where they can look up company documents/policies/procedures, but still be able to use their human brain to make judgement calls (and, of course, they have to be willing to, y'know, read the notes left by the previous rep).
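As a rough sketch of what that human-plus-retrieval setup could look like (toy policy snippets and TF-IDF standing in for a real embedding index; none of this is from the article):

```python
# A toy retrieval layer for a human support rep: the rep types the
# customer's question and gets back the most relevant internal policy
# snippets, then applies their own judgement to the answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policies = [  # stand-ins for real company documents
    "Claims for post-acute care require a physician's discharge summary.",
    "Premium refunds are issued within 30 days of policy cancellation.",
    "Beneficiary changes must be notarized if the policy exceeds $500k.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(policies)

def lookup(question: str, k: int = 2) -> list[str]:
    """Return the k policy snippets most similar to the rep's question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = sorted(range(len(policies)), key=lambda i: scores[i], reverse=True)
    return [policies[i] for i in ranked[:k]]

for snippet in lookup("customer cancelled the policy and wants a premium refund"):
    print("-", snippet)
```

The point being that retrieval narrows the search; the judgement call stays with the human.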
crazygringo
> replacing mediocre humans with mediocre AI isn't a win for customers getting actual solutions.
No it's not, but that's not what I described. I described replacing mediocre humans with better AI for at least the first level of customer service.
johnobrien1010
Wanted to point to the startup the author seems to be running, which is to sell insurance somehow tied to Bitcoin: https://meanwhile.bm/
For the record, that strikes me as seriously improper. Life insurance is a heavily regulated offering intended to provide security to families. It is the opposite of bitcoin, which is a highly speculative investment asset. Those two things should not be mixed.
Also, the fact that the disclosure limits sales to Bermuda seems intentional. I suspect that this product would be highly illegal in most if not all US states, so they must offer it only in Bermuda to avoid that issue.
verall
I think it's actually tax avoidance disguised as life insurance:
> You can borrow Bitcoin against your policy, and the borrowed BTC adopts a cost basis at the time of the loan. So if BTC were to 10x after you fund your policy, you could borrow a Bitcoin from Meanwhile at the 10x higher cost basis—meaning you could sell that BTC immediately and not owe any capital gains tax on that 10x of appreciation
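If I'm reading that right, the arithmetic of the step-up (with invented round numbers; this is my reading of the quoted claim, not tax advice) is:

```python
# Toy numbers for the basis step-up described above -- all invented.
basis_at_funding = 10_000   # you fund the policy when BTC is $10k
price_now = 100_000         # BTC has since 10x'd

# Normal sale: the capital gain is price minus your original basis.
gain_if_sold_directly = price_now - basis_at_funding  # $90,000 taxable

# Borrow-and-sell route: the borrowed BTC takes its basis at loan time,
# so selling it immediately realizes no gain.
basis_of_borrowed_btc = price_now
gain_if_borrowed_then_sold = price_now - basis_of_borrowed_btc  # $0 taxable

print(gain_if_sold_directly, gain_if_borrowed_then_sold)
```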
n_ary
Did I read that right? Sam Altman is funding this? If true, I'm gaining some new perspective on him.
pinkmuffinere
I assume from your comment that you haven’t heard of WorldCoin, also funded by Altman.
Fomite
"I wanted to derisk my resume by working somewhere with high signaling."
You can take the founder out of a consultancy, but you can't take the consultancy out of the founder.
comrade1234
My wife made a McKinsey consultant cry… she hired McKinsey for some internal project. One person on the project was a recent Harvard grad. They were in a meeting going over the deliverables along with the McKinsey partner on the project and in the meeting my wife said something to the effect that their work wasn’t up to McKinsey standards.
The junior guy started crying in the meeting. Like just blubbering. My wife still feels bad for it but still…
Weird thing, instead of firing him McKinsey kept him and stipulated that he can only be in meetings when the partner is present.
SJC_Hacker
I don't care if you went to an Ivy League school and graduated at the top of your class, I really don't get WTF someone whose life experience has been almost exclusively in school really knows about running a business.
Get at least a few years work experience and call me. Or alternatively, start your own dang business if you are really that smart.
guywithahat
The whole business is nonsensical. The point of a consultant is they have a lot of experience in a specific domain, a recent Harvard grad is useless. From what I've heard, tons of their consultants are young people with minimal real industry experience
Raidion
You pay for one or two people with real experience and 4 reasonably new hires whose job it is to answer questions posed by the senior team and to build documentation.
You want the senior people focusing on the problems, strategy, and comms and not data aggregation and power point formatting.
Half the time it doesn't actually matter who the consultant is, the business is just looking for an arbiter to provide a second opinion or justify a decision.
staticcaucasian
The Partner is the consultant. The 'recent grad' is just extra low-cost apprenticeship for the partner. The customer is (ridiculously over-) paying for the Partner's time and tolerating the apprentices that come along for the ride.
throw4847285
They have a brutal up or out culture. The idea is that the recent grads are grunts who are ground into the dirt. They actively hope that most of them quit and the few who don't get promoted into positions where they do have the experience, or really, one very specific type of experience that consultancy firms select for. Similar setup to Big Law.
matwood
> The point of a consultant is they have a lot of experience in a specific domain
This is your mistake. The point of a consultant is to tell the business to do what the business was already planning to do anyway. That way the consultant takes the risk/blame of the decision. It's similar to the classic 'no one was ever fired for buying IBM': 'I did what the McKinsey consultant told me' is CYA. The last piece is that since everyone is in on the game, when a decision leads to bad outcomes they don't blame the consultant, but something they could not have foreseen.
fatbird
There's a cycle to one's relationship with McKinsey and the big accounting firms. You start with a lot of attention from the partner, who over time shifts you to more experienced assistants, who over time shift you to new hires. Then you scream at them about the shit quality of their work, and you get the partner's attention again for a year or two.
crazygringo
You don't seem to understand how consulting works.
The person making the recommendations isn't just out of school. They've been at the firm for years, and do have a ton of experience.
The recent grads are there for all of the grunt work -- collecting massive amounts of data and synthesizing it. You don't need years of business experience for that, but getting into a top college and writing lots of research papers in college is actually the perfect background for that.
frankfrank13
Having worked at Mck, what I could very well imagine happened behind the scenes here was
1. This BA/Asc was on <4 hours of sleep, maybe many days in a row
2. They walked into that meeting thinking they had completed exactly what the client (your wife) wanted
And after the meeting (this I feel more confident about, as it happens a lot)
1. A conversation happened to see if the BA/Asc wanted to stay on the project
2. They said yes, and the leadership decided that the best way to make this person feel safe was to always have a more experienced person in the room to deal with hiccups (in this case, the perception of low quality work)
Isn't that... good? What else would you expect?
biker142541
Genuinely funny. Had to once interface with a small team from Deloitte on a project, and pushed hard during an early meeting for them to outline the problems and scope. Just complete incompetence... I didn't make anyone cry, but definitely squirm a lot. Just asking questions about their understanding, process to close gap of understanding, and project management plans were enough to make clear to the main executive stakeholder on our end that this was going to be a trainwreck. They were fired shortly after.
chollida1
> Weird thing, instead of firing him McKinsey kept him and stipulated that he can only be in meetings when the partner is present.
Why would they fire him after a single incident?
Sounds like McKinsey is a more compassionate organization than you, and that's saying something :)
yodsanklai
That's what I thought. Having the partner present seems like the right way of handling this. The company is responsible for its employees' well-being and shouldn't let a client bully them.
mannyv
Client bullying?
Saying the work sucks isn't bullying, unless you didn't know you were incompetent.
yodsanklai
McKinsey employees are people too
tiffanyh
> Meanwhile: to break into a highly-regulated, commoditized market like insurance, you need both a truly differentiated product that incumbents can't easily replicate and an associated distribution strategy that leverages their blind spots.
Having worked in highly regulated industries, I’ve learned that the best way to disrupt incumbents is by creating a product that assumes more business risk than is typically accepted. Large, regulated companies are extremely risk-averse—so if you can take on that risk in a smart, innovative way, you’ll win.
Alpha3031
What if you take that risk by putting "crypto" in it? I think it might work out for our founder here but I am not so optimistic about the results for any of the poor schmucks suckered into this scheme.
JohnMakin
This reads like a linkedin post and I'm only commenting because I'd like to hear more about the 2nd type of big-org problems he faced that he felt weren't fixable, and why - but instead got a pitch to his new startup, which I guess should've been expected from the title. Just hoped for more substance.
dzink
The engineer in me immediately looks for ways to map out how tax avoidance via crypto trading on life insurance funds through a Bermuda company can go wrong. Insurance has a nice long-term cash flow that has proved very sweet for Berkshire Hathaway, and investment on top of that gets perks for the insurer. But crypto, which has liquidity issues and is heavily scammed/stolen, would benefit far more than the users of the business. The holdings would stay for decades, allowing arbitrage of the main company with user investments. If there is a leak or a collapse of the crypto, the customers won't know until they can't get their funds back, and since AI is handling the claims, they may never even find out the real reason. And since it's life insurance, the buyer might never find out at all, while their descendants or loved ones may not know how to deal with it or be plenty confused by the lack of customer service. Very novel scheme.
whistle650
Looking at the home page of Meanwhile only made me think of how life insurance is such a different thing than, say, a mortgage. With life insurance, counterparty risk matters. You don't care about your mortgage counterparty. I'm not going to buy life insurance from an insurer with Youtube videos of Anthony Pompliano on their home page. Know your enemy.
mettamage
> I learned deeper truths about where startups can win and compete.
Now that I'm working at a big organization (a Fortune 500 company), I can relate. I'm by far the most innovative person on my team, and I'm being held back because it's not my role (I'm not a dev but a data analyst at the moment).
If I were doing my role, however, we wouldn't be innovating, and the C-suite wants us to innovate with AI. I'm the only one in my department who can create actual AI automations. And the IT department has basically been gutted by upper management.
If anyone wants an actual dev building AI automations and think how we can disrupt with the state of the art, my email is in my profile.
niemandhier
The insurance business is mostly about hoarding and investing money so you can actually pay when you have to.
Unless you can solve that part of the problem as well as the big players do, you will run into problems at some point; using extreme value theory you can even estimate when.
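You don't even need the full machinery to see the shape of the problem. A crude Monte Carlo with heavy-tailed claim sizes (every parameter below is invented for illustration; this is a stand-in sketch, not the extreme-value calculation itself) shows how a thin reserve interacts with rare, enormous claims:

```python
import numpy as np

rng = np.random.default_rng(42)

RESERVE = 5_000_000           # starting capital (invented)
ANNUAL_PREMIUMS = 10_000_000  # premiums collected per year (invented)
N_CLAIMS_PER_YEAR = 900
YEARS = 30
N_SIMULATIONS = 2_000

ruined = 0
for _ in range(N_SIMULATIONS):
    capital = RESERVE
    for _ in range(YEARS):
        # Pareto-tailed claim sizes: most claims are small, but the tail
        # is heavy enough that one bad year can wipe out the reserve.
        claims = (rng.pareto(1.1, N_CLAIMS_PER_YEAR) + 1) * 1_000
        capital += ANNUAL_PREMIUMS - claims.sum()
        if capital < 0:
            ruined += 1
            break

print(f"estimated P(ruin within {YEARS} years): {ruined / N_SIMULATIONS:.1%}")
```

Extreme value theory essentially replaces this brute-force tail sampling with a fitted tail distribution, which is what lets you put a date range on "at some point".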
phendrenad2
Valuable article, it's rare to see a glimpse into McKinsey in normal human language.
The fact that the company has become a sort of pseudo-VC (mentorship but not financing) for small teams within megacorps is interesting. I wonder why large corps find it so difficult to innovate. I think that they become somewhat "load-bearing" in society and the lines between the company and the market begin to blur. Any change the company makes causes a misalignment because they shaped the market to fit themselves.
nforgerit
Oh McKinsey had a name for that program ("Leap"). I once worked at a "Telco Enterprise Startup" in Berlin founded by them.
They essentially lied about any anticipated KPI potentials and let their "tech" people put together a 15k EUR/month (before public release) platform on AWS that was such a mess it made the second year's CTO start from scratch. After some heavy arguments about their poor performance, McKinsey agreed to let some "non-technical" people work there for a couple of months for free. Every argument you had with the McKinsey "Engineers" felt like talking to AWS Sales: they had barely any technical insight, but a catalog of "pre-made solutions" to choose from.
throw10920
The article is interesting on the whole (I have no experience with "professional" work, and would love suggestions on how to get more familiar with it), but I latched onto this nugget:
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management. We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably. We plan to do with 100 people what Allianz and others do with 100,000.
Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, I see one very concrete moral problem:
that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.
That, to me, is deeply disturbing, and very very difficult to justify.