
Auto-grading decade-old Hacker News discussions with hindsight

232 comments · December 10, 2025

Related from yesterday: Show HN: Gemini Pro 3 imagines the HN front page 10 years from now - https://news.ycombinator.com/item?id=46205632

popinman322

It doesn't look like the code anonymizes usernames when sending the thread for grading. This likely induces bias in the grades based on past/current prevailing opinions of certain users. It would be interesting to see the whole thing done again but this time randomly re-assigning usernames, to assess bias, and also with procedurally generated pseudonyms, to see whether the bias can be removed that way.

I'd expect de-biasing would deflate grades for well known users.

It might also be interesting to use a search-grounded model that provides citations for its grading claims. Gemini models have access to this via their API, for example.
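A minimal sketch of the proposed re-run, assuming the thread is available as (username, text) pairs; the function and handle format here are illustrative, not part of the project's actual code:

```python
import random

def pseudonymize(thread, seed=0):
    """Replace real usernames with stable procedural pseudonyms.

    thread: list of (username, text) pairs in display order.
    Each distinct username maps to one generated handle, used
    consistently across the whole thread, so reply structure
    is preserved while identity is hidden.
    """
    rng = random.Random(seed)
    mapping = {}
    anonymized = []
    for user, text in thread:
        if user not in mapping:
            handle = f"user_{rng.randrange(10_000)}"
            while handle in mapping.values():  # keep handles unique
                handle = f"user_{rng.randrange(10_000)}"
            mapping[user] = handle
        anonymized.append((mapping[user], text))
    return anonymized, mapping
```

Note that this only removes the literal handle; distinctive writing style can still leak the author's identity to a model, so pseudonymization bounds rather than eliminates the bias.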

ProllyInfamous

What a human-like criticism of human-like behavior.

I [as a human] also do the same thing when observing others IRL and in forum interactions. Reputation matters™

----

A further question is whether a bespoke username could bias the reading of a particular comment (e.g. a username like HatesPython might color the interpretation of that commenter's take on the Python language, even when the comment is actually expressing positivity; the username's irony would be lost on the AI).

khafra

You can't anonymize comments from well-known users to an LLM: https://gwern.net/doc/statistics/stylometry/truesight/index

WithinReason

That's an overly strong claim; an LLM could also be used to normalise style.

wetpaws

How would you possibly grade comments if you change them?

jasonthorsness

It's fun to read some of these historic comments! A while back I wrote a replay system to better capture how discussions evolved at the time of these historic threads. Here's Karpathy's list from his graded articles, in the replay visualizer:

Swift is Open Source https://hn.unlurker.com/replay?item=10669891

Launch of Figma, a collaborative interface design tool https://hn.unlurker.com/replay?item=10685407

Introducing OpenAI https://hn.unlurker.com/replay?item=10720176

The first person to hack the iPhone is building a self-driving car https://hn.unlurker.com/replay?item=10744206

SpaceX launch webcast: Orbcomm-2 Mission [video] https://hn.unlurker.com/replay?item=10774865

At Theranos, Many Strategies and Snags https://hn.unlurker.com/replay?item=10799261

arowthway

Comment dates on hn frontend are sometimes altered when submissions are merged, do you handle this case properly?

SauntSolaire

I'd love to see sentiment analysis done based on time of day. I'm sure it's largely time zone differences, but I see a large variance in the types of opinions posted to hn in the morning versus the evening and I'd be curious to see it quantified.

embedding-shape

Yeah, I see this constantly any time Europe is mentioned in a submission. Early European morning/day, regular discussions, but as the European afternoon/evening comes around, you start noticing a lot of anti-union sentiment, discussions start to shift toward over-regulation, and the typical boring anti-Europe/EU talking points.

nostrebored

“Regular” to whom? Pro-EU sentiment almost only comes from the EU, which is what you’re observing. Pro-US sentiment is relatively mixed in distribution (as is anti-US sentiment).

matsemann

I like the "past" functionality here, maybe wished there was one for week/month I could scroll back as well.

Miss it for Reddit as well. Top day/week/month/all-time makes it hard to find the top posts from, say, a month in 2018.

HanClinto

Okay, your site is a ton of fun. Thank you! :)

modeless

This is a cool idea. I would install a Chrome extension that shows a score by every username on this site grading how well their expressed opinions match what subsequently happened in reality, or the accuracy of any specific predictions they've made. Some people's opinions are closer to reality than others and it's not always correlated with upvotes.

An extension of this would be to grade people on the accuracy of the comments they upvote, and use that to weight their upvotes more in ranking. I would love to read a version of HN where the only upvotes that matter are from people who agree with opinions that turn out to be correct. Of course, only HN could implement this since upvotes are private.

cootsnuck

The RES (Reddit Enhancement Suite) browser extension indirectly does this for me since it tracks the lifetime number of upvotes I give other users. So when I stumble upon a thread with a user with like +40 I know "This is someone whom I've repeatedly found to have good takes" (depending on the context).

It's subjective of course but at least it's transparently so.

I just think it's neat that it's kinda sorta a loose proxy for what you're talking about but done in arguably the simplest way possible.

nickff

I am not a Redditor, but RES sounds like it would increase the ‘echo-chamber’ effect, rather than improving one’s understanding of contributors’ calibration.

baq

Echo chamber of rational, thoughtful and truthful speakers is what I’m looking for in Internet forums.

mistercheph

It depends on whether you vote based on the quality of the contribution to the discussion or on how much you agree/disagree.

modeless

Reddit's current structure very much produces an echo chamber with only one main prevailing view. If everyone used an extension like this I would expect it to increase overall diversity of opinion on the site, as things that conflict with the main echo chamber view could still thrive in their own communities rather than getting downvoted with the actual spam.

intended

Echo chambers will always result on social media. I don't think you can come up with a format that will not result in consolidated blocs.

PunchyHamster

More than having the exact same system but with any random reader voting? I'd say that as long as you don't do "I disagree, therefore I downvote", it would probably be more accurate than essentially the same voting system driven by randoms, which Reddit/HN already have.

janalsncm

That assumes your upvotes in the past were a good proxy for being correct today. You could have both been wrong.

emaro

I like the idea and would certainly try it, although I feel that in a way this would be an antithesis to HN. HN tries to foster curiosity, but if you're (only) ranked by the accuracy of your predictions, there's an incentive to always fall back to a safe and boring position.

potato3732842

>This is a cool idea. I would install a Chrome extension that shows a score by every username on this site grading how well their expressed opinions match what subsequently happened in reality, or the accuracy of any specific predictions they've made.

Why stop there?

If you can do that you can score them on all sorts of things. You could make a "this person has no moral convictions and says whatever makes the number go up" score. Or some other kind of score.

Stuff like this makes the community "smaller" in a way. Like back in the old days on forums and IRC you knew who the jerks were.

TrainedMonkey

I've long had a similar idea for stocks. Analyze posts of people giving stock tips on WSB, Twitter, etc., and rank them by accuracy. I would be very surprised if this had not been done a thousand times by various trading firms and enterprising individuals.

Of course in the above example of stocks there are clear predictions (HNWS will go up) and an oracle who resolves it (stock market). This seems to be a way harder problem for generic free form comments. Who resolves what prediction a particular comment has made and whether it actually happened?

mvkel

Out of curiosity, I built this. I extended karpathy's code and widened the date range to see what stocks these users would pick given their sentiments.

What came back were the usual suspects: GLP-1 companies and AI.

Back to the "boring but right" thesis. Not much alpha to be found

miki123211

> Analyze posts of people giving stock tips on WSB, Twitter, etc and rank by accuracy.

Didn't somebody once make an ETF that went against the predictions of some famous CNBC stock picker, showing that it would have given you alpha in the past?

> seems to be a way harder problem for generic free form comments.

That's what prediction markets are for. People for whom truth and accuracy matter (often concentrated around the rationalist community) will often very explicitly make annual lists of concrete and quantifiable predictions, and then self-grade on them later.

Karrot_Kream

I ran across Sybil [1] the other day which tries to offer a reputation score based on correct predictions in prediction markets.

[1]: https://sybilpredicttrust.info/

leobg

That’s what Elon’s vision was before he ended up buying Twitter. Keep a digital track record for journalists. He wanted to call it Pravda.

(And we do have that in real life. Just as, among friends, we do keep track of who is in whose debt, we also keep a mental map of whose voice we listen to. Old school journalism still had that, where people would be reading someone’s column over the course of decades. On the internet, we don’t have that, or we have it rarely.)

8organicbits

The problem seems underspecified; what does it mean for a comment to be accurate? It would seem that comments like "the sun will rise tomorrow" would rank highest, but they aren't surprising.

smeeger

Just because an idea is qualitative doesn't mean it's invalid.

prawn

Didn't Slashdot have something like the second point with their meta-moderation, many many years ago?

tptacek

'pcwalton, I'm coming for you. You're going down.

Kidding aside, the comments it picks out for us are a little random. For instance, this was an A+ predictive thread (it appears to be rating threads and not individual comments):

https://news.ycombinator.com/item?id=10703512

But there are just 11 comments, only 1 from me, and it's like a one-sentence comment.

I do love that my unaccredited-access-to-startup-shares take is on that leaderboard, though.

mvkel

Hilariously, it seems you anticipated this happening and copyrighted your comments. Is karpathy's tool in violation of your copyright?!

tptacek

Karpathy, I'm coming for you next.

kbenson

I noticed from reviewing my own entry (which honestly I'm surprised exists) that what it thinks constitutes a "prediction" is fairly open to interpretation, or at least that adding some nuance to a small aspect of someone else's prediction in a thread counts quite heavily. I don't really view how I've participated here over the years as making predictions in any way. I actually thought I had done a fairly good job of not making predictions, by design.

n4r9

Yeah, I'm having to pinch myself a little here. Another slightly odd example it picked out from your history: https://news.ycombinator.com/item?id=10735398

It's a good comment, but "prescient" isn't a word I'd apply to it. This is more like a list of solid takes. To be fair there probably aren't even that many explicit, correct predictions in one month of comments in 2015.

btbuildem

I've spent a weekend making something similar for my gmail account (which google keeps nagging me about being 90% full). It's fascinating to be able to classify 65k+ emails (surprise: more than half are garbage), as well as summarize and trace the nature of communication between specific senders/recipients. It took about 50 hours on a dual RTX 3090 running Qwen 3.

My original goal was to prune the account deleting all the useless things and keeping just the unique, personal, valuable communications -- but the other day, an insight has me convinced that the safer / smarter thing to do in the current landscape is the opposite: remove any personal, valuable, memorable items, and leave google (and whomever else is scraping these repositories) with useless flotsam of newsletters, updates, subscription receipts, etc.

Rperry2174

One thing this really highlights to me is how often the "boring" takes end up being the most accurate. The provocative, high-energy threads are usually the ones that age the worst.

If an LLM were acting as a kind of historian revisiting today’s debates with future context, I’d bet it would see the same pattern again and again: the sober, incremental claims quietly hold up, while the hyperconfident ones collapse.

Something like "Lithium-ion battery pack prices fall to $108/kWh" is classic cost-curve progress. Boring, steady, and historically extremely reliable over long horizons. Probably one of the most likely headlines today to age correctly, even if it gets little attention.

On the flip side, stuff like "New benchmark shows top LLMs struggle in real mental health care" feels like high-risk framing. Benchmarks rotate constantly, and “struggle” headlines almost always age badly as models jump whole generations.

I bet there are many "boring but right" takes we overlook today, and I wonder if there's a practical way to surface them before hindsight does.

yunwal

"Boring but right" generally means that this prediction is already priced in to our current understanding of the world though. Anyone can reliably predict "the sun will rise tomorrow", but I'm not giving them high marks for that.

onraglanroad

I'm giving them higher marks than the people who say it won't.

LLMs have seen huge improvements over the last 3 years. Are you going to make the bet that they will continue to make similarly huge improvements, taking them well past human ability, or do you think they'll plateau?

The former is the boring, linear prediction.

bryanrasmussen

>The former is the boring, linear prediction.

Right, because if there's one thing history shows us again and again, it's that things that have a period of huge improvements never plateau but instead continue improving to infinity.

Improvement to infinity, that is the sober and wise bet!

bigiain

LaunchHN: Announcing Twoday, our new YC backed startup coming out of stealth mode.

We’re launching a breakthrough platform that leverages frontier scale artificial intelligence to model, predict, and dynamically orchestrate solar luminance cycles, unlocking the world’s first synthetic second sunrise by Q2 2026. By combining physics informed multimodal models with real time atmospheric optimisation, we’re redefining what’s possible in climate scale AI and opening a new era of programmable daylight.

yunwal

> Are you going to make the bet that they will continue to make similarly huge improvements

Sure yeah why not

> taking them well past human ability,

At what? They're already better than me at reciting historical facts. You'd need some actual prediction here for me to give you "prescience".

Dylan16807

LLMs aren't getting better that fast. I think a linear prediction says they'd need quite a while to maybe get "well past human ability", and if you incorporate the increases in training difficulty the timescale stretches wide.

SubiculumCode

Perhaps a new category: 'highest-risk guess, but right the most often'. Those are the high-impact predictions.

arjie

Prediction markets have pretty much obviated the need for these things. Rather than rely on "was that really a hot take?" you have a market system that rewards those with accurate hot takes. The massive fees and lock-up period discourage low-return bets.

Gravityloss

something like correctness^2 x novel information content rank?

Gravityloss

Actually now thinking about it, incorrect information has negative value so the metric should probably reflect that.
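Combining the two comments above, a toy version of that metric might look like the following; both inputs would have to come from some judge (human or LLM), which is of course the hard part:

```python
def comment_score(correctness, novelty):
    """Toy 'correctness^2 x novel information' rank, signed for wrongness.

    correctness: -1.0 (confidently wrong) .. +1.0 (confidently right)
    novelty:      0.0 (already consensus) .. 1.0 (genuinely surprising)
    Squaring emphasizes strong calls, while multiplying by the sign
    keeps incorrect information at negative value.
    """
    sign = 1.0 if correctness >= 0 else -1.0
    return sign * (correctness ** 2) * novelty
```

Under this scheme a confidently wrong hot take scores worse than saying nothing, and a correct but obvious claim (novelty near zero) earns almost nothing.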

jimbokun

The one about LLMs and mental health is not a prediction but a current news report, the way you phrased it.

Also, the boring consistent progress case for AI plays out in the end of humans as viable economic agents requiring a complete reordering of our economic and political systems in the near future. So the “boring but right” prediction today is completely terrifying.

p-e-w

“Boring” predictions usually state that things will continue to work the way they do right now. Which is trivially correct, except in cases where it catastrophically isn’t.

So the correctness of boring predictions is unsurprising, but also quite useless, because predicting the future is precisely about predicting those events which don’t follow that pattern.

schoen

I predict that, in 2035, 1+1=2. I also predict that, in 2045, 2+2=4. I also predict that, in 2055, 3+3=6.

By 2065, we should be in possession of a proof that 0+0=0. Hopefully by the following year we will also be able to confirm that 0*0=0.

(All arithmetic here is over the natural numbers.)

0manrho

It's because algorithmic feeds based on "user engagement" reward antagonism. If your goal is to get eyes on content, being boring, predictable, and nuanced is a sure way to get lost in the ever-increasing noise.

johnfn

This suggests that the best way to grade predictions is some sort of weighting of how unlikely they were at the time. Like, if you were to open a prediction market for statement X, some sort of grade of the delta between your confidence of the event and the “expected” value, summed over all your predictions.
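One concrete form of that weighting is a log score relative to the consensus; a toy sketch, assuming you could recover a contemporaneous "market" probability for each claim:

```python
import math

def surprise_credit(your_prob, market_prob, outcome):
    """Log-score of a forecast relative to the consensus forecast.

    your_prob:   probability you gave the event at the time
    market_prob: the 'expected'/consensus probability at the time
    outcome:     True if the event happened
    Positive when you beat the crowd, zero when you merely echoed it,
    negative when the crowd was closer to reality than you were.
    """
    p = your_prob if outcome else 1.0 - your_prob
    q = market_prob if outcome else 1.0 - market_prob
    return math.log(p) - math.log(q)

def total_credit(predictions):
    """Sum the credit over (your_prob, market_prob, outcome) triples."""
    return sum(surprise_credit(p, q, o) for p, q, o in predictions)
```

Probabilities of exactly 0 or 1 blow up the log, so in practice you'd clip them; the scheme gives "the sun will rise tomorrow" essentially zero credit, which matches the objection above.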

jacquesm

Exactly, that's the element that is missing. If there are 50 comments against and one pro and that pro has it in the longer term then that is worth noticing, not when there are 50 comments pro and you were one of the 'pros'.

Going against the grain and turning out right is far more valuable than being right consistently when the crowd is with you already.

mcmoor

Yeah, a simple tally of total points of pro comments vs. total points of con comments may be simple and exact enough to simulate a prediction market. I don't know if it can be included in the prompt or whether it's better to vibecode it in directly.
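That pro-vs-con tally can be read as a crowd-implied probability; a sketch with Laplace smoothing so a one-sided thread doesn't pin the estimate to exactly 0 or 1:

```python
def implied_probability(pro_points, con_points, smoothing=1.0):
    """Crowd-implied probability of a claim from comment upvote totals.

    pro_points: summed points of comments agreeing with the claim
    con_points: summed points of comments disagreeing
    smoothing:  Laplace prior; keeps the estimate off exactly 0/1
                and returns 0.5 when there are no votes at all.
    """
    return (pro_points + smoothing) / (pro_points + con_points + 2.0 * smoothing)
```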

copperx

Is this why depressed people often end up making the best predictions?

In personal situations there's clearly a self fulfilling prophecy going on, but when it comes to the external world, the predictions come out pretty accurate.

simianparrot

Instead of "LLMs will put developers out of jobs", the boring reality is going to be "LLMs are a useful tool with limited use".

jimbokun

That is at odds with predicting based on recent rates of progress.

xpe

> One thing this really highlights to me is how often the "boring" takes end up being the most accurate.

Would the commenter above mind sharing the method behind their generalization? Many people would spot-check maybe five items -- which is enough for our brains to start guessing at potential patterns -- and stop there.

On HN, when I see a generalization, one of my mental checklist items is to ask "what is this generalization based on?" and "If I were to look at the problem with fresh eyes, what would I conclude?".

pierrec

"the distributed “trillions of Tamagotchi” vision never materialized"

I begrudgingly accept my poor grade.

hackthemack

I noticed the Hall of Fame grading of predictive comments has a quirk. It grades some comments on whether they came true or not, but consider the grading of a comment on the article

https://news.ycombinator.com/item?id=10654216

The Cannons on the B-29 Bomber "accurate account of LeMay stripping turrets and shifting to incendiary area bombing; matches mainstream history"

It gave a good grade to user cstross, but to my reading, cstross just recounted a bit of old history. Did the evaluation credit cstross merely for giving a history lesson, or no?

karpathy

Yes I noticed a few of these around. The LLM is a little too willing to give out grades for comments that were good/bad in a bit more general sense, even if they weren't making strong predictions specifically. Another thing I noticed is that the LLM has a very impressive recognition of the various usernames and who they belong to, and I think shows a little bit of a bias in its evaluations based on the identity of the person. I tuned the prompt a little bit based on some low-hanging fruit mistakes but I think one can most likely iterate it quite a bit further.

patcon

I think you were getting at this, but in case others didn't know: cstross is a famous sci-fi author and futurist :)

Tossrock

So where do I collect my prize for this 2015 comment? https://news.ycombinator.com/item?id=9882217

johncolanduoni

Never call a man happy until he is dead. Also I don’t think your argument generalizes well - there are plenty of private research investment bubbles that have popped and not reached their original peaks (e.g. VR).

Tossrock

It wasn't a generalized argument, though, it was a specific one, about AI.

johncolanduoni

Okay, but the only part that’s specific to AI (that the companies investing the money are capturing more value than they’re putting into it) is now false. Even the hyperscalers are not capturing nearly the value they’re investing, though they’re not using debt to finance it. OpenAI and Anthropic are of course blowing through cash like it’s going out of style, and if investor interest drops drastically they’ll likely need to look to get acquired.

LeroyRaz

I am surprised the author thought the project passed quality control. The LLM reviews seem mostly false.

Looking at the comment reviews on the actual website, the LLM seems to have mostly judged whether it agreed with the takes, not whether they came true, and it seems to have an incredibly poor grasp of its actual task of assessing whether the comments were predictive or not.

The LLM's comment reviews are often statements like "correctly characterized [programming language] as [opinion]."

This dynamic means the website mostly grades people on having the most conformist take (the take most likely to dominate the training data and to be selected for in the LLM RL tuning process of pleasing the average user).

LeroyRaz

Examples: tptacek gets an 'A' for his comment on DF, with the LLM claiming that the user "captured DF's unforgiving nature, where 'can't do x or it crashes is just another feature to learn' which remained true until it was fixed on ..."

Link to LLM review: https://karpathy.ai/hncapsule/2015-12-02/index.html#article-....

So the LLM is praising a comment for describing DF as unforgiving (a characterization of the then-present, not a statement about the future). And worse, it seems tptacek may in fact have been implying the opposite of what happened (i.e., that x would continue to crash, when it was eventually fixed).

Here is the original comment: " tptacek on Dec 2, 2015 | root | parent | next [–]

If you're not the kind of person who can take flaws like crashes or game-stopping frame-rate issues and work them into your gameplay, DF is not the game for you. It isn't a friendly game. It can take hours just to figure out how to do core game tasks. "Don't do this thing that crashes the game" is just another task to learn."

Note: I am paraphrasing the LLM review, as the website is also poorly designed; one cannot select the text of the LLM review!

N.b., this choice of comment review is not overly cherry-picked. I just scanned the "best commentators" list; tptacek was number two, with this particular, egregiously unrelated-to-prediction LLM summary given as justification for his #2 rating.

hathawsh

Are you sure? The third section of each review lists the “Most prescient” and “Most wrong” comments. That sounds exactly like what you're looking for. For example, on the "Kickstarter is Debt" article, here is the LLM's analysis of the most prescient comment. The analysis seems accurate and helpful to me.

https://karpathy.ai/hncapsule/2015-12-03/index.html#article-...

  phire

  > “Oculus might end up being the most successful product/company to be kickstarted… > Product wise, Pebble is the most successful so far… Right now they are up to major version 4 of their product. Long term, I don't think they will be more successful than Oculus.”

  With hindsight:

  Oculus became the backbone of Meta’s VR push, spawning the Rift/Quest series and a multi‑billion‑dollar strategic bet.
  Pebble, despite early success, was shut down and absorbed by Fitbit barely a year after this thread.

  That’s an excellent call on the relative trajectories of the two flagship Kickstarter hardware companies.

xpe

Until someone publishes a systematic quality assessment, we're grasping at anecdotes.

It is unfortunate that the questions of "how well did the LLM do?" and "how does 'grading' work in this app?" seem to have gone out the window when HN readers see something shiny.

voidhorse

Yes. And the article is a perfect example of the dangerous sort of automation bias that people will increasingly slide into when it comes to LLMs. I realize Karpathy is somewhat incentivized toward this bias given his career, but he doesn't spend even a single sentence suggesting that the results would need further inspection, or that they might be inaccurate.

The LLM is consulted like a perfect oracle, flawless in its ability to perform a task, and it's left at that. Its results are presented totally uncritically.

For this project, of course, the stakes are nil. But how long until this unfounded trust in LLMs works its way into high stakes problems? The reign of deterministic machines for the past few centuries has ingrained a trust in the reliability of machines in us that should be suspended when dealing with an inherently stochastic device like an LLM.

karmickoala

I get what you're saying, but looking at some examples, they look kind of right, yet there are a lot of misleading facts sprinkled in, making the grading wrong. It is useful, but I'd suggest being careful about using this to make decisions.

Some of the issues could be resolved with better prompting (it was biased to always interpret every comment through the lens of predictions) and LLM-as-a-judge, but still. For example, Anthropic's Deep Research prompts sub-agents to pass original quotes instead of paraphrasing, because it can deteriorate the original message.

Some examples:

  Swift is Open Source (2015)
  ===========================
sebastiank123 got a C-, and was quoted by the LLM as saying:

  > “It could become a serious Javascript competitor due to its elegant syntax, the type safety and speed.”
Now, let's read his full comment:

  > Great news! Coding in Swift is fantastic and I would love to see it coming to more platforms, maybe even on servers. It could become a serious Javascript competitor due to its elegant syntax, the type safety and speed.
I don't interpret it as a prediction, but a desire. The user is praising Swift. If it went the server way, perhaps it could replace JS, to the user's delight. To make it even clearer: if someone had asked the commenter right after, "Is that a prediction? Are you saying Swift is going to become a serious Javascript competitor?", I don't think the answer would have been 'yes' in this context.

  How to be like Steve Ballmer (2015)
  ===================================
  
  Most wrong
  ----------
  
  >     corford (grade: D) (defending Ballmer’s iPhone prediction):
  >         Cited an IDC snapshot (Android 79%, iOS 14%) and suggested Ballmer was “kind of right” that the iPhone wouldn’t gain significant share.
  >         In 2025, iOS is one half of a global duopoly, dominates profits and premium segments, and is often majority share in key markets. Any reasonable definition of “significant” is satisfied, so Ballmer’s original claim—and this defense of it—did not age well.

Full quote:

  > And in a funny sort of way he was kind of right :) http://www.forbes.com/sites/dougolenick/2015/05/27/apple-ios...
  > Android: 79% versus iOS: 14%
"Any reasonable definition of 'significant' is satisfied"? That's not how I would interpret this. We see it clearly as a duopoly in North America. It's not wrong per se, but I'd say misleading. I know we could take this argument and see other slices of the data (premium phones worldwide, for instance), I'm just saying it's not as clear cut as it made it out to be.

  > volandovengo (grade: C+) (ill-equipped to deal with Apple/Google):
  >  
  >     Wrote that Ballmer’s fast-follower strategy “worked great” when competitors were weak but left Microsoft ill-equipped for “good ones like Apple and Google.”
  >     This is half-true: in smartphones, yes. But in cloud, office suites, collaboration, and enterprise SaaS, Microsoft became a primary, often leading competitor to both Apple and Google. The blanket claim underestimates Microsoft’s ability to adapt outside of mobile OS.
That's not what the user was saying:

  > Despite his public perception, he's incredibly intelligent. He has an IQ of 150.
  > 
  > His strategy of being a fast follower worked great for Microsoft when it had crappy competitors - it was ill equipped to deal with good ones like Apple and Google.
He was praising him, and Microsoft did miss opportunities at first. The OC did not make predictions about its later days.

  [Let's Encrypt] Entering Public Beta (2015)
  ===========================================

  - niutech: F "(endorsed StartSSL and WoSign as free options; both were later distrusted and effectively removed from the trusted ecosystem)"

Full quote:

  > There are also StartSSL and WoSign, which provide the A+ certificates for free (see example WoSign domain audit: https://www.ssllabs.com/ssltest/analyze.html?d=checkmyping.c...)

  - pjbrunet: F (dismissed HTTPS-by-default arguments as paranoid, incorrectly asserted ISPs had stopped injection, and underestimated exactly the use cases that later moved to HTTPS)
Full quote:

  > "We want to see HTTPS become the default."
  > 
  > Sounds fine for shopping, online banking, user authorizations. But for every website? If I'm a blogger/publisher or have a brochure type of website, I don't see point of the extra overhead.
  > 
  > Update: Thanks to those who answered my question. You pointed out some things I hadn't considered. Blocking the injection of invisible trackers and javascripts and ads, if that's what this is about for websites without user logins, then it would help to explicitly spell that out in marketing communications to promote adoption of this technology. The free speech angle argument is not as compelling to me though, but that's just my opinion.
I thought the debate was useful and so did pjbrunet, per his update.

I mean, we could go on, there are many others like these.

andy99

I haven’t looked at the output yet, but came here to say: LLM grading is crap. They miss things, they ignore instructions, bring in their own views, have no calibration, and in general are extremely poorly suited to this task. “Good” LLM-as-a-judge type products (and none are great) use LLMs to make binary decisions - “do these atomic facts match, yes/no” type stuff - and aggregate them to get a score.

I understand this is just a fun exercise so it’s basically what LLMs are good at - generating plausible sounding stuff without regard for correctness. I would not extrapolate this to their utility on real evaluation tasks.
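The decomposition described above can be sketched as follows; `judge` stands in for one narrow LLM call per atomic claim, and the claim-extraction step is assumed to have already happened:

```python
def grade_comment(atomic_claims, judge):
    """Aggregate binary judge decisions into one score.

    atomic_claims: short checkable claims extracted from a comment.
    judge: callable claim -> bool, e.g. an LLM asked a single
           'did this turn out true, yes or no?' question per claim.
    Returns the fraction judged true, or None when there is nothing
    checkable (making no prediction is not a failed prediction).
    """
    if not atomic_claims:
        return None
    verdicts = [bool(judge(claim)) for claim in atomic_claims]
    return sum(verdicts) / len(verdicts)
```

The point of the binary framing is that each individual call is easy to audit and calibrate, unlike a free-form letter grade.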

jeffnappi

The analysis of the 2015 article about Triplebyte is fascinating [1]. Particularly the Awards section.

1. https://karpathy.ai/hncapsule/2015-12-08/index.html#article-...

nixpulvis

Quick give everyone colors to indicate their rank here and ban anyone with a grade less than C-.

Seriously, while I find this cool and interesting, I also fear how these sorts of things will work out for us all.