
Show HN: CLI that spots fake GitHub stars, risky dependencies and licence traps

73 comments · May 12, 2025

When I came across a study that traced 4.5 million fake GitHub stars, it confirmed a suspicion I’d had for a while: stars are noisy. The issue is they’re visible, they’re persuasive, and they still shape hiring decisions, VC term sheets, and dependency choices—but they say very little about actual quality.

I wrote StarGuard to put that number in perspective, using my own methodology inspired by the study's approach, and to fold a broader supply-chain check into one command-line run.

It starts with the simplest raw input: every starred_at timestamp GitHub will give it. It applies a median-absolute-deviation (MAD) test to locate sudden bursts. For each spike, StarGuard pulls a random sample of the accounts behind it and asks: how old is the user? Any followers? Any contribution history? Still using the default avatar? From that, it computes a Fake Star Index between 0 (organic) and 1 (fully synthetic).
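Roughly, the burst test has this shape (a minimal sketch, not the exact code in starguard.py; the function name and threshold here are illustrative):

    # Sketch of the MAD burst test; `star_dates` is one datetime.date per star,
    # derived from the starred_at timestamps.
    from collections import Counter
    from datetime import date
    import statistics

    def find_star_bursts(star_dates: list[date], threshold: float = 3.5) -> list[date]:
        """Return the days whose star counts deviate sharply from the median."""
        daily = Counter(star_dates)          # stars per calendar day
        counts = list(daily.values())
        med = statistics.median(counts)
        mad = statistics.median(abs(c - med) for c in counts) or 1.0
        # Modified z-score: 0.6745 rescales the MAD to a normal-distribution sigma.
        return [day for day, c in daily.items()
                if 0.6745 * (c - med) / mad > threshold]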

But inflated stars are just one issue. In parallel, StarGuard parses dependency manifests or SBOMs and flags common risk signs: unpinned versions, direct Git URLs, lookalike package names. It also scans licences—AGPL sneaking into a repo claiming MIT, or other inconsistencies that can turn into compliance headaches.
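For a requirements.txt-style manifest, the pinning check is about as simple as it sounds. A toy version (hypothetical function name; the real parser covers several manifest formats):

    # Toy manifest check for two of the risk signs above:
    # unpinned versions and direct URL dependencies.
    def flag_requirements(lines: list[str]) -> list[tuple[str, str]]:
        findings = []
        for line in lines:
            spec = line.split("#")[0].strip()  # drop comments and whitespace
            if not spec:
                continue
            if spec.startswith(("git+", "http://", "https://")):
                findings.append((spec, "direct URL dependency"))
            elif "==" not in spec:
                findings.append((spec, "unpinned version"))
        return findings

    print(flag_requirements(["requests", "flask==2.3.2", "git+https://github.com/x/y"]))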

It checks contributor patterns too. If 90% of commits come from one person who hasn’t pushed in months, that’s flagged. It skims for obvious code red flags: eval calls, minified blobs, sketchy install scripts—because sometimes the problem is hiding in plain sight.
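The bus-factor part is roughly this (a sketch assuming the commit authors and dates have already been paged out of the API; the 90% share and the staleness window are illustrative numbers):

    # Sketch of the contributor-concentration check.
    from collections import Counter
    from datetime import datetime, timedelta

    def bus_factor_flag(commits: list[tuple[str, datetime]],
                        share: float = 0.9, stale_days: int = 180) -> bool:
        """True if one author wrote >= `share` of commits and has gone quiet."""
        if not commits:
            return False
        authors = Counter(author for author, _ in commits)
        top, n = authors.most_common(1)[0]
        if n / len(commits) < share:
            return False
        last_push = max(d for author, d in commits if author == top)
        return datetime.now() - last_push > timedelta(days=stale_days)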

All of this feeds into a weighted scoring model. The final Trust Score (0–100) reflects repo health at a glance, with direct penalties for fake-star behaviour, so a pretty README badge can’t hide inorganic hype.
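Conceptually, the combination step is just a weighted penalty, something like this (the weights below are invented for the example, not the ones starguard.py actually ships):

    def trust_score(fake_star_index: float, dep_risk: float,
                    license_risk: float, bus_factor_risk: float) -> int:
        """Fold per-check risks (each 0..1) into a 0-100 trust score."""
        weights = {"stars": 0.4, "deps": 0.25, "license": 0.2, "bus": 0.15}
        penalty = (weights["stars"] * fake_star_index
                   + weights["deps"] * dep_risk
                   + weights["license"] * license_risk
                   + weights["bus"] * bus_factor_risk)
        return round(100 * (1 - penalty))

Letting the fake-star weight dominate is what keeps a pretty README badge from hiding inorganic hype.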

For the fun of it, I also made it generate a cool little badge for the trust score lol.

Under the hood, it's all heuristics and a lot of GitHub API paging. Run it on any public repo with:

    python starguard.py owner/repo --format markdown

It works without a token, but you'll hit rate limits sooner.

Please provide any feedback you can.

the__alchemist

> It checks contributor patterns too. If 90% of commits come from one person who hasn’t pushed in months, that’s flagged.

IMO this is a slight green flag; not red.

sethops1

I have to agree - the highest quality libraries in my experience are the ones maintained by that one dedicated person as their pet project. There's no glory, no money, no large community, no Twitter followers - just a person with a problem to solve, making the solution open source for the benefit of others.

artski

Fair take—it's definitely context-dependent. In some cases, solo-maintainer projects can be great, especially if they’re stable or purpose-built. But from a trust and maintenance standpoint, it’s worth flagging as a signal: if 90% of commits are from one person who’s now inactive, it could mean slow responses to bugs or no updates for security issues. Doesn’t mean the project is bad—just something to consider alongside other factors.

Heuristics are never perfect, and it's all iterative, but the point is understanding the underlying assumptions and interpreting the output in your own context. I could probably enhance it slightly with an LLM pass and a prompt, but I prefer to keep things purely statistical for now.

85392_school

It could also mean that the project is stable. Since you only look at the one repository's commit activity, a stable project with a maintainer who's still active on GitHub in other places would be "less trustworthy" than a project that's a work in progress.

kstrauser

I agree. I have a popular-ish project on GitHub that I haven't touched in like a decade. I would if needed, but it's basically "done". It works. It does everything it needs to, and no one's reported a bug in many, many years.

You could etch that thing into granite as far as I can tell. The only thing left to do is rewrite it in Rust.

artski

Not a bad idea tbh; an additional signal for how long issues are left open would be good. Though yeah, that's why I was contemplating not highlighting the actual number and instead showing a range, e.g. 80-100 good, 50-70 moderate, and so on.


delfinom

The problem is your audience is:

> CTOs, security teams, and VCs automate open-source due diligence in seconds.

People who probably have fewer brain cells than the average programmer to understand the nuance in the flagging.

artski

Lol yeah, tbh I just made it without really thinking of an audience. I was just looking for a project to work on, then I saw the paper and figured it would be cool to try it on some repositories out there. That part is just me asking GPT to make the README better.

mlhpdx

The signal here is how many unpatched vulnerabilities there are, maybe multiplied by how long they've been out there. Purely statistical. And an actual signal.

255kb

Also, isn't that just 99% of OSS projects out there? I maintained a project for the past 7+ years, and despite 1 million downloads, tens of thousands of monthly active users, it's still mostly me, maintaining and committing. Yes, there is a bus factor, but it's a common and known problem in open-source. It would be better to try to improve the situation instead of just flagging all the projects. It's hard enough to find people ready to help and work on something outside their working hours on a regular basis...

lispisok

It's gonna flag most of the Clojure ecosystem.

throwaway150

Yep, and it's not just Clojure. This will end up flagging projects across all non-mainstream ecosystems. Whether it's Vim plugins, niche command-line tools, academic research code, or hobbyist libraries for things like game development or creative coding, they'll likely get flagged simply because they're often maintained by individual developers. These devs build the projects, iterate quickly in the early stages, and eventually reach a point where the code is stable and no longer needs frequent updates.

It's a shame that this tool penalizes such projects, which I think are vital to a healthy open source ecosystem.

It's a nice project otherwise. But flagging stable projects from solo developers really sticks out like a sore thumb. :(

artski

It would still count as "trustworthy"; it just wouldn't come out to 100/100 :(.

j45

Not sure if this is a red flag.

coffeeboy

Very nice! I'm personally looking into bot account detection for my own service and have come up with very similar heuristics (albeit simpler ones since I'm doing this at scale) so I will provide some additional ones that I have discovered:

1. Fork-to-stars ratio (see the sketch after this comment). I've noticed that several of the "bot" repos have the same number of forks as stars (or rather, most ratios are above 0.5). Typically a project doesn't have nearly as many forks as stars.

2. Fake repo owners clone real projects and push them directly to their account (not fork) and impersonate the real project to try and make their account look real.

Example bot account with both strategies employed: https://github.com/algariis
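A minimal sketch of that first heuristic, assuming star and fork counts already fetched from the repo API (the 0.5 cut-off is the ratio mentioned above):

    def fork_star_ratio_suspicious(stars: int, forks: int,
                                   threshold: float = 0.5) -> bool:
        """Flag repos whose fork count is unusually close to their star count."""
        if stars == 0:
            return False
        return forks / stars > threshold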

artski

Crazy how far people go for these things tbh.

hungryhobbit

Dependencies: PyPI, Maven, Go, Ruby

This looks like a cool project, but why on earth would it need Python, Java, Go, AND Ruby?

27theo

It doesn't need them, it parses SBOMs and manifests from their ecosystems. I think you misunderstood this section of the README.

> Dependencies | SBOM / manifest parsing across npm, PyPI, Maven, Go, Ruby; flags unpinned, shadow, or non-registry deps.

The project seems like it only requires Python >= 3.9!

deltaknight

I think these are just the package managers that it supports parsing dependencies for. The actual script seems to just be a single python file.

It does seem like the repo is missing some files though; make is mentioned in the README, but there's no Makefile and no list of Python dependencies for the script that I can see.

artski

Yeah, to be fair I need to clean it up. I was stuck testing different strategies and making it work, and just wanted to get feedback ASAP before moving on (didn't want to spend too much time on something only to find out I was badly wrong about it). Next step is to get it all cleaned up.

catboybotnet

Why care about stars in the first place? Github is a repo of source repos, using it like social media is pretty silly. If I like a project, it goes into a folder in my bookmarks, that's the 'star' everyone should use. For VCs? What, are you looking to make an open source todo app into a multi million dollar B2B SaaS? VCs are the almighty gods of the world, and us humble peons do not need to lend our assistance into helping them lose money :-)

Outside of that, neat project.

never_inline

The social-ification of tech is indeed a worrying trend.

This is exacerbated by low-quality, promotional Medium articles, LinkedIn posts, etc., which promote the Nth copycat GPT-wrapper app or Kubernetes dashboard as the best thing since sliced bread, with not-so-technical programmers falling for it.

This applies to some areas more than others (eg, generative AI, cloud observability)

catboybotnet

I feel like it pushes younger folks getting into programming to follow trends instead of what they might actually want to do. Don't bother starting to learn, AI will take your job if you do. Vibe code up a SaaS startup and sell it! Look at levelsio, he's making millions and is an influencer!

There's even websites that are dedicated to dumping your SaaSes, like acquire.com!

Stay off social media, ignore trends, and learn the basics. JS frameworks come and go like flies, don't get too invested in the latest and greatest!

ngangaga

> they still shape hiring decisions, VC term sheets, and dependency choices

This is nuts to me. A star is a "like". It carries no signal of quality, and even as a popularity proxy it's quite weak. I can't remember the last time I looked at stars and considered them meaningful.

rurban

A star is more of a "follow" or "watch", because each update then shows up in my timeline.

pkkkzip

The difference means getting funded or not. People fake testimonials and put logos of large companies on their SaaS too.

Some people even buy residential proxies and create accounts on communities that they can steer towards specific actions, like "hey, let's short squeeze this company, let me sell you my call option", etc.

There's no incentive to be honest. I know two founders: one cheated with fake accounts and GitHub stars and exited; the other ultimately gave up and went to work in another field.

The old saying "if you lie to those who want to be lied to, you will become wealthy" rings true.

However, at the end of the day it is dishonest, and money earned through deception is bad.

ngangaga

> the difference means getting funded or not.

This speaks more to the incompetence of VC than anything. How can you justify deploying hundreds of thousands or millions of dollars on the basis of "stars"?

I have a very hard time blaming the people who pull off this scam. Money is money and taking from VCs is morally (nearly) free.

Yiling-J

It would be interesting if there were an AI tool to analyze the growth pattern of an OSS project. The tool should work based on star info from the GitHub API and perform some web searches based on that info.

For example: the project gets 1,000 stars on 2024-07-23 because it was posted on Hacker News and received 100 comments (<link>). Below are the stargazer stats during this period: ...

artski

Yeah, I thought about this and may do it down the line, but I wanted to start with the pure statistics as the base so it's as little of a black box as possible.

knowitnone

Great idea. This should be done by GitHub though. I'm surprised GitHub hasn't been sued for serving malware.

swyx

> I'm surprised Github hasn't been sued for serving malware.

do you want a world where people can randomly sue you for any random damages they suffer or do you want nice things like free code hosting?

unclad5968

In the US people can already randomly sue you for any random damages. I could sue GitHub right now even if I'd never previously heard of or interacted with the site.

KomoD

> do you want a world where people can randomly sue you for any random damages they suffer

Isn't that already a thing, at least in the US if not the entire world?

MrDarcy

I’m not sure if you’re being sarcastic but if the claim of damages is likely to win then I’d like someone to hear it.

artski

Yeah, to be fair that would be great. Sometimes just giving a nudge and showing people want these features is the first step toward an official integration.

binary132

I approve! It would be cool to have customizable and transparent heuristics. That way if you know for example that a burst of stars was organic, or you don’t care and want to look at other metrics, you can, or you can at least see a report that explains the reasoning.

feverzsj

CTOs don't care about GitHub stars. They are behind tons of screening processes.

throwaway314155

Believe me, CTOs of startups do.

bavell

I believe you, throwaway314155!!

zxilly

Frankly, I think this program is AI-generated.

1. There are hallucinatory descriptions in the README (make test), and also in the code, such as the rate limit set at line 158, which is the wrong number.

2. All commits were made in the GitHub web UI; checking the signatures confirms this.

3. Overly verbose function names and a 2,000-line Python file.

I don't have a complaint about AI per se, but the code quality clearly needs improvement: the license detection only covers a few common examples, the detection thresholds seem to be set randomly, and the entire _get_stargazers_graphql function is commented out and performs no action, saying "Currently bypassed by get_stargazers". Did you generate the code without even reading through it?

Bad code like this gets over 100 stars; it seems like you're doing satirical fake-star performance art.

zxilly

I checked your past submissions and yes, they are also AI-generated.

I know it's the age of AI, but one should do a little checking oneself before posting AI-generated content, right? Or at least know how to use Git and write meaningful commit messages?

artski

It's a project I'm making purely for myself, and I like to share what I make. Sorry I didn't put much effort into the commit messages; I won't do that again.

cyberge99

Don’t apologize. You didn’t do anything wrong. It’s your repo, use it how you wish. You don’t owe that guy anything.

artski

Well, I initially planned to use GraphQL and started to implement it, but switched to REST for now since the GraphQL path is still incomplete, it isn't currently required, and it keeps things simpler while I iterate. I'll bring GraphQL back once I've got key cycling in place and things are more stable. As for the rate limit, I've been tweaking the numbers manually to avoid hitting it constantly, which worked to an extent; that's actually why I want to add key rotation... And I am allowed to leave comments to myself in a work in progress, no? Or does everything have to be perfect from day one?

You would assume that if it were purely AI-generated, it would have the correct rate limit in both the comments and the code... But honestly, I don't care, and yeah, I ran the README through GPT to 'prettify' it. Arrest me.

bavell

You probably should have put "v0.1" or "alpha/beta" in the post title or description - currently it reads like it's already been polished up IMO.

Am4TIfIsER0ppos

What is a license trap? This "AGPL sneaking into a repo claiming MIT"? Isn't that just a plain old license violation?

artski

Basically, what I mean by it is, for example, a repository that appears to be under a permissive license like MIT, Apache, or BSD, but actually includes code governed by a much stricter or viral license—like GPL or AGPL—often buried in a subdirectory, dependency, or embedded snippet. The problem is, if you reuse or build on that code assuming it's fully permissive, you could end up violating the terms of the stricter license without realising it. It's a trap because the original authors might have mixed incompatible licenses, knowingly or not, and the legal risk then falls on downstream users. So yeah, essentially a plain old license violation, but one that's relatively easy to miss or not think about.
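A toy version of that scan (a simplified sketch, not StarGuard's actual licence logic; the header pattern and file walk are assumptions for illustration):

    # Look for viral-licence headers in a repo that declares itself permissive.
    import re
    from pathlib import Path

    VIRAL = re.compile(r"GNU (AFFERO )?GENERAL PUBLIC LICENSE", re.I)

    def find_licence_conflicts(repo_root: str) -> list[str]:
        """Paths mentioning GPL/AGPL despite a permissive top-level licence."""
        hits = []
        for path in Path(repo_root).rglob("*"):
            if path.is_file() and path.suffix in {".py", ".js", ".c", ".txt"}:
                head = path.read_text(errors="ignore")[:2000]  # header region only
                if VIRAL.search(head):
                    hits.append(str(path))
        return hits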

tough

Oh interesting, you've put a word to it. Most of the VC-funded FOSS open-core apps/SaaS that have popped up over the past few years are like this.

The /ee folders are a disgrace.

tough

They get around it by licensing only certain packages/parts of the codebase differently.

sesm

How does it differentiate between organic star spikes (like a project posted on HN) and inorganic ones?

colonial

Just spitballing, but assuming the fake stars are added in a "naive" manner (i.e. as fast as possible, no breaks) you could distinguish the two by looking for the long tail usually associated with organic traffic spikes.

Of course, the problem with that is the adversary could easily simulate the same effect by mixing together some fall-off functions and a bit of randomness.

artski

For each spike, it samples the users behind it (I currently set the sample size high enough that it essentially gets all of them for 99.99% of repos; that should be optimised for speed, but I figured I'd just grab every single one for now whilst building it). It then checks the users who caused the spike for signs of being "fake accounts".