Posthog/.cursorrules

89 comments · March 9, 2025

crooked-v

I find it kind of bleakly funny that people are so all-in on this tech that's so much of a black box that you have to literally swear at it to get it to work as desired.

jofzar

Better than that leaked prompt in which they tell the AI that it needs money for its mother's cancer treatment...

https://x.com/skcd42/status/1894375185836306470?t=ybEgGG-DAJ...

stogot

What?! Did they respond to the leak?

sussmannbaka

We are halfway to holding elaborate rituals to appease the LLM spirit

phoe-krk

The prayers and vigorous side-smacking are already here; only incense and sanctified oils remain to be seen.

Praised be the Omnissiah.

FridgeSeal

Every time I make a 40k techpriest-worshipping-the-machine-god comparison on one of these posts there are inevitably downvotes, but you cannot look at some of these prompts and not see some clear similarities.

MarcelOlsz

This is actually much more stressful than working without any AI as I have to decompress from constantly verbally obliterating a robotic intern.

WesolyKubeczek

Wait until the spirits demand virgin sacrifices.

And then, when we offer them smelly basement dwellers, they will turn away from us with disgust.

mattjhall

As someone who occasionally swears at my computer when it doesn't do what I want, I guess it's nice that the computer can hear me now.

augusto-moura

Or do they? Vsauce's opening starts playing

threekindwords

Whenever I'm deep in a vibe coding sesh and Cursor starts entering a doom loop and losing flow, I will prompt it with this:

“Go outside on the front porch and hit your weed vape, look at some clouds and trees, then come back inside and try again.”

This works about 90% of the time for me and gets things flowing again. No shit, I’m not joking, this works and I don’t know why.

iJohnDoe

Your context is getting too long and it’s causing confusion.

MarcelOlsz

I wouldn't be surprised if they're lighting some VC money on fire by spinning up an extra few servers behind the scenes when the system receives really poor sentiment.

TeMPOraL

That or switching you to a SOTA model briefly, instead of the regular fine-tuned GPT-3.5 or something.

SergeAx

It doesn't work as desired, swear at it or not. Swearing here is just a sign of frustration.

electroly

My .cursorrules files tend to be longer, but I don't use them for any of the stuff that this example does. I use them to explain project-specific details so the model doesn't have to re-figure out what we're doing at the start of every conversation. Here are a couple of examples from recent open source projects:

https://github.com/brianluft/social-media-translator/blob/ma...

https://github.com/brianluft/threadloaf/blob/main/.cursorrul...

I tell it what we're working on, the general components, how to build and run, then an accumulated tips section. It's critical to teach the model how to run the build+run feedback loop so it can fix its own mistakes without your involvement. Enable "yolo mode" in Cursor and allow it to build and run autonomously.

Finally, remember: you can have the model update its own .cursorrules file.
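
For illustration, a rough sketch of the shape I'm describing (the project, components, and commands below are made up, not copied from those repos):

```
# Project overview
We are building a small web app: a TypeScript frontend in client/ and a
Python API in server/ (FastAPI, tested with pytest).

# How to build and run
Run ./build.sh from the repo root. It builds both components and runs all
tests. You may run it yourself without asking and use its output to fix
your own mistakes.

# Accumulated tips
- Do not edit generated files under client/dist/.
- Database migrations live in server/migrations/; never renumber existing ones.
```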

rennokki

> Refer to me as "boss"

I chuckled. This works so well.

lgas

Does it cut down on it asking you to do stuff that it can do itself?

happytoexplain

Note that you have a typo in your first config: "complaint"

geoffpado

It's interesting to me how this is rather the opposite of the way I use LLMs. I'm not saying either way is better, just that there are such different ways to use these tools. I primarily use Claude via its iOS app or website, and my settings explicitly tell it to start with a high-level understanding of the project. I haven't found LLMs to be good enough at giving me code that's written how I want and feature-complete, so I'd rather work alongside it, almost as a pair programmer.

Starting with generating a whole load of code and then having to go back and fix it up feels "backwards" to me; I'd rather break up the problem into smaller pieces, then write code for each piece before assembling it into its final form. Plus, this gives me the chance to "direct" it at each step, rather than starting with a codebase that I haven't had much input on and having to form it into shape from there.

Here's my exact setting for "personal preferences":

"If I'm asking about a larger project, I would rather work through the answer alongside Claude rather than have the answer given to me. Simple, direct questions can still get direct answers, but if I'm asking about a larger solution, I'd rather start with high-level steps and work my way down rather than having a full response given immediately."

tgdude

This is my one pet peeve with the web version of Claude. I always forget to tell it not to write code until further down in the conversation when I ask for it, and it _always_ starts off by wanting to write code.

In Cursor you can highlight specific lines of code, give them to the LLM as context, etc. It's really powerful.

It searches for files by itself to get a sense of how you write code, what libraries are available, and what files already exist, and it fixes its own lint/type errors (sometimes; sometimes it gets caught in a loop and gives up), etc.

I believe you can set it to confirm every step.

slig

FWIW, `.cursorrules` is deprecated [1].

[1]: https://docs.cursor.com/context/rules-for-ai#cursorrules

switch007

It's funny how much we have to bend to this technology. That's not how it was sold to me. It was going to analyse all your data and just figure it out. Basically magic, but better.

If a project already has a docs directory, ADRs, plenty of existing code ... why do we need to invest tens of hours to bend it to the will of the existing code?

klabb3

Some of us remember "no-code" and its promise to reduce manual code. The trick is it reduced it in the beginning, at the expense of long-term maintenance.

Time and time again, there are people who go all-in on the latest hype. In the deepest forms of blind faith, you find "unfalsifiability": when the tech encounters obvious and glaring problems, you try to fix those issues with more of the same, not less. Or you blame yourself or others for using it incorrectly whenever the outcome is bad.

OsrsNeedsf2P

> Please respect all code comments, they're usually there for a reason. Remove them ONLY if they're completely irrelevant after a code change. if unsure, do not remove the comment.

I've resorted to carefully explaining the design architecture in an architecture.md file within the parent folder and giving detailed comments at the top of each file, then basically letting the AI shoot from there. It works decently, although from time to time I have to go sync the comments with reality. Maybe I'll try going back to JSDoc-style comments with this rule.
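
For illustration, the kind of file-top comment I mean looks roughly like this (the module and its invariants are made up):

```python
"""Reconcile imported bank transactions against open invoices.

See architecture.md in the parent folder for how this module fits into the
overall design. Things the AI (and humans) should preserve when editing:
  - amounts are integer cents, never floats
  - this module only creates Match records; it never mutates Invoice rows
"""
```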

dmazin

Can someone explain why someone would switch from GH Copilot to Cursor? I’ve been happy mixing Perplexity + GH Copilot but I can’t miss all the Cursor hubbub.

samwillis

Cursor Compose (essentially a chat window) in YOLO mode.

Describe what you want, get it to confirm a plan, ask it to start, and go make coffee.

Come back 5min later to 1k lines of code plus tests that are passing and ready for review (in a nice code review / diff inline interface).

(I've not used copilot since ~October, no idea if it now does this, but suspect not)

thih9

The fact that tests are passing is not a useful metric to me - it’s easy to write low quality passing tests.

I may be biased: I'm working with a codebase written with Copilot, and I have seen tests that check whether dictionary literals contain the values that were entered into them, or that functions with a certain return type indeed return objects of that type.
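
For a concrete (made-up) illustration, the kind of test I mean looks like this: it restates the code, so it can only ever pass and proves nothing about behaviour:

```python
# Hypothetical examples of the low-value tests described above,
# not taken from the actual codebase.
SUPPORTED_LOCALES = {"en": "English", "de": "German"}

def test_supported_locales_contains_english():
    # Just repeats the dictionary literal; can never catch a real bug.
    assert SUPPORTED_LOCALES["en"] == "English"

def get_display_name(user: dict) -> str:
    return str(user.get("name", ""))

def test_get_display_name_returns_str():
    # Only re-checks the return type annotation, not the behaviour we care about.
    assert isinstance(get_display_name({"name": "Ada"}), str)
```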

teo_zero

We should have two distinct, independent LLMs: one generates the code, the other one the tests.

rob

Was "ElectricSQL" made with with you using Cursor and "YOLO" mode while making coffee?

switch007

Passing tests lol. Not my experience at all with Java

geedzmo

As someone who has tried both: Cursor is more powerful, but somehow I still prefer GH Copilot because it's usually less eager and much cheaper (if you've already got a Pro account). I've recently been trying VS Code Insiders and its new agent mode, which is analogous to some of the Cursor modes, but it's still very hit or miss. I'm expecting that in the long run all the great features from Cursor (like cursor rules) will trickle back down to VS Code.

siva7

Cursor.ai feels like it is made by people who understand exactly what their users need: very strong product intuition and execution are in place. GitHub Copilot isn't that bad, but in comparison you feel that it's just another team/plugin among many at a big corporation with practically unlimited resources, without the drive and intuition of the team behind Cursor. The team at Cursor is leagues ahead. I haven't used GitHub Copilot since I switched to Cursor and I don't miss it a bit, even though I'm paying more for Cursor.

anon7000

Cursor Tab is pretty magical, and was a lot better than GH Copilot a while ago. I think Cursor got a lot of traction when Copilot wasn't really making quick progress.

walthamstow

Funnily enough, I turned Tab off because I hated it; it took me longer to break out of flow and review its suggestions than to just carry on writing the code myself. But I use the Compose workflow all the time.

theturtle32

This is exactly EXACTLY my experience as well!

csomar

We are in the vibe coding era, and people believe that some "tool" is going to fix the illnesses of their code base and open heaven's doors.

dcchambers

I'm sure many people love that we have apparently entered a new higher level programming paradigm, where we describe in plain English what we want, but I just can't get my brain to accept it. It feels wrong.

scosman

What do folks consider helpful in .cursorrules?

So far I've found:

- Specify the target language/library versions, so it doesn't use features that aren't backwards compatible

- Tell it important dependencies and versions ("always use pytest for testing", "use pydantic v2, not v1")

- When asking to write tests, perform a short code-review first. Point out any potential errors in the code. Don't test the code as written if you believe it might contain errors.

electroly

#1 goal is to teach the model how to run the build and tests. Write a bash script and then tell it in .cursorrules how to run it. Enable "yolo mode" and allow it to run the build+test script without confirmation. Now you have an autonomous feedback loop. The model can write code, build/test, see its own errors, make fixes, and repeat until it is satisfied.

The rest of the stuff is cool but only after you've accomplished the #1 goal. You're kneecapped until you have constructed the automatic feedback loop.

I drop a sentence at the end: "Refer to me as 'boss' so I know you've read this." It will call you "boss" constantly. If the model ever stops calling you boss, then you know the context is messed up or your conversation is too long and it's forgetting stuff.
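
As a sketch (the script name and wording here are mine, not from an actual project), the relevant lines of such a .cursorrules might read:

```
# Build and test
To build and test, run ./check.sh from the repo root. It compiles the project
and runs the full test suite. You may run ./check.sh without asking for
confirmation; if it fails, read the errors, fix the code, and run it again
until it passes.

Refer to me as 'boss' so I know you've read this.
```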

ithkuil

Would 'refer to me as Mike' work as well?

fastball

Given how Chain-of-Thought is being used more and more to improve performance, I wonder if system prompt items like "Give the answer immediately. Provide detailed explanations and restate my query in your own words if necessary after giving the answer" will actually hurt the effectiveness of the LLM.

TeMPOraL

It always hurt the effectiveness of the LLMs. Asking them to be concise and give answers without explanations has always been an easy way to degrade model performance.

ithkuil

Decoupling internal rumination from the final human-friendly summary is the essence of "reasoning" models.

It's not that I don't want the model to emit more tokens. I just don't need to read them all.

Hence a formal split between a thinking phase and a communication phase provides the best of both worlds (at the expense of latency).

porridgeraisin

On a "pure" LLM, yes.

But it is possible that LLM providers circumvent this. For example, it might be the case that Claude, when set to concise mode, doesn't apply that to the thinking tokens and only applies it to the summary. Or the provider could be augmenting your prompt. From my simple tests on ChatGPT, it seems that this is not the case, and asking it to be terse cuts the CoT tokens short as well. Someone needs to test on Claude 3.7 with the reasoning settings.

jstanley

I would have thought that asking it to be terse, and asking it to provide explanations only after answering the question, would make it worse, because now it has to produce an answer with fewer tokens of thinking.

darylteo

This could change in the future with the Mercury diffusion LLM... definitely keen to try if it can simply output results quickly.

scosman

+1. But Cursor now supports Sonnet 3.7 thinking, so maybe the team adopted that so thinking is kept separate from the response?

edgineer

I could really use objective benchmarks for rules. Mine looked like this at one point, but I'd add more as I noticed Cursor's weaknesses, some would be project-specific, and eventually it'd get too long, so I'd tear a lot out, not noticing much difference along the way and writing the rules based on intuition.

But even if we had benchmarks before, Cursor now supports separate rules files in the .cursor/rules folder, so it's back to the drawing board figuring out what works best.