I learned to code a year ago. Halfway through the bootcamp, I remember seeing everyone talking about this new code editor called Cursor.
I was already a bit frustrated with Sublime and had people recommending VS Code to me. So when I saw that Cursor was a VS Code fork, I figured “why not.”
Around the same time at the coding bootcamp, we all started figuring out that before bugging Ryan with an asinine question that would probably result in him pointing out a missing comma, we should run our issue through ChatGPT first.
That worked really well, especially when I asked it to explain the problem and solution in a super simple way so that I actually understood how to avoid making the same mistake in the future. It made programming more approachable because I didn’t have to worry about banging my head against a wall before there was permanent damage.
Fast forward to trying Cursor for the first time — it’s astonishing how much more useful AI is when it’s quite literally in your codebase and knows your code better than you do.
Forget ChatGPT working in a silo. Forget GitHub Copilot trying to guess what I’m doing. Cursor can tell me what I should do and then do it for me.
Here are the best tips I’ve picked up to make the most of coding with Cursor.
In your settings, you can define rules that apply to every prompt. Adding something like this drastically improves the results.
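As a sketch, a rules file along these lines (the exact wording is up to you; these are just the failure modes I kept hitting) covers the basics:

```
- Never replace existing code with placeholders like "// ... rest of the code ...". Always include complete code.
- Always style with Tailwind utility classes; do not write custom CSS.
- Prefer the simplest, most idiomatic solution over clever or over-engineered ones.
- If you're unsure about my intent, ask before making sweeping changes.
```

Because these rules are prepended to every prompt, you only have to write them once instead of repeating them in every chat.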
Before this, I found myself going round and round in circles because code was accidentally replaced with a placeholder, non-ideal solutions were implemented, or it wrote custom CSS instead of using Tailwind.
Defining rules eliminates 90% of the annoying and inaccurate code.
Models update all the time. And the latest and greatest isn’t always enabled by default.
I had to go in and enable claude-3.5-sonnet, which has been much better than gpt-4o.
Any time you make sweeping changes, add new files, or update models, you need to resync.
It’s not clear how often this happens by default, so be sure to do it any time you make a big change, especially if you’re still working within the same chat.
There have been several times I wanted to punch my screen because it wanted to recreate something I literally just did.
Resync often.
This also reduces the need for executing with global context since this indexing refreshes the “memory” of your codebase.
I shouldn’t even have to talk about how useful the sidebar chat is. The first real solution to live up to the “AI companion” promise.
You can see here that even a very dumb prompt with very few specs and very little context can still get you incredibly accurate and comprehensive results.
My favorite button is “Apply.” Damn that’s magical.
When you’ve gone back and forth several times about an issue and it stops offering new solutions, I’ve found it best to start a new chat. Begin with a more detailed prompt and tell it what you’ve tried already. Usually that does the trick to get good troubleshooting again.
When I want to test the quality of a solution, I like to ask simple questions that start with “How should I think about…” — it will go into much more detail, and sometimes even offer better versions of the solution.
And oftentimes when the solution is unclear, it’s helpful to mention a couple of potential solutions you’re thinking of to steer it in the right direction. Mention your preferred solution if you do have one.
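For instance, a prompt along these lines (the specifics here are made up) tends to surface better reasoning than just “fix this”:

```
How should I think about caching these API responses? I'm considering either
an in-memory Map or Redis. I'd prefer to avoid adding infrastructure if the
simpler option is good enough.
```

Naming the candidates and your preference steers it toward weighing trade-offs instead of defaulting to the first workable answer.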
Once in a while I’ll ask it to see if there are any upgrades I should make or if my code can be improved with recent updates.
If you have your eye on a recent update that you’re keen on using, it’s helpful to highlight it this way so it’s at the forefront of the context when implementing something.
If you’re using a boilerplate template, it’s incredibly useful to link to the documentation for it so that it knows the ins and outs of how everything is set up to work.
Take a screenshot of another app and ask it to implement a similar UI customized for your specs.
I’ve done this several times by referencing the screenshots and code of v0.dev mockups.
I’ve found that executing every command with global context results in worse code. I’m guessing this is because the lion’s share of the brainpower goes to searching the codebase rather than the solution.
Instead, referencing a few relevant files and executing without global context (but it’s still using the files you give it as context) gets much better results.
The cases where executing with global context makes the most sense is when you’re querying your own codebase for something, asking it to explain how your code works, or when you need to make a pretty significant shift in your architecture.
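As a sketch, instead of a global-context prompt, I’ll @-mention just the handful of relevant files (the file names here are hypothetical):

```
@components/Navbar.tsx @lib/auth.ts
Update the navbar to show the logged-in user's avatar, using the session
helper in auth.ts. Don't touch any other files.
```

The model spends its effort on those two files rather than on searching the whole codebase.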
Composer is the sidebar chat on steroids.
The chat can still edit multiple files, it just won’t do it simultaneously or with as much accuracy. I’ve found that the chat treats one file as the primary file, so anything else that needs to be edited or changed will be a bit more of a challenge to get high quality code.
That must be why Cursor created the Composer — it treats all the files the same with the context of achieving the desired result from the prompt and then makes sweeping changes.
Here’s an example of something I wanted added to one of my apps.
A few seconds later, it started to explain the changes and I saw them all being applied in the background.
~78 lines of code added, without error, in under a minute.
Possibly. But I still read all generated code and ask it to explain why it did it the way it did.
Sometimes I’ll point out a better way to do something. And sometimes AI will point out a better way to do something. It’s a two-way street.
I couldn’t care less if it makes me a worse programmer, or even a better programmer, for that matter.
To me, coding is a means to an end.
I don’t want to brag about how smart I am and show off the beautiful prose of my code.
All I care about is shipping useful, profitable apps. And so far, AI has helped me do that 10x faster.