Becoming a 1000x Developer With AI: Part 1 - You Can Write 90% of Your Code With AI
Part 1 of a 3-part series on how to become a 1000x developer with AI.
It's June 2025. You've heard the claims: teams are using AI to build entire apps, end to end. My team at Inference is one of those teams. I'm now at a point where AI is writing up to 90% of my code, and it's really, really good.
Mike Krieger, the CPO of Anthropic, recently did an interview with Lenny Rachitsky where he said 90-95% of Claude Code is now written by Claude Code. There are countless other claims like this being posted every single day from small startups to huge companies like Google and Microsoft.
".@AnthropicAI CPO and @Instagram co-founder Mike Krieger (@mikeyk) on what comes next. Don't miss this one. We discuss: How Claude Code now writes 90-95% (!!) of Claude Code 🔸 What new bottlenecks appear once so much of your code is written by AI 🔸 How Anthropic plans to… pic.twitter.com/e9XNemhjqQ"
— Lenny Rachitsky (@lennysan), June 5, 2025
Do you read those tweets and think "what the hell, I'm not getting those kinds of results!"?
Maybe when you kick off a coding agent (Cursor, Windsurf, Claude Code, etc.) and give it a task, crap like this happens:
- It starts to hallucinate types and functions that don't exist.
- It thinks you're using Mongo but you're actually using Postgres (or vice versa).
- It tries to write a script to "check if it broke anything", but the script doesn't work and it starts writing another script that doesn't work either.
- It thinks it finished the feature, but it doesn't work or is half complete.
- It completely ignores the style of your codebase and starts writing files in random places that don't make sense, or break the patterns that you've established.
- It completely ignores the different services you have in your stack and starts writing new services from scratch, causing you to have duplicated services and logic.
- And so on and so forth.
I've been there. It's frustrating. It starts to feel like it's actually worse than not using AI at all. And you start wondering if you're falling behind and missing out on the huge gains that other people are getting from AI.
I have great news! These are solvable problems now. I wasn't convinced of this until OpenAI's o3 came out recently, and since Claude Opus 4 dropped I've been living in a new reality where AI writes 90% of my code.
This series of posts will be broken down into two main themes: philosophy and practice.
The philosophy of how to use AI is more abstract: it's less about what libraries you're using, what your code looks like, or what your stack is. These concepts will feel very natural to software engineers who have been writing code for 5+ years, especially if you're actually good.
To the mid developers: I'm sorry, but AI is just going to continue to make you mid with extra steps. You need to invest in the core principles of software engineering to get to a point where you're writing 90% or more of your code with AI.
It's not rocket science, and I believe you can learn these principles in a matter of days or weeks, especially since you no longer need to write the code yourself; you just need to understand it and keep the core principles in mind when building your products and systems.
These philosophies are really about systems thinking, scoped to software development.
Let's jump right in.
Putting Names to the Problems
Let's quickly name the problems so that I can explicitly cite why a particular philosophy is important.
Problem 1: Ambiguity and "Open to Interpretation"
This is probably the biggest reason why your AI is not working as well as it could be.
If you're not providing the AI with a clear example of what you want, it's going to have to guess. And AI loves to guess. It would much rather guess and write a bunch of code that doesn't make sense than say "I don't know how to do this, please give me examples of what it should be."
I think this is just due to the datasets that today's models are trained on. There probably aren't many examples of chats where the AI recognizes that it doesn't know how to do something and asks for more information.
Therefore we need to always minimize ambiguity and maximize clarity.
We'll dive into exactly how to do this later in the series.
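In the meantime, to make the contrast concrete, here are two versions of the same request. The feature, file names, and stack details are hypothetical, purely for illustration:

```text
# Ambiguous: the AI has to guess your stack, patterns, and scope.
"Add an endpoint to export users."

# Explicit: almost nothing is left open to interpretation.
"Add a GET /users/export endpoint in src/routes/users.ts that streams
all users as CSV. Follow the pattern in src/routes/billing.ts, reuse
the existing user service in src/services/user-service.ts, and query
Postgres through src/db/queries/. Do not create any new services."
```

The second prompt does the thinking the model would otherwise guess at: where the code lives, which pattern to copy, and which database is in play.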
Problem 2: Lack of structure
If your codebase lacks structure, AI is not going to be as effective as it could be.
Providing structure makes it way harder for the AI to hallucinate. It also makes it way easier for the AI to find what it needs to do the job correctly.
A codebase with a high level of structure is:
- Easier to explore
- Easier to understand
- Easier to add to
- Easier to maintain
- Easier to comprehend from a high level (this is important for the AI to know what to pull into context)
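To make "high level of structure" concrete, here's a sketch of what that might look like. The layout and names are hypothetical; the point is that every kind of thing has exactly one obvious home:

```text
src/
  routes/              # one file per HTTP route, named after the path
    users.ts
    billing.ts
  services/            # one service per domain concept, never duplicated
    user-service.ts
    billing-service.ts
  db/                  # all Postgres access lives here, nowhere else
    schema.ts
    queries/
  lib/                 # shared helpers, no business logic
docs/
  architecture.md      # the high-level map the AI can pull into context
```

When the agent needs to add a billing endpoint, there's exactly one plausible place for each piece, which leaves far less room for it to hallucinate a new service from scratch.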
Problem 3: Losing track of progress
AI does not have infinite memory. Its context window can only hold so much at a time.
As much as you think @'ing your entire codebase in Cursor is helping, it's actually probably hurting the AI's ability to understand what's actually relevant.
Imagine you're at a spelling bee and the moderator tells you the word you need to spell will be the 100th word they say, then says 99 words with no relevance, the word itself, and 99 more words that also have no relevance, expecting you to focus only on that 100th word and completely ignore the other 198.
This is what you're doing when you just shove anything into context.
You need to be selective about what you put into context.
The things that are most important to put into context are:
- The current task we're working on (from a very high level)
- What we've done so far
- What's left to do
- Examples that are relevant to the current task
- Information that is relevant to the current task
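One concrete way to manage this is a short task doc that you hand to the agent at the start of every session. This is a minimal sketch with made-up contents; adapt the format to your own workflow:

```markdown
# Task: Add CSV export to the reports page

## Done so far
- Added an exportReportCsv stub in src/services/report-service.ts
- Wired up the "Export" button in src/routes/reports.ts

## Remaining
- Stream rows from Postgres instead of loading them all into memory
- Add a test mirroring tests/report-service.test.ts

## Relevant examples
- src/services/billing-service.ts: follow this service pattern
- docs/architecture.md: where routes, services, and DB code live
```

Everything in it maps to the list above: the task, what's done, what's left, and pointers to the examples and information that actually matter.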
Managing this context is an art, and if you're a good systems thinker, you'll be better at managing it than others.
When running in something like Cursor agent mode, you want to minimize the amount of times the AI "infers" what context is relevant. It should be extremely clear to the AI what context is relevant, and what isn't. That's what clear structure and explicit documentation and examples will help you achieve.
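Project rules files are one place to make that explicit in Cursor. A minimal sketch, with illustrative contents (check Cursor's docs for the current rules format and file locations):

```text
# .cursor/rules — project rules the agent sees on every task

- We use Postgres, not Mongo. All DB access goes through src/db/.
- New endpoints follow the pattern in src/routes/users.ts.
- Never create a new service if one already exists in src/services/.
- If a task is ambiguous, ask for an example before writing code.
```

Rules like these answer the questions the agent would otherwise infer, which is exactly where the failure modes at the top of this post come from.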
I'm convinced that context management and instruction writing is the biggest differentiator between builders who are getting 1000x more leverage from AI, and those who are only getting 10x more leverage.
We're not going to dive into solving these problems in this post, but as you follow along in the series we'll reference these problems to justify the tips and suggestions that I make.
Let's jump into part 2, where I'll suggest a slew of tips and tricks that you can apply to any codebase without having to gut your system or slow yourself down.