Lessons Learned in 2026
Dear reader,
This is the 2026 edition. If you read last year’s lessons, you know the format: weekly reflections from the intersection of engineering, leadership, and figuring it out as you go.
2025’s lessons were about foundations — compounding, communication, writing things down, engineering your own luck. Those still hold. But this year hit different. AI went from “interesting tool” to “daily coworker.” The job market got brutal. The old career playbooks started showing cracks.
These lessons come from what I actually did this year — building software, leading teams, running side projects. Some landed. Some didn’t. All of them taught me something.
Let’s get into it.
(Updated April 2026 with 8 new lessons.)
1. “AI is a power tool, not a replacement.”
The discourse is exhausting. AI will take your job. AI won’t take your job. Here’s what actually happened in 2026:
- Engineers who use AI ship faster. Period.
- Engineers who only use AI ship garbage.
- The skill is knowing when to prompt and when to think.
Translation: AI multiplies your existing ability. If you’re a strong engineer, you become dangerous. If you’re not, AI just helps you produce bad code faster.
But knowing when to use the tool is only half the equation. The other half is knowing what kind of engineer you are in the first place.
2. “Generalists win when the rules change.”
Every time the industry shifts (cloud, mobile, AI), specialists scramble to stay relevant while generalists connect the dots.
- They see how AI fits into existing systems and bridge product, engineering, and business because they’ve sat in all three rooms.
- Adaptation is their default mode. They’ve been doing it their whole career.
Specialists get hired for what they know. Generalists get hired for what they can figure out. In 2026, “figure it out” is the most in-demand skill on the market.
But figuring it out only matters if someone knows you can.
3. “Being qualified and being hireable are two different things.”
I interview candidates. A lot of them. I ask three things:
- System design. I give them two endpoints I’m actually building at work. I want to see how they think — what questions they ask, whether they consider caching, scaling, observability, indexing, failure modes… The answer matters less than the approach.
- AI stance. How are they using it today? Not “do you think AI is cool” but “show me how it fits into your workflow.” What habits they’ve built around it. What guardrails they have in place. This separates people who’ve actually integrated it from people who’ve read about it.
- Depth check. I pick something from their resume or website and ask them to go deep. Did they drive that project or were they along for the ride? You can tell in about 90 seconds.
Here’s the problem: most engineers work on internal systems. There’s no public proof of what they built. Their resume says “designed microservice architecture for payment processing” and so does everyone else’s. I can’t tell who actually made the decisions and who just attended the meetings.
That’s exactly why I started this website. I wanted proof of work that exists outside a company’s walls.
The candidates who stand out do the same thing. Or they have a network referral.
Your skills are the product. Your visibility is the distribution. You need both.
4. “Multitasking is a cope.”
Context switching doesn’t make you productive. It makes you feel productive. There’s a difference.
I tested this on myself. For one day, I set an hourly timer and wrote down what I was working on. The pattern was obvious: meeting, come out, start one thing, meeting, come out, start a different thing. By end of day I’d touched six tasks and finished zero. I felt busy. I wasn’t.
Two changes fixed it. First, I set up Outlook to auto-schedule two hours of focus time every day. It blocks my calendar so people can't drop meetings into that window. I can move it around if I need to, but it's there by default. Second, I stopped jumping between tasks. I pick one, get it to a state where I feel comfortable leaving it (typically a PR), then move to the next. I also audited my task list: anything I can delegate to someone I trust gets handed off, and anything I don't know how to do gets set aside while I find the right person instead of spinning on it.
Translation: Deep work is the only work that compounds. Everything else is motion without progress.
5. “Plan first, ship fast, iterate features.”
The engineers I learn the most from don’t skip planning. They plan well, ship the first version fast, then iterate on features. The design doc comes first. The architecture review comes first. But neither takes three weeks.
This matters more now than it did two years ago. AI moved the bottleneck. Writing code is no longer the slow part. Thinking about what to build, how it fits together, what breaks at scale — that’s where the time goes. The engineers who skip that step ship faster on day one and spend the next month cleaning up the mess.
Plan tight. Ship fast. Iterate on what users actually need, not what you guessed they’d want.
AI made code cheap. That makes the design doc more valuable, not less.
6. “Your network is your safety net. Build it before you need it.”
Layoffs have hit 500,000+ tech workers since 2023. The ones who landed quickly had one thing the others didn't: relationships.
- Referrals make up 7% of applications but 40% of hires. Cold applications convert at 1-2%.
- People hire people they’ve seen ship. A warm introduction beats a perfect resume every time.
Build relationships before you need them. Open source, blog posts, conferences, Slack communities. Show up so people know what you're capable of before you ever need to ask.
7. “Analyze what others do wrong.”
I’ve learned more from studying failures than success stories. Every engineer I respect does this.
Our most recent outage? JPA changes. Someone eagerly loaded entire joined tables across multiple relationships, pulling way more data than needed. No lazy loading, no native queries. Just a full table scan hiding behind an ORM. That's not a hard problem to avoid — but only if you've seen it go wrong before.
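If you haven't seen it, here's a minimal sketch of the shape of the fix, with hypothetical entity names: keep associations lazy by default, and make eager loading an explicit, deliberate query.

```java
import jakarta.persistence.*;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import java.util.List;
import java.util.Optional;

// Hypothetical entities. The trap: @ManyToOne is EAGER by default in JPA,
// which is how "fetch one row" quietly becomes "join half the schema."
@Entity
class Invoice {
    @Id @GeneratedValue
    Long id;

    @OneToMany(mappedBy = "invoice", fetch = FetchType.LAZY)
    List<LineItem> items;
}

@Entity
class LineItem {
    @Id @GeneratedValue
    Long id;

    @ManyToOne(fetch = FetchType.LAZY) // override the EAGER default
    @JoinColumn(name = "invoice_id")
    Invoice invoice;
}

// When you genuinely need the children, ask for them explicitly.
// The extra data becomes a decision instead of an accident.
interface InvoiceRepository extends JpaRepository<Invoice, Long> {
    @Query("select i from Invoice i join fetch i.items where i.id = :id")
    Optional<Invoice> findWithItems(@Param("id") Long id);
}
```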
We also get 4xx and 5xx errors from engineers who don’t follow the API contract documented in Swagger or our wikis. Same pattern every time: someone builds to what they assume the interface does instead of reading the spec.
The deeper lesson I keep coming back to: clean your data before it enters the database. If you validate at ingestion, you need less explicit error handling and safeguards scattered across every downstream service. Most defensive code exists because someone upstream was sloppy.
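A minimal sketch of what validation at ingestion can look like, assuming a Spring Boot service with Jakarta Bean Validation (the DTO and its fields are invented for illustration):

```java
import jakarta.validation.Valid;
import jakarta.validation.constraints.*;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

// Hypothetical ingestion DTO: the constraints live at the boundary,
// so everything downstream can trust what's already in the database.
record CustomerIngestRequest(
        @NotBlank @Size(max = 100) String name,
        @NotNull @Email String email,
        @Pattern(regexp = "[A-Z]{2}") String countryCode // ISO 3166-1 alpha-2
) {}

@RestController
class IngestController {
    // @Valid rejects malformed payloads with a 400 before they ever
    // reach persistence; no defensive null checks scattered downstream.
    @PostMapping("/customers")
    ResponseEntity<Void> create(@Valid @RequestBody CustomerIngestRequest request) {
        // ... persist the already-clean record
        return ResponseEntity.accepted().build();
    }
}
```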
Translation: Critical thinking about failures is staff-level thinking. Anyone can copy a pattern that works. The edge comes from recognizing what doesn’t.
8. “Learn to drop your ego.”
You will ship bugs. You will be wrong in design reviews. You will propose an architecture that gets torn apart. That’s the process.
Take code review feedback as signal, not attack. Say “I was wrong” out loud. It’s free and it earns trust. The least defensive engineers I know are also the ones who grow the fastest.
Ego protects your feelings. Humility protects your growth.
9. “Sprinters vs. grinders. Know your style.”
Some engineers thrive in bursts. Hack weeks, tight deadlines, sprint-to-finish energy. Others produce their best work through slow, steady, daily progress.
Neither is wrong. Both ship great software. The mistake is forcing one style when you’re wired for the other.
I’m a grinder. Slow, steady, daily progress. I’ve tried sprinting (hack week energy, caffeine-fueled late nights) and the output never holds up. Knowing that saved me years of frustration pretending otherwise.
Translation: Match your workflow to your wiring. Borrowed productivity systems break on contact with reality.
10. “Focus is the only way to create meaningful work.”
Deep architecture design, production debugging, high-quality PRs. None of these happen in 30-minute increments between meetings.
AI tools actually make this harder. Waiting for code generation creates micro-pauses that invite Slack, email, social media. Every pause is a door out of flow state. I’ve started treating my IDE like a cockpit — notifications off, Slack closed, phone in another room. The difference in output quality is night and day.
Distraction is the tax on deep work. In 2026, the tax rate went up.
11. “Invest in growth, not comfort.”
Six years ago, I transitioned into software engineering. I took a red-circle (a salary grade decrease for two years) to make it happen. I watched my peers earn more while I learned the fundamentals from scratch.
It took a while to catch back up. Longer than I expected. But the skill foundation I built during those two years is what everything else sits on — this website, the side projects, the ability to lead technical teams.
You can probably only afford to do this once in a career. That’s why you have to be selective about when. Pick the right moment, take the hit, and build something that compounds for the next decade.
Here’s the math: 1 hr/day of deliberate learning x 5 days/week x 50 weeks = 250 hours/year. That’s 250 hours your peers aren’t putting in. Over 3 years, you’re playing a different game.
Translation: Optimize for learning velocity in your 20s and 30s. The money follows. But be selective about when you take the hit — you only get one good window.
12. “Put in the hours early. Then redirect them.”
I worked a lot more hours in my 20s than I do now. Long days, weekends, whatever it took to close the gap between where I was and where I wanted to be. I don’t regret a single hour.
But here’s what changed: the hours didn’t go away. They shifted. I spend more time now on side projects, this blog, and building things outside my day job than I ever did in my 20s. The intensity is the same. The direction is different.
Early-career hours build your foundation. Mid-career hours build your optionality.
The work ethic doesn’t expire. Where you point it does.
13. “The best engineers think in systems, not features.”
Features are what you build. Systems are how everything connects.
We ran into this recently. We were designing a feature so consumers could query our results using existing identifiers. Simple enough. Except our data wasn't keyed the way our consumers expected. They had human-readable business codes. We had internal IDs. To get our data, they'd need to make additional API calls just to translate their identifiers into ours.
Feature thinking says: build GET /domain/{id} and call it done. Systems thinking says: our consumers don’t have our internal IDs, so that endpoint is useless to them. We had to design around what they actually had available, not what was convenient for our data model. It meant we couldn’t follow typical REST conventions exactly, but we could still use proper identifiers that made sense on both sides.
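A hedged sketch of the shape this took (all names hypothetical): key the route by what the consumer actually holds, and do the ID translation once, on our side of the boundary.

```java
import org.springframework.web.bind.annotation.*;

// Hypothetical collaborators for the sketch.
interface CodeMappingService { long toInternalId(String consumerCode); }
interface ResultService { ResultDto find(long internalId); }
record ResultDto(long id, String payload) {}

// The path is keyed by the code the consumer holds. The translation to
// our internal ID happens once, here, instead of forcing every caller
// to make an extra lookup call first.
@RestController
@RequestMapping("/results")
class ResultController {

    private final CodeMappingService mappings;
    private final ResultService results;

    ResultController(CodeMappingService mappings, ResultService results) {
        this.mappings = mappings;
        this.results = results;
    }

    // GET /results/by-code/{consumerCode}: not textbook REST, but it
    // matches the identifier the caller actually has.
    @GetMapping("/by-code/{consumerCode}")
    ResultDto byConsumerCode(@PathVariable String consumerCode) {
        long internalId = mappings.toInternalId(consumerCode);
        return results.find(internalId);
    }
}
```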
The gap between mid-level and senior is almost entirely this shift. Before you write a line of code, ask:
- How does this change affect the rest of the codebase?
- What happens when this fails at scale?
- Who else depends on this, and do they know it’s changing?
Feature thinking gets the ticket closed. Systems thinking prevents the next five tickets from being opened.
Systems thinking also means knowing why the system exists in the first place. Which brings up the hardest problem in engineering.
14. “Document your decisions, not just your code.”
Code comments explain what. Decision logs explain why. The second one is 10x more valuable.
Most of our data lives in Postgres. That’s the default. But we hit cases where we didn’t need normalized data — we could store structured data for specific use cases and take advantage of something AWS already offered that would speed up development. The decision to use DynamoDB for those cases made sense at the time. But in an enterprise where team structures change frequently, the person who made that call might not be around in six months.
Write it down. There’s a reason you chose Postgres for one thing and DynamoDB for another. Without documentation, someone inherits that codebase and starts asking “why do we do it this way?” — and now your team is re-investigating decisions that were already made.
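One lightweight format for this is an architecture decision record. Here's what a hypothetical entry for that Postgres-vs-DynamoDB call might look like; the details are invented, the structure is the point.

```markdown
# ADR-014: DynamoDB for session-lookup data

Date: 2026-01-12 | Status: Accepted

## Context
Access pattern is single-key lookups at high volume. The data is
denormalized by design and never joined against our Postgres schema.

## Decision
Store this dataset in DynamoDB. Postgres remains the default for
everything else.

## Consequences
- Faster delivery: managed scaling, no schema migrations for this data.
- Cost: a second datastore to operate. Revisit if access patterns change.
```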
This matters even more with AI tools. LLMs don’t store context. If you want to use an AI assistant to move fast on an existing codebase, it needs documented decisions to load. Otherwise you’re asking it to reason about code without knowing why the code exists.
The most expensive knowledge in any codebase is the context that only exists in someone’s head.
15. “Culture beats tooling.”
No AI tool, CI pipeline, or observability platform will save a team with bad culture.
- Blameless post-mortems and psychological safety matter more than any incident management software or code coverage metric.
- I’ve watched a team with mediocre tools outship a well-funded team with the best infrastructure money could buy. The difference was trust.
No tool will ever outperform a team that trusts each other.
16. “Continuous learning is survival.”
The half-life of technical skills is getting shorter. What you learned two years ago is already outdated.
Most of my background is Java backend. I had some full-stack experience with JavaScript and Angular, but the community moved. So I taught myself React and Tailwind. Not because someone told me to — because the job market did.
Right now I’m consuming a lot of content around Claude and AI tooling because enterprise is slower to adopt these things and I don’t want my skills to atrophy while I wait for my org to catch up. I’m building side projects, writing this blog, and learning things I never thought I’d need — email structure, sales practices, marketing fundamentals. In enterprise, other people do that for you. When you step outside your W-2, you realize you’re starting from zero on skills that most solo builders take for granted.
There’s a quote I like: legacy code is code without supporting tests. I think about that for careers too. If you’re not testing yourself against new skills, not continuously improving, you’re becoming legacy.
Translation: The moment you stop learning, you start becoming legacy.
17. “Cascade your foreign keys. Save yourself the headache.”
If you have a parent table with child relationships, set cascade on the optional foreign keys.
I watched a team burn half a Thursday debugging a data cleanup script that failed silently. The deletions had to happen in exact dependency order across six tables. One missed junction table and the whole thing rolled back. No error message that told you which table broke the chain. Just a silent rollback and a growing sense of dread.
With cascade set on the optional relationships, you manage the parent record. The database handles the rest.
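A minimal sketch of the JPA version, with hypothetical entities. Hibernate's @OnDelete pushes the cascade into the generated DDL, so the database, not a cleanup script, owns the dependency order:

```java
import jakarta.persistence.*;
import org.hibernate.annotations.OnDelete;
import org.hibernate.annotations.OnDeleteAction;
import java.util.List;

// Hypothetical parent/child pair. @OnDelete adds ON DELETE CASCADE to the
// child's foreign key in the generated schema, so deleting a parent row
// cleans up its children at the database level. No six-table deletion
// script, no silent rollback when someone misses a junction table.
@Entity
class Parent {
    @Id @GeneratedValue
    Long id;

    @OneToMany(mappedBy = "parent", fetch = FetchType.LAZY)
    @OnDelete(action = OnDeleteAction.CASCADE)
    List<Child> children;
}

@Entity
class Child {
    @Id @GeneratedValue
    Long id;

    // Generated DDL, roughly:
    //   FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "parent_id")
    Parent parent;
}
```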
Translation: Design your data model so the obvious action is the correct action. If deleting a parent requires a checklist of dependent operations, you’ve built a system that punishes the next person who touches it.
18. “Take AI doom articles with a grain of salt. First principles don’t expire.”
Every week there’s a new post about AI replacing engineers. Some of it is signal. Most of it is noise from people who’ve never shipped production software.
I review PRs from AI-assisted workflows daily. I still catch issues the model can’t see: a race condition that only shows up under load, a permission check that’s technically correct but doesn’t match how our consumers actually authenticate, an API contract that drifts from the spec because the LLM filled in what it assumed the interface did. The failure modes are the same ones we’ve always had. They just show up faster now.
The bottleneck is the same thing it’s always been: someone who understands what the code should do, whether it’s secure, and how it behaves when things go wrong.
Context management. Security. System design. Observability. Testing. Error handling. None of those expire regardless of who writes the code. The companies hiring right now aren’t looking for prompt jockeys. They want engineers who can use AI while still protecting the organization. That means knowing when the output is wrong, knowing what to test, and knowing what the model can’t see.
AI made code cheap. It didn’t change engineering principles.
19. “Taste is the new bottleneck.”
Anyone can generate a landing page, a blog post, a working prototype in an afternoon now. The barrier to shipping collapsed. That’s the good news.
The bad news is that most of it looks and reads the same.
I built this blog with AI tools. At first, editing the AI's output into something that actually sounded like me took longer than writing from scratch would have. The tool produced the words. I had to decide which ones deserved to stay.
In engineering, the same dynamic plays out in code. Every shortcut AI makes easy (skipping tests, ignoring edge cases, shipping the first draft) introduces tech debt or security gaps. Discipline to reject the fast-but-fragile option is what separates software that lasts from software you’ll rewrite.
Restraint turns tools into amplifiers. Without it, you get more volume but less soul.
Cultivate taste by consuming widely and learning to articulate why something works or doesn't. Not "I don't like it." That's a feeling (though it's useful, because your gut is telling you something). "The hierarchy is wrong because the CTA competes with the headline." That's taste.
Does this serve the core problem? Does it strengthen the brand? Would I regret adding this in six months? If you can't answer those, you're not building with intent.
Translation: When production is free, editing becomes the skill. The people who win aren’t the ones who ship the most. They’re the ones who know what to keep and what to kill.
20. “Build for the switch. The best model today won’t be the best model in three months.”
Four major coding models launched in six days this February. The benchmark gap between the best and worst was 2.6 percentage points. Two were proprietary, two open source. One cost $5 per million tokens. Another cost $0.11 for nearly the same performance.
I wrote a full breakdown of that week. The takeaway wasn’t which model won. It’s that the leaderboard reshuffles every few months and the cost curves keep collapsing. If your workflows, processes, and tooling are locked to one provider, you’re paying a tax every time the market moves and you can’t follow it.
Nobody runs a single EC2 instance type for their entire infrastructure. Same logic applies to LLMs. Simple tasks go to cheap models. Complex multi-file refactors go to frontier models. The routing decision should be easy to change because you’ll be changing it often.
The practical version: build your systems so switching a model is a config change, not a rewrite. Keep identical AGENTS.md, GEMINI.md, and CLAUDE.md files. Treat the LLM as a dependency you expect to swap, the same way you'd treat a database driver or a cloud provider. The teams that move fastest aren't the ones who picked the right model. They're the ones who made it cheap to pick again.
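A minimal sketch of that abstraction, assuming nothing about any particular SDK (all names here are hypothetical):

```java
import java.util.Map;

// Hypothetical routing layer. Call sites depend on the interface; which
// concrete model serves each tier comes from config, so swapping a
// provider is a wiring change rather than a rewrite.
interface ChatModel {
    String complete(String prompt);
}

enum TaskTier { SIMPLE, COMPLEX }

class ModelRouter {
    private final Map<TaskTier, ChatModel> routes;

    // e.g., SIMPLE -> a cheap open-source model, COMPLEX -> a frontier
    // model. The mapping lives in config, not in the call sites.
    ModelRouter(Map<TaskTier, ChatModel> routes) {
        this.routes = Map.copyOf(routes);
    }

    String complete(TaskTier tier, String prompt) {
        return routes.get(tier).complete(prompt);
    }
}
```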
Translation: Vendor lock-in is expensive when the market shifts every quarter. Build the abstraction now. The switch will come whether you planned for it or not.
21. “Generic prompts produce generic output.”
The prompt is the brief. Most people treat it as a starting point — fire something vague at the model and edit from there. The better approach flips it: do the hard thinking upfront. Define the role, the output format, and the constraints before you write a word of the prompt body.
When the role is specific (“You are a conversion copywriter who specializes in B2B SaaS landing pages”), the output is specific. When the task is one thing, not three, the output is usable. When the format is explicit, you don’t reformat by hand — you pipe it directly to the next step.
Less editing on the back end is a direct result of more precision up front. The model isn’t the bottleneck. The brief is.
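For illustration, a sketch of what a brief-style prompt looks like (the product details are made up):

```
Role: You are a conversion copywriter who specializes in B2B SaaS landing pages.
Task: Write ONE hero section (headline plus subheadline) for a log-analytics
      tool aimed at platform engineers.
Constraints: No buzzwords ("revolutionary", "seamless"). Headline is 12 words
      or fewer. Address the reader as "you".
Output format: JSON with exactly two keys, "headline" and "subheadline".
```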
Translation: Specificity is leverage. The time you save in the edit pass is always longer than the time you spend sharpening the prompt.
22. “Specialize then compound.”
Chaining prompts before any single prompt works is a common mistake. You get compound garbage — each step inherits the noise from the step before it.
Get the single version right first. Run the market research prompt until the output is actually usable as research. Then wire the persona prompt to receive it. Then the competitor gap prompt. Validate each handoff before you automate it.
The system only runs as fast as the slowest correct step. A broken chain fails silently — it produces output that looks reasonable until you check whether it’s actually accurate.
23. “The chain is the product.”
A single prompt is a move. A connected system of prompts is a workflow. The compound output beats anything a single prompt can produce.
The engineers who get real leverage from AI aren’t the ones who found the best individual prompt. They’re the ones who built the handoffs. Market research feeds into personas. Personas feed into competitor gaps. Competitor gaps feed into content strategy. Each step receives a clean, structured output from the step before.
What makes this hard is format discipline. The output shape of step one has to match the expected input shape of step two. If you don’t define both, the chain breaks at the seam and you’re back to copying and pasting between windows.
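A minimal sketch of format discipline at one seam, assuming Jackson for parsing (the PersonaBrief shape is invented for illustration):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical handoff contract between two prompt steps. Step one must
// emit exactly this shape; step two never sees anything unvalidated.
record PersonaBrief(String segment, String painPoint, String buyingTrigger) {}

class ChainSeam {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Parsing at the seam makes a broken chain fail loudly here,
    // instead of passing plausible-looking garbage downstream.
    static PersonaBrief parseStepOne(String json) throws Exception {
        PersonaBrief brief = MAPPER.readValue(json, PersonaBrief.class);
        if (brief.segment() == null || brief.segment().isBlank()) {
            throw new IllegalArgumentException("step one output missing segment");
        }
        return brief;
    }
}
```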
Translation: Individual prompts are tactics. A system of connected prompts is a strategy. The advantage compounds across every step.
24. “Version your prompts.”
The first version is never the best. The problem is most people don’t know why it got better — they made changes, the output improved, and they moved on. Next time they hit the same problem, they’re starting over.
Treat prompts like code. Write down what you changed and why. “Added a format constraint — output was unstructured and broke the downstream step.” “Narrowed the role — output was too general.” That log is worth more than the prompt itself.
Prompt sprawl is the failure mode. You end up with a folder of markdown files, three variants of the same persona prompt, and no memory of which one actually worked. The version history is the discipline that prevents it.
Translation: Iteration without documentation is just thrashing. Track what changed, why it changed, and what the result was. That’s how a prompt becomes a system.
25. “Volume finds the signal. Displacement captures it.”
There’s a tweet I came across that I keep thinking about — the answer to getting results is usually more. More outbound, more ads, more shots.
And then there’s Nick Saraev’s growth data showing the opposite: his biggest subscriber days all came from a handful of spikes where he read the market and moved fast, not from grinding out daily uploads. Both are right. They’re just sequential.
Volume is how you discover what works. You publish enough, pitch enough, ship enough that patterns emerge. But once you’ve found the signal, staying in volume mode is just motion.
Saraev’s framing is clean: motion is zigzagging. Displacement is the straight line to the goal.
The mistake is trying to be strategic before you have enough data to strategize from. The other mistake is staying in volume mode after you’ve already found what works.
Translation: Volume is exploration. Displacement is exploitation. The people who stall are the ones who never switch modes — or switch too early.
26. “Manipulate knowledge, not just code.”
Karpathy said “A large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge.”
He's running an LLM-maintained markdown wiki in Obsidian: about 400K words, plus index files and summaries the model auto-maintains.
I’ve been running a version of this for a few months without naming it that clearly. Every knowledge base (KB) entry, every session recap, every brief that links back to research — it’s the same loop. Each interaction makes the next one more useful. The model gets better because the KB it reads keeps growing with references I decided were valuable.
That’s a fundamentally different relationship with AI than “paste context, get output, close tab.”
The practical implication: time spent organizing what you know compounds harder than time spent generating new output. A well-maintained wiki is infrastructure.
Translation: The highest-leverage AI workflow is building a knowledge system where every query makes the next one better.
27. “Your creative output has a metabolism.”
Dan Koe described a cycle I recognized immediately: shift focus away from your creative routine, stress fills the gap, ideas stop flowing, the stress compounds. He calls creativity a state of consciousness, not a talent — and identifies three things that kill it: conditioning that suppresses curiosity, productivity-obsession that leaves no room for open thinking, and information overconsumption that overwhelms your ability to process anything.
There’s a real tension between the volume lesson (take more shots) and the sustainability lesson (a stressed mind can’t see new connections). The resolution is recognizing that creative capacity has a metabolic rate. You can only process so much input before you go flat.
Feed the system, but leave room for it to digest.
Translation: Information overconsumption works like overeating. Your mind has a metabolic rate. Exceed it and the output goes flat regardless of how hard you push.
28. “Confidence moves faster than intelligence.”
The ones who moved fastest around me weren’t the most talented. They were the ones who learned presence — how to read the room, know the players, and position what they were saying so the right people heard it.
That’s a different kind of intelligence. It’s not IQ. It’s knowing how to get what you want out of a situation. How to frame an architecture decision so your VP cares. How to present your work so leadership remembers your name when the next opportunity opens. How to sell yourself without feeling like you’re selling yourself.
People remember who showed up with conviction.
Translation: Technical skill gets you to the table. Presence is what makes people listen once you’re there.
29. “Speed of action beats quality of plan.”
This connects to lesson 25 but sharpens the point. The bolder you allow yourself to be, the higher the chances you end up somewhere good. Simply doing something generates information. Every action produces feedback. Every plan just produces more planning.
I shipped my first blog post before I had a content strategy. I published my first side project before I had a portfolio site. Both of those decisions taught me more in a week than the months I spent thinking about them beforehand. The plan would have been wrong anyway. The action showed me what to fix.
Translation: A mediocre action beats a perfect plan because action produces data and plans produce assumptions.
30. “If it takes two minutes, do it now.”
Small undone tasks don’t stay small. They sit in your head, drain a tiny amount of attention, and compound into a fog of “things I need to get to.” The Slack reply, the PR approval, the quick config change. None of them are hard. All of them weigh something when they pile up.
I started applying this after I noticed my focus sessions were getting interrupted by my own mental backlog, not by other people. The two-minute rule isn’t about productivity. It’s about clearing the runway so the deep work has room to land.
Translation: Small completions build momentum. Small deferrals build drag. The two-minute rule isn’t about efficiency — it’s about keeping your head clear for the work that actually matters.
31. “You just have to be consistently stupid about not giving up.”
There’s no elegant way to say this. Most of the outcomes I’m proud of came from refusing to quit long past the point where quitting made sense.
Persistence isn’t a strategy. It’s what’s left after every strategy fails. The people who end up with something worth having are usually the ones who were too stubborn to stop when the numbers said they should.
Translation: Talent gets you started. Stubbornness gets you there. The gap between people who ship and people who don’t is almost never ability.
32. “No one is coming to save you.”
No mentor, no company, no framework, no AI tool is going to hand you the career you want.
Once you stop waiting for permission, rescue, or the right opportunity to find you, you start building the thing yourself. That’s when it compounds.
Translation: Ownership is the only strategy that scales. Everything else is a dependency on someone else's priorities. Be high-agency.
The generalist’s edge
If there’s one thread through everything above, it’s this: the world is getting more complex, not less. AI handles the routine. Specialists handle the deep. But someone still needs to see the whole board, connect the systems, and make sense of the chaos.
That’s the generalist’s job.
In a year where AI replaced the easy parts and the hard parts got harder, the people who looked across domains and found the pattern nobody else saw? They were the ones who mattered.