
AI Native Engineer: Taste Built on Principles

by Tony Cho
27 min read

TL;DR

You can master the nine skills of agentic engineering (How), you can work inside an AX-transformed org (Where), but if you (Who) aren't someone who exercises taste on top of principles, none of it matters. DORA 2025 data, my iOS hard lessons, Carson Gross's sorcerer's apprentice trap, Linear's Quality Wednesday: an AI Native Engineer is someone who decides what to build, judges whether what AI built is right, and owns it to the end.


1. Opening

The AI-native era is here, and a lot of people are scared of it. The phrase “developer collective depression” doesn’t feel like an exaggeration anymore. Fear of losing the job, even talk of a new Luddite movement. Should we go smash AI servers like the original Luddites did? Will the AI revolution leave every individual knowledge worker without work, with a handful of giants hoarding all knowledge production? I don’t know.

As I’ve said in earlier posts, AI is essentially a mirror of the person using it. Strong people get stronger with it; lazy people get lazier. We’ve reached the point where it’s hard to imagine working without it, and yet what I see around me is that output quality still depends massively on the individual. Right now everyone’s swept up in FOMO, racing to adopt new tools and reshare and repost them, but the people who actually ship results are still few.

I’ve worn a lot of hats (developer, lead, builder) and met a lot of people. From that experience, my own take is surprisingly optimistic. Anxiety is what shows up first, but this post is about what’s beyond the anxiety. Companies will start hiring juniors again soon, and the companies that succeed in the AI era will hire too. I run more than twelve agents around the clock, every day, applying every skill and case study I can find to my own work. And yet when I stop everything and write a single thought-note on a blank page, that one note beats all of it. The agents can chew through my notes and produce a working implementation at speeds I couldn’t have imagined before, but without “my” thought-note in the first place, what is there to start from?

No matter how many times you square zero, it’s still zero. An AI Native Engineer is someone who isn’t zero.

In 9 Skills of Agentic Engineering I covered the How, and in AX Organization Transformation I covered the Where. This time it’s the Who: what kind of person is an AI Native Engineer? This one is the hardest to write, because in the end I have to look at myself.

The How has plenty of answers already. OpenAI laid out a Delegate-Review-Own model. Mike Mason calls Context Engineering the next stage after prompt engineering. Karpathy described how spinning up agents, handing them tasks, and reviewing them in parallel has already become an engineer’s daily routine. Steve Yegge orchestrated 20-30 agents in parallel and produced a million lines in a year. There’s no shortage of methodology for how to use AI.

What’s missing is the Who. Handling tools well is a condition, not an identity. Knowing your knife doesn’t make you a chef, and being good with AI doesn’t make you an AI Native Engineer.

Taste without principles is guesswork.


2. What Got Exposed: How This Differs from the Old Engineer

I’ve been writing software for fifteen years. Most of that time was spent wrestling with tools. Memorizing language syntax, learning framework conventions, configuring build systems, building deployment pipelines. There’s a line in the preface to Drew Hoskins’s The Product-Minded Engineer:

The tools and languages were so hard that learning and using them was a full-time job in itself.

Reading that, the last puzzle piece finally clicked into place.

Good with Swift, you’re an iOS developer. Good with React, you’re a frontend developer. Good with Kubernetes, you’re a DevOps engineer. The tool was the identity. “What you built with” mattered more than “what you built.” That was the era.

AI has started doing that full-time job for us.

Syntax? AI knows it. Frameworks? AI knows them. Build configs? AI handles them. Deployment pipelines? AI writes them. It’s not perfect yet, of course. Complex legacy codebases still demand grunt work. But the direction is clear.

And once that full-time job started disappearing, the things that should have mattered all along (the ones the tools were hiding) came into the foreground.

It came down to user understanding, product thinking, business ownership.

The data backs this up.

According to the DORA 2025 report, PR volume jumped 98% after AI tool adoption. Almost double. Software delivery performance, though? Flat. No change.

The number stings a little, and Nicole Forsgren named exactly why. The coding inner loop (writing code, testing, building locally) got faster. The outer loop (review, approval, integration, deployment, security, feedback) is still the bottleneck. The real bottleneck was never coding. What we called “coding productivity” was a tiny slice of the full value chain.

There’s a colder data point too. Professor Rem Koning at Harvard Business School ran an experiment giving ChatGPT to small-business founders in Kenya, and the group that was already underperforming saw their profits drop 10% after using AI. They got plenty of advice, but they didn’t have the judgment to tell good advice from bad. The group that already had judgment filtered out the bad advice and improved their numbers. Koning’s takeaway: AI is not an equalizer. It’s an amplifier. Without earned insight from doing the work yourself, AI just leads you toward slop.

The era of “as long as you can code, you’re fine” hasn’t ended. What’s ended is the illusion that coding alone is enough. The easier the tools get, the less you can hide behind them.

So what specifically has changed? Compared to the old-era engineer, three things stand out.

The expansion of responsibility

Back when I was leading a dev team, our scope of responsibility ended at delivery. Shipping accurately and quickly, that was our job. Sales and ops were always somebody else’s. What I feel viscerally now is one thing: discovery matters more than delivery, and shipping one well-found thing beats shipping a hundred wrong ones. In a world where building got dramatically faster, an engineer who can’t take ownership of discovery is a zero engineer.

In the old days the PM asked “why are we building this?” on the engineer’s behalf. Engineers got the spec and implemented it. That structure wasn’t bad. It made sense. Tools were too hard. But now that AI has taken a big chunk of implementation, what is an engineer who doesn’t know the “why” actually doing? Handing the spec to AI? That’s a relayer, not an engineer.

What does the user actually want? What’s the business impact of this feature? Why is this priority set this way? The gap between engineers who get this context and engineers who don’t is widening dramatically in the AI era. AI takes over the “how,” so an engineer who doesn’t know the “why” has nothing left. An engineer who doesn’t know the user just gets left behind.

“Sam” has been GitHub-inactive for five years and has zero social media presence. And former colleagues line up to hire him. One startup was ready to invent a new role just for him. The moment he understands a project, he breaks the whole thing down on a whiteboard. He frames delays as “tradeoffs,” not “delays.” When working with another team, he doesn’t open with “please do this for me,” he opens with “the customer is hitting this problem, can you walk me through how this system works?” High-level decomposition plus customer-problem thinking, in one person. This isn’t a new AI-era skill. It’s what good engineers always did. Now there’s nowhere left to hide if you don’t.

Ten times faster learning

If AI generates ten times the code, the speed at which you read and judge that code has to be ten times faster too. To look at the 200 lines AI produced in 30 seconds and call out “this is right, that’s wrong, here’s a perf issue,” your foundation has to be solid.

Basics never die. CS fundamentals, yes, and also a deep grasp of whatever tool you use, language or framework. Surface familiarity won’t let you tell good AI output from bad. And if you can’t tell, you stop using AI and start being dragged by it. That’s not AI Native, that’s AI Dependent.

In the old era, going deep on one technology kept you fed for ten years. Good at Java? Java developer. Good at Golang? Backend developer. Depth was the moat. Not anymore. AI is collapsing the walls between languages. Someone who doesn’t know iOS can ship an iOS app with AI’s help (I’m exhibit A). But that doesn’t make them an “iOS engineer.” The gap between code that runs on the surface and code that holds up in production is still wide.

So the way you learn has to change. Instead of building up brick by brick from syntax, you read what AI generated and reverse-engineer the principles. Dig into “why does this code behave this way?” and you naturally end up at deep understanding. AI’s output becomes your textbook. You just need the eyes to read it.

The speed of judgment

Forsgren pointed out that working with AI means reconstructing your mental model dozens of times in 30 minutes. The old pace was one or two design decisions a day. Now it’s dozens in half an hour.

When AI offers three approaches, you decide which one fits. When AI says “add a cache,” you decide whether caching is actually the answer or whether the query itself is the problem. When AI proposes a refactor, you decide whether it’s genuinely better design or just the same complexity wearing different clothes. All of this happens fast, back to back.

Fast judgment comes from deep understanding. Someone who knows the principles can decide intuitively. “This is O(n²), so it’ll break under load.” “This structure has consistency problems in a distributed setting.” “This API design dumps too much responsibility on the client.” Calls like these need to land in half a second, in this era.

Before, if you didn’t know, you could look it up. There was time. Now, to keep pace with the speed at which AI throws code at you, your gut has to fire before you can look anything up. That gut doesn’t come out of nowhere. It comes from years of accumulated principles.
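
To make that half-second call concrete, here's a minimal Go sketch of the kind of code an agent happily produces, where the O(n²) should jump out on sight. The function names and scenario are hypothetical, for illustration only.

```go
package example

// HasDuplicate is what an agent typically generates: correct,
// tests green, and O(n²). Fine at a hundred IDs, a timeout at a million.
func HasDuplicate(ids []int64) bool {
	for i := 0; i < len(ids); i++ {
		for j := i + 1; j < len(ids); j++ { // nested scan: n*(n-1)/2 comparisons
			if ids[i] == ids[j] {
				return true
			}
		}
	}
	return false
}

// HasDuplicateFast is the half-second fix: the same behavior in one
// pass over a set, O(n) time for O(n) extra memory.
func HasDuplicateFast(ids []int64) bool {
	seen := make(map[int64]struct{}, len(ids))
	for _, id := range ids {
		if _, ok := seen[id]; ok {
			return true
		}
		seen[id] = struct{}{}
	}
	return false
}
```

The point isn't the snippet; it's that the diagnosis has to arrive before you'd have had time to look anything up.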


The essence didn’t change; it just got exposed plainly. Good engineers had these three things before too. The tools were just hard enough to hide them.

I might have been hiding behind tools for fifteen years. Using frameworks well, building servers well, designing solid architectures: I believed that was my craft. Not entirely wrong, but I sometimes acted like that was the whole story. There were times a user told me something was painful, and I doubled down on “technically it’s correct.”

When AI started doing that toolwork for me, the person hiding behind the tools got exposed.

(And it’s an uncomfortable thing to sit with.)


3. The Maker’s Backlash

In the earlier AX post I separated Maker from Closer. The Maker produces; the Closer brings outcomes home. Now let’s go a level deeper, from the org to the individual.

The classic Makers believed their KPIs were aligned with making things. Writing code, shipping features, opening PRs, closing sprints. The ones who coded until dawn, deployed on weekends, ran toward incidents. They were genuinely working hard.

But in most IT orgs, there comes a moment when most of the Makers are judged not to be contributing directly to business KPIs. That’s when the layoffs hit. Big tech let go of tens of thousands, and most of them were people who’d been working hard.

It’s painful, but most of the time this isn’t the individual’s fault. It’s the backlash that hits people who were hired without thought and who diligently did the work they were given.

AI is accelerating that backlash. The Maker’s work (writing code, implementing features) has been shown to be replaceable by AI in large part, and the wave is breaking faster than we expected.

Some Makers, even in that environment, were probably already asking “is this actually contributing to the org?” The ones who checked the data on their own features, and were the first to say “let’s kill this” when the metrics didn’t show up. Either they left or they survived. Either way, the depth of their thinking made the next move clearer. (Not everyone gets that chance, of course.)

It used to be that “a Maker with a Closer mindset” was a compliment. “A developer who also gets the business. Impressive.” Now? You can’t survive without being a Closer. It’s not praise anymore. It’s a survival condition.

The faster I click-build with AI, the faster everyone else does too. The act of building itself is becoming a commodity. In an era where anyone can build, “I’m good at building” doesn’t differentiate. Click and you have an app. That’s not an edge anymore.

The love of building isn’t the problem. The direction of that love has to shift.

This isn’t a knock on Makers. I’m one. I love building. The rush of writing code, designing architecture, opening a clean PR. That part is still good.

But love doesn’t change reality. We’re in an era where building alone doesn’t complete value. Build, deliver, validate, iterate, and only then does it become value. Someone has to own the whole arc.

This is the bitterest part for me. I had Maker pride too. I believed building well was valuable in itself. Reality is colder than that.

That doesn’t mean technology stopped mattering. The opposite, actually. Technology matters more. Just a different kind of technology.

I learned this one in my body.


4. The Sorcerer’s Mistake: The Paradox Where Tech Matters More

Right now I’m building a backend in Golang and a client in iOS.

In Golang I sometimes get lost in code-logic bugs, but I rarely get blocked by not knowing the tech itself. I jump straight to the core logic, fix things fast, and grasp AI-generated code quickly.

iOS is a different story. I started native iOS development for the first time this year, and I'm leaning heavily on AI. When variables didn't render right in WidgetKit, or layouts didn't come out the way I wanted, my native iOS skills weren't strong enough to diagnose the cause, so I'd spend two or three days editing endlessly, stuck in an infinite loop where neither AI nor I could actually fix the problem. Most of it was layout issues that never surface in logs, or transitions that need to feel natural. Walking in without the basics, from how NavigationStack works to Liquid Glass, I got destroyed.

AI throws together believable-looking output by vibe. On the happy path everything looks normal. Then you navigate back and the header breaks. The loading state cuts out. The transition feels wrong. AI implemented everything, and somehow the result was indistinguishable from nothing being implemented at all. I threw every skill and every harness-engineering trick at it, and the work still went piece by piece: talk the problem through with AI, reproduce it by hand, and when AI kept circling the same bug, hand it the parent layout or app-wide context. Eventually one thought kept coming back: "an iOS engineer would have done this in five minutes."

WidgetKit, screen transitions, all of it: using technology I didn’t understand cost me days. If I’d been an iOS engineer, I’m certain I would have fixed it in five minutes. Being able to do something doesn’t mean you can do it well.

A product engineer is an engineer who thinks about the end user. What AI produces is a starting point, and shaping it into something the end user is actually happy with comes down to the engineer. Principles and taste aren’t opposed. The more you want to exercise taste, the more you’ll find that without understanding the principles, you cannot produce real quality.


This isn’t only my experience.

Carson Gross (HTMX creator, Montana State professor) calls it the “Sorcerer’s Apprentice Trap,” and that was exactly my situation. The Disney Fantasia scene where the apprentice enchants the broom to fetch water and loses control. (My all-time favorite Mickey episode, though Ellie says she doesn’t really know it. Anyone else here who watched the Disney cartoon hour on Sunday mornings?) The relationship between AI and coding looks just like that.

If juniors don’t know how to write code, they don’t know how to read code. And if they can’t read it, they get jerked around by the LLM.

That described my iOS problem precisely. Code is read far more often than it's written. So if AI writes the code for you and writing matters less, then reading matters more. You have to read code you didn't write, understand it, and judge it.

What’s scarier is the broken feedback loop. Normally, as code gets more complex, your body sends a signal first. Hands stop, head hurts, the “this is too hard” signal arrives. That signal forces the design simpler. With AI generation, that process disappears. AI hands over 200 lines or 2,000 lines without flinching. Complexity stacks invisibly, then explodes all at once.

LLMs don’t reduce essential complexity. They just generate accidental complexity easily. That’s the exact distinction Fred Brooks drew in 1986 in “No Silver Bullet.”

Steve Krouse (Val Town CEO) hit something similar from a different angle, and that overlaps my experience exactly.

Vibe coding gives you the illusion that your vibe is a precise abstraction.

It works at first. The demo is perfect. You win the hackathon prize. Then you add features or grow the scale, and bugs sneak in at lower abstraction layers you never understood. Same as my iOS experience. Network timeouts, memory leaks, concurrency issues. In production, with users doing things you didn’t expect, problems erupt at layers you never grasped.

And to debug those problems, you end up needing the principles.

One of Krouse’s questions stayed in my head. “Nobody talks about ‘vibe writing.’” Nobody seriously argues for “just write by feel” when it comes to writing. Good writing demands grammar, structure, and reading a lot of other writing. So why does coding get the “just go by vibe” treatment?

Thinking it through, it comes down to tool knowledge versus principle knowledge.

Tool knowledge. Swift syntax, React patterns, Kubernetes YAML, the API of a specific framework. AI can replace this. It already is. I don’t memorize Swift syntax these days. Claude knows it.

Principle knowledge. Networking, computer architecture, OS, distributed systems, data structures, algorithms. This is what shines more in the AI era. To judge “why is this code slow” or “why is this concurrency broken” in AI-generated output, you have to know the principles.

You can ask AI “why is this code slow?” AI might give you an answer. Whether that answer is right is up to a person who knows the principles. AI says “add a cache,” but the real problem is N+1 queries? Only someone who understands networking and the database can call out the difference.
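
Here's a minimal Go sketch of that exact situation, assuming a hypothetical repository API (none of these names come from the post): the symptom says "slow endpoint," AI says "add a cache," and the principled read says N+1.

```go
package orders

import "context"

// Hypothetical domain types and repository, for illustration only.
type Order struct {
	ID         int64
	CustomerID int64
}

type Customer struct {
	ID   int64
	Name string
}

type Repo interface {
	ListOrders(ctx context.Context, userID int64) ([]Order, error)
	GetCustomer(ctx context.Context, id int64) (Customer, error)               // one row per call
	GetCustomers(ctx context.Context, ids []int64) (map[int64]Customer, error) // one WHERE id IN (...) query
}

// customerNamesSlow is the endpoint AI wants to cache: one GetCustomer
// round trip per order, the classic N+1. A cache makes warm paths fast
// and leaves the N+1 in place everywhere else.
func customerNamesSlow(ctx context.Context, r Repo, userID int64) ([]string, error) {
	orders, err := r.ListOrders(ctx, userID)
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(orders))
	for _, o := range orders {
		c, err := r.GetCustomer(ctx, o.CustomerID) // N extra queries
		if err != nil {
			return nil, err
		}
		names = append(names, c.Name)
	}
	return names, nil
}

// customerNamesFast is the fix the principles point to: batch the
// lookup into a single query and join in memory.
func customerNamesFast(ctx context.Context, r Repo, userID int64) ([]string, error) {
	orders, err := r.ListOrders(ctx, userID)
	if err != nil {
		return nil, err
	}
	ids := make([]int64, 0, len(orders))
	for _, o := range orders {
		ids = append(ids, o.CustomerID)
	}
	customers, err := r.GetCustomers(ctx, ids) // one round trip
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(orders))
	for _, o := range orders {
		names = append(names, customers[o.CustomerID].Name)
	}
	return names, nil
}
```

Both versions pass the same tests; only someone who pictures the round trips can tell which one survives production.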

Engineering mindset emerges on top of engineering theory. Mindset alone doesn’t build it. (A product engineer is, in the end, still an engineer.)

Saying “this API feels slow” without understanding networking is guesswork, not taste. When someone who understands TCP handshakes says “this is where the latency is happening,” that’s taste. Saying “the app feels heavy” without knowing the OS is impression. When someone who knows memory management points and says “there’s a leak right here,” that’s diagnosis.

Product taste has to sit on top of CS fundamentals. Order matters. Trying to grow taste without the foundation is like playing technical soccer without basic conditioning.

The sorcerer’s mistake lives here. AI replaces tool knowledge, so people assume “tech matters less.” In practice, principle knowledge matters more. The space tool knowledge used to fill has to be filled by principle knowledge.


5. Taste on Principles: Eval

So are principles enough?

No. Principles without taste might make a scholar, but they don’t make a good engineer. A CS PhD doesn’t automatically become a great engineer. Writing strong papers doesn’t mean you ship strong products. Knowing the principles is necessary, not sufficient.

The “Bob” story from Hoskins’s The Product-Minded Engineer illustrates the point well. Bob is a capable engineer. He knows the principles, his code is clean, his reviews are thorough.

But Bob implements without scenarios.

He builds what’s in the spec, without asking “in what real situation would the user actually use this?” Bob’s features work, the tests pass. And the users don’t use them.

“Technically perfect feature, nobody uses it.” I’ve lived through that several times (and maybe still am).

Hoskins compared engineers to editors. Not the person who writes good sentences, but the one who cuts the unnecessary ones. The person who decides “we don’t need this.” That’s the heart of Product Architecture.

AI can write code well. Whether the feature is what the user actually needs is still on the human.

That judgment is what Anthropic called “taste.” The thing that the people who build AI best are the slowest to hand off to AI.

The word “taste” sounds a little mystical, though. “Isn’t taste innate? What if you don’t have it?” I felt that way at first too. Especially watching engineers around me who clearly do.

Then Linear CTO Tuomas Artman answered it cleanly.

Taste is not mystical. It’s a craft.

Taste isn’t mystical. It’s a craft. It can be sharpened.

The Linear team proved this. Quality Wednesday: every Wednesday the entire team hunts and fixes product defects. Subtle scroll jank, buttons misaligned by 3px; they go after the small stuff with discipline. Over two years they fixed 2,500 defects. Repeat that weekly and an "always looking for the next thing to fix" mindset forms automatically. That's how taste sticks to your body, like muscle.

The way reading lots of good writing makes bad writing jump out. The way experiencing lots of good UX makes bad UX irritate you. The intuition that shows on the outside is the result of experience stacked on the inside.

Taste is accumulated experience, not innate gift.

I covered the 9 skills of agentic engineering in an earlier post. Writing this one, I’m convinced I need to add one more.

Eval. The judgment to evaluate what AI produces.

If the nine skills are “how to work with AI,” Eval is “how to judge AI’s output.” If taste is the gut feeling that says “this isn’t quite right,” Eval is the ability to point out exactly why and propose a fix.

You hand AI a UI layout. It generates code, runs the tests. All Pass. CI is green.

“Oh, looks good.”

Then you touch it on a real device, and it’s a different story.

The layout warps on a narrow screen. The scroll feels off. The touch target is too small for a thick finger. The text gets clipped. The animation behaves differently from intent.

AI optimizes for the test cases, not for the user experience. Whether AI-generated code functions and whether it gives users a good experience are completely different things. AI can do the first. The second is still on humans, and will be for a while.

The deeper problem hits when AI patches a narrow piece of the layout in a way that breaks consistency, or when development heads in entirely the wrong direction. Tests stay green while the product drifts somewhere strange. "AI Is Only as Smart as You Are" is the title of an earlier post of mine, and it lands here: AI's judgment doesn't exceed mine.

“Is AI’s All Pass also All Pass for me?”

Anyone who can ask that question is an AI Native Engineer. Not the one who relaxes when tests pass, but the one who looks with their own eyes, touches it with their own hands, and judges from the user’s seat. That’s Eval.

And this Eval is sharper in the hands of someone who knows how networking moves, how memory is managed, how the rendering pipeline flows. Taste exercised on top of principles. That’s real Eval.

End-to-end ownership (from spec to deployment to user feedback) used to belong to the PO or the CEO. Developers “just built what they were told.” That boundary is collapsing. AI takes over “building it well,” so the human role left is “what should we build” and “is it actually valuable.”

This can't run on individual willpower alone. If the role and responsibility aren't given, the environment isn't a good one for becoming an AI Native Engineer. "You just implement this feature to spec." Eval can't grow in an org like that. No chance to meet users, no permission to look at metrics, no space to ask "why are we building this?"


6. So here I am

If you’ve read this far, you’re probably wondering what to do.

I went through all of it. Stepped on every trap.

"I'll learn first, then start." I did that too. Sounds reasonable, but it was actually a declaration of not-starting. AI tools change every month; if you wait for prep to finish, you'll never start. It's like swimming: reading theory books won't teach you, you have to get in the water. The day I caught myself saying the same thing a year later, I finally got it.

I’ve also opened Twitter, muttered “I should be doing this too,” and closed it. Anxiety can fuel action, but anxiety itself isn’t a substitute for action.

"Maybe I'll take a course first." I thought that too at first. Then I realized AI answers my actual questions directly. An hour spent hitting the wall teaches ten times more than an hour of watching a course. Isn't it a little odd for an engineer to learn a tool by watching a lecture? (I know that sounds kkondae-ish, Korean slang for a patronizing senior who can't read the room. There are good courses out there too.)

There was a stretch when I only communicated through code. The more AI takes over coding, the more talking with people and understanding users becomes the differentiator, and I’ve been the one who froze when asked “explain why we need this feature.”

The turn came when I started running into the wall.

When I got stuck, I asked AI. When I hit an error, I pasted the error message; when I got blocked on design, I asked about architecture; when a test failed, we analyzed the failure together. The process itself was the learning. One round of grunt work beats ten courses, and I learned that with my body.

I built what I actually wanted. Not “someday I’ll build it,” but starting now. With AI, I could build it much faster than before. Building fast meant failing fast too. That, more than anything, was the opportunity.

And critically, I dragged it all the way to a product with real users. A side project you alone use and a product other people actually use are worlds apart. You have to feel the mix of unease and learning that hits the moment a user says "this is uncomfortable"; that's what develops product sense. AI doesn't grow that for you.


I have a soft spot for tech and for the Maker craft. Still. A new framework drops, I want to play with it; designing clean architecture makes me proud; code that runs beautifully makes me happy. (Listening to talks on extreme Golang concurrency at GopherCon and feeling crushed in the process is a bonus.) When I see a great Maker, I respect them.

But the real value-creation wasn’t in any of that. However beautiful the code, if it doesn’t deliver value to the user, it’s self-satisfaction. All of us are only meaningful when the business actually generates value. It took me a long time to admit that.

I’ve watched developers dismiss users. “What do users know?” “The plan is wrong, technically this is the right way.” I’ve done that too (it’s embarrassing in retrospect).

Looking back, that was a defense mechanism. Taking users seriously requires much higher quality. Smooth UX, server infrastructure that doesn’t drop, AI features that feel natural. Take users seriously and every domain has a real challenge in it. Without confidence, the easier path is to look away. Hiding inside “what’s technically correct” is comfortable.

I’ve built a lot of products. I’ve had a business fail. I’ve experienced firsthand how an org collapses when it can’t be led to a win. I’ve watched capable people lose direction and scatter. Technically the team was excellent. We just couldn’t answer the question “is what we’re building giving real value to users?” with any clarity.

That experience, more than my tech romance, is what brought me here.

Loving tech and using tech to create value are different things. They’re not opposed, though. I love tech, so I dig into the principles, and because I know the principles, I shine more in the AI era. Without love, principles are just textbook content.

I want both. The priority shifted. Tech used to come first. Value comes first now.

Am I an AI Native Engineer? I'm not sure. The honest answer is "I'm becoming one." I do know the direction. Value over tech. Close over Make. User over code. Stack the principles, sharpen the taste.

Right now, working with Ellie on an app, I use AI, take feedback, watch the metrics, and fix it again. Break yesterday’s work today, fix today’s work tomorrow. I try to judge user experience while understanding the rendering pipeline, and try to say “we don’t need to build this” while knowing the system architecture.

I’m still scrambling to create value today. That’s all there is.

It isn’t glamorous. But this is the real thing.


7. Closing: A Compass on the Accelerant

“Is what you’re building actually generating value?”

Even when I run dozens of agents through n rounds of feedback to produce a draft plan, Ellie still finds gaps. Same for this post. AI gave me a plausible draft, but in the end I rewrote most of it myself, with AI handling only some of the polish. You know the feeling. A piece written entirely by AI lands flat.

However much you automate testing, you can’t skip the act of using it yourself. And often, using it once and writing it up is faster than running 100 AI tests.

Terry Winograd, the first-generation Stanford AI researcher who has watched the field for over half a century since SHRDLU in 1971, said this:

AI is not the cause of the problem. AI is an accelerant.

Coming from someone who walked through the past AI winters in person, that lands with weight. Problems that already existed are getting accelerated by AI. People running in a good direction arrive at good places faster. People running in the wrong direction hit the wall faster. What changed is the speed, not the direction.

An accelerant needs a compass.

That compass is taste built on principles.

Taste without principles stays guesswork; principles without taste stay academic.

An AI Native Engineer is someone who exercises taste on top of principles.

Someone who understands networking and can also judge user experience. Someone who knows system architecture and can also say “we don’t need to build this.” Someone who can read code and also ask “is this even the right problem?”

Even with the How (agentic skills), even working in the Where (an AX org), if the Who (you) isn’t someone who exercises taste on top of principles, none of it means anything.

Taste built on principles. That’s the AI Native Engineer.

End-to-end ownership has always been what made a good engineer. It was true before AI, and it’ll be true after. The only difference is, now it’s hard to keep pretending otherwise.


References

  1. Andrej Karpathy, X post on AI Builders vs Coders
  2. Mike Mason / ThoughtWorks, “AI-First Software Engineering — Context Engineering”
  3. Steve Yegge, “Revenge of the Junior Developer”
  4. OpenAI, “Building an AI-Native Engineering Team”
  5. Drew Hoskins, The Product-Minded Engineer (O’Reilly, 2025)
  6. Pragmatic Engineer, “How to Be a 10x Engineer”
  7. DORA, “Accelerate State of DevOps Report 2025”
  8. Rem Koning / Harvard Business School, “AI Native lecture at Harvard” — EO Korea
  9. Nicole Forsgren / Faros AI, “Key Takeaways from the DORA Report 2025”
  10. Terry Winograd, Stanford Interview on AI as Accelerant
  11. Carson Gross, “Yes, and… The Sorcerer’s Apprentice Trap”
  12. Steve Krouse, “The Death of Code is Greatly Exaggerated”
  13. Anthropic, “How AI Is Transforming Work at Anthropic”
  14. Pragmatic Engineer, Product-Minded Engineer Panel — Linear CTO Tuomas Artman: “Taste is not mystical. It’s a craft.”
  15. Linear, Quality Wednesday
  16. flowkater, “9 Skills of Agentic Engineering”
  17. flowkater, “AX Organization Transformation”
  18. flowkater, “AI Is Only as Smart as You Are”
  19. flowkater, “No Victory, No Future”
  20. flowkater, “2025 GopherCon Korea Review”

FAQ

What is an AI Native Engineer?
Not someone who's good with AI tools, but someone who exercises taste on top of principles. An engineer with CS fundamentals (networking, OS, data structures) as the foundation, and the taste/Eval to judge user experience and business value on top.
What's the difference between tool knowledge and principle knowledge?
Tool knowledge (Swift syntax, React patterns, Kubernetes YAML) is what AI is already replacing. Principle knowledge (networking, distributed systems, algorithms) is what you need to judge whether AI-generated code is slow, race-prone, or just wrong.
What did the DORA 2025 report show about AI adoption?
After AI tool adoption, PR volume went up 98%, but software delivery performance was flat. Coding (the inner loop) sped up, while review, approval, and deployment (the outer loop) stayed the bottleneck.
What is the Eval skill?
It's the judgment to evaluate what AI produces. If taste is the gut feeling that says 'this isn't quite right,' Eval is the ability to point out exactly why and propose a fix. It goes beyond green tests to judging the actual user experience.
What are the limits of vibe coding?
It works at first, but the moment you add features or scale up, bugs surface at abstraction layers you never understood. Steve Krouse pointed out that nobody talks about 'vibe writing.' Coding, like writing, demands a real grasp of the basics.

About the author

Tony Cho

Indie Hacker, Product Engineer, and Writer

A developer who builds products and writes retrospectives. Writes about AI coding, agent workflows, startup product development, team building, and leadership.


If you've read this far, leave a note. Reactions, pushback, questions — all welcome.