Installing Claude Code Across Your Org Doesn’t Make It AX
1. To Start: Between Admiration and Discomfort
A LinkedIn post showed up the other day about a Korean startup rolling out Claude Code company-wide. I also saw a piece claiming OpenClaw was deployed across the whole org and productivity exploded. Given how conservative the corporate culture is here (even at startups), these moves deserve credit.
Honestly, my first reaction was, “Wow, that’s pretty serious.” I’ve also bolted AI tools onto my own teams, hooked up MCP, felt the productivity bump in my own hands. But something kept nagging at me. The tools clearly got better. The way the org actually worked together didn’t feel any different.
Is that really AX at the org level (AI Transformation, redesigning the entire organization with AI as the baseline assumption)? Can we actually say the org “adopted” AI in any meaningful sense?
Before Claude Code there was Notion. Before that, Google Drive. Before that, Slack. So let’s get to the bottom of it. Over the past decade, did organizations adopting these tools see truly explosive productivity gains compared to before? Did those gains actually convert into business outcomes? Are there real cases where simply adopting a tool produced that kind of result?
I’ve written plenty about AI-native engineers. This time I want to talk about how an AI-native organization should actually be built: the gap between what we feel personally as LLM performance leaps forward and what shows up at the org-level outcome layer.
To untangle that gap, the vocabulary has to come first. The reason this distinction matters is that too many people use the same phrase, “AI adoption,” to mean wildly different levels of change. I think of the relationship between AI and organizations in three stages.
Stage 1. AI usage. Individuals using ChatGPT, Claude, or Copilot for work. Most office workers are here. They’re picking tools, tuning prompts, going “huh, this is actually pretty handy.”
Stage 2. AI adoption (enablement). The company writes install guides, runs training sessions, hands out access. Most of the recent buzzy cases sit here. They deploy MCP, teach non-engineers how to use it, demo workflows by job function. It’s genuinely valuable work.
Stage 3. AX transformation. Roles, approval flows, KPIs, pipelines, governance, and accountability all redesigned around AI as the baseline. Almost no organization has gotten this far. Bain says “treating GenAI like a tool doesn’t work,” and McKinsey notes that only 1% of organizations have reached AI maturity, with leadership being the biggest blocker.
This post is about mistaking stage 2 for stage 3. About the difference between adoption and transformation, and how (inside that gap) the roles of organizations and individuals, especially Makers (who produce) and Closers (who deliver outcomes), need to shift.
2. A Good Adoption and a Good Transformation Are Not the Same
Let’s read this fairly first. The recent Korean cases that drew attention genuinely did certain things well.
That company removed the install barrier. They wrote OS-specific guides, deployed MCP, ran company-wide sessions. They had non-engineer colleagues do the demos themselves, building the perception that “I can do this too.” BX, HR, PM, finance, CX, business development: every job function got an actual workflow, and an internal survey showed every respondent said they used it almost daily.
Most domestic companies still can’t pull this off. That’s worth real credit.
There are great adoption stories abroad too. Morgan Stanley embedded AI directly into its financial advisor workflow, connecting meeting notes to summary to email draft to Salesforce save, and the majority of advisor teams adopted it. Impressive. But as Morgan Stanley itself says, “human relationships remain the core.” The financial advisor’s role itself didn’t change. The tool got better.
So we have to push the question one step further. What’s the unit of change here?
The PM became a faster PM. Finance became faster finance. HR became faster HR. AI accelerated the work inside each role. But did the relationship between PM and finance and HR shift? The approval flow? The KPIs? The decision-making path?
A Reddit comment nails this exactly: “AI made the ‘typing’ part instant, but it didn’t solve the organizational friction. So the business sees the same velocity, even if the devs feel like wizards.”
Funny thing. There was a similar scene seven or eight years ago. People expected adopting Notion to make organizational knowledge flow. Adopting Slack would supposedly speed up communication. The result? Notion became per-team wikis. Slack became per-team channels. The tools changed; the structure of how information flowed didn’t. The same pattern is repeating with AI tooling now.
Laying down a tool and rebuilding the organization around that tool are different categories of work. “But the start matters, doesn’t it?” Sure. I’m not denying that. The point is, you can’t mistake the starting line for the destination.
After your org adopted AI, did the number of handoffs between PM and engineer shrink? Did approval time shrink? Did the speed of value reaching the customer go up? If the answer to any of these is “no,” it’s still adoption, not transformation.
Of course there are teams doing this well. A 10-year engineering manager summarized his team’s full Claude Code rollout this way. They consolidated on a single agent environment. They converted existing Confluence runbooks into AI skills. They structured the codebase and architecture docs as AI context, automated ticket creation, and started catching missing requirements upfront. They had AI verify meeting notes so any tangent could be corrected immediately, and they added a new weekly “AI workflow share” meeting.
What stands out here is that this team didn’t just install tools. Meeting structure, ticket process, doc system, weekly sync. They changed the way the work itself flowed. A small European operator said something similar: “The biggest shift wasn’t the tools. It was redesigning the daily workflow. Bolt AI onto the existing process and you get marginal gains at best.”
When you look at the places that pulled it off, a pattern shows up. They didn’t succeed by laying down better tools. They succeeded by changing how they actually worked. But that’s a story about one team. Team-level transformation is doable. Org-level transformation only happens when leadership rewires the structure.
3. Why a Functional Org Absorbs AI and Stays the Same
The problem with a functional org isn’t that it can’t use AI. It’s that it absorbs AI too well. Marketing inside marketing, finance inside finance, engineering inside engineering. Each function bolts on AI and gets faster. But the place that needs AX isn’t inside a department. It’s between departments. It’s the entire value flow, not the inside of a job function.
AI Doesn’t Break Silos. It Reinforces Them.
People expect AI to tear down the walls between departments. The reality can go the other way. When each department optimizes AI only for its own work, marketing builds a marketing-grade AI, customer support builds a customer-support-grade AI, and you end up with per-department AI islands while company-wide outcomes stay flat. HBR points out that AI often reinforces functional silos rather than dissolving them.
Bolt AI onto a functional org and the functional org doesn’t disappear. It becomes a faster functional org. That’s high-speed silo-ification, accelerating territorial behavior between departments. It’s not AX.
The Bottleneck Lives Between Departments, Not Inside Them
Here’s something I lived through as CTO.
Another department sent over a project plan that would actually move a real KPI. Once it landed in R&D, it slipped behind our own priorities and weeks went by. Eventually those departments routed cooperation through the CEO not as a “request” but as a “directive,” and we stopped what we were doing to start theirs. By then the launch timing had drifted way off. The market wasn’t going to wait.
It happened inside the same division too. The information pipeline from lead to PM to design to frontend to backend to QA was rarely smooth. Most teams stayed inside their own KPIs (engineers ship their assigned features, for example), and so things like product polish or post-launch analysis (arguably more important than the build itself) didn’t get done even between people building the same product.
In the end, “Why are we doing another department’s work?” started showing up as a complaint. I couldn’t blame the team member who said it. People drift in that direction because of structural limits, and that’s not on the individual.
After that, the org shuffled along. Everyone got busy passing accountability around. The org itself lost initiative. Capable individuals dimmed too. Leadership grew frustrated with increasingly passive members, and the increasingly passive members grew distrustful and disappointed with the org. Maybe it was a case of the functional org’s downsides taken to the extreme. But watching capable, self-driven teammates turn passive isn’t a pleasant thing to sit with as a leader.
(I covered more of this in How Organizations That Don’t Win Fall Apart.)
The problem isn’t “who works faster.” It’s “how does the work flow.” No matter how much AI accelerates the work inside a department, the approval queues, handoffs, and decision delays between departments stay put. Local productivity goes up while the org’s end-to-end cycle time stays the same.
Each Department Gets Better. The Company Doesn’t Win More.
In a functional org, each department has its own KPI. Marketing has MQL, sales has revenue, engineering has ship velocity, support has response time. When AI shows up, each department uses it to hit its own KPI better. But company-wide outcome isn’t the sum of departmental efficiencies. BCG warns that if compensation, evaluation, and governance keep rewarding the legacy model, transformation stalls.
The bottleneck isn’t the prompt. It’s the approval structure. You can install tools. You can’t install accountability.
Bottom-Up Diffusion Alone Doesn’t Change the Structure
A developer can write the install guide, share skills, hook up MCP, become an internal evangelist. What a developer can’t do: change the headcount policy, change the eval system, redraw the boundary between departments.
Uber is a good case in point. 84% of developers use agents, and 11% of PRs are opened directly by agents. By the numbers, this looks close to AX. But at the same time, AI-related costs grew 6x year over year, and at the CFO level the question of whether this connects to business impact is still open. Even cases that look like wins aren’t airtight. Adoption was also slower than expected.
There’s data on this gap between what people feel and what’s measurable. In a randomized controlled trial by METR, experienced open-source developers were given AI tools and timed on tasks. They were actually 19% slower. After the task, they reported being 20% faster. When this happens at the org scale, the conviction “we innovated with AI” gets divorced from the measurement.
A startup founder told a story along the same lines. “I gave my CTO Cursor and a week-long task got done in a few hours. So I gave it to the whole team. Same result didn’t happen.” The productivity of one person who holds the codebase context in their head, and the productivity of the whole team, are different problems.
A developer can spread AI across the company. A developer can’t redraw the company around AI.
4. What Has to Change for It to Be Transformation: Five Axes from the Cases
In section 3 we covered the structural limit of the functional org. So what did the companies that actually crossed that limit change? Not just “they use AI well.” What axis of the org actually shifted? Let’s read it from the cases.
A Company That Changed the Role: Shopify
The first axis of AX is role. If AI shows up and everyone is still doing the same thing, that’s adoption, not transformation.
Shopify CEO Tobi Lütke set AI usage as a baseline expectation. To request a new headcount or a new resource, you first have to prove “why AI can’t do it.” AI usage shows up in performance reviews and peer reviews. It’s not encouragement. It’s policy. Roles that used to be production-by-job-function are being redefined as supervisor, reviewer, and judge, all assuming AI does the producing. A developer can evangelize AI all day and still can’t change hiring criteria, evaluation criteria, or resource allocation. A CEO can.
In your org, after the AI rollout, did anyone’s job description change? If not, the role axis hasn’t moved.
Companies That Changed the Pipeline: Uber, DBS
The second axis is pipeline, the path the work travels.
Section 3 mentioned the limits of Uber’s bottom-up adoption. The interesting part is that Uber recognized that limit and moved up to the platform level. They didn’t just install Claude Code. They built a multi-layer agentic system: an MCP gateway, a background agent platform called Minion, smart PR routing called Code Inbox, AI code review called uReview, automated test generation called Autocover, and large-scale migration management called Shepherd. The developer workflow itself shifted from “code in a single IDE” to “orchestrate multiple agents in parallel.” The serial handoff of a functional org converted into a hybrid human-agent pipeline. That said, costs grew 6x, and the link to business impact is still an open problem. Changing the pipeline doesn’t guarantee success.
Singapore’s largest bank, DBS, went one step further. They completed nine operating-model transitions and rebuilt their human-AI collaboration workflows. DBS calls this “operating model transformation.” Not just installing tools. Redesigning the path the work takes.
Companies That Changed KPIs and Governance, and Companies That Couldn’t
The third and fourth axes are KPI and governance. They move together. If what you measure (KPI) and who has what authority (governance) don’t change, no matter how much you change roles and pipelines, the org snaps back into shape.
BBVA scaled AI from 3,000 people to the full 120,000-person org, personally training 250 executives including the CEO and bringing security, legal, and compliance in from the start. Governance wasn’t bolted on later. It was baked into the rollout design. Box also explicitly designed an executive sponsor + functional ownership + central build team + AI manager structure, embedding AI governance into the org structure from the beginning. J&J ran 900 AI use cases and confirmed that 10-15% of them generated 80% of the value, shifting weight from central governance to domain ownership. They moved from “let’s try a lot of things” to “we have to choose what to use.” The strategic focus moved from usage to impact.
Moderna defined itself as a “real-time AI organization” and hit 65% real-usage rates. The notable part is the public refusal of a model where business growth requires more headcount. CEO Stéphane Bancel said the company has to be able to run billions in revenue with thousands of people. That’s a deliberate move from “headcount expansion” to “per-person efficiency” as the standard for growth.
On the other side, there are clear cases of companies that couldn’t touch this axis and fell apart.
In 2024, Klarna trumpeted that an AI chatbot was doing the work of 700 people and emphasized headcount cuts. Then in 2025, the CEO admitted they “leaned too hard on cost cutting” and went back to hiring. When the KPI is bent purely toward “cost cutting,” automation gets mistaken for the operating model. Costs went down, service quality went down with them, and they had to hire back.
McDonald’s × IBM tried AI voice ordering and shut it down in 2024. Confusion, order errors, accent recognition problems. There were technical limits, but the deeper issue was that the operation on the ground wasn’t ready to run that technology. When tech and operational readiness don’t move together, pipeline transformation ends as an experiment.
Duolingo expanded to 148 courses fast with AI and posted real business results, but the “AI-first” declaration paired with messaging about replacing contractors triggered serious backlash. Even with results, emphasizing only headcount cuts without the role-shift context loses legitimacy inside and out. Transformation without change management is half-built.
The Fifth Axis: Resource Reallocation
The last axis is resources. The budget AX needs isn’t the tool license fee. It’s the cost of work redesign, change management, training, and operating-principle work. BCG sums it up: “real value goes to the small set of organizations that go past tool deployment and redesign how work flows.”
Atlassian announced about a 10% workforce reduction (around 1,600 people) in March 2026, reinvesting in AI and enterprise sales while reorganizing around its System of Work. A clear case of “we’re rewiring the org itself because of AI.”
Putting It Together
The common changes across these cases line up into five axes.
| Axis | Functional org | AX org |
|---|---|---|
| Role | Producer by job function | Supervisor, reviewer, judge, exception-handler |
| Pipeline | Serial handoff between functions | Hybrid human-agent pipeline, end-to-end single team |
| KPI | Output volume, utilization | End-to-end cycle time, decision latency, exception rate, customer outcome |
| Governance | Per-department approval and access | Central data access, model permissions, risk ownership, audit logs |
| Resources | Tool license budget | Work redesign, change management, training, operating principles |
If even one of these five hasn’t moved, you’re still in the adoption phase. You can call it transformation only when all five move together.
5. What an AX Organization Actually Looks Like
We laid out five axes. But axes alone don’t paint the picture. Let’s sketch what a day looks different inside an org that actually went through AX.
A Project in a Functional Org vs. a Project in an AX Org
Picture a single new feature shipping in a functional org. The PM writes the spec. Hands it to design. The designer makes mockups, gets PM sign-off again. Hands it to engineering. Frontend first, backend next. QA tests, finds bugs, hands them back to engineering. After release, the data team runs the analysis. By the time results come back to the PM, a month has passed.
Across this whole process, the actual work time inside each team adds up to two days. The rest is waiting, handoffs, context switching, approval queues.
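The ratio above is what lean practitioners call flow efficiency: active work time divided by total elapsed time. A minimal sketch of the calculation, with purely illustrative stage numbers (not measured data from any of the cases in this post), makes the point concrete:

```python
# Flow efficiency = active work time / total elapsed time.
# Stage hours below are made up to match the hypothetical
# "two days of work inside roughly a month of elapsed time" above.

STAGES = {
    # stage: (active work hours, waiting/handoff hours before the next stage)
    "spec":     (4, 40),
    "design":   (4, 56),
    "build":    (5, 80),
    "qa":       (2, 60),
    "analysis": (1, 120),
}

def flow_efficiency(stages: dict[str, tuple[int, int]]) -> float:
    """Fraction of total elapsed time spent actually working."""
    work = sum(w for w, _ in stages.values())
    total = sum(w + q for w, q in stages.values())
    return work / total

if __name__ == "__main__":
    work = sum(w for w, _ in STAGES.values())
    total = sum(w + q for w, q in STAGES.values())
    print(f"{work}h of work across {total}h elapsed "
          f"-> flow efficiency {flow_efficiency(STAGES):.1%}")
```

With these numbers, 16 hours of touch time sit inside 372 elapsed hours, a flow efficiency of about 4%. Making AI double the speed of the work inside each stage barely moves that number; shrinking the waits between stages is what moves it, which is the whole argument of this section.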
In an AX org, this flow is fundamentally different. A single mission-aligned team has product engineers and product designers in it together, with AI agents working as part of the pipeline. There’s no separate PM writing a spec to hand off. The product engineer makes direction calls on top of AI’s data analysis. The product designer makes fit calls on top of AI’s drafts. AI writes the code, the engineer reviews. Tests are automated, and post-deploy analysis comes back inside the same team in real time.
Break down the existing PM role and it looks like this. Spec writing? AI does it. Inter-department coordination? There are no handoffs in a mission-aligned team, so this isn’t needed. Data analysis and prioritization? AI proposes, humans decide. Decision-making? That stays. But the product engineer and product designer can do that themselves. One of the biggest reasons the PM existed was to act as the bridge between departments. When the boundaries between departments melt, that part of the role shrinks.
One thing not to misread: this isn’t saying PMs disappear. PMs exist at companies like Shopify or Stripe too. The thing is, those PMs aren’t inter-department coordinators. They’re people who define customer problems and make calls on product direction. The “PM who writes specs and manages handoffs” in a functional org and the “PM who decides what to build and owns the result” in a mission-aligned team have the same title and completely different jobs. The former PM has a shrinking reason to exist in an AX org. The latter PM becomes more important.
The core difference is two things. First, handoffs disappear. The work flows inside a single team. Second, the human role shifts from production to judgment. Every team member operates as a Closer instead of a Maker. They don’t write code; they judge code quality. They don’t make designs; they judge design fit. They don’t write specs; they decide product direction.
Past the Product Team: Marketing, Finance, and Support All Folded In
Let’s push this one step further. There’s no reason to confine this logic to the product team.
In a functional org, when a product launches, the marketing team plans the campaign, sales sells it, support handles inbound, and finance reconciles the revenue. Each has its own KPI and its own reporting line. To know how a feature played in the market, you have to gather data from all those teams, and that gathering itself creates more handoffs and more waiting.
In an AX org, those boundaries melt. Teams form around a single customer journey or a business mission. Inside that team you have product engineers and product designers, plus growth, customer experience, and revenue analysis. Different titles, same KPI.
Take a mission like “80% completion rate on new-user onboarding.” Inside that team, AI analyzes user behavior data, growth designs experiments, the product engineer improves the onboarding flow, and customer experience compiles feedback from churned users. All of this happens inside one team, looking at the same dashboard, discussed in the same weekly meeting.
There’s a reason AI makes this possible. It used to be that you had to hire a marketing specialist, a finance specialist, and a data analyst separately. Each specialist had to produce the deliverable in their own area. But when AI does the producing, one person can use AI to analyze marketing data, summarize customer feedback, and model revenue at the same time. The depth of expertise stays; the coverage widens.
DBS redefining ownership of the customer journey, mentioned earlier, is exactly this model. Each customer journey belongs to a single mission-aligned team, and that team owns it end-to-end, including product, marketing, customer experience, and revenue. Uber building four layers of agentic systems is the same idea. An individual using Claude Code and an org embedding agents into the pipeline are completely different categories of thing.
A functional org grouped people by expertise. An AX org groups people and agents by mission. That’s the most fundamental difference.
The Subject of AX Is the Executive
Who can actually execute this transition? Not a developer.
What a developer can do: install, share skills, hook up MCP, evangelize internally. What a developer can’t change: headcount policy, the eval system, ownership between departments, risk policy, budget allocation, KPIs.
Shopify had a CEO cross that line. Uber had a platform team design governance and the cost system centrally. BBVA trained 250 executives including the CEO before going company-wide. The common thread is clear. AX isn’t a rollout campaign. It’s a redesign of management. Unless an executive decides to redraw the org chart, no matter how good the tool, the functional org just stays a faster functional org.
Andrew Ng says the bottleneck of the AI era isn’t coding. It’s product management. The ability to decide what to build becomes scarcer than the ability to build. The people who can make that decision sit at the top of the org.
6. Maker and Closer: Individual Careers Have to Shift
We sketched the picture of an AX org. Mission-aligned teams, hybrid human-agent pipelines, end-to-end with no handoffs. So who survives in that org?
When the Org’s Basic Unit Changes, So Does the Talent It Needs
The problem is that marketing, finance, and engineering are separated into different teams. They have to be reorganized into mission-aligned units. Every member of that team has to focus on hitting the same KPI.
Andrew Ng says the biggest bottleneck ends up being humans, specifically the people deciding the product. But in a traditional functional org, the bottleneck between organizations is creating much bigger delays and losses than the bottleneck between humans. Group people by mission, put one leader on the line for the entire end-to-end value flow. That’s the shape closest to AX.
Reporting lines can be plural. Outcome ownership has to be singular.
The Uncomfortable Truth: Are Current People Right for the Future Org?
Rebuilding the org costs money. A lot of money. And here’s a more uncomfortable truth: organizations are starting to ask whether the current people are the right fit for the new structure, and whether the current headcount is even necessary. Those questions are the real reason new hiring slowed down.
The reason hiring is cautious in the AI era isn’t recession. It’s that companies aren’t sure of the future role structure.
Maker and Closer
Here’s the distinction I want to propose.
Maker. Whether marketing, engineering, or finance, this is the person focused on producing output in their current work. They write specs, write code, create designs, draft reports.
Closer. This is the person who uses that output to actually hit a business KPI. They own the last mile from output to outcome.
Closer here isn’t the sales sense of closing. The point is how far accountability extends. A Maker thinks “my piece is done” once the deliverable is in. A Closer goes further, evaluates whether the deliverable actually became an outcome, and owns the result. Output vs. Outcome. That’s the difference.
Put it in behavior. A Closer is someone who can stop the work by saying “this direction is wrong.” No matter how good the output, if they judge it isn’t going to convert into customer outcome, they bend the direction. Calling a stop on production isn’t something a Maker can do.
In professional careers, Makers tend to get more recognition. But the thing keeping the business alive is the Closer.
If output translated into outcome cleanly, great. But reality often doesn’t work that way. You can stack up output and have all of it become useless if the goal is heading the wrong way. On the other side, there are Closers who hit the goal even with thinner output, by bending direction. A weak product that produces business results, a great product that lets the business die. These are common stories.
Five or six years ago I did tech consulting for early-stage startups and met a lot of organizations. There were companies where there wasn’t a single proper engineer (forget CS majors, not even a bootcamp grad), where the founder taught themselves, wrote code in shapes that shouldn’t have worked, and pulled in major follow-on investment and customer growth on top of that. That founder was a terrible Maker and a top-tier Closer. On the other side, I saw plenty of engineers with truly genius-level backgrounds who got absorbed in the craft and ignored what some of them call “the adult world.” Perfect Makers. Not Closers.
Engineering organizations mostly leaned Maker. Designers and PMs (people who should have been Closers by nature) ended up doing Maker work too, inside shrunken authority and rigid org structures. The structure that wouldn’t allow Closers couldn’t produce Closers.
Are You a Maker or a Closer?
Look back at your last week. What did you produce directly? What did you make the final call on? If most of your time went into writing specs, writing code, and producing reports, you’re a Maker. If you decided direction, judged priority, and owned outcomes, you’re closer to a Closer.
This isn’t saying Maker is bad. Until now, Makers were the engine of the org. But once AI starts doing the producing, the value of being a Maker shrinks. AI writes the code. AI makes the docs. AI runs the analysis. What’s left is the ability to judge “what should we build” and “is this actually valuable to the customer.”
Career Direction for the AX Era
As AX progresses, more of the Maker role gets delegated to AI. Writing code, drafting documents, making designs, analyzing data: the act of producing moves into AI’s lane. In many cases the Closer role gets amplified, and inside a fully AI-native organization their decision-making becomes the only real bottleneck.
For most working people, the personal career direction in an AI-native org has to shift away from Maker and toward Closer. The people who pivot their careers that way are the ones who’ll still be needed in the AI era. Deep expertise in two areas plus the ability to run an end-to-end cycle: what people call π-shaped talent. Because AI handles the producing, one person’s coverage can widen, which is exactly what makes π-shaped talent realistically possible for the first time.
7. Korean Organizations and AX: Why It Hasn’t Moved Yet
Reading this far, you might be thinking “okay, the overseas cases are nice, but Korea is different.” That’s true. Korea’s situation is different. Different doesn’t mean safe.
In Korea, even in the AI era, people don’t lose jobs easily. Breaking convention and building a new org structure takes a long time.
But even when OpenClaw or Claude Code gets rolled out company-wide in a Korean firm, marketing is still marketing and engineering is still engineering. People say productivity exploded, but that’s Maker productivity. Whether it shows up in a Closer’s final outcome (revenue or business KPI) is a separate question.
Take the Korean SaaS startup case from section 2 again. The rollout was a success. Six months out, what does that org look like?
Engineering ships code faster with Claude Code. Marketing ships content faster with ChatGPT. Support categorizes inbound faster with AI. Everyone is faster. But the campaign-planning process between engineering and marketing is still a serial handoff: spec → approval → handoff → build. For customer feedback from support to make it into the product roadmap, it still has to move through PM → lead → sprint planning. AI came in. The way the work flows is exactly the same as before.
It’s true each department got more efficient inside its own work. The walls between departments stayed put. Worse, with each department optimizing its own AI tools, you ended up with per-department AI islands. The high-speed silo-ification from section 3 is exactly this picture.
Why haven’t Korean orgs started asking these questions yet? A few structural reasons.
First, the functional org has roots that go too deep. Most Korean tech companies have a marketing department, engineering department, and planning department as the spine of the org chart. Changing that spine requires the whole executive team to agree, and it’s hard for executives who became executives inside a functional org to argue for dismantling the functional org.
Second, the performance measurement system is output-centric. Engineers are evaluated on number of releases, marketers on number of campaigns, PMs on number of specs. To shift this to “you’ll now be evaluated on customer outcome,” you have to rebuild the eval system itself.
Third, a culture that doesn’t encourage learning and growth blocks change. Pivoting into a new role requires learning, but how many companies actively encourage and support that learning? Most orgs are already maxed out just keeping up with current work.
In an environment like that, most orgs slip into the easy path. If I were the person in charge, frankly, it’d be more comfortable to roll out OpenClaw and announce “we did AX” than to commit to rebuilding people and structure. Reality grinds you down.
But when newer companies grow with so-called AI-native structures from day one, can the existing companies match that speed? The fact that Korea hasn’t yet felt large-scale AI-driven layoffs doesn’t mean it’s safe. The shock arrives later. That’s all.
Not every org needs to change all five axes right now. A 10-person startup is already close to a mission-aligned team and has few handoffs. AX transformation is urgent for mid-sized and larger companies where the functional org has hardened. But even small orgs need to check one thing: is the AI tool they’re laying down reinforcing the existing structure, or making a new structure possible?
In the end, AI will rewrite the way we work. The people who don’t change inside an org that doesn’t change just get overtaken by the people who do change inside an org that does.
8. To Close: Tools Make the Start. Structure Makes the Transformation.
Planting AI across a company is a good start. Real applause to the people who created that start.
But calling the start the finish makes the next step disappear.
Real AX isn’t bolting AI onto an existing functional org. It’s redefining roles, redesigning pipelines, changing KPIs, building governance, reallocating resources. Rewiring the organization into a hybrid human-agent operating system. That’s why the subject of AX isn’t a developer. It’s an executive.
And individual careers have to shift from Maker to Closer. What AI eats is production itself. What stays with humans longer is outcome ownership and direction.
I was a Maker for a long time too. There was a stretch where I believed if you wrote good code, the business would follow. Now I know better. Good output is a necessary condition, not a sufficient one. Direction has to be right for output to mean anything. Setting the direction is the Closer’s job.
Installing Claude Code across your org doesn’t make it AX. AX starts the moment you redraw the org chart.
References
- Bain & Company, “Unsticking Your AI Transformation”
- McKinsey & Company, “AI in the Workplace: Empowering People to Unlock AI’s Full Potential”
- BCG, “Companies Must Go Beyond AI Adoption to Realize Its Full Potential”
- BCG, “Five Things Boards Need to Get Right with AI”
- Harvard Business Review, “Don’t Let AI Reinforce Organizational Silos”
- The Verge, “Shopify CEO says no new hires without proof AI can’t do the job”
- The Pragmatic Engineer, “How Uber Uses AI for Development”
- Business Insider, “Andrew Ng: Product Management Is the New Bottleneck”
- Computer Weekly, “DBS Rewires Operating Models for AI Reasoning Era”
- Morgan Stanley, “Launch of AI @ Morgan Stanley Debrief”
- Moderna, “Our Journey to Becoming a Real-Time AI Organization”
- Reuters, “Sweden’s Klarna shifts AI focus from cost cuts to growth”
- AP News, “McDonald’s ends test run of AI-powered drive-thrus with IBM”
- Reuters, “Duolingo raises 2025 forecast”
- Wall Street Journal, “Johnson & Johnson Pivots Its AI Strategy”
- Atlassian, “Team Update March 2026”
- OpenAI, “BBVA: Scaling AI Across 120,000 Employees”
- Box, “AI-First: Building the Future of Intelligent Content Management”
- METR, “Measuring the Impact of AI on Experienced Open-Source Developer Productivity”
- r/cursor, “Do you believe the claims that AI isn’t improving programmer productivity?”
- r/ExperiencedDevs, “Did AI increase productivity in your company?”
- r/ExperiencedDevs, “AI is working great for my team, and y’all are making me feel crazy”