🔄 Retrospective

Scrumble Tech Retro - 2. The Frontend, with a Side of Vibe Coding

by Tony Cho
16 min read

TL;DR

I walk through Scrumble's Next.js 15 frontend, the stack we picked for a realtime feel, and the small but ugly bits like HEIC conversion. I also write down what went wrong with Claude-centric vibe coding and how the workflow actually got better.

Contents

- What the Scrumble frontend had to do
- Tech stack
- Architecture
  - Overview
  - Application shell (src/app)
  - Feature modules (src/features/*)
  - Shared layer (src/shared)
  - Data and state flow
  - Realtime sync
  - UI composition and styling
  - Testing and developer experience
High-level module interaction


```mermaid
graph TD
    subgraph "App Router (src/app)"
        app_pages["Pages & Layouts"]
        app_providers["Global Providers"]
    end

    subgraph "Feature Modules (src/features/*)"
        feature_pages["Feature Pages"]
        feature_components["Feature Components"]
        feature_hooks["Feature Hooks"]
        feature_services["Feature Services"]
        feature_stores["Local Zustand Stores"]
    end

    subgraph "Shared Layer (src/shared)"
        shared_components["Shared UI Components"]
        shared_hooks["Shared Hooks"]
        shared_contexts["Shared Contexts"]
        shared_services["Infrastructure Services"]
        shared_stores["Shared Stores"]
        shared_lib["API & Token Library"]
    end

    subgraph "Infrastructure"
        tanstack["TanStack Query Client"]
        axios_client["Axios Resource Clients"]
        centrifugo_service["Centrifugo Service"]
        backend[("REST API Backend")]
        centrifugo[("Centrifugo Hub")]
    end

    app_pages --> feature_pages
    app_providers --> tanstack
    app_providers --> shared_contexts

    feature_pages --> feature_components
    feature_pages --> feature_hooks

    feature_components --> shared_components
    feature_hooks --> shared_hooks
    feature_hooks --> feature_services
    feature_hooks --> feature_stores
    feature_hooks --> tanstack
    feature_hooks --> centrifugo_service
    feature_services --> shared_services

    shared_hooks --> tanstack
    shared_hooks --> centrifugo_service
    shared_services --> axios_client
    shared_services --> centrifugo_service
    shared_stores --> feature_components

    tanstack --> axios_client
    axios_client --> backend
    centrifugo_service --> centrifugo
```

Implementation retro

Styling

We mostly used Tailwind CSS, and I leaned on Claude Code so heavily that I barely wrote styling by hand. Most vibe coding videos out there don’t actually start from a designer’s spec. They pull tips from sites the author wants to benchmark. From where I sit (working with an actual product designer, Ellie), those tips just weren’t useful.

CSS Vibe

Around April and May, MCP exploded into a trend. Figma MCP especially got hyped as if hooking it up to Cursor or Claude Code would auto-implement everything. In practice, that’s not how it goes. AI does have some image recognition, but accuracy drops fast. Even when Figma MCP pulls data, it’s reading the properties of Figma image objects under the hood, and because the LLM is text-based, it can’t get to a 100% implementation. It flounders a lot.

You could ask the designer to make the Figma file pristine (naming, layout, all of it). But realistically, having the designer hand-craft every detail is less efficient than me grabbing the rough style values one at a time and pasting them into a prompt. So MCP got a few uses early on and then I dropped it. I’ll get into this more when I write about how I code with AI, but as of now I don’t use Playwright MCP, Context7, or any of those.

People talk about MCP a lot less these days too. “It connects” and “it actually works” are different things. Plenty of early write-ups about “it connects,” but I’ve seen almost nothing about MCP actually working well at production scale or in a real team setting (as opposed to toy projects, MVPs, and prototypes). Could be I’m just ignorant about it. But I decided that the time I’d spend researching MCP was better spent being a little more diligent myself.

Back to the implementation. What I ended up doing was grabbing the style values straight out of Figma Dev Mode as text, and implementing each component that way. The first pass is a bit of a slog, but it gets you to roughly 90%, and you only need a light touch after that. With MCP I’d hit 30% and then wrestle with prompts forever, so this is the workflow that keeps me moving right now.

A side note. I tried doing a few screens with no design at all, just whatever Claude proposed visually, but Claude’s own design taste is rough, and I’d have to put serious effort into the prompt without any clear sense of what “done” looked like. I gave up. My setup assumes I have a strong product designer as a partner. For people building solo or without design resources, MCP and benchmark-driven vibe coding will probably still be valid. Just always carry a clear definition of “done” with you.

State management

We use Zustand for some global UI state (saving the last selected date, that kind of thing), but most of the state lives in TanStack Query (a.k.a. react-query).
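The "last selected date" store is tiny. As a minimal sketch of the pattern (names are illustrative, and the real app uses zustand's `create()` rather than this hand-rolled version), it boils down to one shared value plus a subscribe hook for React to re-render on changes:

```typescript
// Hand-rolled stand-in for a zustand store (illustrative, not the real code):
// one global value, a setter that notifies subscribers, and unsubscribe.
type Listener = () => void;

function createDateStore(initial: string) {
  let selectedDate = initial;
  const listeners = new Set<Listener>();
  return {
    getDate: () => selectedDate,
    setDate: (d: string) => {
      selectedDate = d;
      listeners.forEach((l) => l()); // notify React bindings
    },
    subscribe: (l: Listener) => {
      listeners.add(l);
      return () => listeners.delete(l); // returns an unsubscribe function
    },
  };
}

// A single module-level instance acts as the "global" store.
const dateStore = createDateStore("2025-01-01");
```

With zustand the same shape becomes `create((set) => ({ selectedDate, setDate }))`, and components read it through the generated hook.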

React-query

React-query gives you caching plus a bunch of UI-state primitives, and the early learning curve is real. The painful case is when you layer auth middleware on top of plain API calls. You need automatic refresh logic when the access token expires, something I’ve been doing for over ten years. Mixing that into the react-query and Axios layer caused confusion. The actual cause was Claude Code generating duplicate code that I missed. Early on, expired access tokens triggered an infinite redirect loop, and I had to go back and review every line of the react-query code Claude had written, one by one, to fix it. If I’d just read the code, it would have been a five-minute fix. I tried to one-click my way out of reading the react-query code and burned over an hour instead.
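The shape of the fix is worth writing down. This is a hedged sketch, not the actual Scrumble code (all names here are illustrative): the duplicate-refresh loop goes away once every 401 funnels through a single in-flight refresh promise and each request retries at most once.

```typescript
// Single-flight token refresh (illustrative sketch). Concurrent 401s share
// ONE refresh promise instead of each triggering their own, and a request
// retries exactly once; a second 401 propagates instead of looping.
type Result = { status: number; body?: unknown };

function createAuthClient(
  send: (token: string) => Promise<Result>, // raw request with a bearer token
  refresh: () => Promise<string>,           // exchanges refresh token for a new access token
  initialToken: string,
) {
  let token = initialToken;
  let inflightRefresh: Promise<string> | null = null; // single-flight guard

  async function refreshOnce(): Promise<string> {
    // Callers that race into this while a refresh is running await the same promise.
    inflightRefresh ??= refresh().finally(() => {
      inflightRefresh = null;
    });
    return inflightRefresh;
  }

  return async function request(): Promise<Result> {
    const first = await send(token);
    if (first.status !== 401) return first;
    token = await refreshOnce(); // refresh at most once per request
    return send(token);          // retry once; if this 401s too, give up
  };
}
```

In the real app this logic would live in an Axios response interceptor, but the invariant is the same: one refresh in flight, one retry per request.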

That was the first time I felt how brittle the software gets when you go pure vibe coding outside of MVP territory, and I felt it a lot more after that. If you can read the code, read it. It’s faster. (For now.)

React-query manages its own cache. I'd had a similar experience with caching in GraphQL Apollo, but react-query's surface is broader and more feature-rich. Once the server adds caching too (Redis), things get genuinely tricky. Right now I'm doing full-stack so it's fine (I know every policy in my head), but in a setup where backend and frontend are split, applying caching with the wrong policy can produce bug-shaped behavior that isn't really a bug. The backend has an interface layer in its architecture, and the API is shaped around client use cases, so this kind of thing has to be designed carefully. Caching done right cuts UX latency and load. Done wrong, it shows users values they didn't expect.

Optimistic UI

As I mentioned on the backend side, the initial infra's region placement made API latency painfully long. (The feed screen took 2+ seconds to load.) Without changing the infra (we wanted to squeeze what we had first), the first thing I reached for was Optimistic UI.

The idea behind Optimistic UI is straightforward. You patch react-query’s internal state first so the UI reflects the change immediately and the user sees no delay. The actual API request runs in the background, and if the response comes back with a different state or an error, you roll the UI state back.

The first version had small UI glitches. The most obvious one: the UI updated, then the response came back and re-updated, causing a flicker. That’s because we were re-applying the server response to the UI. It guaranteed the freshest server state, but the flicker hurt UX. I changed it so that when the response comes back without issues, we skip the redundant update.
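Compressed into code, the pattern looks like the sketch below (illustrative, not the real react-query `onMutate`/`onError` wiring, and the "like" mutation is a made-up example): patch the cache first, roll back on error, and on success skip the redundant re-apply that caused the flicker.

```typescript
// Optimistic update sketch over a plain Map standing in for the query cache.
type Post = { id: string; likes: number };

async function optimisticLike(
  cache: Map<string, Post>,
  postId: string,
  sendLike: (id: string) => Promise<Post>, // server returns the canonical post
) {
  const previous = cache.get(postId);
  if (!previous) throw new Error(`unknown post ${postId}`);

  // 1. Patch the cache immediately so the UI reflects the change with no delay.
  cache.set(postId, { ...previous, likes: previous.likes + 1 });

  try {
    const server = await sendLike(postId);
    // 2. Only re-apply server state if it actually differs from what we show;
    //    writing back an identical value is exactly what caused the flicker.
    const current = cache.get(postId)!;
    if (server.likes !== current.likes) cache.set(postId, server);
  } catch {
    // 3. On failure, roll the cache back to the snapshot.
    cache.set(postId, previous);
  }
}
```

In react-query terms, step 1 is `onMutate` (with the snapshot as rollback context), step 3 is `onError`, and step 2 replaces a blanket `invalidateQueries` on success.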

Realtime

When someone reacts with an emoji to a check-in or check-out, or drops a comment, it updates in realtime. Anyone in the same space should see those feedback actions live. We had Go and Centrifugo (Redis) wiring up the WebSocket nicely, but realtime updates kept getting dropped early on, and I struggled with it. I first suspected a server-side WebSocket issue, but if Centrifugo were the problem, the ClickUp realtime trigger we had wired up next to it would have broken too. That one worked perfectly, which pointed at the client.

To keep the WebSocket channels efficient and spread the load, we subscribed to each post in the feed as its own channel, keyed by post ID. As a post leaves or re-enters the viewport while you scroll, a handler unsubscribes and resubscribes to that post's channel.

I thought this was efficient because we weren’t holding subscriptions to every post ID at all times. The catch: deciding handler connection state from the UI meant connections occasionally got lost, so comments wouldn’t arrive in realtime now and then. I spent a fair amount of time fixing it, but honestly, just keying the channel on space + date and subscribing to the whole feed would have been simpler. If the event volume or feed size were huge, sure, you’d want optimization. But this was a case of trying to be too clever upfront and burning time on trial-and-error.
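The two keying schemes side by side, as a hedged sketch (the channel name formats here are hypothetical, not Scrumble's actual naming): with a feed-level channel, events carry the post ID in the payload and the client routes them, instead of juggling N viewport-bound subscriptions.

```typescript
// Per-post channels require subscribe/unsubscribe churn tied to the viewport.
const perPostChannel = (postId: string) => `post:${postId}`;

// One stable channel per space + date covers the whole feed.
const feedChannel = (spaceId: string, date: string) => `feed:${spaceId}:${date}`;

// With the feed-level channel, the client fans events out by post ID.
type FeedEvent = { postId: string; kind: "comment" | "emoji"; payload: unknown };

function routeFeedEvent(
  handlers: Map<string, (e: FeedEvent) => void>,
  event: FeedEvent,
) {
  // Posts without a mounted handler (off-screen, unloaded) simply ignore it.
  handlers.get(event.postId)?.(event);
}
```

The trade-off: every client in the space receives every feed event, but at this event volume that is far cheaper than the connection-state bugs the per-post scheme produced.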

File storage

One thing I like about Next.js is that you get your own server. We didn’t implement file storage in the Go server. We did it directly in Next.js, and only sent the file metadata and the file address to the backend.

R2

As part of going off AWS, we used R2 for storage. R2 is basically Cloudflare's S3. It's just as simple to use as S3. Connect the address, register the auth keys as a Vercel or local secret, and you can upload right away. If you need file storage for something you're building, I'd recommend R2. Below is a comparison of the current free tiers for S3 and R2. The most attractive piece is the unlimited egress. If your images get embedded across a lot of websites, R2 is worth a serious look beyond just upload cost.

| Item | AWS S3 Free Tier (first 12 months) | Cloudflare R2 Free Tier (permanent) |
| --- | --- | --- |
| Storage | 5 GB | 10 GB-month |
| Class A ops | 2,000 PUT/COPY/POST/LIST requests | 1 million requests |
| Class B ops | 20,000 GET/SELECT requests | 10 million requests |
| Egress | 100 GB | Free (unlimited) |
| Other | $100 credit on new accounts (30+ services) | Permanently free, per account |

The bigger lift wasn’t the upload itself. It was the drag-and-drop UI for dropping files in directly, plus the upload progress indicator. We also did a small client-side resize instead of always uploading the original. The feature came out of how I’d use it personally, occasionally attaching a photo to a check-in or comment. Down the road, if we extend to a tiptap implementation, we can embed images inside tiptap, so we debated this in the planning phase. We landed on a separate photo upload to fit the check-in/check-out purpose.
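The resize decision itself is simple math. A minimal sketch, assuming an illustrative 1600px cap (not Scrumble's actual threshold): clamp the longest edge and keep the aspect ratio, then let a `<canvas>` do the pixel work in the browser.

```typescript
// Decide whether and how to downscale before upload (threshold is illustrative).
const MAX_EDGE = 1600;

function targetSize(width: number, height: number) {
  const longest = Math.max(width, height);
  if (longest <= MAX_EDGE) return { width, height, resized: false };
  const scale = MAX_EDGE / longest; // uniform scale preserves aspect ratio
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
    resized: true,
  };
}
```

In the browser, the computed size feeds `canvas.drawImage(img, 0, 0, width, height)` followed by `canvas.toBlob(...)`, and only the metadata plus the R2 file address goes to the Go backend.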

HEIC Converter

Most image upload features don’t actually support this, but Ellie and I are both iPhone users, so most of our iPhone images come out as HEIC. The usual upload flow accepts jpg, png, gif, and just refuses HEIC. We wanted the experience of opening Photos on a Mac and dragging straight in, so we built a converter to make HEIC uploads work. I assumed dropping in a single library would handle it. It wasn’t that simple.

Fallback logic

Conversion methods and libraries

I leaned on Claude Code a lot, but the result ended up being a fairly gnarly converter. With this many fallbacks, every HEIC image converts without exception.
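The skeleton of that converter is a plain fallback chain. This is a generic sketch (the converter names are hypothetical; the real chain wires up libraries like heic2any and libheif-js): try each strategy in order and return the first one that succeeds.

```typescript
// Generic fallback runner: each converter takes raw bytes and either returns
// converted bytes or throws, in which case we fall through to the next one.
type Converter = (file: Uint8Array) => Promise<Uint8Array>;

async function convertWithFallbacks(
  file: Uint8Array,
  converters: Converter[],
): Promise<Uint8Array> {
  const errors: unknown[] = [];
  for (const convert of converters) {
    try {
      return await convert(file); // first success wins
    } catch (e) {
      errors.push(e); // remember why it failed, then try the next strategy
    }
  }
  throw new Error(`all ${converters.length} HEIC converters failed (${errors.length} errors)`);
}
```

Each real entry in the chain would wrap one library behind this `Converter` signature, so adding another fallback is one array element rather than more branching.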

The side effect: I now happily dig up any old photo and throw it in.

Wrapping up

Vibe coding!?

After the backend, I went through the frontend at the same fairly minimal level. The frontend leaned on Claude Code far more, and ironically, the higher my dependence on AI got, the lower my productivity went. Especially when I had to debug or fix an issue without much grasp of the code, Claude Code would either fail to find the simple cause of the issue or look at it through too narrow a lens and fall into an infinite loop. Most of the time, when I read the code myself and put my hands on it, the fix turned out to be simple.

AI writes code fast, sure, but in real situations the context drifts constantly because of policy changes and other issues. Trusting AI alone often left me stuck.

Writing an SDD or PRD doc, defining the spec up front, is the closest thing to a real alternative. Once requirements get complex and implementation gets complex, though, you often only know things “after building.” A lot of people treat coding like math, where you plug in a formula and get one correct answer. Reality isn’t that. In a working environment where requirements shift hour by hour, and even when you’re coding solo, context gets lost all the time.

These days I only use Codex, with a flow of: requirements doc, then implementation steps doc, then development broken up by step. That’s been my method, and it’s been much more efficient than before. Once the project I’m on now wraps up roughly, I’ll write a broader piece on what I’ve learned about vibe coding overall.

Wrapping the project

It’s already been over a month since the first release. Using it internally for our own work makes the things I want to improve very visible. We didn’t build this to open it externally, but I’d love to ship even a beta in the near future. The frontend has piled up a lot of bad code, courtesy of Claude. After the project I’m currently working on releases, the goal is to clean those up bit by bit while continuing feature development.

At my previous company I rarely coded the frontend hands-on, so this was the chance to deep-dive properly, study, and actually implement. The reason frontend feels hard isn’t really the frontend itself. It’s that it touches the user directly. When I touch what I built and catch a whiff of how bad it smells, I keep fixing and fixing until hours have disappeared. Even so, watching the UI come to life and run is its own kind of dopamine, different from the backend in a way I can’t quite name.

Same as with the backend, going through this project gave me a clear picture of how I’d run the frontend on the next one.

I finally wrapped up the long-postponed frontend chapter. That's the frontend side of Scrumble covered.


FAQ

How did you build realtime features on the Scrumble frontend?
We used the Centrifugo client (centrifuge) over WebSocket, with auto-reconnect and channel persistence. Realtime notifications, emoji reactions, and comments all run through the same connection.
What is a vertical slice architecture?
It's a structure where UI, hooks, services, stores, and types for one feature live in a single folder. You organize code by domain, so all related code stays in one place. In Scrumble we did this under src/features/*.
How do you handle iOS HEIC images?
We use heic2any and libheif-js to convert HEIC to standard web formats, paired with a presigned upload to Cloudflare R2.

About the author

Tony Cho

Indie Hacker, Product Engineer, and Writer

A developer who builds products and writes retrospectives. I write about AI coding, agent workflows, startup product development, team building, and leadership.



Reactions

If you've read this far, leave a note. Reactions, pushback, questions — all welcome.


