What the Scrumble frontend had to do
- It’s an SNS at the core, so people post, react with emoji, drop comments, and attach images to content. All of that is non-negotiable.
- After user auth, members re-authenticate inside a workspace, so the whole product runs on member-level identity rather than just user-level identity.
- As I covered on the backend side, realtime mattered everywhere. Notifications, emoji, comments all needed to update live.
- The to-do list had to feel as smooth as we could make it, which meant mapping keyboard shortcuts.
Tech stack
- Framework: Next.js 15.1.8 (App Router) + React 19, dev server on Turbopack
- Language: TypeScript 5
- Styling: Tailwind CSS 3.4, tailwind-merge, class-variance-authority, Pretendard font, PostCSS/Autoprefixer
- State and data: Zustand 5 for client state, TanStack Query 5 + Axios 1.9 for server communication
- Realtime: the `centrifuge` client talking to Centrifugo (auto-reconnect, channel persistence)
- Files and media: Cloudflare R2 presigned upload, HEIC-to-web-format conversion utilities (`heic2any`, `libheif-js`)
- UI/UX: Framer Motion 12, Emoji Mart, Lucide and Remix icon sets, React Day Picker, React Window virtualization
- PWA: next-pwa 5.6.0 service worker plus manifest
Architecture
Overview
- The Next.js 15 App Router owns routing, layouts, and the boundary between server and client components. Global providers all sit in one file, `src/app/providers.tsx`.
- Page components only handle rendering and interaction. Reads, writes, and realtime sync are pulled out into dedicated hooks so the page itself stays thin.
- Product features live inside `src/features/*` as vertical slices (UI, hooks, services, and a local Zustand store all bundled together by domain).
- Anything shared (UI, contexts, services, utils) goes into `src/shared`, so each domain module can stay focused on its core logic.
Application shell (src/app)
- The App Router directory (`page.tsx`, `layout.tsx`, `loading.tsx`, etc.) defines the route surface, lazy loading, and metadata.
- `providers.tsx` wraps `QueryClientProvider` together with the auth, timezone, and global loading contexts. The point is to make sure every global hook reads from the same query client.
- Route groups like `spaces`, `auth`, and the dynamic segment `[spaceSlug]` map one-to-one with feature domains. Leaf pages usually just call into an entry point under `src/features/**/pages` and delegate the logic.
- API routes under `src/app/api/*` act only as a server-side bridge to the backend when we actually need one.
Feature modules (src/features/*)
- Each feature follows a vertical slice layout: `components`, `pages`, `hooks`, `services`, `stores`, `types`, `utils`, `data` all in one folder, so domain UI and logic stay in one place.
- A feature page is just a thin wrapper that composes shared layouts with domain components.
- Hooks wrap the read/write logic and side effects, pulling in TanStack Query helpers from `src/shared/hooks/queries` and the domain services.
- Local Zustand stores (`stores`) only hold transient UI state that shouldn't leak outside the feature.
- Feature services use the shared API client to implement domain-specific formatting, optimistic updates, and derived models.
Shared layer (src/shared)
- `components`: design system components built on Tailwind + class-variance-authority (feedback, navigation, inputs, and so on).
- `hooks`: reusable query hooks (`queries/*`), auth helpers, and realtime subscription hooks. These hide data fetching and state orchestration.
- `services`: infrastructure services like `centrifugo.service.ts`, plus file upload helpers and an analytics logger.
- `contexts`: the auth, timezone, and global loading contexts that the app shell consumes.
- `stores`: global Zustand stores for auth state, toasts, time utilities, and so on.
- `lib`: low-level integrations (per-resource Axios API clients, the token manager, fonts, and API helpers).
- `types` and `schemas`: type definitions and validation schema fragments shared across feature and API layers.
- `utils`: formatting, error handling, and general-purpose helpers.
Data and state flow
- Axios clients live in `src/shared/lib/api/*.ts`, which centralizes base URLs, interceptors, and the token refresh logic.
- Query and mutation hooks under `src/shared/hooks/queries` standardize TanStack Query keys and caching policies, so feature modules can compose them safely.
- Global state goes into the lightweight Zustand stores under `shared/stores`. Feature-specific state stays in a local store inside that feature's folder.
- Forms mostly use React Hook Form + Zod, with a shared resolver utility on hand when we need it.
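The query-key standardization can be illustrated with a small key-factory sketch. The `feedKeys` name and key shapes below are hypothetical, not the actual Scrumble code:

```typescript
// Hypothetical key factory illustrating standardized TanStack Query keys.
// Centralizing keys like this lets feature hooks invalidate or patch the
// cache without hard-coding string arrays all over the codebase.
const feedKeys = {
  all: ["feed"] as const,
  bySpace: (spaceSlug: string) => [...feedKeys.all, spaceSlug] as const,
  byDate: (spaceSlug: string, date: string) =>
    [...feedKeys.bySpace(spaceSlug), date] as const,
};

// A feature hook would then compose the shared key with useQuery, e.g.:
// useQuery({ queryKey: feedKeys.byDate(slug, date), queryFn: ... })
```

Because every key is derived from the same root, invalidating `feedKeys.all` reliably covers every space- and date-level query beneath it.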
Realtime sync
- `CentrifugoService` keeps a single Centrifuge client alive and handles reconnects, channel persistence, and event dispatch.
- Realtime hooks don't open new sockets. They extend the central Centrifugo layer, so every feature reuses the same connection and event bus.
- When an event arrives, the hook patches the TanStack Query cache or the local store directly, so the UI updates without an extra refetch.
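The cache-patching idea can be sketched without Centrifugo at all. Here a plain `Map` stands in for the TanStack Query cache, and all names are illustrative:

```typescript
// Minimal sketch of "patch the cache on a realtime event, don't refetch".
// A tiny in-memory Map stands in for the TanStack Query cache, and
// applyRealtimeEvent plays the role of a realtime hook's event handler.
type Comment = { id: string; body: string };

const cache = new Map<string, Comment[]>();

function applyRealtimeEvent(
  key: string,
  event: { type: "comment.created"; comment: Comment },
): void {
  if (event.type === "comment.created") {
    const prev = cache.get(key) ?? [];
    // Append the new comment in place instead of refetching the whole list.
    cache.set(key, [...prev, event.comment]);
  }
}
```

With the real library, the same move is `queryClient.setQueryData(key, updater)` inside the subscription callback.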
UI composition and styling
- Tailwind CSS + PostCSS is the styling base. We compose variant classes safely with class-variance-authority and tailwind-merge.
- Animations and micro-interactions go through Framer Motion. Icons and emoji come from shared libraries like Lucide, Remix Icons, and Emoji Mart.
- Responsive behavior and virtualized scrolling (React Window) are factored out into reusable components, so page code stays declarative.
Testing and developer experience
- We test hooks and components with Jest + React Testing Library, and run E2E regression scenarios with Playwright.
- TypeScript strict mode, ESLint, and `npm run build` (Next.js with SWC/Turbopack) keep types, lint, and build stability honest before commits.
- Shared tooling like `shared/utils/debug` and the token manager keeps debugging and storage strategies consistent across environments.
High-level module interaction
```mermaid
graph TD
    subgraph "App Router (src/app)"
        app_pages["Pages & Layouts"]
        app_providers["Global Providers"]
    end
    subgraph "Feature Modules (src/features/*)"
        feature_pages["Feature Pages"]
        feature_components["Feature Components"]
        feature_hooks["Feature Hooks"]
        feature_services["Feature Services"]
        feature_stores["Local Zustand Stores"]
    end
    subgraph "Shared Layer (src/shared)"
        shared_components["Shared UI Components"]
        shared_hooks["Shared Hooks"]
        shared_contexts["Shared Contexts"]
        shared_services["Infrastructure Services"]
        shared_stores["Shared Stores"]
        shared_lib["API & Token Library"]
    end
    subgraph "Infrastructure"
        tanstack["TanStack Query Client"]
        axios_client["Axios Resource Clients"]
        centrifugo_service["Centrifugo Service"]
        backend[("REST API Backend")]
        centrifugo[("Centrifugo Hub")]
    end
    app_pages --> feature_pages
    app_providers --> tanstack
    app_providers --> shared_contexts
    feature_pages --> feature_components
    feature_pages --> feature_hooks
    feature_components --> shared_components
    feature_hooks --> shared_hooks
    feature_hooks --> feature_services
    feature_hooks --> feature_stores
    feature_services --> shared_services
    shared_hooks --> tanstack
    shared_services --> axios_client
    shared_services --> centrifugo_service
    shared_stores --> feature_components
    tanstack --> axios_client
    axios_client --> backend
    centrifugo_service --> centrifugo
    shared_hooks --> centrifugo_service
    feature_hooks --> tanstack
    feature_hooks --> centrifugo_service
```
Implementation retro
Styling
We mostly used Tailwind CSS, and I leaned on Claude Code so heavily that I barely wrote styling by hand. Most vibe coding videos out there don’t actually start from a designer’s spec. They pull tips from sites the author wants to benchmark. From where I sit (working with an actual product designer, Ellie), those tips just weren’t useful.
CSS Vibe
Around April and May, MCP exploded into a trend. Figma MCP especially got hyped as if hooking it up to Cursor or Claude Code would auto-implement everything. In practice, that’s not how it goes. AI does have some image recognition, but accuracy drops fast. Even when Figma MCP pulls data, it’s reading the properties of Figma image objects under the hood, and because the LLM is text-based, it can’t get to a 100% implementation. It flounders a lot.
You could ask the designer to make the Figma file pristine (naming, layout, all of it). But realistically, having the designer hand-craft every detail is less efficient than me grabbing the rough style values one at a time and pasting them into a prompt. So MCP got a few uses early on and then I dropped it. I’ll get into this more when I write about how I code with AI, but as of now I don’t use Playwright MCP, Context7, or any of those.
People talk about MCP a lot less these days too. “It connects” and “it actually works” are different things. Plenty of early write-ups about “it connects,” but I’ve seen almost nothing about MCP actually working well at production scale or in a real team setting (as opposed to toy projects, MVPs, and prototypes). Could be I’m just ignorant about it. But I decided that the time I’d spend researching MCP was better spent being a little more diligent myself.
Back to the implementation. What I ended up doing was grabbing the style values straight out of Figma Dev Mode as text, and implementing each component that way. The first pass is a bit of a slog, but it gets you to roughly 90%, and you only need a light touch after that. With MCP I’d hit 30% and then wrestle with prompts forever, so this is the workflow that keeps me moving right now.
A side note. I tried doing a few screens with no design at all, just whatever Claude proposed visually, but Claude’s own design taste is rough, and I’d have to put serious effort into the prompt without any clear sense of what “done” looked like. I gave up. My setup assumes I have a strong product designer as a partner. For people building solo or without design resources, MCP and benchmark-driven vibe coding will probably still be valid. Just always carry a clear definition of “done” with you.
State management
We use Zustand for some global UI state (saving the last selected date, that kind of thing), but most of the state lives in TanStack Query (a.k.a. react-query).
React-query
React-query gives you caching plus a bunch of UI-state primitives, and the early learning curve is real. The painful case is layering auth middleware on top of plain API calls: you need automatic refresh logic when the access token expires, something I've been implementing for over ten years. Mixing that into the react-query and Axios layer caused confusion, and the actual cause was duplicate code Claude Code had generated that I missed. Early on, expired access tokens triggered an infinite redirect loop, and I had to go back and review every line of the react-query code Claude had written, one by one, to fix it. If I'd just read the code, it would have been a five-minute fix. I tried to one-click my way out of reading the react-query code and burned over an hour instead.
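One common way to avoid that infinite-loop class of bug is to allow exactly one refresh and one retry per failed request, so a second 401 surfaces as an error instead of looping. A dependency-free sketch, with the function names and error shape as assumptions rather than the project's actual interceptor:

```typescript
// Sketch of "refresh once, retry once" to avoid infinite refresh loops.
// request/refresh are injected so the flow is testable without Axios;
// in a real Axios interceptor the same idea lives in the 401 handler.
async function withTokenRefresh<T>(
  request: (token: string) => Promise<T>,
  refresh: () => Promise<string>,
  token: string,
): Promise<T> {
  try {
    return await request(token);
  } catch (err: any) {
    if (err?.status !== 401) throw err; // only refresh on auth failures
    const fresh = await refresh();      // one refresh...
    return request(fresh);              // ...one retry; a second 401 bubbles up
  }
}
```

In a production interceptor you would also share a single in-flight refresh promise across concurrent requests, so ten simultaneous 401s don't trigger ten refreshes.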
That was the first time I felt how brittle the software gets when you go pure vibe coding outside of MVP territory, and I felt it a lot more after that. If you can read the code, read it. It’s faster. (For now.)
React-query manages its own cache. I had a similar experience with caching in Apollo (GraphQL), but react-query's surface is broader and more feature-rich. Once the server adds caching too (Redis), things get genuinely tricky. Right now I'm doing full-stack so it's fine (I know every policy in my head), but in a setup where backend and frontend are split, applying caching with the wrong policy can produce bug-shaped behavior that isn't really a bug. The backend has an interface layer in its architecture, and the API is shaped around client use cases, so this kind of thing has to be designed carefully. Caching done right cuts UX latency and load. Done wrong, it shows the user values they didn't expect.
Optimistic UI
As I mentioned on the backend side, the early infra region delay made API latency painfully long. (The feed screen took 2+ seconds.) Without changing the infra (we wanted to squeeze what we had first), the first thing I reached for was Optimistic UI.
The idea behind Optimistic UI is straightforward. You patch react-query’s internal state first so the UI reflects the change immediately and the user sees no delay. The actual API request runs in the background, and if the response comes back with a different state or an error, you roll the UI state back.
The first version had small UI glitches. The most obvious one: the UI updated, then the response came back and re-updated, causing a flicker. That’s because we were re-applying the server response to the UI. It guaranteed the freshest server state, but the flicker hurt UX. I changed it so that when the response comes back without issues, we skip the redundant update.
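The flicker fix can be sketched with a tiny cache model: apply the optimistic patch immediately, roll back on error, and skip the redundant re-apply when the server echo matches what is already rendered. Shapes and names are illustrative, not the actual code:

```typescript
// Sketch of optimistic update + rollback + "skip the redundant echo".
type Post = { id: string; likes: number };
const store = new Map<string, Post>();

function optimisticLike(
  id: string,
  send: () => Promise<Post>, // the background API call
): Promise<void> {
  const prev = store.get(id)!;
  const next = { ...prev, likes: prev.likes + 1 };
  store.set(id, next); // 1) patch the UI state immediately
  return send()
    .then((server) => {
      // 2) only re-apply when the server actually disagrees, so a
      //    matching echo doesn't cause a visible flicker
      if (server.likes !== next.likes) store.set(id, server);
    })
    .catch(() => {
      store.set(id, prev); // 3) roll back on error
    });
}
```

In react-query terms, steps 1 and 3 map to `onMutate` (snapshot + `setQueryData`) and `onError` (restore the snapshot), and step 2 is the guard inside `onSuccess`.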
Realtime
When someone reacts with an emoji to a check-in or check-out, or drops a comment, it updates in realtime. Anyone in the same space should see those feedback actions live. We had Go and Centrifugo (Redis) wiring up the WebSocket nicely, but realtime updates kept getting dropped early on, and I struggled with it. I first suspected a server-side WebSocket issue, but if Centrifugo were the problem, the ClickUp realtime trigger we had wired up next to it would have broken too. That one worked perfectly, which pointed at the client.
The feed contains multiple posts, and to keep the WebSocket channels efficient and spread the load, we subscribed to each post as its own channel keyed by post ID. As you scroll and a post leaves or re-enters the viewport, the handler unsubscribes from and resubscribes to that post's channel.
I thought this was efficient because we weren’t holding subscriptions to every post ID at all times. The catch: deciding handler connection state from the UI meant connections occasionally got lost, so comments wouldn’t arrive in realtime now and then. I spent a fair amount of time fixing it, but honestly, just keying the channel on space + date and subscribing to the whole feed would have been simpler. If the event volume or feed size were huge, sure, you’d want optimization. But this was a case of trying to be too clever upfront and burning time on trial-and-error.
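The coarser design can be sketched as a channel-naming choice plus client-side routing. The channel formats here are assumptions for illustration:

```typescript
// Per-post channels: N subscriptions whose lifecycle follows the viewport,
// which is where the dropped-update bug lived.
const postChannel = (postId: string) => `feed:post:${postId}`;

// Coarser alternative: one stable subscription per space + date. Events
// carry the post ID and the client routes them to the right component,
// so nothing is dropped when posts scroll in and out of view.
const feedChannel = (spaceSlug: string, date: string) =>
  `feed:${spaceSlug}:${date}`;

function routeEvent(
  event: { postId: string; kind: string },
  handlers: Map<string, (e: { kind: string }) => void>,
): void {
  handlers.get(event.postId)?.({ kind: event.kind });
}
```

The trade-off is exactly the one the retro names: the coarse channel receives events for posts you aren't rendering, which only matters once event volume gets large.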
File storage
One thing I like about Next.js is that you get your own server. We didn’t implement file storage in the Go server. We did it directly in Next.js, and only sent the file metadata and the file address to the backend.
R2
As part of going off AWS, we used R2 for storage. R2 is basically Cloudflare's S3, and it's just as simple to use: point at the endpoint, register the auth keys as a Vercel or local secret, and you can upload right away. If you need file storage for something you're building, I'd recommend R2. Below is a comparison of the current free tiers for S3 and R2. The most attractive piece is the unlimited egress: if your images end up embedded across a lot of websites, R2 is worth a serious look beyond just upload cost.
| Item | AWS S3 Free Tier (first 12 months) | Cloudflare R2 Free Tier (permanent) |
|---|---|---|
| Storage | 5 GB | 10 GB-month |
| Class A ops | 2,000 PUT/COPY/POST/LIST requests | 1 million requests |
| Class B ops | 20,000 GET/SELECT requests | 10 million requests |
| Egress | 100 GB | Free (unlimited) |
| Other | $100 credit on new accounts (30+ services) | Permanently free, per account |
The bigger lift wasn't the upload itself. It was the drag-and-drop UI for dropping files in directly, plus the upload progress indicator. We also do a small client-side resize instead of always uploading the original. The feature came out of how I'd use it personally: occasionally attaching a photo to a check-in or comment. Down the road, if we move to a tiptap-based editor, images could be embedded inline, so we debated this in the planning phase and landed on a separate photo upload to fit the check-in/check-out purpose.
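The overall flow — presign on our own Next.js server, upload straight to R2, hand only metadata to the Go backend — can be sketched like this. The endpoint path and response shape are hypothetical, and the HTTP function is injected so the sketch stays self-contained:

```typescript
// Hypothetical sketch of the presigned-upload flow (not the actual code):
// 1) ask a Next.js API route for a presigned R2 URL,
// 2) PUT the bytes straight to R2,
// 3) return the public URL + size so only metadata goes to the backend.
type PresignResponse = { uploadUrl: string; publicUrl: string };

async function uploadToR2(
  file: { name: string; type: string; bytes: Uint8Array },
  // injected so the flow can be exercised without a network
  http: (url: string, init: { method: string; body?: unknown }) => Promise<any>,
): Promise<{ publicUrl: string; size: number }> {
  const presign: PresignResponse = await http("/api/uploads/presign", {
    method: "POST",
    body: { filename: file.name, contentType: file.type },
  });
  await http(presign.uploadUrl, { method: "PUT", body: file.bytes });
  return { publicUrl: presign.publicUrl, size: file.bytes.length };
}
```

The design point is that the file bytes never touch the Go server; it only ever sees the resulting URL and metadata.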
HEIC Converter
Most image upload features don’t actually support this, but Ellie and I are both iPhone users, so most of our iPhone images come out as HEIC. The usual upload flow accepts jpg, png, gif, and just refuses HEIC. We wanted the experience of opening Photos on a Mac and dragging straight in, so we built a converter to make HEIC uploads work. I assumed dropping in a single library would handle it. It wasn’t that simple.
Fallback logic
- Inspect the actual byte signature. If a file is already converted to JPEG, just rename it to .jpg and return it as-is.
- Run through the five strategies defined in the conversionMethods array in order. Return immediately on success. Log success/failure and stats for each attempt.
- If every method fails, summarize the accumulated error list to the log, then return the original file instead of throwing, so the upload flow doesn’t break.
- Expose `getHeicConversionInfo` and `debugHeicFile` for support and debugging. They show the strategies available in the current environment, the file header, the ftyp brand, and so on.
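The byte-signature step can be sketched directly: JPEG starts with `FF D8 FF`, while HEIC is an ISO-BMFF container carrying `ftyp` at offset 4 and a brand such as `heic` or `mif1` at offset 8 (brand list abbreviated here):

```typescript
// Sketch of the byte-signature inspection step.
function sniffImage(bytes: Uint8Array): "jpeg" | "heic" | "unknown" {
  // JPEG magic number: FF D8 FF
  if (bytes[0] === 0xff && bytes[1] === 0xd8 && bytes[2] === 0xff) return "jpeg";
  const ascii = (from: number) =>
    Array.from(bytes.slice(from, from + 4), (c) => String.fromCharCode(c)).join("");
  // ISO-BMFF: "ftyp" box at offset 4, major brand at offset 8
  if (ascii(4) === "ftyp") {
    const brand = ascii(8);
    if (["heic", "heix", "mif1", "msf1"].includes(brand)) return "heic";
  }
  return "unknown";
}
```

This is why the "already JPEG" short-circuit in the fallback list is safe: the check reads the actual bytes rather than trusting the file extension.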
Conversion methods and libraries
- `heic2any`: the default strategy. Tries three output options in sequence (JPEG 90%, PNG, JPEG 100%). Treats an empty Blob as a failure.
- `heic-decode`: decodes the HEIF bitstream directly, normalizes the various data structures (Uint8Array, object form, etc.) into ImageData, draws to a canvas, then serializes to JPEG.
- FileReader: reads a Base64 URL with no external library, loads it into an img tag, draws to canvas, and produces a JPEG Blob using only browser-native APIs as the fallback.
- `heic-convert`: dynamically imports the Node-based `heic-convert` module into the browser bundle and tries Buffer-to-JPEG conversion. Treats an empty output buffer as an error.
- Browser-native: as a last resort, uses only `URL.createObjectURL` and a canvas tag to redraw the image and produce a JPEG Blob.
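The ordered-fallback behavior can be sketched as a generic runner; the `Strategy` shape is an assumption for illustration, with strings standing in for files and Blobs:

```typescript
// Sketch of the ordered-fallback runner: try each strategy in order,
// collect failures, and return the original input instead of throwing
// so the surrounding upload flow never breaks.
type Strategy = { name: string; run: (input: string) => Promise<string> };

async function convertWithFallbacks(
  input: string,
  strategies: Strategy[],
  log: (msg: string) => void,
): Promise<string> {
  const errors: string[] = [];
  for (const s of strategies) {
    try {
      const out = await s.run(input);
      if (!out) throw new Error("empty output"); // empty result counts as failure
      log(`${s.name}: ok`);
      return out; // first success wins
    } catch (e: any) {
      errors.push(`${s.name}: ${e?.message ?? e}`);
    }
  }
  log(`all strategies failed: ${errors.join("; ")}`);
  return input; // graceful degradation: hand back the original file
}
```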
I leaned on Claude Code a lot, but the result ended up being a fairly gnarly converter. With this many fallbacks, every HEIC image converts without exception.
The side effect: I now happily dig up any old photo and throw it in.
Wrapping up
Vibe coding!?
After the backend, I went through the frontend at the same fairly minimal level. The frontend leaned on Claude Code far more, and ironically, the higher my dependence on AI got, the lower my productivity went. Especially when I had to debug or fix an issue without much grasp of the code, Claude Code would either fail to find the simple cause of the issue or look at it through too narrow a lens and fall into an infinite loop. Most of the time, when I read the code myself and put my hands on it, the fix turned out to be simple.
AI writes code fast, sure, but in real situations the context drifts constantly because of policy changes and other issues. Trusting AI alone often left me stuck.
Writing an SDD or PRD doc, defining the spec up front, is the closest thing to a real alternative. Once requirements get complex and implementation gets complex, though, you often only know things “after building.” A lot of people treat coding like math, where you plug in a formula and get one correct answer. Reality isn’t that. In a working environment where requirements shift hour by hour, and even when you’re coding solo, context gets lost all the time.
These days I only use Codex, with a flow of: requirements doc, then implementation steps doc, then development broken up by step. That’s been my method, and it’s been much more efficient than before. Once the project I’m on now wraps up roughly, I’ll write a broader piece on what I’ve learned about vibe coding overall.
Wrapping the project
It’s already been over a month since the first release. Using it internally for our own work makes the things I want to improve very visible. We didn’t build this to open it externally, but I’d love to ship even a beta in the near future. The frontend has piled up a lot of bad code, courtesy of Claude. After the project I’m currently working on releases, the goal is to clean those up bit by bit while continuing feature development.
At my previous company I rarely coded the frontend hands-on, so this was the chance to deep-dive properly, study, and actually implement. The reason frontend feels hard isn’t really the frontend itself. It’s that it touches the user directly. When I touch what I built and catch a whiff of how bad it smells, I keep fixing and fixing until hours have disappeared. Even so, watching the UI come to life and run is its own kind of dopamine, different from the backend in a way I can’t quite name.
Same as with the backend, going through this project gave me a clear picture of how I’d run the frontend on the next one.
I finally wrapped up the long-postponed frontend chapter. That's the frontend side; the backend post is next.