Frontend Is a Distributed Systems Problem
A backend engineer deploys code to servers they control: known hardware, known OS, known runtime, known network. A frontend engineer deploys code to every device on the planet simultaneously: unknown hardware, unknown browser version, unknown screen size, unknown network speed, unknown ad blockers, unknown browser extensions, unknown accessibility needs, unknown language. The backend has staging environments that mirror production. The frontend’s production environment is every phone, laptop, and tablet on earth, and no two are identical.
Frontend is a distributed systems problem. We just do not call it that.
I have spent 8 years building frontends, including a Chrome Extension wallet that had to inject code into websites I do not control, a transfer app that handled 20K daily active users until we hit our Netlify limits, a learning platform that auto-switched between 5 Indian languages, and a bridge UI that needed SEO without a server. Every one of these problems was harder than it looked from the outside, and every one was underestimated by someone who thought frontend was “just the UI.”
Here is what frontend engineers are actually solving.
The runtime you do not control
Backend engineers choose their runtime. Node 20, Python 3.12, Go 1.22, whatever you pick, you know exactly what your code runs on. You test against that version and deploy to that version.
Frontend engineers do not choose their runtime. Your code runs on Chrome 125, Chrome 118 (the user has not updated), Safari 17 (which handles CSS differently), Firefox 126, Samsung Internet (yes, that is a real browser with 500 million users), and whatever browser is embedded in a WeChat mini-program. Each of these runtimes has different JavaScript engine behavior, different CSS rendering, different API support, and different bugs.
When I built the Arcana Chrome Extension wallet, the extension had to inject a provider script into every website the user visited. That script runs in the page’s JavaScript context, not in the extension’s sandboxed context. Every website has different Content Security Policies, different JavaScript frameworks polluting the global scope, different third-party scripts competing for window.ethereum, and different DOM structures. The injected script had to work on Aave, Uniswap, Lido, and Hyperliquid, each with its own build tooling, polyfills, and runtime behavior. Testing “does my code work” meant testing against an unbounded number of host environments that I could not predict or control.
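The core hazard in that injection step is the fight over `window.ethereum`. A minimal sketch of the defensive pattern, assuming illustrative names (`installProvider`, `EIP1193Provider`) rather than Arcana's actual code; the global object is passed in as a parameter so the logic stays testable outside a browser:

```typescript
// Minimal EIP-1193-shaped provider interface (illustrative subset).
interface EIP1193Provider {
  request(args: { method: string; params?: unknown[] }): Promise<unknown>;
}

// Install our provider into a host page's global scope without clobbering
// a competing wallet that injected first. Hypothetical sketch, not the
// actual Arcana implementation.
function installProvider(
  globalLike: { ethereum?: EIP1193Provider },
  provider: EIP1193Provider
): "installed" | "coexisting" {
  const existing = globalLike.ethereum;
  if (!existing) {
    // No competing wallet: claim the slot.
    globalLike.ethereum = provider;
    return "installed";
  }
  // Another wallet got there first. Overwriting it breaks the user's other
  // extension, so leave it in place and let dapps discover us through a
  // separate channel (the approach EIP-6963 later standardized).
  return "coexisting";
}
```

The non-obvious design choice is the `"coexisting"` branch: on a page where MetaMask or another wallet already owns the global, silently winning the race is as user-hostile as silently losing it.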
A backend service that needs to interact with 4 different external systems would be called “complex integration work.” A frontend that must run correctly inside 4 different websites simultaneously is called “a Chrome Extension.”
The network is lying to you
Backend services communicate over internal networks: low latency, high bandwidth, predictable performance. When service A calls service B, the network is fast and reliable. If it is not, you set up retries and circuit breakers and call it resilient architecture.
Frontend communicates over the public internet. Your user might be on fiber in Mumbai, 4G on a train in rural Maharashtra, satellite internet in New Zealand, or a corporate VPN that strips certain headers. You do not know. You cannot know. And unlike a backend retry, a frontend network failure happens while a human is staring at the screen waiting for something to happen.
When SendIt scaled to 20K peak daily active users, the frontend had to handle every flavor of network condition simultaneously. Optimistic UI updates (showing the transaction as “sent” before the server confirmed) were not a nice-to-have, they were essential because users on slow connections would stare at a spinner for 3-5 seconds otherwise and assume the app was broken. Error states needed to distinguish between “the server returned an error” (show a message) and “the network request timed out” (show a retry button) and “the user’s connection dropped entirely” (show an offline indicator). Each of these is a different UX path with different recovery logic.
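Those three recovery paths reduce to a small classification problem. A sketch of the idea, with illustrative names (SendIt's actual implementation differed): abort slow requests on a deadline so they surface as timeouts instead of hanging the spinner, and route each failure to its own UX path.

```typescript
// The three distinct failure modes, each mapping to a different UX path.
type Failure = "server-error" | "timeout" | "offline";

function classifyFailure(err: unknown, online: boolean): Failure {
  if (!online) return "offline"; // connection dropped: show offline indicator
  if (err instanceof DOMException && err.name === "AbortError") {
    return "timeout"; // our deadline fired: show a retry button
  }
  return "server-error"; // the server answered with an error: show a message
}

// Usage with fetch: AbortController turns "slow network" into a visible,
// recoverable timeout rather than an indefinite wait.
async function send(url: string, ms = 5000): Promise<Response | Failure> {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), ms);
  try {
    const res = await fetch(url, { signal: ctrl.signal });
    if (!res.ok) return "server-error";
    return res;
  } catch (err) {
    const online = typeof navigator === "undefined" ? true : navigator.onLine;
    return classifyFailure(err, online);
  } finally {
    clearTimeout(timer);
  }
}
```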
The backend team handled scaling by upgrading servers. I handled scaling by migrating the entire frontend infrastructure from Netlify to self-managed AWS because we exhausted Netlify Pro’s bandwidth limits. The backend’s scaling problem was vertical, more server resources. The frontend’s scaling problem was horizontal, more diverse clients with more diverse network conditions hitting the same static assets.
Every pixel is a contract
Backend APIs have contracts: request schemas, response types, error codes. If the API returns { "status": "error", "code": 422 }, the consumer knows exactly what happened. The contract is explicit, documented, and versioned.
Frontend has implicit contracts with users. There is no schema for “this button looks clickable.” There is no spec for “the loading state communicates that something is happening.” There is no error code for “the user does not understand what this screen is asking them to do.” These contracts are enforced by user behavior and users do not file bug reports with stack traces. They just leave.
A backend engineer who designs a bad API gets a GitHub issue. A frontend engineer who designs a bad interface gets user churn. Both are bugs. One is visible in logs. The other is visible only in analytics, if you are tracking the right events.
State management is a distributed consensus problem
A backend service manages state in a database. One source of truth. Reads and writes go to the same place. Consistency is a database problem.
A frontend manages state across multiple sources simultaneously: server state (data from APIs, possibly stale), client state (form inputs, UI toggles, modal open/closed), URL state (query parameters, route segments), browser state (localStorage, sessionStorage, cookies), and device state (online/offline, screen size, orientation, battery level). These sources of truth frequently disagree, and the frontend must reconcile them in real time while the user is interacting with the interface.
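Reconciliation usually comes down to a precedence order among those sources. A toy sketch for a single setting, with a deliberately illustrative precedence (explicit URL state wins, then in-memory client state, then persisted browser state, then the server's possibly stale value); real apps choose different orders per piece of state:

```typescript
// One setting, four competing sources of truth. All names illustrative.
interface Sources {
  url?: string;     // e.g. a ?page= query parameter
  client?: string;  // e.g. a value the user just typed
  storage?: string; // e.g. localStorage from a previous session
  server?: string;  // e.g. the last value an API returned (possibly stale)
}

// First defined source wins; the fallback covers a truly fresh session.
function reconcile(s: Sources, fallback: string): string {
  return s.url ?? s.client ?? s.storage ?? s.server ?? fallback;
}
```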
The CAP theorem plays out in a browser tab. Unlike a backend system, where you can choose consistency over availability, a frontend must always remain available: you cannot show the user a “503 Service Unavailable” page because one of the endpoints timed out.
Accessibility is not a feature. It is a constraint.
Backend engineers do not think about whether their API is usable by people with disabilities. The API returns JSON. JSON does not care who is reading it.
Frontend engineers build for users who navigate with keyboards, screen readers, voice control, switch devices, and eye-tracking systems. A button that works with a mouse click but not with a keyboard press is a bug. A modal that traps focus but does not return focus to the trigger element when closed is a bug. A form with red error text but no aria-label is invisible to blind users. A chart that conveys information only through color is meaningless to colorblind users.
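The focus-trap behavior behind an accessible modal is mostly wrap-around arithmetic. A sketch that isolates that arithmetic (a real implementation also queries the DOM for focusable elements and returns focus to the trigger on close; the function name here is illustrative):

```typescript
// Given the currently focused element's index among the modal's focusable
// elements, compute where Tab or Shift+Tab should move focus, wrapping at
// both edges so focus never escapes the dialog.
function nextFocusIndex(
  current: number,
  count: number,
  shiftKey: boolean
): number {
  if (count === 0) return -1; // nothing focusable: caller should bail out
  const delta = shiftKey ? -1 : 1;
  return (current + delta + count) % count; // +count keeps the modulo positive
}
```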
These are not edge cases. Keyboard navigation alone affects every power user, every user with a broken trackpad, every user on a TV browser, and every user who prefers not to use a mouse. Accessibility is a constraint that shapes every component, every interaction, and every architectural decision. It is woven into the foundation, not bolted on at the end.
Internationalization is not just translation
Backend internationalization typically means storing strings in a translation file and returning the correct locale’s string based on a header. The backend does not care whether the translated string is longer than the English version and breaks the layout.
The frontend cares. When we built the Agrostar farmer learning platform supporting 5 Indian languages, every screen had to render correctly in Devanagari, Tamil, Telugu, Kannada, and Gurmukhi scripts — each with different character widths, different line-height requirements, and different text wrapping behavior. A button that says “Start Learning” in English might say “सीखना शुरू करें” in Hindi — which is significantly longer and may break the button’s layout if the container has a fixed width. The video player had to auto-switch audio tracks and subtitles based on the user’s language preference, and that preference had to persist across sessions.
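The persistence side reduces to a resolution order: a saved choice wins, then the first browser language the app supports, then a default. A sketch under that assumption; the language codes here (Hindi, Tamil, Telugu, Kannada, Punjabi) match the five scripts above but are not Agrostar's actual configuration:

```typescript
// Supported locales, illustrative of the five scripts described above.
const SUPPORTED = ["hi", "ta", "te", "kn", "pa"] as const;
type Locale = (typeof SUPPORTED)[number];

function resolveLocale(
  saved: string | null,            // e.g. read from localStorage
  browserLangs: readonly string[]  // e.g. navigator.languages
): Locale {
  // 1. An explicit saved preference always wins.
  if (saved && (SUPPORTED as readonly string[]).includes(saved)) {
    return saved as Locale;
  }
  // 2. Otherwise, first supported browser language ("hi-IN" -> "hi").
  for (const lang of browserLangs) {
    const base = lang.toLowerCase().split("-")[0];
    if ((SUPPORTED as readonly string[]).includes(base)) return base as Locale;
  }
  // 3. Illustrative default.
  return "hi";
}
```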
Internationalization on the frontend is not a string lookup. It is a layout system, a typography system, a media system, and a persistence system — all wired together and triggered by a single user setting.
Performance is a user experience problem, not a technical one
Backend performance is measured in milliseconds of server response time. Frontend performance is measured in how long a human feels they are waiting.
These are fundamentally different metrics. A server that responds in 200ms is fast. A frontend that shows a blank screen for 200ms, then a layout shift, then loads an image that pushes the content down — that frontend feels slow even though every individual operation completed quickly. Cumulative Layout Shift, Largest Contentful Paint, First Input Delay, Interaction to Next Paint — these Core Web Vitals are not technical benchmarks. They are perceptual benchmarks that measure the user’s subjective experience of speed.
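Cumulative Layout Shift illustrates how perceptual these metrics are: it is not a sum of all shifts but the worst “session window” of shifts (shifts less than a second apart, windows capped at five seconds), with shifts right after user input excluded. A simplified sketch of that scoring, fed by plain objects rather than a live `PerformanceObserver`:

```typescript
// Shape of a layout-shift entry (subset of the browser's
// LayoutShift performance entry). Times in milliseconds.
interface ShiftEntry {
  value: number;           // layout-shift score for this shift
  startTime: number;       // ms since navigation
  hadRecentInput: boolean; // shifts right after input do not count
}

function computeCLS(entries: ShiftEntry[]): number {
  let max = 0;
  let windowSum = 0;
  let windowStart = 0;
  let lastTime = -Infinity;
  for (const e of entries) {
    if (e.hadRecentInput) continue;
    // Start a new session window after a 1s gap or a 5s window span.
    const newWindow =
      e.startTime - lastTime >= 1000 || e.startTime - windowStart >= 5000;
    if (newWindow) {
      windowSum = 0;
      windowStart = e.startTime;
    }
    windowSum += e.value;
    lastTime = e.startTime;
    max = Math.max(max, windowSum); // CLS is the worst window, not the total
  }
  return max;
}
```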
My Outline Analytics SDK is under 4KB gzipped with zero dependencies. That is not a technical flex — it is a product decision. Every analytics SDK the user’s site loads adds to their page weight, competes for network bandwidth, and potentially blocks rendering. A 40KB analytics library on a page that otherwise loads in 1.2 seconds might push it to 1.5 seconds. That 300ms difference is the difference between a user staying and a user bouncing. My SDK’s size is a feature that directly affects my users’ users.
SEO is an architecture decision
Backend engineers do not think about whether Google can understand their service. Google does not crawl your API.
Google crawls your frontend. And how you architect your frontend determines whether Google can index it. A React SPA that renders everything client-side is invisible to search engines unless you add server-side rendering, static generation, or a pre-rendering layer. This is an architectural decision that affects your deployment infrastructure, your hosting costs, your development workflow, and your framework choice.
For Nexus Fastbridge, we needed individual chain pages to be indexable by Google and render proper previews on social media. Instead of adopting Next.js (which would have added a server runtime, SSR complexity, and hydration debugging), I wrote a custom solution (not a Vite plugin, just a simple script run at build time) that generates static HTML pages with correct meta tags during the build process. The React SPA hydrates on top of these static shells. Google indexes the static HTML. Users interact with the hydrated SPA. Both are happy.
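The shape of such a build-time script is small. A sketch assuming an illustrative template marker and page list (not Fastbridge's actual code): the script stamps per-page meta tags into a copy of the SPA's `index.html` shell, one output file per route.

```typescript
// Per-page metadata supplied at build time. All names illustrative.
interface PageMeta {
  path: string;        // e.g. "/chains/ethereum" -> output file location
  title: string;
  description: string;
}

// Replace a marker the SPA shell leaves in <head>. Crawlers and social
// link unfurlers read this static HTML; the SPA then hydrates on top.
function renderShell(template: string, meta: PageMeta): string {
  const tags = [
    `<title>${meta.title}</title>`,
    `<meta name="description" content="${meta.description}">`,
    `<meta property="og:title" content="${meta.title}">`,
    `<meta property="og:description" content="${meta.description}">`,
  ].join("\n    ");
  return template.replace("<!-- META -->", tags);
}
```

At build time a few lines of Node glue would loop over the page list, call `renderShell`, and write each result to `dist/<path>/index.html`, which is all the "pre-rendering layer" amounts to here.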
This was a build system decision, a deployment architecture decision, and a frontend architecture decision, all driven by the requirement that search engines need to read the page. No backend engineer has ever had to restructure their build pipeline because Google could not understand their API responses.
Testing the untestable
Backend tests are deterministic. Given input X, the function returns output Y. If the test passes on your machine, it passes in CI, and it passes in production (assuming the same runtime version).
Frontend tests run in browsers, real or emulated, where rendering behavior varies, animation timing is non-deterministic, and user interaction sequences are unpredictable. A component that renders correctly in Chrome might overflow its container in Safari because Safari calculates flex-basis differently. An animation that runs at 60fps on your MacBook might jank at 15fps on an Android phone. A drag-and-drop interaction that works with a mouse fails with touch events on mobile.
End-to-end tests for frontends are inherently flaky because they depend on rendering timing, network speed, and browser behavior that varies between runs. The frontend testing pyramid (unit tests for logic, component tests for rendering, integration tests for user flows, visual regression tests for appearance) exists because no single testing approach can cover the full surface area of what “the frontend works correctly” means.
The point is not that frontend is harder than backend
It is not a competition. Backend engineering has its own deep challenges: distributed systems coordination, database performance, data consistency at scale, security, and infrastructure management. Those are genuinely hard problems.
The point is that frontend engineering is solving an equivalently hard set of problems that happen to be less visible because the output is “a screen” rather than “a system.” When a company allocates one frontend engineer and four backend engineers to a project because “the frontend is just the UI layer,” they are making a resource allocation error based on a misunderstanding of what the frontend engineer is actually doing.
The frontend engineer is building a distributed client application that runs on hardware they do not control, over networks they cannot predict, for users whose needs they must anticipate, in browsers that behave differently, while maintaining performance, accessibility, internationalization, SEO, and visual consistency simultaneously.
That is not “just the UI.” That is engineering.
Written by Shrinath Prabhu, Senior Staff Frontend Engineer at Avail Project. 8+ years building production frontends across Web3, SaaS, and developer tools. Case studies at shrinath.me/work.