
Performance optimization

This is your strongest territory and they explicitly want it. The JD calls out: "Optimize rendering performance for large datasets and complex UI state. Improve load time and responsiveness (Core Web Vitals, Lighthouse). Profile and resolve performance bottlenecks." Be ready to talk about this fluently in stage 1, deeply in stage 3.

01 Core Web Vitals

What are the Core Web Vitals?

Three user-experience metrics Google publishes, with thresholds:

  • LCP (Largest Contentful Paint) — time until the largest above-the-fold element renders. Good: ≤ 2.5s.
  • INP (Interaction to Next Paint) — replaced FID in March 2024. Measures how quickly the page responds to user input, reported as (roughly) the worst interaction latency over the page's lifetime. Good: ≤ 200ms.
  • CLS (Cumulative Layout Shift) — visual stability — how much things move unexpectedly. Good: ≤ 0.1.

Supporting metrics worth knowing: FCP (First Contentful Paint), TTFB (Time to First Byte), TTI (Time to Interactive).

How do you improve LCP?

The LCP element is usually a hero image, video, or a large block of text. Strategies in order of impact:

  1. Faster TTFB — server-side caching, edge delivery, faster origin.
  2. Preload the LCP resource: <link rel="preload" as="image" href="...">.
  3. Modern image formats — WebP, AVIF; serve appropriately sized images via srcset.
  4. Eliminate render-blocking resources — inline critical CSS, defer non-critical JS.
  5. Avoid client-side rendering for the hero — SSR or SSG the above-the-fold content.
  6. fetchpriority="high" on the hero image.

How do you improve INP / interaction latency?

INP is about keeping the main thread free. Strategies:

  • Break up long tasks — anything > 50ms blocks input. Use scheduler.yield(), setTimeout(0), or requestIdleCallback to chunk work.
  • useTransition / useDeferredValue in React for low-priority state updates.
  • Move work off the main thread — Web Workers for heavy computation.
  • Debounce expensive handlers — search, filter.
  • Virtualize long lists — don't render what isn't visible.
  • Reduce hydration cost — ship less JS, use streaming SSR or React Server Components.
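The "break up long tasks" bullet can be sketched as below. `intoBatches`, `yieldToMain`, and the batch size of 100 are illustrative choices, not a standard API (though `scheduler.yield()` itself is real in Chromium):

```javascript
// Split one long task into batches that yield back to the main thread,
// so input events can run between slices of work.
function intoBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Prefer scheduler.yield() where available; setTimeout(0) is the fallback.
function yieldToMain() {
  if (globalThis.scheduler && typeof globalThis.scheduler.yield === "function") {
    return globalThis.scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processAll(items, processItem, batchSize = 100) {
  for (const batch of intoBatches(items, batchSize)) {
    batch.forEach(processItem); // do one slice of the work…
    await yieldToMain();        // …then let pending input handlers run
  }
}
```

Without the yield, processing all items in one loop is a single long task; with it, no single task exceeds the cost of one batch.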

How do you improve CLS?
  • Always set width and height on images and videos so the browser reserves the space.
  • Reserve space for ads, embeds, and dynamically-injected content with min-height placeholders.
  • Avoid inserting content above existing content. If you must, overlay it (position: fixed or absolute) so it doesn't push the rest of the page down.
  • Use font-display: optional or swap with the right fallback metrics — size-adjust, ascent-override — to avoid font-swap layout shifts.
  • Skeletons that match the final layout, not generic spinners.

02 Bundle size & loading

How do you reduce JavaScript bundle size?
  1. Measure first. Use webpack-bundle-analyzer, vite-bundle-visualizer, or source-map-explorer. You can't optimize what you can't see.
  2. Code splitting. Split by route at minimum. Lazy-load heavy components (charts, editors, modals).
  3. Tree shaking. Use ES modules. Avoid default-exporting an object of utilities. Mark your package as "sideEffects": false when safe.
  4. Find the bloat. Common offenders: moment (use date-fns or dayjs), lodash (use lodash-es with named imports, or just write the function), full-icon-set imports.
  5. Dynamic imports for rarely-used code paths. Admin features, charting libs, PDF generators.
  6. Compression. Brotli > gzip. Make sure your CDN is doing it.
  7. Watch transitive deps. npm ls or your bundle visualizer will show what's hiding inside that one library you added "just for one helper."
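The dynamic-import point (step 5) usually comes with a cache so repeated triggers don't re-request the chunk. A minimal sketch; `once`, `loadPdfExporter`, and the PDF exporter itself are hypothetical names, and the stub loader stands in for a real `import("./pdf-exporter.js")`:

```javascript
// Wrap any loader so the underlying chunk is fetched at most once.
function once(load) {
  let promise = null;
  return () => (promise ??= load());
}

// In a real app this would be:
//   const loadPdfExporter = once(() => import("./pdf-exporter.js"));
const loadPdfExporter = once(() =>
  Promise.resolve({ exportToPdf: (doc) => `pdf:${doc.title}` })
);

async function onExportClick(doc) {
  // The exporter's code is only loaded when the user actually exports.
  const { exportToPdf } = await loadPdfExporter();
  return exportToPdf(doc);
}
```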

What's the difference between code splitting and tree shaking?

Tree shaking happens at build time — the bundler removes unused exports from the final bundle.

Code splitting creates separate bundles loaded on demand at runtime. Your route-level lazy-loaded chunks, your dynamic imports.

They're complementary. Tree shaking shrinks each bundle; code splitting reduces what loads upfront.

How does HTTP caching work for your assets?

Two-tier strategy that's standard now:

  • Hashed asset filenames (main.a8f3d.js) get Cache-Control: public, max-age=31536000, immutable — cached forever, since the filename changes when contents do.
  • HTML files get short cache or no-cache, so they always reflect the latest hashed asset names.

Add ETag or Last-Modified for conditional requests on resources that don't have content hashes.
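The two-tier policy is easy to express as a routing rule. A sketch (the hash-matching regex and middleware shape are illustrative, not a specific framework's API):

```javascript
// Hashed assets: cache forever. Everything else (HTML): revalidate.
const HASHED_ASSET = /\.[0-9a-f]{5,}\.(js|css|woff2|png|svg)$/;

function cacheHeadersFor(path) {
  if (HASHED_ASSET.test(path)) {
    // Safe to cache for a year: a content change produces a new filename.
    return "public, max-age=31536000, immutable";
  }
  // Still cacheable, but the browser must revalidate (ETag/Last-Modified)
  // before using it, so HTML always points at current asset names.
  return "no-cache";
}
```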

What are preload, prefetch, preconnect?
  • preconnect: warm up DNS+TCP+TLS to a critical origin before you actually need it.
  • preload: fetch a resource you'll need on the current page, with high priority. Use for fonts, hero images, critical scripts.
  • prefetch: low-priority fetch of a resource you'll probably need on the next page or soon. Routes the user is likely to navigate to.
  • modulepreload: like preload but for ES modules with proper dependency handling.

Don't preload everything — it competes with the actual critical path.

03 React performance

How do you find and fix unnecessary re-renders in React?

Process:

  1. Open React DevTools Profiler. Record a session of the slow interaction.
  2. Look at the flame graph. Find components rendering more than they need to.
  3. For each one, ask: did its props or state actually change?
  4. Common causes:
    • New object/array literal passed as prop every render — useMemo the prop, or restructure.
    • Inline function passed as prop — useCallback if it's going to a memoized child.
    • Context provider re-rendering, causing every consumer to re-render even if their slice didn't change — split contexts, or use a selector library.
    • State lifted too high — push it down to the smallest subtree that needs it.
  5. Wrap the legitimately-expensive child in React.memo.

Don't memoize blindly. Each memo/useMemo has a comparison cost.
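The root cause behind the "new object literal every render" bullet is that React.memo and context compare by reference, not by contents. A plain-JS illustration (`getFilters` is a hypothetical per-render prop factory):

```javascript
// A component that builds its prop inline does the equivalent of this
// on every render:
function getFilters() {
  return { sort: "asc" }; // fresh object each call
}

const a = getFilters();
const b = getFilters();

// Equal in content, but they fail the identity check React uses:
const sameContents = JSON.stringify(a) === JSON.stringify(b); // true
const sameIdentity = Object.is(a, b);                         // false
```

useMemo (or hoisting the object outside the component) keeps the identity stable, so the memoized child's comparison passes.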

How do you render a list of 50,000 rows performantly?

Virtualization. Only render the rows visible in the viewport plus a small buffer.

Tools: @tanstack/react-virtual, react-window, or AG Grid (which is virtualized by default — that's relevant for Omnesoft).

Beyond virtualization:

  • Stable keys so React doesn't tear down rows on scroll.
  • Memoize row components — React.memo with a custom equality if needed.
  • Avoid heavy work per row — defer formatting, lazy-load images, defer non-critical cell content.
  • If rows are interactive, consider event delegation at the container level rather than handlers per row.
  • Pagination or windowed loading if 50k is more than realistically needs to be in memory.
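At its core, a virtualizer just computes which slice of the list intersects the viewport. A minimal sketch for fixed-height rows (real libraries like react-window and @tanstack/react-virtual add variable heights, measurement, and caching):

```javascript
// Which rows should actually be mounted, given the scroll position.
function visibleRange({ scrollTop, viewportHeight, rowHeight, rowCount, overscan = 5 }) {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount, last + overscan), // exclusive
  };
}
```

For a 600px viewport with 30px rows, this mounts ~25–30 rows regardless of whether the list has 500 or 50,000 entries.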

What's the cost of Context, and how do you mitigate it?

Every consumer of a Context re-renders when the Context value changes (by reference). If you put a frequently-changing object in a single Context, every consumer rebuilds, even ones that only care about one field.

Mitigations:

  • Split contexts by what changes together. Theme in one, user in another, app state in a third.
  • Don't inline the value — useMemo the provider's value object so it doesn't change identity unnecessarily.
  • Selector pattern — use a library like use-context-selector or move to Zustand/Redux for fine-grained subscriptions.
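The mechanism behind the selector pattern can be sketched as a tiny store: consumers are notified only when their selected slice changes. This is an illustration of the idea behind use-context-selector and Zustand, not their actual implementations:

```javascript
function createStore(initial) {
  let state = initial;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach((listener) => listener());
    },
    // Notify only when the selected slice changes identity.
    subscribe(selector, onChange) {
      let prev = selector(state);
      const listener = () => {
        const next = selector(state);
        if (!Object.is(prev, next)) {
          prev = next;
          onChange(next);
        }
      };
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
  };
}
```

A consumer subscribed to `s => s.theme` stays quiet while unrelated fields churn, which is exactly what a plain Context can't do.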

04 Profiling & tools

Walk me through your performance debugging workflow.

I start with the user-visible problem — what specifically is slow? Initial load? An interaction? A specific action? That decides which tools.

For load:

  • Lighthouse for a quick audit.
  • WebPageTest for waterfalls and real-network conditions.
  • Chrome DevTools Network tab — what's blocking, what's late, what's huge.
  • Bundle visualizer to find the bloat.

For interactions / runtime:

  • Chrome DevTools Performance tab — main-thread flame chart shows where the time is.
  • React DevTools Profiler — find slow renders and unnecessary work.
  • Performance API (performance.measure) for custom timing of critical paths.
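Custom timing with the Performance API looks like this; "filter" is a hypothetical label for whatever critical path you're measuring, and the loop is a stand-in workload:

```javascript
performance.mark("filter:start");
// …the expensive synchronous work being measured…
for (let i = 0; i < 1e6; i++);
performance.mark("filter:end");
performance.measure("filter", "filter:start", "filter:end");

const [entry] = performance.getEntriesByName("filter");
// entry.duration is the elapsed time in milliseconds; send it to your
// RUM backend, or view it on the DevTools Performance timeline.
```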

For real users:

  • RUM tools — Sentry, DataDog, App Insights — watch P75/P95 over time.
  • Web Vitals JS library to capture LCP/INP/CLS for actual users and send to your backend.
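Wiring that up is short. `onLCP`/`onINP`/`onCLS` are the web-vitals library's API; the payload shape and the /analytics endpoint are our own choices (sketched, not prescriptive):

```javascript
function toPayload(metric) {
  return JSON.stringify({
    name: metric.name,     // "LCP" | "INP" | "CLS"
    value: metric.value,   // ms for LCP/INP, unitless score for CLS
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
  });
}

// In the app, using sendBeacon so the report survives page unload:
// import { onLCP, onINP, onCLS } from "web-vitals";
// const report = (metric) => navigator.sendBeacon("/analytics", toPayload(metric));
// onLCP(report); onINP(report); onCLS(report);
```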

The discipline I try to keep: measure, fix the biggest thing, measure again. No guessing.

05 Performance budgets

What's a performance budget? Have you used one?

A performance budget is an explicit limit on size or speed metrics that the team agrees not to exceed without conscious decision. Examples:

  • JS budget: 200kb gzipped on the critical path.
  • LCP budget: 2.5s on a 4G connection.
  • Per-route bundle budget: 50kb new JS for a new route.

Enforced via Lighthouse CI, bundle-size CI checks, or a package like size-limit. The point isn't punishment — it's making the trade-off visible. "We can add this lib, but it costs us X kb. Is it worth it?"
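A size-limit budget lives in package.json; a minimal sketch (the file paths and numbers are examples matching the budgets above):

```json
{
  "scripts": { "size": "size-limit" },
  "size-limit": [
    { "path": "dist/main.js", "limit": "200 kB" },
    { "path": "dist/admin.js", "limit": "50 kB" }
  ]
}
```

CI runs `npm run size` and fails the build when a bundle crosses its limit.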

06 Your Artlist story (rehearse this)

Have a 60-second version of your Artlist performance story ready. Translation infra, locale shipping, performance impact. See the STAR stories page for the full version. Bring at least one number — even if it's rough — because numbers separate you from candidates who say "I worked on perf." Recommended numbers to anchor on:

  • JS bundle reduction (kb or %)
  • Locale-switch time improvement
  • Translation key count reduction (dead-code work)

Even rough numbers ("around 30% reduction") beat no numbers.