
Testing

JD calls out Jest, React Testing Library, and Playwright. Have a coherent testing philosophy ready — they will ask, and "test all the things" is a junior answer.

01. Your testing philosophy (the answer)

I think of tests as a confidence-to-ship tool, not a coverage number. The shape I aim for is roughly a pyramid, but with a heavier middle — a strong base of unit tests for pure logic, a thick layer of integration tests using React Testing Library for component behavior, and a thinner top of E2E tests covering critical user flows. Coverage is a side effect, not a goal.

For unit tests I focus on pure functions — utilities, reducers, calculations. The signal-to-noise ratio is highest there. For UI I lean on RTL and test behavior, not implementation — what does the user see, what does the user do, what happens. Internal state and hooks are tested through their effects, not directly. For E2E I cover the critical user flows on every PR and a broader set on main.

02. Unit testing with Jest / Vitest

What do you test as a unit?

Pure functions — utils, formatters, reducers, validators. Anything where given the same input you get the same output. These are the cheapest tests and the most valuable per minute spent.

I avoid testing implementation details of components at the unit level — that's RTL territory.
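
A minimal sketch of what that looks like. isValidSku and its format are hypothetical; the point is that pure-function tests need zero mocks and the assertions read like a spec (in Jest/Vitest each would be an expect() inside a test block):

```javascript
// Hypothetical pure validator: same input, same output, no side effects.
function isValidSku(sku) {
  // Assumed format for this sketch: three uppercase letters, a dash, three digits.
  return /^[A-Z]{3}-\d{3}$/.test(sku);
}

// Bare assertions keep the sketch self-contained; each line is one edge case.
console.assert(isValidSku('ABC-123') === true);
console.assert(isValidSku('abc-123') === false); // lowercase rejected
console.assert(isValidSku('ABC-12') === false);  // too few digits
console.assert(isValidSku('') === false);
```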

How do you mock dependencies?

In order of preference:

  1. Don't. Refactor to inject the dependency, then pass a fake in the test. Cleaner code and cleaner tests.
  2. Mock at the module boundary: jest.mock('./api') for API calls. Keep mocks shallow.
  3. MSW (Mock Service Worker) for HTTP-level mocking. Best of both worlds — your real fetch code runs, MSW intercepts at the network layer.

The mocking smell: tests that mock 12 things to test 1. That's a sign the unit under test has too many responsibilities.
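
Preference #1 in practice, with hypothetical names (loadOrderSummary, getOrder): the dependency is a parameter, so the test passes a hand-rolled fake instead of reaching for jest.mock:

```javascript
// Hypothetical example: the API client is injected as a parameter,
// so production code passes the real client and the test passes a fake.
async function loadOrderSummary(orderId, api) {
  const order = await api.getOrder(orderId);
  return `${order.sku} × ${order.quantity}`;
}

// The "mock" is just a plain object; no mocking framework needed.
const fakeApi = {
  getOrder: async () => ({ sku: 'ABC-123', quantity: 5 }),
};
// loadOrderSummary('o-1', fakeApi) resolves against the fake.
```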

03. React Testing Library

Why is it called "Testing Library" and not "React Testing Utility"?

The name reflects the philosophy: test the way users interact with your UI, not the implementation details of your component. The guiding principle on the website: "the more your tests resemble the way your software is used, the more confidence they can give you."

What's the right way to query for elements?

RTL provides queries in a recommended priority order:

  1. Accessible to everyone: getByRole, getByLabelText, getByPlaceholderText, getByText
  2. Semantic queries: getByAltText, getByTitle
  3. Test ID: getByTestId — last resort

If your test can find an element with getByRole('button', { name: /submit/i }), you've also verified the button is accessible. Tests that lean heavily on test IDs are a code smell — it usually means the markup isn't accessible.

Can you write a quick test?
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { OrderForm } from './OrderForm';

test('submits with the entered SKU', async () => {
  const onSubmit = jest.fn();
  const user = userEvent.setup();

  render(<OrderForm onSubmit={onSubmit} />);

  await user.type(screen.getByLabelText(/sku/i), 'ABC-123');
  await user.click(screen.getByRole('button', { name: /submit/i }));

  expect(onSubmit).toHaveBeenCalledWith({ sku: 'ABC-123' });
});

Note: userEvent.setup() is preferred over fireEvent — it simulates real user interactions more faithfully.

How do you test async behavior?

Use findBy* queries and waitFor:

// findBy* retries until the element appears, up to a 1000 ms default timeout
expect(await screen.findByText(/order created/i)).toBeInTheDocument();

// waitFor for arbitrary assertions
await waitFor(() => expect(onSubmit).toHaveBeenCalled());

For API calls, use MSW to intercept and respond — your component's real fetch logic runs against a fake server.
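
A sketch of that setup, assuming MSW 2.x and a hypothetical /api/orders endpoint. The handler definitions and lifecycle hooks are the whole configuration; the component under test is unaware of any of it:

```javascript
// Test-side handlers: the component's real fetch('/api/orders') runs,
// and MSW intercepts at the network layer and responds.
import { http, HttpResponse } from 'msw';
import { setupServer } from 'msw/node';

const server = setupServer(
  http.post('/api/orders', () =>
    HttpResponse.json({ id: 'o-1', status: 'created' }, { status: 201 })
  )
);

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers()); // per-test overrides don't leak
afterAll(() => server.close());
```

A per-test override via server.use(...) is how you simulate error responses without touching component code.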

Common RTL mistakes you see?
  • Testing implementation: expect(component.state.x). Just test the rendered output and behavior.
  • Snapshot tests for everything — bloated, brittle, no one reads them when they break. Use snapshots sparingly for stable, hard-to-describe outputs.
  • Over-mocking — mocking the world to test almost nothing.
  • Using container.querySelector when an RTL query exists. The escape hatch is for legitimate edge cases, not laziness.
  • act() warnings ignored — they signal real issues with timing.

04. End-to-end with Playwright

Why Playwright over Cypress?

Both are good. Playwright wins for me on:

  • Multi-browser — Chromium, Firefox, WebKit out of the box.
  • True parallelism across browsers and projects, much faster CI.
  • Auto-waiting — most flakiness vanishes without explicit waits.
  • Network interception — first-class API for mocking, recording, replaying.
  • Trace viewer — debugging failed CI runs is genuinely good.

Cypress has a really nice DX for interactive use and great docs. Either is defensible. For Omnesoft (Playwright in the JD), Playwright.

What does an E2E test look like?
import { test, expect } from '@playwright/test';

test('user can create an order', async ({ page }) => {
  await page.goto('/orders/new');
  await page.getByLabel('SKU').fill('ABC-123');
  await page.getByLabel('Quantity').fill('5');
  await page.getByRole('button', { name: 'Submit' }).click();

  await expect(page.getByText('Order created')).toBeVisible();
});

Note that Playwright's getBy* locators mirror RTL semantics — query by accessible name first, fall back to test IDs.

How do you keep E2E tests fast and reliable?
  • Run them in parallel. Playwright does this by default — make sure tests don't depend on shared state.
  • Use storage state for auth — log in once, reuse the session across tests instead of logging in per test.
  • Seed the backend or use fixtures — don't rely on UI flow to set up test data. Hit the API directly to create state.
  • Smoke set on every PR, full suite on main — keep PR feedback fast.
  • Trace viewer + screenshots on failure — make CI failures debuggable.
  • Stable selectors — accessible names first, test IDs second.
  • Avoid sleeps. Use Playwright's auto-waiting locators.
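
The storage-state point in practice — a sketch following Playwright's documented auth pattern, with placeholder routes, selectors, and credentials:

```javascript
// auth.setup.js: log in once, persist the session for every other test.
import { test as setup } from '@playwright/test';

setup('authenticate', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByLabel('Password').fill('secret');
  await page.getByRole('button', { name: 'Log in' }).click();
  await page.waitForURL('/dashboard');
  // Save cookies + localStorage so tests start already authenticated.
  await page.context().storageState({ path: 'playwright/.auth/user.json' });
});

// In playwright.config.js, the main project then sets:
//   use: { storageState: 'playwright/.auth/user.json' }
```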

05. Visual regression / accessibility tests

Do you do visual regression testing?

Sometimes. Tools: Chromatic (Storybook-based), Playwright's built-in screenshot diffing, Percy. The big trade-off is maintenance — diffs flag intentional changes too, which adds review burden. I'd reach for it when there's a real design-consistency stake (a public-facing landing page, a design system) and skip it for an internal ERP UI where visual exactness matters less than behavior.
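
Playwright's built-in diffing is a one-liner sketch — the route and snapshot name here are placeholders. The first run records a baseline image; later runs fail on differences beyond the threshold:

```javascript
import { test, expect } from '@playwright/test';

test('landing page looks right', async ({ page }) => {
  await page.goto('/');
  // Compares against a stored baseline; tolerates up to 1% changed pixels.
  await expect(page).toHaveScreenshot('landing.png', { maxDiffPixelRatio: 0.01 });
});
```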

How do you test accessibility?

Multiple layers:

  1. Automated: jest-axe in unit tests, @axe-core/playwright in E2E. Catches obvious issues like missing labels, contrast failures, ARIA misuse.
  2. RTL queries themselves — using getByRole exercises the accessibility tree. If a query can't find your button by role, the button isn't accessible.
  3. Manual keyboard testing — tab through anything significant before shipping. If you can't, neither can your users.
  4. Screen reader spot checks — VoiceOver on macOS, NVDA on Windows. Not on every PR, but on critical flows.
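
Layer 1 as a sketch with @axe-core/playwright (the jest-axe variant is analogous, asserting toHaveNoViolations on a rendered container). The route is a placeholder:

```javascript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('order form has no obvious a11y violations', async ({ page }) => {
  await page.goto('/orders/new');
  // Runs the axe-core ruleset against the live accessibility tree.
  const results = await new AxeBuilder({ page }).analyze();
  // Violations carry rule IDs and offending nodes, which makes CI output readable.
  expect(results.violations).toEqual([]);
});
```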

Automated tools catch ~30% of real issues. The rest need human testing. That's worth saying out loud — interviewers like seeing nuance here.

06. What you DON'T need to test

  • Third-party libraries. Don't test that React's useState works.
  • Implementation details. "Did the component use this hook?" — wrong question.
  • Trivial wiring. A component that just passes props through to a child. Test the integration, not the wiring.
  • Static UI. Snapshot tests for "this div has these classes" rarely earn their cost.