Interview Prep

The QA & Test Engineering Interview

Automation · Strategy · CI/CD · Quality Advocacy

QA interviews test more than whether you can write automated tests. At mid and senior level, they're evaluating your testing philosophy, your ability to reason about risk, and whether you can influence a team's approach to quality — not just execute test cases.

What the interview process looks like

1. Technical screen — 45 min
Testing theory, light automation questions, and often a scenario: "How would you test a login page?" They're calibrating seniority and whether you think about testing beyond the happy path.
2. Coding challenge — 60–90 min
Write automated tests for a provided codebase or API, or debug and fix a broken test suite. Clean, maintainable test code matters as much as coverage.
3. Test design session — 60 min
"Here's a feature. Design a test strategy." Or a whiteboard test plan for a complex system. They want to see risk-based thinking and realistic prioritisation — not a test case catalogue.
4. Strategy & process discussion — 60 min
Your experience, your opinions on tooling, how you've introduced automation to teams, how you work with developers and PMs. This round reveals your actual seniority.
5. Pair programming session (sometimes)
Extend or refactor an existing test suite alongside an engineer. Shows how you work with code you didn't write and how you communicate technical decisions.

Testing Fundamentals

A weak foundation signals a junior regardless of automation tool experience. Interviewers use theory questions to establish the baseline before going deep on practical skills.

The testing pyramid

The pyramid describes the ideal distribution of test types in a suite. Many teams invert it — too many E2E tests, too few unit tests — and pay the price in slow, fragile pipelines.

  • Unit tests — fast, cheap, isolated. Test a single function or class. Should make up the majority of your suite.
  • Integration tests — test how components work together (service + database, service + external API). Slower, but catch a different class of bugs.
  • E2E tests — test the full user journey through the UI. Valuable but expensive. Slow, brittle, hard to maintain. Use sparingly for critical paths only.
  • Know why inverting the pyramid is an anti-pattern: E2E tests take minutes, break on cosmetic changes, and give you no information about where the bug is.

Test doubles

  • Mock — a fake object that records interactions and lets you assert on them. "Was this method called? With what arguments?"
  • Stub — returns a predefined value. Replaces a real dependency to control the test environment. No verification on how it was called.
  • Spy — wraps a real object and records interactions without replacing the implementation.
  • Fake — a working implementation, but simplified. An in-memory database instead of a real one. The most powerful double, but the most effort to build.
  • Know when to use each. Over-mocking leads to tests that pass when the real system is broken.
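The stub/mock distinction above can be shown in a few lines with Python's unittest.mock. The SignupService and its email client here are hypothetical, invented only to illustrate the two verification styles:

```python
from unittest.mock import Mock

# Hypothetical service under test: sends a welcome email on registration.
class SignupService:
    def __init__(self, email_client):
        self.email_client = email_client

    def register(self, address):
        self.email_client.send(to=address, template="welcome")
        return True

# Stub-style use: preconfigure a return value, never verify the call.
stub_client = Mock()
stub_client.send.return_value = None
assert SignupService(stub_client).register("a@example.com") is True

# Mock-style use: assert on the interaction itself.
mock_client = Mock()
SignupService(mock_client).register("a@example.com")
mock_client.send.assert_called_once_with(to="a@example.com", template="welcome")
```

The same Mock object serves both roles; what makes it a stub or a mock is whether the test verifies the interaction.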

Test design techniques

  • Equivalence partitioning — divide input space into groups that should behave identically. Test one value per partition rather than every value.
  • Boundary value analysis — bugs cluster at the edges. If a field accepts 1–100, test 0, 1, 2, 99, 100, 101.
  • Decision tables — for complex business logic with multiple conditions. Map every combination of inputs to expected outputs.
  • Exploratory testing — unscripted, experience-driven investigation. Complements scripted tests, especially for new features or after major changes.
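Equivalence partitioning and boundary value analysis combine naturally. A minimal sketch, assuming a hypothetical quantity field that accepts 1–100 inclusive:

```python
# Hypothetical validator for a field accepting 1-100 inclusive.
def is_valid_quantity(n: int) -> bool:
    return 1 <= n <= 100

# Boundary value analysis: test just outside, on, and just inside each edge.
# Each (value, expected) pair covers one partition or boundary.
cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in cases.items():
    assert is_valid_quantity(value) is expected, f"failed at {value}"
```

In a pytest suite each pair would typically become a @pytest.mark.parametrize row, so a failing boundary is reported individually.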

Test Automation

The framework is not the skill. Interviewers care whether you write maintainable, reliable tests — not whether you know the latest tool. A flaky test suite in Playwright is worse than a solid one in Selenium.

Framework landscape

  • Playwright — modern, multi-browser, multi-language (JS, Python, Java, C#). Auto-waiting, trace viewer, screenshot on failure. The current default for new projects.
  • Cypress — JavaScript-only, runs in the browser process. Excellent developer experience, but it can't test multi-tab flows or native file dialogs, and cross-browser support beyond the Chrome family (Firefox, experimental WebKit) arrived late.
  • Selenium / WebDriver — the original. Still widely used in enterprise. More verbose, but proven at scale. Know it exists and why companies haven't migrated.
  • Pytest (Python) — the default for Python test suites. Fixtures, parametrize, plugins. Used for API testing, unit testing, and integration testing.

Page Object Model

The single most important pattern in UI test automation. Without it, test suites become unmaintainable as the UI evolves.

  • Each page or component gets a class that encapsulates its locators and actions
  • Tests interact with the page through the model, not through raw locators
  • When the UI changes, you update the Page Object once — not every test that touches that element
  • Know the critique too: Page Objects can become fat classes that are hard to reuse. The component/fragment pattern addresses this.
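A minimal sketch of the pattern, assuming a hypothetical driver object exposing fill(selector, text) and click(selector), in the shape of Playwright's sync API. The recording FakePage stands in for a real browser here so the sketch is self-contained:

```python
# Page Object: locators and actions for one page live in one class.
class LoginPage:
    # Selectors are private to the page object; tests never see them.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, page):
        self.page = page

    def login(self, username, password):
        self.page.fill(self.USERNAME, username)
        self.page.fill(self.PASSWORD, password)
        self.page.click(self.SUBMIT)

# Recording fake in place of a real browser page, to keep this runnable.
class FakePage:
    def __init__(self):
        self.actions = []
    def fill(self, selector, text):
        self.actions.append(("fill", selector, text))
    def click(self, selector):
        self.actions.append(("click", selector))

fake = FakePage()
LoginPage(fake).login("alice", "s3cret")
assert fake.actions[-1] == ("click", "button[type=submit]")
```

If the submit button's selector changes, only LoginPage.SUBMIT is updated; every test that logs in stays untouched.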

Flaky tests — have a point of view

Flaky tests are a major interview topic. Interviewers use them to assess your debugging skills and your attitude toward test quality.

  • Common causes: timing issues (no proper wait), shared test state, environment dependencies, non-deterministic test data
  • Fix timing with proper waits, not sleep(). Use explicit wait conditions: wait until element is visible, wait until network is idle.
  • Fix shared state by isolating tests: each test creates its own data, cleans up after itself, never depends on another test's outcome.
  • Quarantine flaky tests — don't let them block CI while you investigate. But never accept them as permanent.
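The "wait, don't sleep" point boils down to polling a condition with a deadline. Frameworks do this for you (Playwright's auto-waiting, Selenium's WebDriverWait); this sketch shows the underlying idea:

```python
import time

# Explicit wait: poll a condition until it holds or a deadline passes,
# instead of sleeping a fixed duration and hoping it was long enough.
def wait_until(condition, timeout=5.0, interval=0.05):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")

# Simulated slow resource: becomes ready shortly after the test starts.
ready_at = time.monotonic() + 0.2
assert wait_until(lambda: time.monotonic() >= ready_at)
```

A fixed sleep is either too short (flaky) or too long (slow); polling returns as soon as the condition holds and fails loudly when it never does.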

API Testing

What to test and how

  • Happy path — valid inputs, expected responses. Start here, but don't stop here.
  • Error cases — invalid inputs, missing required fields, wrong data types. Does the API return 400 with a useful error, or 500 with a stack trace?
  • Authentication & authorisation — missing token, expired token, insufficient permissions. These bugs ship to production constantly.
  • Rate limiting & edge cases — extremely long strings, special characters, empty payloads, duplicate requests.
  • Contract testing (Pact) — verify that the consumer's expectations of the API contract match what the provider actually delivers. Catch breaking changes before deployment.
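A happy-path and error-case check can be demonstrated end to end with only the standard library. The /users endpoint and its validation rule are invented for illustration; a real suite would use pytest with requests or httpx against a deployed service:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen
from urllib.error import HTTPError

# Toy API: POST /users requires an "email" field.
class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        data = json.loads(self.rfile.read(length) or b"{}")
        if "email" not in data:
            self.send_response(400)
            payload = {"error": "email is required"}
        else:
            self.send_response(201)
            payload = {"id": 1, "email": data["email"]}
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(payload).encode())

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

def post(path, data):
    req = Request(base + path, json.dumps(data).encode(),
                  {"Content-Type": "application/json"})
    try:
        with urlopen(req) as resp:
            return resp.status, json.loads(resp.read())
    except HTTPError as err:
        return err.code, json.loads(err.read())

# Happy path: valid input returns 201 with the created resource.
status, body = post("/users", {"email": "a@example.com"})
assert status == 201 and body["email"] == "a@example.com"

# Error case: missing required field returns 400 with a useful message.
status, body = post("/users", {})
assert status == 400 and "email" in body["error"]

server.shutdown()
```

Note the error-case assertion checks both the status code and that the error message is actionable, which is exactly the 400-vs-500 distinction above.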

REST fundamentals

  • HTTP methods: GET (read, idempotent), POST (create), PUT (replace, idempotent), PATCH (partial update), DELETE (remove, idempotent)
  • Status codes: 2xx (success), 3xx (redirect), 4xx (client error), 5xx (server error). Know the common ones: 200, 201, 400, 401, 403, 404, 409, 422, 500, 503.
  • Headers: Content-Type, Accept, Authorization, Cache-Control. Know what each does.
  • Idempotency: calling the same request multiple times has the same effect as once. Critical for safe retries.
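Idempotency is easiest to see with a toy resource store: repeating a PUT converges on the same state, while repeating a POST keeps creating. The store is a plain dict, purely illustrative:

```python
# Toy resource store: PUT is idempotent, POST is not.
store, next_id = {}, [1]

def put(resource_id, data):
    store[resource_id] = data        # same end state however often it runs

def post(data):
    rid = next_id[0]
    next_id[0] += 1
    store[rid] = data                # every call creates a new resource
    return rid

put(42, {"name": "a"})
put(42, {"name": "a"})               # safe retry: still exactly one resource
assert len(store) == 1

post({"name": "b"})
post({"name": "b"})                  # retrying a POST duplicates the resource
assert len(store) == 3
```

This is why retry logic can blindly repeat a PUT or DELETE, but a retried POST needs an idempotency key or deduplication on the server.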

CI/CD & Shift-Left Testing

QA that only runs at the end of the sprint isn't quality assurance — it's delay assurance. The strongest QA engineers understand how to embed quality into the development process, not append it at the end.

Where tests belong in the pipeline

  • On every commit / PR — unit tests and fast integration tests. Must complete in under 5 minutes or developers stop waiting for them.
  • On merge to main — full integration suite and API tests. Can be slower — up to 15–20 minutes.
  • Nightly or on-demand — full E2E regression suite, performance tests. Too slow for every commit but essential before releases.
  • Parallelisation is the key lever for keeping pipelines fast as the test suite grows.
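One common way to implement these stages with pytest is marker-based selection, so each pipeline stage runs a slice of the same suite. A sketch of the configuration (marker names are illustrative):

```ini
# pytest.ini — register markers so CI stages can select test slices
[pytest]
markers =
    integration: slower tests that hit real services (run on merge to main)
    e2e: full browser journeys (nightly or pre-release only)
```

The PR stage then runs `pytest -m "not integration and not e2e"`, the merge stage adds `-m integration`, and the nightly job runs `-m e2e`, typically with a parallel runner such as pytest-xdist (`-n auto`) to keep durations down.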

Shift-left in practice

  • Three amigos — developer, QA, and product discuss the feature before a line of code is written. Catch ambiguities and edge cases while they're cheap to fix.
  • Definition of done — include "automated tests written and passing" and "tested in staging" in your team's DoD. Make QA criteria explicit, not implied.
  • Feature flags — release code dark, enable gradually. Allows safe production testing and instant rollback.
  • Developer self-testing — QA's job is not to catch every bug. It's to make the team's overall approach to quality better. Coach developers on what to test.

QA Strategy & Process

Risk-based testing

You will never test everything. The question is where to focus. Risk-based testing prioritises coverage based on probability of failure × impact of failure.

  • High risk + high impact: payment flows, authentication, data loss scenarios — test exhaustively, automate everything.
  • Low risk + low impact: cosmetic changes, internal admin screens — manual spot-check, minimal automation.
  • Being explicit about risk prioritisation is what senior QA engineers do in interviews. It shows strategic thinking, not just execution.
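The probability × impact scoring can be made concrete in a few lines. The feature names and weights below are illustrative, not a recommended scale:

```python
# Risk-based prioritisation: score = failure probability x business impact,
# then test the highest-scoring areas first. All numbers are illustrative.
features = {
    "payment flow": (0.3, 10),      # (failure probability, impact 1-10)
    "authentication": (0.4, 9),
    "admin colour theme": (0.5, 1),
}

ranked = sorted(features,
                key=lambda name: features[name][0] * features[name][1],
                reverse=True)
assert ranked[0] == "authentication"      # 3.6: highest combined risk
assert ranked[-1] == "admin colour theme" # 0.5: spot-check is enough
```

The point is not the arithmetic but making the ranking explicit, so the team can argue about the inputs instead of the gut feel.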

Metrics that matter

  • Defect escape rate — bugs found in production vs bugs found in testing. The ultimate quality signal.
  • Mean time to detect (MTTD) — how quickly issues are caught after introduction. Lower is better.
  • Test coverage — a useful indicator, not a target. 80% code coverage with poor test quality is worse than 60% with well-designed tests.
  • Pipeline duration — if your test suite takes 90 minutes to run, developers disable it. Speed is a quality metric.
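As a worked example of the first metric, with invented numbers:

```python
# Defect escape rate for a release: production bugs / all bugs found.
found_in_testing = 46
found_in_production = 4

escape_rate = found_in_production / (found_in_testing + found_in_production)
assert escape_rate == 0.08   # 8% of defects escaped to production
```

Tracked per release, the trend matters more than any single value: a rising escape rate says the test strategy is drifting away from where the real risk is.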

Bug severity vs priority

  • Severity — technical impact. How badly does the system behave? (Critical, Major, Minor, Trivial)
  • Priority — business urgency. How quickly does this need to be fixed? A low-severity bug on the homepage may be high priority; a critical bug in a rarely-used admin screen may be low priority.
  • Know the difference and be able to articulate it. Conflating the two leads to misaligned expectations with developers and product managers.

Common questions

How would you test a login page?
Expected: valid credentials, invalid password, non-existent user, SQL injection, XSS, brute force lockout, "remember me," session timeout, MFA, accessibility (keyboard nav, screen reader). Weak answers list only the happy path.
What's the difference between a stub and a mock?
A stub returns predefined data; you don't verify how it was called. A mock records interactions and lets you assert on them. Both control dependencies — they serve different verification purposes.
How do you decide what to automate?
Automate what's stable, frequently executed, and time-consuming to do manually. Don't automate exploratory testing, one-off checks, or things that change constantly.
How would you introduce automated testing to a team that has none?
Start with the most painful manual regression. Pick one stable feature. Get one test passing in CI. Make the value visible before asking for broad adoption. Don't boil the ocean.
A test that passes locally fails consistently in CI. How do you debug it?
Timing/async issue, environment difference (env vars, dependencies), shared state from another test, file system paths, missing seed data. Reproduce the CI environment locally with Docker first.
How do you test something that can't be automated?
Exploratory testing with documented sessions, usability testing, visual testing tools, human review checklists. Automated testing is not the only testing.

Tips from the hiring side

What strong QA candidates do differently
  • They come with opinions. "It depends" without follow-through is a red flag. Have a point of view on frameworks, on shift-left, on flaky tests, and defend it.
  • They talk to developers and PMs, not just testers. The best QA engineers are quality advocates, not gatekeepers at the end of the pipeline.
  • They know the business context of what they're testing. Testing a payment flow without understanding the financial risk is just checking boxes.
  • They can write code. Test automation is software engineering. If you can't write clean, maintainable code, your test suite will become a liability.
  • They understand CI/CD. If your tests aren't integrated into the delivery pipeline, they're not providing continuous feedback — they're a manual step with better tooling.
  • When asked "how would you test X," they go well beyond the happy path immediately. That's the move that signals a senior engineer.