The QA & Test Engineering Interview
Automation · Strategy · CI/CD · Quality Advocacy
QA interviews test more than whether you can write automated tests. At mid and senior level, they're evaluating your testing philosophy, your ability to reason about risk, and whether you can influence a team's approach to quality — not just execute test cases.
Testing Fundamentals
The testing pyramid
The pyramid describes the ideal distribution of test types in a suite. Many teams invert it — too many E2E tests, too few unit tests — and pay the price in slow, fragile pipelines.
- Unit tests — fast, cheap, isolated. Test a single function or class. Should make up the majority of your suite.
- Integration tests — test how components work together (service + database, service + external API). Slower, but catch a different class of bugs.
- E2E tests — test the full user journey through the UI. Valuable but expensive. Slow, brittle, hard to maintain. Use sparingly for critical paths only.
- Know why inverting the pyramid is an anti-pattern: E2E tests take minutes, break on cosmetic changes, and give you no information about where the bug is.
Test doubles
- Mock — a test double that records interactions and lets you assert on them. "Was this method called? With what arguments?"
- Stub — returns a predefined value. Replaces a real dependency to control the test environment. No verification on how it was called.
- Spy — wraps a real object and records interactions without replacing the implementation.
- Fake — a working implementation, but simplified. An in-memory database instead of a real one. The most powerful double, but the most effort to build.
- Know when to use each. Over-mocking leads to tests that pass when the real system is broken.
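The stub/mock distinction is easiest to see in code. A minimal sketch using Python's unittest.mock, where OrderService and its gateway are hypothetical names for illustration:

```python
from unittest.mock import Mock

class OrderService:
    """Hypothetical service that depends on a notification gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place(self, order_id):
        self.gateway.send(f"Order {order_id} confirmed")
        return "placed"

# Stub behaviour: predefine a return value, no verification of calls.
gateway = Mock()
gateway.send.return_value = True
service = OrderService(gateway)
assert service.place(42) == "placed"

# Mock behaviour: assert on how the dependency was used.
gateway.send.assert_called_once_with("Order 42 confirmed")
```

The same Mock object plays both roles here, which is exactly why over-mocking is dangerous: the assertion passes even if the real gateway would reject the message.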
Test design techniques
- Equivalence partitioning — divide input space into groups that should behave identically. Test one value per partition rather than every value.
- Boundary value analysis — bugs cluster at the edges. If a field accepts 1–100, test 0, 1, 2, 99, 100, 101.
- Decision tables — for complex business logic with multiple conditions. Map every combination of inputs to expected outputs.
- Exploratory testing — unscripted, experience-driven investigation. Complements scripted tests, especially for new features or after major changes.
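Equivalence partitioning and boundary value analysis combine naturally. For the 1–100 field above, a sketch (is_valid_quantity is a hypothetical function under test):

```python
def is_valid_quantity(n: int) -> bool:
    """Hypothetical validator: accepts quantities from 1 to 100."""
    return 1 <= n <= 100

# Both boundaries, their neighbours, and one representative per partition.
cases = [
    (0, False),    # just below lower boundary
    (1, True),     # lower boundary
    (2, True),     # just above lower boundary
    (99, True),    # just below upper boundary
    (100, True),   # upper boundary
    (101, False),  # just above upper boundary
]
for value, expected in cases:
    assert is_valid_quantity(value) == expected
```

Six cases cover the interesting behaviour; testing all 100 valid values would add nothing.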
Test Automation
Framework landscape
- Playwright — modern, multi-browser, multi-language (JS, Python, Java, C#). Auto-waiting, trace viewer, screenshot on failure. The current default for new projects.
- Cypress — JavaScript-only, runs in the browser process. Excellent developer experience, but limited to Chrome-family browsers and can't test multi-tab flows or native file dialogs.
- Selenium / WebDriver — the original. Still widely used in enterprise. More verbose, but proven at scale. Know it exists and why companies haven't migrated.
- Pytest (Python) — the default for Python test suites. Fixtures, parametrize, plugins. Used for API testing, unit testing, and integration testing.
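Fixtures and parametrize are the two pytest features interviewers most often probe. A sketch with a hypothetical email validator:

```python
import pytest

@pytest.fixture
def user():
    # Each test gets fresh, isolated data — no shared state between tests.
    return {"name": "alice", "email": "alice@example.com"}

def validate_email(email: str) -> bool:
    """Deliberately simplistic validator, for illustration only."""
    return "@" in email and "." in email.split("@")[-1]

def test_valid_user(user):
    assert validate_email(user["email"])

@pytest.mark.parametrize("bad", ["", "no-at-sign", "missing-domain@"])
def test_invalid_emails(bad):
    assert not validate_email(bad)
```

Parametrize turns one test function into one test per input, so a failure report tells you exactly which case broke.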
Page Object Model
The single most important pattern in UI test automation. Without it, test suites become unmaintainable as the UI evolves.
- Each page or component gets a class that encapsulates its locators and actions
- Tests interact with the page through the model, not through raw locators
- When the UI changes, you update the Page Object once — not every test that touches that element
- Know the critique too: Page Objects can become fat classes that are hard to reuse. The component/fragment pattern addresses this.
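A Page Object sketch. The driver interface (fill, click, text_of) is a simplified stand-in for a real Playwright or Selenium driver, and the locators are illustrative:

```python
class LoginPage:
    # Locators live in one place; tests never see them.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"
    ERROR = ".error-banner"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

    def error_message(self):
        return self.driver.text_of(self.ERROR)

class FakeDriver:
    """In-memory driver that records actions, for demonstrating the pattern."""
    def __init__(self):
        self.actions = []
    def fill(self, locator, value):
        self.actions.append(("fill", locator, value))
    def click(self, locator):
        self.actions.append(("click", locator))
    def text_of(self, locator):
        return "Invalid credentials"

driver = FakeDriver()
page = LoginPage(driver)
page.login("alice", "wrong-password")
assert ("click", LoginPage.SUBMIT) in driver.actions
```

The test reads as intent ("log in, check the error"), and a renamed CSS selector means one change in LoginPage, not a hunt through the suite.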
Flaky tests — have a point of view
Flaky tests are a major interview topic. Interviewers use them to assess your debugging skills and your attitude toward test quality.
- Common causes: timing issues (no proper wait), shared test state, environment dependencies, non-deterministic test data
- Fix timing with proper waits, not sleep(). Use explicit wait conditions: wait until element is visible, wait until network is idle.
- Fix shared state by isolating tests: each test creates its own data, cleans up after itself, never depends on another test's outcome.
- Quarantine flaky tests — don't let them block CI while you investigate. But never accept them as permanent.
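The wait-vs-sleep fix can be sketched as a generic polling helper. Real frameworks ship equivalents (WebDriverWait in Selenium, auto-waiting in Playwright); this shows the idea:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll a condition until it holds or the timeout expires.

    Unlike a fixed sleep(), this returns as soon as the condition is true
    and fails loudly (not silently) when it never becomes true.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Usage: wait for a state change instead of guessing a sleep duration.
state = {"visible": False}
state["visible"] = True          # in a real test, the app under test does this
assert wait_until(lambda: state["visible"])
```

A fixed sleep(3) is always wrong twice: too long when the app is fast, too short when it is slow.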
API Testing
What to test and how
- Happy path — valid inputs, expected responses. Start here, but don't stop here.
- Error cases — invalid inputs, missing required fields, wrong data types. Does the API return 400 with a useful error, or 500 with a stack trace?
- Authentication & authorisation — missing token, expired token, insufficient permissions. These bugs ship to production constantly.
- Rate limiting & edge cases — extremely long strings, special characters, empty payloads, duplicate requests.
- Contract testing (Pact) — verify that the consumer's expectations of the API contract match what the provider actually delivers. Catch breaking changes before deployment.
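A sketch of the happy-path-plus-error-cases sequence. create_user is a hypothetical handler returning (status_code, body); in a real suite you would call the deployed API over HTTP instead:

```python
def create_user(payload: dict):
    """Hypothetical endpoint logic, stubbed locally for illustration."""
    if "email" not in payload:
        return 400, {"error": "email is required"}
    if not isinstance(payload.get("age", 0), int):
        return 400, {"error": "age must be an integer"}
    return 201, {"id": 1}

# Happy path first...
assert create_user({"email": "a@b.com", "age": 30}) == (201, {"id": 1})

# ...then the error cases: a useful 400, not a stack-trace 500.
status, body = create_user({})
assert status == 400 and "error" in body

status, body = create_user({"email": "a@b.com", "age": "thirty"})
assert status == 400
```

The pattern is the point: every assertion checks both the status code and that the error body is actionable.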
REST fundamentals
- HTTP methods: GET (read, idempotent), POST (create), PUT (replace, idempotent), PATCH (partial update), DELETE (remove, idempotent)
- Status codes: 2xx (success), 3xx (redirect), 4xx (client error), 5xx (server error). Know the common ones: 200, 201, 400, 401, 403, 404, 409, 422, 500, 503.
- Headers: Content-Type, Accept, Authorization, Cache-Control. Know what each does.
- Idempotency: calling the same request multiple times has the same effect as once. Critical for safe retries.
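Idempotency in miniature, using an in-memory store to stand in for the API's state:

```python
import itertools

store = {}
ids = itertools.count(1)

def put(resource_id, data):
    # PUT replaces the resource at a known ID: repeating it is safe.
    store[resource_id] = data

def post(data):
    # POST creates a new resource each time: retries duplicate data.
    store[next(ids)] = data

put("u1", {"name": "alice"})
put("u1", {"name": "alice"})   # retried PUT: still exactly one record
assert len(store) == 1

post({"name": "bob"})
post({"name": "bob"})          # retried POST: a duplicate appears
assert len(store) == 3
```

This is why non-idempotent endpoints need idempotency keys or deduplication before clients can retry them safely.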
CI/CD & Shift-Left Testing
Where tests belong in the pipeline
- On every commit / PR — unit tests and fast integration tests. Must complete in under 5 minutes or developers stop waiting for them.
- On merge to main — full integration suite and API tests. Can be slower — up to 15–20 minutes.
- Nightly or on-demand — full E2E regression suite, performance tests. Too slow for every commit but essential before releases.
- Parallelisation is the key lever for keeping pipelines fast as the test suite grows.
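One common way to map tests to those pipeline stages is pytest markers. The marker names here are illustrative (register your own in pytest.ini or pyproject.toml):

```python
import pytest

@pytest.mark.smoke          # fast tier: runs on every commit / PR
def test_login_validation():
    assert True             # placeholder for a real unit-level check

@pytest.mark.regression     # slow tier: nightly or pre-release
def test_full_checkout_e2e():
    assert True             # placeholder for a real E2E journey

# CI then selects a tier by marker:
#   pytest -m smoke         (on every PR)
#   pytest -m regression    (nightly)
```

The selection lives in the pipeline config, so adding a test to a tier is a one-line decorator, not a CI change.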
Shift-left in practice
- Three amigos — developer, QA, and product discuss the feature before a line of code is written. Catch ambiguities and edge cases while they're cheap to fix.
- Definition of done — include "automated tests written and passing" and "tested in staging" in your team's DoD. Make QA criteria explicit, not implied.
- Feature flags — release code dark, enable gradually. Allows safe production testing and instant rollback.
- Developer self-testing — QA's job is not to catch every bug. It's to make the team's overall approach to quality better. Coach developers on what to test.
QA Strategy & Process
Risk-based testing
You will never test everything. The question is where to focus. Risk-based testing prioritises coverage based on probability of failure × impact of failure.
- High risk + high impact: payment flows, authentication, data loss scenarios — test exhaustively, automate everything.
- Low risk + low impact: cosmetic changes, internal admin screens — manual spot-check, minimal automation.
- Being explicit about risk prioritisation is what senior QA engineers do in interviews. It shows strategic thinking, not just execution.
Metrics that matter
- Defect escape rate — bugs found in production vs bugs found in testing. The ultimate quality signal.
- Mean time to detect (MTTD) — how quickly issues are caught after introduction. Lower is better.
- Test coverage — a useful indicator, not a target. 80% code coverage with poor test quality is worse than 60% with well-designed tests.
- Pipeline duration — if your test suite takes 90 minutes to run, developers disable it. Speed is a quality metric.
Bug severity vs priority
- Severity — technical impact. How badly does the system behave? (Critical, Major, Minor, Trivial)
- Priority — business urgency. How quickly does this need to be fixed? A low-severity bug on the homepage may be high priority; a critical bug in a rarely-used admin screen may be low priority.
- Know the difference and be able to articulate it. Conflating the two leads to misaligned expectations with developers and product managers.
Tips from the hiring side
- They come with opinions. "It depends" without follow-through is a red flag. Have a point of view on frameworks, on shift-left, on flaky tests, and defend it.
- They talk to developers and PMs, not just testers. The best QA engineers are quality advocates, not gatekeepers at the end of the pipeline.
- They know the business context of what they're testing. Testing a payment flow without understanding the financial risk is just checking boxes.
- They can write code. Test automation is software engineering. If you can't write clean, maintainable code, your test suite will become a liability.
- They understand CI/CD. If your tests aren't integrated into the delivery pipeline, they're not providing continuous feedback — they're a manual step with better tooling.
- When asked "how would you test X," go well beyond the happy path immediately. That's the move that signals a senior engineer.