QA Engineer Interview Questions & Answers

QA engineering interviews test your ability to think systematically about quality, design effective test strategies, and build robust automation. Expect questions about test design, automation frameworks, and CI/CD integration.

Behavioral questions

  1. Tell me about a critical bug you caught before it reached production. How did you find it?

    Model answer

    During regression testing for a payment processing update, I noticed that currency conversion was rounding incorrectly for transactions under $1. The automated test suite passed because all test cases used amounts above $10. I found it through exploratory testing — I systematically tested boundary values near zero, and discovered that transactions of $0.99 or less were being rounded to $0.00 after currency conversion. The root cause was integer division instead of float division in the conversion function. If this had reached production, it would have affected roughly 15% of our micro-transactions — about 8,000 transactions a day, worth an estimated $3,200 daily in lost revenue. After the fix, I added 20 boundary-value test cases to the automated suite covering small amounts, zero amounts, and maximum amounts in every supported currency.
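The defect described above can be reproduced in a few lines. This is a sketch with invented function names and a simplified fractional-rate model, not the actual production code:

```python
# Hypothetical reconstruction of the integer-division conversion bug.
# Rates are a fraction (rate_num / rate_den) to keep arithmetic explicit.

def convert_buggy(amount_cents: int, rate_num: int, rate_den: int) -> int:
    # Bug: truncating to whole dollars first collapses any sub-dollar
    # amount to zero before the rate is even applied.
    dollars = amount_cents // 100        # 99 cents -> 0 dollars
    return dollars * 100 * rate_num // rate_den

def convert_fixed(amount_cents: int, rate_num: int, rate_den: int) -> int:
    # Fix: stay in cents throughout and round once at the end.
    return round(amount_cents * rate_num / rate_den)

# Boundary values near zero expose the defect immediately;
# amounts of $10+ (like the original test data) never would.
buggy = [convert_buggy(c, 9, 10) for c in (0, 1, 99, 100)]
fixed = [convert_fixed(c, 9, 10) for c in (0, 1, 99, 100)]
```

A parametrized boundary-value suite over exactly these amounts (zero, one cent, just under and at each rounding threshold) is the kind of test set described in the answer.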

  2. Describe a time you improved a testing process that was slowing down development.

    Model answer

    Our end-to-end test suite took 3 hours to run and was blocking every pull request merge. Developers were either skipping tests or batching PRs to avoid the wait. I analyzed the suite and found three issues: tests were running sequentially instead of in parallel, 30% of tests were redundant (testing the same code paths), and flaky tests caused 40% of runs to fail on the first attempt. I containerized the test environment so tests could run in parallel across 8 workers, deleted 45 redundant tests, and quarantined 12 flaky tests while fixing their root causes (mostly race conditions and hardcoded test data). Run time dropped from 3 hours to 25 minutes, flakiness went from 40% to under 2%, and developer PR merge frequency doubled. The team went from seeing tests as an obstacle to trusting them as a safety net.

  3. Tell me about a time you disagreed with a developer about whether something was a bug.

    Model answer

    I found that our search feature returned results in a different order when the same query was run twice. The developer said it was 'by design' because the results had equal relevance scores and the database didn't guarantee order for ties. I argued this was a UX bug even if it was technically correct — users expect consistent results. I showed user session recordings where users were confused by results shifting between searches. We compromised: we added a secondary sort by creation date for equal-relevance results, giving deterministic ordering without changing the relevance algorithm. I learned that 'bug vs. feature' debates are best resolved by looking at user impact, not technical correctness. If users are confused, something needs to change regardless of what the spec says.
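The compromise described above, a secondary sort key for relevance ties, comes down to a compound sort key. A sketch with made-up result records:

```python
from datetime import date

# Made-up search results: two records tie on relevance.
results = [
    {"title": "B", "relevance": 0.9, "created": date(2023, 5, 1)},
    {"title": "A", "relevance": 0.9, "created": date(2023, 4, 1)},
    {"title": "C", "relevance": 0.7, "created": date(2023, 6, 1)},
]

def rank(items):
    # Primary key: relevance, descending. Secondary key: creation date,
    # ascending. The secondary key breaks ties deterministically, so the
    # same query always yields the same order regardless of how the
    # database happened to return the rows.
    return sorted(items, key=lambda r: (-r["relevance"], r["created"]))
```

Note that relying on `sorted` being stable would not be enough here: stability only preserves the (nondeterministic) input order for ties, while the explicit secondary key fixes the order outright.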

  4. Give me an example of how you've mentored or influenced a team's approach to quality.

    Model answer

    When I joined the team, developers wrote minimal unit tests and relied entirely on QA for catching bugs. Instead of lecturing about test coverage, I started pair-programming test code with developers during code reviews. I'd say: 'Here's how I'd test this function — want to add these cases?' After a few sessions, developers started writing their own tests. I also created a 'bug wall' — a dashboard showing where bugs were found (unit test, integration test, QA, production). Over 3 months, the data clearly showed that bugs caught later cost exponentially more to fix. Production bug count dropped 55% as developers internalized the shift-left mindset. The key was making it collaborative rather than adversarial — I positioned myself as a quality enabler, not the bug police.

Technical questions

  1. Explain the testing pyramid and how you apply it in practice.

    Model answer

    The testing pyramid has three layers: unit tests at the base (fast, isolated, many), integration tests in the middle (test component interactions, moderate count), and end-to-end tests at the top (slow, full system, few). In practice, I aim for roughly 70% unit, 20% integration, 10% E2E. Unit tests cover individual functions and edge cases — they run in milliseconds and give instant feedback. Integration tests verify that components work together: API endpoints return correct responses, database queries work against a real schema, message consumers handle messages correctly. E2E tests cover critical user journeys: sign up, purchase, core workflow. I keep these minimal because they're slow, flaky, and expensive to maintain. The anti-pattern I fight most: the 'inverted pyramid' where teams have 500 E2E tests and 20 unit tests. That means 3-hour test runs, constant flakiness, and developers ignoring test failures.

  2. How would you design a test strategy for a REST API?

    Model answer

    I'd test at four levels. Contract testing: validate that request and response schemas match the OpenAPI spec. This catches breaking changes before they affect consumers. Functional testing: for each endpoint, test happy paths, validation errors (missing fields, wrong types, boundary values), authentication and authorization (unauthorized access, wrong role), and edge cases (empty collections, maximum payload sizes, special characters). Integration testing: verify the API interacts correctly with its dependencies — database operations, external service calls, message queue publishing. Use a real database with test data, but mock external services at the network level. Performance testing: measure response time under normal load (baseline), stress test to find the breaking point, and soak test for memory leaks over extended periods. I'd automate all of this in the CI pipeline: contract and functional tests on every PR, integration tests on merge to main, performance tests nightly. I'd use Postman/Newman for functional tests, k6 or JMeter for performance, and Pact for contract testing.
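The contract-level checks can be illustrated without a live endpoint. This sketch validates a canned response against a hand-written schema; the field names are invented, and in practice a tool like Pact or an OpenAPI validator would do this from the spec itself:

```python
def validate_user_response(payload: dict) -> list[str]:
    """Return a list of schema violations (empty list = valid)."""
    # Hypothetical response schema: required fields and their types.
    schema = {"id": int, "email": str, "roles": list}
    errors = []
    for field, expected in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"wrong type for {field}")
    return errors

# Happy path: a well-formed response produces no violations.
ok = {"id": 1, "email": "a@example.com", "roles": ["admin"]}
# Contract break: an id serialized as a string surfaces immediately.
bad = {"id": "1", "email": "a@example.com", "roles": []}
```

In CI this kind of check runs on every PR, so a backend change that alters a response shape fails fast instead of breaking downstream consumers.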

  3. What's the difference between mocking, stubbing, and faking? When do you use each?

    Model answer

    A stub returns pre-configured responses — it replaces a dependency with a simplified version that gives predictable output. I use stubs when I need a dependency to return specific data for my test but don't care how it's called. A mock is a stub that also records how it was called and lets you verify interactions — did this method get called with these arguments? I use mocks when the behavior I'm testing is the interaction itself: verifying that a service sends an email after registration, not what the email contains. A fake is a working implementation with shortcuts — an in-memory database instead of PostgreSQL, a local file system instead of S3. I use fakes when I need realistic behavior but can't use the real dependency in tests. My rule of thumb: prefer fakes for data stores (more realistic), stubs for external APIs (predictable), and mocks sparingly for interaction verification. Over-mocking is a common anti-pattern — if you mock everything, you're testing your mocks, not your code.
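The three test doubles can be shown side by side using Python's `unittest.mock` and a hand-rolled fake. The service and method names here are invented for illustration:

```python
from unittest.mock import Mock

# Stub: pre-configured response; we only care what it returns,
# not how it is called.
rates = Mock()
rates.get_rate.return_value = 0.9

# Mock: we verify the interaction itself, not a return value.
mailer = Mock()

def register_user(email, mailer):
    # ...persist the user (omitted)...
    mailer.send_welcome(email)   # the interaction under test

register_user("a@example.com", mailer)
mailer.send_welcome.assert_called_once_with("a@example.com")

# Fake: a real, working implementation with shortcuts (in-memory
# instead of a database).
class FakeUserStore:
    def __init__(self):
        self._users = {}

    def save(self, user_id, data):
        self._users[user_id] = data

    def load(self, user_id):
        return self._users.get(user_id)

store = FakeUserStore()
store.save(1, {"email": "a@example.com"})
```

Note how the fake actually behaves like a store (save then load round-trips), which is why fakes suit data stores better than stubs do.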

  4. How do you handle flaky tests?

    Model answer

    Flaky tests are a serious problem — they erode trust in the test suite and train developers to ignore failures. My approach: first, identify flaky tests systematically. I tag tests that fail intermittently and track their failure rate. Any test that fails more than once without a code change is considered flaky. Second, quarantine: move flaky tests to a separate suite that runs but doesn't block deployments. This preserves the signal-to-noise ratio of the main suite. Third, fix root causes: the most common sources are timing issues (add explicit waits for async operations instead of sleep), shared state (ensure test isolation with proper setup/teardown), external dependencies (mock or containerize them), and order dependency (tests that pass individually but fail when run after another test). Fourth, prevent new flaky tests: add retry-detection to CI — if a test passes on retry, flag it as potentially flaky and create a ticket. I've found that treating flaky tests with the same urgency as production bugs keeps the suite reliable.
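The "explicit waits instead of sleep" fix usually means a small polling helper along these lines. This is a sketch; frameworks like Selenium and Playwright ship their own equivalents:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or the timeout expires.

    Replaces fixed `time.sleep(...)` calls, a common flakiness source:
    sleeps long enough for a loaded CI machine waste time locally, and
    sleeps tuned locally fail intermittently under CI load.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Illustrative async operation: a "job" flips a flag when done.
state = {"done": False}

def fake_async_job():
    state["done"] = True

fake_async_job()
```

The test then asserts `wait_until(lambda: state["done"])` rather than sleeping a fixed interval and hoping the job has finished.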

Situational questions

  1. The product manager says there's no time for testing before a major release. How do you respond?

    Model answer

    I wouldn't argue for 'testing time' in the abstract — I'd make the risk concrete. I'd ask: 'What's the cost if we ship a critical bug to our 50K users?' Then I'd present a risk-based testing plan. I can't test everything, so I'd identify the highest-risk areas: new code, changed code, and critical user flows (payment, authentication, data integrity). I'd propose a targeted 2-day testing plan covering these areas, plus automated regression tests running in parallel. I'd also present the alternative: we can ship without testing, but we need a fast rollback plan and someone on-call to respond to issues. Making the tradeoff explicit — 'we save 2 days but accept the risk of X' — usually changes the conversation. If the PM insists on zero testing, I'd document the decision and the risks in writing. In my experience, when you quantify the risk clearly, the answer is almost always 'let's find 2 days.'

  2. You find a major bug 1 hour before a scheduled release. What do you do?

    Model answer

    First, I'd assess severity and impact. If it affects core functionality (data loss, security, payment), I escalate immediately and recommend delaying the release — no exception. If it's a UI issue or affects a minor flow, I'd determine if we can ship with a known issue and hotfix within 24 hours. I'd communicate clearly to the release manager and product owner: here's the bug, here's the impact, here are the options. Option A: delay the release, fix the bug, run regression. Option B: release without the affected feature (feature flag it off). Option C: release with a known issue, hotfix tomorrow. I'd provide my recommendation but let the business decision-makers decide. What I wouldn't do: ignore it, downplay it, or attempt a rushed fix without testing. Every 'quick fix' I've seen attempted 1 hour before release has introduced a second, worse bug.

  3. You're asked to test a feature but the requirements are vague and incomplete. How do you proceed?

    Model answer

    I'd start by testing what I can infer from the feature itself — use it as a user would and document my assumptions. I'd create a test plan based on these assumptions and share it with the product manager and developer: 'Here's what I plan to test. These are the assumptions I'm making. Are they correct? What am I missing?' This approach is faster than waiting for perfect requirements and often surfaces gaps the PM hadn't considered. For each gap, I'd ask specific questions: 'What happens when the user enters more than 500 characters? What should the error message say? Should the form preserve data on validation failure?' I'd also look at similar features in the product for consistency patterns. The goal is to turn vague requirements into testable acceptance criteria through conversation, not to wait for a perfect spec that may never arrive.

  4. A developer says your automated test is wrong because the feature behavior changed. How do you verify?

    Model answer

    I'd treat it as a genuine investigation, not a confrontation. First, I'd verify the test's assertion against the current requirements — is the test checking outdated behavior? If the feature intentionally changed and the test wasn't updated, that's a process gap, not a wrong test. I'd update the test to match the new behavior. Second, I'd check if the behavior change was intentional by looking at the commit history, pull request description, and any linked tickets. If there's a documented decision to change the behavior, the test should be updated. If there's no documentation, I'd ask the developer and the PM to confirm. Sometimes developers inadvertently change behavior while refactoring, and the 'wrong' test actually caught a real bug. Either way, the conversation produces a good outcome: either we update the test (and the test's change is now documented) or we discover an unintended regression. I'd also flag the process improvement: behavior changes should trigger test updates in the same PR.

Interview tips

Prepare examples that show you think about quality holistically: not just finding bugs, but preventing them. If you're given a product to test during the interview, start with boundary conditions, edge cases, and failure modes.


Frequently asked questions

Which programming languages should a QA Engineer know?
Python and JavaScript are the most versatile choices. Java is common in enterprise environments. Learn whichever language matches your target company's stack.
How do QA interviews differ from development interviews?
They focus on test strategy, defect analysis, and quality processes rather than algorithmic problems.
Is automation experience required?
For most mid-level and senior QA roles, yes. The industry has shifted heavily toward automation.
How should I prepare for a QA take-home exercise?
Read the instructions carefully. Write clean, well-structured test code. Include both positive and negative cases. Demonstrate the testing pyramid.

