
Book: SE@Google Ch 11: Testing Overview #8

halfwhole opened this issue Jan 21, 2022

Book: Software Engineering at Google
Chapter 11: Testing Overview

Summary:

Testing is important to ensure that our programs run as expected. But as projects grow large, manual testing becomes infeasible, so we need automated testing.

We should write automated tests because:

  1. Tests catch bugs early in the development cycle, when they are much cheaper to fix
  2. Tests support the ability to change: we can refactor, redesign, or add new features confidently, without fearing that things will break
  3. Tests can act as documentation for our code
  4. Tests make reviews simpler: reviewers spend less effort verifying correctness, edge cases, and error conditions by hand

When designing a test suite, we should mostly write small tests: they’re faster, more deterministic, and easier to debug.

What makes for a “big” or “small” test? There are two dimensions: size and scope. Size refers to the resources needed to run a test case, such as memory, processes, and time. Scope refers to the specific code paths that we are verifying.

Test size is determined by how a test runs, what it's allowed to do, and how many resources it consumes. Small tests run in a single process, medium tests run on a single machine, and large tests run wherever they want. We favour small tests, because they're almost always faster and more deterministic than tests that consume more resources or involve more infrastructure. Large tests are expensive to run, so they should only be run during the build and release process, where they don't impact the developer workflow.
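
In the book's framing, small tests also aren't allowed to sleep, perform I/O, or make blocking calls, so time-dependent behaviour is usually tested by injecting a fake clock rather than calling `Thread.sleep()`. A minimal JUnit 4 sketch of that idea (the `ExpiringCache` and `FakeClock` classes here are hypothetical, defined inline purely for illustration):

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class ExpiringCacheTest {

  // Hypothetical clock seam: production code would read a real clock,
  // while the test advances time instantly.
  static class FakeClock {
    private long nowMillis = 0;
    long nowMillis() { return nowMillis; }
    void advanceMillis(long millis) { nowMillis += millis; }
  }

  // Hypothetical class under test: a single-entry cache with a time-to-live.
  static class ExpiringCache {
    private final FakeClock clock;
    private final long ttlMillis;
    private long storedAtMillis = -1;
    private String value;

    ExpiringCache(FakeClock clock, long ttlMillis) {
      this.clock = clock;
      this.ttlMillis = ttlMillis;
    }

    void put(String value) {
      this.value = value;
      this.storedAtMillis = clock.nowMillis();
    }

    boolean isExpired() {
      return storedAtMillis >= 0 && clock.nowMillis() - storedAtMillis > ttlMillis;
    }
  }

  @Test
  public void entryExpiresAfterTtl() {
    FakeClock clock = new FakeClock();
    ExpiringCache cache = new ExpiringCache(clock, /* ttlMillis= */ 1000);

    cache.put("value");
    clock.advanceMillis(1001);  // No real waiting: the test stays fast and deterministic.

    assertTrue(cache.isExpired());
  }
}
```

Because nothing actually sleeps, the test completes in milliseconds and won't flake on a loaded machine.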

Test scope refers to how much code is being validated by a given test. This is different from how much code is executed during the test, as testing a single target class may also invoke its dependencies. Narrow-scoped tests, or unit tests, validate logic in a small, focused part of the codebase, like an individual class or method. Medium-scoped tests, or integration tests, verify interactions between a small number of components, like a server and its database. Large-scoped tests, or end-to-end (E2E) tests, validate interactions between, or behaviours across, multiple distinct parts of a system. As a guideline, we should aim for roughly 80% unit tests, 15% integration tests, and 5% E2E tests.
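
To make "narrow scope" concrete, here is a hedged JUnit 4 sketch of a unit test that exercises one hypothetical class (`UsernameValidator`, defined inline for illustration) and nothing else: no server, database, or other components are involved.

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class UsernameValidatorTest {

  // Hypothetical class under test: valid usernames are 3-16 characters,
  // made up of letters, digits, or underscores.
  static class UsernameValidator {
    boolean isValid(String username) {
      return username != null && username.matches("[A-Za-z0-9_]{3,16}");
    }
  }

  private final UsernameValidator validator = new UsernameValidator();

  @Test
  public void acceptsSimpleAlphanumericName() {
    assertTrue(validator.isValid("ada_lovelace"));
  }

  @Test
  public void rejectsNameThatIsTooShort() {
    assertFalse(validator.isValid("ab"));
  }
}
```

An integration test for the same feature might exercise a signup endpoint against a test database, and an E2E test would drive the deployed service; both validate more, but cost more to run and debug.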

Tests should be hermetic: each test should contain all the information necessary to set up, execute, and tear down its environment. Tests should also be clear and obvious upon inspection, so that when a test fails it's easy to diagnose what went wrong. Tests should strive to be concrete examples of input/output pairs. Don't put logic, such as control flow statements and loops, inside tests: it risks introducing bugs into the tests themselves and makes it harder to determine the cause of a failure.
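
For example, rather than deriving the expected value with the same logic the production code uses, state it as a literal. A small sketch with a hypothetical `Navigator` class:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class NavigatorTest {

  // Hypothetical class under test: builds the URL of the albums page.
  static class Navigator {
    private final String baseUrl;
    Navigator(String baseUrl) { this.baseUrl = baseUrl; }
    String albumPageUrl() { return baseUrl + "/albums"; }
  }

  @Test
  public void albumPageUrlIsBaseUrlPlusAlbums() {
    Navigator nav = new Navigator("http://photos.google.com");

    // The expected value is a concrete literal. Writing it as
    // baseUrl + "/albums" would duplicate the production logic, so a bug such
    // as a doubled slash would be reproduced in the test and never caught.
    assertEquals("http://photos.google.com/albums", nav.albumPageUrl());
  }
}
```

The same reasoning applies to loops and conditionals inside tests: when a parameterised loop fails, it can be hard to tell which case broke, and the loop itself can hide the bug.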

We should heed the Beyoncé rule: “If you liked it, then you shoulda put a ring on it.” In other words, we should test everything that we don’t want to break: if we want to be confident that a system exhibits a particular behaviour, the only way to be sure is to write an automated test for it.

Engineers should care about their tests, and treat them with as much respect as production code.
