
In June 2024, this page was copied from 2U’s private Confluence to here, so that it could be adapted/adopted for the community, as was originally intended. The page requires clean-up, and could be an OEP or other type of document at this point.


Copied content begins here:

This updated process went into effect August 2017.

If you are familiar with the old Flaky Test Process: we have changed over to a zero-tolerance process, rather than a process that attempts to tolerate flaky tests.

Overview

Having dependable test suites is a requirement for decreasing time to value through more frequent edx-platform deployments. Flaky tests are therefore no longer an acceptable nuisance. Instead, our tests need to be 100% trustworthy in our Continuous Integration (CI) and (near-)Continuous Deployment (CD) systems.

What is a flaky test? A flaky test is one that sometimes passes and sometimes fails. Most of ours are bok-choy acceptance tests, which test the flow of a user through the site. Most flaky tests are flaky because of how the test was written, not because of an actual bug.
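To illustrate the distinction, here is a minimal sketch (hypothetical, not taken from edx-platform) of a common way tests become flaky: asserting on the result of background work after a fixed sleep, instead of waiting for the condition itself. The function names and timings below are invented for illustration.

```python
import threading
import time

result = {}

def background_save():
    # Simulates an asynchronous page action (e.g. an AJAX save)
    # whose latency varies from run to run.
    time.sleep(0.05)
    result["saved"] = True

def flaky_style_check():
    # Flaky pattern: a fixed sleep that merely *usually* outlasts
    # the background work. On a slow CI worker, this fails.
    time.sleep(0.1)
    return result.get("saved", False)

def robust_style_check(timeout=2.0, interval=0.01):
    # Robust pattern: poll for the condition up to a timeout,
    # so the test passes as soon as the work completes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if result.get("saved"):
            return True
        time.sleep(interval)
    return False

threading.Thread(target=background_save).start()
print(robust_style_check())
```

In bok-choy terms, the robust pattern corresponds to using explicit waits/promises for a page condition rather than sleeping for a fixed interval.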

The process is now to delete the flaky test, following the steps defined below.

Consequences of this process

This updated process has the following consequences:

  • Test suites become dependable. We can rely on them for deployments. You can rely on them with your PRs.

  • We save lots of people and compute resources that were wasted on flaky tests.

  • We no longer pretend that flaky tests are a safety net against bugs.

  • We accept a potential increase in risk in exchange for improved time to value.

  • Product development teams will continue to balance the risk versus reward of fixing each test and determine how to move forward, without incurring the costs above.

Given these consequences, this seems like a reasonable process to experiment with. The plan is to revisit this process in October 2017 to determine whether and how it needs to be improved.

How do I know I have encountered a flaky test?
