Jenkins Guide

Quick Reference 

| Status Context | PR comment to trigger rerun (case-sensitive) | What is run |
|---|---|---|
| jenkins/quality | jenkins run quality | pep8, pylint, jshint, diff_quality (for PR builds), code complexity, xsslint, xsscommitlint |
| jenkins/python | jenkins run python | python unit tests; coverage reports are generated; diff-cover (for PR builds) |
| jenkins/js | jenkins run js | javascript unit tests; javascript coverage xml is generated |
| jenkins/django-3.0/[context] | jenkins run django30 [context] | runs the context (i.e. python, a11y, etc.) with the latest Django 3.0.x release |
| jenkins/django-3.2/[context] | jenkins run django32 [context] | runs the context (i.e. python, a11y, etc.) with the latest Django 3.2.x release |
| All contexts | jenkins run all | runs all the contexts (each context will be reported on GitHub) |
| Admin only | jenkins add to whitelist | adds a GitHub username to the whitelist. This is temporary; to make it permanent, alert tools-core |
| Admin only | jenkins ok to test | allows tests to run for this PR only. After this statement, any trigger comment (e.g., "jenkins run js") can be run by a core contributor |

Note: You can specify more than one test to run in a single GitHub comment, e.g.:

jenkins run python
jenkins run quality

This will be way less annoying than multiple comments for those following your PR!
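As a sketch, a single comment body carrying two trigger lines can be composed like this (how you post it is up to you; the `gh pr comment` line is shown commented out as an optional example, assuming the GitHub CLI is installed and authenticated):

```shell
# Compose one comment body with two triggers, each on its own line.
# Jenkins parses each line as a separate (case-sensitive) trigger.
body='jenkins run python
jenkins run quality'
printf '%s\n' "$body"
# To post it with the GitHub CLI (optional; requires gh auth):
#   gh pr comment <PR-NUMBER> --repo edx/edx-platform --body "$body"
```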


Why does my build say "automatically merged" or "merged build"?

That is just the language used by the plugin used to trigger builds. Don't worry – it is not actually merging anything.  We are working on making this language more clear. 

I tried to use "jenkins run xxxx" but I still see a red X, what gives?

Click the build and look at the Jenkins page. If the circle is grey instead of red, look for language like this: "Build automatically aborted because there is a newer build for the same PR" – there will be a link to the new build. Sometimes GitHub takes a while to catch up to Jenkins' current state. Don't start another build; your new build will post back to GitHub when it completes. You can manually edit the URL to view your new build.

How do I find a report page/logs/etc.?

 See the section about Navigating Jenkins.

How do I manually start a build or rerun my tests?

You can rerun tests by making a comment on your pull request. Refer to the Quick Reference. (Note that the trigger comments are case-sensitive.)

What if I don't want tests to run automatically for a new commit on my PR?

You can include the text "[skip ci]" in your commit message. This will prevent a new build from starting automatically. If you decide later that you want tests to run, the trigger comments will still work.
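As a sketch (assuming git is installed), a commit whose message contains "[skip ci]" looks like this, demonstrated in a throwaway repository:

```shell
# Create a throwaway repo and make a commit whose message
# contains "[skip ci]" so the automatic CI build is skipped.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo "change" > file.txt
git add file.txt
git commit -q -m "Fix typo in docs [skip ci]"
git log -1 --pretty=%B   # shows the commit message
```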

Can I rerun just some of the tests on Jenkins?

Yes. There are two common cases:

  1. Part of a build fails for reasons unrelated to your changes (e.g. a GitHub issue, or a flaky test) and you want to rerun just the part of the build that failed. Use the trigger comments in the Quick Reference to rerun a single context.

  2. You are working on a new test or a fix for a broken test and want to confirm your changes work without running a whole build. Use the manually started builds for specific tests, described below.
As an open source contributor, how can I use this documentation? 

This documentation outlines how to start builds and review them. Just pay attention to the sections about permissions required to do certain things, and note that some of the screenshots may look slightly different.

Where can I find more information about how this all works?

These slides from a lightning talk on edx-platform continuous integration should serve as a good overview.

Starting and checking the status of builds

There are four ways that builds start in our Jenkins testing infrastructure for edx-platform.

1) Automatically started builds for Pull Requests
    • Permissions Required: You must be on a whitelist. If you are a public member of the edx organization on github, then you are already on the whitelist. If you are not, someone will assist you during the pull request review process. You will still be able to view the build results as described below.

    • How it gets started

      • When you submit a pull request to the edx-platform repository, a jenkins build will automatically start and run tests (unit, acceptance, etc) at the most recent commit.
      • When you add a new commit to the PR, a new build will be run for those changes.
      • Sometimes it may take a little while for the build to start. That usually just means that Jenkins is pretty busy.
    • How it is reported

      • You will know a build is started if you see something like below.  This currently says "Merged build started" instead of "Waiting to hear about ...", and the build log may say something like "{CommitHash} automatically merged".  That is just the language used by the plugin used to trigger builds. Don't worry – it is not actually merging anything.  We are working on making this language more clear. 

      • When it is finished you will see either a green checkmark or a red X, indicating that the build either passed or failed respectively.

      • Use the show all checks link to show more details about the status of the build. 

      • When hovering over the icon next to the commit hash you'll see a summary of the build status: 
      • You can also see more details by clicking the icon next to the commit hash. 
2) Manually started builds for pull requests 
    • This approach lets a person who has access run builds for their own PRs.

    • It can likewise be used to re-run tests on a commit hash that has already been tested.

    • The permissions for this are the same as for automatically triggered pull request builds. (See above.)

    • How it gets started

      • Builds can be manually triggered via comments on an open pull request. Note that the comment triggers are case sensitive. (Use all lower-case.)  

      • To rerun tests for specific contexts, use one of the following:

          1. jenkins/quality

            • rerun with "jenkins run quality" 

          2. jenkins/python

            •  rerun with "jenkins run python" 

          3. jenkins/js

            • rerun with "jenkins run js"

          4. All contexts

            • rerun with "jenkins run all"

      • This will trigger a build on the most recent commit in the pull request for the specified context(s).

    • These builds are reported in the same way as automatically triggered pull request builds are. (See above.) 

    • TIP: Add an email filter to ignore emails from github for comments of this type.

3) Automatically started builds for new commits to the 'master' branch
    • A build is started whenever there is a new commit on the 'master' branch.
    • To see recent builds of 'master' look at the edx-platform-master-tests view.
    • To see current status of the master branch, look at the edx-eng-dashboard.
4) Manually started builds for specific bok choy or python unit tests
    • How it gets started

      1. Go to edx-platform-specific-tests.
      2. Make sure you are logged in.
      3. Fill in the required fields. (Descriptions are provided in the form.)
      4. Click 'Build'.
    • How it is reported

      • When you start the build, it will redirect you to the log page. You can watch this page for results.
      • The results will not be posted to github.
      • Note that this doesn't produce coverage reports. It only runs the specified test.

Navigating Jenkins

Getting to the right build via GitHub

Use the 'details' links next to the commit hash or near the 'merge' button on the PR.

Build Graphs

The edx-platform builds use the Jenkins Build Flow Plugin to coordinate the various steps of each build.  To see a graph of the various build steps click on the 'Build Graph' link.


These graphs will look different depending on the context of the build.  As an example, the build graph for python tests looks something like this:

Coverage Reports

Note: Coverage reports will only be generated when all of the python unit tests pass.

    • How to find:

Use the 'Details' link next to 'Jenkins CI: Python Tests' on GitHub to get to the Jenkins build page for python tests.


      If the build is completed successfully, on the left side you will see:

      • 'Report', which shows the HTML reports produced by the coverage package.
      • 'Diff Coverage Report', which shows the coverage of new lines of code.

Code Quality Reports

These are generated regardless of test results, but the quality check will fail if you have introduced new pep8 or pylint violations. It will also fail if the total number of violations is above a certain threshold.

Also see XSS Linter Violations for xsslint or xsscommitlint.

    • How to find:

Use the 'Details' link next to 'Jenkins CI: Quality Tests' on GitHub to get to the Jenkins build page for code quality checks.

      Once the build is complete, on the left side you should see:

      • 'Diff Quality' which shows new violations
      • 'Violations' which shows all violations

      Alternatively, under 'Build Artifacts', you should see the html reports.

      • "diff_quality_pep8.html" and "diff_quality_pylint.html" are probably the most useful to look at. These show how many new violations have been introduced in your changes.
      • If you don't see these here, click on 'Build Artifacts' and check in 'reports/diff_quality'.
      • You can also download the full reports (files ending in '.report'). These show all of the found violations, not just the newly introduced ones.
Test Results

Finding results for individual tests

      • If your build fails, the build steps that had failures will have red headers on the build graph.

      • The 'Test Result' link will take you to a summary of the test results (below).

      • From this page you can click through the test names to get to the console log for each test.

Console output for a build step

Examining the console output of a build can provide more information about the test results. For example, sometimes a worker will have an error during setup for the tests. If this happens, you will see a failure reported for that build step, but no individual test failures reported (since the tests did not run). In this case, checking the console log can be helpful for determining the reason for the build failure. It can sometimes be useful to review even if there is an individual test failure.

To find it, click on the 'Console Output' link in the left navigation column.  When you are debugging a failed build, you should check the build graph to find the red boxes and click the link on them.  Each sub-build will also have a console output.

Troubleshooting failing jenkins builds

Check the logs to find out why your build failed.
    • See 'Console output for a build step' above for how to find the console log.
    • If the log shows the build erroring out before tests begin running, report this to the test engineering team. Be sure to share the link to your build log.
Run tests on devstack
    • If a build fails on Jenkins, try to reproduce and debug the failures in devstack. See the testing guide for how to run the tests.
    • For more debugging tips, see the testing FAQ. There you'll find tips for:

      • setting breakpoints in tests
      • visually debugging acceptance tests
      • running tests from a single file
      • and more..
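For instance, running tests from a single file typically looks like the sketch below (shown here with a throwaway stdlib unittest file; the exact edx-platform invocation is in the testing guide):

```shell
# Write a minimal test file and run just that one file,
# rather than the whole suite.
# (edx-platform's real invocation differs; see its testing guide.)
tmp=$(mktemp -d)
cat > "$tmp/test_sample.py" <<'EOF'
import unittest

class SampleTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main()
EOF
python3 "$tmp/test_sample.py"
```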

   "What's devstack?" ... See here

Check the 'flaky test' list
    • Known flaky tests can be found with filters on the openedx Jira site
    • Remember that a test being listed as flaky doesn't mean that it can't fail for other reasons. Look into the logs and confirm that it is failing for the same reason as listed in the issue ticket.
    • If your build has a failure caused by a recently resolved flaky test, try rebasing from master. (A new auto-PR build will start when you do this on an open PR.)
Check if the failure is occurring on the master branch
    • Tests run for the master branch are here. You can inspect them the same way you would a PR build.

Jenkins performance metrics

We collect assorted information about Jenkins job runs in Splunk, where this data is aggregated and presented in a few useful dashboards.  (Note that you need to be connected to the edX VPN to be able to access this Splunk server.)  Once logged in, you can access the following information:

This data is useful in determining what's using up the most time in test runs, so we can decide what to optimize next (and determine if such optimizations are effective once implemented).