Open edX Policy for Generative AI Tools

Generative AI tools can provide significant assistance with programming tasks and are quickly being adopted across the software industry. Because these tools are experiencing explosive growth, there are a number of open questions about the data that models are trained on and how that training is reflected in their output. This document outlines guidelines for the use of generative AI tools in the Open edX project and applies to all code contributions to the project.

Ownership

The contributor remains obligated to comply with all terms of their Contributor License Agreement (CLA), including without limitation having a legal right to license the code to Axim.

Code that has been created by generative AI tools must be directly modified by the author before it can qualify as “owned” by the contributor and be accepted into the project under their CLA. Axim is unable to advise on whether any set of human-made changes is “adequate” to meet this test; contributors should use their best judgment and, if needed, consult their own legal counsel.

Acceptable Tools

Currently, only the following tools are allowed, though more may be added over time. If you would like to use a tool that is not on this list, please file a ticket with Axim Engineering and wait for approval before opening a pull request that includes output from that tool.

  • Microsoft Copilot (all models and versions)

  • Anthropic Claude (all models and versions)

  • OpenAI (all models and versions)

Contributors should enable tool features that can reduce copyright risk (e.g., limiting the length of results). For example, see the mitigations described on this page about Copilot, and look for similar options in whichever GAI tools are used.

For additional information about using GitHub Copilot for code reviews, please see Open edX Policy for GitHub Copilot Code Reviews.

Transparency

Contributors who submit code that was supported by GAI tools must (1) identify the tool they used and (2) briefly describe the work they did on top of the tool's output to demonstrate human intervention. Teams reviewing the PRs may have reasonable questions, and contributors should be prepared to collaborate.