Core Product Feature Research & Analysis

This documentation area is intended to serve as a repository of feature research and evaluation, with a particular focus on comparator platform research to assist with core platform strategy development.

How was this research conducted?

The documents found within were created by reviewing a wide range of public resources for the chosen comparator platforms. These include:

  • Platform documentation

  • Marketing blog posts (“Try our new drag-and-drop feature!” for example)

  • Community forums

  • Partner sites (for example, IT and L&D departments at institutions telling their staff how to use these tools)

  • Third-party vendor documentation

  • Sandbox environments (for first-hand experimentation)

A variety of methods were explored, but by the end, the research boiled down to a few key questions:

  • How do our comparator platforms implement this feature?

    • What user experience do they offer?

  • How do we currently implement this feature?

    • What do we do right?

    • What do we do wrong?

  • What does the ideal version of this feature look like?

    • What does it do? How does it work?

    • What user stories does the ideal version of this feature address?

This is not all purely objective. For example, the ideal version of a feature, as written, is based on my own experiences, the way a feature was implemented on comparator platforms, and the opinions of community members, both in our community and in the communities of our comparator platforms. If five people on the Canvas forums mentioned wanting a particular piece of functionality added to a feature, there are good odds their desire is reflected in a user story for the ideal version of that feature. That does not mean those people are objectively correct about what they want; it may actually be a bad idea, even if they think they want it. Do not treat this research, or the conclusions drawn within, as set in stone.

Why were the comparator platforms chosen?

For the initial phase of this research, a small number of comparator platforms were chosen. These were:

  • Canvas - Canvas was chosen because it delivers a solid experience for on-campus learning and teaching, particularly in the K-12 market. While Open edX has a different product focus and is not intended to replace Canvas at most institutions, it commonly exists alongside Canvas, and needs to deliver a comparable instructor experience to coexist smoothly for those users. Canvas also has some tools that are just genuinely well-made, and those are worth learning lessons from.

  • Moodle - Moodle is a giant in the traditional LMS space, largely because it can do a bit of everything. It is worth looking at because it has a basic implementation of just about anything you might want an LMS to do, even if the user experience is exceptionally poor in most cases. Moodle also has a top-down, hierarchical, granular permissions system that provides a great deal of insight into the needs of platform administrators.

  • Skilljar - Skilljar is a much smaller platform that is increasingly prevalent in corporate customer education. It has a modern user experience and tools entirely focused on self-paced, solo learning. This makes it an effective foil to Canvas, which is far more focused on augmenting the classroom experience. I have personally seen Skilljar beat out Open edX directly in the corporate sales process, so I wanted to include it to learn how we win that matchup as a platform. The answer, for those interested, lies in academic rigor: Skilljar is a good piece of software, but it doesn't offer much in the way of baked-in L&D best practices, assignment types and interactivity, or plain old help for authors creating courses that are more than video playlists.

  • Coursera - Coursera is an obvious competitor for those trying to deliver an edX.org-style MOOC platform, which is theoretically our strongest use case given the platform’s background. Coursera is mostly a black box, and not much about it is publicly available, so if you have first-hand knowledge of Coursera’s tools, this is a good place for you to contribute. What research exists at the time of writing is based on unsecured partner documentation (try searching “filetype:pdf” sometime; you’ll be surprised what people leave lying around). An example of that kind of query follows this list.
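To illustrate the search technique mentioned in the Coursera entry: the filetype: operator restricts search results to a given file format, and quoted terms require an exact phrase match. A query shaped like the following (the specific terms here are hypothetical, chosen only for illustration, not a pointer to any known document) would surface publicly indexed PDFs mentioning both terms:

    coursera "instructor guide" filetype:pdf

Partner institutions sometimes upload internal guides like these to public web servers, where search engines index them alongside everything else.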

These aren’t the only platforms you’ll see in this documentation, especially as it grows; in some cases, other tool-specific platforms were investigated as well, such as form tools when discussing polls and surveys. Nor should the research be limited to these platforms in the longer term. If you have in-depth knowledge of how a platform works, or you’ve seen a version of one of these tools that you think shines, please contribute what you know!

Contributing to this Documentation

If you have insights, information, screenshots, or knowledge that can add to this research, please contribute it to these docs, either directly or in the form of a comment for future review! If you read something that you can confirm is incorrect due to advancements in our products or those of comparable platforms, please correct the error, or at least leave a comment with what you know. This is intended as a community resource, for the betterment of the platform and to assist future product development efforts. The more accurate it is, the better it is for everyone.

Contents