JIRA: MA-1099 (openedx.atlassian.net)

Table of Contents:

Goals: 

  1. Understand the load we can handle with the Discussion API when the mobile app is released.
    1. What can the server handle?
  2. Understand the overhead between the Discussion API and the Ruby forums code.
    1. Does the Discussion API perform better, worse, or on par with the browser forums?
    2. What does forums performance look like in general?

...

Testing Strategy:

Originally the plan was to isolate each endpoint and determine what kind of load it can handle, but after analyzing the data, some of these endpoints seem unnecessary to isolate for a load test. These include DELETE and PATCH, which make up a very small part of the overall load in production. The isolated test for each of these endpoints will be paired with the appropriate GET Thread/Comment request. For example, every DELETE Thread request requires a thread_id. We obtain this thread_id by calling GET Thread List with randomized parameters, which returns a list of threads from which one is randomly selected. This selected thread is then DELETEd. Below is the chart of the additional requests we make. As long as the ratio of how many of these requests happen in each task is understood, we can get the desired endpoint distribution.

Request | Requires | Returns | Order of requests
GET Thread | thread_id | Thread | Taken from thread_id pool
GET Thread List | (none) | Thread List | GET Thread List
GET Comment List | thread_id | Comment List | GET Thread List → GET Comment List
POST Thread | course_id | Thread | POST Thread
POST Response | thread_id | Comment | GET Thread List → POST Response
POST Comment | comment_id | Comment | GET Thread List → GET Comment List → POST Comment
PATCH Thread | thread_id | Thread | GET Thread List → PATCH Thread
PATCH Comment | comment_id | Comment | GET Thread List → GET Comment List → PATCH Comment
DELETE Thread | thread_id | No Content | GET Thread List → DELETE Thread
DELETE Response | comment_id | No Content | GET Thread List → POST Response → GET Comment List → DELETE Response*
DELETE Comment | comment_id | No Content | GET Thread List → GET Comment List → POST Comment → DELETE Comment*
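To make the pairing concrete, here is a minimal sketch of the GET Thread List → DELETE Thread pairing using python-requests. The base URL, authentication, pagination parameters, and response field names are assumptions for illustration; the actual load-test task would follow the same shape.

```python
import random
import requests

API_ROOT = "https://lms.example.com/api/discussion/v1"   # assumed base URL
COURSE_ID = "course-v1:LoadTest+DISC+2015"               # placeholder course id

session = requests.Session()
# Authentication is deployment-specific and omitted here.

def delete_random_thread():
    """GET Thread List with randomized parameters, pick one thread, DELETE it."""
    resp = session.get(
        f"{API_ROOT}/threads/",
        params={"course_id": COURSE_ID,
                "page": random.randint(1, 10),   # randomized parameter
                "page_size": 10},
    )
    resp.raise_for_status()
    threads = resp.json().get("results", [])     # field name assumed
    if not threads:
        return
    thread_id = random.choice(threads)["id"]
    # DELETE Thread returns 204 No Content on success.
    session.delete(f"{API_ROOT}/threads/{thread_id}/").raise_for_status()
```

Counting the GET Thread List call made inside each such task is what lets us keep the overall endpoint distribution where we want it.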

...

  • Pin Thread - Not implemented
  • Open/Close Thread - Not implemented
  • Endorsed - Not implemented

Course topics - This will be addressed at another time. 


...

Endpoints: 

Usage patterns to look out for:

  • The default page size for the browser is 25, while the mobile app will use 10, so more requests may be sent for the same amount of information (see the quick calculation after this list).
  • Push notifications
    • These can create a different usage pattern to look out for: if there is a popular thread, bursts of requests can be expected.
    • Forum usage may also increase, since the browser currently has no notifications.
  • The browser can display Threads, Responses, and Comments all at once, while the mobile app treats all three as separate views, so more requests may be sent for the same amount of information.
  • General usage: discussions on mobile could naturally increase discussion forum usage.
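To make the page-size difference concrete, a quick back-of-the-envelope calculation (illustrative numbers only, not measured traffic):

```python
import math

def list_requests_needed(total_items: int, page_size: int) -> int:
    """How many list requests it takes to page through total_items."""
    return max(1, math.ceil(total_items / page_size))

# Paging through the same 100 threads:
print(list_requests_needed(100, 25))  # browser default page size -> 4 requests
print(list_requests_needed(100, 10))  # mobile page size -> 10 requests
```

In the worst case (users paging all the way through a listing), the mobile page size produces 2.5x as many list requests for the same content.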

/threads/

GET: 

...

JIRA: MA-1102 (openedx.atlassian.net)

Seeding Data: 

Course Structure Setup:

    A tarfile with a very simple setup will be used for each load test. This course was created in Studio and then exported. When seeding data, this tarfile is used as the skeleton for course creation.

...

Using this data, we were able to get an idea of what a course might look like. Most notably, the largest comment_count (comments and responses) for a thread is 5907, and the median appears to be 1. Although that value is an outlier, each course has an "Introduce yourself" topic, which consistently puts a thread with a high comment_count in each course. Also, when thinking about mobile usage, push notifications could create a different usage pattern in which these high-comment_count threads see large spikes in traffic.

...

Test details and importance 

Since the request distribution is heavily skewed, the individual endpoint tests are categorized based on how often each request is hit.

...

  • Each thread has a ~250-character body
  • Of the 1000 threads created:
    • 200 have no comments
    • 300 have some sort of flag (abused/voted/following)
    • 100 have a response and a comment
    • 500 have a response
    • 200 will be of the type "question"
  • Of the response-heavy threads:
    • n threads will be created with a response that has n*20 comments (this could change)

    In addition to this test, courses of different sizes will be created and tested against as well, since we expect course size to affect performance. A rough seeding sketch follows below.
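The sketch below shows one way the seeding could be done by POSTing directly to the Discussion API. The base URL, topic_id, and field names such as raw_body are assumptions, and the real seeding script may talk to the comments service directly instead; it also covers only part of the distribution above.

```python
import random
import requests

API_ROOT = "https://lms.example.com/api/discussion/v1"   # assumed base URL
COURSE_ID = "course-v1:LoadTest+DISC+2015"               # placeholder course id
BODY = "x" * 250                                         # ~250-character body

session = requests.Session()  # authentication omitted; deployment-specific

def post_thread(thread_type):
    resp = session.post(f"{API_ROOT}/threads/", json={
        "course_id": COURSE_ID,
        "topic_id": "course",        # assumed topic id from the skeleton course
        "type": thread_type,         # "discussion" or "question"
        "title": "seeded thread",
        "raw_body": BODY,            # field name assumed
    })
    resp.raise_for_status()
    return resp.json()["id"]

def post_comment(thread_id, parent_id=None):
    payload = {"thread_id": thread_id, "raw_body": BODY}
    if parent_id:
        payload["parent_id"] = parent_id   # nests a comment under a response
    resp = session.post(f"{API_ROOT}/comments/", json=payload)
    resp.raise_for_status()
    return resp.json()["id"]

# 1000 threads, 200 of them of type "question".
thread_ids = [post_thread("question" if i < 200 else "discussion")
              for i in range(1000)]

# 500 threads get a response; 100 of those also get a nested comment.
for i, thread_id in enumerate(random.sample(thread_ids, 500)):
    response_id = post_comment(thread_id)
    if i < 100:
        post_comment(thread_id, parent_id=response_id)
```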

GET Comment (Response is depth=1, comment is depth=2) - This test will cover the expected edge cases of a thread. It is important to note that although the largest comment_count is ~5000, the ratio of responses to comments is unknown.

  • Each response/comment has a ~250-character body
  • Each response will have 20*n comments (could change)

Less important:

POST Thread/Comments - Expected to be a constant load; this test will simply POST threads.

PATCH Comments/Threads - Will use the same setup as GET Thread. This test will modify fields such as "abuse_flagged", "following", "voted", and "body".
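A minimal sketch of a single PATCH using python-requests. The base URL is a placeholder, and the merge-patch content type is an assumption; the field names are the ones listed above.

```python
import json
import requests

API_ROOT = "https://lms.example.com/api/discussion/v1"   # assumed base URL
session = requests.Session()                             # authentication omitted

def patch_thread(thread_id, field, value):
    """Modify a single field on a thread, e.g. abuse_flagged, following, voted, body."""
    resp = session.patch(
        f"{API_ROOT}/threads/{thread_id}/",
        data=json.dumps({field: value}),
        # Content type is an assumption; the API may expect merge-patch semantics.
        headers={"Content-Type": "application/merge-patch+json"},
    )
    resp.raise_for_status()
    return resp.json()

# Example: flag a thread for abuse, then un-flag it.
# patch_thread("some_thread_id", "abuse_flagged", True)
# patch_thread("some_thread_id", "abuse_flagged", False)
```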

Insignificant:

DELETE Comment/Thread - These endpoints are hit significantly less than the other endpoints. If run individually, threads/comments will be created specifically to be deleted. Refer to "Testing Strategy" for more information.

Flowtest:

The flowtest exercises all of the endpoints while simulating the expected production usage patterns. The data used will be the same as for GET Threads, where we have ~2000 posts. Threads and comments will be created and deleted appropriately, as all of the endpoints are configured to work together.
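A minimal sketch of how the flowtest's weighted tasks could be expressed in Locust (modern Locust API). The weights are illustrative rather than the measured production distribution, and the host, endpoint paths, and field names are assumptions.

```python
import random
from locust import HttpUser, task, between

COURSE_ID = "course-v1:LoadTest+DISC+2015"   # placeholder course id
API = "/api/discussion/v1"                   # assumed API root path

class DiscussionUser(HttpUser):
    host = "https://lms.example.com"         # assumed host
    wait_time = between(1, 5)                # think time between requests

    @task(10)
    def read_thread_list(self):
        self.client.get(f"{API}/threads/",
                        params={"course_id": COURSE_ID, "page_size": 10})

    @task(5)
    def read_comment_list(self):
        thread_id = self._random_thread_id()
        if thread_id:
            self.client.get(f"{API}/comments/",
                            params={"thread_id": thread_id, "page_size": 10})

    @task(1)
    def post_thread(self):
        self.client.post(f"{API}/threads/", json={
            "course_id": COURSE_ID,
            "topic_id": "course",            # assumed topic id
            "type": "discussion",
            "title": "flow test thread",
            "raw_body": "x" * 250,
        })

    def _random_thread_id(self):
        resp = self.client.get(f"{API}/threads/",
                               params={"course_id": COURSE_ID, "page_size": 10})
        results = resp.json().get("results", [])     # field name assumed
        return random.choice(results)["id"] if results else None
```

Adjusting the @task weights is how the measured production ratios from the request-distribution analysis would be reflected in the flowtest.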