
Action items:

  • Discuss with DEVOPS increasing RAM.
  • Wait until the Mongo instance for forums is in house.
    • Get the list of most-used queries.

Notes:

  • Every query we run should have a usable index; slow queries reduce overall performance.
  • Obtaining a list of queries would be ideal for index creation. The current indexes were not designed with index intersection in mind.
  • A common trend among the slow queries is `count`.
  • All observed error rates are due to the decrease in timeout values. https://github.com/edx/cs_comments_service/pull/146
  • Index selectivity still needs some work.
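Index selectivity can be estimated offline from a sample of documents: the more distinct values a field has, the fewer documents each key narrows the scan to. A minimal sketch (pure Python; the `selectivity` helper and the sample shape are illustrative, not part of any driver):

```python
def selectivity(docs, field):
    """Fraction of distinct values for `field` across the sample.
    Near 1.0: each value identifies few documents (highly selective).
    Near 0.0: many documents share each value (poor standalone index)."""
    values = [d.get(field) for d in docs]
    return len(set(values)) / len(values)

# Toy sample shaped like forum content documents.
sample = (
    [{"_type": "CommentThread", "course_id": "c1", "author_id": str(i)}
     for i in range(90)]
    + [{"_type": "Comment", "course_id": "c2", "author_id": "1"}
       for _ in range(10)]
)

print(selectivity(sample, "author_id"))  # 0.9: good index candidate
print(selectivity(sample, "_type"))      # 0.02: poor standalone index
```

Low-selectivity fields such as `_type` only pay off as leading keys when combined with selective fields in a compound index.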

Improvements:

After applying the index on Nov 11 at 11:05, we see the expected significant improvement.

Regressions:

After applying the index on Nov 11 at 11:05, the 99th percentile began to time out for the users endpoint. After inspecting the index on the read replica, it is not clear why this happened. The slow query behind these timeouts is:

COMMAND      database=comments-prod command={:count=>"contents", :query=>{"context"=>"course", "author_id"=>"8567599", "course_id"=>"course-v1:KULeuvenX+EUHURIx+3T2015", "anonymous"=>false, "anonymous_to_peers"=>false, "_type"=>{"$in"=>["CommentThread"]}}}

When running the query against the read replica, the expected index is used. Removing `author_id_1` should not have affected this query, because the compound index `author_id_1_course_id_1` would be used instead. The median remains the same and the 95th percentile did not spike like the 99th, so the proper indexes appear to be in place; otherwise the median would have regressed as well. One possible explanation is the increase in index size described under Index size below.
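The reasoning above relies on MongoDB's compound-index prefix rule: a query with equality predicates can use a compound index whose leading keys are all covered by the query. A simplified sketch of that rule (pure Python; `longest_covered_prefix` is a hypothetical helper that ignores sort order and range predicates):

```python
def longest_covered_prefix(query_fields, index_keys):
    """Length of the longest index prefix whose keys all appear as
    equality predicates in the query. 0 means the index is unusable
    for this query shape (simplified MongoDB prefix rule)."""
    n = 0
    for key in index_keys:
        if key in query_fields:
            n += 1
        else:
            break
    return n

# Shape of the slow count query above (equality fields only).
query = {"author_id", "course_id"}

print(longest_covered_prefix(query, ["author_id", "course_id"]))  # 2: fully covered
print(longest_covered_prefix(query, ["author_id"]))               # 1: the removed single-field index
print(longest_covered_prefix(query, ["_type", "author_id"]))      # 0: leading key not in query
```

This is why dropping `author_id_1` should be harmless here: `author_id_1_course_id_1` covers a longer prefix of the same query.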

Index size:

The indexes we added take up more RAM than the ones removed. In addition, the `delete_spam` index that was removed had a size of 0, so it was not contributing to performance anyway. Overall, roughly another 1 GB of indexes was added.
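The ~1 GB estimate can be checked by summing the `indexSizes` maps that `db.contents.stats(1024*1024)` returns (sizes in MB, as in the dumps below). A quick sketch with an illustrative subset of the reported values (`index_delta_mb` is a hypothetical helper; real prod and loadtest sizes differ, so this only shows the arithmetic):

```python
def index_delta_mb(before, after):
    """Net change in total index size (MB) between two indexSizes maps."""
    return sum(after.values()) - sum(before.values())

# Subset of sizes from the stats output below.
before = {
    "author_id_1": 329,       # removed single-field index
    "delete_spam_1": 0,       # removed, but zero-size anyway
    "author_id_1_course_id_1": 496,
}
after = {
    "author_id_1_course_id_1": 494,
    "_type_-1_course_id_1_commentable_id_1_context_1_pinned_-1_created_at_-1": 780,
    "_type_-1_course_id_1_endorsed_-1_pinned_-1_votes.point_-1_created_at_-1": 725,
}

print(index_delta_mb(before, after))  # 1174 MB of net growth for this subset
```

Running the same sum over the full `indexSizes` maps gives the actual RAM delta to raise with DEVOPS.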

 db.contents.stats(1024*1024) prod

{
"ns" : "comments-prod.contents",
"count" : 7567251,
"size" : 5808,
"avgObjSize" : 804,
"storageSize" : 6624,
"numExtents" : 29,
"nindexes" : 23,
"lastExtentSize" : 511,
"paddingFactor" : 1,
"systemFlags" : 1,
"userFlags" : 0,
"totalIndexSize" : 11907,
"indexSizes" : {
"_id_" : 328,
"parent_id_1" : 269,
"parent_ids_1" : 278,
"tags_array_1" : 253,
"author_id_1_course_id_1" : 494,
"votes.up_1__type_1" : 475,
"votes.down_1__type_1" : 467,
"commentable_id_1_created_at_-1" : 461,
"course_id_1__type_1_created_at_-1" : 736,
"_type_1_comment_thread_id_1_author_id_1_updated_at_1" : 489,
"comment_thread_id_1_sk_1" : 301,
"comment_thread_id_1_endorsed_1" : 142,
"_type_1_course_id_1_pinned_-1_created_at_-1" : 751,
"_type_1_course_id_1_pinned_-1_comment_count_-1_created_at_-1" : 793,
"_type_1_course_id_1_pinned_-1_votes.point_-1_created_at_-1" : 851,
"commentable_id_1" : 190,
"_type_1_course_id_1_context_1_pinned_-1_created_at_-1" : 709,
"_type_1_context_1" : 348,
"_type_-1_course_id_1_context_1_pinned_-1_last_activity_at_-1_created_at_-1" : 701,
"_type_-1_course_id_1_commentable_id_1_context_1_pinned_-1_created_at_-1" : 780,
"_type_-1_course_id_1_endorsed_-1_pinned_-1_last_activity_at_-1_created_at_-1" : 677,
"_type_-1_course_id_1_endorsed_-1_pinned_-1_votes.point_-1_created_at_-1" : 725,
"_type_-1_course_id_1_endorsed_-1_pinned_-1_comment_count_-1_created_at_-1" : 679
},
"ok" : 1
}

 db.contents.stats(1024*1024) loadtest (indexSizes excerpt)

"_id_" : 290,
"parent_id_1" : 296,
"parent_ids_1" : 200,
"comment_thread_id_1" : 319,
"author_id_1" : 329,
"tags_array_1" : 293,
"last_activity_at_-1" : 245,
"author_id_1_course_id_1" : 496,
"votes.up_1__type_1" : 630,
"votes.down_1__type_1" : 596,
"commentable_id_1_created_at_-1" : 548,
"course_id_1__type_1_created_at_-1" : 933,
"_type_1_comment_thread_id_1_author_id_1_updated_at_1" : 607,
"comment_thread_id_1_sk_1" : 296,
"comment_thread_id_1_endorsed_1" : 142,
"_type_1_course_id_1_pinned_-1_created_at_-1" : 948,
"delete_spam_1" : 0,
"_type_1_course_id_1_pinned_-1_comment_count_-1_created_at_-1" : 1003,
"_type_1_course_id_1_pinned_-1_votes.point_-1_created_at_-1" : 1057,
"commentable_id_1" : 254,
"_type_1_course_id_1_context_1_pinned_-1_created_at_-1" : 1010,
"_type_1_commentable_id_1_context_1_pinned_-1_created_at_-1" : 784,
"_type_1_context_1" : 425,
"_type_-1_course_id_1_context_1_pinned_-1_last_activity_at_-1_created_at_-1" : 998,
"_type_1_course_id_1_context_1" : 860

Observations:

 Below is a compound index of ours that is not selective enough. Whether that is due to poor index design or to our queries is a separate question. This appears to be the default view for the browser forums. Another option may be to limit the results.

 non-selective index

db.contents.find({"_type":"CommentThread", "course_id":"BerkeleyX/GG101x/1T2014", "context":"course"}).explain()
{
"cursor" : "BtreeCursor _type_-1_course_id_1_context_1_pinned_-1_last_activity_at_-1_created_at_-1",
"isMultiKey" : false,
"n" : 76481,
"nscannedObjects" : 76481,
"nscanned" : 76481,
"nscannedObjectsAllPlans" : 77520,
"nscannedAllPlans" : 77531,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 6691,
"nChunkSkips" : 0,
"millis" : 85812,
"indexBounds" : {
"_type" : [
[
"CommentThread",
"CommentThread"
]
],
"course_id" : [
[
"BerkeleyX/GG101x/1T2014",
"BerkeleyX/GG101x/1T2014"
]
],
"context" : [
[
"course",
"course"
]
],
"pinned" : [
[
{
"$maxElement" : 1
},
{
"$minElement" : 1
}
]
],
"last_activity_at" : [
[
{
"$maxElement" : 1
},
{
"$minElement" : 1
}
]
],
"created_at" : [
[
{
"$maxElement" : 1
},
{
"$minElement" : 1
}
]
]
},
"server" : "c0:27017",
"filterSet" : false
}
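The explain output above quantifies the problem: `nscanned` equals `n`, so the index bounds are tight and every scanned document is returned, yet 76,481 results still take 85,812 ms. A small sketch over the explain fields (pure Python; `scan_efficiency` is a hypothetical helper) separates the two failure modes:

```python
def scan_efficiency(explain):
    """Documents scanned per document returned; 1.0 is ideal.
    A ratio near 1.0 with a large n means the index matches well
    but the result set itself is huge, so limiting results
    (rather than re-indexing) is the likely fix."""
    return explain["nscanned"] / explain["n"]

# Values from the explain() output above.
explain = {"n": 76481, "nscanned": 76481, "millis": 85812}

print(scan_efficiency(explain))          # 1.0: index bounds are tight
print(explain["millis"] / explain["n"])  # ~1.1 ms per returned document
```

A high ratio would point at a missing or non-selective index; a ratio of 1.0 with tens of thousands of results, as here, points at applying a limit() on the default forum view instead.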
