High CPU Usage with MongoDB?

0 votes
522 views

I have a problem with mongod's high CPU usage. We receive 3,000 documents per second, and I insert them in batches of 100,000 (inserted with the C++ driver, passing a vector).
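For context, here is a minimal sketch of that kind of batched insert, assuming the mongocxx driver and hypothetical database/collection names (not necessarily the exact code in question):

    #include <vector>

    #include <bsoncxx/builder/basic/document.hpp>
    #include <bsoncxx/builder/basic/kvp.hpp>
    #include <mongocxx/client.hpp>
    #include <mongocxx/instance.hpp>
    #include <mongocxx/options/insert.hpp>
    #include <mongocxx/uri.hpp>

    int main() {
        using bsoncxx::builder::basic::kvp;
        using bsoncxx::builder::basic::make_document;

        mongocxx::instance inst{};  // one driver instance per process
        mongocxx::client client{mongocxx::uri{"mongodb://localhost:27017"}};
        auto coll = client["mydb"]["samples"];  // hypothetical names

        // accumulate incoming readings, then hand the whole vector to insert_many
        std::vector<bsoncxx::document::value> batch;
        batch.reserve(100000);
        for (int i = 0; i < 100000; ++i) {
            batch.push_back(make_document(kvp("seq", i), kvp("value", 3.14)));
        }

        // unordered inserts let the server apply the batch without stopping on errors
        mongocxx::options::insert opts;
        opts.ordered(false);
        coll.insert_many(batch, opts);
    }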

I expected the CPU load to stay fairly even, but I intermittently see high CPU from mongod, even though the currentOp() command shows nothing running. Why does the CPU usage look like this?

The insert volume in the 459~742 second range is no larger than before, but mongod's CPU usage is much higher.

What could be the cause of this?

posted Apr 17, 2017 by Kaushik


Similar Questions
+1 vote

I have a collection with 127,706 documents. My aggregation pipeline has two $group stages, and it returns results in 1.5 seconds.

To optimize it further, I created indexes on the fields used in the $match stages, with no success. Is there any other way to improve the aggregation performance?

I am using MongoDB 3.2.1.
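The usual approach (a sketch, assuming the mongocxx driver and hypothetical field names) is to put the indexed $match first so the index prunes documents before the $group stages run:

    #include <iostream>

    #include <bsoncxx/builder/basic/document.hpp>
    #include <bsoncxx/builder/basic/kvp.hpp>
    #include <bsoncxx/json.hpp>
    #include <mongocxx/client.hpp>
    #include <mongocxx/instance.hpp>
    #include <mongocxx/pipeline.hpp>
    #include <mongocxx/uri.hpp>

    int main() {
        using bsoncxx::builder::basic::kvp;
        using bsoncxx::builder::basic::make_document;

        mongocxx::instance inst{};
        mongocxx::client client{mongocxx::uri{}};
        auto coll = client["mydb"]["orders"];  // hypothetical names

        // index the field used by the leading $match stage
        coll.create_index(make_document(kvp("status", 1)));

        mongocxx::pipeline p;
        p.match(make_document(kvp("status", "shipped")));  // filter first, via the index
        p.group(make_document(kvp("_id", "$customerId"),
                              kvp("total", make_document(kvp("$sum", "$amount")))));

        for (auto&& doc : coll.aggregate(p)) {
            std::cout << bsoncxx::to_json(doc) << "\n";
        }
    }

Only a $match at the start of the pipeline can use an index; once documents reach a $group stage the index no longer helps, so any gain has to come from reducing how many documents enter the pipeline.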

+2 votes

We are seeing some issues with iowait. On a machine with 16 cores, 60 GB of memory, and an SSD (max I/O around 250 MB/s; a virtual machine on Google), top constantly shows around 20% iowait, and iotop shows it is the MongoDB process. Only around 10 MB/s is actually being read from disk, and as I said, we know from other tests that the machine can do much, much more.

The query producing this pattern sorts its result set by an attribute (everything involved is indexed). After running queries like this a few times it performs better. The total index size is around 4 GB, so I assume the indexes get loaded into memory directly.

Are there any suggestions on how to debug this? I can't see how there can be such high iowait when the other computing resources are barely utilized.

Note: Running MongoDB 3.0.10 on Ubuntu 14.04
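One thing to check, sketched below assuming the mongocxx driver and hypothetical field names, is whether a single compound index covers both the filter and the sort attribute, so the sort is served from the index rather than from documents fetched off disk:

    #include <bsoncxx/builder/basic/document.hpp>
    #include <bsoncxx/builder/basic/kvp.hpp>
    #include <mongocxx/client.hpp>
    #include <mongocxx/instance.hpp>
    #include <mongocxx/options/find.hpp>
    #include <mongocxx/uri.hpp>

    int main() {
        using bsoncxx::builder::basic::kvp;
        using bsoncxx::builder::basic::make_document;

        mongocxx::instance inst{};
        mongocxx::client client{mongocxx::uri{}};
        auto coll = client["mydb"]["events"];  // hypothetical names

        // compound index: equality field first, then the attribute used for sorting
        coll.create_index(make_document(kvp("userId", 1), kvp("createdAt", -1)));

        mongocxx::options::find opts;
        opts.sort(make_document(kvp("createdAt", -1)));  // sort order matches the index
        for (auto&& doc : coll.find(make_document(kvp("userId", 42)), opts)) {
            (void)doc;  // consume the cursor
        }
    }

If the planner cannot satisfy the sort from an index, it has to fetch and sort the documents themselves, which could explain disk reads well beyond the 4 GB of indexes.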

+2 votes

What is CouchDB? And which one is better, MongoDB or CouchDB? Please explain with an example.

0 votes

In our solution we have asynchronous processes with state persistence (aka sagas), where each saga has a single state document persisted in MongoDB. Depending on the saga type, this document may see a high level of concurrent updates, so we simply introduced optimistic concurrency and retry the handling of incoming messages. But as a result we see a very high rate of retries, which causes significant performance degradation.

In the past, when we stored state in an RDBMS (MSSQL, Oracle, etc.), we used pessimistic locking of the saga document/record. I know that at the moment there is no such feature in MongoDB, but maybe it would be useful/possible to have it for single-document updates? I believe this feature could simplify many solutions.

Are there any other concurrency handling options? Suggestions?
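For reference, the retry pattern described above usually looks something like this; a minimal sketch, assuming the mongocxx driver and a hypothetical saga document carrying a version field:

    #include <bsoncxx/builder/basic/document.hpp>
    #include <bsoncxx/builder/basic/kvp.hpp>
    #include <mongocxx/client.hpp>
    #include <mongocxx/instance.hpp>
    #include <mongocxx/uri.hpp>

    int main() {
        using bsoncxx::builder::basic::kvp;
        using bsoncxx::builder::basic::make_document;

        mongocxx::instance inst{};
        mongocxx::client client{mongocxx::uri{}};
        auto sagas = client["mydb"]["sagas"];  // hypothetical names

        const int max_retries = 5;
        for (int attempt = 0; attempt < max_retries; ++attempt) {
            // read the current state and remember its version
            auto current = sagas.find_one(make_document(kvp("_id", "saga-42")));
            if (!current) break;
            int version = current->view()["version"].get_int32();

            // ... compute the new state from 'current' here ...

            // the update only applies if nobody has bumped the version in between
            auto updated = sagas.find_one_and_update(
                make_document(kvp("_id", "saga-42"), kvp("version", version)),
                make_document(kvp("$set", make_document(kvp("state", "done"))),
                              kvp("$inc", make_document(kvp("version", 1)))));
            if (updated) break;  // success; otherwise loop and retry on fresh state
        }
    }

Since every update is keyed by _id, each attempt is a single indexed write; the cost is the retry loop itself, which is exactly the degradation described above.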

+3 votes

I issued a high number of queries to one database, and somehow it impacted queries to another database. I am using MongoDB 2.6.8. Shouldn't this version of MongoDB have database-level locking (introduced in 2.2)?

Is this a disk I/O problem?
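To see whether lock contention is actually involved, the lock counters from serverStatus can be inspected; a minimal sketch, assuming the mongocxx driver (the same data is available from the shell via db.serverStatus()):

    #include <iostream>

    #include <bsoncxx/builder/basic/document.hpp>
    #include <bsoncxx/builder/basic/kvp.hpp>
    #include <bsoncxx/json.hpp>
    #include <mongocxx/client.hpp>
    #include <mongocxx/instance.hpp>
    #include <mongocxx/uri.hpp>

    int main() {
        using bsoncxx::builder::basic::kvp;
        using bsoncxx::builder::basic::make_document;

        mongocxx::instance inst{};
        mongocxx::client client{mongocxx::uri{}};

        // serverStatus exposes lock and lock-wait counters (per database on 2.6)
        auto status = client["admin"].run_command(make_document(kvp("serverStatus", 1)));
        std::cout << bsoncxx::to_json(status.view()["locks"].get_document().value) << "\n";
    }

If the lock-wait counters stay flat for the second database while the first one is being hammered, the cross-database impact is more likely shared disk or memory pressure than locking.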

...