Very slow INSERT and UPDATE on 20-core 250 GB RAM dedicated MySQL 5.5 server on Ubuntu [closed]

Question :

Most of the data is in tbl_user_ads. It's a huge table.

Many concurrent processes (over 1000) try to select, insert or update this table.

The CPU utilization is just around 2%.

As I increase the number of processes which access this table the performance drops further.

I am getting lots of deadlocks.

Thanks for your time and help.

my.cnf customizations:

innodb_buffer_pool_size=220G
innodb_buffer_pool_instances = 4
innodb_flush_log_at_trx_commit = 0
innodb_log_buffer_size=2048M
innodb_log_file_size = 1024M
innodb_flush_method = O_DIRECT
innodb_write_io_threads = 16
innodb_read_io_threads = 16
innodb_locks_unsafe_for_binlog = 1
innodb_lock_wait_timeout = 100

Answer :

EDIT: after reading through it, I noticed that you're limiting it to 16 threads:
“innodb_read_io_threads = 16”
Your server has 20 CPU cores, so you will only use a very small amount of CPU. Try adding more threads and report back whether it makes any difference; other than that, I can't think of anything else!

It may be due to server load. The fact that you have 20 cores may be useless if the program can't multithread. What are your specs and general overall server load? Other than that, I can't really think of anything else that may be causing this.

You can set innodb_buffer_pool_instances based on the Linux command numactl --hardware, which reports the number of NUMA nodes; set the value to that number. For more, read here.
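A minimal sketch of that check. On the real server you would run `numactl --hardware` directly and read its first line; here a captured sample of that line is used so the parsing stands alone (the node count in the sample is hypothetical):

```shell
# Derive a value for innodb_buffer_pool_instances from the NUMA node
# count. numactl --hardware starts with a line like:
#   available: 4 nodes (0-3)
sample_output='available: 4 nodes (0-3)'   # from: numactl --hardware | head -1

# The second field of the "available:" line is the node count.
nodes=$(echo "$sample_output" | awk '/^available:/ {print $2}')

echo "NUMA nodes: $nodes"
echo "suggested: innodb_buffer_pool_instances = $nodes"
```

On a live box, replace the sample with `nodes=$(numactl --hardware | awk '/^available:/ {print $2}')`.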

You can set innodb_flush_log_at_trx_commit = 2 if it is a blog, stats site, or small e-commerce site (on the order of ~100 purchases a day), etc.

Set innodb_flush_log_at_trx_commit = 1 if you have a lot of customers or you need to handle money transactions, as a bank does. In that case you should split your dataflow between several servers to get both speed and safety.

You can set it to 2 because it gives roughly 75x faster write speed, and it loses data ONLY if the hardware fails. But it has its own trade-offs. Read this:

The innodb_flush_log_at_trx_commit setting works as follows:

If the value of innodb_flush_log_at_trx_commit is 0, the log buffer is
written out to the log file once per second and the flush to disk
operation is performed on the log file, but nothing is done at a
transaction commit.

When the value is 1 (the default), the log buffer is written out to
the log file at each transaction commit and the flush to disk
operation is performed on the log file.

When the value is 2, the log buffer is written out to the file at each
commit, but the flush to disk operation is not performed on it.
However, the flushing on the log file takes place once per second also
when the value is 2. Note that the once-per-second flushing is not
100% guaranteed to happen every second, due to process scheduling
issues.

The default value of 1 is required for full ACID compliance. You can
achieve better performance by setting the value different from 1, but
then you can lose up to one second worth of transactions in a crash.
With a value of 0, any mysqld process crash can erase the last second
of transactions. With a value of 2, only an operating system crash or
a power outage can erase the last second of transactions. InnoDB’s
crash recovery works regardless of the value.

In my opinion, setting innodb_flush_log_at_trx_commit to 2 should not be
an issue, but 1 is the safest.
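Concretely, the trade-off above comes down to a single line in my.cnf (1 is the safe default from the quoted docs; 2 is the faster alternative discussed):

```ini
[mysqld]
# Safest (full ACID): write and flush the log at every commit.
innodb_flush_log_at_trx_commit = 1

# Faster alternative: write at each commit, flush ~once per second;
# an OS crash or power loss can lose up to the last second of commits.
# innodb_flush_log_at_trx_commit = 2
```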

  • Batch INSERTs, if practical. That is use the multi-row insert syntax.

  • (as mentioned) innodb_flush_log_at_trx_commit = 2 is better than 1, but it should not be better than the ‘0’ you have.

  • A well-tuned dataset won't use many CPUs.

  • innodb_io_threads has to do with I/O; the number of cores is not very relevant.

  • I doubt if there are 1000 simultaneous threads — Check Threads_running or SHOW PROCESSLIST to see how much concurrency there really is.

  • What’s your QPS (Questions / Uptime)?

  • Are they all hitting the same table? Let’s see SHOW CREATE TABLE and some sample statements. Let’s see SHOW ENGINE INNODB STATUS to see one of the deadlocks.

  • How big is the table(s)? Has the system been running long enough for the table(s) to be fully cached now? (It takes a long time to fill 220GB.) Are you using SSDs?

  • Lots of writes? Is the Query cache on? The processlist may say “waiting for query cache lock”. Turn it off: both query_cache_type = OFF and query_cache_size = 0.
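The checks and the batch INSERT suggested above can be sketched in SQL. This is a sketch, not a prescription; tbl_user_ads is from the question, but its column names (user_id, ad_id, status) are hypothetical:

```sql
-- Gauge real concurrency and throughput (QPS = Questions / Uptime).
SHOW GLOBAL STATUS LIKE 'Threads_running';
SHOW GLOBAL STATUS WHERE Variable_name IN ('Questions', 'Uptime');
SHOW PROCESSLIST;

-- Inspect the latest deadlock and the table definition.
SHOW ENGINE INNODB STATUS;
SHOW CREATE TABLE tbl_user_ads;

-- Batch INSERTs: one multi-row statement instead of many
-- single-row statements (column names are hypothetical).
INSERT INTO tbl_user_ads (user_id, ad_id, status)
VALUES (1, 101, 'active'),
       (2, 102, 'active'),
       (3, 103, 'paused');
```

To turn the query cache off persistently, put query_cache_type = OFF and query_cache_size = 0 in my.cnf and restart.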
