Question:
Server Characteristics
- Total system RAM: 8GB (running MySQL plus other services, i.e. not dedicated to MySQL)
- Number of CPU Cores: 6
- I have data in the db amounting to about 2GB
- I have InnoDB Buffer Pool Size set to 4GB
Which is better:
- Innodb Buffer Pool Instances set to 1?
- Innodb Buffer Pool Instances set to 2 (2GB in each)?
- Innodb Buffer Pool Instances set to 4 (1GB in each)?
- Innodb Buffer Pool Instances set to 8 (the default setting)?
I’m thus not sure how to reason about Buffer Pool Instances, nor about the whole “use instances or suffer OS swapping when the InnoDB Buffer Pool Size is that large” advice.
Answer:
Going back to MySQL 5.5, I used to think about this same thing.
What I learned over those years was the following: If the Buffer Pool was bigger than half the installed RAM and innodb_buffer_pool_instances was 1 (default for 5.5), the threat of swapping was always imminent.
I discussed this before: Is there a rule of thumb regarding the size and number of a buffer pool instances? In that post, I mentioned an example of a client who had 192GB RAM on the server with a 162GB Buffer Pool. When innodb_buffer_pool_instances was 1, swapping happened. When I set innodb_buffer_pool_instances to 2, things got way better.
In your case, since the Buffer Pool is exactly half of RAM, a value of 1 may be OK, but I would not chance it: I would set it to 2.
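For reference, a minimal my.cnf sketch of that recommendation (the 4GB size comes from the question; treat these as starting values to test, not a definitive tuning):

[mysqld]
# Half of the 8GB of system RAM, as in the question
innodb_buffer_pool_size      = 4G
# Two instances of 2GB each
innodb_buffer_pool_instances = 2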
Since MySQL 5.6 has a default of 8, you shouldn’t have to think about it anymore.
I will say this: akuzminsky’s answer is the more principled one. My answer is just shooting from the hip based on past experiences (good and bad).
The number of buffer pool instances should be increased to avoid buffer pool mutex contention.
With an 8GB buffer pool I doubt you’ll ever see buffer pool mutex contention.
UPDATE 0:
I mention an 8GB buffer pool in the answer, while in the original question 8GB was the total memory. Of course, the buffer pool must be smaller than 8GB. 4GB sounds like a good start, but make sure no swapping happens.
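One way to keep an eye on swapping while you test that size (Linux commands; the mysqld process name is an assumption):

# Non-zero "si"/"so" columns mean the box is actively swapping.
vmstat 5
# Swap used by the mysqld process itself (from Linux /proc).
grep VmSwap /proc/$(pidof mysqld)/status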
UPDATE 1:
// from Yasufumi’s slides (in recent MySQL versions output may slightly differ)
To determine whether there is contention on the buffer pool mutex, collect a dozen SHOW ENGINE INNODB STATUS samples during peak time.
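A minimal sketch of that collection step (the <prefix>.innodb file name is chosen to match what the aggregation script below expects; login credentials and the sampling interval are assumptions):

#!/bin/sh
# Usage: ./collect.sh <prefix>
# Takes 12 samples of SHOW ENGINE INNODB STATUS, one minute apart,
# and appends them to <prefix>.innodb for the aggregation script.
for i in $(seq 1 12); do
    mysql -e 'SHOW ENGINE INNODB STATUS\G' >> "$1".innodb
    sleep 60
done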
Then aggregate them using this shell snippet:
#!/bin/sh
cat $1.innodb | grep "Mutex at " | cut -d"," -f1 | sort | uniq -c > /tmp/tmp1.txt
cat $1.innodb | grep "lock on " | cut -d"-" -f2- | sort | uniq -c > /tmp/tmp2.txt
cat /tmp/tmp1.txt /tmp/tmp2.txt | sort -n > $1.contention
rm /tmp/tmp1.txt /tmp/tmp2.txt
which gives output like this:
.....
4 lock on RW-latch at 0x7fb86b2c9138 created in file dict/dict0dict.c line 1356
6 lock on RW-latch at 0x7fb86b2c4138 created in file dict/dict0dict.c line 1356
12 lock on RW-latch at 0x7fb86b2d9538 created in file dict/dict0dict.c line 1356
20 lock on RW-latch at 0x7fb86b2db138 created in file dict/dict0dict.c line 1356
22 Mutex at 0x7fb86b28f0e0 created file btr/btr0sea.c line 139
30 lock on RW-latch at 0x7fb86b2ba938 created in file dict/dict0dict.c line 1356
36 lock on RW-latch at 0x7fb86b2bad38 created in file dict/dict0dict.c line 1356
71 Mutex at 0x7fb86b28ecb8 created file buf/buf0buf.c line 597
164 lock on RW-latch at 0x7fb86b28f0b8 created in file btr/btr0sea.c line 139
If you see a high count of buffer pool mutex waits, then it’s time to consider multiple buffer pool instances. This contention is unlikely to happen on a buffer pool smaller than ~48GB.
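In that output, the buffer pool mutexes are the ones created in buf/buf0buf.c (dict0dict.c is the data dictionary and btr0sea.c the adaptive hash index), so a quick filter shows only the buffer pool waits (the file name is a placeholder for the prefix used with the scripts above):

grep "buf/buf0buf.c" peak.contention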
Set “swappiness” to 1 if your OS supports it. The problem may also be an overly aggressive OOM killer.
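On Linux that looks like this (a sketch; the persistent config file location may vary by distribution):

# Apply immediately
sysctl -w vm.swappiness=1
# Persist across reboots
echo "vm.swappiness = 1" >> /etc/sysctl.conf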
I would suggest setting innodb_buffer_pool_instances to match the maximum number of MySQL threads that you want to run simultaneously. I use the number of cores.
I also set innodb_read_io_threads and innodb_write_io_threads to match this number.
If innodb_buffer_pool_instances is too low, your threads are likely to get stuck in semaphore waits. This makes both the CPU and I/O appear to be idle even though the system should be busy, and your application latency will go through the roof.
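A my.cnf sketch of that approach for the 6-core server in the question (treat these as starting values to verify under load, not a definitive tuning):

[mysqld]
# One buffer pool instance per core (6 cores in the question)
innodb_buffer_pool_instances = 6
# Match the background I/O thread pools to the same number
innodb_read_io_threads       = 6
innodb_write_io_threads      = 6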