I have a large table with millions of rows (geospatial data) which is getting updated several times per day.
Selects by lat/lon are very fast (0.02-0.04 seconds) most of the time, but during updates the same queries can take as long as 5 seconds. Profiling shows them hanging in either the “sending data” or the “statistics” state. Of course there is a lot of I/O going on during these mass updates, which are also quite time-consuming because every row needs to be updated.
The storage engine is InnoDB, though I had to keep the buffer pool size at 128M (a larger buffer pool appears to put more strain on the server during update/cleanup operations, even resulting in failed or stalled connections).
Any suggestions for tuning? I wondered if maybe there is a way to specifically limit I/O write operations or prioritize Selects over Updates/Inserts (with the latter ones not being that time-critical).
+-------+----------+------+-----+---------+-------+
| Field | Type     | Null | Key | Default | Extra |
+-------+----------+------+-----+---------+-------+
| vt    | datetime | NO   | PRI | NULL    |       |
| lat   | double   | NO   | PRI | NULL    |       |
| lon   | double   | NO   | PRI | NULL    |       |
[lots of data fields]
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
max_connections=100
thread_cache_size=1
long_query_time=1
open_files_limit=8192
table_cache=3000
table_open_cache=4000
table_definition_cache=3000
query_cache_limit=256k
query_cache_size=100M
query_cache_type=1
query_cache_min_res_unit=2k
thread_cache_size=5
read_buffer_size=1M
sort_buffer_size=2M
join_buffer_size=2M
max_heap_table_size=4G
key_buffer_size=3G
tmp_table_size=4G
innodb_file_per_table=1
You can use INSERT DELAYED to avoid blocking. Be aware, though, that INSERT DELAYED works only with MyISAM, MEMORY, ARCHIVE, and BLACKHOLE tables (so it will not help an InnoDB table directly), and that it was deprecated in MySQL 5.6 and removed in MySQL 5.7. From the manual:
When a client uses INSERT DELAYED, it gets an okay from the server at
once, and the row is queued to be inserted when the table is not in
use by any other thread.
Another major benefit of using INSERT DELAYED is that inserts from
many clients are bundled together and written in one block. This is
much faster than performing many separate inserts.
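A minimal sketch of the syntax, assuming a hypothetical MyISAM table and column names (again, this will not work on the InnoDB table described in the question):

```sql
-- Requires a MyISAM/MEMORY/ARCHIVE table; removed in MySQL 5.7.
-- Table and column names here are illustrative only.
INSERT DELAYED INTO observations (vt, lat, lon)
VALUES ('2015-06-01 12:00:00', 52.52, 13.40);
```

The client gets an immediate OK and the row is queued server-side, as described above.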
A few other possibly useful optimizations:
Looking at your configuration, I think your query cache is set quite large (though it might make sense in your specific case, I don’t know). A large query cache can degrade performance under write load due to lock contention: the cache is locked while it’s updated, and every write to a table invalidates all cached results for that table.
I can see how the query cache might make sense in your use case if you’re not writing to the database all that frequently. Note, however, that the query cache is deprecated as of MySQL 5.7.20 and removed in MySQL 8.0. I don’t know what its fate is on the MariaDB side.
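If the cache does turn out to hurt under your write load, it can be disabled entirely. A sketch for the [mysqld] section (verify against your MySQL version, since these variables are gone in 8.0):

```
# Disables the query cache entirely
query_cache_type=0
query_cache_size=0
```

Setting the size to 0 alone is not enough; with query_cache_type still on, MySQL continues to take the query cache mutex.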
Suggestions to consider for the [mysqld] section of your my.cnf/my.ini:
key_buffer_size=5M            # from 3G; minimal MyISAM data in use
query_cache_size=64M          # from 100M; reduces QC overhead when tables change
tmp_table_size=320M           # from 4G; about 1% of available RAM
max_heap_table_size=320M      # from 4G; should always match tmp_table_size
innodb_buffer_pool_size=8G    # from the 128M default; about 25% of RAM
read_buffer_size=128K         # from 1M; 128K should perform better here
join_buffer_size=128K         # from 2M; 128K should perform better here
thread_cache_size=50          # from 1, AND remove the SECOND thread_cache_size line
query_cache_min_res_unit=512  # from 2K; increases QC capacity for small results
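Most of these require a server restart to take effect. One exception worth knowing: on MySQL 5.7.5 and later the buffer pool can be resized online (on earlier versions, change my.cnf and restart):

```sql
-- MySQL 5.7.5+ only: resize the InnoDB buffer pool without a restart
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;
```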
After 3 days of uptime, please post (here or at pastebin.com):
SHOW GLOBAL STATUS;
SHOW GLOBAL VARIABLES;
mysqltuner.pl results, for a more complete analysis and further suggestions.
thread_cache_size=5 is TOTALLY inadequate for max_connections=100; see the reference manual.
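A quick way to check whether the thread cache is sized adequately is to compare two status counters; if Threads_created grows to a significant fraction of Connections (say, more than a few percent), the cache is too small:

```sql
-- Threads_created / Connections approximates the thread cache miss rate
SHOW GLOBAL STATUS LIKE 'Threads_created';
SHOW GLOBAL STATUS LIKE 'Connections';
```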