MySQL – Got an error reading communication packets – high open tables, server goes offline

Question :

I am currently testing a product with a couple of friends before releasing it to the public. There are now ten of us using the application daily. Yesterday we noticed a performance problem on the MySQL server, which currently runs on the smallest DigitalOcean droplet: 1 GB memory, 1 vCPU, 25 GB SSD disk and 1 TB transfer. Our plan is to upgrade as soon as we go public, but we were hoping this would be sufficient for testing purposes.

The first thing I looked at was the MySQL error.log file. The output can be found here. This output was captured right after the server crashed for the first time, and I still don’t know why.

I also had a look at the config file, which can be found on this link.

A couple more interesting things:

mysql> show global status like '%onn%';
+-----------------------------------------------+---------------------+
| Variable_name                                 | Value               |
+-----------------------------------------------+---------------------+
| Aborted_connects                              | 19                  |
| Connection_errors_accept                      | 0                   |
| Connection_errors_internal                    | 0                   |
| Connection_errors_max_connections             | 0                   |
| Connection_errors_peer_address                | 0                   |
| Connection_errors_select                      | 0                   |
| Connection_errors_tcpwrap                     | 0                   |
| Connections                                   | 121                 |
| Locked_connects                               | 0                   |
| Max_used_connections                          | 10                  |
| Max_used_connections_time                     | 2019-06-27 16:52:29 |
| Performance_schema_session_connect_attrs_lost | 0                   |
| Ssl_client_connects                           | 0                   |
| Ssl_connect_renegotiates                      | 0                   |
| Ssl_finished_connects                         | 0                   |
| Threads_connected                             | 3                   |
+-----------------------------------------------+---------------------+
16 rows in set (0.01 sec)

Note: this was captured right after I restarted the server.

mysql> SHOW VARIABLES LIKE 'table%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| table_definition_cache     | 1400  |
| table_open_cache           | 2000  |
| table_open_cache_instances | 16    |
+----------------------------+-------+
3 rows in set (0.01 sec)

Is there anything I can or should do to increase performance? Is there something I am doing wrong? This is the first time I am handling a MySQL server in combination with NodeJS; my previous experience with MySQL was only through webhosting. If you need anything else, feel free to ask, and I will be happy to provide anything I can find.

Per request from Wilson, I have uploaded a couple more files to pastebin which can be found here:

My specific concerns are, of course, the memory/CPU usage that caused the server to crash twice with almost no users (five at most), how I can prevent this and/or allocate the right resources to the right processes, and also the timeout error in the error.log file. If there are other suggestions to optimize my system, I would be happy to hear them. The server runs on a DigitalOcean droplet with 1 GB memory, 1 vCPU, 25 GB SSD disk and 1 TB transfer, dedicated to MySQL, so nothing else runs on it. I will upgrade to more memory/CPU/disk/transfer when we scale.

At this point I am a little afraid, because I suspect a lot of changes will be needed to make this small server performant enough. Looking forward to your suggestions.

Answer :

First of all, you need to tune your MySQL server so that it doesn't use all of your server's memory and die. A good tool for that is MySQLTuner, which can provide you with good recommendations.

You should also perform some load testing to see how your application will behave in a real-world scenario. There are several tools for this; one of them is JMeter from the Apache Software Foundation, which is free. You can test, for example, what happens when 100 users visit and use your application concurrently, see what breaks, and fix it.
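Since the droplet runs only MySQL, you can also load-test the database layer directly. A sketch using mysqlslap, the benchmark client that ships with MySQL (host, user and concurrency values below are placeholders; point it at a test instance, never production data):

```shell
# Simulate 100 concurrent clients running auto-generated queries, 10 rounds
# (placeholder credentials; adjust --concurrency to match your expected load)
mysqlslap --host=127.0.0.1 --user=root --password \
  --concurrency=100 --iterations=10 \
  --auto-generate-sql --auto-generate-sql-load-type=mixed
```

This exercises only MySQL, so combined with an application-level JMeter run it helps you tell whether a bottleneck is in the database or in the Node application.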

Rate Per Second = RPS. Suggestions to consider for your my.cnf [mysqld] section:

innodb_io_capacity=1900  # from 200, to use the additional IOPS your SSD offers
innodb_buffer_pool_size=256M  # from 512M, to conserve RAM; your present ~1.2M of table data fits easily
innodb_flush_method=O_DIRECT  # from the default, the typical Linux flush method, avoids double buffering
query_cache_size=0  # from 128M, to conserve RAM; the QC is OFF, keep it OFF, please
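On a 1 GB droplet dedicated to MySQL, a few more [mysqld] settings are worth reviewing. The fragment below is a hedged starting point for a very small instance, not measured values for your workload; watch memory use after each change:

```ini
# Sketch for a 1 GB dedicated MySQL droplet -- assumed starting values
max_connections=50        # from the default 151; ten testers stay far below 50
performance_schema=OFF    # can free a substantial amount of RAM; re-enable when diagnosing
table_open_cache=1000     # from 2000; fewer cached table handlers on a small instance
```

Lowering max_connections also caps the worst-case total of per-connection buffers, which is what typically pushes a 1 GB box into swap or an OOM kill.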

From a Linux Secure Shell (SSH) command prompt,

ulimit -n 65536

would raise your Open Files limit above the default of 1024. This is a dynamic request that lasts only until the next reboot.

To make this change persistent across OS restarts, review this url.

DO NOT SET your Open File limit to 500,000 or 1 Million as in the example at this url.
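For reference, persistence is usually configured in two places on a systemd-based Linux (paths below assume Ubuntu as on a typical DigitalOcean droplet): /etc/security/limits.conf for login sessions, and a systemd override for the mysqld service itself, since systemd units ignore limits.conf:

```ini
# /etc/security/limits.conf -- applies to login sessions for the mysql user
mysql  soft  nofile  65536
mysql  hard  nofile  65536

# /etc/systemd/system/mysql.service.d/override.conf -- applies to the service
# (create with: systemctl edit mysql, then systemctl daemon-reload && systemctl restart mysql)
[Service]
LimitNOFILE=65536
```

A moderate value like 65536 is plenty here, in line with the warning above against 500,000 or 1 million.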

Please share the code that implements your CONNECT, processing, and DISCONNECT (CLOSE) logic, so the cause of the 'got an error reading communication packets' message can be analyzed.
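As background while you gather that code: "Got an error reading communication packets" is usually logged when a client goes away without a clean close, for example a Node process killed mid-query, an idle connection dropped after wait_timeout, or a packet larger than max_allowed_packet. These standard SHOW commands (safe to run, read-only) expose the related counters and limits:

```sql
-- Connections that ended without a proper close (client side)
SHOW GLOBAL STATUS LIKE 'Aborted_clients';
-- Failed connection attempts (bad credentials, handshake timeouts)
SHOW GLOBAL STATUS LIKE 'Aborted_connects';

-- Idle connections are dropped after this many seconds (default 28800)
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';

-- Packets larger than this are rejected and abort the connection
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
```

If Aborted_clients climbs in step with the error.log messages, the fix is usually on the Node side: close or release pooled connections explicitly rather than letting them time out.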

Disclaimer: I am the content author of the website mentioned in My Profile, Network profile, where we have free Utility Scripts.
