I’m new to databases, and I need to use MySQL to build a web server.
In short, I have a web framework developed in Python that reads/writes data from/to MySQL.
For now I don’t know how to handle multithreading.
Let’s say there are many users writing their data into one table. I will use a thread pool to process these requests, which means there could be multiple inserts happening at the same time.
My questions are:

1. If I have set concurrent_insert=2, will multiple inserts still lock the table, giving very bad performance (all inserts handled one by one)? Or can MySQL itself handle multiple concurrent inserts properly, so performance stays reasonable?
2. For multiple concurrent inserts, is there any difference between sharing one connection object and giving each insert its own connection object?
Seriously, MySQL is designed to provide concurrency as the default. And it handles a wide variety of “conflicts” that could happen.
MySQL supports several “Engines”. Unless you are using an old version, the default will be ENGINE=InnoDB. concurrent_insert relates to the old, deprecated ENGINE=MyISAM; don’t use that engine.
There is one thing you should always do: check for errors after performing SQL statements. (In Python, that might be done via try…except.) If nothing else, abort with an error message so that errors won’t go unnoticed; silent failures can be devilish to debug.
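As an illustration, here is a minimal sketch of that check-for-errors pattern, assuming a DB-API-style connection (e.g. from PyMySQL or mysql-connector-python); the table and column names here are made up for the example:

```python
import sys

# Hypothetical table/columns, just for the example.
INSERT_SQL = "INSERT INTO messages (user_id, body) VALUES (%s, %s)"

def insert_message(conn, user_id, body):
    """Insert one row; on any error, roll back, report, and re-raise
    so the failure never goes unnoticed."""
    try:
        with conn.cursor() as cur:
            cur.execute(INSERT_SQL, (user_id, body))
        conn.commit()
    except Exception as exc:
        conn.rollback()
        print("INSERT failed for user %s: %s" % (user_id, exc), file=sys.stderr)
        raise
```

The key point is that every code path either commits or rolls back, and no exception is swallowed silently.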
Even “thread pooling” is not required for a fully functioning website. (It may be important if your website becomes ridiculously popular.)
Normal web usage is: one process is started for one user wanting one web page. That process should have one connection to MySQL. Period. (Does this address your “sharing one connection object” question?) When another user brings up the same web page, the webserver launches another Python process, which makes its own separate connection to MySQL.
The MyISAM engine is old and outdated and should not be used anymore. You should use the InnoDB transactional storage engine instead.
InnoDB does not need a setting for concurrent inserts; it has a different mechanism called the “transaction isolation level”. InnoDB does not lock the whole table on INSERT / UPDATE / DELETE by default, which means other queries can read the table at the same time (even while rows are being modified). The isolation level of a transaction determines how other transactions’ changes affect the data it reads.
- READ UNCOMMITTED (a “dirty read”) lets you see changes from other transactions before they are committed, so the data you read may later be rolled back
- READ COMMITTED lets you see data only after other transactions have committed it
- REPEATABLE READ (InnoDB’s default) lets you read the same data unchanged multiple times within one transaction (e.g. in subqueries)
- SERIALIZABLE makes transactions behave as if they ran one after another; reads may block until concurrent transactions touching the same data complete
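For reference, the isolation level can be set per session or for just the next transaction in MySQL, for example:

```sql
-- Applies to subsequent transactions in this session:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Or only for the next transaction:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
-- ... reads here see a consistent snapshot ...
COMMIT;
```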