Question :
I have a large table (~4 million rows, ~100 fields) which I need to split into a few smaller tables by type so it doesn’t become unmanageable as it grows.
Each entry has an ID which is an automatically generated primary key. These keys need to be preserved when the data is moved to new tables. Additionally, new unique keys will need to be generated; and these will need to be unique across all of the smaller tables, rather than simply within each table itself.
One way I thought of doing this would be to transfer the ID into a new field (or into a non-automatically-generated primary key field?) when moving the data into the new tables. Once the data is transferred, I could strip the original table down to just its primary key column so it can continue automatically generating keys for any additional data; i.e. before inserting data into one of the new tables, get a new primary key from the original table.
However, that seems rather clumsy!
Also, it leaves the question: if a user refers to an ID without knowing what type the data is (i.e. which table it will be found in), how could they be ‘directed’ to the correct table?
I’m sure there must be a better way to do this?
Answer :
When a primary key has auto_increment, a new ID is generated only if you insert a NULL value. If you set ID=4 in your INSERT, the ID will be 4, so you will not lose your IDs during your “move” operation.
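For example, moving one type’s rows into its new table while keeping the original IDs could look like this (big_table, child_table_a and the column list are placeholders for your own schema):
INSERT INTO child_table_a (id, name, value) -- the id is given explicitly, so auto_increment is not used
    SELECT id, name, value FROM big_table WHERE type = 'A';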
MySQL doesn’t have the “SEQUENCE” notion that Oracle has, so your “global ID” problem is not so easy to solve.
Maybe you can try something like this (but it will add complications for a table of just 4 million rows).
Create a table used to generate your “Global ID”, with a single auto_incremented int field:
CREATE TABLE test.sequence_table (next_id int primary key auto_increment);
When you want to insert a new row into your child table:
Solution 1: With a SELECT on information_schema
BEGIN; -- Start a new Transaction to ensure consistency
INSERT INTO test.sequence_table values (NULL); -- Generate a new ID
SELECT @next_ID:=(auto_increment - 1) FROM information_schema.tables WHERE table_schema="test" AND table_name="sequence_table"; -- Here I use a MySQL Variable but you can store it in PHP or whatever
INSERT INTO child_table values (null, @next_ID, "Max", "SQL"); -- Use your variable
COMMIT; -- Wonderful :)
Edit after ypercube's comment:
Solution 2: With LAST_INSERT_ID()
BEGIN; -- Start a new Transaction to ensure consistency
INSERT INTO test.sequence_table values (NULL); -- Generate a new ID
SELECT @next_ID:=LAST_INSERT_ID(); -- Use of the MySQL function LAST_INSERT_ID()
INSERT INTO child_table values (null, @next_ID, "Max", "SQL"); -- Use your variable
COMMIT; -- Wonderful :)
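For reference, both solutions assume a child table shaped roughly like this (a hypothetical definition; only the first two columns matter for the technique, the last two simply match the example values above):
CREATE TABLE test.child_table (
    id int primary key auto_increment, -- local key, filled by the NULL in the INSERT
    global_id int not null unique,     -- the “global ID” taken from sequence_table
    name varchar(50),                  -- placeholder column for the “Max” value
    category varchar(50)               -- placeholder column for the “SQL” value
);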
If you are not going with partitioning (which would be transparent to the application), you need to:
- create 2 tables to hold the values “by type”
- create one more table to hold the unique auto-generated keys
- create a view for the application that will hide all of the above; the good news is that you may use an INNER JOIN between the tables, because you always have a record in one or the other
- since MySQL is, I believe, still missing INSTEAD OF triggers on views, you will need stored procedures for the DML operations
In case it was not clear: you create a new key in the key table (second point) and use that value to insert records into the tables split by type (first point).
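A minimal sketch of that setup, assuming two type tables child_table_a and child_table_b that both store the global ID, plus the sequence_table from above (all names here are hypothetical):
CREATE VIEW all_items AS
    SELECT 'A' AS item_type, global_id, name FROM child_table_a
    UNION ALL
    SELECT 'B' AS item_type, global_id, name FROM child_table_b;

DELIMITER //
CREATE PROCEDURE insert_item_a (IN p_name varchar(50))
BEGIN
    DECLARE new_id INT;
    INSERT INTO test.sequence_table VALUES (NULL); -- generate the new global key (second point)
    SET new_id = LAST_INSERT_ID();
    INSERT INTO child_table_a (global_id, name) VALUES (new_id, p_name); -- store the row in its type table (first point)
END//
DELIMITER ;
A user who only knows the ID can then ask the view which table the row lives in: SELECT item_type FROM all_items WHERE global_id = 42;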