Question :
I recently increased my General Purpose SSD storage from 1000GB to 6000GB by accident. Now I’m trying to bring the storage back down to the original 1000GB, but I can’t do it through the console.
I also tried taking a snapshot and restoring it with a smaller allocated storage, but once the snapshot is restored it goes back to 6000GB.
My next option is to back up the 900GB of data in my current DB and import it into a new RDS instance using MySQL Workbench’s SQL data import (very slow).
Is there something else I can do to bring the size down, or a faster way to export the data from the oversized RDS instance and import it into a new, smaller instance?
Answer :
First, you should not try to squeeze 900GB of data into only 1000GB of disk. That leaves no room for maintenance. I recommend 2×900GB, or at least 900GB plus the size of your largest table.
I don’t happen to know how RDS performs size changes, but it may be able to take shortcuts when growing that are not available when shrinking. I suspect it does as much as it can with snapshots of the disk.
Looking at migration in general, there are two main ways:
- LVM (or similar) snapshot of the disk (as RDS probably does)
- Dump the data to a file. The dump contains the CREATE and INSERT statements sufficient to rebuild the data from scratch. A dump of 900GB of data may come out larger or smaller than 900GB, depending on datatypes, etc. Such a dump can be done across the network, from a tight machine (eg 1000GB with only 100GB to spare) to a machine with more room.
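As a sketch of that dump-across-the-network approach: if you pipe the dump straight into the new instance, the data never has to land on the tight local disk at all. The endpoints, user, and database name below are placeholders, and it assumes the MySQL client tools are installed on a machine that can reach both instances:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder endpoints, credentials, and database name -- substitute your own.
SRC=old-instance.xxxxxx.us-east-1.rds.amazonaws.com
DST=new-instance.xxxxxx.us-east-1.rds.amazonaws.com
DB=mydb

# --single-transaction: consistent InnoDB snapshot without locking tables
# --quick: stream rows instead of buffering whole tables in memory
# Piping mysqldump into mysql moves the data without an intermediate file.
mysqldump --host="$SRC" --user=admin --password="$SRC_PASSWORD" \
          --single-transaction --quick "$DB" \
  | mysql --host="$DST" --user=admin --password="$DST_PASSWORD" "$DB"
```

This is still bounded by network and insert speed, but it avoids both the disk space and the extra Workbench round trip.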
Bottom line: aim for a 2000GB instance. If that can’t be done easily via the RDS tools, then I hope someone more versed in RDS will chime in.
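Since a new instance has to be created from scratch anyway, it can be provisioned at the suggested size up front. A sketch with the AWS CLI (the identifier, instance class, and credentials are placeholders):

```shell
# Provision the replacement instance at 2000GB from the start.
# All identifiers and credentials below are placeholders.
aws rds create-db-instance \
  --db-instance-identifier mydb-right-sized \
  --db-instance-class db.m5.xlarge \
  --engine mysql \
  --storage-type gp2 \
  --allocated-storage 2000 \
  --master-username admin \
  --master-user-password 'changeme'
```

Once the dump has been loaded into it and verified, the oversized 6000GB instance can be deleted.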
Another issue: you should probably put your database on an RDS ‘server’ that is tuned for databases, and put your application on a separate ‘server’. They have different needs and can be migrated up or down independently.