Database Restoration Issue

Question:

I am new to PostgreSQL.

When I attempt to restore a .sql.gz dump into the database with psql, the restore does not complete when the database is too large (32 GB, for example).

Because of this, I end up with empty tables.
How can I resolve this issue?

psql -U postgres -d dbname -f <filename>

Answer:

It may not be the data size, but rather the number of tables and schemas. Database dumps and restores are known to run into problems in those cases.

For large databases, the preferred backup approach is pg_basebackup, which creates a snapshot of the database files at a specific point in time. Such backups restore much faster. Unfortunately this does not work for everything, for example restoring across major versions (but see pg_upgrade).
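A rough sketch of taking such a backup (the host, user, and target directory are placeholders for your own setup, and the connecting role needs the REPLICATION privilege):

# physical base backup of the whole cluster, streaming the WAL needed to make it consistent
pg_basebackup -h <host> -U <replication_user> -D /backups/base -X stream -P

Restoring then means stopping the server and putting those files back into the data directory, rather than replaying SQL statements.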

The raw data size itself is not the problem. But in your case, pg_dump may not be very useful, and you may want to look at other ways of making a backup.
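As a side note on your original command: psql cannot read a gzipped file directly, so if you are passing the .sql.gz file straight to -f, it needs to be decompressed on the fly. A minimal sketch (user and database names are placeholders); ON_ERROR_STOP makes psql abort at the first error instead of silently continuing, which often explains why tables end up empty:

# stream the compressed SQL dump into psql without unpacking it on disk
gunzip -c <filename.sql.gz> | psql -v ON_ERROR_STOP=1 -U postgres -d dbname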

The following may help if you are able to change the parameters used for pg_dump:

The comments on your post indicate that you have issues with backing up and restoring big databases.

For huge databases I recommend using the custom format of pg_dump (-Fc), which PostgreSQL handles well even for really huge databases (I have had no issues with this up to 300 GB).

Usage example:

pg_dump -Fc -U <user> -f <filename.dmp> <database>

To restore:

pg_restore -Fc -j <jobs> -U <user> -d <database> <filename.dmp>

The custom format allows you to import your dump into the dev system with more than one job. That means during index creation on the dev server, with 4 jobs you can use up to 4 cores in parallel for CREATE INDEX or ALTER TABLE statements.
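For example, assuming a development database named devdb and four available cores (both placeholders):

pg_restore -j 4 -U <user> -d devdb <filename.dmp>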

If you need more options, have a look at the documentation of pg_dump and pg_restore.

I hope my knowledge about dump and restore in this post helps you save a lot of time when importing dumps.
