Introducing mysqlpump

Starting with MySQL 5.7.8, we are shipping a new client utility called mysqlpump that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects and table data. The goal of mysqlpump is to have a modern utility that is extendable and has native support for parallelization. We felt that the best way to achieve this was to write an entirely new tool where we would be free to break compatibility with mysqldump and where we would not be forced to implement some of the legacy functionality that it provides. mysqlpump executes all of the operations related to dumping multiple databases and the objects inside them in parallel in order to drastically reduce the total time needed to perform a logical backup.

Backup Examples

Here is the command you would use to do a basic backup of all databases (by default mysqlpump will dump all databases):
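A minimal invocation looks like this (the output file name is illustrative):

```shell
# Dump all databases (the default) and redirect the generated SQL to a file.
mysqlpump > full_backup.sql
```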

Note: mysqlpump will not dump the following special internal databases by default: PERFORMANCE_SCHEMA, INFORMATION_SCHEMA, sys, and ndbinfo. mysqlpump will also not dump the following system tables from the mysql schema unless they are explicitly requested using the --include-tables option: user, db, tables_priv, columns_priv, procs_priv, proxies_priv, event, proc, apply_status, schema, general_log, slow_log.

mysqlpump divides the dump process into several sub-tasks and then adds these sub-tasks to a multi-threaded queue. This queue is then processed by N threads (2 by default) in order to allow the work to be done in parallel. Each thread makes a connection to the MySQL server to retrieve all necessary information and then begins its work. The number of threads used can be configured using the --default-parallelism and --parallel-schemas options.

To back up all databases with 4 threads:
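For example (file name illustrative):

```shell
# Raise the default queue's thread count from 2 to 4.
mysqlpump --default-parallelism=4 > full_backup.sql
```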

To create 2 worker queues, one to process databases db1 and db2 and one to process all other databases, with 3 threads per queue (note that by default 2 threads would be created per queue to complete the dump tasks):
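A sketch of the command (file name illustrative):

```shell
# --parallel-schemas creates a dedicated queue for db1 and db2; all other
# databases go to the default queue. --default-parallelism applies to both.
mysqlpump --parallel-schemas=db1,db2 --default-parallelism=3 > full_backup.sql
```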

To spawn 5 threads to work on the first queue, in which db1 and db2 will be processed, and 3 threads to work on the default queue for the rest of the databases:
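For example (file name illustrative):

```shell
# The N: prefix on --parallel-schemas sets the thread count for that queue.
mysqlpump --parallel-schemas=5:db1,db2 --default-parallelism=3 > full_backup.sql
```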

To spawn 5 threads to work on the first queue in which db1 and db2 will be processed, 2 threads to work on the second queue to process db3 and db4, and 3 threads to work on the default queue used for all other databases:
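For example (file name illustrative):

```shell
# Two explicit queues with their own thread counts, plus the default queue.
mysqlpump --parallel-schemas=5:db1,db2 --parallel-schemas=2:db3,db4 \
          --default-parallelism=3 > full_backup.sql
```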

To back up only the ‘accounts’ and ‘inventory’ databases/schemas:
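For example (file name illustrative):

```shell
# With --databases, all name arguments are interpreted as database names.
mysqlpump --databases accounts inventory > partial_backup.sql
```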

If we want to export only the metadata information and skip all the data associated with the tables then we can use the --skip-dump-rows option:
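For example (file name illustrative):

```shell
# Dump the CREATE statements only; skip the INSERTs for the row data.
mysqlpump --skip-dump-rows > metadata_only.sql
```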

This example would only dump databases/schemas db1 and db2:
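For example (file name illustrative):

```shell
# Dump only the named databases.
mysqlpump --include-databases=db1,db2 > dump.sql
```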

This example would dump all databases/schemas with a name starting with ‘db’:
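For example (file name illustrative):

```shell
# The % wildcard matches any sequence of characters, as in SQL LIKE patterns.
mysqlpump --include-databases=db% > dump.sql
```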

This example would dump all databases/schemas except db1 and db2:
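For example (file name illustrative):

```shell
# Dump everything except the named databases.
mysqlpump --exclude-databases=db1,db2 > dump.sql
```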

This example would dump all tables from all databases/schemas except a table named ‘t’ that may be present within any database/schema:
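For example (file name illustrative):

```shell
# The exclusion applies to a table named 't' in any database.
mysqlpump --exclude-tables=t > dump.sql
```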

This example would dump all tables from all databases/schemas except tables whose names match the ‘__user’ pattern in any database/schema (dbuser, STuser, 45user, etc.):
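For example (file name illustrative):

```shell
# Each _ matches exactly one character, as in SQL LIKE patterns.
mysqlpump --exclude-tables=__user > dump.sql
```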

This example would dump only events with name ‘ev2’ and routines with name ‘p1’ from all databases/schemas, excluding the mysql system database/schema:
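One way to express this filter (file name illustrative):

```shell
# Restrict events to 'ev2' and routines to 'p1', and leave out the
# mysql system schema entirely.
mysqlpump --include-events=ev2 --include-routines=p1 \
          --exclude-databases=mysql > dump.sql
```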

This example would only dump the users present in the mysql.user table. The --users option dumps logical user definitions in the form of CREATE USER and GRANT statements rather than as INSERT statements into the mysql.user table.
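For example (file name illustrative):

```shell
# Exclude every database (% matches all names) so that only the
# CREATE USER / GRANT statements are written.
mysqlpump --exclude-databases=% --users > users.sql
```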

This example would dump all users present in the mysql.user table except the root user(s):
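For example (file name illustrative):

```shell
# Dump all user definitions except accounts named 'root'.
mysqlpump --exclude-databases=% --exclude-users=root --users > users.sql
```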

By default mysqlpump will also show the dump progress. For example:
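The progress indicator looks roughly like this (the exact format may differ between versions, and the counts and timing here are purely illustrative):

```
Dump progress: 1/2 tables, 0/331744 rows
Dump progress: 2/2 tables, 185000/331744 rows
Dump completed in 954 milliseconds
```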

Restore Example

Importing and restoring the data is easy. To import the generated backup file into a new instance of MySQL, you can simply use the mysql command-line client to import the data:
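For example (file name and credentials illustrative):

```shell
# Replay the generated SQL statements against the target server.
mysql -u root -p < full_backup.sql
```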

Current Limitations

mysqlpump currently tries to do as much work in parallel as possible and the dump threads lack a synchronization point before they start backing up the data. This makes it currently unsafe as a general purpose backup replacement (i.e. fully replacing mysqldump). We will be adding consistent backup related features and much more in upcoming versions.


For additional details on the development work so far, please see WL#7755.

We look forward to hearing from MySQL users as you begin trying out this new tool for creating logical dumps and backups! If you encounter any problems with this new tool, or you have ideas for new features, please let us know here in the comments, open a bug report / feature request, or open a support ticket.

As always, THANK YOU for using MySQL!

6 thoughts on “Introducing mysqlpump”

  1. This sounds very nice.

    When backing up a database using mysqldump, running a “FLUSH TABLES WITH READ LOCK;” is necessary in order to get a consistent backup of the entire database. Unfortunately, this also means that the database is locked as read-only for the duration of the backup, and that’s a non-starter for many databases.

    As a workaround, many shops use master-secondary replication, and perform backups on the secondary node. However, running a second node is expensive.

    Does mysqlpump also require “FLUSH TABLES WITH READ LOCK;” to achieve a consistent backup? Maybe this will be clarified in future releases.
