MySQL Logical Backup using mysqlsh – Export

In this article, we are going to demonstrate a MySQL logical backup (export) using mysqlsh. There are several ways to back up MySQL data.

Those are:
                   1) Physical Backup
                   2) Logical Backup


Physical Backup:

A physical backup copies the directories and files that contain the database contents. A physical backup is also called a raw backup.
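
For illustration, a minimal cold (offline) physical backup is just a file-system copy of the data directory while the server is stopped. The paths below are only assumptions (the default data directory /var/lib/mysql and an example target directory), not taken from this demo environment:

[root@test-machine01 ~]# systemctl stop mysqld                         # stop the server so the files are consistent
[root@test-machine01 ~]# cp -a /var/lib/mysql /u01/backup4/raw_copy    # copy the raw data directory as-is
[root@test-machine01 ~]# systemctl start mysqld                        # start the server again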


Logical Backup:

A logical backup converts all the available rows into single or multiple INSERT statements and also contains the CREATE statements needed to recreate databases, tables, triggers, and so on. A logical backup is also called a text backup.
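
For illustration, the text inside a logical dump is ordinary SQL. The snippet below is only a hypothetical example of what a dumped table could look like; it is not taken from any of the dump files created later in this article:

CREATE TABLE `departments` (
  `dept_no` char(4) NOT NULL,
  `dept_name` varchar(40) NOT NULL,
  PRIMARY KEY (`dept_no`)
);

INSERT INTO `departments` VALUES ('d001','Marketing'),('d002','Finance');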

List of utilities available for logical backup: –

              A) mysqldump  Click here to read more
              B) mysqlpump  Click here to read more
              C) mydumper   Click here to read more
              D) mysqlshell (mysqlsh)

 

mysqlshell:

The MySQL Shell (mysqlsh) is a powerful, advanced client and code editor for MySQL Server. MySQL Shell can create a logical dump and perform a logical restore of an entire database instance, individual schemas, or individual tables, including user accounts.

Like mydumper, the mysqlsh utility is not available by default. We need to install it from the MySQL yum repository using the yum command.

[root@test-machine01 backup3]# yum install mysql-shell
Loaded plugins: langpacks, ulninfo
Dependencies Resolved

==============================================================================================================================================================================================
 Package                                     Arch                                   Version                                       Repository                                             Size
==============================================================================================================================================================================================
Updating:
 mysql-shell                                 x86_64                                 8.0.23-1.el7                                  mysql-tools-community                                  32 M

Transaction Summary
==============================================================================================================================================================================================
Upgrade  1 Package

Total download size: 32 M
Is this ok [y/d/N]: y
Downloading packages:
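
After installation (or upgrade) completes, you can optionally confirm which version of the client is on the machine; mysqlsh provides a standard --version flag:

[root@test-machine01 backup3]# mysqlsh --version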


Once the mysqlsh utility is installed, you can execute the below command to connect to the MySQL instance.

[root@test-machine01 backup3]# mysqlsh --uri root@localhost:3306
Please provide the password for 'root@localhost:3306': *********
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): yes
MySQL Shell 8.0.22

Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost:3306'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 154
Server version: 8.0.22 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
 MySQL  localhost:3306 ssl  JS >
 MySQL  localhost:3306 ssl  JS > \quit
Bye!


In this demonstration we are going to look at the following mysqlsh dump options: util.dumpInstance(), util.dumpSchemas(), and util.dumpTables().

util.dumpInstance() - Dump an entire database instance, including users
SYNTAX : util.dumpInstance(outputUrl[, options])

util.dumpSchemas() - Dump a set of schemas
SYNTAX : util.dumpSchemas(schemas, outputUrl[, options])

util.dumpTables() - Dump specific tables and views
SYNTAX : util.dumpTables(schema, tables, outputUrl[, options])
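
All three utilities take an options dictionary as the last argument. Before running a real export, the dryRun option (documented for the dump utilities) can be used to preview which schemas and tables would be dumped without writing any files; the output directory below is only an example:

 MySQL  localhost:3306 ssl  JS > util.dumpInstance("/u01/backup4/preview", {dryRun: true, showProgress: false})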


Full DB Instance Backup: Use the below command to take a backup of the full MySQL instance.

MySQL  localhost:3306 ssl  JS > util.dumpInstance("/u01/backup4/fulldb", {threads: 8, showProgress: true})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing users DDL
Preparing data dump for table `employees`.`titles`
Data dump for table `employees`.`titles` will be chunked using column `emp_no`
Writing DDL for schema `employees`

[root@test-machine01 fulldb]# ls -ltr
total 36600
-rw-r-----. 1 root root      876 Feb 16 14:23 @.json
-rw-r-----. 1 root root      813 Feb 16 14:23 employees.json
-rw-r-----. 1 root root      575 Feb 16 14:23 repl_test.json
-rw-r-----. 1 root root      240 Feb 16 14:23 @.sql
-rw-r-----. 1 root root      240 Feb 16 14:23 @.post.sql
-rw-r-----. 1 root root     4758 Feb 16 14:23 @.users.sql
-rw-r-----. 1 root root      579 Feb 16 14:23 employees.sql
-rw-r-----. 1 root root      747 Feb 16 14:23 employees@departments.sql
-rw-r-----. 1 root root      826 Feb 16 14:23 employees@employees.sql
-rw-r-----. 1 root root      867 Feb 16 14:23 employees@titles.sql
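
The same full-instance dump can also be started non-interactively from the operating system shell, which is handy for scheduled jobs. A minimal sketch using mysqlsh's --js and --execute (-e) options; the target directory is only an example, and the password is expected to come from the saved login path or an interactive prompt:

[root@test-machine01 backup4]# mysqlsh --uri root@localhost:3306 --js -e 'util.dumpInstance("/u01/backup4/fulldb_cron", {threads: 8})'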


Excluding a specific DB from the Full DB Instance Backup: Use the below command to exclude a specific DB from the full MySQL instance backup.

 MySQL  localhost:3306 ssl  JS > util.dumpInstance("/u01/backup4/fulldb_excludesch", {excludeSchemas: ["repl_test"],threads: 8, showProgress: true})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing users DDL
Preparing data dump for table `employees`.`titles`

[root@test-machine01 fulldb_excludesch]# ls -ltr
total 36512
-rw-r-----. 1 root root      240 Feb 16 22:40 @.sql
-rw-r-----. 1 root root      821 Feb 16 22:40 @.json
-rw-r-----. 1 root root      813 Feb 16 22:40 employees.json
-rw-r-----. 1 root root      240 Feb 16 22:40 @.post.sql
-rw-r-----. 1 root root     4758 Feb 16 22:40 @.users.sql
-rw-r-----. 1 root root      579 Feb 16 22:40 employees.sql
-rw-r-----. 1 root root      867 Feb 16 22:40 employees@titles.sql
-rw-r-----. 1 root root      612 Feb 16 22:40 employees@titles.json
-rw-r-----. 1 root root      670 Feb 16 22:40 employees@employees.json


Excluding specific Tables from the Full DB Instance Backup: Use the below command to exclude specific tables from the full instance backup.

 MySQL  localhost:3306 ssl  JS > util.dumpInstance("/u01/backup4/fulldb_excludetab", {excludeSchemas: ["repl_test"],excludeTables: ["employees.employees","employees.salaries"],threads: 8, showProgress: true})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing users DDL
Preparing data dump for table `employees`.`titles`
Data dump for table `employees`.`titles` will be chunked using column `emp_no`
Preparing data dump for table `employees`.`dept_emp`

[root@test-machine01 fulldb_excludetab]# ls -ltr
total 6560
-rw-r-----. 1 root root     821 Feb 16 22:44 @.json
-rw-r-----. 1 root root     240 Feb 16 22:44 @.sql
-rw-r-----. 1 root root     686 Feb 16 22:44 employees.json
-rw-r-----. 1 root root     240 Feb 16 22:44 @.post.sql
-rw-r-----. 1 root root    4758 Feb 16 22:44 @.users.sql
-rw-r-----. 1 root root     612 Feb 16 22:44 employees@titles.json
-rw-r-----. 1 root root     579 Feb 16 22:44 employees.sql
-rw-r-----. 1 root root     620 Feb 16 22:44 employees@dept_manager.json
-rw-r-----. 1 root root     616 Feb 16 22:44 employees@dept_emp.json
-rw-r-----. 1 root root     575 Feb 16 22:44 employees@departments.json
-rw-r-----. 1 root root     867 Feb 16 22:44 employees@titles.sql
-rw-r-----. 1 root root    1019 Feb 16 22:44 employees@dept_manager.sql


Compressed Backup: Use the below command to take a compressed backup.

 MySQL  localhost:3306 ssl  JS > util.dumpInstance("/u01/backup4/comfulldb", {threads: 8, compression: "gzip", showProgress: true})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing users DDL
Preparing data dump for table `employees`.`titles`
Data dump for table `employees`.`titles` will be chunked using column `emp_no`
Preparing data dump for table `employees`.`employees`
Data dump for table `employees`.`employees` will be chunked using column `emp_no`

[root@test-machine01 comfulldb]# ls -ltr
total 44444
-rw-r-----. 1 root root      575 Feb 16 22:47 repl_test.json
-rw-r-----. 1 root root      876 Feb 16 22:47 @.json
-rw-r-----. 1 root root      813 Feb 16 22:47 employees.json
-rw-r-----. 1 root root      240 Feb 16 22:47 @.sql
-rw-r-----. 1 root root      240 Feb 16 22:47 @.post.sql
-rw-r-----. 1 root root     4758 Feb 16 22:47 @.users.sql
-rw-r-----. 1 root root      867 Feb 16 22:47 employees@titles.sql
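
As per the MySQL Shell documentation, the compression option accepts "zstd" (the default), "gzip", or "none". For example, an uncompressed dump can be taken when dump speed matters more than disk space; the directory name below is only an example:

 MySQL  localhost:3306 ssl  JS > util.dumpInstance("/u01/backup4/uncompfulldb", {threads: 8, compression: "none", showProgress: true})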


Single Schema/DB backup: Use the below command to take a single DB/Schema backup.

 MySQL  localhost:3306 ssl  JS > util.dumpSchemas(["employees"],"/u01/backup4/schemabkp", {threads: 8, showProgress: true})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `employees`.`titles`
Data dump for table `employees`.`titles` will be chunked using column `emp_no`
Preparing data dump for table `employees`.`employees`

[root@test-machine01 schemabkp]# ls -ltr
total 36504
-rw-r-----. 1 root root      240 Feb 16 23:11 @.sql
-rw-r-----. 1 root root      240 Feb 16 23:11 @.post.sql
-rw-r-----. 1 root root      634 Feb 16 23:11 @.json
-rw-r-----. 1 root root      813 Feb 16 23:11 employees.json
-rw-r-----. 1 root root      579 Feb 16 23:11 employees.sql
-rw-r-----. 1 root root      701 Feb 16 23:11 employees@current_dept_emp.pre.sql
-rw-r-----. 1 root root     1364 Feb 16 23:11 employees@current_dept_emp.sql
-rw-r-----. 1 root root      612 Feb 16 23:11 employees@titles.json
-rw-r-----. 1 root root      670 Feb 16 23:11 employees@employees.json


Multi Schema/DB backup: Use the below command to back up more than one schema/DB.

 MySQL  localhost:3306 ssl  JS > util.dumpSchemas(["employees","repl_test"],"/u01/backup4/multischemabkp", {threads: 8, showProgress: true})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `employees`.`titles`
Data dump for table `employees`.`titles` will be chunked using column `emp_no`
Preparing data dump for table `employees`.`employees`

[root@test-machine01 multischemabkp]# ls -ltr
total 36592
-rw-r-----. 1 root root      240 Feb 16 23:14 @.sql
-rw-r-----. 1 root root      575 Feb 16 23:14 repl_test.json
-rw-r-----. 1 root root      240 Feb 16 23:14 @.post.sql
-rw-r-----. 1 root root      689 Feb 16 23:14 @.json
-rw-r-----. 1 root root      813 Feb 16 23:14 employees.json
-rw-r-----. 1 root root      579 Feb 16 23:14 employees.sql
-rw-r-----. 1 root root      999 Feb 16 23:14 employees@dept_emp.sql
-rw-r-----. 1 root root      706 Feb 16 23:14 employees@dept_emp_latest_date.pre.sql
-rw-r-----. 1 root root     1019 Feb 16 23:14 employees@dept_manager.sql
-rw-r-----. 1 root root     1263 Feb 16 23:14 employees@dept_emp_latest_date.sql
-rw-r-----. 1 root root      747 Feb 16 23:14 employees@departments.sql
-rw-r-----. 1 root root      856 Feb 16 23:14 employees@salaries.sql


Excluding Tables from a Schema/DB backup: Use the below command to exclude specific tables from a schema backup.

 MySQL  localhost:3306 ssl  JS > util.dumpSchemas(["employees"],"/u01/backup4/schema_excludetab", {excludeTables: ["employees.employees","employees.salaries"], threads: 8, showProgress: true})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `employees`.`titles`


[root@test-machine01 schema_excludetab]# ls -ltr
total 6552
-rw-r-----. 1 root root     240 Feb 16 23:17 @.sql
-rw-r-----. 1 root root     634 Feb 16 23:17 @.json
-rw-r-----. 1 root root     686 Feb 16 23:17 employees.json
-rw-r-----. 1 root root     240 Feb 16 23:17 @.post.sql
-rw-r-----. 1 root root     612 Feb 16 23:17 employees@titles.json
-rw-r-----. 1 root root     620 Feb 16 23:17 employees@dept_manager.json
-rw-r-----. 1 root root     616 Feb 16 23:17 employees@dept_emp.json
-rw-r-----. 1 root root     575 Feb 16 23:17 employees@departments.json


Table Backup: Use the below command to take a table-level backup.

 MySQL  localhost:3306 ssl  JS >  util.dumpTables("employees", [ "employees", "salaries" ], "/u01/backup4/tablebkp", {threads: 8, showProgress: true})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `employees`.`salaries`
Writing DDL for schema `employees`
Data dump for table `employees`.`salaries` will be chunked using column `emp_no`
Preparing data dump for table `employees`.`employees`
Data dump for table `employees`.`employees` will be chunked using column `emp_no`

[root@test-machine01 tablebkp]# ls -ltr
total 29976
-rw-r-----. 1 root root      633 Feb 16 23:19 @.json
-rw-r-----. 1 root root      301 Feb 16 23:19 employees.json
-rw-r-----. 1 root root      240 Feb 16 23:19 @.sql
-rw-r-----. 1 root root      240 Feb 16 23:19 @.post.sql
-rw-r-----. 1 root root      615 Feb 16 23:19 employees@salaries.json
-rw-r-----. 1 root root      670 Feb 16 23:19 employees@employees.json
-rw-r-----. 1 root root      476 Feb 16 23:19 employees.sql
-rw-r-----. 1 root root      856 Feb 16 23:19 employees@salaries.sql
-rw-r-----. 1 root root      826 Feb 16 23:19 employees@employees.sql
-rw-r-----. 1 root root      112 Feb 16 23:19 employees@employees@@0.tsv.zst.idx
-rw-r-----. 1 root root  5490060 Feb 16 23:19 employees@employees@@0.tsv.zst
-rw-r-----. 1 root root      760 Feb 16 23:19 employees@salaries@@0.tsv.zst.idx
-rw-r-----. 1 root root 25150471 Feb 16 23:19 employees@salaries@@0.tsv.zst
-rw-r-----. 1 root root      329 Feb 16 23:19 @.done.json
[root@test-machine01 tablebkp]#
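
util.dumpTables() can also export every table and view of a schema in one go: as per the MySQL Shell documentation, pass an empty table list together with the all option. A minimal sketch (the output directory is only an example):

 MySQL  localhost:3306 ssl  JS > util.dumpTables("employees", [], "/u01/backup4/alltablesbkp", {all: true, threads: 8, showProgress: true})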


Part II: MySQL Logical Backup using mysqlsh – Import

This document is only for learning purposes; always validate in a LAB environment first before applying in a LIVE environment.


Hope you like this article! Please share your valuable feedback and comments.
