How to dump MySQL data to S3 Compatible Object Storage using the MySQL Shell

In Part 1 of this blog series I shared how to create a Pure Storage FlashBlade S3 compliant Object Storage account, user and bucket. I also demonstrated how to download the MySQL airportdb database, and upload objects into the Object Store S3 bucket.

In Part 2, I showed how we can use the MySQL Shell to load the airportdb database dump into a MySQL 8 database from the S3 Object Storage bucket.

In this blog post I will show how we can also use the MySQL Shell to dump a MySQL 8 database to an S3 Compatible Object Storage bucket.

Create Bucket

Let’s start by creating a new Bucket in our previously configured S3 Compatible Object Storage.

Use the s3cmd ls or aws s3api list-buckets command to confirm Object Storage access and that the bucket name does not already exist.
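For example, a quick check using the fbstaines03 profile and FlashBlade endpoint that appear in the dump commands later in this post (adjust the profile name and endpoint URL for your own environment):

# list existing buckets with s3cmd (reads credentials from ~/.s3cfg)
s3cmd ls

# list existing buckets with the AWS CLI
aws s3api list-buckets --profile fbstaines03 --endpoint-url http://192.168.4.180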

We can use the aws s3api create-bucket command to create a new bucket.
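A minimal sketch, again using the same profile and endpoint, to create the airport-dump bucket used for the instance dump below:

# create the target bucket for the MySQL Shell dump
aws s3api create-bucket --bucket airport-dump --profile fbstaines03 --endpoint-url http://192.168.4.180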

MySQL Shell

The MySQL Shell supports dumping a MySQL 8 database to Oracle Cloud Infrastructure (OCI) Object Storage, Azure Blob Storage, AWS S3 Object Storage and other S3 compatible Object Storage solutions, such as Pure Storage FlashBlade Object Storage.

S3 Compatible Services

The MySQL Shell provides the following parameters for S3-compatible services:

s3BucketName: "string"
The name of the S3 bucket where the dump files are written. By default, the default profile in the Amazon Web Services (AWS) CLI config and credentials files located at ~/.aws/ is used to establish a connection to the S3 bucket.

s3CredentialsFile: "string"
A credentials file that contains the user's credentials to use for the connection, instead of the one in the default location, ~/.aws/credentials. Typically, the credentials file contains the aws_access_key_id and aws_secret_access_key to use for the connection.

s3ConfigFile: "string"
An AWS CLI configuration file that contains the profile to use for the connection, instead of the one in the default location, ~/.aws/config. Typically, the config file contains the region and output type to use for the connection.

s3Profile: "string"
The profile name of the S3 CLI profile to use for the connection, instead of the default profile in the AWS CLI configuration file used for the connection.

s3Region: "string"
The name of the region to use for the connection.

s3EndpointOverride: "string"
The URL of the endpoint to use instead of the default.

MySQL Shell S3 Parameters
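For reference, a minimal sketch of the AWS CLI files behind the fbstaines03 profile used in this post; the key values are placeholders, and the region is largely arbitrary for an on-premises FlashBlade:

~/.aws/credentials:

[fbstaines03]
aws_access_key_id = <FlashBlade access key>
aws_secret_access_key = <FlashBlade secret key>

~/.aws/config:

[profile fbstaines03]
region = us-east-1
output = json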

Consult the MySQL Shell documentation for full details of the instance, schema and table dump utilities.

Dumping MySQL Data

The MySQL Shell dump utilities provide the ability to dump data at the instance, schema and table level using the following:

util.dumpInstance(outputUrl[, options]) 
util.dumpSchemas(schemas, outputUrl[, options])
util.dumpTables(schema, tables, outputUrl[, options])
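These utilities are run from within a MySQL Shell session; a minimal sketch of starting one in JavaScript mode, assuming a local MySQL 8 instance and the root account:

# connect to the local MySQL 8 instance in JavaScript mode
mysqlsh root@localhost:3306 --js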

Let’s start at the Instance level.

Dump Instance

Using the same details we used with the aws s3api command, we can dump the MySQL database to the FlashBlade airport-dump S3 bucket with util.dumpInstance, using the syntax below:

util.dumpInstance("airport-db",{s3BucketName: "airport-dump", s3EndpointOverride: "http://192.168.4.180", s3Profile: "fbstaines03", threads: 4, compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks"]})

In the above, we can see the first argument provided, airport-db; this is the name of a folder which will be created within the s3BucketName bucket, airport-dump.

We can also see that the s3EndpointOverride and s3Profile arguments are set to the same values as in the aws s3api command.

The s3CredentialsFile, s3ConfigFile and s3Region arguments have been left unset, so their default values are used.

The number of parallel threads is the default value of 4.

And the compatibility options used are:

strip_restricted_grants, to remove specific privileges that are restricted by MySQL HeatWave Service from GRANT statements, so users and their roles cannot be given these privileges.

strip_definers to remove the DEFINER clause from views, routines, events, and triggers, so these objects are created with the default definer.

ignore_missing_pks to make the instance dump utility ignore any missing primary keys when the dump is carried out.
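If you want to validate the options before writing anything to the bucket, the dump utilities also accept a dryRun option, which reports what would be dumped without creating any objects; a sketch using the same settings:

// validate the dump options without writing to the S3 bucket
util.dumpInstance("airport-db", {dryRun: true, s3BucketName: "airport-dump", s3EndpointOverride: "http://192.168.4.180", s3Profile: "fbstaines03", threads: 4, compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks"]})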

Ok, let’s see that in action.

Great, using the aws s3 ls command we can list the bucket objects, and by grepping for 'Total Objects' we can see the total number of objects written.
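A sketch of that check, assuming the same profile and endpoint as before:

# list the dump objects in the airport-dump bucket and show the object count
aws s3 ls s3://airport-dump --recursive --summarize --profile fbstaines03 --endpoint-url http://192.168.4.180 | grep "Total Objects"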

Logging on to the FlashBlade, navigating to Storage -> Object Store and selecting the airport-dump bucket, we can confirm that the size and number of objects written agree.

Storage – Object Store

Dump Schemas

We can also use the MySQL Shell util.dumpSchemas utility to dump a list of one or more schemas from a MySQL instance, using similar syntax to util.dumpInstance:

util.dumpSchemas(["airportdb"],"airport-db",{s3BucketName: "airport-schemas", s3EndpointOverride: "http://192.168.4.180", s3Profile: "fbstaines03", threads: 4, compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks"]})

Let's log on to the MySQL 8 database and perform a schema-level dump.
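Once the schema dump completes, the objects written can be checked in the same way; a sketch, assuming the airport-schemas bucket was created beforehand in the same way as airport-dump:

# confirm the schema dump objects landed in the airport-schemas bucket
aws s3 ls s3://airport-schemas --recursive --summarize --profile fbstaines03 --endpoint-url http://192.168.4.180 | grep "Total Objects"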

Dump Tables

The MySQL Shell util.dumpTables utility can be used to dump a list of tables from a MySQL schema using the following:

util.dumpTables("airportdb",["airline","airplane","airport"],"airport-tables",{s3BucketName: "airport-tables", s3EndpointOverride: "http://192.168.4.180", s3Profile: "fbstaines03", threads: 4, compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks"]})

And again, log on to the MySQL 8 database and run the command.

Before we move on, let’s have a look at the objects written to the S3 airport-tables bucket for the airline, airport and airplane tables.
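A sketch of that listing, again using the same profile and endpoint:

# list the objects written by the table-level dump
aws s3 ls s3://airport-tables --recursive --profile fbstaines03 --endpoint-url http://192.168.4.180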

To dump all of the views and tables from the specified schema, use the all option and set the tables parameter to an empty array, as in this example:

util.dumpTables("airportdb",[],"airport-all-tables",{s3BucketName: "airport-tables", s3EndpointOverride: "http://192.168.4.180", s3Profile: "fbstaines03", threads: 16, compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks"], "all": true })

And repeat for all tables.

And again, let’s have a look at objects written to the S3 airport-tables bucket for all tables.

Summary

In this blog post I have shared how to use the MySQL Shell dump utilities to perform MySQL 8 instance, schema and table level dumps to an on-premises FlashBlade S3 Object Store.
