MySQL InnoDB Cluster in Oracle Public Cloud: A Tutorial

In this tutorial, we are going to guide you through the entire process of creating a three-node InnoDB cluster in Oracle Public Cloud (OPC). We will address each of the steps, from the initial configuration and setup of the OPC IaaS resources to the InnoDB cluster creation and setup.

Setting up Three MySQL Instances on OPC

Before we can get started, we have to create three MySQL instances on OPC.

Log into the Oracle Cloud Dashboard and open the MySQL Cloud Service Console.

On the Oracle MySQL Cloud Service Console press [Create Service] to add a new MySQL Instance. For this tutorial, we will use three MySQL instances named ic01, ic02, and ic03. Enter a Service Description and click [Next >].

On the next screen, we select the virtual machine type–which will run Oracle Linux 6 as the OS–provide our SSH Public Key–so that we’re able to access the machine later on–and we specify some basic MySQL configuration details.

Note: For this tutorial we will use root as the Administrator User. Please be sure to use the same Administration Password on all 3 instances.

It will take a few minutes to create the virtual machine. We can use that time to start creating the other two MySQL service instances as well.

After the MySQL service instance is available, we can click the instance name–ic01–to get more information about the MySQL instance. Please make sure to note the Public IP address as we will need to update the local hosts file with this information.

In this case, the MySQL service instance is running on 140.86.13.239. Repeat this for all instances and then update your local /etc/hosts file. Open the file with your editor of choice–I’ll use nano:
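(For example, assuming you have sudo rights on your local machine:)

$ sudo nano /etc/hosts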

And add entries for all three of your MySQL service instances, specifying their hostnames and IP addresses:
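(Only the first address below comes from this example; the other two addresses are placeholders, and the hostnames follow the ic0N-mysql-1 pattern used later in this tutorial. Substitute the Public IPs and names of your own instances.)

140.86.13.239   ic01-mysql-1
140.86.13.240   ic02-mysql-1
140.86.13.241   ic03-mysql-1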

After this has been completed, we can update the Access Rules to allow the MySQL instances to talk to each other within the OPC network and enable public access to the machines (if desired). Click the context menu and select [Access Rules].

 

On the Access Rules screen click [Create Rule] to bring up the Create Access Rule dialog.

Create three new rules and enable public access to the machines:

Create Access Rule #1
Rule name: local_mysql_ic_access
Description: Allow local InnoDB cluster communication
Source: <custom> 10.196.0.0/16
Destination: mysql_MASTER
Destination Port(s): 3306, 13306

Create Access Rule #2
Rule name: local_mysql_ic_access2
Description: Allow local InnoDB cluster communication
Source: <custom> 10.196.0.0/16
Destination: mysql_MASTER
Destination Port(s): 13306

Create Access Rule #3
Rule name: mysql_p2admin_x
Description: Permit public access to MySQL X Protocol Port
Source: PUBLIC-INTERNET
Destination: mysql_MASTER
Destination Port(s): 33060

Enable Access Rule for public 3306 access
Rule Name: ora_p2admin_mysql

After these steps, you have three instances that are ready to be prepared for InnoDB cluster usage from an SSH session.

Preparing the MySQL Service Instances for InnoDB Cluster Usage

In the previous section, we created the MySQL Cloud Service instances within OPC and uploaded our public SSH key. We also added the OPC instance names to our local hosts file so we can access them by (short) name instead of IP. Now we can access our MySQL service instances in OPC via SSH on our local machine.

Open a terminal and type the following to connect to the first instance:
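(A sketch, assuming the hosts file entries above and the default opc account on the OPC compute image; point -i at the private key that matches the SSH public key you uploaded.)

$ ssh -i ~/.ssh/id_rsa opc@ic01-mysql-1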

If everything is configured correctly you will see the following welcome message:

Installing Python 2.7

Since Python is required by MySQL Shell, it needs to be installed before continuing. In this case, Python is going to be installed using the Software Collection Library 2.3 for Oracle Linux. Use the following command to download the yum repository file that includes an entry for the Software Collections repository:
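(A sketch, assuming Oracle Linux 6 and the public Oracle yum server; the repository URL may have changed since this was written.)

$ cd /etc/yum.repos.d
$ sudo wget http://public-yum.oracle.com/public-yum-ol6.repo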

An easy way to enable repositories is to use the yum-utils package, which is used in this tutorial. Since it’s not installed by default, I will install it with the following command:
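(For example:)

$ sudo yum install -y yum-utils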

Once the installation of yum-utils package finishes, enable the Software Collection repository:
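(Assuming the Software Collections repository id is ol6_software_collections; check sudo yum repolist all if it differs on your image.)

$ sudo yum-config-manager --enable ol6_software_collections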

To continue, install the Software Collection Library Utility and Python 2.7:
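(For example:)

$ sudo yum install -y scl-utils python27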

Now, enable the use of Python 2.7:
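(This opens a new shell with the Python 2.7 collection on the path:)

$ scl enable python27 bash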

Installing the MySQL Shell

To install MySQL Shell first we need to install the official MySQL yum repository. Run the following command to install the yum repository:
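(One way to do this; first download the current release package for Oracle Linux 6 from https://dev.mysql.com/downloads/repo/yum/, as the exact file name changes between releases.)

$ sudo yum localinstall mysql57-community-release-el6-*.noarch.rpm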

Type the following command in the terminal to install MySQL Shell:
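(For example:)

$ sudo yum install -y mysql-shell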

Preparing the MySQL Instances for InnoDB Cluster Usage

First, we need to switch to the “oracle” user and then start MySQL Shell specifying that Python 2.7 should be enabled for it:
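(A sketch of the two steps; the scl wrapper makes the Python 2.7 collection available to mysqlsh.)

$ sudo su - oracle
$ scl enable python27 mysqlsh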

We switch to the “oracle” user because the MySQL Service (mysqld) is only exposed to the “oracle” user, and it is not in the PATH for the “opc” user.

Since the OPC MySQL service instances use the Password Validation Plugin, and its default policy could be too strict for the auto-generated Router bootstrap password, we will avoid possible issues by lowering the password policy one level. Run the following on each instance using MySQL Shell:
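(A minimal sketch, assuming the policy should drop one level, for example from MEDIUM to LOW; the -c flag requests a classic session in MySQL Shell 1.0.x.)

mysql-js> \connect -c root@localhost:3306
mysql-js> session.runSql("SET GLOBAL validate_password_policy = 'LOW'")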

Next, we configure the local instance by calling the following function, typing the password for the root user when prompted:
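(The account URI below assumes the root Administration User created earlier; the Shell prompts for the password and for the path to the configuration file.)

mysql-js> dba.configureLocalInstance('root@localhost:3306')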

The MySQL configuration file (my.cnf) needs to be specified and is found at /u01/bin/mysql/my.cnf. If you used an Administrative User other than “root”, MySQL Shell gives you the option to create a new Administrative User for the InnoDB cluster when you are configuring the host using dba.configureLocalInstance(), but since we are using “root” for this tutorial, no new user needs to be created.

Then you will see a report with the changes made by MySQL Shell and a message saying that you need to restart the MySQL service instance to apply them:

To restart the MySQL server, first quit the MySQL Shell:
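(The \q command exits the Shell.)

mysql-js> \q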

Then run the following command using the “oracle” user to stop the MySQL Server instance:
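(The OPC image may ship its own service scripts; a generic approach that works from the oracle user is mysqladmin, which prompts for the root password.)

$ mysqladmin -u root -p shutdown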

Then, start the MySQL server again:
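(Again a sketch; the defaults file path is the one mentioned above, and your image may provide a dedicated start script instead.)

$ mysqld_safe --defaults-file=/u01/bin/mysql/my.cnf &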

Once the service is running again, start MySQL Shell specifying that Python 2.7 should be enabled for it:
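(This is the same invocation used earlier.)

$ scl enable python27 mysqlsh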

Then run the following command to verify that the host is ready for cluster usage:
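(For example, against the local instance:)

mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')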

Creating the InnoDB Cluster

To create the InnoDB cluster we will run the shell on our local machine using the root user, and specify the port where MySQL is running:
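(Something along these lines, using the short hostname from the hosts file; depending on your Shell version you may also need to request a classic session explicitly, as shown after the note below.)

$ mysqlsh --uri root@ic01-mysql-1:3306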

Remember to configure your local /etc/hosts file so that you can use the short hostname for the remote connection instead of the IP address.

Now, create a classic session to the host, typing the password for root when prompted:
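(A sketch using the \connect command; the -c flag requests a classic MySQL protocol session in Shell 1.0.x.)

mysql-js> \connect -c root@ic01-mysql-1:3306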

In the following command, pay attention to the ipWhitelist option. Since each host has its own network, it is necessary to specify the range of IP addresses that will be permitted to connect to the cluster. By default this option is set to “AUTOMATIC”, which only permits connections from private subnetworks that the host belongs to. You can specify multiple ranges, separated by commas, in CIDR notation (the number after the slash is the number of network bits). The result of the cluster creation is assigned to a variable for later use:
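(A minimal sketch; the cluster name testCluster is just an example, and the whitelist range is the same one used in the access rules earlier.)

mysql-js> var cluster = dba.createCluster('testCluster', {ipWhitelist: '10.196.0.0/16'})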

A couple of messages with information about the cluster creation, along with the function required to add instances to the cluster, will appear once the cluster is created.

Then we can verify the status of the cluster with the following command:
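(For example:)

mysql-js> cluster.status()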

Adding Instances to the InnoDB Cluster

Now it’s time to add instances to the cluster. Run the following function to add the second instance that is already configured. For the reason explained above, the optional parameter ipWhitelist is specified as well:
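(Assuming the second instance is reachable as ic02-mysql-1:)

mysql-js> cluster.addInstance('root@ic02-mysql-1:3306', {ipWhitelist: '10.196.0.0/16'})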

Run the following command to verify the status of our instance in the cluster:

Then add the third instance:
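(Assuming it is reachable as ic03-mysql-1:)

mysql-js> cluster.addInstance('root@ic03-mysql-1:3306', {ipWhitelist: '10.196.0.0/16'})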

And run the command again to verify the status of our cluster:

Persist the Cluster Configuration

In order to persist the cluster configuration on each instance–so that when a restart happens the instances automatically rejoin the cluster–we must use dba.configureLocalInstance() again on each instance. That command will update the local my.cnf file with the parameters necessary to automatically rejoin to the cluster.

To do this, run the following command locally on each instance:
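(As before, from a MySQL Shell opened locally on each instance:)

mysql-js> dba.configureLocalInstance('root@localhost:3306')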

Configure a MySQL Router

The next step is to bootstrap MySQL Router against our cluster. We assume that you already have MySQL Router 2.1.3+ installed. Open a terminal or a command prompt and type the following command:

Windows:
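(A sketch; the target directory is just an example, and the URI points at one of the cluster instances.)

C:\> mysqlrouter.exe --bootstrap root@ic01-mysql-1:3306 --directory C:\myrouter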

Linux:
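(The same bootstrap on Linux:)

$ mysqlrouter --bootstrap root@ic01-mysql-1:3306 --directory ~/myrouter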

Starting the MySQL Router

Windows:
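(The bootstrap step generates start and stop scripts inside the target directory; on Windows this should be a PowerShell script along these lines.)

C:\> C:\myrouter\start.ps1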

Linux (assuming the myrouter directory is in your home directory):
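(Using the start script generated by the bootstrap:)

$ ~/myrouter/start.sh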

Testing the InnoDB Cluster

Now we will demonstrate how the failover works.

Start MySQL Shell and connect locally using the MySQL Router port that points to our remote cluster:
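(Assuming the Router defaults, 6446 is the read-write classic port created by the bootstrap.)

$ mysqlsh --uri root@localhost:6446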

Then get the cluster handle with the following function:
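(For example:)

mysql-js> var cluster = dba.getCluster()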

And then, display the status of the cluster:

You can see which instance is currently the PRIMARY by looking for the instance whose mode is “R/W”. Then you can connect remotely to that instance and stop the MySQL service:
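(A sketch, assuming ic01-mysql-1 is the current PRIMARY and using the same generic shutdown approach as earlier.)

$ ssh opc@ic01-mysql-1
$ sudo su - oracle
$ mysqladmin -u root -p shutdown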

Once the MySQL service is stopped, check the cluster status again:

You will see a message saying that the connection to MySQL was lost during query.

To reconnect, run the following command:
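(Reconnecting through the Router port again:)

mysql-js> \connect -c root@localhost:6446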

Then execute the function to verify the cluster status again:

Notice that the instance ic01-mysql-1 has a “MISSING” status and one of the other instances has taken the PRIMARY (or “master”) role, changing its mode from “R/O” to “R/W”.

Start the MySQL service again on the remote host connection:
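(From the oracle user on that host, using the same sketch as before:)

$ mysqld_safe --defaults-file=/u01/bin/mysql/my.cnf &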

Check the cluster status again:

Notice that the instance ic01-mysql-1 is back “ONLINE” but now it has a SECONDARY (or “slave”) role with its mode set to “R/O”.

Conclusion

You have now acquired the basic knowledge to properly create and configure a cluster using Oracle Public Cloud and MySQL Cloud Service instances, including how to bootstrap a remote cluster so that you can access it locally. This scenario is useful when you don’t want to expose the servers where MySQL Server is running but instead provide the IP address of another server which just handles the application traffic.

There are many other features that we still want to share with you, so stay tuned for Part II of this series!

If you have any questions about the basic setup described here, please let us know here in the comments. As always, thank you for using MySQL!

12 thoughts on “MySQL InnoDB Cluster in Oracle Public Cloud: A Tutorial”

  1. Can I follow the “add instances” portion if I want to add additional instances to an existing cluster in the case where I have data in the cluster but I don’t have the binlogs back to the start?

    1. All MySQL InnoDB Cluster nodes share the same data set. You cannot (you should not even try, you must not..) add a node to the cluster that has executed a different transaction set than the rest of the cluster. The transaction set is persisted in the binary log (binlog).

      If you add a new instance (node) to the cluster, it must either have a subset of the cluster’s transaction set in its binary log or the binary log must be empty.

  2. Hi,

    Two questions:

    1- What if we want to isolate several web applications on different ports? (use of mysqld_multi in a “traditional” single MySQL server).
    How can we use the equivalent with this type of cluster?

    Otherwise, if we decide to use only one port (3306 for example) for all the web apps, is there a way to “mount” or “unmount” only one database without stopping the whole server (like in Oracle), which is the reason for using mysqld_multi (each instance runs as a service that we can start and stop independently from the others).

    2- Does MySQL Router need to run on a “node” of the cluster, or can it be installed directly on a webserver’s VM for example ?

    Thanks in advance.

  3. Regarding question 1

    The MySQL InnoDB Cluster is in its early days. We assume one application running on a set of dedicated MySQL server instances. That’s what the Shell and its AdminAPI cover today. Off the top of my head, I am unaware of design limitations that would stop the AdminAPI from supporting the requested configuration in the future.

    Your use case is very valid, and it may even work today – somehow, doing this or that manually. But that’s not what we are after. If we decide to address the use case then it needs to work out of the box.

    Regarding question 2

    Co-locating the router and MySQL instance is a MySQL InnoDB Cluster reference design decision but no system requirement. You may run them on the same machine or not, whatever you prefer.

  4. Hi,
    After configuring the cluster, I have run dba.configureLocalInstance(); to persist the configuration as instructed. However, when I restart an instance it does not automatically rejoin the cluster.

    1. Hello.

      Please can you share the following information?
      – Which version of MySQL Shell are you using?
      – What steps did you follow to persist the configuration?

      If you are using MySQL Shell v1.0.9, once the command dba.configureLocalInstance() is executed on each instance (locally) that is part of the InnoDB Cluster, the configuration should persist. The following is an example to persist the configuration on instances that are part of an InnoDB Cluster:

      – Open MySQL Shell locally on the instance that is part of the InnoDB Cluster
      – Execute the command dba.configureLocalInstance()

      To verify that the configuration persists:
      – Run the following command in MySQL Shell: session.runSql("show variables like 'group_replication%'");
      – Look for the variable ‘group_replication_start_on_boot’ and verify it is set to ‘ON’.

      In case the variable is not ‘ON’, or there are no variables like ‘group_replication%’, please share with us the steps you followed so we can figure out what the problem is.

      Thanks for your time.

      1. Hi,

        I have followed all the steps up to “Persist the Cluster Configuration”. And the cluster is up and running and all the nodes are online.

        As per the step “Persist the Cluster Configuration”, I have run dba.configureLocalInstance(); on each instance. And when I checked
        show variables LIKE ‘GROUP_REPLICATION’; group_replication_start_on_boot is ON

        then I restarted the MySQL server ic03-mysql-1:3306 and checked the cluster status from the other two servers; it’s always missing. I had to do a rejoinInstance to add the server back to the cluster.

        Note that I am not using OPC

        1. Hello Shenal.

          Could you please share all the steps you used to create the InnoDB cluster, as well as the versions of the products you are using?

          Are you using virtual machines or real hosts?
          What OS are your instances running?

          This is so we can try to replicate your environment and verify whether there is an issue.

          Thanks for your time.
