MySQL InnoDB Cluster – What’s New in the 5.7.17 Preview Release

We listened carefully to the feedback we received on the last preview release and incorporated most of the suggested changes; the rest will follow in the next release.

Here are the highlights of this release!

Remote Instances support

The last preview release of InnoDB cluster only supported working with sandbox instances on localhost. We did this on purpose to limit the scope of the first release.

Typical H/A setups, of course, include instances on separate network hosts. To allow testing of these setups, we have now officially enabled support for remote instances in all X AdminAPI calls. Using IP addresses or resolvable hostnames, and with administrative accounts that are remotely accessible, one can manage a cluster remotely.

This has been the most requested feature, and we are now delivering it to you, the community.

Security Model

The security model of the InnoDB cluster was simplified. It now relies entirely on the standard MySQL security mechanisms.

Initially, InnoDB Cluster security was based on an administrative MASTER key encryption scheme. That has been greatly simplified: we now rely solely on the MySQL account system, removing the master key requirement. And to provide the highest level of security by default – without adding complexity for the end user – we automatically create a separate replication user account for each instance when it is added to a cluster.

MySQL Shell / X AdminAPI

The X AdminAPI was highly improved and extended with a set of new commands that introduce new features to easily overcome typical difficulties, issues, and concerns on H/A deployments.

At the same time, we keep aiming at our goal of hiding the complexity associated with configuring and managing an H/A setup. We managed to simplify it even further, always keeping in mind how challenging it can be to properly set up H/A without compromising security and reliability.


Unencrypted connections are a problem whenever sensitive information is transported over the network. To enable secure deployments of an InnoDB cluster, we have added support for SSL.

SSL is used at initialization time of the cluster and for instances added to the cluster. Extended SSL settings were added to dba.createCluster(), Cluster.addInstance() and Cluster.rejoinInstance():

  • memberSsl: boolean; indicates whether SSL is used for the instance when starting or joining the cluster. Default: true. Set this option to false to disable SSL.
  • memberSslCa: Path to the file containing a list of trusted SSL CAs to set for the instance.
  • memberSslCert: Path to the file containing the X509 certificate in PEM format to set for the instance.
  • memberSslKey: Path to the file containing the X509 key in PEM format to set for the instance.
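As a sketch, the SSL options can be passed when creating the cluster or adding instances (the cluster name, hostnames, and certificate paths below are illustrative, not taken from the original post):

```js
// Create a cluster with SSL enabled and explicit certificate files
var cluster = dba.createCluster('testCluster', {
  memberSsl: true,
  memberSslCa: '/etc/mysql/ssl/ca.pem',
  memberSslCert: '/etc/mysql/ssl/server-cert.pem',
  memberSslKey: '/etc/mysql/ssl/server-key.pem'
});

// The same options apply when adding or rejoining instances
cluster.addInstance('root@192.168.1.20:3306', {memberSsl: true});
```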

Instance Sandbox Management

Regarding the Sandbox Instances management, we have renamed most of the available commands for the sake of keeping the API clear and straightforward, as well as introduced a new feature.

  • dba.startLocalInstance() renamed to dba.startSandboxInstance()
  • dba.stopLocalInstance() renamed to dba.stopSandboxInstance()
  • dba.killLocalInstance() renamed to dba.killSandboxInstance()
  • dba.deleteLocalInstance() renamed to dba.deleteSandboxInstance()
  • dba.validateInstance() renamed to dba.checkInstanceConfig()
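For illustration, a typical sandbox lifecycle with the renamed commands might look like this (the port number is arbitrary, and dba.deploySandboxInstance() is assumed to follow the same renaming pattern):

```js
dba.deploySandboxInstance(3310);   // deploy a new sandbox on port 3310
dba.stopSandboxInstance(3310);     // gracefully stop it
dba.startSandboxInstance(3310);    // start it again
dba.killSandboxInstance(3310);     // kill the process (simulate a crash)
dba.deleteSandboxInstance(3310);   // remove the sandbox files
```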

Configure a local instance for InnoDB Cluster usage

Most people would like to use their own pre-configured MySQL instances with InnoDB cluster. However, that may not be straightforward, since Group Replication has some important configuration requirements that must be met before it is ready to use. To make that process manageable, we have introduced a new feature to automate the configuration of an existing MySQL instance for InnoDB Cluster usage:


When this function is called, it reviews the current instance configuration to check whether it is valid for InnoDB cluster usage and, if not, it automatically configures the instance. Non-sandbox instances require a valid configuration file path to be passed as a parameter to the function.
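A call could look like the sketch below; the function name dba.configLocalInstance(), the option name, and the paths are assumptions based on the release's naming conventions, not quoted from the original post:

```js
// Check and, if needed, reconfigure a local server for cluster usage.
// The configuration file path is required for non-sandbox instances.
dba.configLocalInstance('root@localhost:3306',
                        {mycnfPath: '/etc/mysql/my.cnf'});
```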

Please note that this function only works with locally installed MySQL instances. To use it on every MySQL instance on the network, simply install the MySQL Shell on all your hosts.

Cluster Management

Being the core of the AdminAPI, cluster management got the spotlight, receiving several updates and bug fixes. It was also enhanced with new commands to overcome previous limitations.

dba.createCluster('cluster name', {adoptFromGR: true})

A common scenario is that the user already has a working Group Replication setup that is not yet managed by InnoDB Cluster. For that case, we have introduced a new option called “adoptFromGR” to the dba.createCluster() command, which creates the metadata of an InnoDB Cluster for an existing group.
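Adopting a running group could be sketched as follows (the connection data and cluster name are illustrative):

```js
// Connect to one member of the existing Group Replication setup,
// then adopt the running group into a managed InnoDB Cluster.
shell.connect('root@gr-member-1:3306', 'password');
var cluster = dba.createCluster('myCluster', {adoptFromGR: true});
```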


If an instance is added to or removed from the GR group outside of the Shell, the AdminAPI won’t be aware of it, so a command is needed to detect the change and update the Metadata.

Related to the above, a new command was introduced on the cluster object to rescan the group for members added or removed outside of the AdminAPI’s control.
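As a sketch (assuming the command lives on the cluster object as rescan()):

```js
// Rescan the group for members added or removed outside the AdminAPI,
// and update the InnoDB Cluster metadata accordingly.
cluster.rescan();
```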


When an instance joins a cluster, either a recovery process starts and the instance enters the “RECOVERY” state, which may take a while depending on the amount of data, or the join aborts if the data is incompatible.

To verify those scenarios, we have introduced a new command that validates whether an instance’s transactions are compatible with the servers belonging to the Default ReplicaSet, before actually joining it to a cluster. The goal is to provide an easy way for the user to check whether an instance – a previously cloned one or not – will be able to join the cluster without errors from the GR layer.
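A usage sketch follows; the command name checkInstanceState() and the connection data are assumptions, as the original post did not spell out the call:

```js
// Validate the instance's transaction set against the cluster's
// Default ReplicaSet before attempting to join it.
cluster.checkInstanceState('root@192.168.1.30:3306');
```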

Cluster.status() enhanced

Cluster.status() has been revamped to correctly handle the possible states that a replica set can be in. Here are the main changes:

  • A new ENUM status field was introduced to represent the status of the replica set as a whole: OK | OK_PARTIAL | OK_NO_TOLERANCE | NO_QUORUM, and UNKNOWN if the state cannot be correctly determined.
  • For the single-master topology, the tree structure showing the R/O instances as leaves of the R/W instance was removed, since GR has no concept of slave instances and the tree could mislead the user.
  • A new field called “primary” was introduced to display the current R/W master instance in single-master mode.
  • The failure tolerance calculation was corrected, and messages explaining it were introduced.
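Put together, a status report might look roughly like the sketch below; the field values and exact output shape are illustrative, not copied from the release:

```js
cluster.status();
// {
//   "clusterName": "myCluster",
//   "defaultReplicaSet": {
//     "status": "OK",                  // new ENUM status field
//     "primary": "host1:3306",         // new field, single-master mode
//     "topology": { /* flat list of member instances */ }
//   }
// }
```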

In a catastrophic scenario, in which several members of a cluster fail simultaneously, one may end up with a loss of quorum. Similarly, if a replica set becomes partitioned, e.g. because of a loss of communication, you could end up in a situation where none of the partitions has a majority. In such scenarios, cluster operations are completely blocked: no change to the configuration can be made. To unblock this scenario, we have introduced a completely new command called “forceQuorumUsingPartitionOf()”.

This function restores the cluster’s default ReplicaSet back to an operational status. Note that this is a very dangerous operation: it can create a split-brain scenario if used incorrectly and should be considered a last resort. Before running it, one must check that no partition of the group is still operating somewhere in the network but unreachable from the current location. Also, the command takes a mandatory parameter: the instance from which the group shall force its reconfiguration. All members that are online from the point of view of this instance will be added to the newly redefined group.
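A sketch of the call (connection details illustrative):

```js
// Restore quorum using the partition visible from the given instance.
// WARNING: last-resort operation – verify that no other partition of
// the group is still running elsewhere before using it.
cluster.forceQuorumUsingPartitionOf('root@host1:3306', 'password');
```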


One could ask: what if all the members of the group went offline for some reason, but the DBA was able to bring them back online? In this scenario it is impossible to restore the cluster; it needs to be restarted from scratch. To overcome this situation, we will introduce a new command called “rebootClusterFromCompleteOutage()”.

This function reboots a cluster from a complete outage. It picks the instance the MySQL Shell is connected to as new seed instance and recovers the cluster based on the existent Metadata of that instance.

This function will be included in the next release of the MySQL Shell / X AdminAPI.

Command availability checking

A Cluster consists of ReplicaSets and Instances, which, like the Cluster entity itself, can be in different states. For example, a ReplicaSet can be in a quorum-less state, and an Instance can be in a recovering phase. Many combinations of states are possible, so we had to define a matrix of available commands for each state combination.

As part of this release, we include command availability verifications: a command might be available, forbidden, or available but with warnings.

Soon another blog post will cover this topic in depth.

MySQL Shell features

Regarding the Shell core features, we have introduced a few new features and simplified some existent ones.

shell.connect(connectionData[, password])

Typically, when already in the Shell’s interactive mode, one could change the current session using \connect <uri>. It is now also possible to use the shell.connect() function. This comes in handy when writing scripts for the MySQL Shell.
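For example (URI and password are illustrative):

```js
// Switch the global session from within a script instead of \connect
shell.connect('root@localhost:3306', 'password');
```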

--uri is now the default option when starting the shell

You can now directly pass a MySQL URI to the shell, for example:
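The original post showed the command line as an image; based on the description that follows, it would be along these lines:

```shell
mysqlsh root@localhost:3307/sakila
```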

The above command line connects to the MySQL server at localhost, running on port 3307, with the root user, and sets the sakila DB as the default schema.

X Session entry points removed

In order to keep things simple, we have removed the X Session entry points on the command-line for now, such as: mysqlx.getSession(). These will come back at a later point in time when we add support for sharding.

Removed stored sessions

The stored sessions handling was removed for now. It will return at a later point in time.

Auto-load of mysqlx and mysql modules at shell startup

Previously, the “mysql” and “mysqlx” modules had to be loaded manually before use. We have introduced auto-loading of those modules, automatically assigning them to the global variables “mysql” and “mysqlx” accordingly.
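With auto-loading, the globals are usable immediately in scripts; a sketch (connection data illustrative, and the exact session-factory names may differ between Shell versions):

```js
// No manual module loading needed – the globals are already assigned.
var classicSession = mysql.getClassicSession('root@localhost:3306');
var xSession = mysqlx.getSession('root@localhost:33060');
```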

MySQL Router

The biggest highlight for MySQL Router is the set of improvements made to the bootstrapping mechanism. As in the previous release, the Router is capable of connecting to the Metadata storage and quickly configuring itself for immediate usage in an InnoDB Cluster setup.
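A bootstrap invocation could look like this (user and URI illustrative; per the security model changes, a regular administrative account is used rather than a master key):

```shell
mysqlrouter --bootstrap root@localhost:3306
```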


Following the security model updates, the bootstrap operation no longer requires the master key and uses a regular administrative MySQL user account instead.

Metadata Registration

To allow monitoring tools to detect Router instances existing on a given host, the Router now registers itself with the Metadata.

Self-Contained Router instances

In a typical basic setup, only one instance of the Router is configured on the system, and running multiple instances was previously not possible. That has changed: we have introduced a new option, “--directory”, to allow the deployment of a self-contained Router instance in a specific directory.

By default, only one instance of the Router may be configured automatically on the system. If additional instances are desired, they must be configured using the --directory option, which will put all Router-related files in a self-contained directory structure.
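For instance (URI and target path are illustrative):

```shell
# Bootstrap a second, self-contained Router instance into its own
# directory, keeping all of its files separate from the system config.
mysqlrouter --bootstrap root@localhost:3306 --directory /opt/router2
```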


The effort to improve InnoDB Cluster is part of ongoing work to improve the user experience with the latest MySQL High Availability features. Our goal is to make life easier for both the common user and the power user, bringing simplicity and power together.

Please read the tutorial, and give it a try!

About Miguel Araújo

Miguel Araújo is a Senior Software Engineer on the MySQL Middleware and Clients Team, at Oracle. He has worked on different projects and teams, mostly related with Middleware and High-Availability. Currently working on the MySQL Shell, leading the AdminAPI developments as part of the MySQL InnoDB Cluster project. He has a Computer Science Engineering degree and Master's degree, from the University of Minho, Portugal, where he was also a researcher. His backgrounds are on distributed systems, scalability, database replication and high-availability.

4 thoughts on “MySQL InnoDB Cluster – What’s New in the 5.7.17 Preview Release”

  1. I have a question. All of the demos, including Youtube, show using only sandboxed instances on a single computer. Is there some magic involved on building a multi-box cluster? If there isn’t, what is the command set I would need to use to build it?

    Thanks in advance!

    1. Hi Tom,
      we are rolling InnoDB cluster out in steps. The first Labs release we did back in September only had support for Sandbox instances. That’s why the current videos only cover that. The 2nd Labs release we did last week now also allows real-world multi-box clusters.

      The difference compared to Sandbox setups is that in a real-world setup one needs to install MySQL Server instances on the different boxes before starting with the InnoDB cluster setup through the MySQL Shell.

      On Linux this is usually done by opening an SSH connection to the remote hosts and using the MySQL repos as described here and here

      Next step would be to validate the MySQL configuration and ensure it is ready for HA usage. This can be done via the MySQL Shell using the dba.checkInstanceConfig() function.

      After that has been done one can continue following the standard procedure of how to setup the InnoDB cluster as described here

      We are working on a tutorial that specifically focuses on multi-box setups. Please keep following the blog posts here!
