Making the MTR rpl suite GTID_MODE Agnostic

In MySQL 5.6 we introduced GTID_MODE as a new server option. A global transaction identifier (GTID) is a unique identifier created and associated with each transaction when it is committed on the server of origin (master). This identifier is unique not only to the server on which it originated, but is unique across all servers in a given replication setup. There is a 1-to-1 mapping between all transactions and all GTIDs. For additional information, please refer to the MySQL manual.

Prior to 5.6.17 and 5.7.4, we had GTID-specific replication (referred to as “rpl” within the MTR suite) regression tests, and we had to separately run the binlog suite with the GTID_MODE=ON option enabled. To further improve test coverage, we wanted to make the MTR rpl suite GTID_MODE agnostic.

From 5.6.17 and 5.7.4 on, the MTR rpl suite is run daily within the continuous integration testing framework of MySQL, with GTID_MODE=ON for better code coverage.

However, we cannot just turn on the necessary mysqld switches and expect all of the test cases to pass, as the tests are bound to behave differently when run with GTID_MODE=ON. There were a number of challenges associated with this change, and this post will explain those challenges and the solutions that were implemented to address them.



Some of the challenges that we expected were:

  • C1. Result file differences due to additional output from SHOW BINLOG EVENTS. Since MTR compares each test’s output against a recorded result file, these differences break many tests.

  • C2. Tests that mix transactional and non-transactional engines in the same statement/transaction. GTIDs have options and requirements around such behavior.

  • C3. The fact that some tests do not require the logging of slave updates to be ON, and thus with GTID_MODE=ON we fail to start the server. This is because all servers involved in a GTID based replication group must have GTID_MODE=ON.

  • C4. Currently, even slaves using the “--gtid-mode=on --enforce-gtid-consistency --log-slave-updates” server options will still connect to the master using MASTER_AUTO_POSITION=0, because by default the replication setup in the mysql-test rpl suite has MASTER_AUTO_POSITION=0. But for GTID_MODE=ON, we should have MASTER_AUTO_POSITION=1, since that allows for automatic connection and replication setup using the correct GTID values.


To overcome these challenges, the following solutions were implemented:

  • Solution to C1: We use “” which filters out the additional GTID output from the result files.

    However, this is not always enough. In some cases we need to create two wrappers for the test case. One works on traditional event positioning and the other with GTIDs. The former keeps the original test name, the latter gets “_gtid_” injected into it just after “rpl”. For example, rpl.rpl_binlog_errors becomes rpl.rpl_gtid_binlog_errors.

    Both wrappers source the original test case, which is moved into an include file (keeping the original name, with the extension renamed from .test to .inc).

  • Solution to C2: These tests are not supported, so they were simply added to the skip-test-list when GTID_MODE=ON.

  • Solution to C3: We simply needed to skip these tests as well.

  • Solution to C4: We will set MASTER_AUTO_POSITION=1 by setting --let $use_gtids=1 before including

    This setting must be automatic in, before calling

    And it should be unset on before calling again, which reverts CHANGE MASTER to MASTER_AUTO_POSITION=0

  • Apart from these more recurring challenges, we should also deprecate --sync_slave_with_master (and similar options) and only use “include/”, which handles both the legacy file-position-based and the GTID-based replication protocols.
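As an illustration of the wrapper naming convention described above (the helper below is hypothetical, not part of MTR itself), deriving the GTID wrapper name from a traditional rpl test name might look like:

```python
def gtid_wrapper_name(test_name):
    """Derive the GTID wrapper name for an rpl test by injecting
    '_gtid' right after the leading 'rpl' of the test file name.
    Hypothetical helper -- MTR has no such built-in function."""
    suite, _, test = test_name.partition(".")
    if not test.startswith("rpl_"):
        raise ValueError("not an rpl test: %s" % test_name)
    return "%s.%s" % (suite, test.replace("rpl_", "rpl_gtid_", 1))

print(gtid_wrapper_name("rpl.rpl_binlog_errors"))  # rpl.rpl_gtid_binlog_errors
```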


Having solved the above challenges, we now run the MTR rpl suite with GTID_MODE=ON on a daily basis. This has greatly improved the test coverage, thus allowing us to identify GTID related problems well before they are pushed to the working release branch.

If you have any questions or feedback regarding this project, please post them here. I would love to hear what the community thinks about all of this.

MySQL 5.7 and GIS, an Example

This post will offer a very simple demonstration of how you can use MySQL 5.7 for Spatial features within your applications. In order to demonstrate this, I’ll walk through how we can determine the 10 closest Thai restaurants to a particular location. For this example, we’ll be using the apartment that I lived in when I first started working at MySQL, back in 2003.

For more details on all of the new GIS related work that we’ve done in MySQL 5.7, please read through these blog posts from the developers:

Installing MySQL
Let’s first install the MySQL 5.7 labs release that contains all of the refactored GIS functions, as well as the R-tree based InnoDB Spatial Indexes. For the first time in MySQL’s history, you can now use an ACID/MVCC engine for geospatial data! It’s a bit hard for me to contain my excitement on that particular point. :)

Importing Open Street Map Data
We can now use a customized version of the Open Street Map (OSM) MySQL data import scripts, which will additionally generate a geometry column from the “longitude,latitude” coordinate pairs, and create a new InnoDB spatial index on it.

You can download a copy of my customized scripts here.

We can then use these scripts to load a NYC area OSM data extract, using these steps:

The resulting nodes table looks like this after the initial data load:

You can read more about playing with the resulting OSM data here.

Now we can also de-normalize the data a bit in order to avoid unnecessary and heavy subquery usage, and instead make use of the new (in MySQL 5.6) fulltext indexing feature in InnoDB. To do that, we can load all of the nodetags into a single column for each node this way:

You can download a dump of the final schema that I ended up with here.

Creating Distance Calculation Functions
Next we can create a new function to calculate the geographical distance between two points (“longitude,latitude” coordinate pairs, or nodes) using the spherical law of cosines (SLC) formula for determining the “Great-circle distance”:
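The SQL function itself isn’t shown here, but the spherical law of cosines calculation it performs can be sketched in Python (the function name is mine, and a mean earth radius of 6371 km is assumed):

```python
import math

def slc_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the spherical law of cosines,
    using a mean earth radius of 6371 km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Clamp to [-1, 1] to guard against rounding errors before acos().
    c = min(1.0, max(-1.0, math.sin(p1) * math.sin(p2)
                           + math.cos(p1) * math.cos(p2) * math.cos(dlon)))
    return 6371.0 * math.acos(c)
```

One degree of latitude works out to roughly 111 km with this formula, which matches the rule of thumb the post uses for the bounding box below.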

This is necessary because with Geometry types, the ST_distance and related functions only return degrees (which isn’t really helpful for us here), as they’re dealing with points in a two-dimensional Cartesian plane (Euclidean geometry), and we have to manually calculate these distances on an earth-like spheroid. When MySQL has Geography type support, the need for haversine or SLC functions will go away and we can simply use ST_distance and similar functions to calculate the actual distance on the surface of the earth between two geographic points.

Creating the Bounding Rectangle for Our Spatial Index
Again, as a starting point in our search for Thai restaurants, I’m going to be using the location of my apartment (40.716743,-73.951368) at the time that I first started working at MySQL.

For creating the envelope or bounding box around that location, so that we can take advantage of the spatial index, we can use the average distance of 111 km per degree of longitude and latitude. That figure is pretty close for latitude, but varies much more for longitude. If you want the bounding box or envelope to be more accurate, you can use abs(cos(radians(${latitude}))*111) for the longitude calculations. In our case, though, we’re only using it as an envelope to push down to the spatial index (R-tree) for the Minimum Bounding Rectangle (MBR), so we could instead just increase the size of the envelope and keep things simple, since we’re still calculating the actual distance later (using the SLC function), which is what we care about in the end. Thus the calculation would be something like:

Or, if you want to have a more precise envelope/MBR:
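In Python terms (illustrative only; the helper name and variable names are mine), the simple 111 km rule of thumb and the latitude-corrected variant for the envelope’s half-widths might look like:

```python
import math

def envelope_deltas(lat, km, precise=False):
    """Half-widths, in degrees, of a bounding box extending `km`
    kilometres around a point at latitude `lat`.

    Simple variant: treat one degree of latitude *and* longitude
    as ~111 km. Precise variant: scale the longitude degree by
    cos(latitude), as the post describes."""
    lat_delta = km / 111.0
    if precise:
        lon_delta = km / (abs(math.cos(math.radians(lat))) * 111.0)
    else:
        lon_delta = km / 111.0
    return lat_delta, lon_delta
```

At the apartment’s latitude (~40.7°N) a degree of longitude spans only about 84 km, so the precise variant yields a wider longitude half-width than the simple one.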

Our Final Spatial Query
So now let’s use that (simpler) MBR formula along with our SLC function to try and find out what the 10 closest Thai restaurants are to my old place, using the spatial index to weed out anything that’s not within an envelope that covers approximately 20 square km around it:

OK, we got some values, and thanks to a combination of the spatial index and the full text index, we got them very quickly!

Examining the Results
OK, now on to the data. Let’s assume that we simply want the closest one, and examine that record:

So it’s about 600 meters from my old place, its name is Tai Thai, and its phone number is 718-599-5556. Let’s do a quick sanity check using google maps: directions, and location info. As we can see, our data matches up pretty well with Google maps (which I consider to be the gold standard today).

Now I know that there are many other Thai restaurants in the area, but they simply don’t have any records in OSM currently (at least in the extracted snapshot for the NYC area that I’m working with). As always, your data is key. :)

Presenting the Data
Now we know what our data is telling us are the closest 10 Thai restaurants. We also have relevant information to present to the application user about each place: name, address, phone number, website, etc. So let’s look at a very simple example of how we can push these new locations out to google maps so that we can see each one on a map of the city, to go along with all of the metadata that we have for them. The simplest way is to feed the “latitude, longitude” coordinate pairs into google maps. Let’s try generating a URL to map my old place to the next closest Thai place. Again, the “latitude, longitude” coordinates for my old apartment are “40.716743, -73.951368” and the coordinates for the next closest Thai place, One More Thai, are “40.7210541, -73.983871”. We can plug these into a simple GET call like this:,+-73.951368/40.7210541,+-73.983871. Or we can use the embedded API like this:

As you can see it’s a pretty easy GET call either way, passing in the coordinate pairs in the given format. You can read more about the google maps APIs here.

Hopefully this has opened your eyes a bit to what’s possible with MySQL 5.7 in the GIS world, and whetted your appetite for jumping in and playing with it!

Lastly, a big thank you to all of the developers who worked so hard on these new features! Please stay tuned in here for additional features that truly make MySQL a leading player in the geospatial market.

The MySQL Optimizer Cost Model Project

You may not be aware of this but the foundation that the MySQL optimizer builds on when choosing a query plan – the cost model – is for the most part very old. At least in tech terms.

Much of it was written in another millennium, at a time when “Forrest Gump” and “Titanic” won Oscars and “Baywatch” was the big thing on TV. Although the revision history doesn’t go that far back, it wouldn’t come as a surprise if it predates that annoying “Macarena” song and even “The Sign” (Ace of Base) – don’t follow those links unless you’re feeling very brave…

Thankfully, a lot has happened since Ace of Base. Modern CPUs have three orders of magnitude more transistors than the Pentium had. A cheap laptop has a bigger memory size than desktop hard drive sizes back then. And then there’s new kinds of hardware like SSDs and flash and whatnot.

In the end, it boils down to this: The foundation of the MySQL optimizer – the cost model – is in dire need of refactoring because it no longer models the hardware MySQL runs on in a convincing way. This is something the MySQL optimizer engineers have to deal with, and that’s why there is an ongoing project simply called “The Cost Model Project”.

The Cost Model Project

We want to give a heads up on what we’re discussing and/or working on in this project, and here it is. :-)

The following are considered by the MySQL optimizer team to be the most important limitations of the current cost model, and some thoughts on how they can best be solved:

  1. The “cost” in MySQL is an abstract number closely related to the cost of doing a random read from a HDD. At the time, this was an obvious choice since the main goal was to efficiently utilize the disk bandwidth. In the future we would probably like to relate cost to something else that makes more sense to users, e.g. wall clock time.
  2. There is a hard-coded relationship between CPU cost and IO cost. This relationship wasn’t half bad 15 or 20 years ago, but CPU speeds have improved considerably more than IO speeds since then. This will probably be fixed by making costs configurable and in a way that can better model future hardware when that time comes.
  3. IO is assumed to be HDD in the model. There is no concept of SSDs, flash drives, or FusionIO; e.g., some of the IO optimizations are only relevant for spinning disks. To fix this, we need to make the cost model aware of the underlying hardware, and different kinds of hardware will need different configurations.
  4. The optimizer has no idea how much of a table (or index) has been cached. It is typically assumed that every query has to read all rows from HDD. Since memory gets cheaper at an amazing rate, more and more data can be cached. This, in turn, affects the cost of row fetch operations. The MySQL optimizer has to get info about caching from the storage engines, but we also have to modify the cost model to make use of the new information.
  5. Before the InnoDB and MySQL server teams were joined, InnoDB didn’t have a lot of influence on the MySQL optimizer development. Instead of getting incorrect cost calculations in MySQL fixed, InnoDB had to hack their own code to get reasonable execution plans. However, two wrongs don’t make a right, so it’s better to remove the hacks and fix the cost model instead.
  6. Index statistics have far too low precision for indexes with good selectivity. Currently, there is no way to distinguish the statistics of an index that has an average of 1.9 rows per lookup value from those of one that has 1.1 rows per lookup value. The fix is, of course, to improve the precision of the statistics. Some traces of this are already in the 5.7.4 milestone release, and you can read more in worklog 7338.
  7. The cost model would benefit from additional and improved statistics, both on indexed and non-indexed columns. Simple statistics for non-indexed columns similar to what we currently have for indexes, histograms and correlation histograms could all play their valuable part, but of course we’ll have to prioritize.
  8. The number of rows from a table that will be joined with the next table only takes into account the conditions used by the access method. This will often lead to very wrong cost estimates. The fix was to implement condition filtering; it is already in the latest 5.7 lab release and was discussed in two earlier blog posts (part1, part2). Also note that better statistics would improve the precision of this work.
  9. Cost calculations are spread out all over the place and duplicated extensively. If we’re ever going to get consistency and maintainability into the cost model we need to refactor the code and hide calculations behind APIs. Before we get there, the cost model will be extremely expensive and error prone to modify because there are so many lines of code to change. Again, some traces of this can be seen in the 5.7.4 milestone, and by reading worklog 7182 and worklog 7209.
  10. Add networking cost to the model so that MySQL Cluster can get proper cost calculations.
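To make limitation 2 concrete, here is a toy illustration (the cost function, the constants, and the plan numbers are all invented for this sketch, and are not MySQL’s actual cost model) of how a configurable CPU/IO cost ratio can flip the optimizer’s choice between a full table scan and an index range scan:

```python
def plan_cost(pages_read, rows_evaluated, io_cost=1.0, cpu_cost=0.01):
    """Toy cost function: IO cost per page read plus CPU cost per
    row evaluated. Constants are illustrative, not MySQL's."""
    return pages_read * io_cost + rows_evaluated * cpu_cost

# Full scan: 1,000 sequential pages, 100,000 rows evaluated.
# Index range scan: 2,000 random page reads, 2,000 rows evaluated.
hdd_full = plan_cost(1000, 100000, io_cost=1.0)   # 2000.0
hdd_index = plan_cost(2000, 2000, io_cost=1.0)    # 2020.0 -> full scan wins
ssd_full = plan_cost(1000, 100000, io_cost=0.1)   # 1100.0
ssd_index = plan_cost(2000, 2000, io_cost=0.1)    # 220.0  -> index wins
```

With a hard-coded HDD-era ratio, the full scan looks cheaper; once IO becomes cheap relative to CPU (as on SSDs), the index plan should win, which is exactly why these constants need to be configurable.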

Finally, with all of this major refactoring, you will also need the ability to provide a wide variety of additional Optimizer Hints in order to get a specific execution plan in various cases. But that is a topic for another blog. :)

Please let us know if you have any suggestions or comments on the project. We’d love to hear from you!

Making Use of Boost Geometry in MySQL GIS

This post talks about how we make use of Boost.Geometry in MySQL to implement reliable and efficient GIS functionality, as well as changes to the GIS features in the lab release.


This article assumes the reader knows about the basic geometry database concepts defined by the OGC. That includes WKT, WKB, the 8 kinds of spatial relationship checks—contains, within, intersects, disjoint, crosses, touches, overlaps, and equals—along with the 4 types of spatial set operations—intersection, union, difference, and symdifference. You can find a list of OGC Simple Feature Access standards, along with additional information on the aforementioned topics here.

Terms & Abbreviations

BG: Boost.Geometry
CCW: Counter-clockwise
CW: Clockwise
GIS: Geographic Information System
OGC: Open Geospatial Consortium
SRID: Spatial Reference Identifier
WKB: Well Known Binary geometry representation
WKT: Well Known Text geometry representation

Now that InnoDB is the default storage engine for MySQL, our user base is rapidly transitioning to InnoDB. One capability that they have been demanding is a performant and scalable GIS implementation. Along with adding R-tree index support to InnoDB, we also decided to replace the original GIS algorithms with a more powerful, reliable, effective and efficient geometric engine. After many comparisons of the available open source geometry engines on multiple aspects, BG was finally chosen. Please see here for additional information on how we came to choose Boost.Geometry.

So far the implementation covered in this article is only in the “April” MySQL lab release, it is not available in a MySQL 5.7 DMR just yet. In the lab release, what has changed is the internal geometric algorithms used for GIS computations in the spatial relation check functions (e.g. st_contains, etc.) and the spatial set operations (e.g. st_intersection, etc.). They are now implemented using Boost.Geometry, and these GIS functions are expected to work more reliably, precisely, and efficiently than previous versions of MySQL. This also means that MySQL now depends on the Boost library. So far we only depend specifically on Boost.Geometry, but users will still need the complete Boost library in order to build the MySQL lab release. For additional information on how to build MySQL with Boost.Geometry, please see here.

Also, now unclosed polygon rings coming from user data are automatically closed before they are stored or provided to other GIS functions, and this is backward compatible in terms of historical user data because previous versions of MySQL simply didn’t accept polygons with open rings. Additionally, now binary GIS functions require arguments with identical SRID values; this allows for future work where we will support the Geography type, and standard spatial reference systems like WGS 84. And finally, users can also now use spatial indexes with the InnoDB storage engine for better query performance. Other aspects of MySQL GIS remain the same as in previous versions to end users, including the GEOMETRY data format, and all the spatial functions.


Boost.Geometry is an excellent geometric engine which can perform many kinds of geometric computations on standard OGC-defined geometries. It has good performance and reliability, and it has an active and vibrant development community. As great as BG is though, we couldn’t make use of it very easily in MySQL, mainly because of the following challenges:

1. BG currently only reads WKT geometry data, and outputs geometry results as WKT as well. But in MySQL GIS, a geometry’s internal data format is the GEOMETRY format, which is a blob consisting of an SRID (4 byte unsigned integer) followed by its WKB byte string, and all the integers and doubles inside the GEOMETRY blob are little endian. This format is both the durable storage format within the storage engines, and the in-memory format when a geometry object is passed around Item_xxx nodes during query execution. We don’t want to be constantly doing conversions between WKT and WKB when passing arguments to BG functions, and when receiving results from them. Instead (for performance reasons) we want to always operate in GEOMETRY format everywhere, including within BG, and without doing any unnecessary conversions or duplication of data.
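The GEOMETRY layout described above (a 4-byte little-endian SRID followed by the little-endian WKB) can be illustrated for a 2D POINT with a short Python sketch; the helper names are mine, while the WKB codes are standard (byte-order byte 1 = little endian, geometry type 1 = Point):

```python
import struct

def pack_geometry_point(srid, x, y):
    """Build a MySQL-style GEOMETRY blob for a 2D point:
    4-byte LE SRID + WKB (byte-order byte, type=1, two LE doubles)."""
    return struct.pack("<I", srid) + struct.pack("<BIdd", 1, 1, x, y)

def unpack_geometry_point(blob):
    srid = struct.unpack_from("<I", blob, 0)[0]
    byte_order, wkb_type, x, y = struct.unpack_from("<BIdd", blob, 4)
    assert byte_order == 1 and wkb_type == 1  # little-endian Point
    return srid, x, y

blob = pack_geometry_point(0, -73.951368, 40.716743)
assert len(blob) == 25  # 4 (SRID) + 5 (WKB header) + 16 (two doubles)
```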

2. BG uses exceptions to indicate errors to its caller/user. MySQL on the other hand has not historically used C++ exceptions.

3. BG uses a generic programming paradigm, just as the entire Boost library does. BG is mostly composed of function templates and class templates, so we cannot rely on dynamic polymorphism here; instead, we have to statically dispatch geometry objects of each type to the right instantiation of each BG spatial function.

4. BG doesn’t have a ‘geometry collection’ type, which is defined by the OGC, so we had to implement support for geometry collections in MySQL using BG’s existing features.

5. BG doesn’t support all types of arguments in some spatial functions, so we had to adapt to this in the short term (until we can make further improvements to BG itself, and share that upstream).

And finally, we had to merge the support for BG into the existing MySQL GIS implementation, by keeping the useful and good components of the original implementation, while removing the old geometric algorithms.


How were we able to meet these challenges and make use of BG in an efficient and compatible (with MySQL) way?

For challenge #1, we expanded the existing geometry classes so that they can be used by BG.

BG defines a clean interface for geometry data in the form of concepts. Each type of geometry is defined as a concept, so we have point, linestring, polygon, multipoint, multilinestring and multipolygon concepts in BG. See here for additional details.

All BG algorithms manipulate geometry data via these concepts, so that not only BG’s own model of these geometry concepts can be used, but also any other existing GIS implementation can make use of BG algorithms as long as it implements these geometry concepts properly. Each BG algorithm is a function template, and BG uses traits or meta functions to access the type information as well as interface function templates defined for the function’s type argument. An implementation of such concepts would need to implement the traits classes, as well as defining a few member function templates for existing geometry classes.

We do so in MySQL by expanding our existing geometry classes, namely Gis_point, Gis_line_string, Gis_polygon, Gis_multi_point, Gis_multi_line_string, and Gis_multi_polygon. First, we needed to add a few member functions to Gis_point and Gis_polygon. Then we needed to define a few traits classes for all of the geometry classes, as required by the concepts above; see gis_bg_traits.h for the traits definitions we use. Also, BG’s linestring, multipoint, multilinestring and multipolygon concepts all use the Boost.Range concept, so we needed to implement a model of the Boost.Range concept as well. We therefore implemented a class template called Gis_wkb_vector, whose instantiations are now the new parent classes for the Gis_line_string (Gis_wkb_vector), Gis_multi_point (Gis_wkb_vector), Gis_multi_line_string (Gis_wkb_vector), and Gis_multi_polygon (Gis_wkb_vector) classes; the bracketed type is the parent class for each. The code for the Gis_wkb_vector class template can be found in gis_wkb_vector.h.

As before, the Geometry class is the root base class for all of the concrete geometry classes mentioned above. All data members are defined there, so that all geometry instances are of equal size. In the Geometry class, we now hold the WKB buffer address and length, along with other geometric information about the geometry instance. The Geometry class also now holds a vector of the geometry’s components; for example, a linestring’s components are points. We hold the components for fast access, which is required by BG algorithms. The point and polygon concepts are different: as defined by BG, they don’t use the Boost.Range concept. Beyond that, BG assumes that a polygon consists of an outer ring object and an ‘inner rings’ object, which contains a vector of inner rings (BG treats a ring as distinct from a linestring).

The Gis_wkb_vector class template is the key interface between BG and MySQL. It is used by BG to read WKB data from instances of the Gis_xxx classes from MySQL (read-only mode), to write WKB data from BG into instances of these classes (write mode), and to read geometry data from them (read-write mode). BG also uses such instances during geometric computation in its various internal algorithms, in read-only and/or read-write mode. The guiding principle in designing this class template is performance: we want to avoid unnecessary data copies and conversions, as well as repetitive WKB parsing. Note that WKB data tends to be large, often a few dozen to a few hundred bytes for a single geometry, or even larger. A standalone 2D point is 25 bytes (SRID of 4 bytes, WKB header of 5 bytes, and 2 double precision coordinates of 8 bytes each); a 2D linestring containing 5 points takes 93 bytes. In read-only mode, no WKB data is copied; in read-write mode, a copy is only done at WKB buffer reallocation, when a WKB buffer isn’t large enough to fit more data. The WKB data is always kept as a single valid WKB byte string, to avoid reassembling it when handing the geometry result back to MySQL. WKB data elements are of different lengths and are often nested (imagine a multilinestring’s WKB data, for example), which makes read and write access less than straightforward, as detailed below.
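The two blob sizes quoted above can be checked with a little arithmetic (4-byte SRID prefix, 5-byte WKB header, 16 bytes per 2D point, plus a 4-byte point count for a linestring):

```python
SRID = 4            # 4-byte SRID prefix
WKB_HEADER = 1 + 4  # byte-order byte + 4-byte geometry type
POINT_2D = 2 * 8    # two double-precision coordinates

def point_blob_size():
    """Size of a standalone 2D point GEOMETRY blob."""
    return SRID + WKB_HEADER + POINT_2D

def linestring_blob_size(num_points):
    """Size of a 2D linestring GEOMETRY blob: header plus a
    4-byte point count plus the points themselves."""
    return SRID + WKB_HEADER + 4 + num_points * POINT_2D

assert point_blob_size() == 25       # as quoted in the text
assert linestring_blob_size(5) == 93 # as quoted in the text
```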

When used in read-only mode, a Gis_wkb_vector instance is created from an existing WKB buffer, which often comes from MySQL’s Item_xxx::val_str. The WKB is parsed once so that the internal structure is set up. For example, given the WKB of a multilinestring X, in Gis_multi_line_string’s constructor X’s WKB is parsed and all linestrings within X are stored into X.m_geo_vect, which is a vector of Geometry objects. Recursively, in each of the linestring objects (i.e. a Gis_line_string object L), the Gis_point objects are stored into L.m_geo_vect. The Gis_multi_line_string X is ultimately constructed as a tree of Geometry objects, where the root node is X, X has a few Gis_line_string child nodes, and each of those has some Gis_point nodes. Each of the nodes refers to a different position within the same WKB buffer of X. No WKB data is copied in read-only mode; all these Gis_xxx objects do is read-only access to the WKB data (often used as input arguments to BG algorithms).

In read-write mode, a geometry object is created empty, then geometry data is written into the object piece by piece, via Gis_wkb_vector::push_back, Gis_wkb_vector::resize and Gis_point::set functions (by BG algorithms or MySQL GIS code). These functions are the only ways needed by BG, and supported in our implementation, to modify a geometry. Why is ‘resize’ also used?
The iterators of the Boost.Range concept are read-only in BG, except when the range contains all points. A read-only iterator is never used to modify/delete an existing element in the range. Typically BG writes data into a Boost.Range instance via the ‘push_back’ function, which appends a geometry object into the range, and geometry components already in a range are never updated unless the range is a point range. A point range can be updated via an iterator at any position of the range, and this is OK since points are of equal size and have no components. In our implementation, besides the geometry object append, we also append its WKB data into the owner’s WKB buffer and increment the ‘geometry count’ field in the WKB byte string, so that all components always refer to different positions of the same piece of the WKB buffer.
When the geometry object is completely created, its WKB data can be directly used without reassembling the WKB pieces of the geometry components by MySQL, to pass to other Item_xxx nodes during query execution.

The polygon concept is an exception to the ‘single WKB buffer, no reassembling’ rule, because BG assumes a polygon has an outer ring and an ‘inner rings’ object. And BG modifies them separately without any defined order, which means we have to keep the two geometry objects’ WKB buffers separate, each maintaining its own WKB buffer. Before handing over the polygon to MySQL, we have to reassemble the polygon’s two parts into a single WKB buffer.

BG also updates the inner rings of a polygon differently — it has to first resize the ‘inner rings’ object by appending exactly one empty ring into it, then call ‘push_back’ to append points into the newly appended ring. That’s why Gis_wkb_vector::resize may also be used in an update, and we must always make the polygon’s inner rings use one piece of a WKB buffer.

The implementation of the above interface also ensures that BG gets integers/doubles of local endianness, and BG writes such data elements into a geometry’s WKB buffer as little endian, which is MySQL’s portable format for all data.

With the above design and implementation, support for BG is merged into the existing MySQL GIS architecture. We use WKB/GEOMETRY data internally everywhere, we use the same group of Geometry classes whose instances can be used both by BG and by MySQL seamlessly, and BG algorithms get geometry data from our Geometry objects via BG’s defined interface.

The rest of the challenges are easily resolved when we have everything above.

For #2, we catch exceptions in each Item_xxx::val_xxx function where BG functions are called, to make sure any exception that can be thrown by BG is caught and properly handled using the MySQL error handling facilities. We use std::auto_ptr to do resource management in contexts where exceptions may be thrown (this latest change isn’t pushed to this lab release though).

I’d also like to take this opportunity to talk a bit more about error handling in MySQL GIS regarding user data validation. Firstly, we now have more robust checks for invalid GEOMETRY/WKB data input, so that the server is able to handle almost any invalid binary input properly, by rejecting such data. Also, BG’s polygon concept requires a defined ring orientation and closedness; we use counter-clockwise (CCW) orientation and closed rings, which means that the outer ring must be CCW, inner rings (if any) must be clockwise, and all rings must be closed. If the input polygon data (in the form of WKT or WKB) isn’t like this, it is automatically reversed on the fly and closed permanently (in the permanently stored data within tables, the rings are not reversed, but they are closed). This means the polygons stored in tables may not be literally (in WKB/WKT form) identical to the input data, although they are equivalent. Historical data will work correctly, and unmodified, because older versions of MySQL don’t accept polygons with open rings, so all polygons in existing data already have closed rings. This normalization also means users get pre-normalized polygon rings without spending extra CPU cycles normalizing them at query time. The check for ring orientation can also identify some invalid rings as a side effect, e.g. some of those with spikes, or polygon rings which degrade to linestrings.
Note that although users can provide GEOMETRY data directly to MySQL, we don’t internally close rings of polygons provided in the form of a GEOMETRY format/object (the rings will be reversed on the fly, just as with WKT/WKB data), nor do we do any endianness conversions, because the GEOMETRY format is MySQL’s internal geometry data format and we assume it is valid in every way. If users want to use this format directly, then they are required to provide valid data from clients, otherwise GIS algorithms may not work correctly.
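The orientation and closedness rules described above can be sketched with the classic shoelace (signed-area) test. This is an illustration of the idea only, not MySQL’s or BG’s actual code, and the helper names are mine:

```python
def signed_area(ring):
    """Shoelace formula over a closed ring of (x, y) pairs;
    positive for counter-clockwise (CCW) orientation."""
    return 0.5 * sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(ring, ring[1:]))

def normalize_ring(ring, want_ccw=True):
    """Close the ring if it is open, then reverse it if its
    orientation does not match the requested one."""
    if ring[0] != ring[-1]:
        ring = ring + [ring[0]]   # close an open ring
    if (signed_area(ring) > 0) != want_ccw:
        ring = ring[::-1]         # flip orientation
    return ring

# An open, clockwise square: it gets closed and reversed to CCW,
# as an outer ring would be under the rules described above.
outer = normalize_ring([(0, 0), (0, 5), (5, 5), (5, 0)], want_ccw=True)
```

Inner rings would be normalized the same way with want_ccw=False, matching the outer-CCW / inner-CW convention the post describes.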

Finally, if a user explicitly specifies SRID values when creating geometries, we now require that the two arguments of a binary GIS function have identical SRIDs. Since SRID usage isn’t effective yet, it’s advised that users don’t explicitly specify any SRID values. Again, the SRID usage here is really to support future work, when we add support for Geography types and standard spatial coordinate systems like WGS 84. If you specify an SRID value, its maximum value is 0xFFFFFFFF; larger values are truncated to their lower 4 bytes, interpreted as a 32-bit unsigned integer SRID value.

For #3, we have to do type-specific dispatch to call the right version of the BG functions: all BG algorithms used so far in MySQL are function templates which require input arguments of an exact geometry type, and we can’t pass a Geometry pointer directly into such functions. Instead, we have to create a geometry object of the specific type from the WKB data and pass it to the BG functions according to its geometric type.

For #4, we have implemented support for geometry collections using existing features in BG according to OGC defined semantics for each spatial operation. Geometry collection support is very important because the set operations, i.e. intersection/union/difference/symdifference, will often produce geometry collections. If geometry collections were not supported, users would not generally be able to call GIS functions in any nested way, e.g. this query: "select st_within(st_union(GeomFromText('point(1 1)'), GeomFromText('linestring(3 3, 4 4)')), GeomFromText('polygon((0 0, 5 0, 5 5, 0 5, 0 0))'))" would not be supported without geometry collection support. The detailed algorithms behind such support won’t be covered in this article, but they may be covered in a separate article in the future.

For #5, we have dedicated BG developers to implement missing features in BG, which will also be contributed back upstream and included in future BG releases. Before all such new features are available, we have to fall back to the old MySQL GIS algorithms for geometric computation when BG doesn’t support a specific type or type combination. For example, currently BG can’t do LINESTRING WITHIN LINESTRING computations, and when such a computation is requested by a query, the old GIS algorithms are used. This also means that although many old GIS bugs are gone now, some will remain for a while. When all the missing features are available in BG, we can remove the old GIS algorithms entirely, and the remaining bugs caused by them are expected to disappear completely. Our goal is to cover all seven geometry types (as defined by OGC) in all unary geometry functions, and all type combinations (7*7=49) in all binary geometry functions, so that any nested GIS query can be executed.

Note that for some binary GIS functions, some type combinations are inapplicable or not supported, and we return false or NULL respectively for such cases.

Undefined/inapplicable type combinations for each function are:

  • overlap: any combination of two argument types of different dimensions
  • crosses: any of the two argument types is point or multipoint
  • touches: the 1st argument is polygon or multipolygon, AND/OR the 2nd argument is point or multipoint

Unsupported type combinations for each function are: crosses/overlaps/touches with one or both arguments being geometry collections.

Finally, there are some type combinations for some geometry functions that are not directly supported by BG but can be easily implemented using BG’s existing features, e.g. multipoint DISJOINT multipoint. For such cases we have implemented them in MySQL, and BG won’t have to support them directly.
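As an illustration (assuming the ST_* function spellings), the multipoint DISJOINT multipoint combination mentioned above can be exercised like this:

```sql
SELECT ST_Disjoint(
    ST_GeomFromText('MULTIPOINT(1 1, 2 2)'),
    ST_GeomFromText('MULTIPOINT(3 3, 4 4)'));
-- returns 1: the two multipoints share no points
```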

Using all of the above strategies, we are now able to make use of BG efficiently and reliably in MySQL, while also reusing as much of the historical MySQL GIS code as possible, in order to avoid reinventing the wheel.

MySQL 5.7.4: Change master without stopping the slave altogether

At MySQL, we have been working on simplifying the failover process,
making it faster, more flexible, and easier to use. In MySQL 5.6 we added
support for Global Transaction Identifiers (GTIDs), which was a huge leap in the
direction of easing the failover process by hiding the details about
replication logs and positions. With MySQL 5.7.4, we are introducing a
new feature that adds further flexibility and online operation: the user
only needs to shut down the components that are to be re-configured.

What this new feature allows is executing the CHANGE MASTER TO
command without stopping the slave altogether. We realized that stopping
the slave altogether is not mandatory in all cases; doing so was more
of a cautious approach to switching masters, restricting more than what is
required at times.

Let's dive deep into this to see what can be relaxed here. For this, let's
break the replication process into two modules:

M1) Receiver module (concerned with the IO thread), and
M2) Applier module (concerned with the SQL thread, or the coordinator and
worker threads, as the case may be)

We can now divide the options of the CHANGE MASTER TO command into
three groups based on the above classification:

G1) Options that change a receiver configuration.
G2) Options that change an applier configuration.
G3) Options that relate to both (G1) and (G2).

For the precise division, look at the picture below. Note that the illustration takes into account all the CHANGE MASTER TO options present currently (MySQL-5.7.4).



Note that given its current usage, we could put the MASTER_AUTO_POSITION
option under group G1 (i.e., the receive side). Currently only the receiver
module uses GTID positioning, but we foresee that in the future it will be
good to allow the applier module to use GTID positioning as well. We thus keep
the MASTER_AUTO_POSITION option under group G3 to keep things
future-proof. Worried that this obstructs the failover process again like
before? That's not a problem, as MASTER_AUTO_POSITION is a slave
configuration that you only set once. It then affects all future times
that you redirect to a new immediate master, so you don't need to specify
it on every failover.

With the classifications stated above, we propose a 3-point rule stated as:

R1) For CHANGE MASTER TO options under group G1, stop only the receiver
module (M1) using the STOP SLAVE IO_THREAD command.
R2) For CHANGE MASTER TO options under group G2, stop only the applier
module (M2) using the STOP SLAVE SQL_THREAD command.
R3) For CHANGE MASTER TO options under group G3, stop both the receiver (M1)
and applier (M2) modules using the STOP SLAVE command.


Let's explore our 3-point rule (R1-R3) in more detail:

  • Starting with rule R1, we stated that we only need to stop the
    receiver thread to change receive options. What happens to the applier
    module? Well, the applier module keeps applying pending transactions,
    if any. If you have a situation where the slave was lagging behind with
    a lot of transactions queued in the slave's logs, you can allow the
    applier module to catch up while you switch masters, or change a
    configuration on the receive side while keeping the same master.
  • Under rule R2, we stated that we can change the configuration of the
    applier module after stopping the applier threads ONLY; the receiver
    module can keep running while you do this. So while you fine-tune your
    slave applier module, the receiver module keeps reading the master's
    log and copying transactions to the slave's log. These can then be
    applied in parallel by the slave once the applier module is up and
    running again.
  • Under rule R3, you stop both the receiver and applier modules. So, this
    is analogous to executing
    CHANGE MASTER TO <master_def>;
    the way it was done before this feature was available.

Worried about how relay log purging is handled now? It's pretty
simple: under rules R1 and R2, we do not purge logs implicitly on
executing the CHANGE MASTER command, so that the receiver or applier,
whichever is running, just keeps processing/adding relay logs as it would
if no replication thread had been stopped.

Finally, note that you need not always consult the figure above to find
which options are allowed with which thread stopped. You just need to
ask yourself whether the parameter relates to the receiver or the applier,
and stop the concerned thread. And if you go wrong, there are error
messages to guide you onto the right path. Look at the next section for
usage examples.


example 1:

Previously, to change the master heartbeat period, you had to stop the slave completely.
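A sketch of the old sequence, with an illustrative heartbeat value:

```sql
-- Old approach: stop the whole slave just to change a receive-side option.
STOP SLAVE;
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 10;  -- illustrative value
START SLAVE;
```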


Now, with this feature, you just have to stop the receiver (IO) thread, as
the heartbeat has nothing to do with the applier thread(s).
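A sketch of the narrower sequence, again with an illustrative heartbeat value:

```sql
-- Only the receiver thread needs to stop for a G1 (receive-side) option.
STOP SLAVE IO_THREAD;
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 10;  -- illustrative value
START SLAVE IO_THREAD;
```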


Note that the applier thread keeps executing the transactions in the
relay log while you change the heartbeat period for the master-slave
connection. Likewise, you could do this with all the attributes mentioned in group G1 in the figure above.

example 2:

Similarly, to change applier thread attributes, you just have to stop
the applier threads.

So instead of
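Taking the applier-side option MASTER_DELAY as an illustrative example, the old sequence was:

```sql
STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 3600;  -- illustrative applier-side option
START SLAVE;
```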


it is enough to do the following with this feature.
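For example, with the same illustrative MASTER_DELAY option:

```sql
-- Only the applier threads need to stop for a G2 (applier-side) option.
STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY = 3600;  -- illustrative applier-side option
START SLAVE SQL_THREAD;
```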


Lastly, if you go wrong, there are nicely worded error messages to guide
you. So in the first case, if your receiver module is active and you
execute a CHANGE MASTER TO command with a receive-side option,
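for instance a receive-side change attempted while the receiver is still running (heartbeat value illustrative):

```sql
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 10;
```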


you get an error saying:

This operation cannot be performed with a running slave io thread; run STOP SLAVE IO_THREAD first.

and if you forgot to stop the applier module when it was required, the
server will say:

This operation cannot be performed with a running slave sql thread; run STOP SLAVE SQL_THREAD first.

Lastly, the error message saying that both threads should be stopped now
appears only for the MASTER_AUTO_POSITION option:

This operation cannot be performed with a running slave; run STOP SLAVE first.

Let's see some more examples:

example 3:

ERROR 1900 (HY000): This operation cannot be performed with a running
slave sql thread; run STOP SLAVE SQL_THREAD first

example 4:

ERROR 1904 (HY000): This operation cannot be performed with a running
slave io thread; run STOP SLAVE IO_THREAD first.

example 5:

ERROR 1198 (HY000): This operation cannot be performed with a running
slave; run STOP SLAVE first


While implementing this, we have taken special care to make sure we don't
break anything for a user switching masters like:

CHANGE MASTER TO <master_def>;

There are absolutely NO side-effects to worry about. Note that, as stated
before, CHANGE MASTER TO will not delete relay logs if either the
receiver or the applier thread is running.

Try it out and give us your feedback. As always, we look forward to
hearing from you to improve this feature. Enjoy :)

A new dimension to MySQL query optimizations – part 2

This is "A new dimension to MySQL query optimizations – part 2". If you didn't read part 1 already, I suggest you skim through it before reading on.

To summarize, the problem at hand is this: Given a query with a join between two or more tables, the MySQL optimizer’s mission is to find the best combination of join order and access method so that the response time becomes as low as possible. The optimizer does this by calculating the cost of each combination and then picks the cheapest one.

Consider the following query:
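The original query is elided; a hypothetical join of the shape being discussed (table and column names assumed) would be:

```sql
SELECT *
  FROM table1
  JOIN table2 ON table2.t1_id = table1.id
 WHERE table1.a = 5
   AND table1.b > 10;
```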

The optimizer will calculate the cost of each alternative plan roughly as the cost of accessing table1 plus the estimated number of rows from table1 multiplied by the per-row cost of accessing table2.

As explained in part 1, the problem with this calculation is that the cost of accessing table2 should not be multiplied by the number of rows returned by the chosen access method on table1, but rather by the number of rows in table1 that evaluate to true for all conditions. Up until 5.6, MySQL had this wrong.

And that’s where condition filtering comes in: it provides a much better prefix rows estimate by taking into account not only conditions that are used by the chosen access method but all other relevant conditions as well.

How it works

Before we start with the examples, here are the most important things you need to know:

  • The conditions are investigated for each table, and a condition will contribute to the filtering estimate for that table only if:
    1. it refers to the table at hand, and
    2. the condition depends only on constant values or values from tables earlier in the join sequence, and
    3. the condition is not in use by the access method
  • If a condition contributes to the filtering estimate, the estimate will be based on the range optimizer's analysis, since this is very accurate. If that is not available, index statistics are used instead. If those are not available either, heuristic numbers are used.
  • Conditions are assumed to have no correlation.
  • The condition filter estimate is shown in the filtered column of EXPLAIN as a percentage. While "rows" shows the estimated number of rows fetched by the chosen access method, the prefix rows estimate for the next table is rows multiplied by filtered.
  • Condition filtering is only calculated if it can cause a change of plan. Since it only affects the cost of accessing tables later in the join sequence, it is not calculated for the last table. Thus, by definition it is not calculated for single-table queries. However, there is one exception: it is always calculated for EXPLAIN so that you can see its value.
  • It can be turned on and off with the optimizer_switch flag condition_fanout_filter (set optimizer_switch='condition_fanout_filter=on' etc.).

Examples, please!

In the example queries below, there are:

  • 1024 rows in the employee table
  • 12 rows in the department table
  • 150 rows with hire_date BETWEEN "2012-01-01" AND "2012-06-01"
  • 8 rows with first_name="John"
  • 1 row with first_name="John" AND hire_date BETWEEN …

First, let’s see what it looks like in MySQL 5.6:
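The EXPLAIN output itself is elided; a hypothetical query matching the row statistics above (column names assumed) would be:

```sql
EXPLAIN SELECT * FROM employee
 WHERE first_name = 'John'
   AND hire_date BETWEEN '2012-01-01' AND '2012-06-01';
```

Assume the chosen ref access uses an index over first_name.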

MySQL estimates that it will read 8 rows through ref access. Now let’s try to join with department. MySQL 5.6 now assumes that prefix rows for department is 8 and the chosen access method of department therefore has to be executed 8 times.  However, we already know that the correct number is 1 since there is only one row that matches both conditions. Although we can’t see this from the EXPLAIN, the cost of accessing department is greatly exaggerated because of this.

Now let's take a look at MySQL 5.7. Notice that the prefix rows estimate (rows * filtered = 8 * 16.31% = 1.3) is now much closer to reality. Just like before, 8 in the "rows" column is the estimated number of rows that will be read by ref access, while the new condition filtering information is shown in the "filtered" column. Since first_name="John" is used by the ref access method, 16.31% is the condition filtering effect estimated from the remaining BETWEEN condition. When joined with department, the prefix rows estimate for department is now 1.3 instead of 8. In turn, the cost calculation is much more accurate.

If we force a table scan, none of the conditions are used by the access method and the filtered column is updated accordingly. Now we get rows * filtered = 1024 * 0.12% = 1.23, which is also pretty close to the correct value of 1.

These are of course only basic examples to illustrate how it works. It gets much more interesting once we look at many-table joins, e.g. the queries in DBT-3 that show up to 88% reduction in response time. I might follow up with a part 3 to explain these bigger queries later. In the meantime, you can experiment with your own data by downloading the MySQL 5.7 labs release.

Oh, and by the way: “Condition filtering” is only one of many planned steps towards a new and improved cost model which includes brand new features and a lot of refactoring. There are some subtle traces of this work in the 5.7.4 release; a few new APIs that don’t do much on their own. Stay tuned for more info on this subject!

InnoDB Transparent PageIO Compression

We have released some code in a labs release that does compression at the InnoDB IO layer. Let me answer the most frequently asked question first: it will work on any OS/file system that supports sparse files and has "punch hole" support. It is not specific to FusionIO. However, I've been told by the FusionIO developers that you will get two benefits from FusionIO + NVMFS: no fragmentation issues, and more space savings because of a smaller file system block size. Why the block size matters I will attempt to explain next.

The high level idea is rather simple. Given a 16K page, we compress it using your favorite compression algorithm and write out only the compressed data. After writing out the data we "punch a hole" to release the unused part of the original 16K block back to the file system. Let me illustrate with an example:


D – Data
F – Free
C – Compressed

D, C and F are in 1K units

Original 16K page:  [DDDDDDDDDDDDDDDD]
After compression:  [CCCCFFFFFFFFFFFF]

Write out [CCCC] and then free [FFFFFFFFFFFF] using punch hole. The obvious next questions are:

  1. What if the compression results in data that is > 16K? (Yes, that can happen.)
  2. What is the saving if the data is compressed to, say, 45 bytes from 16K?

For the first case we end up wasting CPU cycles, because we write out the uncompressed (original) contents as is. Given the above example, that would be:

[DDDDDDDDDDDDDDDD]

For the second question there is some extra detail. Writes are always aligned on the file system block size. So, if the original 16K was compressed to 45 bytes and you are running on XFS with a block size of 4K, the compressed data will be rounded up to 4K and the hole will be punched over the remaining 12K.

The labs release works at a system wide level, this means that it will compress all InnoDB tablespaces, including the UNDO logs and the system tablespace. Currently the REDO log is not compressed, it is written out as is.

To get the most out of this feature you need to use the multiple page cleaner functionality that is part of MySQL 5.7.4. There is a new configuration variable:

--innodb-page-cleaners := 1..64

A good starting point is 2 x the number of buffer pool instances. So, why is this parameter important? The compression is done in the page cleaner threads; the more, the better.

For decompression the important parameter is --innodb-read-io-threads. When AIO is used (the labs release only works on Linux), the decompression is done in the AIO threads; again, the more, the better. Since this code is experimental I can't really recommend anything specific. I've been testing with 32 threads, but I also use a very beefy server.

The official release has support for two compression algorithms:

  1. LZ4
  2. ZLib

There is example code that is commented out to demonstrate how you can use LZO and LZMA. To enable these algorithms you will need to build from source and uncomment some lines from storage/innodb/cmake.innodb.

To select the compression algorithm you can use --innodb-compression-algorithm := 0,1,2 where:

0 – None
1 – ZLib
2 – LZ4

You can select the compression level by changing:

--innodb-compression-level := 1..9

For the labs release I’ve also introduced some new configuration variables, mainly for experimenting.

--innodb-read-async := boolean
--innodb-read-block := boolean

The above configuration parameters are dynamic and you can change the algorithm on the fly. Because it works at the page level, you can have a tablespace that has a mix of uncompressed, ZLib and LZ4 pages at the same time and it will all work effortlessly.
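Assuming the system-variable spellings that correspond to the options above, switching on the fly might look like:

```sql
SET GLOBAL innodb_compression_algorithm = 2;  -- 2 = LZ4, per the mapping above
SET GLOBAL innodb_compression_level = 6;      -- illustrative value in 1..9
```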

We like this idea because it will help us simplify the buffer pool code, which in turn will allow us to do more optimizations there more easily.

We think that there is a lot of room for improving on this idea, for example by writing custom compression that exploits the page format.

Server-side SELECT statement timeouts

MySQL 5.7.4 introduces the ability to set server side execution time limits, specified in milliseconds, for top level read-only SELECT statements. This feature is introduced as part of WL#6936. It is based on a contribution submitted by Davi Arnaut with Bug#68252. Thank you, Davi!

The statement timeouts work by interrupting the execution of the statement when it takes longer than a specified number of milliseconds to complete. After the specified number of milliseconds has passed, the server aborts the individual query without affecting the larger transaction or connection contexts. The following error is then reported to the client when the query is aborted:

  1907: Query execution was interrupted, max_statement_time exceeded.

To be clear, the execution time limit specified is really a “soft hint”, because the query interruption may not happen precisely at the specified execution time limit. There can be a minor delay between the timer expiration and the actual query interruption.

A time limit for any SELECT statement run against a MySQL instance can be set by specifying a timeout value in milliseconds for the GLOBAL system variable max_statement_time.

For example:
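A sketch of setting a global one-second limit:

```sql
SET GLOBAL max_statement_time = 1000;  -- milliseconds
```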

Then any SELECT statement run against this MySQL instance will be aborted if it takes more than 1 second to complete. The default for the GLOBAL variable is 0, which means that there is no global time limit.

An execution time limit can also be set for all SELECT statements run within a particular session by specifying the timeout value in milliseconds for the SESSION system variable max_statement_time.

For example:
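A sketch of setting a two-second limit for the current session:

```sql
SET SESSION max_statement_time = 2000;  -- milliseconds
```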

Then any SELECT statements run in this particular session are aborted if they take more than 2 seconds to complete.

Finally, the maximum execution time can also be set for a specific SELECT statement using the MAX_STATEMENT_TIME clause directly in the query.

For example:
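A sketch, assuming the clause is placed right after the SELECT keyword (table and column names illustrative):

```sql
SELECT MAX_STATEMENT_TIME = 1000 id, name FROM t1;
```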

This clause in a SELECT statement can be used to set the max execution time only for the entire statement, so it can be used only within top level SELECTs. This clause cannot be used within non top level SELECT statements, for example within subqueries or derived tables. A non-zero time limit set using the MAX_STATEMENT_TIME clause in a SELECT statement takes precedence over the system variable max_statement_time.

The max statement time limits only apply to read-only SELECTs. Here, read-only means:

  • A SELECT statement which does not access a table at all.
    For example: SELECT 23+1;
  • Or a SELECT statement that only results in the reading of data.

SELECTs which directly or indirectly result in modifications to table data are considered not read-only.

For example, consider a SELECT that invokes a stored function which modifies data (function and table names here are illustrative):

CREATE FUNCTION log_visit() RETURNS INT
BEGIN
  INSERT INTO visits VALUES( now() );
  RETURN 1;
END;

SELECT log_visit();

The above SELECT is not considered read-only, because it indirectly results in a table change. The read-only determination is made during query runtime, not during query parsing. So if a SELECT statement is determined not to be read-only, then any timer set for it is cancelled and the following NOTE message is reported to the user:

  Note 1908 Select is not a read only statement, disabling timer

The max statement time for SELECT statements is applicable as follows:

  • It applies only to top level SELECT statements. It does not apply to non top level SELECTs, such as subqueries and derived tables. Using the MAX_STATEMENT_TIME=N clause in non top level statements will result in an error.
  • It applies only to read-only SELECT statements. Any timer will be cancelled for SELECT statements after determining they are not read-only, and a note will be reported to the client.
  • It does not apply to SELECT statements within stored programs. Using the MAX_STATEMENT_TIME clause in SELECT statements within a stored program will result in an error.

When a time limit is set for a SELECT statement, then a corresponding timer is started for it. This timer is cancelled if the SELECT statement completes within the time limit that was set. If it takes longer than the specified time limit, then the internal timer expires and the SELECT statement is aborted. The timer implementation differs on various platforms:

  • On Linux, the timer is implemented using POSIX per process timers with SIGEV_THREAD_ID (Linux specific) as a signal event notify option.

  • On Solaris, the timer is implemented using POSIX per process timers with I/O completion ports for the event notify option.

  • On BSD, the timer is implemented using the “kqueue” event notification mechanism.

  • On Windows, the timer is implemented using windows timers and the I/O completion ports for the event notification.

The count of timers set, timers exceeded, and timer failures is maintained in the status variables max_statement_time_set, max_statement_time_exceeded and max_statement_time_failed. Please see the documentation for more information on these variables.

Here’s a list of the major changes made to the contribution as part of WL#6936:

  • The feature was expanded to cover Windows and Solaris, in order to cover all of our supported platforms.
  • The ability to set execution time limits at the USER account level was removed.
    We did not really see a good use case for it (we would like to add more fine-grained resource controls), so we decided not to include it.
  • The feature is restricted to read-only SELECT statements, rather than supporting it for all DML and DDL statements.
    Some storage engines are non-transactional, so interrupting non read-only SELECT statements and other DML/DDL operations may lead to the database being in an inconsistent state. Our DDL statements are also not yet transactional, so interrupting DDL statements may also create inconsistencies. To avoid such situations, the feature is restricted to read-only SELECT statements.
  • Support for setting max execution time limits was also removed in relation to stored programs.
    Stored programs can generally contain any number of DML or DDL statements in them. To avoid the situations mentioned above, support for setting max execution time for stored programs themselves, as well as SELECT statements within them, was removed. This also prevents multiple timers being set for one outer or top level statement (as the SELECT could call a stored program which contains SELECTs with the timeout clause, which contain…), which is not supported.

  • We added a GLOBAL counterpart for the SESSION system variable max_statement_time.

Hope you enjoy using MySQL!


MySQL Performance Schema : Prepared Statements Instrumentation

MySQL 5.7.4 has a new Performance Schema feature, namely instrumentation for prepared statements. This instrumentation gives details of PREPARE and EXECUTE statistics for a prepared statement.

New Table
The new table added to display runtime statistics of prepared statements is named prepared_statements_instances.

mysql> describe performance_schema.prepared_statements_instances;
| Field                 | Type                |
| OBJECT_INSTANCE_BEGIN | bigint(20) unsigned |
| STATEMENT_ID          | bigint(20) unsigned |
| STATEMENT_NAME        | varchar(64)         |
| SQL_TEXT              | longtext            |
| OWNER_THREAD_ID       | bigint(20) unsigned |
| OWNER_EVENT_ID        | bigint(20) unsigned |
| OWNER_OBJECT_SCHEMA   | varchar(64)         |
| OWNER_OBJECT_NAME     | varchar(64)         |
| TIMER_PREPARE         | bigint(20) unsigned |
| COUNT_REPREPARE       | bigint(20) unsigned |
| COUNT_EXECUTE         | bigint(20) unsigned |
<Other Execution Stats>
35 rows in set (0.05 sec)

An Example
Here is an example of the execution stats of a prepared statement, to show what this table looks like:

mysql> prepare stmt1 from "select * from test.t1";
Query OK, 0 rows affected (0.01 sec)
Statement prepared

mysql> execute stmt1;
| ch   |
|    2 |
1 row in set (0.01 sec)

mysql> select <snip> from performance_schema.events_statements_history_long\G
*************************** 1. row ***************************
EVENT_NAME: statement/sql/prepare_sql
SQL_TEXT: prepare stmt1 from "select * from test.t1"
*************************** 2. row ***************************
EVENT_NAME: statement/sql/execute_sql
SQL_TEXT: execute stmt1

mysql> select <snip> from performance_schema.prepared_statements_instances\G
*************************** 1. row ***************************
OBJECT_INSTANCE_BEGIN: 140646757207136
SQL_TEXT: select * from test.t1
       OWNER_EVENT_ID: 4
TIMER_PREPARE: 3481513000
1 row in set (0.00 sec)

Now, it can be seen above that 'stmt1' was prepared in the thread with id '20', and the event which prepared it has id '4'. This information is shown in the prepared_statements_instances table, i.e. from this table we can tell which statement event, in which thread, prepared a given statement. TIMER_PREPARE shows the time it took to prepare the statement. And once executed, its execution stats are collected (like COUNT_EXECUTE here).

Prepared statement within Stored Procedure
Things to note in the above example are the columns OWNER_OBJECT_TYPE/SCHEMA/NAME. These columns are shown as NULL here because stmt1 was not prepared inside a stored procedure. Had it been prepared inside a stored procedure, these columns would have been populated with its details. For example:

mysql> delimiter |
mysql> CREATE PROCEDURE test.p1 () BEGIN prepare stmt1 from "SELECT * FROM t1"; END|
Query OK, 0 rows affected (0.00 sec)

mysql> delimiter ;
mysql> call test.p1();
Query OK, 0 rows affected (0.00 sec)

mysql> execute stmt1;
| ch   |
|    2 |
1 row in set (0.00 sec)

mysql> select <snip> from performance_schema.events_statements_history_long\G
*************************** 2. row ***************************
EVENT_NAME: statement/sql/create_procedure
SQL_TEXT: CREATE PROCEDURE test.p1 () BEGIN prepare stmt1 from "SELECT * FROM t1"; END
*************************** 4. row ***************************
EVENT_NAME: statement/sql/call_procedure
SQL_TEXT: call test.p1()

mysql> select <snip> from performance_schema.prepared_statements_instances\G
*************************** 1. row ***************************
OBJECT_INSTANCE_BEGIN: 140483816899296
        TIMER_PREPARE: 3174380000
1 row in set (0.00 sec)

Note that the event which created the procedure is '4', but the event which prepared the statement stmt1, i.e. the call to procedure p1, is event '33', and that is the OWNER_EVENT_ID for stmt1 in prepared_statements_instances.

When the metadata of tables or views referred to by a prepared statement changes, it causes automatic repreparation of the prepared statement the next time it is executed. The COUNT_REPREPARE column denotes how many times this prepared statement has been reprepared. For example:

mysql> create table test.t1 (c1 int, c2 int);
Query OK, 0 rows affected (0.25 sec)

mysql> insert into test.t1 values ('1', '2');
Query OK, 1 row affected (0.06 sec)

mysql> prepare stmt1 from "select * from test.t1";
Query OK, 0 rows affected (0.00 sec)
Statement prepared

mysql> select <snip> from performance_schema.prepared_statements_instances\G
*************************** 1. row ***************************
SQL_TEXT: select * from test.t1
1 row in set (0.00 sec)

mysql> alter table test.t1 drop column c2;
Query OK, 0 rows affected (0.42 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> execute stmt1;

| c1   |
|    1 |
1 row in set (0.01 sec)

mysql> select <snip> from performance_schema.prepared_statements_instances\G
*************************** 1. row ***************************
SQL_TEXT: select * from test.t1
        COUNT_EXECUTE: 1
1 row in set (0.00 sec)

Deleting a Prepared Statement
Once a prepared statement is created/executed, its statistics are captured in P_S. When that statement is dropped (deleted), the corresponding statistics are deleted from the P_S table. For example:

mysql> prepare stmt1 from "select * from test.t1";
Query OK, 0 rows affected (0.00 sec)
Statement prepared

mysql> select <snip> from performance_schema.prepared_statements_instances\G
*************************** 1. row ***************************
OBJECT_INSTANCE_BEGIN: 139695153515552
SQL_TEXT: select * from test.t1
TIMER_PREPARE: 3444041000
1 row in set (0.00 sec)

mysql> drop prepare stmt1;
Query OK, 0 rows affected (0.00 sec)

mysql> select <snip> from performance_schema.prepared_statements_instances\G
Empty set (0.01 sec)

New server/status variables:
A new server variable was added; it denotes the maximum number of prepared statements which can be instrumented.

A new status variable was added; it denotes how many prepared statements could not be instrumented.

Collection of Statistics:
Whether statistics are collected depends on the state of the existing statement instruments:
- “statement/sql/prepare_sql” for SQLCOM_PREPARE: if enabled, prepare statistics for statements prepared via the SQL PREPARE statement are collected.
- “statement/com/prepare” for COM_STMT_PREPARE: if enabled, prepare statistics for statements prepared through the C API are collected.
- “statement/sql/execute_sql” for SQLCOM_EXECUTE: if enabled, execute statistics for statements executed via the SQL EXECUTE statement are collected.
- “statement/com/execute” for COM_STMT_EXECUTE: if enabled, execute statistics for statements executed through the C API are collected.
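These instruments can be inspected and toggled at runtime through the standard performance_schema.setup_instruments table; a minimal sketch (the exact spelling of the COM-level instrument names may vary by server version):

```sql
-- Check the current state of the prepared-statement instruments
SELECT NAME, ENABLED, TIMED
  FROM performance_schema.setup_instruments
 WHERE NAME IN ('statement/sql/prepare_sql', 'statement/sql/execute_sql')
    OR NAME LIKE 'statement/com/%';

-- Disable collection of prepare statistics for SQL PREPARE, for example
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'NO'
 WHERE NAME = 'statement/sql/prepare_sql';
```

Statements prepared after the UPDATE will no longer be instrumented; rows for statements instrumented earlier remain in prepared_statements_instances until the statements are dropped.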

Why Boost.Geometry in MySQL?

In the newly released MySQL 5.7 labs release, users will find refactored GIS functions, as well as R-tree indexes in InnoDB. One notable change is that we have replaced a lot of the native code for geometry computation with an external library, namely Boost.Geometry. In fact, 5.7 will be the first MySQL release that depends on Boost. So why did we make such a move?
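For a quick taste, the refactored functions keep the familiar spatial SQL interface; a sketch of an exact polygon intersection (function names follow the standard ST_ interface, and details of the labs release may differ):

```sql
-- Intersect two overlapping squares; with Boost.Geometry the result
-- is computed exactly, corner cases included
SELECT ST_AsText(
         ST_Intersection(
           ST_GeomFromText('POLYGON((0 0, 0 4, 4 4, 4 0, 0 0))'),
           ST_GeomFromText('POLYGON((2 2, 2 6, 6 6, 6 2, 2 2))')
         )) AS overlap;
-- The overlapping region is the square with corners (2 2) and (4 4)
```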

In recent years, we have seen a tremendous growth in location based services, and consequently GIS has become a strategically important area for MySQL. Our existing customers require more GIS features, and we believe this is a growing market opportunity.

Up to MySQL 5.6, GIS support in MySQL was very limited, and the geometry algorithms implemented in MySQL were overly simple and could not handle many corner cases; see, for example, BUG#68091 and BUG#69111.

It would have been possible to fix or rewrite the existing algorithms, but MySQL only supports the Cartesian coordinate system, and the geometry algorithms needed for handling corner cases are already complex. Once we move to an oblate spheroid or 3D, it would be very challenging to implement and test the geometry algorithms on our own. After all, we are a database vendor, and computational geometry is not our field of expertise. On the other hand, relying on an external geometry library that is developed and maintained by experts will not only fix the correctness and accuracy issues we had at hand, but also provide a solid foundation for future development.

We evaluated multiple geometry libraries, and Boost.Geometry proved to fit our needs best. It has a compatible license that allows us to distribute it with our community releases. It is robust, it correctly handles corner cases, and it is fast. And finally, it has an engaging community around it. I had the pleasure of getting to know the lead developer of Boost.Geometry, Barend Gehrels. His focus on quality, attention to detail, and openness towards contributions made me confident that our cooperation would be a success.

We are not only consumers of Boost.Geometry though, we also actively contribute to it. Two of our developers, Menelaos Karavelas and Adam Wulkiewicz, both with solid experience in the field of computational geometry, work full time on enhancing Boost.Geometry to cover our needs. We then submit these enhancements upstream. We would like to see Boost.Geometry become widely adopted, and together with the rest of the Boost community, we can make it an even more feature-rich product.

Along the way, we did encounter a few issues. For example, we had to change the compiler used (the Sun Studio compiler lacks the necessary C++ template support) for our MySQL builds on Solaris. See Norvald’s blog on Building MySQL with Boost for details.

GIS is a strategic and long term investment for MySQL. Our focus for now is on completing the refactoring work for the existing geometry functions and algorithms, and adopting the upcoming enhancements of Boost.Geometry. Down the road, however, we will look into support for Geography, the WGS 84 spatial reference system, projections, SRID, GeoJSON, and a lot more. As always, we highly value input from our users, so please share your thoughts and ideas with us! You can submit feature requests through the MySQL bug system, or simply comment on this blog.