MySQL Debian/Ubuntu packaging sprint

Debian/Ubuntu packaging sprint participants. From left: James Page, Norvald H. Ryeng, George Lorch, Akhil Mohan, Otto Kekäläinen, Robie Basak.

Last week, Canonical invited the MySQL packaging team in Debian to a packaging sprint in their London office, and most of us were able to participate. We’ve met online, on IRC and at Ubuntu Online Summits, before, but this was the first time we were all in the same room.

The results of our sprint will soon be available in a .deb near you. Since Debian Jessie is currently in feature freeze, most of it will hit Ubuntu first. The two main things we achieved on the MySQL side were to make MySQL 5.6 ready for Ubuntu Vivid (15.04) and to split MySQL, Percona and MariaDB configuration files. The configuration file split reduces coupling between MySQL and variant packages, and it is key to solving most of the challenges the team faces. We can now do packaging work on one variant without coordinating with all the others. That is a major achievement!

In addition to this, we also made smaller improvements: parallel builds and MTR test runs, a test plan that we defined and implemented so that the right tests run at the right time (build vs. DEP8), and a cleanup of the debian/rules file. Not all of this is visible to the end user, but to the package maintainers it’s a big deal.

Time flies when you’re having fun, so when the week was over, we still had more we’d like to do with the packages. The work will continue, but at least most of the heavy lifting is now done. I don’t think we ever would have got this far without meeting in person.

Let me end by extending my thanks to Canonical for organizing and hosting this sprint! It was really useful and fun! I think we all achieved and learned a lot. So once again, to Canonical and everybody that participated, thank you, and merry Christmas!

Some Notes on Index Statistics in InnoDB

In MySQL 5.6 we introduced a huge improvement in the way that index and table statistics are gathered by InnoDB and subsequently used by the Optimizer during query optimization: Persistent Statistics. Some aspects of the way that Persistent Statistics work could be improved further though, and we’d really like your input on that.

How much to sample?

The statistics are gathered by picking some pages semi-randomly, analyzing them, and deriving some conclusions about the entire table and/or index from those analyzed pages. The number of pages sampled can be specified on a per-table basis with the STATS_SAMPLE_PAGES clause. For example: ALTER TABLE t STATS_SAMPLE_PAGES=500;
This way you can request that more pages be sampled for larger tables in order to make the estimates more accurate. The more pages that are sampled, the better the estimates will be and thus the better the chance that the Optimizer will be able to choose optimal query plans, with the only downside being that the index statistics gathering operation (the sampling) itself becomes slower.
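As a sketch (the table name t is just a placeholder):

```sql
-- Ask InnoDB to sample 500 leaf pages for this table's statistics
ALTER TABLE t STATS_SAMPLE_PAGES = 500;

-- Trigger an immediate recalculation of the statistics
ANALYZE TABLE t;
```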

Being able to specify the number of pages to sample on a per-table basis is flexible, but what if one could specify the value as a percentage of the table size rather than as an absolute number? Would that make sense? Setting such a global value would not meet every need and use case: a given percentage may not be sufficient to get a good estimate on a small table, while the same percentage may make the sampling process too slow on a very large table. Setting the percentage on a per-table basis may also offer little advantage over the existing functionality, since one already knows whether the table is huge or tiny and can set an appropriate absolute number of pages to sample. What do you think? We would love to get your input on this! Do you see a lot of value in being able to set the number of pages to sample as a percentage of the table size, or in having it depend on the table size in some other way? Cast your vote:

Make it possible to set the number of pages to ANALYZE as a % of the table size?


How often to sample?

Currently, if 10% of the table has changed (rows updated, deleted, or inserted) since the previous statistics gathering operation then InnoDB will automatically trigger a recalculation of the statistics (it will analyze some leaf pages as discussed in the previous paragraph, reading them from disk if they are not already cached).
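With persistent statistics (the 5.6 default), the current estimates and the per-table auto-recalc setting can be inspected and tuned from SQL; for example (table name t is a placeholder):

```sql
-- The persisted estimates live in ordinary tables in the mysql schema
SELECT * FROM mysql.innodb_table_stats WHERE table_name = 't';
SELECT * FROM mysql.innodb_index_stats WHERE table_name = 't';

-- Automatic recalculation can be switched off for a given table
ALTER TABLE t STATS_AUTO_RECALC = 0;
```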

In MySQL 5.5 and older, where the auto-recalculation threshold was set to 6.25%, the statistics were updated too often, which caused too many execution plan changes. In general, we’ve found that 10% is a good overall threshold, and we haven’t seen a great need to make this dynamic. But do you agree? Do you want to be able to change that value globally, or even on a per-table basis? Would you increase it or decrease it if you could? Cast your vote:

Make it possible to change the 10% threshold that triggers an automatic ANALYZE?


We look forward to your feedback! And as always, THANK YOU for using MySQL!

MySQL 5.7: only_full_group_by Improved, Recognizing Functional Dependencies, Enabled by Default!

Here’s an account of recent work which has kept me busy and excited for a few months. For those unfamiliar with the only_full_group_by sql mode, let me provide some context. I’ll use the world database, which can be downloaded from this MySQL Documentation page. You can find details on how to install the database on the same page. We have a table of countries and a table of languages spoken in each country. It’s a 1-to-N relationship: a country can have many languages so CountryLanguage.CountryCode is a foreign key referencing Country.Code:

Here are two sample rows from the Country table (showing only the Code and Name columns): (FRA, France) and (BEL, Belgium).

And some corresponding rows in CountryLanguage, as (CountryCode, Language) pairs: (BEL, French), (FRA, French), (ABW, Spanish), (AND, Spanish).

Consider this grouped SELECT:
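Using the world schema, such a query looks like this:

```sql
SELECT Language, CountryCode
FROM CountryLanguage
GROUP BY Language;
```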

This SELECT groups by “Language”, so the first thing it has to do is “segregate” rows by language, collecting all rows with the same “Language” value into one group.

Then it must collapse (“aggregate”) all rows of the same group to form one row; this row should have, according to the columns listed in the SELECT, a value for “Language” and a value for “CountryCode”. For the “French” group, the value for “Language” is obviously “French”; but what is the value for “CountryCode”: “BEL” or “FRA” or something else? That’s non-deterministic, and that means the query is likely nonsensical.

The SQL-92 Standard solved this non-determinism by requiring that all selected columns be part of the GROUP BY clause. So it would then reject the example query above.

Historically, MySQL took a different route: it allowed the query above and picked one value of “CountryCode” within each group. So you may obtain
(French, BEL), (Spanish, ABW)
or
(French, FRA), (Spanish, AND)
depending on MySQL’s implementation details and the access paths chosen to read the table, and the result is subject to change in future versions; in practice it was simply unpredictable.

In other words, users were allowed to execute non-standard SQL queries, and it was up to them to decide whether a query was nonsensical or not. That made it far too easy to make mistakes that resulted in incorrect data being presented to the user or application. Imagine that the user meant this reasonable query:
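A sketch of the intended query against the world schema, counting with COUNT() rather than selecting the bare column:

```sql
SELECT Language, COUNT(CountryCode)
FROM CountryLanguage
GROUP BY Language;
```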

They wanted to know how many countries speak each language. Alas, they used the wrong column in the COUNT() clause, and the query accidentally became the bad one we saw earlier, producing incorrect and unpredictable results. MySQL raised no error or warning here, which is not helpful at all in this scenario.

Users wary of this “dangerous permissive behaviour” would prefer MySQL to complain here; that’s why the only_full_group_by flag was added to the sql_mode server variable years ago. When this sql_mode was set, MySQL rejected the bad query, implementing the behaviour prescribed by the SQL-92 standard. Here’s a test to demonstrate this behaviour in MySQL 5.6:
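A 5.6 session along these lines (a sketch; the exact error text may vary by version):

```sql
SET sql_mode = 'only_full_group_by';
SELECT Language, CountryCode FROM CountryLanguage GROUP BY Language;
-- ERROR 1055 (42000): 'world.CountryLanguage.CountryCode' isn't in GROUP BY
```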

See: MySQL 5.6 requires that all SELECTed columns be in the GROUP BY clause; “CountryCode” is not, so an error message signals it.

Using this new safe GROUP BY mode was rightfully advertised in the manual and by users.

Still, this safe mode wasn’t enabled by default. I wasn’t around at that time (as it was more than twelve years ago), but I can speculate that there were the following obstacles:

  1. Certain non-standard queries can be perfectly reasonable; for example this one, which wants a list of country names and how many languages are spoken in each country:
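    A sketch of such a query against the world schema:

    ```sql
    SELECT co.Name, COUNT(*)
    FROM Country co
    JOIN CountryLanguage cl ON cl.CountryCode = co.Code
    GROUP BY co.Code;
    ```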

    In a group of rows of the same co.Code value, all rows have the same co.Name because co.Code is a primary key of the Country table. So co.Name is deterministic, but it’s not in the GROUP BY clause so SQL-92 and 5.6’s only_full_group_by mode rejected it.
  2. So enabling the mode would make some users’ lives unduly difficult (the kind of user who “knows what they are doing”): they would have to put all selected columns in the GROUP BY clause, making that clause more complex and thus increasing the chances that MySQL switches from an index scan to a table scan plus a filesort and a temporary table, making the query far slower.
  3. The Standard itself evolved to accommodate these reasonable queries! Starting with SQL-99, the concept of “functional dependency” was introduced: a column B is functionally dependent on columns {A1, A2} in a result set if all rows which share the same value of A1 and the same value of A2 necessarily share the same value of B. SQL-99 accordingly modified the constraints on GROUP BY: by default it keeps the same rules as SQL-92, but it allows a DBMS to optionally use a relaxed rule, namely that all columns in the SELECT list merely need to be functionally dependent on the GROUP BY columns. That is known as “SQL optional feature T301”, and it guarantees that inside a group of rows all selected columns have identical values, thus making the query’s result deterministic. That is exactly the case in the last query we showed, where “co.Name” is determined by “co.Code”. Of course, the old SQL-92 rule satisfies the relaxed rule: if all selected columns are part of the GROUP BY clause, then they are functionally dependent on the GROUP BY columns. So we can say that the relaxed rule allows all legal SQL-92 queries, plus some new ones, while preserving determinism.

That was well explained by my former colleague Roland Bouman.

For us, then, it was a problem. The safe mode, only_full_group_by, was too restrictive, so we could not make it the default. But that also meant that our innocent users were left prey to the potentially dangerous default mode. Once in a while a user would come to us with a strange result and we would have to explain, once again, that it was a non-deterministic query and that they should try only_full_group_by.
Another headache was that our QA team, who runs many thousands of automatically generated queries every day, was also experiencing this non-determinism; query result differences between, say, MySQL 5.5 and 5.6, would be flagged as regressions, though it was just a non-deterministic query with a GROUP BY. To avoid these false positives, QA would be running with only_full_group_by, which alas limited them to queries which were only a subset of what our users were allowed to do with the default permissive setting. So we were losing some testing depth, which was obviously not desirable.

Good news — those days are now over!

In MySQL 5.7.5, I have made only_full_group_by detect functional dependencies, which can be inferred not only from keys’ definitions as in the example above, but also from column equalities in the WHERE clause or [LEFT] JOIN ON clauses and from selected expressions in a view’s or derived table’s body. It was not an easy job, and it took a while to get it right, especially with outer joins and with views. Along the way, I even found a couple of small bugs in the relevant section of the SQL-2011 Standard, and thanks to the help from a colleague who has access to the Standard’s redaction committee, those bugs should be corrected in the next version of the SQL Standard.

The different types of functional dependencies are best explained with examples; so together with my colleague Paul DuBois we put them in a new section within the 5.7 manual. I really encourage you to read it as it shows well when columns are determined by other columns and when they are not, in spite of sometimes deceptive appearances! We used the world database there too, and our queries are understandable — so please check it out!

And, for a higher-level view, the master page in the manual describes the new behaviour of only_full_group_by.

Because this mode is now smarter than before, there is very little reason not to use it. So I have made it the default in 5.7.5 and later. This means that some old queries might return an error when upgrading. Should this happen, please check them; quite possibly they are non-deterministic and should be corrected (remember the COUNT() example earlier in this post). If they are deterministic and MySQL simply missed it, or if you really want non-determinism, a new function called ANY_VALUE has been added to force MySQL to accept the intentionally non-deterministic query. Of course, you can also take the more brutal approach and turn only_full_group_by off in your session or globally.
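For example (a sketch against the world schema):

```sql
-- Explicitly accept a non-deterministic pick for CountryCode:
SELECT Language, ANY_VALUE(CountryCode) FROM CountryLanguage GROUP BY Language;

-- Or take the brutal approach and drop the mode for the session:
SET SESSION sql_mode = (SELECT REPLACE(@@sql_mode, 'ONLY_FULL_GROUP_BY', ''));
```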

Several prominent DBMSs that I know of continue using the SQL-92 rule, while another detects functional dependencies due to keys only. I think MySQL is now the DBMS with the most complete implementation, inferring functional dependencies from keys, equalities, and views. We don’t have everything that’s described in SQL-2011, but we’re not far off.

Implementing this feature was also an occasion to refactor the relevant code. It used to be scattered in several functions, accumulating pieces of information as resolution of the query progressed (in Item::fix_fields() calls), and drawing a conclusion in a final phase of resolution. Consolidating all this into one single entity of code, which looks only at the fully resolved query, made things clearer and automatically fixed an obscure bug with views. The main code file is sql/aggregate_check.h and it has an extensive Doxygen multi-page comment to explain the logic.

Here’s another eliminated annoyance: in only_full_group_by mode, using an alias of a SELECT list element in the HAVING clause was forbidden. That was a known limitation, which I have also now fixed as part of this work. You can now do this in only_full_group_by mode:
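For example, filtering on an aliased aggregate (a sketch against the world schema):

```sql
SELECT Language, COUNT(CountryCode) AS cnt
FROM CountryLanguage
GROUP BY Language
HAVING cnt > 3;  -- the alias cnt can now be used here under only_full_group_by
```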

That’s far more readable than copying the full COUNT() expression into the HAVING clause and it’s also a bit faster to execute.

There was one last fix done as part of this Worklog: some rare non-grouped queries of the form SELECT DISTINCT ... ORDER BY were also non-deterministic, but this post is already long so I’ll discuss that in a subsequent post. Stay tuned!

That’s all for now. I really hope that this new feature makes users’ lives easier and improves their experience with MySQL! We also look forward to your feedback on this new feature! If you have any questions or encounter any bugs, please do let us know by opening a support ticket or filing a bug. As always, THANK YOU for using MySQL!

Write Yourself a Query Rewrite Plugin: Part 2

In my last post I covered how to use the query rewrite framework to write your own pre-parse plugin. The interface is simplistic: a string goes in, a string comes out, and the rest is up to your plugin’s internal workings. It doesn’t interact that much with the server. Today I am going to show you the other type of plugins that rewrite queries, post-parse query rewrite plugins. This type is, out of necessity, more tightly coupled with the server; it operates on the internal data structures that make up the query’s parse tree.

Creating the Plugin

Declaring the plugin is similar to declaring a pre-parse plugin: you declare the plugin in the usual way but with the addition of a specific plugin descriptor for the post-parse query rewrite plugin type. This is a struct, as usual:

As you can see, there’s no function to free up memory, as we had with the pre-parse query rewrite plugin type. This is not likely to be needed in a post-parse scenario, because you will be working much more with the data structures that the server offers and rolling your own to a much smaller degree. Another thing to note is that you can ask the server to give you the statement digest along with the parse tree. Statement digests are essentially hashed values for queries, which you can read all about in the manual. They are calculated from normalized statements (see the link) during parsing, and you are likely to want them if you write a post-parse plugin. The provided Rewriter plugin (read about it in this post) makes heavy use of them.
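For instance, the Rewriter plugin exposes its digest-based rules through a small SQL interface; a typical interaction (assuming the plugin is installed) looks like this:

```sql
-- Add a rule: statements matching the pattern are rewritten to the replacement
INSERT INTO query_rewrite.rewrite_rules (pattern, replacement)
VALUES ('SELECT ?', 'SELECT ? + 1');

-- Load the rules into the plugin's in-memory store, keyed by digest
CALL query_rewrite.flush_rewrite_rules();
```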

The plugin’s entry point (I’ll call it the ‘rewrite function’ from here on) is called when a statement has been parsed. This happens either when a statement is sent over the wire the normal way, or when a prepared statement is prepared. This makes it possible for your plugin to operate so that prepared statements are only rewritten when they are prepared and after that the rewritten version is always run.

The rewrite function gets called with a Mysql_rewrite_post_parse_param struct, which gives you a flag that you can raise to tell the server that you rewrote the query, and a THD object. The THD object is key to accessing the internals of the server, and it allows you to examine the parse tree.

At Your Service

A plugin working on the parse tree is not that useful without the necessary primitives to operate on that parse tree. That’s where the parser service comes in. The MySQL 5.7 server offers certain services to plugins—read all about them in the manual—and along with the query rewrite framework, the server now offers a parser service. This service lets you parse strings, manipulate the parse tree, and use related functionality. The full documentation will end up in the manual in due course, but I will give you a quick walk-through today.

First Things First

MySQL has a habit of resolving the “current database” when parsing a string. This means that once you’ve parsed a query, there is no more dependency on the “current database” context. The “current database” is a session-dependent setting, so you may or may not want to use the “current database” from the session that triggered the plugin. If you wish to modify it, then you can use the mysql_parser_set_current_database(MYSQL_THD, const MYSQL_LEX_STRING) interface to set the “current database” to whatever you want.

Let’s Parse

Alright, let’s parse something. The function to use for that is: int mysql_parser_parse(MYSQL_THD thd, const MYSQL_LEX_STRING query, unsigned char is_prepared, sql_condition_handler_function handle_condition, void *condition_handler_state)

This function lets you send in a query string for parsing. If you set is_prepared to a non-zero value then the string will be parsed as a prepared statement, i.e. you’re allowed to use the ? placeholder in lieu of literal values. There are interfaces that let you work with these placeholders (count, locate, etc.) later on.

A word of caution here: when you parse a string it will be the session’s new “current” parse tree, replacing the old one. There are ways around it which I may cover in a future post. But for now, let’s assume that parsing a string means replacing the “current” parse tree.

You may also want to handle parse errors, in which case you can send in a function pointer to a callback function handle_condition. If you want a state to be passed along to it, for instance to count the number of non-fatal warnings, then you would pass a pointer to it as condition_handler_state.

Take a Walk on the Server Side

The parse tree is walked using callbacks, so you will need to declare a parse_node_visit_function to handle them. This will be called for each literal in the parse tree, and it also lets you pass a state object around. The signature for this is: int (*parse_node_visit_function)(MYSQL_ITEM item, unsigned char* arg)

The MYSQL_ITEM represents a literal which you examine by printing it with: MYSQL_LEX_STRING mysql_parser_item_string(MYSQL_ITEM item)

The string is allocated on the server side, so it has to be freed by the server if you are to have any chance of your plugin running on OSes that have separate heaps for dynamic libraries (yes, I’m looking at you, Windows). You do this by calling: void mysql_parser_free_string(MYSQL_LEX_STRING string)

Once you have a function to process literals in place—printing them out is a good start—then you’re ready to walk the parse tree. It is available from the MYSQL_THD, and we walk it by calling: int mysql_parser_visit_tree(MYSQL_THD thd, parse_node_visit_function processor, unsigned char* arg)

If this function returns a non-zero value then the server has probably knocked on your door already by calling your handle_condition() function, if you bothered to declare it. In other words, a non-zero return value means something went wrong.

Rewriting Something

Typically the first thing your plugin will want to do is look at the current query to see whether it should be rewritten, and there are several things you can look at: the digest, the normalized query, the query string as it looked when it came in, and the literals in the query. The first three have an increasing level of precision: the digest will accurately identify any normalized query, but there is a slim chance of collisions, especially if the query strings are very long, as the digest is only calculated on the first 1024 tokens (something we’d like to make configurable in future work). The normalized query groups highly similar queries together by replacing constant expressions with ‘?’ symbols, while the exact query string is the way the query looked prior to parsing. If you don’t need any of that, then you’re probably better off writing a pre-parse plugin.

The framework doesn’t let you make destructive changes to the parse tree, so you have to create a new one. How do we create a new parse tree, then? Anyone?

If you said “parse a string” out loud right now, you’re right. That’s what we do, and we use mysql_parser_parse() to do it. If you paid attention earlier, you know that the tree we parse will now be the new “current” parse tree.

Once you’re done, you can raise the FLAG_REWRITE_PLUGIN_QUERY_REWRITTEN flag in your rewrite function and return 0. You can then look for the “Query rewritten” SQL note in your client applications.

I hope that this was helpful. As always, THANK YOU for using MySQL. Happy rewriting!

Write Yourself a Query Rewrite Plugin: Part 1

With the query rewrite framework in the latest MySQL (Optimizer/InnoDB/Replication) labs release, you get the opportunity to author plugins that can rewrite queries. You can choose whether to rewrite the queries before and/or after parsing. Today I am going to walk you through how to write a pre-parse query rewrite plugin.

When would you want to use a pre-parse query rewrite plugin? The greatest benefit compared to post-parse rewrites — which I cover in a separate post — is the efficiency, especially the lack of overhead for those queries that are actually rewritten. Typical cases where you may want to write a pre-parse plugin are:

  • When you want to remove certain specific clauses from queries. For example, perhaps you want to remove all ENGINE clauses from DDL statements.
  • When you want to replace all queries against table_foo with table_foo2.
  • When you want to add a hint, for example a USE INDEX hint, to all queries against tbl3.
  • Any general case where you might want to do a broad REGEXP style find and replace.

Example Plugin

There will be an example pre-parse plugin shipped with MySQL. It is not very useful, but you can take it as a starting point for developing your own plugins. What it does is rewrite all queries to lowercase. You would typically issue the following commands to install it and then to demonstrate its usage and effects:
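A session along these lines installs the plugin and shows the effect (the shared-object file name assumes a Unix build):

```sql
INSTALL PLUGIN rewrite_example SONAME 'rewrite_example.so';
SELECT 'Hello, WORLD';   -- gets rewritten to lower case by the plugin
SHOW WARNINGS;           -- the note shows that the query was rewritten
```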

As you can see, it really rewrites all queries, including the SHOW WARNINGS query that I just issued. I did that to show you that the server knows the query was rewritten and lets you know by raising a note.

The Basics

The plugin is conceptually very simple: it’s a single C function. The input is a const char *, the output another const char *. If your plugin returns a new query string, that’s the new query and the server will forget about the old one. Only an SQL note remains as a testimony of the original query.

You really only need three files to create a plugin:

  • Your source code file
  • A file
  • A CMakeLists.txt file

Most of what goes into writing general plugins is already covered well in the manual, which is a good starting point. What is new is the specific pre-parse query rewrite plugin type. You need to declare your specific plugin of this new type as shown below:

This should be fairly straightforward. The first argument is the version number of the interface that the plugin is using, the second is the function doing the rewrite, and the third is called by plugin framework after the rewrite is complete in order to free up any resources you might have been using—for example, in the rewrite_example plugin’s rewrite_lower function we allocate a new char[] that we return a pointer to, so we need to clean that up afterwards in the free_rewritten_query function.

As always, THANK YOU for using MySQL. Happy rewriting!

Logging with MySQL: Error-Logging to Syslog & EventLog

You’ve already read it in What’s New in 5.7 (So Far)—the MySQL server now has new and improved support for syslog (on Unix-like systems) and EventLog (on Windows). In the next few paragraphs, we’ll take a look at what they are, what has changed, and how they can make your life easier.

The MySQL server supplies information in two main ways:

  1. The client will receive a reply to every statement. If everything goes right, then we’ll see a simple OK for success, or a result set for SELECT, SHOW, etc.; and even a successful statement may be qualified by a set of warnings or notices. If the statement fails for some reason then we’ll receive an error regarding the failure.
  2. On the server, we’ll see a variety of logs depending on the server configuration. Queries exceeding a certain execution time may get logged to the slow query log, while in some debug scenarios all queries may be logged to the general query log either as soon as they are received (--log-raw) or after parsing and having security sensitive information such as passwords replaced. Both the slow query log and the general query log may be sent to a file or table. The third major log is the error log.

The error log — what and why

Like the clients, the error log may receive messages of varying severities. It collects messages that do not have an obvious query, client, or connection that is to blame, messages that need persistence beyond that of a connection or the server’s runtime, and others that have an intended audience of the DBA or system administrator rather than the application developer. Examples are exhaustion of resources on the host machine, certain errors reported by the host operating system, backtraces of crashes, messages about damaged databases and indexes, errors on start-up, failed or refused connections, etc.

The error log — where and why

Unlike the slow and general query logs, the error log cannot be optionally stored in a database table (this would lead to questions such as how to safely log an error about the table engine malfunctioning while using that same engine for error logging), but rather only in a plain text file on the filesystem. This makes the error log persistent beyond the runtime of the server, and makes the log file accessible to specialized log analyzer tools.

Enter syslog

Both Unix and Windows do, however, already have standardized logging facilities—syslog on Unix and EventLog on Windows. Using those facilities gives us access to a plethora of tools written specifically for them, lets us log to a different machine, and so on. While the MySQL server (mysqld) previously had very basic support for the Windows EventLog, syslogging was implemented by simply writing error messages to a stream (STDOUT/STDERR). Through this stream, whatever started the server (e.g. mysqld_safe) may receive or redirect this information at the OS level (note: if you start mysqld directly, these streams will be visible in your terminal window; on Windows, you have to use the --console option). The mysqld_safe wrapper script would start the MySQL server and optionally direct this error output to a program that would write it to the Unix syslog.

Onwards and upwards

While that syslog support solved the immediate problem, there was room for improvement — all MySQL server errors were filed with a syslog severity of error, even if the message said something different. A line could contain one time-stamp from the MySQL server, and another from the syslog facilities. And those were just the cosmetic issues!

Native syslogging

Having native support for syslog in the MySQL server itself enables us to turn syslogging on or off at runtime, and to query its status.

The syslog facility—information about what category of software the messages were from—is also no longer fixed to daemon, but can be queried and configured at runtime. This should make it easier to adapt the MySQL syslog messages to your syslog environment and enable better message filtering. See the manual page for syslog.conf (shell> man syslog.conf) for more on filtering.

The following options have also been added and can be set at start-up as well as viewed and set at runtime via system variables (setting them requires the SUPER privilege):

  1. --log_syslog
  2. --log_syslog_include_pid
  3. --log_syslog_facility
  4. --log_syslog_tag
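A quick sketch of using them at runtime:

```sql
-- Enable native syslogging and tag the messages (requires SUPER)
SET GLOBAL log_syslog = ON;
SET GLOBAL log_syslog_facility = 'daemon';
SET GLOBAL log_syslog_tag = 'production';

-- Inspect the current settings
SHOW GLOBAL VARIABLES LIKE 'log_syslog%';
```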


While the mysqld_safe wrapper script has been updated to be aware of the server’s new native syslog support, the options to the script remain unchanged. Where before, mysqld_safe used the program logger to pass the server’s error messages to syslog, the script now selects server options consistent with the options given to the script. Therefore mysqld_safe --syslog --syslog-tag=AZOU would result in mysqld being started with mysqld --log-syslog=1 --log-syslog-facility=daemon --log-syslog-tag=AZOU.

It bears noting that the options to mysqld_safe control both the server and the mysqld_safe script itself. In the above example, the MySQL server would log as mysqld-AZOU, whereas the mysqld_safe wrapper script would log as mysqld_safe-AZOU.


A variety of Linux systems have recently adopted systemd. Its logging facilities offer a sink that is compatible with traditional syslog, making the MySQL server’s native support work out of the box. When systemd journaling is used, the corresponding tools should be used to query and filter the logs, e.g. the journalctl program. In the following example, our running MySQL server has the tag production (and therefore logs as mysqld-production).

We could then use journalctl to ask for all log entries using that identifier:
journalctl -f SYSLOG_IDENTIFIER=mysqld-production

Due to the richness of systemd-journal logging, this is somewhat verbose. That’s why in the following example we ask for the output to be JSON-formatted. We’ll then use the jq tool to filter this JSON stream so that only the fields we are interested in remain—SYSLOG_IDENTIFIER, SYSLOG_FACILITY, and MESSAGE (see the systemd.journal-fields manual page for others):
journalctl -f SYSLOG_IDENTIFIER=mysqld-production -o json | jq '.SYSLOG_IDENTIFIER + " " + .SYSLOG_FACILITY + ": " + .MESSAGE' 2>/dev/null


On Windows, the Event Log is used in lieu of syslog. As on Unix, logging can be enabled, disabled, and queried at runtime. The facility and process ID options, however, are not available on Windows. A tag can be set and queried at runtime, but to create a new tag (that is, to start using a tag that does not already exist), the MySQL server must be started with Windows system privileges sufficient to create an entry in the Windows Registry. Once this entry is created, the elevated privileges are no longer required to use the new tag. Because the entry lives in the Windows Registry, outside the MySQL server, it persists through restarts of both the MySQL server and the host machine. Care should therefore be taken not to pollute the registry with a large number of unused tags.

All’s well that sends well

Finally, after talking so much about the server (and laying the groundwork for future Log Lady reports), I would be remiss not to note that the MySQL client now also supports syslog. I look forward to your feedback on these new features! If you encounter any issues, please let us know by filing a bug or opening a support ticket.

Thank you for using MySQL!

GDB Tips: Inspecting MySQL Plugin Variables in Core File

Recently I had a need to determine the session value of a MySQL plugin variable in a core file. Here I use the term plugin variable to refer to MySQL system variables provided by plugins. While this is not a very difficult task, it is not as straightforward as printing a global variable. It took some time to figure out the layout of the plugin variables and get the needed information. This short article shares this gdb tip with other interested MySQL developers and support engineers.

In this article, let us inspect the session value of the plugin variable innodb_strict_mode, which is of type boolean. Quite obviously, this plugin variable is provided by the InnoDB storage engine.

The Quick Answer

This is what you can do to get the session value of the plugin variable innodb_strict_mode from the MySQL server core file. For more details, refer to subsequent sections.
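The gdb session looks roughly like this (a sketch, assuming the selected frame has a valid thd pointer in scope; the symbol mysql_sysvar_strict_mode is the per-variable global object whose origin is explained in the sections below):

```gdb
(gdb) print *(my_bool *) (thd->variables.dynamic_variables_ptr
                          + mysql_sysvar_strict_mode.offset)
$1 = 1
```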

According to the above output, innodb_strict_mode has been enabled.

MySQL System Variables

The global values of the MySQL system variables are available in the global object global_system_variables (defined in the sql/ directory of the server source).

Here we are interested in the session value of a system variable.  So we will not make use of this global object.   The session values of the MySQL system variables are available in the member variables of the THD object:
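A sketch of the relevant declarations (abridged; the actual definitions in sql/sql_class.h contain many more members):

```c
/* Session values of system variables (abridged sketch). */
struct system_variables
{
  /* ... plain members holding server system variables ... */

  /* Opaque block holding the session values of all plugin variables. */
  char *dynamic_variables_ptr;
  uint  dynamic_variables_head;    /* largest valid variable offset */
  uint  dynamic_variables_size;    /* bytes allocated in the block  */
};

class THD
{
public:
  struct system_variables variables;  /* per-session values live here */
  /* ... */
};
```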

There are two types of system variables — the system variables provided by the MySQL server, and the system variables provided by the plugins (also known as “plugin variables”). The system variables provided by the MySQL server are proper members of the struct system_variables. The plugin variables, however, are kept opaque at the server layer: they are dynamically allocated and are pointed to by the member dynamic_variables_ptr.

All the session values of the plugin variables are available in this memory block.  To access the session value of the plugin variables, we need to access it at a certain offset within the memory pointed to by THD::variables::dynamic_variables_ptr.

The memory for the session values is allocated in the function alloc_and_copy_thd_dynamic_variables(), which is documented to “Allocate memory and copy dynamic variables from global system variables to per-thread system variables copy.”

Accessing Session Value of Plugin Variables

As mentioned in previous section, all session values of plugin variables are stored in a single block of memory.  To access the session value of any particular plugin variable we need to know the offset within that block of memory.   This information is provided by a global object for each plugin variable.

Every MySQL plugin variable needs a global object describing its location. This global object is declared and defined using the macro MYSQL_THDVAR_BOOL if the plugin variable is of type boolean; for plugin variables of other types, corresponding macros are available. For the variable innodb_strict_mode this declaration is as follows:
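A sketch of that declaration, roughly as it appears in the InnoDB handler source (the comment string and argument values may differ in detail between versions):

```c
static MYSQL_THDVAR_BOOL(strict_mode, PLUGIN_VAR_OPCMDARG,
  "Use strict mode when evaluating create options.",
  NULL, NULL, FALSE);
```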

This creates a global object of an anonymous struct type.  There are a series of macros involved in expanding the above. All the related macros are given below for easy reference:
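The macros involved (from include/mysql/plugin.h, lightly abridged; treat this as an approximation of the real header):

```c
#define MYSQL_SYSVAR_NAME(name) mysql_sysvar_ ## name

#define THDVAR_FUNC(type) type *(*)(MYSQL_THD thd, int offset)

#define MYSQL_PLUGIN_VAR_HEADER \
  int flags;                    \
  const char *name;             \
  const char *comment;          \
  mysql_var_check_func check;   \
  mysql_var_update_func update

#define DECLARE_MYSQL_THDVAR_BASIC(name, type) struct { \
  MYSQL_PLUGIN_VAR_HEADER;                              \
  int offset;                                           \
  const type def_val;                                   \
  THDVAR_FUNC(type) resolve;                            \
} MYSQL_SYSVAR_NAME(name)

#define MYSQL_THDVAR_BOOL(name, opt, comment, check, update, def) \
  DECLARE_MYSQL_THDVAR_BASIC(name, my_bool) = {                   \
    PLUGIN_VAR_BOOL | PLUGIN_VAR_THDLOCAL | (opt),                \
    #name, comment, check, update, -1, def, NULL }
```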

When run through the pre-processor our declaration of “strict_mode” is expanded as follows:

A global object of an anonymous struct type has been defined by the macro MYSQL_THDVAR_BOOL. As seen above, the offset is initialized to an invalid value of -1. The offset is set to the correct value at run time, when the plugin variable is registered with the server process (refer to register_var()). Using this offset we can access the session value of the plugin variable. If you look at the Quick Answer section now, it should be clearer.

If we want to inspect the session value of another plugin variable innodb_optimize_point_storage then the following gdb command can be used:
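Assuming innodb_optimize_point_storage is likewise declared with MYSQL_THDVAR_BOOL (so that its global object would be named mysql_sysvar_optimize_point_storage), the command would look like:

```gdb
(gdb) print *(my_bool *) (thd->variables.dynamic_variables_ptr
                          + mysql_sysvar_optimize_point_storage.offset)
$2 = 0
```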

The above output indicates that innodb_optimize_point_storage is disabled.  I’ll leave it as an exercise for the reader to inspect innodb_support_xa.

The THDVAR macro

The previous sections explained the layout of the plugin variables and showed how to access their session values manually through a debugger like gdb. If you are a MySQL developer writing code, you should instead use the THDVAR() macro to access the session value of a plugin variable. To access the innodb_strict_mode value, one would write code as follows:
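A sketch (THDVAR and the session handle thd come from the plugin headers and the surrounding handler code):

```c
/* THDVAR(thd, strict_mode) resolves the session value through the
   variable's global object, hiding the offset arithmetic shown above. */
if (THDVAR(thd, strict_mode)) {
  /* enforce strict checking of CREATE TABLE options ... */
}
```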


This article provided information to help in quickly identifying the session value of a plugin variable within a mysqld core file. It also provided some details regarding the layout of the plugin variables and the macros to use when you declare, define, and access them. Comments and feedback are always welcome. Thank you for using MySQL!




Everything about MySQL Users and Logins You Didn’t Know and Were Afraid to Ask

Logging into a MySQL server is generally dead simple—supply a username, a password, and you’re all set!

There are, however, more complex use cases such as when making use of our Enterprise Authentication plugins. It’s also sometimes helpful to have a more detailed understanding of what happens “under the hood”. So I’ll attempt to lift the hood and walk you through all of the nitty-gritty details regarding exactly just what happens when you log into a MySQL server.

Firstly, there are no fewer than four “users” involved in the authentication process. There is also a distinction between a user id and a user account. And it gets more advanced with each release. So I thought I’d take the time to walk you through the MySQL authentication process and clarify once and for all the stages involved, the information exchanged at each stage, and the expected outcome.

I’ll use the tools that come with the server for my examples, but I’m sure it can be extended to most of the existing clients.

It all starts with the application, e.g. the mysql command line tool.

To open a connection it calls mysql_real_connect() and supplies, among other parameters,  a user id and a password. Let’s call this user id “application user name“.
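With the C client library, this first step looks roughly like the following (the host, port, and error handling here are illustrative, not prescribed by the article):

```c
#include <stddef.h>
#include <mysql.h>  /* MySQL C API client header */

/* Open a connection as the given application user name. */
MYSQL *connect_as(const char *user, const char *password)
{
  MYSQL *conn = mysql_init(NULL);
  if (conn == NULL)
    return NULL;

  /* "user" is what this article calls the application user name. */
  if (mysql_real_connect(conn, "localhost", user, password,
                         NULL /* no default db */, 3306,
                         NULL /* default socket */, 0) == NULL) {
    mysql_close(conn);
    return NULL;
  }
  return conn;
}
```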

Now the client library opens a network connection to the server. It then reads the server hello packet.

The client library takes from the server hello packet the data coming from the server’s default authentication plugin, combines them with the password supplied by the application and calls the client’s default authentication plugin.

The client’s default authentication plugin uses the information to create the authentication data to be sent into the client’s hello packet, together with the application user name.

Now the server has all it needs to begin its own authentication process:

  • The application user name
  • The host from which the client is connecting. We’ll call this the “application user host”
  • The name of the client’s default authentication plugin
  • The first batch of authentication data sent by the client authentication plugin
  • The server’s user account table

The server first stores the application user name into the security context so it can later be retrieved through the USER() SQL function.

Next it needs to find the user account to authenticate against. A user account is the object created by CREATE USER or GRANT, that also can have privileges granted to it.

MySQL user accounts are user name@host combinations, and we happen to have such a combination at this point. We’ll call the combination of “application user name”@“application user host” the “application user”.

Now the server iterates over the user accounts in “specificity order” and tries to match the application user against one of them.

Specificity order means that it starts from the most specific ones (no wildcards), then moves to the mixed ones (letters and wildcards), and finally scans the least specific ones, consisting of wildcards only.

This is where we need to mention the infamous anonymous account. An anonymous account is defined as having an empty user name (“”). This means that there can be several anonymous accounts: e.g. “”@localhost, “”@””, “”@”%”, etc.

After the matching process is done we end up with a user account of the form: <user name mask>@<host name mask>.

Since it’s a wildcard match the user account may or may not be equal to the application user.

E.g. one can expect that the application user joro@localhost will match the user account “joro”@”%”.

Of course if there was another user account “joro”@”l%” the application user “joro”@”localhost” would have matched that since it’s more specific than the “joro”@”%” user account.
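You can observe the distinction from within such a session (account names taken from the example above; the output layout is illustrative):

```sql
-- Connected as joro from localhost, matched against "joro"@"%":
SELECT USER(), CURRENT_USER();
-- +----------------+----------------+
-- | USER()         | CURRENT_USER() |
-- +----------------+----------------+
-- | joro@localhost | joro@%         |
-- +----------------+----------------+
```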

What the server does next is to store the user account found into the session’s security context so it can be returned later. We’ll call this account “authentication user account“.

Since each MySQL user account defines what authentication plugin to use to authenticate to it the server now knows what server authentication plugin to call to conclude the authentication process.

If the selected server authentication plugin is compatible with the default client authentication plugin it can work with the data already submitted in the client hello packet. In this case the server just passes along the data already received to the server’s authentication plugin.

If they are not compatible then the server needs to ask the client to re-start the authentication dialog using another client authentication plugin that’s compatible with the selected server authentication plugin.

Either way, the client/server authentication plugin pair concludes the authentication process and the server plugin returns the following pieces of information to the server:

  1. Result of the authentication (success or failure)
  2. The user name of the external user account used by the plugin to conclude the authentication against an external repository
  3. A proxy user name this session should use instead of the authentication user account name.

#1 is obvious. It’s a “yes” or “no”.

The server stores #2 in the system variable @@external_user. Since it is neither a MySQL user account nor an application login name, it is informational only and can be used to track the plugin’s access to third-party user directories. Currently @@external_user is used by the Windows authentication plugin to return the Windows user account name that was used (since it can be different from the supplied application user id).

Now #3 is a bit less straightforward. The MySQL server supports the concept of proxy users. The authentication plugin for the authentication user account can “redirect” the session to use the credentials of a more suitable MySQL user account based on some authentication plugin logic. This allows external selection of MySQL user accounts relevant for the external user at hand.

Why is this useful? Imagine you want to use UNIX as a back-end to authenticate your MySQL server users. But you may have thousands of UNIX users. You can’t define a corresponding MySQL user account for each of them! That would defeat the purpose, since you’d have to maintain both sets in parallel and keep them in sync.

You can of course define a wildcard account (e.g. “”@”%”) that all the application user names will match. But this gives you only a single set of MySQL permissions (the ones defined for the wildcard account) to work with. And you definitely don’t want all of your external users to share the same MySQL permissions!

Proxy users to the rescue. You define a handful of “proxy” user accounts that carry different permission sets. You then make your authentication plugin return one of these when it sees that a UNIX account is a member of a certain UNIX group. Now all you have to do is make your UNIX users members of the appropriate groups, and they map to the relevant MySQL “proxy” user account through the magic of the proxy users support.
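A sketch of such a setup. The plugin name and the format of its authentication string are illustrative (they depend on the authentication plugin in use); only the GRANT PROXY statements are generic:

```sql
-- A catch-all account authenticated externally; the plugin maps
-- UNIX group membership to one of the proxy accounts below.
CREATE USER ''@'' IDENTIFIED WITH authentication_pam
  AS 'mysqld, dba=dba_proxy, developers=dev_proxy';

-- The "proxy" accounts that actually carry the permission sets:
CREATE USER 'dba_proxy'@'localhost';
GRANT ALL PRIVILEGES ON *.* TO 'dba_proxy'@'localhost';
CREATE USER 'dev_proxy'@'localhost';
GRANT ALL PRIVILEGES ON webapp.* TO 'dev_proxy'@'localhost';

-- Allow the catch-all account to impersonate them:
GRANT PROXY ON 'dba_proxy'@'localhost' TO ''@'';
GRANT PROXY ON 'dev_proxy'@'localhost' TO ''@'';
```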

Note that the proxy user name defaults to the application user name. So if the plugin chooses not to submit a different one the server knows that no proxying will be used and can conclude the authentication and use the authentication user account.

Note that the proxy user name is only a user name. The application user host is reused. We’ll call the “proxy user name”@”application user host” couple “proxy user“.

The proxy user is also matched in specificity order to the list of MySQL user accounts until an account is found. We’ll call this a “proxy user account“.

Now the server has the MySQL user account to use for the session. It’s either the authentication user account or the proxy user account.

The server stores this account into the security context so that it can be returned through the CURRENT_USER() SQL function.

And, if a proxy user account is used, the authentication user account is stored into the @@proxy_user system variable.

So to recap, when a new session is created we have the following SQL functions and system variables:

  • USER(): the application user, i.e. the combination of the application user name and the application user host.
  • @@EXTERNAL_USER: the non-MySQL related user id that was used by the plugin to authenticate. Plugin specific.
  • CURRENT_USER(): the MySQL user account whose privileges the session uses:
    • if proxying was used this is the selected proxy user account
    • if no proxying was used this is the selected authentication user account
  • @@PROXY_USER:
    • if proxying was used this is the authentication user account
    • if no proxying was used it’s empty (NULL)

Now you know everything there is to know about users, logins and accounts in MySQL.

The Story of One Contribution

The MySQL Server 5.7.5 Development Milestone Release includes support for acquiring multiple user-level locks within the same connection. The implementation of this feature is based on a contributed patch by Konstantin Osipov. This post tells the story about what happened with this patch on its way into the MySQL Server codebase.

If you are more interested in using this new functionality and the feature itself, rather than in the history behind it, then it is better to simply read the corresponding entry in the Release Notes or the updated documentation for the GET_LOCK() function.

Requests to extend the semantics of user-level locks in MySQL to support acquisition of multiple locks in the same connection have been around at least since 2003 (see bug#1118).

But for a long time we lacked crucial infrastructure for implementing this change. In MySQL Server 5.5 we at last introduced the metadata locking (MDL) subsystem — an engine-independent lock manager with support for deadlock detection. This finally made the implementation of the feature request feasible. Unfortunately, tasks with higher priority took precedence and support for multiple user-level locks didn’t get attention in either the 5.5 or in the 5.6 release.

Naturally, we were really happy when our former colleague Konstantin Osipov came up with a patch implementing this feature and was kind enough to contribute his patch to MySQL Server.

So what happened after the necessary steps (as described in this post by Erlend Dahl)—like signing the OCA, reporting a bug (bug #67806) and adding the patch to it—were all completed by Konstantin?

First of all, even though this was a contribution we still needed to provide a proper design document for it. There are several reasons why:

  • Having a design document allows us to perform a proper design review without diving too much into implementation details and see if there are issues with the design or if there is some nice-to-have functionality which might be missing.
  • The design document serves as input for the QA Team for building test plans.
  • The design document also serves as input for the Documentation Team when time comes to document the changes implemented.

In this case we simply changed the existing WorkLog #1159 “Allow multiple locks in GET_LOCK()” to follow the implementation in the contributed patch.

After that the design was reviewed, review comments discussed and adjustments to the document and the design were made. A few examples of issues which we identified while working on the design document for this particular contribution:

  • It is a bad idea to use ER_LOCK_DEADLOCK for deadlocks caused by user-level locks as this error code implies transaction rollback, which doesn’t happen in the case of user-level locks.
  • It would be nice to have a function which releases all locks held by a connection. So we decided to add RELEASE_ALL_LOCKS().
  • Existing code always assumed that lock names are in UTF-8 and compared them in case-insensitive fashion. We decided that names provided in a different character set need to be converted to UTF-8, and that the case-insensitive comparison should be preserved.
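The resulting behavior, for illustration (timeouts are in seconds; a return value of 1 means the lock was granted):

```sql
SELECT GET_LOCK('lock1', 10);   -- returns 1
SELECT GET_LOCK('lock2', 10);   -- returns 1; 'lock1' is still held
SELECT RELEASE_ALL_LOCKS();     -- returns 2, the number of locks released
```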

Then the patch was adjusted according to the updated design (some contributions are even totally rewritten as a result of this process) and the code was polished. For example, we decided to use a somewhat different API for finding the lock owner in the metadata locking subsystem instead of the one provided in the patch contributed. Sometimes we might find bugs in the old code during this step. In this case we found bug #73123 “Possible access to freed memory in IS_FREE_LOCK() and IS_USED_LOCK()”.

Around the same time the QA team reviewed the design, provided their comments, and prepared the test plan. Occasionally design and code need to be changed as a result of these discussions. This didn’t happen in this case though. Test cases were extended accordingly.

Code review happened. Issues discovered during it were fixed. Finally, the patch was approved by the reviewer and passed into the hands of the QA team.

The QA team ran the tests according to their test plan (including performance benchmarks and RQG runs) and checked code coverage. Often issues are discovered which need to be fixed at this stage. In this particular case no issues were found. We also checked that the code builds and passes tests on all platforms we support.

Once the QA team said that patch was good enough to go into trunk, it was then pushed into the main MySQL development branch (MySQL 5.7).

The work on the task/patch didn’t stop there. After it was pushed, the Documentation Team updated the documentation describing the related functionality. See the updated descriptions of the GET_LOCK() and RELEASE_ALL_LOCKS() functions in our manual. An entry in the 5.7.5 Release Notes was also added, and the corresponding feature requests were closed in the bug system.

And only after these final steps did the work on this contribution come to completion.

A big THANK YOU is owed to several people:

  • First of all, to Konstantin Osipov for his contribution!
  • Rune Humborstad and Jon Olav Hauglid for reviewing the design and the code.
  • Matthias Leich for doing the QA review of the design and performing the QA work.
  • Paul Dubois for documenting the changes.

Resizing the InnoDB Buffer Pool Online

As described in the MySQL 5.7 manual here, we can now resize the buffer pool without restarting the mysqld process starting with MySQL 5.7.5.

You can now use the "SET GLOBAL innodb_buffer_pool_size = xxxx" command, which causes a resizing job to begin in the background. We can monitor the progress and state of the resizing work using the Innodb_buffer_pool_resize_status variable and the server error log.
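For example (the size shown is arbitrary):

```sql
-- Request a new total size; the server may round the value up:
SET GLOBAL innodb_buffer_pool_size = 6442450944;  -- 6 GiB

-- Watch the progress of the background resize job:
SHOW STATUS LIKE 'InnoDB_buffer_pool_resize_status';
SELECT @@innodb_buffer_pool_size;
```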

In order to minimize the impact of the resulting memory allocation/freeing process, and thus avoid huge memory copy tasks that would block other transaction processing tasks, the resizing is done in fixed-size chunks. And because InnoDB is designed to use equally sized buffer pool instances, the total buffer pool size is rounded up to a multiple of [innodb_buffer_pool_instances × innodb_buffer_pool_chunk_size].

All allocate/free/search calls for pages within the buffer pool are only blocked during the critical path in the resizing operation. The critical path consists of the following parts:

  • Allocating memory from the OS and freeing memory back to the OS
  • Resizing internal hash tables (page_hash, zip_hash), if needed

The length of time spent in the critical path depends largely on the OS memory manager’s performance.

The preparation phase for a resizing operation that shrinks the buffer pool may potentially take a long time, but it doesn’t block other concurrent transaction processing related tasks. The preparation phase consists of the following parts:

  • Reduce the number of pages in order to fit within the new smaller size
  • Relocate the pages from chunks that must now be freed

Naturally, if there are too many dirty pages for the now smaller buffer pool, they will need to be flushed and then freed. This work could cause significant disk IO and thus could indirectly interfere with other transaction related tasks while it runs.

Also of note: if a transaction has locks on any of the pages in a chunk to be freed, then the relocation of those pages must wait for the transaction to end. High transaction throughput or long-running transactions can therefore block the buffer pool resize operation. Any resulting long waits are logged to the server error log, along with information about the transactions suspected of causing the operation to wait.

Lastly, there is also one limitation to note. During the buffer pool resizing process the adaptive hash index (AHI) is disabled. This is done because the AHI uses buffer pool pages and when active it’s very difficult to relocate the necessary pages.

That’s all for now. I look forward to your feedback on this new feature! If you have any questions or encounter any bugs, please do let us know by opening a support ticket or filing a bug. Thank you for using MySQL!