Using Perl to Send Tweets Stored in a MySQL Database to Twitter

Using Twitter can sometimes feel like driving downtown, screaming what you want to say out the window, and hoping someone hears you. There might be tens of thousands of people downtown, but your message will only be heard by a few. Because of this, your best bet is to repeat your message as often as possible.

Twitter is free, and if you want to reach as many people as possible, it’s another great tool for getting your message out there. But sending tweets on a schedule can be a pain. There are client programs available which allow you to schedule your tweets (Hootsuite is one I have used in the past). You can load your tweets in the morning and have the application tweet for you all day long. But you still have to load the application with your tweets, one by one.

A friend of mine asked me if there was a way to send the same 200 tweets over and over again, spaced out every 20 minutes or so. He has a consulting business, and just wants to build up a list of Twitter followers by tweeting inspirational quotes. If he tweets for twenty hours a day, and sends three quotes an hour, it will take him a little more than three days to burn through his 200 quotes. And he can always add more quotes or space out the tweets as necessary. I decided to write a Perl script to do this for him.

To start, we will need a MySQL database to store the tweets. I use the MySQL Workbench product as my client application for connecting to MySQL. From within Workbench, I can create my tweet database:
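In SQL terms, this step is a single statement; a minimal sketch (the database name tweets is an assumption, since the original statement isn’t reproduced here):

```sql
-- Hypothetical database name; adjust as needed.
CREATE DATABASE tweets;
```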

I will then need a table inside my database to store my tweets.
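Based on the columns described below (id, tweet, last_tweet_date, tweet_count, and tweet_length), the table definition might look like this; the exact types and sizes are assumptions, since the original DDL is not shown:

```sql
-- Hypothetical schema for the tweets table; column names follow the post,
-- but the types and sizes are assumptions.
CREATE TABLE tweets (
  id              INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  tweet           VARCHAR(256) NOT NULL,           -- the message text
  last_tweet_date DATETIME NULL,                   -- when it was last sent
  tweet_count     INT UNSIGNED NOT NULL DEFAULT 0, -- times it has been sent
  tweet_length    INT UNSIGNED NULL                -- length of the message
) ENGINE=InnoDB;
```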

The tweet messages will be stored in the tweet column, and the last date the tweet was sent will have a time stamp in the last_tweet_date column. When we perform a search to find the next tweet, we will simply sort our search by the last_tweet_date and then id, and limit our output to one tweet. After we send the tweet, we will update the last_tweet_date column and send that tweet to the end of the line. The script will also increment the tweet_count value (the number of times the tweet has been sent) and record the length of the tweet in tweet_length. I do not do any error checking in the script to make sure the tweet was sent, but any errors returned are printed.
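The select-and-update cycle just described could be sketched like this (table and column names as described above; the exact queries are an assumption):

```sql
-- Fetch the tweet that was sent longest ago (or never sent).
SELECT id, tweet FROM tweets
ORDER BY last_tweet_date, id
LIMIT 1;

-- After sending it, push that tweet to the end of the line.
UPDATE tweets
SET last_tweet_date = NOW(),
    tweet_count = tweet_count + 1
WHERE id = 42;  -- the id returned by the SELECT above
```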

We now need to insert some tweets into our table. Since my friend is going to be sending inspirational quotes, I found a few I can import. In order to make it easier for importing, all single quote marks (') will be replaced by the caret symbol (^). I can then swap these symbols inside the Perl script. You could use the backslash (\) before the single quote, but I prefer a single character substitution so I know how long the tweet will be.

I will also use the tilde (~) as a way to designate a carriage return in my tweet. The Perl script will replace the tilde with a carriage return (\n). Two tildes give me two carriage returns and a blank line.
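The two substitutions can be sketched with standard shell tools (the quote text here is a hypothetical row from the tweets table; the Perl script would do the same thing with its substitution operator):

```shell
# Replace each caret with a single quote, then turn each tilde into a newline.
# Two tildes in a row produce a blank line.
tweet='Don^t watch the clock; do what it does.~~- Sam Levenson'
printf '%s' "$tweet" | sed "s/\^/'/g" | tr '~' '\n'
```

With the example above, the output is the restored quote, a blank line, and the attribution on its own line.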

I also created a history table to store the tweet identification numbers. Each tweet is assigned a unique number by Twitter, and this is how you can access the tweet. I save this information so I can delete the tweets later using this number. I have included a short script for deleting tweets near the end of this post.
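A minimal history table might look like this (hypothetical; Twitter’s tweet IDs are large integers, hence the BIGINT column):

```sql
-- Hypothetical history table; stores the ID Twitter assigns to each sent tweet.
CREATE TABLE history (
  id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  tweet_id BIGINT UNSIGNED NOT NULL  -- Twitter's unique ID for the sent tweet
) ENGINE=InnoDB;
```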

You will need to register your application with Twitter and obtain your OAuth credentials: the consumer key, consumer secret, access token, and access token secret.

You will also need to register your mobile phone in order to link your Twitter account to your application. I have not figured out how to use this script with someone else’s account, as the instructions for scripting Perl for use with Twitter are not very thorough. I will try to add this at a later date.

Now that you have your application information and all of your tables created with data inserted, here is the Perl script for sending tweets. (You will need to install the necessary Perl modules that are used.)

In the subroutine ConnectToMySql, I store the MySQL login credentials in a text file one directory below where my Perl script is located. This file contains the following information:
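One possible layout for that credentials file (the field names and their order are assumptions; the script parses whatever format it expects):

```
host=localhost
db=tweets
user=tweet_user
password=your_password
```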

You can instead include your information inside the file if you prefer.

If you want to delete your tweets, you can create a script to access the tweets in your history table, and then delete them one at a time. Here is an example without the database connections:

Be sure to replace the value of the tweet ID variable with the value from the tweet you want to delete.

That’s it for now. Hopefully this was fun, interesting, and even useful! Thank you for using MySQL!


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] and on LinkedIn.

Creating and Restoring Database Backups With mysqldump and MySQL Enterprise Backup – Part 2 of 2

In part one of this post, I gave you a couple of examples of how to back up your MySQL databases using mysqldump. In part two, I will show you how to use MySQL Enterprise Backup (MEB) to create full and partial backups.

MySQL Enterprise Backup provides enterprise-grade backup and recovery for MySQL. It delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac & Solaris. To learn more, you may download a whitepaper on MEB.

MySQL Enterprise Backup delivers:

  • NEW! Continuous monitoring – Monitor the progress and disk space usage
  • “Hot” Online Backups – Backups take place entirely online, without interrupting MySQL transactions
  • High Performance – Save time with faster backup and recovery
  • Incremental Backup – Backup only data that has changed since the last backup
  • Partial Backup – Target particular tables or tablespaces
  • Compression – Cut costs by reducing storage requirements up to 90%
  • Backup to Tape – Stream backup to tape or other media management solutions
  • Fast Recovery – Get servers back online and create replicated servers
  • Point-in-Time Recovery (PITR) – Recover to a specific transaction
  • Partial restore – Recover targeted tables or tablespaces
  • Restore to a separate location – Rapidly create clones for fast replication setup
  • Reduce Failures – Use a proven high quality solution from the developers of MySQL
  • Multi-platform – Backup and Restore on Linux, Windows, Mac & Solaris

While mysqldump is free to use, MEB is part of MySQL’s Enterprise Edition (EE) – so you need a license to use it. But if you are using MySQL in a production environment, you might want to look at EE, as: “MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.”
    Before using MEB and backing up your database for the first time, you will need some information:

    Information to gather – Where to Find It – How It Is Used

    • Path to MySQL configuration file – Default system locations, hardcoded application default locations, or from the --defaults-file option in the mysqld startup script. – This is the preferred way to convey database configuration information to the mysqlbackup command, using the --defaults-file option. When connection and data layout information is available from the configuration file, you can skip most of the other choices listed below.
    • MySQL port – MySQL configuration file or mysqld startup script. – Used to connect to the database instance during backup operations. Specified via the --port option of mysqlbackup. --port is not needed if available from the MySQL configuration file. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions.
    • Path to MySQL data directory – MySQL configuration file or mysqld startup script. – Used to retrieve files from the database instance during backup operations, and to copy files back to the database instance during restore operations. Automatically retrieved from the database connection for hot and warm backups. Taken from the MySQL configuration file for cold backups.
    • ID and password of privileged MySQL user – You record this during installation of your own databases, or get it from the DBA when backing up databases you do not own. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions. For cold backups, you log in as an administrative user. – Specified via the --password option of mysqlbackup. Prompted from the terminal if the --password option is present without the password argument.
    • Path under which to store backup data – You choose this. See Section 3.1.3 of the MySQL online manual, “Designate a Location for Backup Data,” for details. – By default, this directory must be empty for mysqlbackup to write data into it, to avoid overwriting old backups or mixing up data from different backups. Use the --with-timestamp option to automatically create a subdirectory with a unique name when storing multiple sets of backup data under the same main directory.
    • Owner and permission information for backed-up files (for Linux, Unix, and OS X systems) – In the MySQL data directory. – If you do the backup using a different OS user ID or a different umask setting than applies to the original files, you might need to run commands such as chown and chmod on the backup data. See Section A.1 of the MySQL online manual, “Limitations of mysqlbackup Command,” for details.
    • Size of InnoDB redo log files – Calculated from the values of the innodb_log_file_size and innodb_log_files_in_group configuration variables. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option. The size of the InnoDB redo log and the rate of generation of redo data dictate how often you must perform incremental backups.
    • Rate at which redo data is generated – Calculated from the values of the InnoDB logical sequence number at different points in time. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option. The size of the InnoDB redo log and the rate of generation of redo data dictate how often you must perform incremental backups.

    For most backup operations, the mysqlbackup command connects to the MySQL server through the --user and --password options. If you aren’t going to use the root user, then you will need to create a separate user. Follow these instructions for setting the proper permissions.

    All backup-related operations either create new files or reference existing files underneath a specified directory that holds backup data. Choose this directory in advance, on a file system with sufficient storage. (It could even be remotely mounted from a different server.) You specify the path to this directory with the --backup-dir option for many invocations of the mysqlbackup command.

    Once you establish a regular backup schedule with automated jobs, it is preferable to keep each backup within a timestamped subdirectory underneath the main backup directory. To make the mysqlbackup command create these subdirectories automatically, specify the --with-timestamp option each time you run mysqlbackup.

    For one-time backup operations, for example when cloning a database to set up a replication slave, you might specify a new directory each time, or specify the --force option of mysqlbackup to overwrite older backup files.

      If you haven’t downloaded and installed mysqlbackup, you may download it (registration is required). Install the MySQL Enterprise Backup product on each database server whose contents you intend to back up. You perform all backup and restore operations locally, by running the mysqlbackup command on the same server as the MySQL instance.

      Now that we have gathered all of the required information and installed mysqlbackup, let’s run a simple and easy backup of the entire database. I installed MEB in my /usr/local directory, so I am including the full path of mysqlbackup. I am using the backup-and-apply-log option, which combines the --backup and the --apply-log options into one. The --backup option performs the initial phase of a backup. The second phase is performed later by running mysqlbackup again with the --apply-log option, which brings the InnoDB tables in the backup up-to-date, including any changes made to the data while the backup was running.
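Such an invocation might look like the following sketch (the exact MEB install path under /usr/local and the credentials are assumptions; the backup directory matches the one used in this post):

```shell
# Full backup plus apply-log in one step; paths are illustrative.
/usr/local/meb/bin/mysqlbackup \
  --user=root --password \
  --backup-dir=/Users/tonydarnell/hotbackups \
  backup-and-apply-log
```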

      Now, I can take a look at the backup file that was created:

      As you can see, the backup was created in /Users/tonydarnell/hotbackups. If I want a unique folder for this backup, I can use the --with-timestamp option.

      The --with-timestamp option places the backup in a subdirectory created under the directory you specified above. The name of the backup subdirectory is formed from the date and the clock time of the backup run.

      I will run the same backup command again, but with the --with-timestamp option:
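As a sketch (same assumptions about paths and credentials as before):

```shell
# Same backup, but each run lands in its own timestamped subdirectory.
/usr/local/meb/bin/mysqlbackup \
  --user=root --password \
  --backup-dir=/Users/tonydarnell/hotbackups \
  --with-timestamp \
  backup-and-apply-log
```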

      (I am not going to duplicate the entire output, but will only show the part where it creates the sub-directory under /Users/tonydarnell/hotbackups.)

      So, I ran the backup again to get a unique directory. Instead of the backup files/directories being placed in /Users/tonydarnell/hotbackups, it created a sub-directory with a timestamp for the directory name:

      Note: If you don’t use the backup-and-apply-log option you will need to read this: Immediately after the backup job completes, the backup files might not be in a consistent state, because data could be inserted, updated, or deleted while the backup is running. These initial backup files are known as the raw backup.

      You must update the backup files so that they reflect the state of the database corresponding to a specific InnoDB log sequence number. (The same kind of operation as crash recovery.) When this step is complete, these final files are known as the prepared backup.

      During the backup, mysqlbackup copies the accumulated InnoDB log to a file called ibbackup_logfile. This log file is used to “roll forward” the backed-up data files, so that every page in the data files corresponds to the same log sequence number of the InnoDB log. This phase also creates new ib_logfiles that correspond to the data files.

      The mysqlbackup option for turning a raw backup into a prepared backup is apply-log. You can run this step on the same database server where you did the backup, or transfer the raw backup files to a different system first, to limit the CPU and storage overhead on the database server.

      Note: Since the apply-log operation does not modify any of the original files in the backup, nothing is lost if the operation fails for some reason (for example, insufficient disk space). After fixing the problem, you can safely retry the apply-log operation by specifying the --force option, which allows the data and log files created by the failed apply-log operation to be overwritten.

      For simple backups (without compression or incremental backup), you can combine the initial backup and the apply-log step using the option backup-and-apply-log.

      MEB 3.9 and later creates two .cnf files based on the output of SHOW GLOBAL VARIABLES: server-my.cnf (non-default values) and server-all.cnf (all values).

      Now that we have a completed backup, we are going to copy the backup files and the my.cnf file over to a different server to restore the databases. We will be using a server that was setup as a slave server to the server where the backup occurred. If you need to restore the backup to the same server, you will need to refer to the mysqlbackup manual. I copied the backup files as well as the my.cnf file to the new server:

      On the new server (where I will restore the data), I shut down the mysqld process (mysqladmin -uroot -p shutdown), copied the my.cnf file to the proper directory, and now I can restore the database to the new server using the copy-back option. The copy-back option requires the database server to be shut down already; it then copies the data files, logs, and other backed-up files from the backup directory back to their original locations, and performs any required postprocessing on them.
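A copy-back invocation might look like this sketch (the install path, defaults file, and backup directory are assumptions):

```shell
# With mysqld shut down on the target server, restore the backup in place.
/usr/local/meb/bin/mysqlbackup \
  --defaults-file=/etc/my.cnf \
  --backup-dir=/Users/tonydarnell/hotbackups \
  copy-back
```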

      I can now restart MySQL. I have a very small database (less than 50 megabytes), and it took less than a minute to restore it. If I had to rebuild my database using mysqldump, it would take a lot longer. If you have a very large database, the difference between using mysqlbackup and mysqldump could be measured in hours. For example, a 32-gigabyte database with 33 tables takes about eight minutes to restore with mysqlbackup. Restoring the same database from a mysqldump file takes over two hours.

      An easy way to check whether the databases match (assuming that I haven’t added any new records in any of the original databases – which I haven’t) is to use one of the MySQL Utilities – mysqldbcompare. I wrote about using it to test two replicated databases in an earlier blog, and it will work here as well – see Using MySQL Utilities Workbench Script mysqldbcompare To Compare Two Databases In Replication.

      The mysqldbcompare utility “compares the objects and data from two databases to find differences. It identifies objects having different definitions in the two databases and presents them in a diff-style format of choice. Differences in the data are shown using a similar diff-style format. Changed or missing rows are shown in a standard format of GRID, CSV, TAB, or VERTICAL.” (from: mysqldbcompare — Compare Two Databases and Identify Differences)

      Some of the syntax may have changed for mysqldbcompare since I wrote that blog, so you will need to reference the help notes for mysqldbcompare. You would need to run this for each of your databases.
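A typical invocation looks something like this (the server connection strings and database name are placeholders, not values from the post; check the current mysqldbcompare help for exact syntax):

```shell
# Compare the 'mydb' database on the original server with the restored copy.
mysqldbcompare \
  --server1=root:password@original-host:3306 \
  --server2=root:password@restored-host:3306 \
  mydb:mydb --run-all-tests
```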

      You can try and run this for the mysql database, but you may get a few errors regarding the mysql.backup_history and mysql.backup_progress tables:

      For example, when you compare the mysql.backup_history tables, the original database will have two entries – as I ran mysqlbackup twice. But the second backup entry doesn’t get entered until after the backup has occurred, and it isn’t reflected in the backup files.

      Original Server

      Restored Server

      For the mysql.backup_progress tables, the original database has ten rows, while the restored database has seven.

      There are many options for using mysqlbackup, including (but not limited to) incremental backup, partial backup, compression, backup to tape, point-in-time recovery (PITR), partial restore, etc. If you are running MySQL in a production environment, then you should look at MySQL Enterprise Edition, which includes MySQL Enterprise Backup. Of course, you should always have a backup and recovery plan in place. Finally, if and when possible, practice restoring your backup on a regular basis, to make sure that if your server crashes, you can restore your database quickly.



Creating and Restoring Database Backups With mysqldump and MySQL Enterprise Backup – Part 1 of 2

If you have used MySQL for a while, you have probably used mysqldump to back up your database. In part one of this blog, I am going to show you how to create a simple full and partial backup using mysqldump. In part two, I will show you how to use MySQL Enterprise Backup (which is the successor to the InnoDB Hot Backup product). MySQL Enterprise Backup allows you to back up your database while it is online and it keeps the database available to users during backup operations (you don’t have to take the database offline or lock any databases/tables – but to do this, you need to use the --no-locking option).

This post will deal with mysqldump. For those of you that aren’t familiar with mysqldump:

The mysqldump client is a utility that performs logical backups, producing a set of SQL statements that can be run to reproduce the original schema objects, table data, or both. It dumps one or more MySQL databases for backup or transfer to another SQL server. The mysqldump command can also generate output in CSV, other delimited text, or XML format.

The best feature of mysqldump is that it is easy to use. The main problem with using mysqldump occurs when you need to restore a database. When you execute mysqldump, the database backup (output) is an SQL file that contains all of the necessary SQL statements to restore the database – but restoring requires that you execute these SQL statements to essentially rebuild the database. Since you are recreating the database, the tables, and all of your data from this file, the restoration procedure can take a long time to execute if you have a very large database.

There are a lot of features and options with mysqldump – (a complete list is here). I won’t review all of the features, but I will explain some of the ones that I use.

Here is the command to use mysqldump to simply backup all of your databases (assuming you have InnoDB tables). This command will create a dump (backup) file named all_databases.sql.
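A command along these lines would do it (this is a sketch; option choices such as --single-transaction for InnoDB are assumptions about the original, elided command):

```shell
mysqldump --all-databases --single-transaction \
  --user=root --password > all_databases.sql
```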

After you hit return, you will have to enter your password. You can include the password after the --password option (example: --password=my_password), but this is less secure and you will get the following warning:

Here is some information about the options that were used:

If you are using Global Transaction Identifiers (GTIDs) with InnoDB (GTIDs aren’t available with MyISAM), you will want to use the --set-gtid-purged=OFF option. Then you would issue this command:
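A sketch of such a command (the surrounding options are assumptions, as in the earlier example):

```shell
mysqldump --all-databases --single-transaction --set-gtid-purged=OFF \
  --user=root --password > all_databases.sql
```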

Otherwise you will see this error:

You can also execute a partial backup of all of your databases. This example will be a partial backup because I am not going to back up the default databases for MySQL (which are created during installation) – mysql, test, PERFORMANCE_SCHEMA, and INFORMATION_SCHEMA.

Note: mysqldump does not dump the INFORMATION_SCHEMA database by default. To dump INFORMATION_SCHEMA, name it explicitly on the command line and also use the --skip-lock-tables option.

mysqldump never dumps the performance_schema database.

mysqldump also does not dump the MySQL Cluster ndbinfo information database.

Before MySQL 5.6.6, mysqldump does not dump the general_log or slow_query_log tables for dumps of the mysql database. As of 5.6.6, the dump includes statements to recreate those tables so that they are not missing after reloading the dump file. Log table contents are not dumped.

If you encounter problems backing up views due to insufficient privileges, see Section E.5, “Restrictions on Views” for a workaround.

To do a partial backup, you will need a list of the databases that you want to backup. You may retrieve a list of all of the databases by simply executing the SHOW DATABASES command from a mysql prompt:

In this example, since I don’t want to backup the default mysql databases, I am only going to backup the comicbookdb, coupons, scripts and watchdb databases. I am going to use the following options:

Here is the command that I will run from a prompt:
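A sketch of that command, using the database names listed above and the output file referenced below (the exact option set is an assumption):

```shell
mysqldump --databases comicbookdb coupons scripts watchdb \
  --single-transaction --user=root --password > partial_database_backup.db
```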

I will need to enter my password on the command line. After the backup has completed, if your backup file isn’t too large, you can open it and see the actual SQL statements that will be used if you decide that you need to recreate the database(s). If you accidentally dump all of the databases into one file, and you want to separate the dump file into smaller files, see my post on using Perl to split the dump file.

For example, here is the section of the dump file (partial_database_backup.db) for the comicbookdb database (without the table definitions). (I omitted the headers from the dump file.)

If you are using the dump file to create a slave server, you can use the --master-data option, which includes the CHANGE MASTER information, which looks like this:
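mysqldump writes a statement of this general shape near the top of the dump (the binary log file name and position shown here are illustrative, not values from the post):

```sql
CHANGE MASTER TO
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=154;
```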

If you used the --set-gtid-purged=OFF option, you would see the value of the Global Transaction Identifiers (GTIDs):

You may also test your backup without exporting any data by using the --no-data option. This will show you all of the information for creating the databases and tables, but it will not export any data. This is also useful for recreating a blank database on the same or on another server.

When you export your data, mysqldump will create INSERT INTO statements to import the data into the tables. However, the default is for the INSERT INTO statements to contain multiple-row INSERT syntax that includes several VALUES lists. This allows for a quicker import of the data. But, if you think that your data might be corrupt, and you want to be able to isolate a given row of data – or if you simply want to have one INSERT INTO statement per row of data – then you can use the --skip-extended-insert option. If you use the --skip-extended-insert option, importing the data will take much longer to complete, and the backup file size will be larger.

Importing and restoring the data is easy. To import the backup file into a new, blank instance of MySQL, you can simply use the mysql command to import the data:
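A sketch of the import (the dump file name matches the full-backup example earlier in the post):

```shell
# Import the dump into a new, blank MySQL instance.
mysql --user=root --password < all_databases.sql
```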

Again, you will need to enter your password or you can include the value after the -p option (less secure).

There are many more options that you can use with mysqldump. The main thing to remember is that you should back up your data on a regular basis, and move a copy of the backup file off the MySQL server.

Finally, here is a Perl script that I use in cron to backup my databases. This script allows you to specify which databases you want to backup via the mysql_bak.config file. This config file is simply a list of the databases that you want to backup, with an option to ignore any databases that are commented out with a #. This isn’t a secure script, as you have to embed the MySQL user password in the script.



InnoDB Full-Text Search: N-gram Parser

The default InnoDB full-text parser is ideal for Latin-based languages, where words are tokens delimited by whitespace or word separators. But in languages such as Chinese, Japanese, and Korean (CJK), there are no fixed delimiters between individual words, and each word can be made up of multiple characters. So for these languages we need a different way to handle word tokens.

We are very happy to announce that MySQL 5.7.6 provides a new pluggable full-text parser: an n-gram parser that can be used with CJK.

What exactly is an N-gram?

In full-text search, an n-gram is a contiguous sequence of n characters from a given string. For example, using n-grams we can tokenize the string “abcd” as follows:

n=1: 'a', 'b', 'c', 'd'
n=2: 'ab', 'bc', 'cd'
n=3: 'abc', 'bcd'
n=4: 'abcd'

How can I use the n-gram parser in InnoDB?

The new n-gram parser is loaded and enabled by default, so to use it you simply specify WITH PARSER ngram in the relevant DDL statements. For example, all of the following statements are valid in MySQL 5.7.6 and later.
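As an illustration (the table and column names are hypothetical; the WITH PARSER ngram clause is the only n-gram-specific part), a DDL statement of this kind would be:

```sql
CREATE TABLE articles (
  id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(200),
  body  TEXT,
  -- Use the n-gram parser for this full-text index.
  FULLTEXT INDEX ft_index (title, body) WITH PARSER ngram
) ENGINE=InnoDB;
```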

MySQL 5.7.6 also introduces a new global server variable called ngram_token_size (a token being roughly equivalent to a word of n characters). Its default value is 2 (bigram), and it can be set anywhere from 1 to 10. The next question is likely: how do I choose a token size? In the general case, 2, or bigram, is recommended for CJK, but you can choose a valid value by following this simple rule:

Rule: set the token size to the size of the largest token that you want to search for.

If you want to search for single characters, then you must set ngram_token_size to 1. The smaller ngram_token_size is, the smaller the index, and the faster full-text searches using that index will be. The drawback, however, is that it limits the size of the tokens you can search for. For example, the English phrase “Happy Birthday” translates to ‘生日高興’ in traditional Chinese (‘Happy’ == ‘高興’, ‘Birthday’ == ‘生日’). As in this example, each word/token consists of two characters, so to search for these tokens you must set ngram_token_size to 2 or greater.

Details of N-gram tokenization

The n-gram parser differs from the default full-text parser in the following ways:

  1. Token size: innodb_ft_min_token_size and innodb_ft_max_token_size are ignored. Instead, you specify ngram_token_size to control tokenization.
  2. Stopword handling: stopword processing is also a little different. Normally, if a tokenized word itself (an exact match) appears in the stopword table, it is not added to the full-text index. With the n-gram parser, however, each tokenized word is checked for whether it contains any stopword, and if it does, it is not indexed. The reason for this different behavior is that CJK has a great many meaningless characters, words, and punctuation marks that occur very frequently. Checking whether a token contains a stopword match filters out many more useless tokens.
  3. Whitespace: whitespace is always a hard-coded stopword. For example, ‘my sql’ is always tokenized as ‘my’, ‘y ’, ‘ s’, ‘sq’, ‘ql’, and the tokens containing a space (‘y ’ and ‘ s’) are not indexed.

By querying the INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE and INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE tables, you can check exactly which tokens are indexed in a particular full-text index. This is a very useful tool for debugging. For example, if a word does not show up in your full-text search results as expected, you can consult these tables to see why it was not indexed (stopword, token size, and so on). Here is a simple example.
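A sketch of that kind of inspection (the table name test/articles is hypothetical):

```sql
-- Point the InnoDB full-text diagnostic tables at a specific table...
SET GLOBAL innodb_ft_aux_table = 'test/articles';

-- ...then look at the tokens currently held in the index cache.
SELECT word, doc_id, position
FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE
LIMIT 10;
```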

Details of N-gram search processing

Text searches

In NATURAL LANGUAGE MODE, the search text is converted into a union of n-grams. For example, ‘sql’ is converted to ‘sq ql’ (with the default token size of 2, or bigram).

In BOOLEAN MODE, the search text is converted into an n-gram phrase search. For example, ‘sql’ is converted to ‘“sq ql”’.

Translator’s note: “sq ql” requires both ‘sq’ and ‘ql’ to match, in order, so rows matching only ‘sq’ or only ‘ql’ do not appear in the results.

Searches with wildcards

If the prefix is shorter than ngram_token_size, the search returns all rows containing n-gram tokens that start with that prefix.

If the prefix length is equal to or greater than ngram_token_size, a wildcard search is converted into a phrase search and the wildcard is ignored. For example, ‘sq*’ is converted to ‘“sq”’, and ‘sql*’ is converted to ‘“sq ql”’.

Phrase searches

A phrase search is converted into a phrase search of n-gram tokens. For example, the phrase “sql” is converted to “sq ql”.

If you want to learn more about InnoDB full-text search in general, please see the InnoDB Full-Text Index section of the user manual and Jimmy’s Dr. Dobb’s article. For more details about the n-gram parser, please see the N-gram parser section of the user manual.

We hope you find this new feature useful! We are also very happy that with MySQL 5.7, full-text search support now extends to CJK; this is one of the bigger improvements in MySQL 5.7. If you have any questions, please leave a comment on this blog or open a support ticket with Oracle. If you find any bugs, you can leave a comment here, file a bug report, or contact Oracle Support.

As always, thank you for using MySQL!

Upgrading Directly From MySQL 5.0 to 5.6 With mysqldump

Upgrading MySQL

Upgrading MySQL is a task that is almost inevitable if you have been managing a MySQL installation for any length of time. To accomplish that task, we have provided a utility and documentation to upgrade from one version of MySQL to another. The general recommendation is to perform the upgrade by stepping from one major release to the next, without skipping an intermediate major release. For example, if you are at 5.1.73, and you want to go to 5.6.24, the safest and recommended method is to upgrade from 5.1.73 to 5.5.43 (the latest 5.5 release at the time of this writing), and then upgrade from 5.5.43 to 5.6.24 (or any version of 5.6). This allows the upgrade process to apply the changes for one major release at a time.

We test upgrading from one version to the next quite extensively during each release cycle to ensure that all user data and settings are safely and successfully upgraded. In these cases, we run through an extensive set of test cases involving users, privileges, tables, views, procedures, functions, datatypes, partitions, character sets, triggers, performance schema, the mysql system schema, and more. We also create new test cases for every release as needed. We test and validate each topic at the initial, upgraded, and downgraded stages of the process. This also includes tests involving replication between versions. Validation includes ensuring the stability of the mysql server, reviewing integrity of the data, and testing functionality at all stages.

So What Are My Upgrade Options?

There are two basic options available when upgrading from one MySQL instance to another:

  1. Perform a ‘Dump Upgrade’
  2. Perform an ‘In-place Upgrade’

A ‘Dump Upgrade’ involves dumping the data from your existing mysql instance using mysqldump, loading it into a fresh MySQL instance running the new version (e.g. MySQL 5.7), then running mysql_upgrade. Alternatively, you can perform a purely logical upgrade by dumping only the user schema(s), and loading them into a fresh MySQL instance at the new version. In this case, mysql_upgrade is not necessary.

An ‘In-place Upgrade’ involves shutting down the existing (older) MySQL instance, upgrading the installed MySQL server package and binaries, starting up that (newer) instance using the new mysqld server binary, and then running mysql_upgrade.

It is always a good idea to take a backup of the database instance prior to making any changes. Before any upgrade process, be sure to read the related upgrade documentation for the version to which you are moving. This can include important tips and information about running the upgrade: upgrading to 5.1, upgrading to 5.5, upgrading to 5.6, or upgrading to 5.7.

There are potentially other methods for upgrading when using native OS packages; I will not cover those processes here. This article focuses on the dump-based upgrade, and the in-place upgrade will be covered in a follow-up article.

What If I Don’t Want to Upgrade Through Every Major Version?

We know that upgrading a MySQL installation can be a big proposition to undertake. With all the preparation, testing and dry runs required for a successful project, upgrades are not taken lightly. Because of the magnitude of this type of effort, we understand that some of our customers may not upgrade at every GA release. This leaves some customers with several major versions to hop through to get to the most current version. The recommended upgrade process can be very time-consuming, and sometimes take more resources, both human and machine, than a customer has.

With all of the interest around a faster upgrade path, we have been testing various upgrade options to see what works. I started upgrading from 5.0 to 5.6 to see what would happen, and to establish a baseline.

My starting MySQL instance was version 5.0.85 with all default settings and the sakila schema loaded.  I used Oracle Linux 7 as the platform.

I used the following steps to perform the upgrade:

  1. Start with a basic MySQL 5.0.85 server instance with the sakila schema loaded. I also used --no-defaults here for simplicity:
  2. Dump all databases/schemas from the existing mysql server using mysqldump:
  3. Initialize a new MySQL 5.6.24 server instance, including any new options or parameters. Again, I used --no-defaults here for simplicity. You can use a new data directory, port, and socket, or you can shut down the 5.0.85 server, clean out the datadir, and reuse those settings. Either way, the directory must be empty:
  4. Load the dump file into the new MySQL 5.6 server:
  5. Run mysql_upgrade (to get all the system tables upgraded):
  6. Load in the help tables (optional step):
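The steps above might look like the following sketch on the command line. The socket paths, port, and directory locations here are illustrative assumptions, not the exact commands from the original test run; note that --routines is deliberately absent from the mysqldump call, for the reason discussed below.

```shell
# 2. Dump everything from the running 5.0.85 instance
#    (the parameter choices are explained below).
mysqldump --no-defaults --all-databases --add-drop-table --force \
  --user=root --socket=/tmp/mysql50.sock > all_schemas.sql

# 3. Initialize and start a fresh 5.6.24 instance in an empty datadir.
scripts/mysql_install_db --no-defaults --basedir=/path/to/5.6.24 \
  --datadir=/path/to/5.6.24/data
bin/mysqld --no-defaults --basedir=/path/to/5.6.24 \
  --datadir=/path/to/5.6.24/data --socket=/tmp/mysql56.sock --port=3307 &

# 4. Load the dump into the new server.
mysql --no-defaults --user=root --socket=/tmp/mysql56.sock < all_schemas.sql

# 5. Upgrade the system tables.
mysql_upgrade --no-defaults --user=root --socket=/tmp/mysql56.sock

# 6. (Optional) reload the server-side help tables.
mysql --no-defaults --user=root --socket=/tmp/mysql56.sock mysql \
  < /path/to/5.6.24/share/fill_help_tables.sql
```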

With mysqldump, these are the parameters that I used, and why I include them:

  • --all-databases — this extracts data and definitions for all databases/schemas stored in the MySQL server except for performance_schema and information_schema.
  • --add-drop-table — this forces each table to be dropped and re-created during the load, so that an existing table definition that differs from the one in the dump file cannot cause a load failure.
  • --force — this will force the dump to continue in the case that there is an error.

You may be saying to yourself, why don’t you include --routines? That is a great question. There are two ways of getting the functions and procedures included in the dump file:

  1. Include them by using the --routines parameter. The CREATE statements are then included as part of the user schema dump.
  2. Include them by including the mysql.proc table itself (the system table where the routine definitions are stored) in the dump. This loads the functions and procedures as rows of the mysql.proc table: not CREATE statements for each routine, but INSERT statements into mysql.proc.

If you use the --routines option with mysqldump in these upgrade scenarios, you will get an error when trying to load the schema including functions and procedures because of an incompatibility of the mysql.proc table. The resulting error will look like this:

This error is related to code added to prevent a MySQL server crash when the mysql.proc table is not in the proper format. The reason this happens is that, although the new MySQL 5.6.24 server was initialized properly, the proc table was then reverted to an earlier format by the CREATE TABLE `proc` statement in the dump file that we subsequently loaded. Later in that same dump file were the definitions for procedures and functions in the sakila schema, and the definition of the proc table then differed from what MySQL 5.6 expected. This error can be avoided by omitting the --routines parameter from the mysqldump command. Including this option did trip me up in my testing, as I typically include --routines when I perform a dump of my user schemas.

The steps noted above for performing the ‘Dump Upgrade’ were successful in upgrading from 5.0.85 to 5.1.73, to 5.5.43, and to 5.6.24. Validation was done using mysqlcheck, running basic select/insert/update/delete statements on the user schema, and by executing/calling user functions and procedures.

What About a Logical Upgrade?

A variation of the full ‘Dump Upgrade’ method is to only pull in the user/application schema(s) into a freshly built MySQL 5.6.24 server (in our example). This allows you to skip running mysql_upgrade because all the system tables (the mysql schema) and procedures inside the MySQL server will be at the new 5.6.24 version. The steps are slightly different, and are as follows.

  1. Start with a basic MySQL 5.0.85 server instance with the sakila schema loaded. I used --no-defaults again here for simplicity:
  2. Dump only the user databases/schemas (skipping the mysql system schema)—which is only the sakila schema in our example—from the existing MySQL server using mysqldump:
  3. Initialize a new MySQL 5.6.24 server instance, including any new options or parameters. Again, I used --no-defaults here for simplicity. You can use a new data directory, port, and socket, or you can shut down the 5.0.85 MySQL server, clean out the data dir, and reuse those settings. Either way, the data directory must be empty:
  4. Load the dump file into the new MySQL 5.6 server instance:
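Steps 2 and 4 can be sketched as follows; the socket paths are again illustrative assumptions:

```shell
# 2. Dump only the user schema(s), including their routines.
mysqldump --no-defaults --databases sakila --add-drop-table --routines --force \
  --user=root --socket=/tmp/mysql50.sock > sakila.sql

# 4. Load the dump into the new 5.6.24 instance. Because the system tables
#    were freshly created at 5.6.24, no mysql_upgrade run is required.
mysql --no-defaults --user=root --socket=/tmp/mysql56.sock < sakila.sql
```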

With mysqldump in this scenario, these are the parameters that I use, and why I include them:

  • --databases — this extracts data and definitions for all databases included in the database list.
  • --add-drop-table — this forces each table to be dropped and re-created during the load, so that an existing table definition that differs from the one in the dump file cannot cause a load failure.
  • --routines — this will include the functions and procedures in the user schema dump. We need this because we are not including them with the mysql.proc dump.
  • --force — this will force the dump to continue in the case that there is an error.

One thing to keep in mind with this type of upgrade is that it does not load any user connection or privilege-related data, nor any server settings. All of that would need to be re-created separately.

Actually, I would recommend that anyone considering skipping several major versions use a pure logical upgrade. With a pure logical upgrade, no assumptions are made about metadata, and thus no mysql_upgrade step is needed. This increases the likelihood of success dramatically. The only downside is that it will not include users and privileges, and if you have archived data in a non-pure form you will need to load it into your old version and produce the pure logical dump from there.

In my next article, I will tackle the ‘In-Place Upgrade’.

That’s it for now. THANK YOU for using MySQL!

Spring Cleaning in the GIS Namespace

In MySQL 5.7.6 we’ve done some major spring cleaning within the GIS function namespace. We have deprecated 66 function names and added 13 new aliases. Please see the release notes for a complete list of all the changes. But why have we done this, and what impact does this have for you?


GIS is a growing area, and to keep MySQL up to speed we have made the GIS namespace more SQL/MM compliant. SQL/MM is an ISO standard that defines a set of spatial routines and schemas for handling spatial data. As part of this effort we have adopted the standard naming convention in which all spatial functions are prefixed with “ST_“. This also means that it is a lot easier to take your queries from other SQL/MM compliant database systems and run them on your MySQL databases.

Implementation Confusion

Almost all of the spatial functions in MySQL had two versions—one version prefixed with “ST_” and one prefixed with “MBR“. The “ST_” prefixed version does a precise calculation on the geometries, while the “MBR” version uses a minimum bounding rectangle (MBR) instead of the exact shape of the geometries. Because of this, the two versions would often give different answers, as seen in the example below:
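For instance, a point that lies outside a triangle but inside the triangle’s bounding rectangle makes the two variants disagree. This is a constructed illustration, not the example from the original post:

```sql
SET @poly = ST_GeomFromText('POLYGON((0 0, 4 0, 0 4, 0 0))');
SET @pt   = ST_GeomFromText('POINT(3 3)');

-- Precise check: the point lies outside the triangle.
SELECT ST_Contains(@poly, @pt);   -- returns 0

-- MBR check: the point lies inside the bounding rectangle (0,0)-(4,4).
SELECT MBRContains(@poly, @pt);   -- returns 1
```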

In addition to these two function name variants, a third name without any prefix sometimes existed as well. Which of the calculations does this function perform? Precise or with a minimum bounding rectangle? This wasn’t always clear or consistent, so in order to avoid this kind of confusion, all spatial functions without an “ST_” or “MBR” prefix are now deprecated.

However, according to and in compliance with the SQL/MM standard, exceptions to the above rule are the geometry construction functions: Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon and GeometryCollection. They have the same name as their corresponding column type, and will not be deprecated or receive an “ST_” prefix.

Inconsistencies in Error Reporting

Given all the function variants noted above, we also had some inconsistencies when it came to function naming and error reporting. An error within one of the Equals or MBREqual (yes, without the last s!) functions would report an error containing the string “MBREquals”, even though the function MBREquals did not exist! In addition, the GLength function did not follow any of the naming conventions we used. We have now cleaned all of this up so that the function names follow the SQL/MM standard, and the errors reported print the correct function name(s).

Okay, but What Do I Have to Do?

From MySQL 5.7.6 on, all of the deprecated functions raise a warning, and they will be removed in a future server release. Thus we strongly encourage you to ensure that all of your applications are using either the “ST_” prefixed function name variants or the “MBR” prefixed versions in MySQL 5.7. If you are unsure which prefixed variant you should replace your current call(s) with, please take a look at the specific warnings produced in each case, as they will tell you exactly which functions you should use instead.

We hope that all of this work helps to simplify your MySQL usage, bring us more into standards compliance, and further pave the way for MySQL playing a prominent part in the Open Source GIS database field.

That’s all for now. As always, THANK YOU for using MySQL!

MySQL 5.7 Labs — Inserting, Updating, and Deleting Records via HTTP

In the MySQL Labs release of MySQL 5.7, there is a new HTTP plugin. The plugin documentation from the Labs site provides this information:

The HTTP Plugin for MySQL adds HTTP(S) interfaces to MySQL. Clients can use the HTTP respectively HTTPS (SSL) protocol to query data stored in MySQL. The query language is SQL but other, simpler interfaces exist. All data is serialized as JSON. This version of MySQL Server HTTP Plugin is a Labs release, which means it’s at an early development stage. It contains several known bugs and limitation, and is meant primarily to give you a rough idea how this plugin will look some day. Likewise, the user API is anything but finalized. Be aware it will change in many respects.

In other words, with a simple HTTP URL, you can access and modify your data stored in MySQL. Here is an overview from the documentation:

The HTTP Plugin for MySQL is a proof-of-concept of an HTTP(S) interface for MySQL 5.7.

The plugin adds a new protocol to the list of protocols understood by the server. It adds the HTTP respectively HTTPS (SSL) protocol to the list of protocols that can be used to issue SQL commands. Clients can now connect to MySQL either using the MySQL Client Server protocol and programming language-dependent drivers, the MySQL Connectors, or using an arbitrary HTTP client.
Results for SQL commands are returned using the JSON format.

The server plugin is most useful in environments where protocols other than HTTP are blocked:
• JavaScript code run in a browser
• an application server behind a firewall and restricted to HTTP access
• a web services oriented environment

In such environments the plugin can be used instead of a self developed proxy which translates HTTP requests into MySQL requests. Compared to a user-developed proxy, the plugin means less latency, lower complexity and the benefit of using a MySQL product. Please note, for very large deployments an architecture using a proxy not integrated into MySQL may be a better solution to clearly separate software layers and physical hardware used for the different layers.

The HTTP plugin implements multiple HTTP interfaces, for example:
• plain SQL access including meta data
• a CRUD (Create-Read-Update-Delete) interface to relational tables
• an interface for storing JSON documents in relational tables

Some of the interfaces follow Representational State Transfer (REST) ideas, some don’t. See below for a description of the various interfaces.

The plugin maps all HTTP accesses to SQL statements internally. Using SQL greatly simplifies the development of the public HTTP interface. Please note, at this early stage of development performance is not a primary goal. For example, it is possible to develop a similar plugin that uses lower level APIs of the MySQL server to overcome SQL parsing and query planning overhead.

In this post, I will show you how to install the plugin and use HTTP commands to retrieve data. The documentation also provides other examples. We aren’t going to explain everything about the plugin here, so download the documentation for the full details.

First, you will need to download the MySQL Labs 5.7 release which includes the plugin, available from the MySQL Labs web site.

After MySQL 5.7 is installed, you will want to add these lines to your my.cnf/my.ini file under the [mysqld] section:

There are other options for the plugin, but we will skip them for this post.

After modifying the my.cnf/my.ini file, restart MySQL and then install the plugin from a mysql prompt. Before proceeding, verify that the plugin is installed:

We will need to create the user for accessing our database, and grant permissions:

We will need to create a table for our example. The table will be very simple, with three columns: an ID, a first name, and a last name:

We need to insert some data into the table:

Now that we have our table and table data, we can test a select statement with an HTTP URL. You may use a browser for this, but since I like to work with command line tools, I am going to use curl, a command line tool for doing all sorts of URL manipulations and transfers. Here is a simple select statement via curl. Use the plus sign (+) for spaces.

Select all of the names in the table:
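With the plugin’s SQL endpoint, such a query can be issued directly as a URL. The port, endpoint layout, credentials, and the names table used here are assumptions for illustration only; check the Labs documentation and your own configuration for the actual values:

```shell
# Hypothetical values: port 8080, schema "test", table "names",
# and the plugin's default basic-auth credentials.
curl --user basic_auth_user:basic_auth_passwd \
  "http://127.0.0.1:8080/sql/test/SELECT+*+FROM+names"
```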

If you want to use a browser, you might have to authenticate the connection (enter the user name and password):

And here is the output from submitting the URL in a browser:

Selecting a single name:

Deleting a row:

Inserting a row:

In a future post, I will show you how to use Perl to connect via HTTP and then parse the results. That’s all for now. THANK YOU for using MySQL!


Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. Tony may be reached at info [at] and on LinkedIn.

MySQL 5.7 — Native Systemd Support


Systemd is a management and configuration platform available in all major Linux distributions. It provides infrastructure for service start, stop, restart and several other novel functionalities to manage services. Systemd replaces SysV and upstart initialization systems and is the default init system in most modern Linux distributions including Red Hat Enterprise Linux, Oracle Linux, Debian, Ubuntu, Fedora, SLES and openSUSE.

Preliminary support for systemd was introduced in earlier versions of MySQL. However, it had the following limitations and disadvantages:

  1. Use of ping tricks to check whether mysqld is ready to serve client connections.
  2. Though systemd had superior process control for automatic restarts, mysqld_safe was still used to identify abnormal mysqld termination and do automatic restarts.
  3. There was no auto-detection of systemd configuration paths to generate service unit files for various distributions.
  4. There was no integration with our CMake build system.

To overcome the above limitations and disadvantages, some minimal support needed to be provided within the server itself. With the release of MySQL 5.7.6, these limitations have been overcome and complete support for systemd has been added on the systemd-based platforms Red Hat Enterprise Linux 7, Oracle Linux 7, CentOS 7, SLES 12, Fedora 20, and Fedora 21. Thus, MySQL 5.7.6 provides full native support for systemd, leveraging the latest functionality of all the major Linux distributions.

Command-line Option to Start mysqld in Daemon Mode

A new command-line option, --daemonize, has been added which, when enabled, performs a SysV-style daemon initialization. This enables us to use a forking-type service unit file for mysqld. Previously, we used a simple-type unit file for mysqld. This had the disadvantage that the mysqld service reached state=running before the server was actually ready to handle client connections, which could cause problems by denying client connections from follow-up services that depend on the mysqld service. Also, in some cases the server undergoes crash recovery (whose completion time can vary wildly, from sub-second times to many minutes) before it is actually ready to handle connections. This caused the client connections of follow-up units to time out and report failures.

With the forking-type service (which uses the --daemonize option), the disadvantage of using ping tricks to identify when mysqld is able to begin serving client connections is overcome. When mysqld is invoked with the --daemonize option, the parent process does a double fork, and the grandchild sets up the PID file and the listening socket, among other initialization steps. The grandchild then notifies the parent to exit, and this notifies systemd to set the service state to “running”.

Start, Stop, Restart and Status Functionality

We can now start, stop, and restart the MySQL server, and manage other functionality, using systemd. Installing the RPM package for the MySQL server will automatically enable the mysqld service. The systemctl command is the primary interface to control, query, and manage the functionality provided by systemd. After installing the MySQL server package, you can check whether mysqld is enabled this way:

$ systemctl is-enabled mysqld

If this prints enabled, mysqld is automatically started at boot time, or when you start the mysqld service after the package install. The appropriate data directory creation and initialization is automatically taken care of as well.

We can start the mysqld service using:

$ systemctl start mysqld

We can see the status of mysqld with:

$ systemctl status mysqld
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled)
   Active: active (running) since Mon 2015-04-06 06:56:56 BST; 13min ago
  Process: 1030 ExecStart=/usr/sbin/mysqld --daemonize $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 957 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 1071 (mysqld)
   CGroup: /system.slice/mysqld.service
           └─1071 /usr/sbin/mysqld --daemonize

We can restart mysqld with:

$ systemctl restart mysqld

We can stop mysqld with:

$ systemctl stop mysqld

Automatic Restarts

One of the major pieces of functionality offered by systemd is integrated process monitoring and automatic restarts in the event of a service failure/termination. Starting with MySQL 5.7.6, process monitoring and auto-restarts are now handled by systemd on systems that have it. If mysqld fails due to a restartable failure like a crash, then systemd automatically restarts mysqld. As an example, this can be verified by:

$ systemctl start mysqld
$ systemctl status mysqld
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled)
   Active: active (running) since Mon 2015-04-06 07:18:51 BST; 11s ago
  Process: 3010 ExecStart=/usr/sbin/mysqld --daemonize $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 2994 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 3014 (mysqld)
   CGroup: /system.slice/mysqld.service
           └─3014 /usr/sbin/mysqld --daemonize
$ kill -SEGV 3014
$ systemctl status mysqld
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled)
   Active: active (running) since Mon 2015-04-06 07:19:53 BST; 2s ago
  Process: 3057 ExecStart=/usr/sbin/mysqld --daemonize $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 3042 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 3061 (mysqld)
   CGroup: /system.slice/mysqld.service
           └─3061 /usr/sbin/mysqld --daemonize

Passing Custom Options to mysqld

mysqld_safe has a --malloc-lib option which allows you to specify a non-default malloc library (e.g. jemalloc). mysqld_safe then sets the LD_PRELOAD environment variable and starts mysqld under this environment setting. On systemd based machines, the mysqld service unit file has the EnvironmentFile key set to /etc/sysconfig/mysql. You can then use an alternative malloc library by adding an entry to the /etc/sysconfig/mysql file like this:

$ echo "LD_PRELOAD=/path/of/malloc/" >> /etc/sysconfig/mysql

You can also pass other custom options to mysqld by creating additional entries/lines in /etc/sysconfig/mysql using the MYSQLD_OPTS="option" format. Lastly, you can also specify custom options and environment variables using the systemctl command:

$ systemctl set-environment MYSQLD_OPTS="--general_log=1"

The custom options can also be unset using:

$ systemctl unset-environment MYSQLD_OPTS

You must then restart the mysqld service for the new environment or configuration settings to take effect.

Systemd Service and Support Files

Two systemd service configuration files and one support file are shipped with MySQL 5.7.6 in order to enable support for systemd:

  1. mysqld.service: This is the systemd service definition file that tells systemd which service to start; it specifies the auto-restart settings, the type of service, the dependencies between various units, and so on. Here is the content of the mysqld.service file that is now installed in /usr/lib/systemd/system on Fedora 21:
    Description=MySQL Server
    # Execute pre and post scripts as root
    # Needed to create system tables
    # Start main service
    ExecStart=/usr/sbin/mysqld --daemonize $MYSQLD_OPTS
    # Use this to switch malloc implementation
  2. mysql.conf: This file describes configuration settings such as the location of temporary files, their permission mode and ownership, and the age of tmpfiles related to the mysqld service. For example, this file is installed in /usr/lib/tmpfiles.d on Fedora 21 with these contents:
    d /var/run/mysqld 0755 mysql mysql -
  3. mysqld_pre_systemd: This is a bash script that generates the data directory when mysqld is started for the first time via systemctl. systemd runs this script before starting mysqld in order to check for the presence of a proper mysql directory within the specified data directory location. If it doesn’t exist, then mysqld is run with the --initialize option to create the data directory (this file is typically installed in /usr/bin). For more information on the new bootstrap process in MySQL 5.7, please see Joro’s blog post.
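Putting the fragments quoted above together, a representative mysqld.service file looks roughly like this. It is reconstructed for illustration; the exact keys and paths vary by distribution and release:

```ini
[Unit]
Description=MySQL Server
After=network.target syslog.target

[Install]
WantedBy=multi-user.target

[Service]
User=mysql
Group=mysql
Type=forking
PIDFile=/var/run/mysqld/mysqld.pid
# Execute pre and post scripts as root
PermissionsStartOnly=true
# Needed to create system tables
ExecStartPre=/usr/bin/mysqld_pre_systemd
# Start main service
ExecStart=/usr/sbin/mysqld --daemonize $MYSQLD_OPTS
# Use this to switch malloc implementation
EnvironmentFile=-/etc/sysconfig/mysql
Restart=on-failure
```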


MySQL 5.7.6 brings complete systemd support and is the first among the many variants of MySQL available in the marketplace to support systemd natively. We hope that you enjoy managing MySQL under systemd!

We also look forward to your feedback on this new work! You can leave a comment here on the blog post or in a support ticket. If you feel that you encountered any related bugs, please do let us know via a bug report. You can also find more details about managing MySQL 5.7.6 using systemd, and the systemd related build options, in the MySQL 5.7 docs.

As always, THANK YOU for using MySQL!

MySQL Enterprise Database Firewall — Control and Monitor SQL Statement Executions

As of MySQL 5.6.24, MySQL Enterprise Edition includes MySQL Enterprise Firewall, an application-level firewall (it runs within the mysql database process) that enables database administrators to permit or deny SQL statement execution based on matching against whitelists of accepted statement patterns. This helps harden MySQL Server against attacks such as SQL injection or attempts to exploit applications by using them outside of their legitimate query workload characteristics.

Each MySQL account registered with the firewall has its own whitelist of statement patterns (a tokenized representation of a SQL statement), enabling protection to be tailored per account. For a given account, the firewall can operate in recording or protecting mode, for training in the accepted statement patterns or protection against unacceptable statements. The diagram illustrates how the firewall processes incoming statements in each mode.

MySQL Enterprise Firewall Operation


If you do not have a MySQL Enterprise Edition license, you may download a trial version of the software via Oracle eDelivery. The MySQL Firewall is included in the MySQL Product Pack, specifically for MySQL Database 5.6.24 or higher.

MySQL Enterprise Firewall has these components:

  • A server-side plugin named MYSQL_FIREWALL examines SQL statements before they execute and, based on its in-memory cache, renders a decision whether to execute or reject each statement.
  • Server-side plugins named MYSQL_FIREWALL_USERS and MYSQL_FIREWALL_WHITELIST implement INFORMATION_SCHEMA tables that provide views into the firewall data cache.
  • System tables named firewall_users and firewall_whitelist in the mysql database provide persistent storage of firewall data.
  • A stored procedure named sp_set_firewall_mode() registers MySQL accounts with the firewall, establishes their operational mode, and manages transfer of firewall data between the cache and the underlying system tables.
  • A set of user-defined functions provides an SQL-level API for synchronizing the cache with the underlying system tables.
  • System variables enable firewall configuration and status variables provide runtime operational information.


Installing the Firewall

Installing the firewall is fairly easy. After you install MySQL version 5.6.24 or greater, you simply execute an SQL script that is located in the $MYSQL_HOME/share directory. There are two versions of the script, one for Linux and one for Windows (the firewall isn’t supported on the Mac yet).

The scripts are named win_install_firewall.sql for Windows and linux_install_firewall.sql for Linux. You may execute this script from the command line or via MySQL Workbench. For the command line, be sure you are in the directory where the script is located.

The script creates the firewall tables, functions, and stored procedures, and installs the necessary plugins. The script contains the following:

After you run the script, the firewall should be enabled. You may verify it by running this statement:
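One way to run such a check, using the firewall’s system variable (a sketch; the ON value shown assumes a successful install):

```sql
mysql> SHOW GLOBAL VARIABLES LIKE 'mysql_firewall_mode';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| mysql_firewall_mode | ON    |
+---------------------+-------+
```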

Testing the Firewall

To test the firewall, you may use a current MySQL user, but we are going to create a test user for this example: webuser@localhost. (The user probably doesn’t need all privileges, but for this example we will grant everything to this user.)

OPTIONAL: For our test, we will be using the sakila schema provided by MySQL. You may download the sakila database schema (requires MySQL 5.0 or later) from the MySQL web site. If you don’t want to use the sakila database, you may use your own existing database or create a new database.

After downloading the sakila schema, you will have two files, named sakila-schema.sql and sakila-data.sql. Execute the sakila-schema.sql first, and then sakila-data.sql to populate the database with data. If you are using the command line, simply do the following: (substitute UserName for a mysql user name)
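The two imports might look like this (UserName is a placeholder, as noted):

```shell
mysql -u UserName -p < sakila-schema.sql
mysql -u UserName -p < sakila-data.sql
```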

After creating the sakila schema and importing the data, we now set the firewall to record those queries which we want to allow:

We can check to see the firewall mode via this statement, to be sure we are in the recording mode:
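Both steps can be sketched with the firewall’s documented stored procedure and INFORMATION_SCHEMA table:

```sql
-- Put the account into recording (training) mode.
CALL mysql.sp_set_firewall_mode('webuser@localhost', 'RECORDING');

-- Confirm the mode for the account.
SELECT MODE FROM INFORMATION_SCHEMA.MYSQL_FIREWALL_USERS
 WHERE USERHOST = 'webuser@localhost';
```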

Now that we have recording turned on, let’s run a few queries:

We turn off the recording by turning on the protection mode:

We can check to see the firewall mode via this statement:

And we can look at our whitelist of statements:
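The mode switch and the whitelist inspection described above might look like this:

```sql
-- Stop recording and start enforcing.
CALL mysql.sp_set_firewall_mode('webuser@localhost', 'PROTECTING');

-- View the recorded statement patterns for the account.
SELECT RULE FROM INFORMATION_SCHEMA.MYSQL_FIREWALL_WHITELIST
 WHERE USERHOST = 'webuser@localhost';
```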

The firewall is now protecting against non-whitelisted queries. We can execute a couple of the queries we previously ran, which should be allowed by the firewall.

Now we run two new queries, which should be blocked by the firewall.

The server will write an error message to the log for each statement that is rejected. Example:

You can use these log messages in your efforts to identify the source of attacks.

To see how much firewall activity you have, you may look at the status variables:
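A sketch of that check, with illustrative counts:

```sql
mysql> SHOW GLOBAL STATUS LIKE 'Firewall%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Firewall_access_denied  | 2     |
| Firewall_access_granted | 4     |
| Firewall_cached_entries | 4     |
+-------------------------+-------+
```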

The variables indicate the number of statements rejected, accepted, and added to the cache, respectively.

The MySQL Enterprise Firewall Reference is found in the MySQL documentation.



SSL/TLS and RSA Improvements for OpenSSL Linked MySQL 5.7 Binaries


MySQL 5.7 server binaries compiled with the OpenSSL library now make it easy to set up SSL/TLS and RSA artifacts, and to enable them within MySQL. Two new read-only global options have been introduced through this work:

  • --auto-generate-certs: Enables automatic generation and detection of SSL artifacts at server start-up.
  • --sha256-password-auto-generate-rsa-keys: Enables automatic generation of an RSA key pair.

These options govern automatic generation and detection of SSL/TLS artifacts and RSA key pairs respectively. Auto generated files are placed inside the data directory, and both options now default to ON.

For the sha256_password authentication plugin, the private key and public key file locations already default to the data directory, and hence automatic detection of these files was already in place. Due to this existing functionality, the sole function of --sha256-password-auto-generate-rsa-keys relates to automatic key generation.


Using encrypted connections in communications with the server protects one’s data from the eyes of malicious entities while in transit. This is especially important when the server and clients are connected through open and/or insecure networks. While MySQL does provide a definitive guide to help users set up certificates and keys, one still needs to take the following steps in order to enable SSL/TLS manually within the MySQL server:

  1. Use the steps provided in the documentation to generate the certificates
  2. Move these certificates and keys to a secure location
  3. Update the MySQL server configuration file to specify the location of these certificates
  4. Start the MySQL server in order to use the new SSL artifacts
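The certificate-generation step above boils down to a handful of openssl commands. A rough sketch follows; the file names match MySQL's conventions, but the subject names and the ten-year validity period are illustrative assumptions, not the documentation's exact recipe:

```shell
# Step 1: create a self-signed CA key and certificate
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -nodes -days 3650 -key ca-key.pem \
    -out ca.pem -subj "/CN=Example_Auto_CA"

# Create the server key and a certificate request, then sign it with the CA
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
    -out server-req.pem -subj "/CN=Example_Server"
openssl x509 -req -in server-req.pem -days 3650 \
    -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

# Steps 2-4: move the files somewhere safe, point the configuration
# file at them (ssl-ca, ssl-cert, ssl-key), and restart mysqld.
```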

The case is similar when it comes to RSA keys. While the documentation helps you in generating an RSA key pair, using the newly generated key still requires steps similar to those mentioned above.
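The RSA key pair can likewise be produced with two openssl commands. The file names below match the sha256_password defaults; the 2048-bit key size is an assumption:

```shell
# Generate the private key, then extract the public key from it
openssl genrsa -out private_key.pem 2048
openssl rsa -in private_key.pem -pubout -out public_key.pem
```

Both files would then be moved to the locations the server's key-path options point at.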

Our aim is to make MySQL secure by default, while also making this secure environment easy to set up with very little user intervention. These new options are a step toward that goal. Because they default to ON, the server automatically generates any missing SSL/TLS artifacts and/or RSA key pair at startup, so it can accept secure connections immediately. This is convenient for users who want secure connections to the MySQL server without going through the trouble of generating the SSL/TLS artifacts and/or RSA key pairs themselves and then configuring the server to use them.

Note that the purpose of this functionality is to encourage users to connect to the server securely by making the initial secure configuration easy. For better security, it is strongly recommended that users later switch to a valid set of certificates signed by a recognized certificate authority as soon as possible, rather than continuing to use the auto-generated certificates indefinitely.


Auto-enabling SSL support

The --auto-generate-certs option kicks in if none of the SSL command line options (except --ssl, of course!) are specified. It works in the following manner:

  • Step 1: Check whether any of the SSL command line options except --ssl are specified. If so, the server skips automatic generation and tries to use the supplied options.
  • Step 2: Check for existing SSL/TLS artifacts in the data directory. If they exist, the automatic creation process is skipped and a message is logged.

    Note that we check for the presence of ca.pem, server-cert.pem, and server-key.pem files as these three files are essential for enabling SSL support within the MySQL server.
  • Step 3: If the certificate files are not present in the data directory then the new certificate files—ca.pem, server-cert.pem, and server-key.pem—are generated and placed within the data directory.
    Upon successful automatic generation of these files, the MySQL server logs a message confirming this.

From this set of generated files, ca.pem, server-cert.pem, and server-key.pem are used for the --ssl-ca, --ssl-cert, and --ssl-key options, respectively. These auto-generated files allow SSL/TLS support to be enabled automatically within the MySQL server from the get-go.

Furthermore, an extra X509 certificate and private key are generated, which can be used as the client certificate and key.
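A client can point at that extra certificate and key pair directly from its configuration file. A sketch of a [client] section follows; the /var/lib/mysql path is an assumption about where your data directory lives, and client-cert.pem/client-key.pem are the conventional names for the generated client files:

```ini
[client]
ssl-ca   = /var/lib/mysql/ca.pem
ssl-cert = /var/lib/mysql/client-cert.pem
ssl-key  = /var/lib/mysql/client-key.pem
```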

Some of the properties of the automatically generated certificates and keys are:

  • The RSA key is 2048 bits.
  • The certificates are signed using the sha256 algorithm.
  • The certificates are valid for 10 years.
  • The subject line of the certificates contains only the common name (CN) field.
  • The naming convention for the generated CN is MySQL_Server_Version_Auto_Generated_TYPE_Certificate, where MySQL_Server_Version is fixed at compile time and TYPE can be one of CA, Server, or Client, e.g. CN=MySQL_Server_X.Y.Z_Auto_Generated_Server_Certificate.
  • The new CA certificate is self-signed and other certificates are signed by this new auto generated CA certificate and private key.
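These properties can be checked on any of the generated files with openssl. The sketch below creates a local stand-in for ca.pem so it is self-contained; against a real server you would inspect the file in the data directory, and the CN used here is only an example:

```shell
# Create a CA-style certificate with the properties described above
# (2048-bit RSA, self-signed, ten-year validity, CN-only subject)
openssl req -new -x509 -nodes -days 3650 -newkey rsa:2048 \
    -keyout ca-key.pem -out ca.pem \
    -subj "/CN=MySQL_Server_X.Y.Z_Auto_Generated_CA_Certificate"

# Inspect the subject, validity dates, and key size
openssl x509 -in ca.pem -noout -subject -dates
openssl x509 -in ca.pem -noout -text | grep "Public-Key"
```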

Auto-enabling RSA support for the sha256_password authentication plugin

Much like auto-enabling SSL/TLS support, --sha256-password-auto-generate-rsa-keys is responsible for automatic generation of the RSA key pair. When the client tries to connect to the server using the sha256_password authentication plugin, a password is never sent in cleartext. By default, the sha256_password plugin attempts to use an SSL connection. If MySQL is built with OpenSSL, an additional option of using RSA encryption is also available to the client. The MySQL server exposes --sha256_password_private_key_path and --sha256_password_public_key_path, which can be used to point to an RSA private key and public key respectively at server startup.
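As a sketch, pointing the server at a non-default key location looks like this in my.cnf (the /etc/mysql-keys path is an illustrative assumption):

```ini
[mysqld]
sha256_password_private_key_path = /etc/mysql-keys/private_key.pem
sha256_password_public_key_path  = /etc/mysql-keys/public_key.pem
```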

The new --sha256-password-auto-generate-rsa-keys option works in the following manner:

  • Step 1: Check if a non-default value for either --sha256_password_private_key_path or --sha256_password_public_key_path is used. If so, the server will skip automatic generation and try to obtain keys from the specified location(s).
  • Step 2: If the default location is used for both of these options, then check if the private_key.pem and public_key.pem files are present in the data directory. If these files are found, then auto generation is skipped.
  • Step 3: Otherwise we generate the private_key.pem and public_key.pem files with a key length of 2048 bits. These files are then placed within the data directory and are picked up automatically by the server.

With the keys in place, RSA support for the sha256_password authentication plugin is available from the get-go.