Last updated August 13, 2015. Created on February 25, 2006.
Edited by othermachines, cthos, Matt V., skandalfish.

  • For a general overview read the High Performance MySQL book and/or the MySQL Performance Blog, especially the InnoDB performance basics article.
  • Start with an appropriate MySQL option file. For servers with at least 2GB of RAM dedicated to MySQL we recommend my-huge.cnf. For servers with a lot of writes, we recommend my-innodb.cnf, which configures InnoDB rather than the default MyISAM engine.
  • To reduce the overhead of creating connections we recommend using persistent DB connections. Note: the underlying mysql extension is deprecated as of PHP 5.5.0.
  • If a query is executed at least twice with no intervening modifications to the queried tables, MySQL can serve the result directly from its query cache, skipping both parsing and execution, which can give a significant performance improvement. To learn how to set up the query cache read a practical set-up.
  • Be sure to have enough cached threads or you will launch too many new threads as described in this story about a Yahoo site.
  • The biggest performance boosts can come from identifying and tuning the slowest queries using the MySQL slow query logs.
  • You can use the DB Maintenance module or use the mysqlcheck commands below in a cronjob.
    echo "OPTIMIZE TABLE accesslog,cache,comments,node,users,watchdog;FLUSH TABLES;" |mysql -u user -ppasswd

    If you have complete control of the database server you can use:

    mysqlcheck -o -A -p
  • MySQL supports many different engine types, including Archive (MySQL 5+), MyISAM, and InnoDB. High-performance sites should use InnoDB for most tables, particularly tables such as node that get a lot of both writes and reads. MyISAM takes an exclusive table-level lock for each update, and updates take priority over selects, whereas InnoDB uses row-level locking; under heavy write load MyISAM can therefore block reads. Convert from MyISAM to InnoDB.
  • MySQL 5 introduced the Archive engine type to handle common web-application workloads, such as access logs, where only INSERTs and SELECTs are performed. If tables such as the access log never receive DELETE, UPDATE, or REPLACE statements, this engine type can offer significant performance improvements, as it has for sites like Slashdot, Yahoo, and LiveJournal.
  • Make use of the Percona Toolkit, especially pt-variable-advisor to check for problem settings and pt-mysql-summary to get an at-a-glance look at your configuration.
  • Avoid RAID-5 in a high write environment. It can degrade performance significantly.
  • Consider changing the I/O scheduler if you are running on a dedicated MySQL server. The noop or deadline schedulers can yield significant performance gains over the anticipatory and cfq schedulers.
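Several of the recommendations above (InnoDB as the default engine, query cache, thread cache, slow query log) come together in the option file. The fragment below is a minimal sketch, assuming MySQL 5.1+; the values are illustrative starting points, not tuned recommendations, so adjust them for your hardware:

```ini
# my.cnf - illustrative fragment only; tune values for your server
[mysqld]
default-storage-engine = InnoDB
innodb_file_per_table            # one tablespace file per table
innodb_buffer_pool_size = 1G     # often the largest single win on a dedicated box

# Query cache: serves repeated identical queries without re-execution
query_cache_type = 1
query_cache_size = 64M

# Keep idle threads around to avoid thread-creation overhead
thread_cache_size = 16

# Log the slowest queries so they can be identified and tuned
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2
```

After editing, restart mysqld and confirm the values took effect with SHOW VARIABLES.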



Caleb G2’s picture

All the stuff this page contains led me to the info I've put together here (more on configuring mysql).

romca’s picture

I've experienced a very bad site shutdown due to too many opened (never closed) connections in MySQL. (Already posted elsewhere, but as I was looking for help in this section, I'll add it here as well.)

Reading and surfing around, I found that InnoDB tables can have this problem. One of the causes is very high traffic combined with lost connections (deadlock is a separate issue). Combined, these can easily push you past the max_user_connections limit, and that will render your site useless.

Some hints:
- use caching, if you can
- use the throttle module
- use the Bad Behavior module (this should shield you against spammers)
- set interactive_timeout so that your connections expire quickly

I have made these changes to the mysql_connect call:

the old version:
$connection = mysql_connect($url['host'], $url['user'], $url['pass'], TRUE) or die(mysql_error());

the new version:
$connection = mysql_connect($url['host'], $url['user'], $url['pass'], TRUE, MYSQL_CLIENT_INTERACTIVE) or die(mysql_error());
mysql_query("SET SESSION interactive_timeout=120", $connection);

To pick the timeout value, run phpinfo() and look at the connection timeout configured for MySQL, and also at the timeout of your Apache process (or IIS, if you use IIS).

Basically, after the timeout your script will be cut off, BUT the opened connection will still be there, eating into your max_user_connections. If you set the interactive_timeout limit, the connection will be closed automatically. (Note: MySQL has a global wait_timeout limit, but its default is often 28800 seconds, so the connection only expires after 8 hours!)
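A quick way to check whether idle connections are piling up, and what the current timeout settings are, is to query the server directly. A sketch using the standard mysql client (the credentials are placeholders; this needs a running server):

```shell
# Show the configured timeout variables (wait_timeout, interactive_timeout, ...)
mysql -u root -p -e "SHOW VARIABLES LIKE '%timeout%'"

# List current connections; the Time column shows how long each has been idle
mysql -u root -p -e "SHOW PROCESSLIST"

# Count currently open connections against max_user_connections
mysql -u root -p -e "SHOW STATUS LIKE 'Threads_connected'"
```

If Threads_connected keeps climbing while the Time column shows long-idle sleeping connections, the timeout settings above are the place to look.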


joetsuihk’s picture

There is a MySQL bug, 'SESSION interactive_timeout variable is useless', which means that setting 'interactive_timeout' per session does nothing at all.

reikiman’s picture

I have tried the OPTIMIZE command and that mysqlcheck command multiple times, and it usually results in a broken website. Why have I tried this multiple times? Well, sometimes I forget things, so, e.g., the other day I was browsing these Drupal tuning tips, saw the above discussion, saw it say to run this mysqlcheck command, and said... "okay, whatever, they're not going to lead me wrong"...

Well... I'd forgotten about the last time I did this, and that time I'd forgotten about the previous time, etc... my server has 6 web sites with data provided in a MySQL instance, and the mysqlcheck command "optimized" all those sites into a state of brokenness.

What happens is that some tables become broken or missing. The typical symptom is a report that, e.g., the cache or watchdog table cannot be opened because the underlying .MYI file is missing or corrupted. Eek. Okay, cache and watchdog are innocuous tables to lose, but other tables can be damaged as well.

On one of my sites every node was being shown as blank. The site knew the nodes were there, and listed all 200 or so pages of node teasers, but instead of the teaser content each one read "n/a".

The most beneficial thing to do if you follow the above advice and end up in the same situation is


This fixes most of the problems.

- David Herron -

mdowsett’s picture

How do you know which table is damaged, though?

yhager’s picture

Just run 'mysqlrepair' on the command line and hope for the best.

Most of the time it helped, but sometimes restoring from backup is your best bet... (my node_revisions table got damaged... *sigh*)
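For reference, mysqlrepair invokes the same binary as mysqlcheck, in repair mode. A sketch of the two usual forms (credentials and the database name are placeholders; repair applies to MyISAM tables, not InnoDB):

```shell
# Check all tables and repair any that are corrupted
mysqlcheck --repair --all-databases -u root -p

# Or repair a single known-damaged table from the mysql client
mysql -u root -p drupal -e "REPAIR TABLE node_revisions"
```

Always take a dump first; a failed repair on an already-damaged table can make things worse.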


1kenthomas’s picture

You did remember to backup before optimizing, right :P ?

jgoodwill01’s picture

Every time I try to change my mysql config file my Drupal site fails to load and gives this error.

Warning: Incorrect information in file: '.\access.frm' query: SELECT 1 FROM access WHERE type = 'host' AND LOWER('xxx.xx.x.xx') LIKE LOWER(mask) AND status = 0 LIMIT 0, 1 in ...\includes\ on line 128

Is there any way around this?

Also in the error list are these:

Notice: Undefined variable: variables in ...\includes\ on line 480

Warning: Incorrect information in file: '.\users.frm' query: SELECT u.*, s.* FROM users u INNER JOIN sessions s ON u.uid = s.uid WHERE s.sid = 's7lh6f63q0j96gvjr083bpatk2' in ...\includes\ on line 128

mikeytown2’s picture

Run this code; you can put it in a code block, as it only needs to run once. Be sure to remove the code block once the database has been converted.

$tables = db_query('SHOW TABLE STATUS');
while ($table = db_fetch_array($tables)) {
  if ($table['Engine'] == 'MyISAM') {
    db_query('ALTER TABLE {%s} ENGINE = InnoDB', $table['Name']);
  }
}

Odds are newly created tables will still be MyISAM; you need to change the default storage engine to fix that. Be aware that the size of your database will grow by about 1/4. Also be aware that for large databases, it will take time since it has to copy the data.

cjscullyca’s picture

Using InnoDB as the default storage engine can cause problems if not configured correctly.

In addition to adding "default-storage-engine = InnoDB" to your my.cnf file, you should also add "innodb_file_per_table". The default behavior of InnoDB is to store all database tables in a single ibdata file. Adding the line "innodb_file_per_table" causes InnoDB to store each new table in its own file, somewhat similar to the way the MyISAM engine does.

If you leave this line out, your ibdata file will grow rapidly and can eventually consume all available space on your disk. I learned this the hard way. The only way to convert existing tables from the single-tablespace ibdata setup to separate tablespaces is to dump the entire database with mysqldump, delete the huge ibdata file from disk, and restore the database from the dump. The InnoDB engine will then create the tables in individual tablespaces. It still uses the ibdata file for its internal data dictionary and undo logs.
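The dump-and-restore procedure described above might look like the following. This is a sketch only; the data-directory path and file names are assumptions that vary by distribution, and the credentials are placeholders, so verify everything against your own setup before deleting anything:

```shell
# 1. Dump everything while the server is still running
mysqldump -u root -p --all-databases --routines --triggers > full_dump.sql

# 2. Add innodb_file_per_table under [mysqld] in my.cnf, then stop the server
mysqladmin -u root -p shutdown

# 3. Remove the shared tablespace and its redo logs (paths vary by distro!)
rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

# 4. Restart the server and reload the dump; InnoDB recreates each table
#    in its own .ibd file
mysqld_safe &
mysql -u root -p < full_dump.sql
```

Keep the dump file somewhere safe until you have confirmed every site on the server works again.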

Read Innodb Performance Tuning Tips for additional InnoDB tuning information.

mcurry’s picture

Here's an interesting technique to recover space in the ibdata file after switching to InnoDB, if you neglected to use the innodb_file_per_table option before converting. It won't work if there are foreign keys, but might be helpful to the typical Drupal user.

atomicus’s picture

    $tables = db_query('SHOW TABLE STATUS');
    while ($table = $tables->fetchAssoc()) {
        if ($table['engine'] == 'MyISAM') {
            echo "Converting table: " . $table['name'] . "\n";
            db_query('ALTER TABLE ' . $table['name'] . ' ENGINE = InnoDB');
        }
    }
    die();

This will work with D7

ramkrk’s picture