Set - 6

Question 21 :

How MySQL Optimizes WHERE Clauses ?

Answer :

The WHERE optimizations are described here in the context of SELECT because they are mostly used with SELECT, but the same optimizations apply to WHERE in DELETE and UPDATE statements.

Also note that this list is not exhaustive; MySQL performs many optimizations, and not all of them are documented here.

Some of the optimizations performed by MySQL are listed below:

Removal of unnecessary parentheses:
((a AND b) AND c OR (((a AND b) AND (c AND d))))
-> (a AND b AND c) OR (a AND b AND c AND d)

Constant folding:
(a&lt;b AND b=c) AND a=5
-> b>5 AND b=c AND a=5

Constant condition removal (needed because of constant folding):
(B>=5 AND B=5) OR (B=6 AND 5=5) OR (B=7 AND 5=6)
-> B=5 OR B=6

Constant expressions used by indexes are evaluated only once.
COUNT(*) on a single table without a WHERE is retrieved directly from the table information. This is also done for any NOT NULL expression when used with only one table.
Early detection of invalid constant expressions. MySQL quickly detects that some SELECT statements are impossible and returns no rows (see the EXPLAIN sketch after this list).
HAVING is merged with WHERE if you don't use GROUP BY or group functions (COUNT(), MIN()...).
For each sub-join, a simpler WHERE is constructed to get a fast WHERE evaluation for each sub-join and also to skip records as soon as possible.
All constant tables are read first, before any other tables in the query. A constant table is:
An empty table or a table with 1 row.
A table that is used with a WHERE clause on a UNIQUE index, or a PRIMARY KEY, where all index parts are used with constant expressions and the index parts are defined as NOT NULL.
All the following tables are used as constant tables:
mysql> SELECT * FROM t WHERE primary_key=1;
mysql> SELECT * FROM t1,t2
WHERE t1.primary_key=1 AND t2.primary_key=t1.id;

The best join combination to join the tables is found by trying all possibilities. If all columns in ORDER BY and in GROUP BY come from the same table, then this table is preferred first when joining.
If there is an ORDER BY clause and a different GROUP BY clause, or if the ORDER BY or GROUP BY contains columns from tables other than the first table in the join queue, a temporary table is created.
If you use SQL_SMALL_RESULT, MySQL will use an in-memory temporary table.
Each table index is queried, and the best index that spans fewer than 30% of the rows is used. If no such index can be found, a quick table scan is used.
In some cases, MySQL can read rows from the index without even consulting the data file. If all columns used from the index are numeric, then only the index tree is used to resolve the query.
Before each record is output, those that do not match the HAVING clause are skipped.
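
One way to observe several of these optimizations is EXPLAIN. The sketch below assumes a table t with a primary key column id: for the first statement the Extra column reports "Impossible WHERE", and for the second the table is treated as a constant table (join type const).

mysql> EXPLAIN SELECT * FROM t WHERE 1=0;
mysql> EXPLAIN SELECT * FROM t WHERE id=1;
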
Some examples of queries that are very fast:

mysql> SELECT COUNT(*) FROM tbl_name;
mysql> SELECT MIN(key_part1),MAX(key_part1) FROM tbl_name;
mysql> SELECT MAX(key_part2) FROM tbl_name
WHERE key_part1=constant;
mysql> SELECT ... FROM tbl_name
ORDER BY key_part1,key_part2,... LIMIT 10;
mysql> SELECT ... FROM tbl_name
ORDER BY key_part1 DESC,key_part2 DESC,... LIMIT 10;

The following queries are resolved using only the index tree (assuming the indexed columns are numeric):

mysql> SELECT key_part1,key_part2 FROM tbl_name WHERE key_part1=val;
mysql> SELECT COUNT(*) FROM tbl_name
WHERE key_part1=val1 AND key_part2=val2;
mysql> SELECT key_part2 FROM tbl_name GROUP BY key_part1;
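
To check whether one of these queries really is resolved from the index tree alone, run EXPLAIN on it; "Using index" in the Extra column means the data file is not read. A sketch, reusing tbl_name and val from the examples above:

mysql> EXPLAIN SELECT key_part1,key_part2 FROM tbl_name WHERE key_part1=val;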

The following queries use indexing to retrieve the rows in sorted order without a separate sorting pass:

mysql> SELECT ... FROM tbl_name ORDER BY key_part1,key_part2,...
mysql> SELECT ... FROM tbl_name ORDER BY key_part1 DESC,key_part2 DESC,...
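
Likewise, when the index already delivers the rows in the requested order, EXPLAIN does not show "Using filesort" in the Extra column; if that value does appear, a separate sorting pass is being made. A sketch, assuming tbl_name has an index on (key_part1,key_part2):

mysql> EXPLAIN SELECT * FROM tbl_name ORDER BY key_part1,key_part2 LIMIT 10;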


Question 22 :

How MySQL Optimizes DISTINCT ?

Answer :

DISTINCT is converted to a GROUP BY on all columns. DISTINCT combined with ORDER BY will in many cases also need a temporary table.

When combining LIMIT # with DISTINCT, MySQL will stop as soon as it finds # unique rows.

If you don't use columns from all used tables, MySQL will stop scanning the unused tables as soon as it has found the first match.

SELECT DISTINCT t1.a FROM t1,t2 WHERE t1.a=t2.a;

In this case, assuming t1 is used before t2 (check with EXPLAIN), MySQL will stop reading from t2 (for that particular row in t1) when the first row in t2 is found.
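
To see which table MySQL reads first (and therefore where this early stop applies), run EXPLAIN on the query; the first table listed in the output is read first. Combined with LIMIT, MySQL also stops as soon as it has found the requested number of unique values. Using the tables from the example above:

mysql> EXPLAIN SELECT DISTINCT t1.a FROM t1,t2 WHERE t1.a=t2.a;
mysql> SELECT DISTINCT t1.a FROM t1,t2 WHERE t1.a=t2.a LIMIT 10;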


Question 23 :

How MySQL Optimizes LEFT JOIN and RIGHT JOIN ?

Answer :

A LEFT JOIN B in MySQL is implemented as follows:

The table B is set to be dependent on table A and all tables that A is dependent on.
The table A is set to be dependent on all tables (except B) that are used in the LEFT JOIN condition.
All LEFT JOIN conditions are moved to the WHERE clause.
All standard join optimizations are done, with the exception that a table is always read after all tables it is dependent on. If there is a circular dependence then MySQL will issue an error.
All standard WHERE optimizations are done.
If there is a row in A that matches the WHERE clause, but there wasn't any row in B that matched the LEFT JOIN condition, then an extra B row is generated with all columns set to NULL.
If you use LEFT JOIN to find rows that don't exist in some table and you have the following test: column_name IS NULL in the WHERE part, where column_name is a column that is declared as NOT NULL, then MySQL will stop searching for more rows (for a particular key combination) after it has found one row that matches the LEFT JOIN condition (see the example after this list).
RIGHT JOIN is implemented analogously to LEFT JOIN.
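
The point about NOT NULL columns is the usual way of finding rows in one table that have no match in another. A minimal sketch, assuming illustrative tables t1 and t2 where t2.t1_id is declared NOT NULL:

mysql> SELECT t1.* FROM t1 LEFT JOIN t2 ON t1.id=t2.t1_id WHERE t2.t1_id IS NULL;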

The table read order forced by LEFT JOIN and STRAIGHT_JOIN helps the join optimizer (which calculates in which order tables should be joined) do its work much more quickly, as there are fewer table permutations to check.
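
For example, when you already know a good read order for an ordinary join, you can force it with STRAIGHT_JOIN; the table and column names below are placeholders:

mysql> SELECT * FROM t1 STRAIGHT_JOIN t2 ON t1.id=t2.t1_id;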

Note that the above means that if you do a query of type:

SELECT * FROM a,b LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key) WHERE b.key=d.key

MySQL will do a full scan on b as the LEFT JOIN will force it to be read before d.

The fix in this case is to change the query to:

SELECT * FROM b,a LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key) WHERE b.key=d.key
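
To see which read order MySQL actually chooses for either form of the query, run EXPLAIN on it; the tables appear in the output in the order in which they are read:

mysql> EXPLAIN SELECT * FROM b,a LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key) WHERE b.key=d.key;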


Question 24 :

How MySQL Optimizes LIMIT ?

Answer :

In some cases MySQL will handle the query differently when you are using LIMIT # and not using HAVING:

If you are selecting only a few rows with LIMIT, MySQL will use indexes in some cases when it normally would prefer to do a full table scan.
If you use LIMIT # with ORDER BY, MySQL will end the sorting as soon as it has found the first # rows instead of sorting the whole table.
When combining LIMIT # with DISTINCT, MySQL will stop as soon as it finds # unique rows.
In some cases a GROUP BY can be resolved by reading the key in order (or doing a sort on the key) and then calculating summaries until the key value changes. In this case, LIMIT # will not calculate any unnecessary groups.
As soon as MySQL has sent the first # rows to the client, it will abort the query.
LIMIT 0 will always quickly return an empty set. This is useful to check the query and to get the column types of the result columns (see the example after this list).
MySQL uses the value of LIMIT # to calculate how much space is needed for the temporary tables used to resolve the query.
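
For example, the LIMIT 0 trick mentioned above returns the column structure of any query without fetching rows (tbl_name is a placeholder):

mysql> SELECT * FROM tbl_name LIMIT 0;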


Question 25 :

MySQL - Speed of INSERT Queries ?

Answer :

The time to insert a record consists approximately of:

Connect: (3)
Sending query to server: (2)
Parsing query: (2)
Inserting record: (1 x size of record)
Inserting indexes: (1 x number of indexes)
Close: (1)
where the numbers are somewhat proportional to the overall time. This does not take into consideration the initial overhead to open tables (which is done once for each concurrently running query).
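
As a rough worked example using these proportions: inserting one small record into a table with three indexes costs about 3 + 2 + 2 + 1 + 3 + 1 = 12 units, of which only 4 units are the actual record and index writes. Most of the cost is per-statement and per-connection overhead, which is why the batching techniques below help so much.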

The size of the table slows down the insertion of indexes by N log N (B-trees).

Some ways to speed up inserts:

If you are inserting many rows from the same client at the same time, use INSERT statements with multiple value lists, as shown below. This is much faster (many times faster in some cases) than using separate single-row INSERT statements.
If you are inserting a lot of rows from different clients, you can get higher speed by using the INSERT DELAYED statement.
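
The multiple value list form mentioned above looks like this (the table and column names are placeholders):

mysql> INSERT INTO tbl_name (col1,col2) VALUES (1,'a'),(2,'b'),(3,'c');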

Note that with MyISAM you can insert rows at the same time SELECTs are running if there are no deleted rows in the tables.
When loading a table from a text file, use LOAD DATA INFILE. This is usually 20 times faster than using a lot of INSERT statements.

It is possible with some extra work to make LOAD DATA INFILE run even faster when the table has many indexes. Use the following procedure:
Optionally create the table with CREATE TABLE. For example, using mysql or Perl-DBI.
Execute a FLUSH TABLES statement or the shell command mysqladmin flush-tables.
Use myisamchk --keys-used=0 -rq /path/to/db/tbl_name. This will remove all usage of all indexes from the table.
Insert data into the table with LOAD DATA INFILE. This will not update any indexes and will therefore be very fast.
If you are going to only read the table in the future, run myisampack on it to make it smaller.

Re-create the indexes with myisamchk -r -q /path/to/db/tbl_name. This will create the index tree in memory before writing it to disk, which is much faster because it avoids lots of disk seeks. The resulting index tree is also perfectly balanced.
Execute a FLUSH TABLES statement or the shell command mysqladmin flush-tables again.
This procedure will be built into LOAD DATA INFILE in some future version of MySQL.
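
Condensed into commands, the procedure above looks roughly like this; the paths, the data file name, and the table name tbl_name are placeholders, and the table is assumed to be a MyISAM table:

shell> mysqladmin flush-tables
shell> myisamchk --keys-used=0 -rq /path/to/db/tbl_name
mysql> LOAD DATA INFILE '/path/to/file.txt' INTO TABLE tbl_name;
shell> myisamchk -r -q /path/to/db/tbl_name
shell> mysqladmin flush-tables
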
You can speed up insertions by locking your tables:
mysql> LOCK TABLES a WRITE;
mysql> INSERT INTO a VALUES (1,23),(2,34),(4,33);
mysql> INSERT INTO a VALUES (8,26),(6,29);
mysql> UNLOCK TABLES;

The main speed difference is that the index buffer is flushed to disk only once, after all INSERT statements have completed. Normally there would be as many index buffer flushes as there are different INSERT statements. Locking is not needed if you can insert all rows with a single statement. Locking will also lower the total time of multi-connection tests, but the maximum wait time for some threads will go up (because they wait for locks). For example:
thread 1 does 1000 inserts
threads 2, 3, and 4 do 1 insert each
thread 5 does 1000 inserts

If you don't use locking, threads 2, 3, and 4 will finish before 1 and 5. If you use locking, threads 2, 3, and 4 probably will not finish before 1 or 5, but the total time should be about 40% faster. As INSERT, UPDATE, and DELETE operations are very fast in MySQL, you will obtain better overall performance by adding locks around anything that does more than about 5 inserts or updates in a row. If you do very many inserts in a row, you could do a LOCK TABLES followed by an UNLOCK TABLES once in a while (about every 1000 rows) to allow other threads access to the table. This would still result in a nice performance gain. Of course, LOAD DATA INFILE is still much faster for loading data.
To get some more speed for both LOAD DATA INFILE and INSERT, enlarge the key buffer.
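
How the key buffer is enlarged depends on the MySQL version; in versions where key_buffer_size is a dynamic server variable, a connection with the required privileges can raise it for the duration of a bulk load, for example:

mysql> SET GLOBAL key_buffer_size=256*1024*1024;

Setting it in the server configuration file is the more permanent option.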