# http://forge.mysql.com/wiki/Top10SQLPerformanceTips
GRANT ALL PRIVILEGES ON `dbname`.* TO 'username'@'%.bdgn.net' IDENTIFIED BY 'pass';
UPDATE mysql.user SET Password = PASSWORD('pass') WHERE User = 'username';
FLUSH PRIVILEGES;
SHOW STATUS; # mysqladmin extended-status
SHOW [GLOBAL|SESSION] VARIABLES; # mysqladmin variables
mysql> SHOW PROCESSLIST; # mysqladmin processlist
mysql> KILL 27; # mysqladmin kill 27
SHOW CREATE TABLE tbl\G -- engine, indexes
SHOW TABLE STATUS LIKE 'tbl'\G -- sizes
EXPLAIN SELECT ...\G -- clues of inefficiencies
SHOW VARIABLES LIKE '%buffer%'; -- cache size
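# a quick sketch of reading EXPLAIN output (table and index names below are made up for illustration):
EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G
-- type: ALL means a full table scan; ref/range/const are what you want to see
-- key: NULL means no index was used; consider ALTER TABLE orders ADD INDEX (customer_id)
-- rows: estimated rows examined; large values hint at missing or unusable indexes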
CREATE DATABASE dbname CHARACTER SET 'utf8' COLLATE 'utf8_turkish_ci';
GRANT ALL PRIVILEGES ON `dbname`.* TO 'username'@'%.bdgn.net' IDENTIFIED BY 'pass';
FLUSH PRIVILEGES;
##### performance #####
# http://www.mysqlperformanceblog.com/2006/09/29/what-to-tune-in-mysql-server-after-installation/
# http://docs.cellblue.nl/2007/03/17/easy-mysql-performance-tweaks/
max_heap_table_size, default 16Mb
mysql> SHOW VARIABLES LIKE 'key%'
key_buffer_size - important if you use MyISAM; holds table indexes in memory. Bigger is better, but prevent swapping at all costs; generally 1/4 of system memory.
sort_buffer_size - used for grouping and sorting; a per-thread buffer. There is no reason to increase sort_buffer_size even if you have 64GB of memory to waste; doing so may actually decrease performance.
table_cache - number of tables a thread can keep open at the same time. 1024 is a good value for applications with a couple hundred tables (remember each connection needs its own entry); increase it if you have many connections or many tables.
thread_cache - normally set this to at least 16. If the application has large jumps in the number of concurrent connections and the Threads_created status variable grows fast, raise it; the goal is to have no threads created during normal operation.
query_cache_size - generally 32M to 512M.
query_cache_limit - do not cache result sets larger than this (4MB).
tmp_table_size - used for sorting and grouping (64MB).
delay_key_write - be careful: with delayed key writes, a power failure can corrupt MyISAM indexes.
wait_timeout - its default is 8 hours (28800), but in practice 1 minute (60) is enough.
innodb_buffer_pool_size
innodb_additional_mem_pool_size
innodb_log_file_size
innodb_log_buffer_size
innodb_flush_log_at_trx_commit
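# a minimal [mysqld] sketch pulling the variables above together; the values are illustrative
# assumptions for a dedicated server with a few GB of RAM, not recommendations:
[mysqld]
key_buffer_size = 256M          # MyISAM index cache, roughly 1/4 of RAM on MyISAM-heavy boxes
table_cache = 1024
thread_cache_size = 16
query_cache_size = 64M
query_cache_limit = 4M
tmp_table_size = 64M
max_heap_table_size = 64M
wait_timeout = 60
innodb_buffer_pool_size = 2G
innodb_additional_mem_pool_size = 16M
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1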
# Use Slow Query Log (always have it on!)
# Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data
log_slow_queries=/var/lib/mysql/slowqueries.log
long_query_time=2
Use mysqldumpslow
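# e.g. (standard mysqldumpslow flags; adjust the log path to your setup):
user@host:~$ mysqldumpslow -s t -t 10 /var/lib/mysql/slowqueries.log  # top 10 queries by total time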
# use MERGE or ARCHIVE tables for logs
# from http://forge.mysql.com/wiki/Top10SQLPerformanceTips
Don't use DISTINCT when you have or could use GROUP BY
Use LOAD DATA instead of INSERT
LIMIT m,n may not be as fast as it sounds. Learn how to improve it and read more about Efficient Pagination Using MySQL (see the sketch after this list)
Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data
innodb_flush_log_at_trx_commit=0 can help slave lag
Optimize for data types, use consistent data types. Use PROCEDURE ANALYSE() to help determine the smallest data type for your needs.
enable and increase the query and buffer caches if appropriate
--skip-name-resolve
use --safe-updates for client
Keep an eye on buffer pool and keybuffer hit rate
increase myisam_sort_buffer_size to optimize large inserts (this is a per-connection variable)
increase temp table size in a data warehousing environment (default is 32Mb) so it doesn't write to disk (also constrained by max_heap_table_size, default 16Mb)
Run in SQL_MODE=STRICT to help identify warnings
consider battery-backed RAM for innodb logfiles
BLACKHOLE engine and replication is much faster than FEDERATED tables for things like logs.
Know your storage engines and what performs best for your needs; know that different ones exist.
- i.e., use MERGE or ARCHIVE tables for logs
Archive old data -- don't be a pack-rat! Two common engines for this are ARCHIVE tables and MERGE tables
Storing flags in a database can slow down execution due to bad cardinality. Try using bit flags
use one of the supplied config files
key_buffer, unix cache (leave some RAM free), per-connection variables, innodb memory variables
be aware of global vs. per-connection variables
check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up)
be aware of swapping, esp. with Linux ("swappiness"); bypass the OS file cache for InnoDB data files with innodb_flush_method=O_DIRECT if possible (this is also OS specific)
defragment tables, rebuild indexes, do table maintenance
If you use innodb_flush_log_at_trx_commit=1, use a battery-backed hardware cache write controller
more RAM is good, and so is faster disk speed
use 64-bit architectures
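# sketch of the "efficient pagination" idea mentioned in the LIMIT tip above (table and column
# names are assumed for illustration): instead of a large OFFSET, which scans and throws away m
# rows, seek from the last key seen on the previous page:
SELECT id, title FROM articles ORDER BY id LIMIT 10000, 10;            -- slow: examines 10010 rows
SELECT id, title FROM articles WHERE id > 10010 ORDER BY id LIMIT 10;  -- fast: 10010 = last id of the previous page, range scan on the PK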
== replication ==
#from http://aciddrop.com/2008/01/10/step-by-step-how-to-setup-mysql-database-replication/
#from http://www.gra2.com/article.php/setting-up-database-replication-on-mysql
=== master ===
master-mysql> CREATE USER 'slave_user'@'X.X.X.X' IDENTIFIED BY 'slave_password'; -- X.X.X.X is the slave
master-mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'X.X.X.X' IDENTIFIED BY 'slave_password';
master-mysql> FLUSH PRIVILEGES;
user@master:~$ vim my.cnf # in [mysqld] section
log-bin=/var/lib/mysql/mysql-bin.log # not necessary but if not given uses hostname
server-id=1
binlog-do-db=my_database # binlog-ignore-db=mysql
#You should also make sure skip-networking has not been enabled.
# If you use innodb add the next lines
innodb_flush_log_at_trx_commit=1
sync_binlog=1
user@master:~$ /etc/rc.d/init.d/mysqld restart
=== slave ===
user@slave:~$ vim my.cnf # in [mysqld] section
server-id=2
master-host=128.0.0.1
master-user=slave_user
master-password=slave_password
master-connect-retry=60
replicate-do-db=my_database # or replicate-ignore-db=mysql
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index
#You should also make sure skip-networking has not been enabled.
# in redhat
log-slave-updates
log-warnings
report-host=slave.mydomain.com
user@slave:~$ /etc/rc.d/init.d/mysqld restart
=== get data to slave ===
master-mysql> FLUSH TABLES WITH READ LOCK; -- Don't close the mysql client
Query OK, 0 rows affected (0.00 sec)
master-mysql> show master status\G;
*************************** 1. row ***************************
File: mysql-bin.000002
Position: 230
Binlog_Do_DB: database_to_be_replicated
Binlog_Ignore_DB:
1 row in set (0.00 sec)
user@master:~$ mysqldump my_database -u root -p > /home/user/database.sql;
user@slave:~$ scp user@128.0.0.1:/home/user/database.sql /home/user/
slave-mysql> CREATE DATABASE `my_database`;
user@slave:~$ mysql -u root -p my_database </home/user/database.sql
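# not part of the original walkthrough, but worth knowing: mysqldump can record the binlog
# position for you; --master-data=2 writes a commented CHANGE MASTER TO line into the dump,
# and --single-transaction gives a consistent snapshot of InnoDB tables without the read lock:
user@master:~$ mysqldump --master-data=2 --single-transaction my_database -u root -p > /home/user/database.sql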
=== start ===
master-mysql> SHOW MASTER STATUS;
+---------------------+----------+-------------------------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------------+----------+-------------------------------+------------------+
| mysql-bin.000001 | 21197930 | my_database,my_database | |
+---------------------+----------+-------------------------------+------------------+
slave-mysql> SLAVE STOP;
Query OK, 0 rows affected (0.00 sec)
slave-mysql> CHANGE MASTER TO
#+> MASTER_HOST='128.0.0.1',
#+> MASTER_USER='slave_user',
#+> MASTER_PASSWORD='slave_password',
+> MASTER_LOG_FILE='mysql-bin.000001',
+> MASTER_LOG_POS=21197930,
+> MASTER_CONNECT_RETRY=10;
Query OK, 0 rows affected (0.01 sec)
slave-mysql> SLAVE START;
Query OK, 0 rows affected (0.01 sec)
slave-mysql> SHOW SLAVE STATUS\G;
... # these two must be "Yes"
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
# If Slave_IO_Running is set to No, you probably didn't lock the database correctly before dumping it. You will have to repeat the dump with the database locked correctly.
master-mysql> unlock tables; -- if you previously ran FLUSH TABLES WITH READ LOCK;
slave-mysql> SHOW PROCESSLIST;
master-mysql> SHOW PROCESSLIST;
== recovery ==
user@slave:~$ rm /var/lib/mysql/*relay*
user@slave:~$ rm /var/lib/mysql/master.info
user@master:~$ rm -f /home/mysql/logs/*
# re-get data to slave
== create database ==
create database `dbname` character set 'utf8' COLLATE 'utf8_turkish_ci';
== copy a table into memory ==
# from http://stackoverflow.com/questions/5097088/how-to-copy-mysql-table-structure-to-table-in-memory
CREATE TABLE t_copy ENGINE=MEMORY SELECT * FROM t_original;
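# the CREATE ... SELECT above copies the data but not the indexes; a variant that keeps the
# structure (note MEMORY tables don't support BLOB/TEXT columns, so this fails for such tables):
CREATE TABLE t_copy2 LIKE t_original;
ALTER TABLE t_copy2 ENGINE=MEMORY;
INSERT INTO t_copy2 SELECT * FROM t_original;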
root@mysqlmaster:~# sudo aptitude install mysql-server
root@mysqlmaster:~# service mysql stop
root@mysqlmaster:~# vi /usr/share/doc/mysql-server-5.1/README.Debian.gz
root@mysqlmaster:~# vi /etc/mysql/my.cnf
...
tmpdir = /var/tmp/mysql
...
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = include_database_name
#binlog_ignore_db = exclude_database_name
...
root@mysqlmaster:~# service mysql start
root@mysqlmaster:~# mysql -u root -p
mysql> SLAVE STOP;
mysql> CREATE USER 'slave_user'@'%' IDENTIFIED BY 'slave_pass';
mysql> GRANT SUPER,REPLICATION CLIENT,RELOAD, REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'slave_pass';
mysql> FLUSH PRIVILEGES;
mysql> CREATE DATABASE include_database_name;
mysql> SHOW MASTER STATUS;
+------------------+----------+-----------------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+-----------------------+------------------+
| mysql-bin.000002 | 455 | include_database_name | |
+------------------+----------+-----------------------+------------------+
1 row in set (0.00 sec)
mysql> SHOW PROCESSLIST;
root@mysqlslave:~# sudo aptitude install mysql-server
root@mysqlslave:~# service mysql stop
root@mysqlslave:~# mkdir /var/tmp/mysql
root@mysqlslave:~# chown mysql. /var/tmp/mysql
root@mysqlslave:~# vi /etc/mysql/my.cnf
...
tmpdir = /var/tmp/mysql
...
bind-address = 0.0.0.0
...
server-id = 2
master-host = mysqlmaster
master-user = slave_user
master-password = slave_pass
master-connect-retry = 60
replicate-do-db = include_database_name
...
root@mysqlslave:~# service mysql start
root@mysqlslave:~# mysql -u root -p
mysql> SLAVE STOP;
#mysql> CHANGE MASTER TO MASTER_HOST='mysqlmaster', MASTER_USER='slave_user', MASTER_PASSWORD='slave_pass', MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=455;
mysql> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=455;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS \G
mysql> SHOW PROCESSLIST;
===== copy database =====
# http://dev.mysql.com/doc/refman/5.5/en/copying-databases.html
source_shell> mysqldump db_name | mysql -h 'target' db_name
# or
target_shell> mysqldump -h 'source' --compress db_name | mysql db_name
# or
source_shell> mysqldump --quick db_name | gzip > db_name.gz
target_shell> gunzip < db_name.gz | mysql db_name
# For large tables, the --tab method below is much faster than a plain mysqldump.
source_shell> mkdir DUMPDIR
source_shell> mysqldump --tab=DUMPDIR db_name
target_shell> mysqladmin create db_name # create database
target_shell> cat DUMPDIR/*.sql | mysql db_name # create tables in database
target_shell> mysqlimport db_name DUMPDIR/*.txt # load data into tables
===== mysqltuner =====
# from http://www.debianadmin.com/check-your-mysql-server-performance-with-mysqltuner.html
wget http://mysqltuner.com/mysqltuner.pl
perl mysqltuner.pl
===== mysql dump encoding =====
# from http://www.loopbacking.info/blog/2012/09/25/farkli-kodlamali-mysql-veri-tabanina-veri-aktarmak/
# add next three lines in the begiining of the dump before importing
SET NAMES 'utf8';
SET CHARACTER SET 'utf8';
SET COLLATION_CONNECTION = 'utf8_bin';
#SET COLLATION_CONNECTION = 'utf8_general_ci';
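# a shell one-liner for prepending them without editing the dump file (paths/names are placeholders):
target_shell> { echo "SET NAMES 'utf8'; SET CHARACTER SET 'utf8'; SET COLLATION_CONNECTION = 'utf8_bin';"; cat dump.sql; } | mysql db_name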