An in-depth look at MySQL partitioning (partition)
MySQL has supported partitioning since version 5.1.
= Horizontal partitioning (splitting rows by column values) =
A simple example: a table holding ten years of invoice records can be split into ten partitions, each containing one year of records.
=== Horizontal partitioning modes: ===
* range – lets the DBA split the data into value ranges. For example, a table can be divided by year into three partitions: data from the 1980's, data from the 1990's, and everything from 2000 (inclusive) onwards.
* hash – the DBA chooses one or more columns, a hash is computed over them, and rows are distributed across partitions according to the hash value. For example, a table can be partitioned on a hash of its primary key.
* key – an extension of hash in which the hash function is supplied by MySQL itself.
* list – rows are assigned to partitions according to value lists defined by the DBA. For example, a table spread over three partitions holding the rows for the values 2004, 2005 and 2006 respectively.
* composite – simply a combination of the modes above. For example, a partition of a table that is already range-partitioned can itself be split further by hash.
= Vertical partitioning (splitting by column) =
A simple example: a table contains large text and blob columns that are rarely accessed. Moving those columns out into a separate partition keeps the data logically related while speeding up access to the frequently used columns.
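MySQL's built-in partitioning only splits tables horizontally, so a vertical split like the one described above is normally done by hand: the rarely used text/blob columns are moved into a companion table that shares the same primary key. A minimal sketch (the table and column names are only illustrative):
[sql]
-- illustrative tables: hot columns in one table, cold large columns in another
create table article (
    aid int unsigned not null auto_increment primary key,
    title varchar(200) not null,
    created date not null
) engine=myisam;

create table article_body (
    aid int unsigned not null primary key,   -- same key as article.aid
    body text,                               -- rarely accessed large columns live here
    attachment blob
) engine=myisam;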
[ Comparing a partitioned table with an unpartitioned table ]
* Create the partitioned table, split by the year of the date column
[sql]
mysql> create table part_tab ( c1 int default null, c2 varchar(30) default null, c3 date default null) engine=myisam
partition by range (year(c3)) (partition p0 values less than (1995),
partition p1 values less than (1996) , partition p2 values less than (1997) ,
partition p3 values less than (1998) , partition p4 values less than (1999) ,
partition p5 values less than (2000) , partition p6 values less than (2001) ,
partition p7 values less than (2002) , partition p8 values less than (2003) ,
partition p9 values less than (2004) , partition p10 values less than (2010),
partition p11 values less than maxvalue );
Note the last line: the maxvalue partition catches any value beyond the defined ranges.
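The partition layout, including the catch-all maxvalue partition, can be checked through information_schema (a quick sketch; the table is still empty at this point):
[sql]
mysql> select partition_name, partition_description
    ->   from information_schema.partitions
    ->  where table_name = 'part_tab';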
* Create the unpartitioned table
[sql]
mysql> create table no_part_tab (c1 int(11) default null,c2 varchar(30) default null,c3 date default null) engine=myisam;
* Load 8 million test rows through a stored procedure
mysql> set sql_mode=''; /* if creating the stored procedure fails, set this variable first; a bug? */
mysql> delimiter // /* switch the statement terminator to //, since the procedure body itself uses ; */
[sql]
mysql> create procedure load_part_tab()
begin
declare v int default 0;
while v < 8000000
do
insert into part_tab
values (v,'testing partitions',adddate('1995-01-01',(rand(v)*36520) mod 3652));
set v = v + 1;
end while;
end
//
mysql> delimiter ;
mysql> call load_part_tab();
query ok, 1 row affected (8 min 17.75 sec)
[sql]
mysql> insert into no_part_tab select * from part_tab;
query ok, 8000000 rows affected (51.59 sec)
records: 8000000 duplicates: 0 warnings: 0
* Test query performance
[sql]
mysql> select count(*) from part_tab where c3 > date '1995-01-01' and c3 < date '1995-12-31';
+----------+
| count(*) |
+----------+
| 795181 |
+----------+
1 row in set (0.55 sec)
[sql]
mysql> select count(*) from no_part_tab where c3 > date '1995-01-01' and c3 < date '1995-12-31';
+----------+
| count(*) |
+----------+
| 795181 |
+----------+
1 row in set (4.69 sec)
The partitioned table answers the query about 90% faster than the unpartitioned one (0.55 sec vs. 4.69 sec).
* Analyze the execution plans with EXPLAIN
[sql]
mysql> explain select count(*) from no_part_tab where c3 > date '1995-01-01' and c3 < date '1995-12-31'\G
/* the trailing \G switches the mysql client to vertical (one column per line) output */
*************************** 1. row ***************************
id: 1
select_type: simple
table: no_part_tab
type: all
possible_keys: null
key: null
key_len: null
ref: null
rows: 8000000
extra: using where
1 row in set (0.00 sec)
[sql]
mysql> explain select count(*) from part_tab where c3 > date '1995-01-01' and c3 < date '1995-12-31'\G
*************************** 1. row ***************************
id: 1
select_type: simple
table: part_tab
type: all
possible_keys: null
key: null
key_len: null
ref: null
rows: 798458
extra: using where
1 row in set (0.00 sec)
EXPLAIN shows how many rows each query has to examine: 8,000,000 for the unpartitioned table versus 798,458 (a single partition) for the partitioned one.
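Plain EXPLAIN does not say which partitions are actually read. In 5.1 the EXPLAIN PARTITIONS variant adds a partitions column that makes the pruning visible; for the query above, only the partition holding the 1995 data should be listed (a sketch):
[sql]
mysql> explain partitions select count(*) from part_tab
    ->  where c3 > date '1995-01-01' and c3 < date '1995-12-31'\G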
* Repeat the test with an index in place
[sql]
mysql> create index idx_of_c3 on no_part_tab (c3);
query ok, 8000000 rows affected (1 min 18.08 sec)
records: 8000000 duplicates: 0 warnings: 0
[sql]
mysql> create index idx_of_c3 on part_tab (c3);
query ok, 8000000 rows affected (1 min 19.19 sec)
records: 8000000 duplicates: 0 warnings: 0
Data and index file sizes after the indexes are created:
2008-05-24 09:23 8,608 no_part_tab.frm
2008-05-24 09:24 255,999,996 no_part_tab.myd
2008-05-24 09:24 81,611,776 no_part_tab.myi
2008-05-24 09:25 0 part_tab#p#p0.myd
2008-05-24 09:26 1,024 part_tab#p#p0.myi
2008-05-24 09:26 25,550,656 part_tab#p#p1.myd
2008-05-24 09:26 8,148,992 part_tab#p#p1.myi
2008-05-24 09:26 25,620,192 part_tab#p#p10.myd
2008-05-24 09:26 8,170,496 part_tab#p#p10.myi
2008-05-24 09:25 0 part_tab#p#p11.myd
2008-05-24 09:26 1,024 part_tab#p#p11.myi
2008-05-24 09:26 25,656,512 part_tab#p#p2.myd
2008-05-24 09:26 8,181,760 part_tab#p#p2.myi
2008-05-24 09:26 25,586,880 part_tab#p#p3.myd
2008-05-24 09:26 8,160,256 part_tab#p#p3.myi
2008-05-24 09:26 25,585,696 part_tab#p#p4.myd
2008-05-24 09:26 8,159,232 part_tab#p#p4.myi
2008-05-24 09:26 25,585,216 part_tab#p#p5.myd
2008-05-24 09:26 8,159,232 part_tab#p#p5.myi
2008-05-24 09:26 25,655,740 part_tab#p#p6.myd
2008-05-24 09:26 8,181,760 part_tab#p#p6.myi
2008-05-24 09:26 25,586,528 part_tab#p#p7.myd
2008-05-24 09:26 8,160,256 part_tab#p#p7.myi
2008-05-24 09:26 25,586,752 part_tab#p#p8.myd
2008-05-24 09:26 8,160,256 part_tab#p#p8.myi
2008-05-24 09:26 25,585,824 part_tab#p#p9.myd
2008-05-24 09:26 8,159,232 part_tab#p#p9.myi
2008-05-24 09:25 8,608 part_tab.frm
2008-05-24 09:25 68 part_tab.par
* Re-run the queries
[sql]
mysql> select count(*) from no_part_tab where c3 > date '1995-01-01' and c3 < date '1995-12-31';
+----------+
| count(*) |
+----------+
| 795181 |
+----------+
1 row in set (2.42 sec) /* 51% of the original 4.69 sec */
After restarting MySQL (net stop mysql, net start mysql) the query time drops to 0.89 sec, almost the same as the partitioned table.
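Timings like these are sensitive to caching. When benchmarking, the query cache can be kept out of the picture with SQL_NO_CACHE (a sketch of the same query):
[sql]
mysql> select sql_no_cache count(*) from no_part_tab
    ->  where c3 > date '1995-01-01' and c3 < date '1995-12-31';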
[sql]
mysql> select count(*) from part_tab where c3 > date '1995-01-01' and c3 < date '1995-12-31';
+----------+
| count(*) |
+----------+
| 795181 |
+----------+
1 row in set (0.86 sec)
* Further experiments
** Widen the date range
[sql]
mysql> select count(*) from no_part_tab where c3 > date '1995-01-01' and c3 < date '1997-12-31';
+----------+
| count(*) |
+----------+
| 2396524 |
+----------+
1 row in set (5.42 sec)
[sql]
mysql> select count(*) from part_tab where c3 > date '1995-01-01' and c3 < date '1997-12-31';
+----------+
| count(*) |
+----------+
| 2396524 |
+----------+
1 row in set (2.63 sec)
** Filter on an unindexed column
[sql]
mysql> select count(*) from part_tab where c3 > date '1995-01-01' and c3 < date '1996-12-31' and c2='hello';
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row in set (0.75 sec)
[sql]
mysql> select count(*) from no_part_tab where c3 > date '1995-01-01' and c3 < date '1996-12-31' and c2='hello';
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row in set (11.52 sec)
= Preliminary conclusions =
* Partitioned and unpartitioned tables occupy roughly the same amount of disk space (data plus index files).
* When the query filters on a column without an index, the partitioned table is far faster than the unpartitioned one.
* When the filtered column is indexed, the gap narrows; the partitioned table is still slightly faster.
= Final conclusions =
* For large data volumes, partitioning is recommended.
* Remove columns that are not needed.
* According to the manual, increasing myisam_max_sort_file_size improves partitioning performance (see the sketch below).
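A minimal sketch of checking and raising that variable; the 1 GB value is only an example, not a recommendation:
[sql]
mysql> show variables like 'myisam_max_sort_file_size';
mysql> set global myisam_max_sort_file_size = 1073741824; /* example value: 1 GB */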
[ Partitioning commands in detail ]
= Partitioning examples =
* range
[sql]
create table users (
uid int unsigned not null auto_increment primary key,
name varchar(30) not null default '',
email varchar(30) not null default ''
)
partition by range (uid) (
partition p0 values less than (3000000)
data directory = '/data0/data'
index directory = '/data1/idx',
partition p1 values less than (6000000)
data directory = '/data2/data'
index directory = '/data3/idx',
partition p2 values less than (9000000)
data directory = '/data4/data'
index directory = '/data5/idx',
partition p3 values less than maxvalue data directory = '/data6/data'
index directory = '/data7/idx'
);
Here the users table is split into four partitions at three-million-record boundaries. Every partition has its own directories for data and index files; since those directories can sit on independent physical disks, disk I/O throughput improves as well.
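Partition pruning on the uid ranges can be confirmed with EXPLAIN PARTITIONS; a uid in the 3–6 million range should resolve to p1 alone (a quick check):
[sql]
mysql> explain partitions select * from users where uid = 4500000\G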
* list
[sql]
create table category (
cid int unsigned not null auto_increment primary key,
name varchar(30) not null default ''
)
partition by list (cid) (
partition p0 values in (0,4,8,12)
data directory = '/data0/data'
index directory = '/data1/idx',
partition p1 values in (1,5,9,13)
data directory = '/data2/data'
index directory = '/data3/idx',
partition p2 values in (2,6,10,14)
data directory = '/data4/data'
index directory = '/data5/idx',
partition p3 values in (3,7,11,15)
data directory = '/data6/data'
index directory = '/data7/idx'
);
Four partitions, with data and index files stored separately.
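With list partitioning, every cid must be covered by one of the VALUES IN lists; a row whose cid falls outside all four lists is rejected. An illustrative statement:
[sql]
-- fails: no partition of category accepts cid = 99
insert into category (cid, name) values (99, 'misc');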
* hash
[sql]
create table users (
uid int unsigned not null auto_increment primary key,
name varchar(30) not null default '',
email varchar(30) not null default ''
)
partition by hash (uid) partitions 4 (
partition p0
data directory = '/data0/data'
index directory = '/data1/idx',
partition p1
data directory = '/data2/data'
index directory = '/data3/idx',
partition p2
data directory = '/data4/data'
index directory = '/data5/idx',
partition p3
data directory = '/data6/data'
index directory = '/data7/idx'
);
Four partitions, with data and index files stored separately.
Another example:
[sql]
create table ti2 (id int, amount decimal(7,2), tr_date date)
engine=myisam
partition by hash( month(tr_date) )
partitions 6;
create procedure load_ti2()
begin
declare v int default 0;
while v < 1000000   /* assumed row count for this example */
do
insert into ti2
values (v,'3.14',adddate('1995-01-01',(rand(v)*3652) mod 365));
set v = v + 1;
end while;
end
//
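To run the procedure (assuming the delimiter had been switched to // beforehand, as in the load_part_tab example) and to see how the rows spread over the six hash partitions:
[sql]
mysql> delimiter ;
mysql> call load_ti2();
mysql> select partition_name, table_rows
    ->   from information_schema.partitions
    ->  where table_name = 'ti2';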
* key
[sql]
create table users (
uid int unsigned not null auto_increment primary key,
name varchar(30) not null default '',
email varchar(30) not null default ''
)
partition by key (uid) partitions 4 (
partition p0
data directory = '/data0/data'
index directory = '/data1/idx',
partition p1
data directory = '/data2/data'
index directory = '/data3/idx',
partition p2
data directory = '/data4/data'
index directory = '/data5/idx',
partition p3
data directory = '/data6/data'
index directory = '/data7/idx'
);
Four partitions, with data and index files stored separately.
* Subpartitions
A subpartition is a second-level split of each partition of a range- or list-partitioned table; the second level can use hash, key, and so on. For example:
[sql]
create table users (
uid int unsigned not null auto_increment primary key,
name varchar(30) not null default '',
email varchar(30) not null default ''
)
partition by range (uid) subpartition by hash (uid % 4) subpartitions 2(
partition p0 values less than (3000000)
data directory = '/data0/data'
index directory = '/data1/idx',
partition p1 values less than (6000000)
data directory = '/data2/data'
index directory = '/data3/idx'
);
Each range partition is further divided into two hash subpartitions.
Or:
[sql]
create table users (
uid int unsigned not null auto_increment primary key,
name varchar(30) not null default '',
email varchar(30) not null default ''
)
partition by range (uid) subpartition by key(uid) subpartitions 2(
partition p0 values less than (3000000)
data directory = '/data0/data'
index directory = '/data1/idx',
partition p1 values less than (6000000)
data directory = '/data2/data'
index directory = '/data3/idx'
);
Each range partition is further divided into two key subpartitions.
= Partition management =
* Dropping a partition
[sql]
alter table users drop partition p0;
Drops partition p0.
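Note that DROP PARTITION also discards every row stored in that partition. If the data is still needed, copy it out first; since p0 holds uid values below 3,000,000, a sketch could be:
[sql]
-- illustrative backup table name
create table users_p0_backup as select * from users where uid < 3000000;
alter table users drop partition p0;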
* Rebuilding partitions
o Rebuilding range partitions
[sql]
alter table users reorganize partition p0,p1 into (partition p0 values less than (6000000));
Merges the old p0 and p1 partitions into a new p0 partition.
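REORGANIZE PARTITION can also work in the opposite direction and split a partition; for example, the merged p0 above could be split back into two ranges (a sketch):
[sql]
alter table users reorganize partition p0 into (
    partition p0 values less than (3000000),
    partition p1 values less than (6000000)
);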
o Rebuilding list partitions
[sql]
alter table users reorganize partition p0,p1 into (partition p0 values in(0,1,4,5,8,9,12,13));
Merges the old p0 and p1 partitions into a new p0 partition.
o Rebuilding hash/key partitions
[sql]
alter table users coalesce partition 2;
COALESCE PARTITION removes the given number of hash/key partitions (here 2) and redistributes their rows among the remaining partitions. It can only reduce the number of partitions, never increase it; to add partitions, use ADD PARTITION.
* Adding partitions
o Adding a range/list partition
[sql]
alter table category add partition (partition p4 values in (16,17,18,19)
data directory = '/data8/data'
index directory = '/data9/idx');
Adds a new partition; here a list partition is added to the category table (a range-partitioned table would use VALUES LESS THAN instead of VALUES IN).
o Adding hash/key partitions
[sql]
alter table users add partition partitions 8;
Adds 8 more partitions to the hash-partitioned users table (4 existing + 8 new = 12 in total).
[ Adding partitioning to an existing table ]
[sql]
alter table results partition by range (month(ttime))
(partition p0 values less than (1),
partition p1 values less than (2) , partition p2 values less than (3) ,
partition p3 values less than (4) , partition p4 values less than (5) ,
partition p5 values less than (6) , partition p6 values less than (7) ,
partition p7 values less than (8) , partition p8 values less than (9) ,
partition p9 values less than (10) , partition p10 values less than (11),
partition p11 values less than (12),
partition p12 values less than (13) );
By default, the column used in the partitioning function must be part of the table's primary key (and of every other unique key). To get around this restriction:
[Method 1] Partition on a column that is already in the primary key (id)
[sql]
mysql> alter table np_pk
-> partition by hash( to_days(added) )
-> partitions 4;
error 1503 (hy000): a primary key must include all columns in the table's partitioning function
However, the same statement is valid when the id column is used as the partitioning column, as shown here:
[sql]
mysql> alter table np_pk
-> partition by hash(id)
-> partitions 4;
query ok, 0 rows affected (0.11 sec)
records: 0 duplicates: 0 warnings: 0
[Method 2] Drop the existing primary key and create a new one that includes the partitioning column
[sql]
mysql> alter table results drop primary key;
query ok, 5374850 rows affected (7 min 4.05 sec)
records: 5374850 duplicates: 0 warnings: 0
[sql]
mysql> alter table results add primary key(id, ttime);
query ok, 5374850 rows affected (6 min 14.86 sec)
records: 5374850 duplicates: 0 warnings: 0
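With ttime now part of the primary key, the ALTER TABLE ... PARTITION BY RANGE (month(ttime)) statement shown earlier under "Adding partitioning to an existing table" should be accepted; a sketch that simply re-issues it:
[sql]
mysql> alter table results partition by range (month(ttime))
    -> (partition p0 values less than (1),
    ->  partition p1 values less than (2),
    ->  /* ... p2 through p11 as in the earlier statement ... */
    ->  partition p12 values less than (13));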