First, ashift=9 (the default).
# zpool create tank \
mirror scsi-SATA_ST3000DM001-9YN_Z1F0AAAA-part1 \
scsi-SATA_ST3000DM001-9YN_Z1F0BBBB-part1 \
mirror scsi-SATA_ST3000DM001-9YN_Z1F0CCCC-part1 \
scsi-SATA_ST3000DM001-9YN_Z1F0DDDD-part1
# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            scsi-SATA_ST3000DM001-9YN_Z1F0AAAA-part1  ONLINE       0     0     0
            scsi-SATA_ST3000DM001-9YN_Z1F0BBBB-part1  ONLINE       0     0     0
          mirror-1                                    ONLINE       0     0     0
            scsi-SATA_ST3000DM001-9YN_Z1F0CCCC-part1  ONLINE       0     0     0
            scsi-SATA_ST3000DM001-9YN_Z1F0DDDD-part1  ONLINE       0     0     0

errors: No known data errors
# zdb |grep ashift
ashift: 9
ashift: 9
# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
tank 5.44T 110K 5.44T 0% 1.00x ONLINE -
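As the zdb output shows, the pool came up with ashift=9. The ashift value is simply the base-2 logarithm of the pool's sector size, so 9 means 512-byte sectors and 12 means 4 KiB sectors. A minimal sketch of that mapping (the helper name here is made up for illustration):

```shell
# ashift is log2(sector size in bytes); convert back with a left shift.
# ashift_to_bytes is a hypothetical helper, not a ZFS command.
ashift_to_bytes() {
    echo $((1 << $1))
}

ashift_to_bytes 9    # -> 512  (the default chosen above)
ashift_to_bytes 12   # -> 4096 (Advanced Format drives)
```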
The bonnie++ results with ashift=9 look like this:
# bonnie++
~ (snip) ~
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
abcde.locald 15488M 45686  99 190095 60 99181  36 43745  93 192523 32 335.0   6
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 12889  97 +++++ +++ 14336  99 12505  99 +++++ +++ 14838  99
abcde.localdomain,15488M,45686,99,190095,60,99181,36,43745,93,192523,32,335.0,6,16,12889,97,+++++,+++,14336,99,12505,99,+++++,+++,14838,99
Next, ashift=12.
# zpool destroy tank
# zpool create -o ashift=12 tank \
mirror scsi-SATA_ST3000DM001-9YN_Z1F0AAAA-part1 \
scsi-SATA_ST3000DM001-9YN_Z1F0BBBB-part1 \
mirror scsi-SATA_ST3000DM001-9YN_Z1F0CCCC-part1 \
scsi-SATA_ST3000DM001-9YN_Z1F0DDDD-part1
# zdb |grep ashift
ashift: 12
ashift: 12
# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
tank 5.44T 110K 5.44T 0% 1.00x ONLINE -
The results with ashift=12 look like this:
# bonnie++
~ (snip) ~
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
abcde.locald 15488M 44152  99 240200 74 112329 42 44438  95 195719 32 342.1   6
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 12948  99 +++++ +++ 13296  98 12095  99 +++++ +++ 14483  99
abcde.localdomain,15488M,44152,99,240200,74,112329,42,44438,95,195719,32,342.1,6,16,12948,99,+++++,+++,13296,98,12095,99,+++++,+++,14483,99
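bonnie++ ends each run with a machine-readable comma-separated line like the one above; field 5 is the sequential block-write throughput in K/sec. A quick way to pull it out with awk (the CSV text is pasted verbatim from the run above):

```shell
# Extract the sequential block-write throughput (field 5) from the
# trailing CSV line that bonnie++ printed above.
csv='abcde.localdomain,15488M,44152,99,240200,74,112329,42,44438,95,195719,32,342.1,6,16,12948,99,+++++,+++,13296,98,12095,99,+++++,+++,14483,99'
echo "$csv" | awk -F, '{print $5 " K/sec"}'   # -> 240200 K/sec
```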
Placed side by side, the two runs look like this.
The standout is that sequential (block) write improved by more than 20%.
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ashift=9     15488M 45686  99 190095 60 99181  36 43745  93 192523 32 335.0   6
ashift=12    15488M 44152  99 240200 74 112329 42 44438  95 195719 32 342.1   6
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
ashift=9         16 12889  97 +++++ +++ 14336  99 12505  99 +++++ +++ 14838  99
ashift=12        16 12948  99 +++++ +++ 13296  98 12095  99 +++++ +++ 14483  99
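The size of the block-write gain can be checked directly from the two table rows above (190095 K/sec at ashift=9 vs. 240200 K/sec at ashift=12):

```shell
# Percentage change in sequential block write, using the two
# throughput figures from the comparison table above.
awk 'BEGIN { printf "%.1f%%\n", (240200 - 190095) * 100 / 190095 }'   # -> 26.4%
```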
Compared with the two-disk mirror from the previous post, this is effectively a RAID1 vs. RAID10 comparison, so, as expected, sequential read/write roughly doubles.