Monthly archive: November 2016

ovirt to virt-manager convert …

Something I'd always assumed would just work, so I finally gave it a try. There's a world of difference between having actually done it and only having thought about it..!!

I moved a guest OS that had been running on an oVirt hypervisor (set up as nested KVM) over to a plain KVM environment.

First, let's look at the guest OS sitting inside the nested KVM hypervisor.

root@dual-fedora4:/data/vms2/a608c81c-204b-45ef-bd5f-4fcd2496e22a/images/294ab2e9-7033-4139-a776-580560c09862# ls

67ac4bdc-5f70-4085-845d-eeac9c15bc7b 67ac4bdc-5f70-4085-845d-eeac9c15bc7b.meta

67ac4bdc-5f70-4085-845d-eeac9c15bc7b.lease

### There are three files here; the one ending in .meta is this VM's disk metadata. ###

# cat 67ac4bdc-5f70-4085-845d-eeac9c15bc7b.meta

DOMAIN=a608c81c-204b-45ef-bd5f-4fcd2496e22a

CTIME=1479622466

FORMAT=RAW

DISKTYPE=2

LEGALITY=LEGAL

SIZE=83886080

VOLTYPE=LEAF

DESCRIPTION={"DiskAlias":"win2008-test_Disk1","DiskDescription":""}

IMAGE=294ab2e9-7033-4139-a776-580560c09862

PUUID=00000000-0000-0000-0000-000000000000

MTIME=0

POOL_UUID=

TYPE=SPARSE

EOF

### The information above is what you get; the important one is 'FORMAT'. Here you can see it is set to RAW. ###

Actually, you can get roughly the same information from qemu-img without doing any of the above.

# qemu-img info 67ac4bdc-5f70-4085-845d-eeac9c15bc7b

image: 67ac4bdc-5f70-4085-845d-eeac9c15bc7b

file format: raw

virtual size: 40G (42949672960 bytes)

disk size: 6.7G

# file 67ac4bdc-5f70-4085-845d-eeac9c15bc7b

67ac4bdc-5f70-4085-845d-eeac9c15bc7b: DOS/MBR boot sector MS-MBR Windows 7 english at offset 0x163 "Invalid partition table" at offset 0x17b "Error loading operating system" at offset 0x19a "Missing operating system", disk signature 0x19cbbc07; partition 1 : ID=0x7, active, start-CHS (0x0,32,33), end-CHS (0xc,223,19), startsector 2048, 204800 sectors; partition 2 : ID=0x7, start-CHS (0xc,223,20), end-CHS (0x3ff,254,63), startsector 206848, 83677184 sectors

All three commands above tell you the disk file format. The file command spits out quite a bit of information; that's what you get when it's a raw disk, presumably because it parses the disk file's MBR.

Now let's convert it.

# qemu-img convert -O qcow2 67ac4bdc-5f70-4085-845d-eeac9c15bc7b /data/kvms/win2008r2.qcow2

This converts the '67ac4bdc-5f70-4085-845d-eeac9c15bc7b' disk file to qcow2 format. Strictly speaking you don't have to convert; you can just copy the file over and run it as-is. To a human there's no noticeable difference between running the raw file directly and running it after converting to qcow2. I tried both and both worked fine.
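Just to double-check, qemu-img info on the result should now report qcow2:

# qemu-img info /data/kvms/win2008r2.qcow2

### the output should show file format: qcow2 and the same 40G virtual size as before ###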

After creating a virtual machine in virt-manager … let's start it.
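Creating the VM in virt-manager is just point-and-click; if you'd rather script it, something like virt-install can import the converted disk directly (a sketch; the name, memory, vcpus and os-variant values here are just examples, and --import boots the guest immediately, so the virsh start below is only needed if the VM was defined without starting it):

# virt-install --name win2008r2 --ram 4096 --vcpus 2 \
    --disk path=/data/kvms/win2008r2.qcow2,format=qcow2 \
    --import --os-variant win2k8r2 --network network=default --graphics vnc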

# virsh start win2008r2

Domain win2008r2 started

### You can see it started properly… ###

[screenshot: ovirt-to-kvm]

The console displays fine too. haha

A few RHGS (Red Hat Glusterfs) tuning points…

1. network H/W, RDMA, infiniband

The graph below shows the difference in I/O performance between Gigabit Ethernet and RDMA.

[graph 1: GbE vs RDMA I/O performance]

http://blog.gluster.org/category/performance/

Switching from the current GbE NICs to RDMA-capable NICs should give roughly this level of performance improvement.

Below is a comparison between GbE and TCP over InfiniBand.

[graph 2: GbE vs TCP over InfiniBand]

https://glennklockwood.blogspot.kr/2013/06/whats-killing-cloud-interconnect.html

Typical business workloads deal mostly with files around 1k in size, and in that case network latency has a very large impact. Replacing the current GbE environment with TCP over InfiniBand should give a modest performance improvement for small files.

glusterfsnative infiniband를 지원하지 않습니다. TCP over infiniband(IP over infiniband)만을 지원합니다.

2. Storage H/W

If the current setup already uses an NVMe SSD as a dm-cache in front of the HDDs, then replacing the HDDs with SAS SSDs is unlikely to bring much of a performance improvement.

As for the hardware RAID level on the RAID controller, Red Hat usually recommends RAID 6 or RAID 10. Below is an excerpt from the Red Hat manual.

  • 13.1.1. Hardware RAID

    The RAID levels that are most commonly recommended are RAID 6 and RAID 10. RAID 6 provides better space efficiency, good read performance and good performance for sequential writes to large files.

    When configured across 12 disks, RAID 6 can provide ~40% more storage space in comparison to RAID 10, which has a 50% reduction in capacity. However, RAID 6 performance for small file writes and random writes tends to be lower than RAID 10. If the workload is strictly small files, then RAID 10 is the optimal configuration.

    An important parameter in hardware RAID configuration is the stripe unit size. With thin provisioned disks, the choice of RAID stripe unit size is closely related to the choice of thin-provisioning chunk size. For RAID 10, a stripe unit size of 256 KiB is recommended.

    For RAID 6, the stripe unit size must be chosen such that the full stripe size (stripe unit * number of data disks) is between 1 MiB and 2 MiB, preferably in the lower end of the range. Hardware RAID controllers usually allow stripe unit sizes that are a power of 2. For RAID 6 with 12 disks (10 data disks), the recommended stripe unit size is 128KiB.

raid 6stripe unit size 128KiB가 적용되어 있는 상태이면, 여기에서 다른 방법으로 성능을 높히기는 힘듭니다. (물론 raid 5나 raid 0가 빠르긴 함 ㅋ 그런데 이렇게 되면 raid controller의 bandwidth에 걸리게 된다. 이게 보통 sata protocol version 3 의 bandwidth 인 6Gbps 에 걸려서 1개의 raid에 disk를 아무리 많이 늘려도 저 6Gbps에 걸린다고 함.)

3. filesystem(xfs) tuning

Red Hat GlusterFSxfs filesystemsupport하고 있습니다.

Below are Red Hat's recommendations for formatting an xfs filesystem.

XFS Recommendations

  • XFS Inode Size

    As Red Hat Gluster Storage makes extensive use of extended attributes, an XFS inode size of 512 bytes works better with Red Hat Gluster Storage than the default XFS inode size of 256 bytes. So, inode size for XFS must be set to 512 bytes while formatting the Red Hat Gluster Storage bricks. To set the inode size, you have to use -i size option with the mkfs.xfs command as shown in the following Logical Block Size for the Directory section.

  • XFS RAID Alignment

    When creating an XFS file system, you can explicitly specify the striping parameters of the underlying storage in the following format:

    mkfs.xfs other_options -d su=stripe_unit_size,sw=stripe_width_in_number_of_disks device

    For RAID 6, ensure that I/O is aligned at the file system layer by providing the striping parameters. For RAID 6 storage with 12 disks, if the recommendations above have been followed, the values must be as following:

    # mkfs.xfs other_options -d su=128k,sw=10 device

    For RAID 10 and JBOD, the -d su=<>,sw=<> option can be omitted. By default, XFS will use the thin-p chunk size and other parameters to make layout decisions.

  • Logical Block Size for the Directory

    An XFS file system allows to select a logical block size for the file system directory that is greater than the logical block size of the file system. Increasing the logical block size for the directories from the default 4 K, decreases the directory I/O, which in turn improves the performance of directory operations. To set the block size, you need to use -n size option with the mkfs.xfs command as shown in the following example output.


 format option

# mkfs.xfs -i size=512 -n size=8192 -d su=128k,sw=10 /dev/sdb

-iinode option인데 , size512로 하겠다는 의미 입니다. Red Hat glusterfs에서는 기본 256byte 보다 512로 하는것이 더 좋다고 나와 있습니다.

nnaming option인데, directorylogical block sizedefault 4096 byte에서 8192 byte로 늘렸습니다. 늘렸을 때의 좋은점은 directory I/O가 줄어들어서, directory operation(file listing(`ls`), file finding(`find`) …)의 성능이 향상 됩니다.

-ddata section option인데 raid controller에서 정의한 값에 맞게 xfs에서도 맞춰 줍니다.

su(stripe unit size)raid controller에서 정의한 stripe unit size128k를 지정해주며, sw(stripe width)parity disk를 제외한 data disk의 갯수를 지정해 줍니다.

4. Glusterfs tuned

Below are the tuned settings recommended in the Red Hat Glusterfs admin guide.

Performance tuning option in Red Hat Gluster Storage

A tuned profile is designed to improve performance for a specific use case by tuning system parameters appropriately. Red Hat Gluster Storage includes tuned profiles tailored for its workloads. These profiles are available in both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7.

Table 13.1. Recommended Profiles for Different Workloads

Workload                                  Profile Name
Large-file, sequential I/O workloads      rhgs-sequential-io
Small-file workloads                      rhgs-random-io
Random I/O workloads                      rhgs-random-io


The profile currently applied is rhgs-random-io.

It is applied like this:

# tuned-adm profile rhgs-random-io
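You can check afterwards that it actually took:

# tuned-adm active

### should report rhgs-random-io as the current active profile ###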

5. Glusterfs parameter tuning

The parameters differ depending on whether a GlusterFS volume is used to store ordinary files or used as an RHEV data domain storing virtual machine images.

RHEVdatadomain으로서 사용할 때는 아래와 같은 옵션들을 써줘야 합니다.

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html-single/Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Gluster_Storage/index.html#chap-Hosting_Virtual_Machine_Images_on_Red_Hat_Storage_volumes

Check that the values in the /var/lib/glusterd/groups/virt file look like this:

performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
cluster.eager-lock=enable
network.remote-dio=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server

# gluster volume set VOLNAME group virt

# gluster volume set VOLNAME storage.owner-uid 36

# gluster volume set VOLNAME storage.owner-gid 36

# gluster volume set VOLNAME storage.owner-uid 107

# gluster volume set VOLNAME storage.owner-gid 107

The gluster volume set VOLNAME group virt command applies the parameters defined above to the volume.

gluster volume set VOLNAME storage.owner-uid 36 sets the storage owner uid to 36 (kvm), and the command below it sets the gid to 36. The uid and gid 107 further down are for qemu.

Only after doing this can the volume be used properly as a data domain in RHEV.
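To verify that the group options actually landed on the volume, gluster volume info lists them under "Options Reconfigured" (a quick check; VOLNAME is the same volume as above):

# gluster volume info VOLNAME

### the Options Reconfigured section should show performance.quick-read: off, ###
### network.remote-dio: enable, storage.owner-uid: 36, and so on ###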

cd burn … how to do it… burning a CD

I had always burned only DVDs, but once I used a CD to burn some small-sized data.

But after burning, when I tried to mount it to check, errors popped up.

[1121 15:22] sr 2:0:0:0: [sr0] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE

[ +0.000016] sr 2:0:0:0: [sr0] tag#0 Sense Key : Illegal Request [current]

[ +0.000012] sr 2:0:0:0: [sr0] tag#0 Add. Sense: Illegal mode for this track

[ +0.000010] sr 2:0:0:0: [sr0] tag#0 CDB: Read(10) 28 00 00 00 d5 5c 00 00 02 00

[ +0.000009] blk_update_request: I/O error, dev sr0, sector 218480

[ +0.010817] sr 2:0:0:0: [sr0] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE

[ +0.000019] sr 2:0:0:0: [sr0] tag#0 Sense Key : Illegal Request [current]

[ +0.000020] sr 2:0:0:0: [sr0] tag#0 Add. Sense: Illegal mode for this track

[ +0.000013] sr 2:0:0:0: [sr0] tag#0 CDB: Read(10) 28 00 00 00 d5 5d 00 00 01 00

[ +0.000010] blk_update_request: I/O error, dev sr0, sector 218484

[ +0.000012] Buffer I/O error on dev sr0, logical block 54621, async page read

[ +0.365453] ISO 9660 Extensions: Microsoft Joliet Level 3

[ +0.002880] ISO 9660 Extensions: RRIP_1991A

Skimming it… it seems to be saying that sector 218480 on /dev/sr0 cannot be read.

But… when I mounted it, the data showed up fine, so I considered just using it; it felt suspicious though, so I searched around and found this:

Peter (darkcentrino) wrote :

#27

The Problem is the CD, when you burn the iso to a DVD, everything is fine!
I have found this links:
http://<email address hidden>
http://www.troubleshooters.com/linux/coasterless.htm

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/266951

위에 petercd로 구워서 그런거고, dvd로 구우면 문제없이 잘 된다..!! 라고 알려준다. 실제로 DVD 한장을 낭비해서 구워보니까 error안뜨고 잘 mount 되더라.

He also throws in a link… and checking it out, it says:

NOTE

There are many ways to burn a CD, including applications like xcdroast and gtoaster. Most are good, so use whichever suits your needs. Just remember the basic principles from this section:

  • Burn at the right speed

  • Pad properly (padsize=63s and -pad)

  • Use disk at once (-dao)

  • Unless your burner prevents buffer underflow, do no other work while you’re burning.

It says to be sure to follow the basic principles above when burning a CD. (I had burned mine with wodim with no options at all…)

Let's burn it again following those principles.

# cdrecord dev=0,0,0 speed=10 padsize=63s -pad -dao -v -eject /home/myuid/myiso.iso

The parts that need to change above are dev, speed, and the /home/… path. I assumed leaving speed at the default would pick the right value automatically, but apparently not.

dev can be checked with the command below.

# cdrecord -scanbus

scsibus2:

2,0,0 200) 'HL-DT-ST' 'BD-RE BP40NS20 ' 'ML01' Removable CD-ROM

2,1,0 201) *

2,2,0 202) *

speed2개중 낮은 값으로 해야되는데… 뭐냐면 cd writer기와 cdrom에 적혀있는 1x-52x 이런 값중 낮은값을 선택해서 해야한단다. 반드시…!

Checking it…

# sysctl dev.cdrom.info

dev.cdrom.info = CD-ROM information, Id: cdrom.c 3.20 2003/12/17

dev.cdrom.info =

dev.cdrom.info = drive name: sr0

dev.cdrom.info = drive speed: 24

dev.cdrom.info = drive # of slots: 1

You can check the drive speed with sysctl as above, or dig through sysfs directly, or dig through proc, and so on; there are several ways.

cdromspeedcdrom 위에 1x-52x 이렇게 명시되어 있을 것이다. 저뜻은 1부터 52배속까지 원하는걸로 쓸 수 있다는 뜻 같다.

Everything is ready now… let's burn.

# cdrecord dev=2,0,0 speed=24 padsize=10 -pad -dao -v -eject ~/iso/MegaRaid_Software.iso

TOC Type: 1 = CD-ROM

wodim: Operation not permitted. Warning: Cannot raise RLIMIT_MEMLOCK limits.

scsidev: '2,0,0'

scsibus: 2 target: 0 lun: 0

WARNING: the deprecated pseudo SCSI syntax found as device specification.

Support for that may cease in the future versions of wodim. For now,

the device will be mapped to a block device file where possible.

Run "wodim --devices" for details.

Linux sg driver version: 3.5.36

Wodim version: 1.1.11

SCSI buffer size: 64512

Device type : Removable CD-ROM

Version : 0

Response Format: 2

Capabilities :

Vendor_info : 'HL-DT-ST'

… so it calls wodim anyway … I had assumed the problem was that the disc was burned with wodim, but it was just the user (me..) using it wrong.

Anyway, let's try mounting again…

# sudo mount /dev/sr0 /mnt/cdrom/

# dmesg -wH

[1121 16:20] ISO 9660 Extensions: Microsoft Joliet Level 3

[ +0.145748] ISO 9660 Extensions: RRIP_1991A

It works!!
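For a stricter check than just mounting, the checksum of the ISO and of what comes back off the disc can be compared (a rough sketch; it assumes the ISO size is an exact multiple of the 2048-byte sector size):

# ISO=~/iso/MegaRaid_Software.iso
# BLOCKS=$(( $(stat -c %s "$ISO") / 2048 ))
# md5sum "$ISO"
# dd if=/dev/sr0 bs=2048 count=$BLOCKS | md5sum

### the two md5 sums should match if the burn went well ###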

Problems that come up when using xrdp on a hypervisor…

There was a time when xrdp really put me on the spot.

Once, a customer site reported that xrdp connections were failing. I went over and, sure enough, they were. The problem was that a particular VM was occupying port 5910, which xrdp needed. After killing that VM and bringing things back up, it worked.

But that is only a temporary fix; as soon as something occupies that port again, xrdp connections will fail again.

So the real fix is to move that 5910 port to something else…

I remember searching for quite a while back then.

When using xrdp on a hypervisor, you need to make it use ports that don't collide with the ones the VMs use.

By default, xrdp allows up to 10 sessions on ports 5910 through 5920. Let's look at the options below.

만약 hypervisorvm10개 미만으로 올라가 있다면 문제는 생기지 않는다. 하지만 위에서 생긴 문제는 hypervisorvm100개 가량 돌고 있었고…ㄷㄷ 당시 이것을 구축한 사람은 문제해결을 못해서 쩔쩔매고 있었다. 나도 엄청 찾았다. 그도 그럴게 xrdp가 동작방식이 되게 베베꼬여있다.

It doesn't simply serve everything on port 3389… in a typical Windows setup, when an RDP client sends a request to port 3389, xrdp forwards it to xrdp-sesman on port 3350. xrdp-sesman then starts a VNC server using Xvnc and wires it back to the 3389 connection. I figured this out at the time by running it in debug mode…

Anyway… the ports that Xvnc opens at that point are 5910 through 5920 by default… and if another process has already grabbed one of them, Xvnc obviously can't come up and the connection fails.

So then… opening the config file…

# vim /etc/xrdp/sesman.ini

[Sessions]

X11DisplayOffset=10

MaxSessions=10

KillDisconnected=0

IdleTimeLimit=0

DisconnectedTimeLimit=0

There is a section like this, and "X11DisplayOffset" is what decides the display session offset for the Xvnc VNC servers. Simply put, if you change it to 100, it becomes 5900 + 100, so Xvnc starts using ports from 6000.

"MaxSessions" below it is the total number of sessions to allow; 10 are allowed by default. With that, Xvnc would use ports 6000 through 6010.
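As a concrete example, the edited [Sessions] block and a quick port check might look like this (a sketch; 100 is just the offset discussed above, and the xrdp service unit names can vary by distro):

[Sessions]

X11DisplayOffset=100

MaxSessions=10

# systemctl restart xrdp xrdp-sesman

# ss -lntp | grep -E ':(59|60)[0-9]{2}'

### after the next RDP login, the Xvnc listener should show up at 6000 and above, ###
### out of the way of the ports the VMs grab ###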

Phew… I set this up long ago and had forgotten the details, so it was all fuzzy… but writing it up as a post makes it much easier. Now I won't forget… haha

MegaRAID CLI Tool simple usage

I tried out megacli.

Installation is simple.

Download 8-07-14_MegaCLI.zip from the Avago homepage.

# unzip 8-07-14_MegaCLI.zip

# cd Linux

# yum localinstall MegaCli-8.07.14-1.noarch.rpm

Since it is a noarch package, it seems it can be installed on any RHEL box.

Usage information can likewise be found by searching the Avago homepage; there is a document named

embedded_mr_sw_ug.pdf

and it covers both MegaCli64 usage and MSM (MegaRAID Storage Manager) usage.

4MegaCLI Command Tool 에서 보면 각종 설정과 display command들이 있는데 오늘 내가 해볼것은 현재 설정을 보는것과 event log를 보는것… 그리고 raid rebuild 상황을 보는것 정도를 해볼것이다.

Before starting, let's set up a couple of things for convenience.

# vim /etc/bashrc

alias MegaCli64="/opt/MegaRAID/MegaCli/MegaCli64"

alias megacli="/opt/MegaRAID/MegaCli/MegaCli64"

I added these two lines.

# . /etc/bashrc   ### source it like this to import the aliases ###

First, the current configuration…

# megacli -adpallinfo -a0

### A quick look at the command: it means adapter-all-info for adapter 0. The command options felt odd and difficult at first, but after a while they're fine. -a can also be given as -aall | -aN | -a0,1,2. ###

commandoption대소문자 구분을 하지 않는다.

Looking at part of the -adpallinfo output…

Adapter #0

==============================================================================

Versions

================

Product Name : PERC H700 Integrated

Serial No : 22I01AE

FW Package Build: 12.10.2-0004

Mfg. Data

================

Mfg. Date : 02/21/12

Rework Date : 02/21/12

Revision No : A06

Battery FRU : N/A

Image Versions in Flash:

================

BIOS Version : 3.18.00_4.09.05.00_0x0416A000

FW Version : 2.100.03-1405

You can see pretty much all the information about the RAID controller hardware.

# megacli -adpautorbld -dsply -a0

Adapter 0: AutoRebuild is Enabled.

Exit Code: 0x00

### adaptor auto rebuild enable 되어 있는지 확인 할 수 있다. 현재는 enable되어 있는상태이다. ###

# megacli -AdpGetProp RebuildRate | BgiRate | CCRate | CoercionMode -aN|-a0,1,2|-aALL

### These show the rebuild rate, BGI rate, CC rate, and coercion mode respectively. Not sure what each one means yet, but they do print. ###

# megacli adpgettime -aall

Adapter 0:

Date: 11/21/2016

Time: 11:04:39

Exit Code: 0x00

### You can check the date like this too. Apparently each adapter keeps its own timer. ###

# megacli -AdpBootDrive {-Set -LDID} | -Get -aN|-a0,1,2|-aALL

### This sets the bootable logical drive; after a reboot, the system will boot from the specified logical drive. ###

# megacli -AdpBIOS -Enbl|-Dsbl|-Dsply|SOE|BE -aN|-a0,1,2|-aALL

### This is the BIOS option; it can be enabled or disabled, and it controls whether the BIOS shows the state of the disks. Mine was set to disabled. SOE is stop-on-error: stop if a BIOS configuration problem comes up. BE is bypass, the opposite of the above. ###

# megacli -AdpEventLog -GetEventlogInfo | {-GetEvents | -GetSinceShutdown |
-GetSinceReboot | -IncludeDeleted | {-GetLatest <number>} -f <filename>}
| -Clear -aN|-a0,1,2|-aALL

### This lets you view the event log, with several sub-options. Among them, -GetEvents can be used to read the event log. ###
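For example, dumping the whole event log into a file and skimming it (a sketch based on the syntax above; events.log is just an arbitrary file name):

# megacli -AdpEventLog -GetEvents -f events.log -a0

# less events.log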

# megacli -cfgdsply -aall

==============================================================================

Adapter: 0

Product Name: PERC H700 Integrated

Memory: 512MB

BBU: Present

Serial No: 22I01AE

==============================================================================

Number of DISK GROUPS: 1

DISK GROUP: 0

Number of Spans: 1

SPAN: 0

Span Reference: 0x00

Number of PDs: 1

Number of VDs: 1

Number of dedicated Hotspares: 0

Virtual Drive Information:

Virtual Drive: 0 (Target Id: 0)

Name :

RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0

Size : 278.875 GB

Sector Size : 512

Parity Size : 0

State : Optimal

Strip Size : 64 KB

Number Of Drives : 1

Span Depth : 1

Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU

Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU

### This shows the current adapter configuration: virtual drive and physical drive information at once. Note the Cache Policy above; it should be WriteBack for good speed, which means the cache is doing its job. WriteThrough is slow. If the battery has a problem, this WriteBack cache can fall back to WriteThrough. ###

# megacli -cfgfreespaceinfo -aall

Number of DISK GROUPS: 1

DISK GROUP: 0 Number of Spans: 1

SPAN: 0 Number of Free Space Slots: 0

Exit Code: 0x00

### Shows the free space. Right now there are no idle disks. ###

# megacli -ldinfo -lall -aall

Adapter 0 — Virtual Drive Information:

Virtual Drive: 0 (Target Id: 0)

Name :

RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0

Size : 278.875 GB

Sector Size : 512

Parity Size : 0

State : Optimal

Strip Size : 64 KB

Number Of Drives : 1

Span Depth : 1

Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU

### From all the information above, this shows only the logical drive (virtual drive) part. ###

# megacli -ldpdinfo -aall | grep -i 'enclosur\|slot'

Enclosure Device ID: 32

Slot Number: 0

# megacli -pdrbld -showprog -physdrv [32:0] -aall

Device(Encl-32 Slot-0) is not in rebuild process

Exit Code: 0x00

### This is the command this post was really about: viewing the RAID controller's rebuild progress. After finding the enclosure and slot number above, you can see the RAID rebuild status for that physical drive. Here it means there is no rebuild in progress; if there were, it would show the percentage completed and the time remaining. ###

# megacli -pdrbld -progdsply -physdrv [32:0] -aall

Device(Encl-32 Slot-0) is not in rebuild process

### Same as above, but it keeps displaying the progress continuously. Useful. ###

The manual above also has various configuration options, such as creating RAID sets, plus the manual for the GUI tool called MSM (MegaRAID Storage Manager). Using MSM would be more convenient, but today I'm only looking at the CLI tool.

ovirt-engine routing table addition

glusterfs storage console glusterfs server를 연동하게 되면 vdsm에 의해 /etc/sysconfig/network-script/{route-ovirtmgmt,rule-ovirtmgmt,ifcfg-ovirtmgmt} 파일이 생기게 되면서 이전에 있던 routing table 설정이 없어지게 된다.

At that point the routing table has to be set up again, and ovirt.org suggests the following approach.

https://www.ovirt.org/develop/release-management/features/network/multiple-gateways/

Following that approach…

# echo 200 ovirtmgmt-table >> /etc/iproute2/rt_tables

# cat >> /etc/sysconfig/network-scripts/route-ovirtmgmt

172.16.201.114/32 via 192.168.127.220 dev ovirtmgmt table ovirtmgmt-table

# cat >> /etc/sysconfig/network-scripts/rule-ovirtmgmt

from 172.16.201.114/32 table ovirtmgmt-table

from all to 172.16.201.114/32 table ovirtmgmt-table

# ifup ovirtmgmt

# ip ru

0: from all lookup local

32758: from all to 172.16.201.114 lookup ovirtmgmt-table

32759: from 172.16.201.114 lookup ovirtmgmt-table

32760: from all to 192.168.127.0/24 iif ovirtmgmt lookup 3232268273

32761: from 192.168.127.0/24 lookup 3232268273

32762: from all to 192.168.127.0/24 iif ovirtmgmt lookup 3232268273

32763: from 192.168.127.0/24 lookup 3232268273

32764: from all to 192.168.127.0/24 iif ovirtmgmt lookup 3232268273

32765: from 192.168.127.0/24 lookup 3232268273

32766: from all lookup main

32767: from all lookup default

# ip r l t ovirtmgmt-table

172.16.201.114 via 192.168.127.220 dev ovirtmgmt

This takes care of it.

A new routing table called ovirtmgmt-table is created, routing policy rules that use that table are added, and then a route for the 172.16.201.114/32 network is configured in the ovirtmgmt-table routing table.

172.16.201.114/32 default gateway가 아닌 다른 쪽으로 보내고싶은 network를 지정해주며,

192.168.127.220 172.16.201.114/32 nexthop 이다.

Perhaps because of vdsm-network.service, the network interface configuration created by oVirt (ifcfg-ovirtmgmt and friends) gets refreshed on reboot. But route-ovirtmgmt and rule-ovirtmgmt are not deleted, so putting the settings there lets you keep the routing table you want.

만약 glusterfs serverconsole에서 땟다가 다시 붙이게 되면 위의 routerule file은 지워지고 다시 만들어지게 된다.

vdbench test script multi host

### read, write mix ###

#hd=default,user=root,shell=ssh
hd=default,user=root,shell=ssh,jvm=2
hd=vm1,system=rhgs1
hd=vm2,system=rhgs2
hd=vm3,system=rhgs3
hd=vm4,system=rhgs4
#hd=vm5,system=rhgs5
#hd=vm6,system=rhgs6
#hd=vm7,system=rhgs7
#hd=vm8,system=rhgs8

### fsd start ###
fsd=fs-read-32k,anchor=/storage-test/read-32k/dir,count=(1,8),depth=2,width=2,files=3,size=32k
fsd=fs-read-64k,anchor=/storage-test/read-64k/dir,count=(1,8),depth=2,width=2,files=3,size=64k
fsd=fs-read-512k,anchor=/storage-test/read-512k/dir,count=(1,8),depth=2,width=2,files=24,size=512k
fsd=fs-read-1024k,anchor=/storage-test/read-1024k/dir,count=(1,8),depth=2,width=2,files=18,size=1m
fsd=fs-read-10240k,anchor=/storage-test/read-10240k/dir,count=(1,8),depth=2,width=2,files=6,size=1m
fsd=fs-read-51200k,anchor=/storage-test/read-51200k/dir,count=(1,8),depth=2,width=2,files=6,size=5m

fsd=fs-write-32k,anchor=/storage-test/write-32k/dir,count=(1,8),depth=2,width=2,files=2,size=32k
fsd=fs-write-64k,anchor=/storage-test/write-64k/dir,count=(1,8),depth=2,width=2,files=2,size=64k
fsd=fs-write-512k,anchor=/storage-test/write-512k/dir,count=(1,8),depth=2,width=2,files=16,size=512k
fsd=fs-write-1024k,anchor=/storage-test/write-1024k/dir,count=(1,8),depth=2,width=2,files=12,size=1m
fsd=fs-write-10240k,anchor=/storage-test/write-10240k/dir,count=(1,8),depth=2,width=2,files=4,size=1m
fsd=fs-write-51200k,anchor=/storage-test/write-51200k/dir,count=(1,8),depth=2,width=2,files=4,size=5m
### fsd start ###
#### fwd start ####

fwd=fw-vm1-32k-read,host=vm1,fsd=fs-read-32k1,operation=read,fileselect=random,threads=1
fwd=fw-vm1-64k-read,host=vm1,fsd=fs-read-64k1,operation=read,fileselect=random,threads=1
fwd=fw-vm1-512k-read,host=vm1,fsd=fs-read-512k1,operation=read,fileselect=random,threads=1
fwd=fw-vm1-1024k-read,host=vm1,fsd=fs-read-1024k1,operation=read,fileselect=random,threads=1
fwd=fw-vm1-10240k-read,host=vm1,fsd=fs-read-10240k1,operation=read,fileselect=random,threads=1
fwd=fw-vm1-51200k-read,host=vm1,fsd=fs-read-51200k1,operation=read,fileselect=random,threads=1
fwd=fw-vm1-32k-write,host=vm1,fsd=fs-write-32k1,operation=write,fileselect=random,threads=1
fwd=fw-vm1-64k-write,host=vm1,fsd=fs-write-64k1,operation=write,fileselect=random,threads=1
fwd=fw-vm1-512k-write,host=vm1,fsd=fs-write-512k1,operation=write,fileselect=random,threads=1
fwd=fw-vm1-1024k-write,host=vm1,fsd=fs-write-1024k1,operation=write,fileselect=random,threads=1
fwd=fw-vm1-10240k-write,host=vm1,fsd=fs-write-10240k1,operation=write,fileselect=random,threads=1
fwd=fw-vm1-51200k-write,host=vm1,fsd=fs-write-51200k1,operation=write,fileselect=random,threads=1
fwd=fw-vm2-32k-read,host=vm2,fsd=fs-read-32k2,operation=read,fileselect=random,threads=1
fwd=fw-vm2-64k-read,host=vm2,fsd=fs-read-64k2,operation=read,fileselect=random,threads=1
fwd=fw-vm2-512k-read,host=vm2,fsd=fs-read-512k2,operation=read,fileselect=random,threads=1
fwd=fw-vm2-1024k-read,host=vm2,fsd=fs-read-1024k2,operation=read,fileselect=random,threads=1
fwd=fw-vm2-10240k-read,host=vm2,fsd=fs-read-10240k2,operation=read,fileselect=random,threads=1
fwd=fw-vm2-51200k-read,host=vm2,fsd=fs-read-51200k2,operation=read,fileselect=random,threads=1
fwd=fw-vm2-32k-write,host=vm2,fsd=fs-write-32k2,operation=write,fileselect=random,threads=1
fwd=fw-vm2-64k-write,host=vm2,fsd=fs-write-64k2,operation=write,fileselect=random,threads=1
fwd=fw-vm2-512k-write,host=vm2,fsd=fs-write-512k2,operation=write,fileselect=random,threads=1
fwd=fw-vm2-1024k-write,host=vm2,fsd=fs-write-1024k2,operation=write,fileselect=random,threads=1
fwd=fw-vm2-10240k-write,host=vm2,fsd=fs-write-10240k2,operation=write,fileselect=random,threads=1
fwd=fw-vm2-51200k-write,host=vm2,fsd=fs-write-51200k2,operation=write,fileselect=random,threads=1
fwd=fw-vm3-32k-read,host=vm3,fsd=fs-read-32k3,operation=read,fileselect=random,threads=1
fwd=fw-vm3-64k-read,host=vm3,fsd=fs-read-64k3,operation=read,fileselect=random,threads=1
fwd=fw-vm3-512k-read,host=vm3,fsd=fs-read-512k3,operation=read,fileselect=random,threads=1
fwd=fw-vm3-1024k-read,host=vm3,fsd=fs-read-1024k3,operation=read,fileselect=random,threads=1
fwd=fw-vm3-10240k-read,host=vm3,fsd=fs-read-10240k3,operation=read,fileselect=random,threads=1
fwd=fw-vm3-51200k-read,host=vm3,fsd=fs-read-51200k3,operation=read,fileselect=random,threads=1
fwd=fw-vm3-32k-write,host=vm3,fsd=fs-write-32k3,operation=write,fileselect=random,threads=1
fwd=fw-vm3-64k-write,host=vm3,fsd=fs-write-64k3,operation=write,fileselect=random,threads=1
fwd=fw-vm3-512k-write,host=vm3,fsd=fs-write-512k3,operation=write,fileselect=random,threads=1
fwd=fw-vm3-1024k-write,host=vm3,fsd=fs-write-1024k3,operation=write,fileselect=random,threads=1
fwd=fw-vm3-10240k-write,host=vm3,fsd=fs-write-10240k3,operation=write,fileselect=random,threads=1
fwd=fw-vm3-51200k-write,host=vm3,fsd=fs-write-51200k3,operation=write,fileselect=random,threads=1
fwd=fw-vm4-32k-read,host=vm4,fsd=fs-read-32k4,operation=read,fileselect=random,threads=1
fwd=fw-vm4-64k-read,host=vm4,fsd=fs-read-64k4,operation=read,fileselect=random,threads=1
fwd=fw-vm4-512k-read,host=vm4,fsd=fs-read-512k4,operation=read,fileselect=random,threads=1
fwd=fw-vm4-1024k-read,host=vm4,fsd=fs-read-1024k4,operation=read,fileselect=random,threads=1
fwd=fw-vm4-10240k-read,host=vm4,fsd=fs-read-10240k4,operation=read,fileselect=random,threads=1
fwd=fw-vm4-51200k-read,host=vm4,fsd=fs-read-51200k4,operation=read,fileselect=random,threads=1
fwd=fw-vm4-32k-write,host=vm4,fsd=fs-write-32k4,operation=write,fileselect=random,threads=1
fwd=fw-vm4-64k-write,host=vm4,fsd=fs-write-64k4,operation=write,fileselect=random,threads=1
fwd=fw-vm4-512k-write,host=vm4,fsd=fs-write-512k4,operation=write,fileselect=random,threads=1
fwd=fw-vm4-1024k-write,host=vm4,fsd=fs-write-1024k4,operation=write,fileselect=random,threads=1
fwd=fw-vm4-10240k-write,host=vm4,fsd=fs-write-10240k4,operation=write,fileselect=random,threads=1
fwd=fw-vm4-51200k-write,host=vm4,fsd=fs-write-51200k4,operation=write,fileselect=random,threads=1
#fwd=fw-vm5-32k-read,host=vm5,fsd=fs-read-32k5,operation=read,fileselect=random,threads=1
#fwd=fw-vm5-64k-read,host=vm5,fsd=fs-read-64k5,operation=read,fileselect=random,threads=1
#fwd=fw-vm5-512k-read,host=vm5,fsd=fs-read-512k5,operation=read,fileselect=random,threads=1
#fwd=fw-vm5-1024k-read,host=vm5,fsd=fs-read-1024k5,operation=read,fileselect=random,threads=1
#fwd=fw-vm5-10240k-read,host=vm5,fsd=fs-read-10240k5,operation=read,fileselect=random,threads=1
#fwd=fw-vm5-51200k-read,host=vm5,fsd=fs-read-51200k5,operation=read,fileselect=random,threads=1
#fwd=fw-vm5-32k-write,host=vm5,fsd=fs-write-32k5,operation=write,fileselect=random,threads=1
#fwd=fw-vm5-64k-write,host=vm5,fsd=fs-write-64k5,operation=write,fileselect=random,threads=1
#fwd=fw-vm5-512k-write,host=vm5,fsd=fs-write-512k5,operation=write,fileselect=random,threads=1
#fwd=fw-vm5-1024k-write,host=vm5,fsd=fs-write-1024k5,operation=write,fileselect=random,threads=1
#fwd=fw-vm5-10240k-write,host=vm5,fsd=fs-write-10240k5,operation=write,fileselect=random,threads=1
#fwd=fw-vm5-51200k-write,host=vm5,fsd=fs-write-51200k5,operation=write,fileselect=random,threads=1
#fwd=fw-vm6-32k-read,host=vm6,fsd=fs-read-32k6,operation=read,fileselect=random,threads=1
#fwd=fw-vm6-64k-read,host=vm6,fsd=fs-read-64k6,operation=read,fileselect=random,threads=1
#fwd=fw-vm6-512k-read,host=vm6,fsd=fs-read-512k6,operation=read,fileselect=random,threads=1
#fwd=fw-vm6-1024k-read,host=vm6,fsd=fs-read-1024k6,operation=read,fileselect=random,threads=1
#fwd=fw-vm6-10240k-read,host=vm6,fsd=fs-read-10240k6,operation=read,fileselect=random,threads=1
#fwd=fw-vm6-51200k-read,host=vm6,fsd=fs-read-51200k6,operation=read,fileselect=random,threads=1
#fwd=fw-vm6-32k-write,host=vm6,fsd=fs-write-32k6,operation=write,fileselect=random,threads=1
#fwd=fw-vm6-64k-write,host=vm6,fsd=fs-write-64k6,operation=write,fileselect=random,threads=1
#fwd=fw-vm6-512k-write,host=vm6,fsd=fs-write-512k6,operation=write,fileselect=random,threads=1
#fwd=fw-vm6-1024k-write,host=vm6,fsd=fs-write-1024k6,operation=write,fileselect=random,threads=1
#fwd=fw-vm6-10240k-write,host=vm6,fsd=fs-write-10240k6,operation=write,fileselect=random,threads=1
#fwd=fw-vm6-51200k-write,host=vm6,fsd=fs-write-51200k6,operation=write,fileselect=random,threads=1
#fwd=fw-vm7-32k-read,host=vm7,fsd=fs-read-32k7,operation=read,fileselect=random,threads=1
#fwd=fw-vm7-64k-read,host=vm7,fsd=fs-read-64k7,operation=read,fileselect=random,threads=1
#fwd=fw-vm7-512k-read,host=vm7,fsd=fs-read-512k7,operation=read,fileselect=random,threads=1
#fwd=fw-vm7-1024k-read,host=vm7,fsd=fs-read-1024k7,operation=read,fileselect=random,threads=1
#fwd=fw-vm7-10240k-read,host=vm7,fsd=fs-read-10240k7,operation=read,fileselect=random,threads=1
#fwd=fw-vm7-51200k-read,host=vm7,fsd=fs-read-51200k7,operation=read,fileselect=random,threads=1
#fwd=fw-vm7-32k-write,host=vm7,fsd=fs-write-32k7,operation=write,fileselect=random,threads=1
#fwd=fw-vm7-64k-write,host=vm7,fsd=fs-write-64k7,operation=write,fileselect=random,threads=1
#fwd=fw-vm7-512k-write,host=vm7,fsd=fs-write-512k7,operation=write,fileselect=random,threads=1
#fwd=fw-vm7-1024k-write,host=vm7,fsd=fs-write-1024k7,operation=write,fileselect=random,threads=1
#fwd=fw-vm7-10240k-write,host=vm7,fsd=fs-write-10240k7,operation=write,fileselect=random,threads=1
#fwd=fw-vm7-51200k-write,host=vm7,fsd=fs-write-51200k7,operation=write,fileselect=random,threads=1
#fwd=fw-vm8-32k-read,host=vm8,fsd=fs-read-32k8,operation=read,fileselect=random,threads=1
#fwd=fw-vm8-64k-read,host=vm8,fsd=fs-read-64k8,operation=read,fileselect=random,threads=1
#fwd=fw-vm8-512k-read,host=vm8,fsd=fs-read-512k8,operation=read,fileselect=random,threads=1
#fwd=fw-vm8-1024k-read,host=vm8,fsd=fs-read-1024k8,operation=read,fileselect=random,threads=1
#fwd=fw-vm8-10240k-read,host=vm8,fsd=fs-read-10240k8,operation=read,fileselect=random,threads=1
#fwd=fw-vm8-51200k-read,host=vm8,fsd=fs-read-51200k8,operation=read,fileselect=random,threads=1
#fwd=fw-vm8-32k-write,host=vm8,fsd=fs-write-32k8,operation=write,fileselect=random,threads=1
#fwd=fw-vm8-64k-write,host=vm8,fsd=fs-write-64k8,operation=write,fileselect=random,threads=1
#fwd=fw-vm8-512k-write,host=vm8,fsd=fs-write-512k8,operation=write,fileselect=random,threads=1
#fwd=fw-vm8-1024k-write,host=vm8,fsd=fs-write-1024k8,operation=write,fileselect=random,threads=1
#fwd=fw-vm8-10240k-write,host=vm8,fsd=fs-write-10240k8,operation=write,fileselect=random,threads=1
#fwd=fw-vm8-51200k-write,host=vm8,fsd=fs-write-51200k8,operation=write,fileselect=random,threads=1

#### fwd end ####

### rd start ###
#rd=rd1,fwd=fw-vm*,fwdrate=max,format=yes,elapsed=10,interval=1,xfersize=4k

rd=rd-8k,fwd=fw-vm*,fwdrate=max,format=no,elapsed=10,interval=1,xfersize=8k
rd=rd-16k,fwd=fw-vm*,fwdrate=max,format=no,elapsed=10,interval=1,xfersize=16k
rd=rd-32k,fwd=fw-vm*,fwdrate=max,format=no,elapsed=10,interval=1,xfersize=32k
rd=rd-64k,fwd=fw-vm*,fwdrate=max,format=no,elapsed=10,interval=1,xfersize=64k
### rd end ###

#rd=test-rd,fwd=test-fwd,fwdrate=max,format=yes,operations=read,foroperations=(read,write,delete,rmdir),fordepth=(5-10,1),forwidth=(5-10,1),forfiles=(5-10,1),forsizes=(5-10,1),fortotal=(5g,10g),forxfersizs=(1k-512k,d)
#rd=rd1,fwd=fw-vm*,fwdrate=max,format=yes,elapsed=10,interval=1,xfersize=4k
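To actually run one of these parameter files from the master node, the usual vdbench invocation looks like the following (a sketch; read-write-mix.parm is whatever name you saved the config above under, and vdbench has to be unpacked at the same path on every host named in the hd= lines):

# ./vdbench -f read-write-mix.parm -o /root/vdbench-out/run1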

 

### read and write ### 

#hd=default,user=root,shell=ssh
hd=default,user=root,shell=ssh,jvm=2
hd=vm1,system=rhgs1
hd=vm2,system=rhgs2
hd=vm3,system=rhgs3
hd=vm4,system=rhgs4
#hd=vm5,system=rhgs5
#hd=vm6,system=rhgs6
#hd=vm7,system=rhgs7
#hd=vm8,system=rhgs8
#hd=vm9,system=rhgs9
#hd=vm10,system=rhgs10

### fsd start ###
fsd=fs-32k,anchor=/storage-test/fs-32k/dir,count=(1,10),depth=2,width=2,files=3,size=32k
fsd=fs-512k,anchor=/storage-test/fs-512k/dir,count=(1,10),depth=2,width=2,files=3,size=512k
### fsd end ###
#### fwd start ####
fwd=fw-32k-read-1,host=vm1,fsd=fs-32k1,operation=read,fileselect=random,threads=1
fwd=fw-32k-read-2,host=vm2,fsd=fs-32k2,operation=read,fileselect=random,threads=1
fwd=fw-32k-read-3,host=vm3,fsd=fs-32k3,operation=read,fileselect=random,threads=1
fwd=fw-32k-read-4,host=vm4,fsd=fs-32k4,operation=read,fileselect=random,threads=1
#fwd=fw-32k-read-5,host=vm5,fsd=fs-32k5,operation=read,fileselect=random,threads=1
#fwd=fw-32k-read-6,host=vm6,fsd=fs-32k6,operation=read,fileselect=random,threads=1
#fwd=fw-32k-read-7,host=vm7,fsd=fs-32k7,operation=read,fileselect=random,threads=1
#fwd=fw-32k-read-8,host=vm8,fsd=fs-32k8,operation=read,fileselect=random,threads=1
#fwd=fw-32k-read-9,host=vm9,fsd=fs-32k9,operation=read,fileselect=random,threads=1
#fwd=fw-32k-read-10,host=vm10,fsd=fs-32k10,operation=read,fileselect=random,threads=1
fwd=fw-32k-write-1,host=vm1,fsd=fs-32k1,operation=write,fileselect=random,threads=1
fwd=fw-32k-write-2,host=vm2,fsd=fs-32k2,operation=write,fileselect=random,threads=1
fwd=fw-32k-write-3,host=vm3,fsd=fs-32k3,operation=write,fileselect=random,threads=1
fwd=fw-32k-write-4,host=vm4,fsd=fs-32k4,operation=write,fileselect=random,threads=1
#fwd=fw-32k-write-5,host=vm5,fsd=fs-32k5,operation=write,fileselect=random,threads=1
#fwd=fw-32k-write-6,host=vm6,fsd=fs-32k6,operation=write,fileselect=random,threads=1
#fwd=fw-32k-write-7,host=vm7,fsd=fs-32k7,operation=write,fileselect=random,threads=1
#fwd=fw-32k-write-8,host=vm8,fsd=fs-32k8,operation=write,fileselect=random,threads=1
#fwd=fw-32k-write-9,host=vm9,fsd=fs-32k9,operation=write,fileselect=random,threads=1
#fwd=fw-32k-write-10,host=vm10,fsd=fs-32k10,operation=write,fileselect=random,threads=1
fwd=fw-512k-read-1,host=vm1,fsd=fs-512k1,operation=read,fileselect=random,threads=1
fwd=fw-512k-read-2,host=vm2,fsd=fs-512k2,operation=read,fileselect=random,threads=1
fwd=fw-512k-read-3,host=vm3,fsd=fs-512k3,operation=read,fileselect=random,threads=1
fwd=fw-512k-read-4,host=vm4,fsd=fs-512k4,operation=read,fileselect=random,threads=1
#fwd=fw-512k-read-5,host=vm5,fsd=fs-512k5,operation=read,fileselect=random,threads=1
#fwd=fw-512k-read-6,host=vm6,fsd=fs-512k6,operation=read,fileselect=random,threads=1
#fwd=fw-512k-read-7,host=vm7,fsd=fs-512k7,operation=read,fileselect=random,threads=1
#fwd=fw-512k-read-8,host=vm8,fsd=fs-512k8,operation=read,fileselect=random,threads=1
#fwd=fw-512k-read-9,host=vm9,fsd=fs-512k9,operation=read,fileselect=random,threads=1
#fwd=fw-512k-read-10,host=vm10,fsd=fs-512k10,operation=read,fileselect=random,threads=1
fwd=fw-512k-write-1,host=vm1,fsd=fs-512k1,operation=write,fileselect=random,threads=1
fwd=fw-512k-write-2,host=vm2,fsd=fs-512k2,operation=write,fileselect=random,threads=1
fwd=fw-512k-write-3,host=vm3,fsd=fs-512k3,operation=write,fileselect=random,threads=1
fwd=fw-512k-write-4,host=vm4,fsd=fs-512k4,operation=write,fileselect=random,threads=1
#fwd=fw-512k-write-5,host=vm5,fsd=fs-512k5,operation=write,fileselect=random,threads=1
#fwd=fw-512k-write-6,host=vm6,fsd=fs-512k6,operation=write,fileselect=random,threads=1
#fwd=fw-512k-write-7,host=vm7,fsd=fs-512k7,operation=write,fileselect=random,threads=1
#fwd=fw-512k-write-8,host=vm8,fsd=fs-512k8,operation=write,fileselect=random,threads=1
#fwd=fw-512k-write-9,host=vm9,fsd=fs-512k9,operation=write,fileselect=random,threads=1
#fwd=fw-512k-write-10,host=vm10,fsd=fs-512k10,operation=write,fileselect=random,threads=1
#### fwd end ####
### rd start ###
#rd=rd1,fwd=fw-*,fwdrate=max,format=yes,elapsed=10,interval=1,xfersize=8k
rd=rd-32k-read,fwd=fw-32k-read*,fwdrate=max,format=no,elapsed=10,interval=1,xfersize=8k
rd=rd-32k-write,fwd=fw-32k-write*,fwdrate=max,format=no,elapsed=10,interval=1,xfersize=8k
rd=rd-512k-read,fwd=fw-512k-read*,fwdrate=max,format=no,elapsed=10,interval=1,xfersize=8k
rd=rd-512k-write,fwd=fw-512k-write*,fwdrate=max,format=no,elapsed=10,interval=1,xfersize=8k
### rd end ###

#rd=test-rd,fwd=test-fwd,fwdrate=max,format=yes,operations=read,foroperations=(read,write,delete,rmdir),fordepth=(5-10,1),forwidth=(5-10,1),forfiles=(5-10,1),forsizes=(5-10,1),fortotal=(5g,10g),forxfersizs=(1k-512k,d)
#rd=rd1,fwd=fw-vm*,fwdrate=max,format=yes,elapsed=10,interval=1,xfersize=4k

 

### context create script ### 

#!/bin/bash

### fwd create script ###
#for i in {1..8}
#do
# for H in read write
# do
# for I in {32,64,512,1024,10240,51200}k
# do
# echo fwd=fw-vm$i-$I-$H,host=vm$i,fsd=fs-$H-$I$i,operation=$H,fileselect=random,threads=1
# done
# done
#done
#
#
#### rd create script ###
#for I in {8,16,32,64}k
#do
# echo rd=rd-$I,fwd=fw-vm*,fwdrate=max,format=no,elapsed=10,interval=1,xfersize=$I
#done
#
#### mkdir script
for i in `cat list`
do
  [[ ${i:0:1} == "#" ]] && continue
  for h in `echo /storage-test/{read,write}-{32,64,512,1024,10240,51200}k`
  do
    #echo "$i mkdir $h"
    ssh $i mkdir $h
  done
done
#

echo
echo "### ----------------- 32k start -----------------"
for T in `seq 1 10`
do
echo fwd=fw-32k-read-$T,host=vm$T,fsd=fs-32k$T,operation=read,fileselect=random,threads=1
done

for T in `seq 1 10`
do
echo fwd=fw-32k-write-$T,host=vm$T,fsd=fs-32k$T,operation=write,fileselect=random,threads=1
done

echo "### ----------------- 32k end -----------------"

echo
echo "### ----------------- 512k start -----------------"

for R in `seq 1 10`
do
echo fwd=fw-512k-read-$R,host=vm$R,fsd=fs-512k$R,operation=read,fileselect=random,threads=1
done

for R in `seq 1 10`
do
echo fwd=fw-512k-write-$R,host=vm$R,fsd=fs-512k$R,operation=write,fileselect=random,threads=1
done

echo
echo "### ----------------- 512k end -----------------"

vdbench filesystem multi-host example

*
* Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.
*

*
* Author: Henk Vandenbergh.
*
*
*
* This is a multi-host file system test example.
* Note that an FSD can be used only from ONE host. This to make sure that
* one host is not deleting a file that an other host is using.
*
* Note: with Vdbench 5.00 not specifying specific host names in the FWD
* will cause a nullpointer Exception.
*
*
create_anchors=yes

hd=default,user=root
hd=rhgs1,system=rhgs1,shell=ssh
hd=rhgs2,system=rhgs2,shell=ssh
hd=rhgs3,system=rhgs3,shell=ssh
hd=rhgs4,system=rhgs4,shell=ssh

fsd=rock1,anchor=/test/anchor1,depth=2,width=2,files=5,size=128k
fsd=rock2,anchor=/test/anchor2,depth=2,width=2,files=5,size=128k
fsd=rock3,anchor=/test/anchor3,depth=2,width=2,files=5,size=128k
fsd=rock4,anchor=/test/anchor4,depth=2,width=2,files=5,size=128k
fsd=rock5,anchor=/test/anchor5,depth=2,width=2,files=5,size=128k
fsd=rock6,anchor=/test/anchor6,depth=2,width=2,files=5,size=128k
fsd=rock7,anchor=/test/anchor7,depth=2,width=2,files=5,size=128k
fsd=rock8,anchor=/test/anchor8,depth=2,width=2,files=5,size=128k
fsd=rock9,anchor=/test/anchor9,depth=2,width=2,files=5,size=128k
fsd=rock10,anchor=/test/anchor10,depth=2,width=2,files=5,size=128k
fsd=rock11,anchor=/test/anchor11,depth=2,width=2,files=5,size=128k
fsd=rock12,anchor=/test/anchor12,depth=2,width=2,files=5,size=128k
fsd=rock13,anchor=/test/anchor13,depth=2,width=2,files=5,size=128k
fsd=rock14,anchor=/test/anchor14,depth=2,width=2,files=5,size=128k
fsd=rock15,anchor=/test/anchor15,depth=2,width=2,files=5,size=128k
fsd=rock16,anchor=/test/anchor16,depth=2,width=2,files=5,size=128k
fsd=rock17,anchor=/test/anchor17,depth=2,width=2,files=5,size=128k
fsd=rock18,anchor=/test/anchor18,depth=2,width=2,files=5,size=128k
fsd=rock19,anchor=/test/anchor19,depth=2,width=2,files=5,size=128k
fsd=rock20,anchor=/test/anchor20,depth=2,width=2,files=5,size=128k
fsd=rock21,anchor=/test/anchor21,depth=2,width=2,files=5,size=128k
fsd=rock22,anchor=/test/anchor22,depth=2,width=2,files=5,size=128k
fsd=rock23,anchor=/test/anchor23,depth=2,width=2,files=5,size=128k
fsd=rock24,anchor=/test/anchor24,depth=2,width=2,files=5,size=128k
fsd=rock25,anchor=/test/anchor25,depth=2,width=2,files=5,size=128k
fsd=rock26,anchor=/test/anchor26,depth=2,width=2,files=5,size=128k
fsd=rock27,anchor=/test/anchor27,depth=2,width=2,files=5,size=128k
fsd=rock28,anchor=/test/anchor28,depth=2,width=2,files=5,size=128k
fsd=rock29,anchor=/test/anchor29,depth=2,width=2,files=5,size=128k
fsd=rock30,anchor=/test/anchor30,depth=2,width=2,files=5,size=128k

# vm 1 read
fwd=ro1,host=rhgs1,fsd=rock1,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro2,host=rhgs1,fsd=rock2,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro3,host=rhgs1,fsd=rock3,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro4,host=rhgs1,fsd=rock4,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro5,host=rhgs1,fsd=rock5,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro6,host=rhgs1,fsd=rock6,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro7,host=rhgs1,fsd=rock7,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro8,host=rhgs1,fsd=rock8,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro9,host=rhgs1,fsd=rock9,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro10,host=rhgs1,fsd=rock10,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro11,host=rhgs1,fsd=rock11,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro12,host=rhgs1,fsd=rock12,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro13,host=rhgs1,fsd=rock13,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro14,host=rhgs1,fsd=rock14,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro15,host=rhgs1,fsd=rock15,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro16,host=rhgs1,fsd=rock16,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro17,host=rhgs1,fsd=rock17,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro18,host=rhgs1,fsd=rock18,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro19,host=rhgs1,fsd=rock19,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro20,host=rhgs1,fsd=rock20,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro21,host=rhgs1,fsd=rock21,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro22,host=rhgs1,fsd=rock22,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro23,host=rhgs1,fsd=rock23,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro24,host=rhgs1,fsd=rock24,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro25,host=rhgs1,fsd=rock25,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro26,host=rhgs1,fsd=rock26,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro27,host=rhgs1,fsd=rock27,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro28,host=rhgs1,fsd=rock28,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro29,host=rhgs1,fsd=rock29,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=ro30,host=rhgs1,fsd=rock30,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read

# vm 1 write
fwd=wo1,host=rhgs1,fsd=rock1,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo2,host=rhgs1,fsd=rock2,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo3,host=rhgs1,fsd=rock3,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo4,host=rhgs1,fsd=rock4,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo5,host=rhgs1,fsd=rock5,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo6,host=rhgs1,fsd=rock6,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo7,host=rhgs1,fsd=rock7,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo8,host=rhgs1,fsd=rock8,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo9,host=rhgs1,fsd=rock9,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo10,host=rhgs1,fsd=rock10,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo11,host=rhgs1,fsd=rock11,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo12,host=rhgs1,fsd=rock12,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo13,host=rhgs1,fsd=rock13,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo14,host=rhgs1,fsd=rock14,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo15,host=rhgs1,fsd=rock15,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo16,host=rhgs1,fsd=rock16,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo17,host=rhgs1,fsd=rock17,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo18,host=rhgs1,fsd=rock18,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo19,host=rhgs1,fsd=rock19,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo20,host=rhgs1,fsd=rock20,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo21,host=rhgs1,fsd=rock21,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo22,host=rhgs1,fsd=rock22,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo23,host=rhgs1,fsd=rock23,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo24,host=rhgs1,fsd=rock24,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo25,host=rhgs1,fsd=rock25,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo26,host=rhgs1,fsd=rock26,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo27,host=rhgs1,fsd=rock27,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo28,host=rhgs1,fsd=rock28,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo29,host=rhgs1,fsd=rock29,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wo30,host=rhgs1,fsd=rock30,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write

# vm 1 read 30 : write 70
fwd=rwo1,host=rhgs1,fsd=rock1,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo2,host=rhgs1,fsd=rock2,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo3,host=rhgs1,fsd=rock3,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwo4,host=rhgs1,fsd=rock4,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo5,host=rhgs1,fsd=rock5,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo6,host=rhgs1,fsd=rock6,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwo7,host=rhgs1,fsd=rock7,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo8,host=rhgs1,fsd=rock8,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo9,host=rhgs1,fsd=rock9,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwo10,host=rhgs1,fsd=rock10,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo11,host=rhgs1,fsd=rock11,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo12,host=rhgs1,fsd=rock12,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwo13,host=rhgs1,fsd=rock13,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo14,host=rhgs1,fsd=rock14,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo15,host=rhgs1,fsd=rock15,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwo16,host=rhgs1,fsd=rock16,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo17,host=rhgs1,fsd=rock17,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo18,host=rhgs1,fsd=rock18,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwo19,host=rhgs1,fsd=rock19,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo20,host=rhgs1,fsd=rock20,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo21,host=rhgs1,fsd=rock21,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwo22,host=rhgs1,fsd=rock22,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo23,host=rhgs1,fsd=rock23,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo24,host=rhgs1,fsd=rock24,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwo25,host=rhgs1,fsd=rock25,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo26,host=rhgs1,fsd=rock26,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo27,host=rhgs1,fsd=rock27,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwo28,host=rhgs1,fsd=rock28,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo29,host=rhgs1,fsd=rock29,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwo30,host=rhgs1,fsd=rock30,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write

# vm 2 read
fwd=rt1,host=rhgs1,fsd=rock1,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt2,host=rhgs2,fsd=rock2,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt3,host=rhgs1,fsd=rock3,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt4,host=rhgs2,fsd=rock4,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt5,host=rhgs1,fsd=rock5,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt6,host=rhgs2,fsd=rock6,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt7,host=rhgs1,fsd=rock7,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt8,host=rhgs2,fsd=rock8,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt9,host=rhgs1,fsd=rock9,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt10,host=rhgs2,fsd=rock10,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt11,host=rhgs1,fsd=rock11,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt12,host=rhgs2,fsd=rock12,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt13,host=rhgs1,fsd=rock13,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt14,host=rhgs2,fsd=rock14,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt15,host=rhgs1,fsd=rock15,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt16,host=rhgs2,fsd=rock16,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt17,host=rhgs1,fsd=rock17,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt18,host=rhgs2,fsd=rock18,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt19,host=rhgs1,fsd=rock19,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt20,host=rhgs2,fsd=rock20,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt21,host=rhgs1,fsd=rock21,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt22,host=rhgs2,fsd=rock22,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt23,host=rhgs1,fsd=rock23,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt24,host=rhgs2,fsd=rock24,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt25,host=rhgs1,fsd=rock25,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt26,host=rhgs2,fsd=rock26,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt27,host=rhgs1,fsd=rock27,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt28,host=rhgs2,fsd=rock28,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt29,host=rhgs1,fsd=rock29,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rt30,host=rhgs2,fsd=rock30,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read

# vm 2 write
fwd=wt1,host=rhgs1,fsd=rock1,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt2,host=rhgs2,fsd=rock2,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt3,host=rhgs1,fsd=rock3,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt4,host=rhgs2,fsd=rock4,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt5,host=rhgs1,fsd=rock5,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt6,host=rhgs2,fsd=rock6,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt7,host=rhgs1,fsd=rock7,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt8,host=rhgs2,fsd=rock8,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt9,host=rhgs1,fsd=rock9,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt10,host=rhgs2,fsd=rock10,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt11,host=rhgs1,fsd=rock11,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt12,host=rhgs2,fsd=rock12,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt13,host=rhgs1,fsd=rock13,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt14,host=rhgs2,fsd=rock14,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt15,host=rhgs1,fsd=rock15,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt16,host=rhgs2,fsd=rock16,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt17,host=rhgs1,fsd=rock17,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt18,host=rhgs2,fsd=rock18,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt19,host=rhgs1,fsd=rock19,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt20,host=rhgs2,fsd=rock20,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt21,host=rhgs1,fsd=rock21,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt22,host=rhgs2,fsd=rock22,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt23,host=rhgs1,fsd=rock23,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt24,host=rhgs2,fsd=rock24,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt25,host=rhgs1,fsd=rock25,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt26,host=rhgs2,fsd=rock26,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt27,host=rhgs1,fsd=rock27,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt28,host=rhgs2,fsd=rock28,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt29,host=rhgs1,fsd=rock29,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wt30,host=rhgs2,fsd=rock30,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write

# vm 2 read 30 : write 70
fwd=rwt1,host=rhgs1,fsd=rock1,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt2,host=rhgs2,fsd=rock2,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt3,host=rhgs1,fsd=rock3,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwt4,host=rhgs2,fsd=rock4,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt5,host=rhgs1,fsd=rock5,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt6,host=rhgs2,fsd=rock6,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwt7,host=rhgs1,fsd=rock7,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt8,host=rhgs2,fsd=rock8,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt9,host=rhgs1,fsd=rock9,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwt10,host=rhgs1,fsd=rock10,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt11,host=rhgs2,fsd=rock11,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt12,host=rhgs2,fsd=rock12,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwt13,host=rhgs1,fsd=rock13,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt14,host=rhgs2,fsd=rock14,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt15,host=rhgs1,fsd=rock15,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwt16,host=rhgs1,fsd=rock16,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt17,host=rhgs2,fsd=rock17,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt18,host=rhgs2,fsd=rock18,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwt19,host=rhgs1,fsd=rock19,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt20,host=rhgs2,fsd=rock20,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt21,host=rhgs1,fsd=rock21,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwt22,host=rhgs1,fsd=rock22,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt23,host=rhgs2,fsd=rock23,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt24,host=rhgs2,fsd=rock24,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwt25,host=rhgs1,fsd=rock25,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt26,host=rhgs2,fsd=rock26,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt27,host=rhgs1,fsd=rock27,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwt28,host=rhgs1,fsd=rock28,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt29,host=rhgs2,fsd=rock29,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwt30,host=rhgs1,fsd=rock30,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
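# (9 of the 30 fwd entries above are reads, giving roughly the intended 30:70 read:write ratio)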

# vm 4 read
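# (the 4-vm workloads spread the 30 fsd's round-robin across rhgs1..rhgs4)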
fwd=rf1,host=rhgs1,fsd=rock1,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf2,host=rhgs2,fsd=rock2,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf3,host=rhgs3,fsd=rock3,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf4,host=rhgs4,fsd=rock4,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf5,host=rhgs1,fsd=rock5,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf6,host=rhgs2,fsd=rock6,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf7,host=rhgs3,fsd=rock7,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf8,host=rhgs4,fsd=rock8,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf9,host=rhgs1,fsd=rock9,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf10,host=rhgs2,fsd=rock10,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf11,host=rhgs3,fsd=rock11,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf12,host=rhgs4,fsd=rock12,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf13,host=rhgs1,fsd=rock13,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf14,host=rhgs2,fsd=rock14,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf15,host=rhgs3,fsd=rock15,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf16,host=rhgs4,fsd=rock16,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf17,host=rhgs1,fsd=rock17,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf18,host=rhgs2,fsd=rock18,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf19,host=rhgs3,fsd=rock19,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf20,host=rhgs4,fsd=rock20,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf21,host=rhgs1,fsd=rock21,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf22,host=rhgs2,fsd=rock22,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf23,host=rhgs3,fsd=rock23,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf24,host=rhgs4,fsd=rock24,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf25,host=rhgs1,fsd=rock25,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf26,host=rhgs2,fsd=rock26,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf27,host=rhgs3,fsd=rock27,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf28,host=rhgs4,fsd=rock28,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf29,host=rhgs1,fsd=rock29,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rf30,host=rhgs2,fsd=rock30,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read

# vm 4 write
fwd=wf1,host=rhgs1,fsd=rock1,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf2,host=rhgs2,fsd=rock2,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf3,host=rhgs3,fsd=rock3,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf4,host=rhgs4,fsd=rock4,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf5,host=rhgs1,fsd=rock5,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf6,host=rhgs2,fsd=rock6,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf7,host=rhgs3,fsd=rock7,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf8,host=rhgs4,fsd=rock8,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf9,host=rhgs1,fsd=rock9,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf10,host=rhgs2,fsd=rock10,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf11,host=rhgs3,fsd=rock11,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf12,host=rhgs4,fsd=rock12,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf13,host=rhgs1,fsd=rock13,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf14,host=rhgs2,fsd=rock14,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf15,host=rhgs3,fsd=rock15,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf16,host=rhgs4,fsd=rock16,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf17,host=rhgs1,fsd=rock17,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf18,host=rhgs2,fsd=rock18,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf19,host=rhgs3,fsd=rock19,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf20,host=rhgs4,fsd=rock20,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf21,host=rhgs1,fsd=rock21,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf22,host=rhgs2,fsd=rock22,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf23,host=rhgs3,fsd=rock23,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf24,host=rhgs4,fsd=rock24,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf25,host=rhgs1,fsd=rock25,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf26,host=rhgs2,fsd=rock26,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf27,host=rhgs3,fsd=rock27,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf28,host=rhgs4,fsd=rock28,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf29,host=rhgs1,fsd=rock29,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=wf30,host=rhgs2,fsd=rock30,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write

# vm 4 read 30 : write 70
fwd=rwf1,host=rhgs1,fsd=rock1,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf2,host=rhgs2,fsd=rock2,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf3,host=rhgs3,fsd=rock3,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwf4,host=rhgs4,fsd=rock4,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf5,host=rhgs1,fsd=rock5,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf6,host=rhgs2,fsd=rock6,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwf7,host=rhgs3,fsd=rock7,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf8,host=rhgs4,fsd=rock8,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf9,host=rhgs1,fsd=rock9,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwf10,host=rhgs2,fsd=rock10,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf11,host=rhgs3,fsd=rock11,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf12,host=rhgs4,fsd=rock12,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwf13,host=rhgs1,fsd=rock13,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf14,host=rhgs2,fsd=rock14,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf15,host=rhgs3,fsd=rock15,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwf16,host=rhgs4,fsd=rock16,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf17,host=rhgs1,fsd=rock17,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf18,host=rhgs2,fsd=rock18,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwf19,host=rhgs3,fsd=rock19,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf20,host=rhgs4,fsd=rock20,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf21,host=rhgs1,fsd=rock21,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwf22,host=rhgs2,fsd=rock22,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf23,host=rhgs3,fsd=rock23,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf24,host=rhgs4,fsd=rock24,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwf25,host=rhgs1,fsd=rock25,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf26,host=rhgs2,fsd=rock26,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf27,host=rhgs3,fsd=rock27,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=read
fwd=rwf28,host=rhgs4,fsd=rock28,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf29,host=rhgs1,fsd=rock29,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
fwd=rwf30,host=rhgs2,fsd=rock30,xfersize=4k,fileio=random,fileselect=random,threads=2,operation=write
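
# The rd= entries below are the vdbench run definitions: each one selects a
# group of the fwd= workloads above by name wildcard (fwd=ro*, fwd=wt*, ...),
# drives them at the maximum possible rate (fwdrate=max), recreates the file
# structure before measuring (format=yes) and runs for 10 seconds with
# one-second reporting (elapsed=10, interval=1).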

# vm 1 run definitions
rd=rord1,fwd=ro*,fwdrate=max,format=yes,elapsed=10,interval=1
rd=word1,fwd=wo*,fwdrate=max,format=yes,elapsed=10,interval=1
rd=rword1,fwd=rwo*,fwdrate=max,format=yes,elapsed=10,interval=1

# vm 2 run definitions
rd=rtrd1,fwd=rt*,fwdrate=max,format=yes,elapsed=10,interval=1
rd=wtrd1,fwd=wt*,fwdrate=max,format=yes,elapsed=10,interval=1
rd=rwtrd1,fwd=rwt*,fwdrate=max,format=yes,elapsed=10,interval=1

# vm 4 run definitions
rd=rfrd1,fwd=rf*,fwdrate=max,format=yes,elapsed=10,interval=1
rd=wfrd1,fwd=wf*,fwdrate=max,format=yes,elapsed=10,interval=1
rd=rwfrd1,fwd=rwf*,fwdrate=max,format=yes,elapsed=10,interval=1
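
For reference, this is a minimal sketch of how a parameter file like the one above is launched (the file name rhgs_rand4k.conf and the output directory name are assumptions; the host=rhgs1..rhgs4 values in the fwd lines have to match hd= host definitions earlier in the parameter file, and vdbench starts its slave processes on those hosts over rsh or ssh depending on how the hd= entries are configured):

# ./vdbench -f rhgs_rand4k.conf -o rand4k_out

vdbench then executes each rd= entry in turn and writes the per-interval and summary reports (summary.html, logfile.html, and so on) under the directory given with -o.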