
A quick guide to cloud-init in a libvirt/KVM environment.
Having written it all down, it turned out not so quick after all...

1. image download

wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1708.qcow2.xz
xz -d CentOS-7-x86_64-GenericCloud-1708.qcow2.xz

2. Check the cloud-init version.

guestfish -a CentOS-7-x86_64-GenericCloud-1708.qcow2
run 
list-filesystems
mount /dev/sda1 /
ls /usr/share/licenses/cloud-init-*

The cloud-init version in the current CentOS 7 cloud image is 0.7.9, so we should consult the cloud-init manual for that version.
http://cloudinit.readthedocs.io/en/0.7.9/

3. datasource nocloud
http://cloudinit.readthedocs.io/en/0.7.9/topics/datasources/nocloud.html

Basically, user-data is simply arbitrary user data, and meta-data is a YAML-formatted file representing what you would find in the EC2 metadata service.
You can provide meta-data and user-data to a local VM boot via files on a vfat or iso9660 filesystem. The filesystem volume label must be cidata. In other words, a floppy disk or an ISO image works, as long as its volume label (name) is cidata.
These user-data and meta-data files must be located at the root of that filesystem (floppy disk or ISO image).

/meta-data
/user-data
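
As a minimal sketch of that layout (the instance-id, hostname, and password below are illustrative values, not from any real deployment), the two seed files can be as small as this:

```shell
# Build a minimal NoCloud seed directory: two files at the filesystem root.
seed=$(mktemp -d)

cat > "$seed/meta-data" <<'EOF'
instance-id: demo-vm
local-hostname: demo-vm
EOF

cat > "$seed/user-data" <<'EOF'
#cloud-config
password: demo123
ssh_pwauth: True
EOF

# The seed directory would then be packed into an ISO labeled "cidata", e.g.:
#   genisoimage -output seed.iso -volid cidata -joliet -r "$seed/user-data" "$seed/meta-data"
ls "$seed"
```

Changing the instance-id later makes cloud-init treat the VM as a new instance and re-run on next boot.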

4. Configuration.
http://cloudinit.readthedocs.io/en/0.7.9/index.html

cloud-init divides boot into 5 stages, executed in order:

1. Generator: enables cloud-init. To disable it, create the file /etc/cloud/cloud-init.disabled or disable the 4 related services.
2. Local: used with a local datasource (floppy, ISO image). cloud-init-local.service
3. Network: cloud-init.service, cloud_init_modules section
4. Config: cloud-config.service, cloud_config_modules section
5. Final: cloud-final.service, cloud_final_modules section

Below is the cloud-init config file found inside the CentOS 7 cloud image.
It has two base sections, users and system_info; to run the commands you want at boot, put the appropriate modules in the cloud_init_modules, cloud_config_modules, or cloud_final_modules section.
To edit this file inside the image, use vi from within guestfish.

[root@youngju3-test1 mnt]# cat /etc/cloud/cloud.cfg
users:                                    
 - default                                
                                          
disable_root: 1
ssh_pwauth:   0                           
                                          
mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
resize_rootfs_tmp: /dev                   
ssh_deletekeys:   0                       
ssh_genkeytypes:  ~                       
syslog_fix_perms: ~                       
                                          
cloud_init_modules:                       
 - migrator                               
 - bootcmd                                
 - write-files        
 - growpart                               
 - resizefs                                        
 - set_hostname                           
 - update_hostname                        
 - update_etc_hosts                       
 - rsyslog                                
 - users-groups                           
 - ssh 
                                   
cloud_config_modules:                     
 - mounts                                 
 - locale                                 
 - set-passwords                          
 - rh_subscription
 - yum-add-repo                           
 - package-update-upgrade-install         
 - timezone
 - puppet
 - chef
 - salt-minion
 - mcollective
 - disable-ec2-metadata
 - runcmd

cloud_final_modules:
 - rightscale_userdata
 - scripts-per-once
 - scripts-per-boot
 - scripts-per-instance
 - scripts-user
 - ssh-authkey-fingerprints
 - keys-to-console
 - phone-home
 - final-message
 - power-state-change

system_info:
  default_user:
    name: centos
    lock_passwd: true
    gecos: Cloud User
    groups: [wheel, adm, systemd-journal]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  distro: rhel
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd

# vim:syntax=yaml

Looking at the contents of meta-data and user-data...
For whatever reason, network config through meta-data did not work properly, so I set network: {config: disabled} in user-data and put the settings I wanted into bootcmd. If you are comfortable with shell scripting, you can do almost anything from these cmd hooks; you can even push in arbitrary binaries, although that is painful to do from cmd alone (see the manual).
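
To see what those bootcmd entries actually do to the interface file, here is the same sed/echo sequence replayed against a throwaway copy of a stock DHCP ifcfg-eth0 (the file lives in a temp dir here, not /etc/sysconfig, and the sample contents are an assumption about what the cloud image ships):

```shell
# Replay the bootcmd rewrite against a sample ifcfg-eth0 in a temp dir.
tmp=$(mktemp -d)
ifcfg="$tmp/ifcfg-eth0"

# A plausible stock DHCP interface file, as shipped in the cloud image.
cat > "$ifcfg" <<'EOF'
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
TYPE=Ethernet
PERSISTENT_DHCLIENT=1
EOF

# Same transformation as the bootcmd entries above.
echo '#user-data/bootcmd:' >> "$ifcfg"
sed -i -e '/^BOOTPROTO/ s/dhcp/static/g' -e '/PERSISTENT_DHCLIENT/d' "$ifcfg"
echo 'IPADDR=10.11.11.211' >> "$ifcfg"
echo 'NETMASK=255.255.255.0' >> "$ifcfg"
echo 'GATEWAY=10.11.11.1' >> "$ifcfg"

cat "$ifcfg"
```

The result is a static-IP interface file with the dhclient persistence line removed, which is exactly the state the ifdown/ifup pair then applies.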

[root@youngju3-test1 mnt]# cat meta-data 
instance-id: youngju3-test1
hostname: youngju3-test1
local-hostname: youngju3-test1


[root@youngju3-test1 mnt]# cat user-data                                                                                                                                 
#cloud-config                                                                                                                                                            
                                                                                   
password: root123                                                                    
chpasswd:
  list: |
    root:root123
  expire: False
ssh_pwauth: True

network: {config: disabled}

# Hostname management
preserve_hostname: False
hostname: youngju3-test1
fqdn: youngju3-test1

bootcmd:
  - set -x; echo '#user-data/bootcmd:' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; sed -i -e '/^BOOTPROTO/ s/dhcp/static/g' -e '/PERSISTENT_DHCLIENT/d' /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'IPADDR=10.11.11.211' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'NETMASK=255.255.255.0' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'GATEWAY=10.11.11.1' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'DNS1=8.8.8.8' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'ONBOOT="yes"' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'NM_CONTROLLED=no' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - ifdown eth0
  - ifup eth0

# Run command when finished with it
runcmd:
  - 'systemctl disable NetworkManager'
  - 'timedatectl set-timezone Asia/Seoul'
  - 'sed -i "/server .* iburst/d" /etc/chrony.conf'
  - 'sed -i "/^# Please consider.*$/a\server time.bora.net iburst" /etc/chrony.conf'
  - 'chmod o-rwx /usr/bin/su'
  - 'sed -i "s/^PASS_MAX_DAYS.*$/PASS_MAX_DAYS   90/" /etc/login.defs'
  - 'sed -i "s/^PASS_MIN_LEN.*$/PASS_MIN_LEN    9/" /etc/login.defs'
  - 'yum install bash-completion vim *bin/netstat *bin/route -y'
  - 'touch /etc/cloud/cloud-init.disabled'
#  - 'systemctl disable cloud-init cloud-init-local cloud-config cloud-final'
#  - 'sed -i "/#PermitRootLogin yes/a\PermitRootLogin no" /etc/ssh/sshd_config'


# Configure where output will go
output:
  all: ">> /var/log/cloud-init.log"

# Install my public ssh key to the first user-defined user configured
# in cloud.cfg in the template (which is centos for CentOS cloud images)

ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD2wgI9na26l99n/Hk3S1cnYkI5W5H4k3v39TvHxpxlv60Xp5mtrEhSlbS5pjgvG574dGV+mfdi3d4cA/59KFjiRawkrBP3K93hIvunFHt0U3QBgRgexZd/ApE7Pe3aE
7TVPWs8liCzPTEjm9ZaqgxS0ZaZlTTHMFxNowKPKSQ32tslwPHbnm7QqmRgjZQdS0D9LFpRIpDz2hzBvRLc/HGMHzQ7R+zwcUKc7lx4I+9A9NRhnOKpRV1C52Avk2/eFcisLJfywDTZj0l2j8iRUUDAY4OW2xs/zHrsxK2fJ2
CBrPXZ1XV8Zkqc1FBRgkK313Bp3HxYY/vJ1xk7sk4C+HoF krd@free.style.ted
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC89+KTru1TdaKbrSBdTSJMIdW0wHL6lXHwvNkk1W77ofr/Ys/qI7PBz56IBUOMKhk/Gy1z02RqsG4i4SLH6+Oq5GZOGXG4dNhfQcfKq9ouTyJ2fwqBr/49MB+kAxyZk
CFrjffPe/VWVPYQHxT5SItqLX7e4gLRNFUNR2el1vkOyk4b20PiVZQpJS4R6CuQ0mktwKsMqEu/lb9weIqgRFn9gw4GyWXnA5cSFGting5dbXUvrrlpDIXZ9WDJFKzONbdgw7AuxUF0DPLyj5STMjgZTAhANlUbdkorxwAMJx
5Al6KGtypwdzgEndXD2Z9CdEsV7ZZLfnEhpIw7JArzxo2F root@cloud-test02.osci.kr

disable_root: 0

5. Create the VM instance.

Let's build the ISO file to attach to the VM. The crucial part: the volume ID (volid) must be cidata, and the files inside must be named user-data and meta-data.

# genisoimage -output test.iso -volid cidata -joliet -r user-data meta-data

Also create the boot image the VM will boot from.

# qemu-img create -b CentOS-7-x86_64-GenericCloud-1708.qcow2 -f qcow2 test.qcow2 50G

Now create the VM.

virt-install --import \
--name testvm \
--ram 1024 \
--vcpus 2 \
--os-type linux \
--os-variant rhel7 \
--cpu host-passthrough \
--disk test.qcow2,format=qcow2,bus=virtio \
--disk test.iso,device=cdrom \
--network network=youngju-net-10,model=virtio \
--graphics none \
--hvm \
--virt-type kvm \
--noautoconsole

Check the VM instance.

# virsh console testvm

If you followed the settings above, the root password will be root123.
Log in and verify the configuration.

Now let's automate this whole process.

[root@kvm2 createGuestVM]# cat vars/guest.yml                        
---                                                                                 
#vm_name:                                 
image_file_path: /youngju/vms/              
#network_ipaddr: 10.11.11.251                                                                                                                                            
                                                                                                                                                                         
# HW                                                                   
#cpu: 2                                                                                                                                                                  
#mem: 8192                                                                                                                                                               
disk_size: 50      


                                                           
[root@kvm2 createGuestVM]# cat vars/default.yml                                                                                                                          
---                                                                                                                                                                      
# KVM Hypervisor info                                        
host: 192.168.92.1                             
virt_type: kvm                                 
virt_hypervisor: hvm                         
                                                                                 
# Default network config for intance      
network:                                  
    bridge: br0                                                      
    net_name1: youngju-net-10                                                       
    net_name2: youngju-net-20                                                       
    net_name3: youngju-net-30               
    net_name4: youngju-net-40                                                                                                                                            
    net_name5: youngju-net-50                                                                                                                                            
    interface: eth0                                                    
    netmask: 255.255.255.0                                                                                                                                               
    gateway: 10.11.11.1

# for default user(centos for CentOS, cloud-user for RHEL)
password: root123

os:
    type: linux
    variant: rhel7
disk:                                     
    cloud_init: cloud-init.iso

    # RHEL Cloud Image
#    cloud_image: /root/createGuestVM/images/rhel-guest-image-7.3-35.x86_64.qcow2

    # CentOS Cloud Image
    cloud_image: /youngju/vms/CentOS-7-x86_64-GenericCloud-1708.qcow2

# Default ssh key
ssh_key1: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD2wgI9na26l99n/Hk3S1cnYkI5W5H4k3v39TvHxpxlv60Xp5mtrEhSlbS5pjgvG574dGV+mfdi3d4cA/59KFjiRawkrBP3K93hIvunFHt0U3QBgRgexZd/ApE
7Pe3aE7TVPWs8liCzPTEjm9ZaqgxS0ZaZlTTHMFxNowKPKSQ32tslwPHbnm7QqmRgjZQdS0D9LFpRIpDz2hzBvRLc/HGMHzQ7R+zwcUKc7lx4I+9A9NRhnOKpRV1C52Avk2/eFcisLJfywDTZj0l2j8iRUUDAY4OW2xs/zHrs
xK2fJ2CBrPXZ1XV8Zkqc1FBRgkK313Bp3HxYY/vJ1xk7sk4C+HoF krd@free.style.ted
ssh_key2: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC89+KTru1TdaKbrSBdTSJMIdW0wHL6lXHwvNkk1W77ofr/Ys/qI7PBz56IBUOMKhk/Gy1z02RqsG4i4SLH6+Oq5GZOGXG4dNhfQcfKq9ouTyJ2fwqBr/49MB+
kAxyZkCFrjffPe/VWVPYQHxT5SItqLX7e4gLRNFUNR2el1vkOyk4b20PiVZQpJS4R6CuQ0mktwKsMqEu/lb9weIqgRFn9gw4GyWXnA5cSFGting5dbXUvrrlpDIXZ9WDJFKzONbdgw7AuxUF0DPLyj5STMjgZTAhANlUbdkor
xwAMJx5Al6KGtypwdzgEndXD2Z9CdEsV7ZZLfnEhpIw7JArzxo2F root@cloud-test02.osci.kr
[root@kvm2 createGuestVM]# cat templates/meta-data.j2
instance-id: {{vm_name}}                                                                                                                                                 
hostname: {{vm_name}}                                                                                                                                                    
local-hostname: {{vm_name}}   
                                                                                                                                           
[root@kvm2 createGuestVM]# cat templates/user-data.j2                                                                                                                    
#cloud-config                                                                                                                                                            
                                                                                                                                                                         
password: {{password}}                                                                                                                                                   
chpasswd:                                                                           
  list: |                                                                                                              
    root:root123                                                                                                                                 
  expire: False                                                                          
ssh_pwauth: True                          
                          
network: {config: disabled}               
                                          
# Hostname management                     
preserve_hostname: False                  
hostname: {{vm_name}}                                                                                  
fqdn: {{vm_name}}                                                                   
                                          
bootcmd:                                                                            
  - set -x; echo '#user-data/bootcmd:' >> /etc/sysconfig/network-scripts/ifcfg-eth0 
  - set -x; sed -i -e '/^BOOTPROTO/ s/dhcp/static/g' -e '/PERSISTENT_DHCLIENT/d' /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'IPADDR={{network_ipaddr}}' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'NETMASK={{network.netmask}}' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'GATEWAY={{network.gateway}}' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'DNS1=8.8.8.8' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'ONBOOT="yes"' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - set -x; echo 'NM_CONTROLLED=no' >> /etc/sysconfig/network-scripts/ifcfg-eth0
  - ifdown eth0                                                                     
  - ifup eth0                                                                       

# Run command when finished with it       
runcmd:                                   
  - 'systemctl disable NetworkManager'    
  - 'timedatectl set-timezone Asia/Seoul' 
  - 'sed -i "/server .* iburst/d" /etc/chrony.conf'                                 
  - 'sed -i "/^# Please consider.*$/a\server time.bora.net iburst" /etc/chrony.conf'                                                                                     
  - 'chmod o-rwx /usr/bin/su'             
  - 'sed -i "s/^PASS_MAX_DAYS.*$/PASS_MAX_DAYS   90/" /etc/login.defs'              
  - 'sed -i "s/^PASS_MIN_LEN.*$/PASS_MIN_LEN    9/" /etc/login.defs'                
  - 'yum install bash-completion vim *bin/netstat *bin/route -y'                    
  - 'touch /etc/cloud/cloud-init.disabled'                                          

# Configure where output will go          
output:                                   
  all: ">> /var/log/cloud-init.log"       

# Install my public ssh key to the first user-defined user configured               
# in cloud.cfg in the template (which is centos for CentOS cloud images)            

ssh_authorized_keys:                      
  - {{ssh_key1}}                          
  - {{ssh_key2}}                          

disable_root: 0
[root@kvm2 createGuestVM]# cat virt-guest-multiple-nic.yaml
---                                                                                                                                                                      
- name: manage libvirt guests                                                                                                                                            
  user: root                                                                        
  hosts: vms                                                                        
                                                                                                                            
  vars_files:                                                                       
      - vars/default.yml                                                            
      - vars/guest.yml                                                 
                                                                                           
  tasks:                                                                            
      - name: start libvirtd                                                          
        service: name=libvirtd state=started enabled=yes                            
        register: libvirtd                                                          
                                                                                      
      - name: create directory   
        file: path={{ image_file_path }} state=directory mode=0755
                                                                                                                                                                        
      - name: wait for libvirtd to get up     
        pause: seconds=30              
        when: libvirtd.changed                                                           
                                                                                              
      - name: get list of vms                                                       
        virt: command=list_vms                                         
        register: virt_vms                                                                                                  
                                                                       
      - name: create cloud-init data directory                         
        file: path=~/cloud-init/{{ vm_name }} state=directory mode=0777
                                                                                           
      - name: create user-data                                                      
        template: src=templates/user-data.j2 dest=~/cloud-init/{{ vm_name }}/user-data
                                                                                    
      - name: create meta-data                                                      
        template: src=templates/meta-data.j2 dest=~/cloud-init/{{ vm_name }}/meta-data
                                 
      - name : create cloud-init iso      
        shell: /bin/bash -c 'genisoimage -output {{ image_file_path }}/{{ vm_name }}-{{ disk.cloud_init }} -volid cidata -joliet -r ~/cloud-init/{{ vm_name }}/user-data ~/cloud-init/{{ vm_name }}/meta-data'        
                                       
     # - name: copy image                                                                
     #   command: cp -a {{ disk.cloud_image }} {{ image_file_path }}/{{ vm_name }}.qcow2      
                                                                       
     # - name: resize image               
     #   shell: qemu-img resize {{ image_file_path }}/{{ vm_name }}.qcow2 +{{ disk_size }}G
     #   when: disk_size                                                            
   
      - name: backing image create                                     
        shell: qemu-img create -b {{ disk.cloud_image }} {{ image_file_path }}/{{ vm_name }}.qcow2 -f qcow2 {{ disk_size }}G
        when: disk_size    
                                       
      - name: create vm                                                             
        command: virt-install --import        
                 --name {{ vm_name }}
                 --ram  {{ mem }}
                 --vcpus {{ cpu }}
                 --os-type {{ os.type }}
                 --os-variant {{ os.variant }}
                 --cpu host-passthrough
                 --disk {{ image_file_path }}/{{ vm_name }}.qcow2,format=qcow2,bus=virtio
                 --disk {{ image_file_path }}/{{ vm_name }}-{{ disk.cloud_init }},device=cdrom
                 --network network={{ network.net_name1 }},model=virtio
                 --network network={{ network.net_name2 }},model=virtio
                 --network network={{ network.net_name3 }},model=virtio
                 --network network={{ network.net_name4 }},model=virtio
                 --network network={{ network.net_name5 }},model=virtio
                 --graphics none
                 --{{virt_hypervisor}}    
                 --virt-type {{ virt_type }}                                        
                 --noautoconsole          
                 #--network bridge={{ network.bridge }},model=virtio                
        when: vm_name not in virt_vms.list_vms                                      
        with_items: guests

      - name: get guest info
        virt: command=info
        register: virt_info

      - name: make sure all vms are running
        virt: name={{ vm_name }} command=start
        when: virt_info[vm_name]['state'] != 'running'
        with_items: guests
[root@kvm2 createGuestVM]# cat hosts-kolla-v2 

[node]
192.168.92.1 

[vms]
youngju2-con1   ansible_host="192.168.92.1" cpu=4 mem=8192 network_ipaddr="10.11.11.11" vm_name="youngju2-con1"
youngju2-con2   ansible_host="192.168.92.1" cpu=4 mem=8192 network_ipaddr="10.11.11.12" vm_name="youngju2-con2"  
youngju2-con3   ansible_host="192.168.92.1" cpu=4 mem=8192 network_ipaddr="10.11.11.13" vm_name="youngju2-con3"  
youngju2-ceph1  ansible_host="192.168.92.1" cpu=4 mem=4096 network_ipaddr="10.11.11.31" vm_name="youngju2-ceph1" 
youngju2-ceph2  ansible_host="192.168.92.1" cpu=4 mem=4096 network_ipaddr="10.11.11.32" vm_name="youngju2-ceph2" 
youngju2-ceph3  ansible_host="192.168.92.1" cpu=4 mem=4096 network_ipaddr="10.11.11.33" vm_name="youngju2-ceph3" 
youngju2-com1   ansible_host="192.168.92.1" cpu=4 mem=4096 network_ipaddr="10.11.11.21" vm_name="youngju2-com1"  
youngju2-com2   ansible_host="192.168.92.1" cpu=4 mem=4096 network_ipaddr="10.11.11.22" vm_name="youngju2-com2"  
youngju2-deploy ansible_host="192.168.92.1" cpu=8 mem=8192 network_ipaddr="10.11.11.2" vm_name="youngju2-deploy"
[root@kvm2 createGuestVM]# ansible-playbook -i hosts-kolla-v2 virt-guest-multiple-nic.yaml
...


PLAY RECAP **************************************************************************************************************************************************************
youngju2-ceph1             : ok=11   changed=5    unreachable=0    failed=0         
youngju2-ceph2             : ok=11   changed=5    unreachable=0    failed=0         
youngju2-ceph3             : ok=11   changed=5    unreachable=0    failed=0         
youngju2-com1              : ok=11   changed=5    unreachable=0    failed=0         
youngju2-com2              : ok=11   changed=5    unreachable=0    failed=0         
youngju2-con1              : ok=11   changed=5    unreachable=0    failed=0         
youngju2-con2              : ok=11   changed=5    unreachable=0    failed=0         
youngju2-con3              : ok=11   changed=5    unreachable=0    failed=0         
youngju2-deploy            : ok=11   changed=5    unreachable=0    failed=0

It works nicely. :)

galera cluster recovery scenario


Based on http://galeracluster.com/documentation-webpages/gettingstarted.html.

galera cluster install
Prepare 3 nodes.

For convenience, security was disabled on all 3 nodes:
# systemctl stop firewalld
# setenforce 0

- Prepare the yum repository, on all 3 nodes
[root@galera1 ~]# cat /etc/yum.repos.d/galera.repo
[galera]
name=galera
baseurl=http://ftp.kaist.ac.kr/mariadb//mariadb-10.0.33/yum/centos7-amd64/
gpgcheck=0
enabled=1
[root@galera1 ~]#

- Install, on all 3 nodes
# yum install MariaDB-Galera-server MariaDB-client galera

- Config file (check node name and node address)
[root@galera1 ~]# grep -iv '^#\|^$' /etc/my.cnf
[client-server]
!includedir /etc/my.cnf.d
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
bind-address=0.0.0.0
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="gcache.size=300M; gcache.page_size=300M"
wsrep_cluster_name="youngju_cluster"
wsrep_cluster_address="gcomm://galera1,galera2,galera3"
wsrep_sst_method=rsync
wsrep_node_name=galera1
wsrep_node_address="10.11.10.171"
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@galera1 ~]#

- Start mysql with wsrep
[root@galera1 ~]# service mysql start --wsrep-new-cluster
[root@galera1 ~]# tail -f /var/lib/mysql/galera1.test.dom.err

[root@galera1 mysql]# mysql -e "show status like 'wsrep_%'; " |grep -i 'size\|stat'
wsrep_local_state_uuid 0889b7dd-d107-11e7-a4ab-5fe94a6d7f29
wsrep_local_state 4
wsrep_local_state_comment Synced
wsrep_cert_index_size 0
wsrep_evs_state OPERATIONAL
wsrep_cluster_size 1
wsrep_cluster_state_uuid 0889b7dd-d107-11e7-a4ab-5fe94a6d7f29
wsrep_cluster_status Primary
[root@galera1 mysql]#
- node 2
[root@galera2 ~]# grep -iv '^$\|^#' /etc/my.cnf
[client-server]
!includedir /etc/my.cnf.d
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
bind-address=0.0.0.0
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="gcache.size=300M; gcache.page_size=300M"
wsrep_cluster_name="youngju_cluster"
wsrep_cluster_address="gcomm://galera1,galera2,galera3"
wsrep_sst_method=rsync
wsrep_node_name=galera2
wsrep_node_address="10.11.10.172"
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@galera2 ~]#

[root@galera2 ~]# service mysql start
[root@galera2 mysql]# mysql -e "show status like 'wsrep_%'; " |grep -i 'size\|stat'
wsrep_local_state_uuid 0889b7dd-d107-11e7-a4ab-5fe94a6d7f29
wsrep_local_state 4
wsrep_local_state_comment Synced
wsrep_cert_index_size 0
wsrep_evs_state OPERATIONAL
wsrep_cluster_size 2
wsrep_cluster_state_uuid 0889b7dd-d107-11e7-a4ab-5fe94a6d7f29
wsrep_cluster_status Primary
[root@galera2 mysql]#
- node 3
[root@galera3 ~]# grep -iv '^$\|^#' /etc/my.cnf
[client-server]
!includedir /etc/my.cnf.d
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
bind-address=0.0.0.0
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="gcache.size=300M; gcache.page_size=300M"
wsrep_cluster_name="youngju_cluster"
wsrep_cluster_address="gcomm://galera1,galera2,galera3"
wsrep_sst_method=rsync
wsrep_node_name=galera3
wsrep_node_address="10.11.10.173"
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@galera3 ~]#

[root@galera3 mysql]# mysql -e "show status like 'wsrep_%'; " |grep -i 'size\|stat'
wsrep_local_state_uuid 0889b7dd-d107-11e7-a4ab-5fe94a6d7f29
wsrep_local_state 4
wsrep_local_state_comment Synced
wsrep_cert_index_size 0
wsrep_evs_state OPERATIONAL
wsrep_cluster_size 3
wsrep_cluster_state_uuid 0889b7dd-d107-11e7-a4ab-5fe94a6d7f29
wsrep_cluster_status Primary
[root@galera3 mysql]#
- galera cluster restart scenario
https://dba.stackexchange.com/questions/151941/how-to-restart-mariadb-galera-cluster
Solution 2:
Another way to restart a MariaDB Galera Cluster is to use the --wsrep-new-cluster parameter.
1) Kill all mysql processes:
killall -KILL mysql mysqld_safe mysqld mysql-systemd
2) On the most up to date node start a new cluster:
/etc/init.d/mysql start --wsrep-new-cluster
3) Now other nodes can be connected:
service mysql start --wsrep_cluster_address="gcomm://192.168.0.101,192.168.0.102,192.168.0.103" \
--wsrep_cluster_name="my_cluster"

galera cluster restart scenario in kolla-ansible

In a kolla environment, galera cluster recovery compares the seqno values in each node's grastate.dat file and brings up the node with the highest value as the mariadb bootstrap container. It does not flip safe_to_bootstrap from 0 to 1 on its own. Galera sets safe_to_bootstrap to 1 only when wsrep_cluster_size is 1, i.e. on the last surviving cluster member. So if all 3 nodes have the same seqno and safe_to_bootstrap is 0 everywhere, kolla-ansible mariadb_recovery cannot recover the cluster. Always shut the nodes down with some delay between them; taking all 3 galera nodes down at the same time can land you in exactly this awkward situation.
Here is how I worked through recovering from that state.
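
Before attempting recovery, it helps to see where each node stands; the two fields can be pulled straight out of grastate.dat. A sketch against a sample file (the real path in kolla is /var/lib/docker/volumes/mariadb/_data/grastate.dat; the uuid and seqno below are illustrative):

```shell
# Extract seqno and safe_to_bootstrap from a grastate.dat file.
f=$(mktemp)
cat > "$f" <<'EOF'
# GALERA saved state
version: 2.1
uuid:    0889b7dd-d107-11e7-a4ab-5fe94a6d7f29
seqno:   3281877
safe_to_bootstrap: 0
EOF

seqno=$(awk '/^seqno:/ {print $2}' "$f")
safe=$(awk '/^safe_to_bootstrap:/ {print $2}' "$f")
echo "seqno=$seqno safe_to_bootstrap=$safe"
```

Run this on all 3 nodes and compare: the recovery logic below hinges entirely on these two values.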

Below is the seqno-comparison snippet from the ansible playbook that kolla-ansible mariadb_recovery runs.
# cat /usr/share/kolla-ansible/ansible/roles/mariadb/tasks/recover_cluster.yml

- name: Comparing seqno value on all mariadb hosts
  shell: "if [[ {{ hostvars[inventory_hostname]['seqno'] }} -lt {{ hostvars[item]['seqno'] }} ]]; then echo {{ hostvars[item]['seqno'] }}; fi"
  with_items: "{{ groups['mariadb'] }}"
  changed_when: false
  register: seqno_compare

This compares seqno on controller01 through controller03 in order, 3 comparisons each for a total of 9; since all the values are equal, controller03 ends up becoming the bootstrap node.
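
The tie-breaking behavior can be reproduced with plain shell. This is a sketch of my reading of the observed result, not kolla's actual code: with equal seqnos, a later host in inventory order wins the tie, so controller03 becomes bootstrap (host names and the seqno value 42 are made up):

```shell
# Hypothetical seqno values per controller; equal values reproduce the tie.
hosts="controller01:42 controller02:42 controller03:42"

best_host=""; best_seqno=-2
for h in $hosts; do
    name=${h%%:*}; seqno=${h##*:}
    # ">=" means a later host wins a tie, matching the observed behavior
    # where controller03 ends up as the bootstrap node.
    if [ "$seqno" -ge "$best_seqno" ]; then
        best_host=$name; best_seqno=$seqno
    fi
done
echo "bootstrap node: $best_host (seqno $best_seqno)"
```

Change controller01's seqno to 43 and it becomes the winner instead, which is why shutdown order matters.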

In a kolla-environment galera cluster, --wsrep-new-cluster is hardcoded into the mariadb docker container on the controller03 node, so controller03 must always start first. That in turn means controller03 should always be the last node to stop, so that it holds the highest seqno.
If you run kolla-ansible mariadb_recovery -i /etc/kolla/multinode, it will hang at "TASK [mariadb : Waiting for MariaDB service to be ready]"; the cause is that mariadb on controller03 comes up and then dies immediately because safe_to_bootstrap is 0.
Below is a diff of docker inspect mariadb between the node that became bootstrap and one that did not.
[root@controller03 ~]# diff -u mariadb2 mariadb3
--- controller02-docker-inspect-mariadb 2017-11-24 15:00:08.372199741 +0900
+++ controller03-docker-inspect-mariadb 2017-11-24 14:59:47.565970708 +0900

 "StdinOnce": false,
 "Env": [
 "KOLLA_SERVICE_NAME=mariadb",
+"BOOTSTRAP_ARGS=--wsrep-new-cluster",
 "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS",
 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
 "KOLLA_BASE_DISTRO=centos",

Now, on controller03, set safe_to_bootstrap to 1 in /var/lib/docker/volumes/mariadb/_data/grastate.dat.
Then kill the mariadb containers that keep restarting on controller01 and controller02 with docker stop mariadb,
and run kolla-ansible mariadb_recovery -i /etc/kolla/multinode once more; this time it should complete cleanly.
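Flipping the flag is a one-line sed. The sketch below demonstrates it on a throw-away copy with a made-up uuid; on the real node the target is /var/lib/docker/volumes/mariadb/_data/grastate.dat.

```shell
# Build a sample grastate.dat to demonstrate on (uuid is fabricated).
GRASTATE=$(mktemp)
cat > "$GRASTATE" <<'EOF'
# GALERA saved state
version: 2.1
uuid:    00000000-0000-0000-0000-000000000000
seqno:   -1
safe_to_bootstrap: 0
EOF

# Mark this node as safe to bootstrap the cluster from.
sed -i 's/^safe_to_bootstrap:.*/safe_to_bootstrap: 1/' "$GRASTATE"
grep safe_to_bootstrap "$GRASTATE"
```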

Another case you can run into is an unclean shutdown that leaves seqno at -1 on every node.
Recovering from that takes a bit more legwork: on each node, comment out the two lines below in galera.cnf so that mariadb starts as a single node.
[root@controller02 ~]# vim /etc/kolla/mariadb/galera.cnf

#wsrep_cluster_address = gcomm://10.10.10.11:4567,10.10.10.12:4567,10.10.10.13:4567
#wsrep_provider_options = gmcast.listen_addr=tcp://10.10.10.12:4567;ist.recv_addr=10.10.10.12:4568

[root@controller02 ~]# docker start mariadb
[root@controller02 ~]# docker exec -it -u root mariadb mysql -u root -e "show status like 'wsrep_last_committed';" -p
Enter password:
+----------------------+---------+
| Variable_name        | Value   |
+----------------------+---------+
| wsrep_last_committed | 3281877 |
+----------------------+---------+
This gives you the node's committed seqno.

Once you have the seqno from all three nodes, replace /var/lib/docker/volumes/mariadb/ on controller03 with the /var/lib/docker/volumes/mariadb/ directory from the node holding the highest value.
[root@controller03 volumes]# mv /var/lib/docker/volumes/mariadb{,-`date +%Y%m%d`-backup}
[root@controller03 volumes]# ls -altd /var/lib/docker/volumes/mariadb*
drwxr-xr-x 3 42434 42434 18 11월 24 20:50 /var/lib/docker/volumes/mariadb-20171125-backup

[root@controller02 ~]# scp -rp /var/lib/docker/volumes/mariadb/ controller03:/var/lib/docker/volumes/mariadb/

[root@controller03 volumes]# chown -R 42434:42434 /var/lib/docker/volumes/mariadb

[root@cloud-deploy ~]# kolla-ansible mariadb_recovery -i /etc/kolla/multinode
Because the files were copied with scp as the root user, they are owned by root; be sure to change the owner back to 42434 (mysql).

In fact, the DB from a single node is enough to recover the whole cluster.
[root@controller01 ~]# rm -rf /var/lib/docker/volumes/mariadb/_data/*
[root@controller03 ~]# rm -rf /var/lib/docker/volumes/mariadb/_data/*
[root@controller02 ~]# scp /var/lib/docker/volumes/mariadb/_data/grastate.dat controller03:/var/lib/docker/volumes/mariadb/_data/grastate.dat
[root@controller02 ~]# scp /var/lib/docker/volumes/mariadb/_data/grastate.dat controller01:/var/lib/docker/volumes/mariadb/_data/grastate.dat
[root@controller01 ~]# sed -i -e 's/seqno.*/seqno: -1/' -e 's/safe_to_bootstrap: 1/safe_to_bootstrap: 0/' /var/lib/docker/volumes/mariadb/_data/grastate.dat
[root@controller01 ~]# chown 42434:42434 -R /var/lib/docker/volumes/mariadb/_data/grastate.dat
[root@controller03 ~]# sed -i -e 's/seqno.*/seqno: -1/' -e 's/safe_to_bootstrap: 1/safe_to_bootstrap: 0/' /var/lib/docker/volumes/mariadb/_data/grastate.dat
[root@controller03 ~]# chown 42434:42434 -R /var/lib/docker/volumes/mariadb/_data/grastate.dat
[root@cloud-deploy ~]# kolla-ansible mariadb_recovery -i /etc/kolla/multinode
You should see the DB recover cleanly.
Note that kolla-ansible mariadb_recovery will not proceed if grastate.dat is missing, which is why the steps above scp a copy over and rewrite it into a dummy file.

ctags ansible setting

I tried ctags as an aid for reading through ansible code.

# cat ~/.ctags

--langdef=ansible
--langmap=ansible:.yml.yaml
--regex-ansible=/^[ \t]*-[ \t]*name:[ \t]*(.+)/\1/k,tasks/
--regex-ansible=/.*\{\{(.*)\}\}/\1/k,tasks/
--languages=ansible,ruby,python

The regex-ansible lines above define which tags get extracted: the first pulls out task names, the second pulls out variables.

# ctags -R .

# vim -t <tag name>

That's the basic usage.

:tselect <tag>, :ts <tag>

can then be used to select among matching tags.

Installing the FUJI XEROX DocuCentre-IV C2265 printer driver (ppd) on ubuntu 17.04

Installing this printer driver today was a real struggle.

The printer model is a "FUJI XEROX DocuCentre-IV C2265", which is not included in foomatic, the open linux printer driver collection. Had it been, installing the package, letting it pick the driver automatically and confirming it works would have been the end of it…

There is a workaround, though: take the ppd file from the Mac driver and modify it.

Here's how…

1. Install the required packages.

# apt install dmg2img libssl-dev hfsprogs build-essential

 

2. Download the Mac driver from the Fuji Xerox site

http://onlinesupport.fujixerox.com/tiles/common/hc_drivers_download.jsp?system=%27Mac%20OS%20X%2010.10%27&shortdesc=null&xcrealpath=http://onlinesupport.fujixerox.com//driver_downloads/fxmacprnps1609am106iml.dmg

I grabbed the Mac OS X 10.10 driver, roughly speaking.

Downloading that link as-is leaves you with a single file in dmg format.

 

3. Convert the dmg file into an img file with the dmg2img tool.

# dmg2img fxmacprnps1509am105iml.dmg

dmg2img v1.6.5 (c) vu1tur (to@vu1tur.eu.org)

fxmacprnps1509am105iml.dmg –> fxmacprnps1509am105iml.img

decompressing:

opening partition 0 … 100.00% ok

opening partition 1 … 100.00% ok

opening partition 2 … 100.00% ok

Archive successfully decompressed as fxmacprnps1509am105iml.img

 

4. Mount the resulting img file.

# mount -o loop -t hfsplus fxmacprnps1509am105iml.img /mnt/test

# ls /mnt/test

total 34676
drwxrwxrwx 29 root root        4096 10월 14 20:01 ..
drwxr-xr-x  1  501 80             8  9월 11  2015 .
-rw-r--r--  1  501 dialout    17530  9월 11  2015 readme.txt
-rw-r--r--  1  501 dialout 27088819  9월 11  2015 Fuji Xerox PS Plug-in Installer.pkg
----------  1 root 80       8388608  9월 10  2015 .journal
----------  1 root 80          4096  9월 10  2015 .journal_info_block
dr-xr-xr-t  1 root root           2  9월 10  2015 .HFS+ Private Directory Data?

Among these files, the one we need is Fuji Xerox PS Plug-in Installer.pkg.

 

5. Install the xar tool

Now let's take a look at Fuji Xerox PS Plug-in Installer.pkg.

# file Fuji\ Xerox\ PS\ Plug-in\ Installer.pkg

Fuji Xerox PS Plug-in Installer.pkg: xar archive version 1, SHA-1 checksum

A xar archive, apparently…

xar (eXtensible ARchiver) seems to be an archive format used on the Mac. Unpacking it requires the xar tool, installed as follows.

First, fetch the xar source from GitHub.

# mkdir ~/src/ && cd ~/src/

# git clone https://github.com/mackyle/xar.git

# cd xar/xar

# ./autogen.sh --noconfigure

# ./configure

# make

# sudo make install

If an error comes up partway through the build, install the library it is missing.

# xar --version

xar 1.6.1

If you see this, success!

 

Now let's unpack the archive.

# xar -xvf Fuji\ Xerox\ PS\ Plug-in\ Installer.pkg -C /mnt/unzip

# cd /mnt/unzip/ppd.pkg

# cp -av Payload{,.cpio.gz}

# gunzip Payload.cpio.gz

# mkdir ppd && cd ppd

# cpio -id < ../Payload.cpio

 

6. Edit the PPD file and add the printer

This leaves a directory named Library; the ppd file we want lives in Library/Printers/PPDs/Contents/Resources.

# cd Library/Printers/PPDs/Contents/Resources/

# mkdir /mnt/ppd

# cp -av 'FX DocuCentre-IV C2265 PS.gz' /mnt/ppd/

# cd /mnt/ppd

# gunzip FX\ DocuCentre-IV\ C2265\ PS.gz

# mv 'FX DocuCentre-IV C2265 PS' c2265.ppd

 

Phew… we finally have the PPD file we wanted.

Now let's tweak it a little so linux can actually use it.

# vim c2265.ppd

*APPrinterIconPath: "/Library/Printers/FujiXerox/Icons/FX DocuCentre-IV C2265.icns"

*cupsFilter: "application/vnd.cups-postscript 0 /Library/Printers/FujiXerox/Filter/FXPSACEFilter"

*APDialogExtension: "/Library/Printers/FujiXerox/PDEs/FXPSACEAccount.plugin"

*APDialogExtension: "/Library/Printers/FujiXerox/PDEs/FXPSACEImageOptions.plugin"

*APDialogExtension: "/Library/Printers/FujiXerox/PDEs/FXPSACEWatermark.plugin"

*APDialogExtension: "/Library/Printers/FujiXerox/PDEs/FXPSACEFeatures.plugin"

Replace the *cupsFilter line above with:

*cupsFilter: “application/vnd.cups-postscript 0 pstops”

and that's the change; the *APPrinterIconPath and *APDialogExtension entries are macOS-only and can simply be removed.
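The same edit can be scripted. This is a sketch, assuming the AP*/icon keys are macOS-only and safe to drop; it is demonstrated on a generated sample file, so point it at c2265.ppd for the real thing.

```shell
# Build a small sample with the same problem lines as the real PPD.
PPD=$(mktemp)
cat > "$PPD" <<'EOF'
*APPrinterIconPath: "/Library/Printers/FujiXerox/Icons/FX DocuCentre-IV C2265.icns"
*cupsFilter: "application/vnd.cups-postscript 0 /Library/Printers/FujiXerox/Filter/FXPSACEFilter"
*APDialogExtension: "/Library/Printers/FujiXerox/PDEs/FXPSACEAccount.plugin"
EOF

# Drop the macOS-only keys and route PostScript through the stock pstops filter.
sed -i \
    -e '/^\*APPrinterIconPath/d' \
    -e '/^\*APDialogExtension/d' \
    -e 's|^\*cupsFilter:.*|*cupsFilter: "application/vnd.cups-postscript 0 pstops"|' \
    "$PPD"
cat "$PPD"
```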

Now open http://localhost:631/ in a web browser and add the printer through cups, supplying our modified file at the PPD upload step.
Run a test print; it should come out fine.

iperf 를 이용한 packet loss 확인

You can check packet loss quite simply with iperf.

 

Server side (-s: server, -u: UDP, -i: report interval)

[root@testlab1 ~]# iperf -s -u -i 1
————————————————————
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
————————————————————
[ 3] local 10.65.20.12 port 5001 connected with 10.65.20.191 port 39835
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 114 MBytes 957 Mbits/sec 0.011 ms 14/81400 (0.017%)
[ 3] 1.0- 2.0 sec 114 MBytes 957 Mbits/sec 0.012 ms 0/81377 (0%)
[ 3] 2.0- 3.0 sec 114 MBytes 957 Mbits/sec 0.012 ms 0/81379 (0%)
[ 3] 3.0- 4.0 sec 114 MBytes 957 Mbits/sec 0.014 ms 0/81378 (0%)
[ 3] 4.0- 5.0 sec 114 MBytes 957 Mbits/sec 0.013 ms 0/81378 (0%)
[ 3] 5.0- 6.0 sec 114 MBytes 957 Mbits/sec 0.012 ms 0/81378 (0%)
[ 3] 6.0- 7.0 sec 114 MBytes 957 Mbits/sec 0.013 ms 0/81378 (0%)
[ 3] 7.0- 8.0 sec 114 MBytes 957 Mbits/sec 0.014 ms 0/81377 (0%)
[ 3] 8.0- 9.0 sec 114 MBytes 957 Mbits/sec 0.014 ms 0/81379 (0%)
[ 3] 9.0-10.0 sec 114 MBytes 957 Mbits/sec 0.019 ms 0/81378 (0%)
[ 3] 0.0-10.0 sec 1.11 GBytes 957 Mbits/sec 0.018 ms 14/813845 (0.0017%)

 

Client side

[root@testlab2 ~]# iperf -c testlab1 -u -i1 -b 10000m -w 16KB
————————————————————
Client connecting to testlab1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 1.18 us (kalman adjust)
UDP buffer size: 32.0 KByte (WARNING: requested 16.0 KByte)
————————————————————
[ 3] local 10.65.20.191 port 39835 connected with 10.65.20.12 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 114 MBytes 957 Mbits/sec
[ 3] 1.0- 2.0 sec 114 MBytes 957 Mbits/sec
[ 3] 2.0- 3.0 sec 114 MBytes 957 Mbits/sec
[ 3] 3.0- 4.0 sec 114 MBytes 957 Mbits/sec
[ 3] 4.0- 5.0 sec 114 MBytes 957 Mbits/sec
[ 3] 5.0- 6.0 sec 114 MBytes 957 Mbits/sec
[ 3] 6.0- 7.0 sec 114 MBytes 957 Mbits/sec
[ 3] 7.0- 8.0 sec 114 MBytes 957 Mbits/sec
[ 3] 8.0- 9.0 sec 114 MBytes 957 Mbits/sec
[ 3] 9.0-10.0 sec 114 MBytes 957 Mbits/sec
[ 3] 0.0-10.0 sec 1.11 GBytes 957 Mbits/sec
[ 3] Sent 813845 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 1.11 GBytes 957 Mbits/sec 0.017 ms 14/813845 (0.0017%)
[root@testlab2 ~]#

Both the server side and the client side show how many packets were lost!

[ 3] 0.0-10.0 sec 1.11 GBytes 957 Mbits/sec 0.018 ms 14/813845 (0.0017%)

The higher that Lost/Total figure (the 14/813845 above), the more likely something is physically wrong somewhere!
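If you want to script this check, the loss percentage can be pulled out of the report line with awk. A sketch, using the report line above as sample input:

```shell
# Sample iperf UDP server-report line (taken from the run above).
LINE='[  3] 0.0-10.0 sec 1.11 GBytes 957 Mbits/sec 0.018 ms 14/813845 (0.0017%)'

# The loss percentage is the parenthesised last field, e.g. "(0.0017%)".
LOSS=$(echo "$LINE" | awk '{gsub(/[()%]/, "", $NF); print $NF}')
echo "$LOSS"
```

Alerting on `LOSS > 0` from cron is then a one-liner away.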

IPMI sol setting

I set up a serial console over ipmi.

With this in place, an ordinary terminal shows the physical machine's boot screen, grub menu and login prompt as text, without having to stand in front of its monitor.

Even BIOS setup works.

 

Screenshot from 2017-07-31 16-33-21

<Boot screen over ipmi sol> Some people mistake this for a screenshot of the machine's own console, but it is a terminal window.

Just an ordinary gnome-terminal.

 

Here's the setup.

I used a dell R620; any server with ipmi support should probably work.

1. Configure the BIOS as follows.

Serial Communication      : On with Console Redirection via COM2
External Port Address     : Serial Device1=COM1, Serial Device2=COM2
External Serial Connector : Serial Device1
Failsafe Baud Rate        : 115200
Remote Terminal Type      : VT100/VT220

 

2. grub configuration

[root@testlab1 default]# diff -urpN /etc/default/{grub-backup,grub}

--- grub-backup 2017-07-07 13:41:06.655361133 +0900

+++ grub 2017-07-31 16:14:48.222973918 +0900

@@ -2,6 +2,8 @@ GRUB_TIMEOUT=5

GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"

GRUB_DEFAULT=saved

GRUB_DISABLE_SUBMENU=true

-GRUB_TERMINAL_OUTPUT="console"

-GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhgs/root rd.lvm.lv=rhgs/swap rhgb quiet"

+GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhgs/root rd.lvm.lv=rhgs/swap rhgb"

GRUB_DISABLE_RECOVERY="true"

+GRUB_TERMINAL="serial console"

+GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=1 --word=8 --parity=no --stop=1"

+GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS1,115200n8"

[root@testlab1 default]# grub2-mkconfig -o /boot/grub2/grub.cfg

Edit /etc/default/grub as shown above.

Lines prefixed with - are removed and lines prefixed with + are added.

Then run # `grub2-mkconfig -o /boot/grub2/grub.cfg` to regenerate grub.cfg.

 

3. IPMI sol setting

[root@testlab1 default]# ipmitool sol info 1

Info: SOL parameter ‘Payload Channel (7)’ not supported – defaulting to 0x01

Set in progress : set-complete

Enabled : true

Force Encryption : true

Force Authentication : false

Privilege Level : ADMINISTRATOR

Character Accumulate Level (ms) : 50

Character Send Threshold : 255

Retry Count : 7

Retry Interval (ms) : 480

Volatile Bit Rate (kbps) : 115.2

Non-Volatile Bit Rate (kbps) : 115.2

Payload Channel : 1 (0x01)

Payload Port : 623

[root@testlab1 default]# ipmitool sol set force-authentication true 1

[root@testlab1 default]# ipmitool sol info 1 |grep -i authen

Force Authentication : true

Set force-authentication to true.

 

 

4. ipmi user create

[root@testlab1 ~]# ipmitool user list 1

ID Name Callin Link Auth IPMI Msg Channel Priv Limit

2 root true true true ADMINISTRATOR

There is already a user named root.

I could just reset its password, but someone might be using it, so let's make a user just for me.

 

[root@testlab1 ~]# ipmitool user set name 3 admin

user id 2root가 사용하고 있으니 id 3을 이용해서 admin 계정을 만들었다.

 

[root@testlab1 ~]# ipmitool user set password 3 test123

I set the admin user's password to test123.

 

[root@testlab1 ~]# ipmitool channel setaccess 1 3 callin=on ipmi=on link=on privilege=4

[root@testlab1 ~]# ipmitool user list 1

ID Name Callin Link Auth IPMI Msg Channel Priv Limit

2 root true true true ADMINISTRATOR

3 admin true true true ADMINISTRATOR

[root@testlab1 ~]#

The admin user now has full privileges, same as root…

 

[root@testlab1 ~]# ipmitool sol payload status 1 3

User 3 on channel 1 is disabled

[root@testlab1 ~]# ipmitool sol payload enable 1 3

[root@testlab1 ~]# ipmitool sol payload status 1 3

User 3 on channel 1 is enabled

마지막으로 sol paylaodenable 시켜준다. 이거 안해주면 admin usersol 사용 불가능함.

 

5. getty setup on rhel7/centos7

[root@testlab1 default]# systemctl enable getty@ttyS1

Created symlink from /etc/systemd/system/getty.target.wants/getty@ttyS1.service to /usr/lib/systemd/system/getty@.service.

[root@testlab1 default]# systemctl status getty@ttyS1

getty@ttyS1.service – Getty on ttyS1

Loaded: loaded (/usr/lib/systemd/system/getty@.service; enabled; vendor preset: enabled)

Active: inactive (dead)

Docs: man:agetty(8)

man:systemd-getty-generator(8)

http://0pointer.de/blog/projects/serial-console.html

[root@testlab1 default]# systemctl restart getty@ttyS1

[root@testlab1 default]#

 

Now let's connect…

youngjulee@youngjulee-ThinkPad-S2:~$ ipmitool -I lanplus -H testlab1-ipmi -U admin -P test123 sol activate

[SOL Session operational. Use ~? for help]

You should see output like this.

That means it's working.

 

 

Now let's reboot the machine.

youngjulee@youngjulee-ThinkPad-S2:~$ ipmitool -I lanplus -H testlab1-ipmi -U admin -P test123 sol activate

[SOL Session operational. Use ~? for help]

[root@testlab1 ~]# reboot

[ 5215.005606] FAT-fs (sdo): unable to read boot sector to mark fs as dirty

[ 5215.016773] type=1305 audit(1501491730.976:3228): audit_pid=0 old=2081 auid=4294967295 ses=4294967295 res=1

[ 5215.018859] type=1130 audit(1501491730.978:3229): pid=1 uid=0 auid=4294967295 ses=4294967295 msg=’unit=lvm2-lvmetad comm=”s’

[ 5215.018876] type=1131 audit(1501491730.978:3230): pid=1 uid=0 auid=4294967295 ses=4294967295 msg=’unit=lvm2-lvmetad comm=”s’

[ 5215.019231] type=1130 audit(1501491730.978:3231): pid=1 uid=0 auid=4294967295 ses=4294967295 msg=’unit=auditd comm=”systemd’

[ 5215.019244] type=1131 audit(1501491730.978:3232): pid=1 uid=0 auid=4294967295 ses=4294967295 msg=’unit=auditd comm=”systemd’

[ 5215.019717] type=1130 audit(1501491730.979:3233): pid=1 uid=0 auid=4294967295 ses=4294967295 msg=’unit=systemd-user-session’

[ 5215.019730] type=1131 audit(1501491730.979:3234): pid=1 uid=0 auid=4294967295 ses=4294967295 msg=’unit=systemd-user-session’

[ 5215.020219] type=1130 audit(1501491730.979:3235): pid=1 uid=0 auid=4294967295 ses=4294967295 msg=’unit=ksm comm=”systemd” e’

[ 5215.020231] type=1131 audit(1501491730.979:3236): pid=1 uid=0 auid=4294967295 ses=4294967295 msg=’unit=ksm comm=”systemd” e’

[ 5215.021845] type=1130 audit(1501491730.981:3237): pid=1 uid=0 auid=4294967295 ses=4294967295 msg=’unit=systemd-tmpfiles-set’

[ OK [ 5223.715686] audit_printk_skb: 81 callbacks suppressed

… (snip)

[ 5229.200238] sd 1:2:1:0: [sdc] Synchronizing SCSI cache

[ 5229.263038] sd 1:2:0:0: [sdb] Synchronizing SCSI cache

[ 5229.325825] sd 0:2:0:0: [sda] Synchronizing SCSI cache

[ 5231.830824] failed to kill vid 0081/0 for device em4

[ 5232.203091] br-em3: port 1(em3) entered disabled state

[ 5232.267099] failed to kill vid 0081/0 for device em3

[ 5232.612309] failed to kill vid 0081/0 for device em2

[ 5232.957757] failed to kill vid 0081/0 for device em1

[ 5233.024440] Restarting system.

[ 5233.063405] reboot: machine restart

The reboot log scrolls by…

 

Screenshot from 2017-07-31 18-12-16

…and then the boot log shows up!

<Based on this blog post: http://coffeenix.net/board_print.php?bd_code=1767>

ubuntu touchpad disable

The touchpad is so sensitive that while typing away, a stray click would land the cursor somewhere else and my text would go into the wrong place.

I kept hunting for a function-key combination to toggle it, when of course this is something the OS should handle anyway.

Sure enough, there is a thing called xinput…

The man page describes it as a utility to configure and test X input devices.

A quick look at the usage…

root@youngjulee-ThinkPad-S2:~# xinput --list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ ETPS/2 Elantech TrackPoint id=13 [slave pointer (2)]
⎜ ↳ USB Optical Mouse id=15 [slave pointer (2)]
⎜ ↳ ETPS/2 Elantech Touchpad id=12 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Power Button id=8 [slave keyboard (3)]
↳ Sleep Button id=9 [slave keyboard (3)]
↳ Integrated Camera id=10 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=11 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=14 [slave keyboard (3)]
root@youngjulee-ThinkPad-S2:~#

Find the touchpad's id in the listing above.

It exposes a number of properties, but the one we need is simply "Device Enabled".

root@youngjulee-ThinkPad-S2:~# xinput --list-props 12
Device 'ETPS/2 Elantech Touchpad':
Device Enabled (139): 0
Coordinate Transformation Matrix (141): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
Device Accel Profile (268): 1
Device Accel Constant Deceleration (269): 2.500000
Device Accel Adaptive Deceleration (270): 1.000000

The --set-prop option then turns it on and off.

root@youngjulee-ThinkPad-S2:~# xinput --set-prop 12 "Device Enabled" 0

Finally, set an alias for convenience (note that this shadows the coreutils touch command in that shell).

root@youngjulee-ThinkPad-S2:~# alias touch
alias touch='xinput --set-prop 12 "Device Enabled"'
root@youngjulee-ThinkPad-S2:~# touch 0
root@youngjulee-ThinkPad-S2:~# touch 1

Now I can open a terminal anywhere and toggle the touchpad whenever I want!

 

RAID performance test by RAID level

 

 

This test started when someone at work claimed that "raid 6 writes everything 3 times on a write, so io drops to 1/3!"

Luckily, work had a 12-bay disk array and an idle server.

I had also taken two days off before the weekend, so there was plenty of time.

 

 

The test matrix was:

Disk : 2 to 12

Raid : 0, 10, 5, 6

R/W : read, write, R70:W30

for a total of 312 test cases.

 

 

First, the specs of the server, the HBA (disk controller) and the disk array…

1. server

PowerEdge R620

Architecture: x86_64

CPU op-mode(s): 32-bit, 64-bit

Byte Order: Little Endian

CPU(s): 24

On-line CPU(s) list: 0-23

Thread(s) per core: 2

Core(s) per socket: 6

Socket(s): 2

NUMA node(s): 2

Vendor ID: GenuineIntel

CPU family: 6

Model: 45

Model name: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz

Stepping: 7

CPU MHz: 1200.000

BogoMIPS: 4004.06

Virtualization: VT-x

L1d cache: 32K

L1i cache: 32K

L2 cache: 256K

L3 cache: 15360K

NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22

NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23

 

 

memory: 8G x 22 = 176G

Memory Device

Array Handle: 0x1000

Error Information Handle: Not Provided

Total Width: 72 bits

Data Width: 64 bits

Size: 8192 MB

Form Factor: DIMM

Set: 1

Locator: DIMM_A4

Bank Locator: Not Specified

Type: DDR3

Type Detail: Synchronous Registered (Buffered)

Speed: 1600 MHz

Manufacturer: 00CE00B300CE

Serial Number:

Asset Tag: 03132463

Part Number: M393B1G70BH0-YK0

Rank: 1

Configured Clock Speed: 1333 MHz

 

 

2. HBA

Name : PERC H810 Adapter

Slot ID : PCI Slot 1

State : Ready

Firmware Version : 21.3.2-0005

Minimum Required Firmware Version : Not Applicable

Driver Version : 06.811.02.00-rh1

Minimum Required Driver Version : Not Applicable

Storport Driver Version : Not Applicable

Minimum Required Storport Driver Version : Not Applicable

Number of Connectors : 2

Rebuild Rate : 30%

BGI Rate : 30%

Check Consistency Rate : 30%

Reconstruct Rate : 30%

Alarm State : Not Applicable

Cluster Mode : Not Applicable

SCSI Initiator ID : Not Applicable

Cache Memory Size : 1024 MB

Patrol Read Mode : Auto

Patrol Read State : Stopped

Patrol Read Rate : 30%

Patrol Read Iterations : 0

Abort Check Consistency on Error : Disabled

Allow Revertible Hot Spare and Replace Member : Enabled

Load Balance : Auto

Auto Replace Member on Predictive Failure : Disabled

Redundant Path view : Not Applicable

CacheCade Capable : Yes

Persistent Hot Spare : Disabled

Encryption Capable : Yes

Encryption Key Present : No

Encryption Mode : None

Preserved Cache : Not Applicable

Spin Down Unconfigured Drives : Disabled

Spin Down Hot Spares : Disabled

Spin Down Configured Drives : Disabled

Automatic Disk Power Saving (Idle C) : Disabled

Start Time (HH:MM) : Not Applicable

Time Interval for Spin Up (in Hours) : Not Applicable

T10 Protection Information Capable : No

Non-RAID HDD Disk Cache Policy : Not Applicable

 

 

3. OS

[root@testlab1 youngju]# lsb_release -a

LSB Version: :core-4.1-amd64:core-4.1-noarch

Distributor ID: RedHatEnterpriseServer

Description: Red Hat Enterprise Linux Server release 7.3 (Maipo)

Release: 7.3

Codename: Maipo

[root@testlab1 youngju]# uname -a

Linux testlab1 3.10.0-514.10.2.el7.x86_64 #1 SMP Mon Feb 20 02:37:52 EST 2017 x86_64 x86_64 x86_64 GNU/Linux

[root@testlab1 youngju]#

 

 

The performance test was run with vdbench.

vdbench script

sd=sd1,lun=/dev/sdc

wd=wd1,sd=sd1,rdpct=100,xfersize=512

rd=run1,wd=wd1,iorate=max,elapsed=120,interval=5,openflags=o_direct,forrdpct=(100,0,70),forxfersize=(512,4096)

 

 

4. disk 1TB, 7200rpm sas

ID : 1:0:11

Status : Ok

Name : Physical Disk 1:0:11

State : Online

Power Status : Spun Up

Bus Protocol : SAS

Media : HDD

Part of Cache Pool : Not Applicable

Remaining Rated Write Endurance : Not Applicable

Failure Predicted : No

Revision : GS0A

Driver Version : Not Applicable

Model Number : Not Applicable

T10 PI Capable : No

Certified : Yes

Encryption Capable : No

Encrypted : Not Applicable

Progress : Not Applicable

Mirror Set ID : Not Applicable

Capacity : 931.00 GB (999653638144 bytes)

Used RAID Disk Space : 1.20 GB (1288437760 bytes)

Available RAID Disk Space : 929.80 GB (998365200384 bytes)

Hot Spare : No

Vendor ID : DELL(tm)

Product ID : ST1000NM0023

Serial No. :

Part Number :

Negotiated Speed : 6.00 Gbps

Capable Speed : 6.00 Gbps

PCIe Negotiated Link Width : Not Applicable

PCIe Maximum Link Width : Not Applicable

Sector Size : 512B

Device Write Cache : Not Applicable

Manufacture Day : 07

Manufacture Week : 29

Manufacture Year : 2013

SAS Address : 5000C50056FBA12D

Non-RAID HDD Disk Cache Policy : Not Applicable

Disk Cache Policy : Not Applicable

Form Factor : Not Available

Sub Vendor : Not Available

ISE Capable : No

 

 

[root@testlab1 youngju]# omreport storage pdisk controller=1

 

 

The shell scripts used in the test:

[root@testlab1 youngju]# grep -iv '^#[a-zA-Z]' create-vdisk.sh

#!/bin/bash

RAID=${1}
DISKN=`echo $(($2-1))`
PDISK=`seq -s "," -f 1:0:%g 0 $DISKN`
SIZE=$3

echo -e "\e[93m----------- dell omsa RAID=$RAID PDISK=$2 vdisk create -----------\e[0m"
omconfig storage controller action=createvdisk controller=1 raid=r${RAID:=0} size=${SIZE:=5g} pdisk=${PDISK:=1:0:0,1:0:1} stripesize=64kb readpolicy=nra writepolicy=wt name=yj-r${RAID}-${2}disk
if [ $? = 0 ]; then
    echo -e "\e[93m----------- dell omsa RAID=$RAID PDISK=$2 vdisk create done -----------\e[0m"
else
    echo -e "\e[91m----------- dell omsa RAID=$RAID PDISK=$2 vdisk create fail -----------\e[0m"
    exit 1
fi

 

 

[root@testlab1 youngju]# grep -iv '^#[a-zA-Z]' delete-vdisk.sh

#!/bin/bash

VDISK=$1
VNAME=`bash status-vdisk.sh |grep -i name |head -n1|awk '{print $3}'`

echo -e "\e[93m----------- dell omsa vdisk ${VNAME:=no vdisk} delete -----------\e[0m"
omconfig storage vdisk action=deletevdisk controller=1 vdisk=${VDISK:=1}
echo -e "\e[93m----------- dell omsa vdisk ${VNAME:=no vdisk} delete done -----------\e[0m"

 

 

[root@testlab1 youngju]# grep -iv '^#[a-zA-Z]' status-vdisk.sh

#!/bin/bash

omreport storage vdisk controller=1 vdisk=1

 

 

[root@testlab1 youngju]# grep -iv '^[[:space:]]*#\|^$' raid-test.sh

#!/bin/bash
RSTD=test-result-`date +%Y%m%d-%H%M`
RSTD512=test-512-result-`date +%Y%m%d-%H%M`
RSTDCACHE=test-cache-result-`date +%Y%m%d-%H%M`
mkdir $RSTD512
for R in 0 10 5 6
do
    for D in `seq 2 12`
    do
        sleep 2
        bash delete-vdisk.sh
        sleep 2
        bash create-vdisk.sh $R $D 12g
        if [ $? = 0 ] ; then
            sleep 3
            omconfig storage vdisk action=slowinit controller=1 vdisk=1
            sleep 2
            while [ `bash status-vdisk.sh |grep -i state|head -n1 |awk '{print $3}'` != Ready ]
            do
                echo -e "\e[91m ----- vdisk is initializing --------\e[0m"
                bash status-vdisk.sh |grep -i progress
                sleep 5
            done
            echo
            echo -e "\e[93m ----- vdisk is initialized --------\e[0m"
            sleep 1
            vdbench/vdbench -f youngju-test.param-512 -o ${RSTD512}/raid${R}-disk${D} -w 10
            sleep 1
        else
            echo vdisk create fail raid $R disk $D
            echo
        fi
        echo -e "\e[96m----------- test raid $R disk $D done ------------\e[0m"
        echo
        echo
    done
done

[root@testlab1 youngju]#

 

 

The test was arranged to avoid cache effects as much as possible.

The results were as follows.

io/cache  raid     disk  R/W    iops
512       raid 0     2   read    514.26
512       raid 0     3   read    677.07
512       raid 0     4   read    830.22
512       raid 0     5   read    947.13
512       raid 0     6   read   1027.46
512       raid 0     7   read   1108.98
512       raid 0     8   read   1121.44
512       raid 0     9   read   1207.18
512       raid 0    10   read   1265.49
512       raid 0    11   read   1286.46
512       raid 0    12   read   1335.8
512       raid 10    4   read    815.64
512       raid 10    6   read    975.67
512       raid 10    8   read   1119.82
512       raid 10   10   read   1212.33
512       raid 10   12   read   1303.5
512       raid 5     3   read    635.9
512       raid 5     4   read    783.8
512       raid 5     5   read    911.86
512       raid 5     6   read   1010.51
512       raid 5     7   read   1074.68
512       raid 5     8   read   1145.63
512       raid 5     9   read   1187.66
512       raid 5    10   read   1242.5
512       raid 5    11   read   1289.53
512       raid 5    12   read   1310.89
512       raid 6     4   read    729.6
512       raid 6     5   read    850.5
512       raid 6     6   read    983.22
512       raid 6     7   read   1053.27
512       raid 6     8   read   1126.92
512       raid 6     9   read   1179.58
512       raid 6    10   read   1217.82
512       raid 6    11   read   1274.77
512       raid 6    12   read   1296.48

There is more data, but this much for now…

Turned into graphs…

 

 

read (graph 1)

write (graph 2)

read vs. write comparison (graph 3)

 

Those were roughly the results.

The test ran in three variants: transfer size 512 bytes, transfer size 4096 bytes, and the HBA write policy set to writeback. All three came out similar. 512 and 4096 behave alike presumably because a page is 4k either way, and the writeback run showed a slight performance gain but essentially the same shape.

 

 

The formula below gives the theoretical raw performance per RAID level, assuming zero wait time anywhere in the I/O path.

N = number of disks (8)

X = iops a single disk can deliver (125)

Reads come out at the full N×X iops at every level.

Write Raid 0 = NX = 8×125 = 1000

Write Raid 10 = NX/2 = 8×125/2 = 500

Write Raid 5 = NX/4 = 8×125/4 = 250

Write Raid 6 = NX/6 = 8×125/6 ≈ 166

Based on the site below.

https://www.storagecraft.com/blog/raid-performance/

raid 5 performs four operations per write: read the data, read the parity, write the data, write the parity, hence the division by 4. raid 6 carries one more parity block, so there are six operations and the division is by 6.
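Plugged into shell arithmetic, the write-penalty model gives (N and X as in the example above; integer division, so the raid 6 figure truncates to 166):

```shell
N=8      # number of disks
X=125    # iops a single disk can deliver

echo "read  (any level): $(( N * X ))"
echo "write raid 0 : $(( N * X ))"       # no penalty
echo "write raid 10: $(( N * X / 2 ))"   # 2 physical writes per logical write
echo "write raid 5 : $(( N * X / 4 ))"   # read data+parity, write data+parity
echo "write raid 6 : $(( N * X / 6 ))"   # two parity blocks: 6 operations
```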

 

 

My measured results came out roughly in line with this model.

write performance relative to read, by operation size

(graph 4)

 

Conclusions

1. As long as the HBA can keep up, I/O performance keeps rising as you add disks. According to a storage vendor I asked, the HBA's cache size determines how many disks it can accommodate. My setup topped out at 12 disks, so I only tested up to 12.

2. The site above advises against raid 5 for safety reasons. In practice the write-vs-read efficiency gap between raid5 and raid6 is not large: raid5 yields about 25% and raid6 about 20%, and the gap narrows as the disk count grows. So use raid 6.

 

Command runner script

A script that executes a given list of commands one line at a time.

Handy for putting together quick inspection scripts.

Put the commands, one per line, into the CMD variable.

OLDIFS=$IFS
IFS=$(echo -en "\n\b")
CMD="df -h
echo 3 > /proc/sys/vm/drop_caches
ps aux |grep -i gluster
grep -i '42 second' /var/log/messages |wc -l
grep -i 'transport endpoint is not connected' /var/log/glusterfs/${LOGFILE}.log |wc -l
top -b -n 1 |tee /root/RH/before_top-`date +%Y%m%d-%H%M%S`
free -m |tee /root/RH/before_free-`date +%Y%m%d-%H%M%S`"

for i in $CMD
do
IFS=$OLDIFS
echo -e "\e[93m------ $i execute -----------\e[0m"
eval $i
echo -e "\e[93m------ $i done -----------\e[0m"
echo
echo
sleep 1
done
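An alternative sketch: a bash array sidesteps the IFS juggling entirely, and unlike backquotes inside the double-quoted CMD string (which expand once at assignment time), each command is kept verbatim until it runs. Harmless sample commands are substituted in here:

```shell
# One command per array element; quoting inside each element stays intact.
CMDS=(
    "df -h"
    "free -m"
)

for c in "${CMDS[@]}"
do
    echo -e "\e[93m------ $c execute -----------\e[0m"
    eval "$c"
    echo -e "\e[93m------ $c done -----------\e[0m"
    echo
done
```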

vdbench rdma performance test

I ran a vdbench performance test over rdma.

vdbench-rdma-test

Conclusion: nfs wins!

All of the tests above were run over rdma.

vdbench test script
create_anchors=yes
hd=default,user=root,shell=ssh
hd=srv3,system=srv3
fsd=rdma1-$host,anchor=/mnt/nfsordma/vdbench-rdma-$host,depth=2,width=3,files=10000,size=128k
fsd=rdma2-$host,anchor=/mnt/testvol1/vdbench-rdma-$host,depth=2,width=3,files=10000,size=128k
fsd=rdma3-$host,anchor=/mnt/ltest/vdbench-rdma-$host,depth=2,width=3,files=10000,size=128k
fwd=rdma1,host=srv3,fsd=rdma1-srv3,fileio=random,fileselect=random
fwd=rdma2,host=srv3,fsd=rdma2-srv3,fileio=random,fileselect=random
fwd=rdma3,host=srv3,fsd=rdma3-srv3,fileio=random,fileselect=random
rd=1rd-rdma-create,fwd=rdma1,fwdrate=max,format=yes,elapsed=10,interval=1,forxfersize=(4k),foroperations=(read)
rd=2rd-rdma-create,fwd=rdma2,fwdrate=max,format=yes,elapsed=10,interval=1,forxfersize=(4k),foroperations=(read)
rd=3rd-rdma-create,fwd=rdma3,fwdrate=max,format=yes,elapsed=10,interval=1,forxfersize=(4k),foroperations=(read)
rd=1rd-rdma-attr,fwd=rdma1,fwdrate=max,format=no,elapsed=30,interval=1,xfersize=4k,foroperations=(getattr,setattr)
rd=2rd-rdma-attr,fwd=rdma2,fwdrate=max,format=no,elapsed=30,interval=1,xfersize=4k,foroperations=(getattr,setattr)
rd=3rd-rdma-attr,fwd=rdma3,fwdrate=max,format=no,elapsed=30,interval=1,xfersize=4k,foroperations=(getattr,setattr)
rd=1rd-rdma,fwd=rdma1,fwdrate=max,format=no,elapsed=30,interval=1,xfersize=4k,foroperations=(read,write)
rd=2rd-rdma,fwd=rdma2,fwdrate=max,format=no,elapsed=30,interval=1,xfersize=4k,foroperations=(read,write)
rd=3rd-rdma,fwd=rdma3,fwdrate=max,format=no,elapsed=30,interval=1,xfersize=4k,foroperations=(read,write)