Category Archives: cluster

ovirt-engine HA (active-backup) cluster setup, Part 3

 

Creating shared cluster resources with luci

In the luci web UI:

Manage Clusters > Resources > Add > IP Address


### !!! Leave the File System ID field empty (it is filled in automatically) ###

### Create the file system resources to match the LVM volumes above ###


Manage Clusters > rhevm-cluster > Service Groups > Add


After creating the cluster service group, click Add Resource and add the resources below, in this order:

192.168.144.30/24

ovirt ha lvm

etc-ovirt-engine

etc-pki-ovirt-engine

usr-share-ovirt-engine

usr-share-ovirt-engine-wildfly

var-lib-ovirt-engine

var-lib-pgsql

postgresql

ovirt-engine-service

apache service

### Register the resources in the order shown above (the apache service must not come before the IP Address resource) ###

### /etc/cluster/cluster.conf (example) ###

<?xml version="1.0"?>
<cluster config_version="33" name="rhevm-cluster">
  <clusternodes>
    <clusternode name="node1.test.dom" nodeid="1">
      <fence>
        <method name="xvm fence">
          <device delay="1" domain="225.0.1.12" name="kvm_xvm"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.test.dom" nodeid="2">
      <fence>
        <method name="xvm fence">
          <device delay="2" domain="225.0.1.12" name="kvm_xvm"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_xvm" name="kvm_xvm" timeout="2"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="ovirt_failover_domain" ordered="1" restricted="1">
        <failoverdomainnode name="node1.test.dom" priority="1"/>
        <failoverdomainnode name="node2.test.dom" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="192.168.144.30/24" sleeptime="10"/>
      <lvm name="ovirt ha lvm" vg_name="RHEVM"/>
      <fs device="/dev/RHEVM/etc-ovirt-engine" fsid="63050" fstype="ext4" mountpoint="/etc/ovirt-engine" name="etc-ovirt-engine" self_fence="1"/>
      <fs device="/dev/RHEVM/usr-share-ovirt-engine" fsid="45498" fstype="ext4" mountpoint="/usr/share/ovirt-engine" name="usr-share-ovirt-engine" self_fence="1"/>
      <fs device="/dev/RHEVM/usr-share-ovirt-engine-wildfly" fsid="27022" fstype="ext4" mountpoint="/usr/share/ovirt-engine-wildfly" name="usr-share-ovirt-engine-wildfly" self_fence="1"/>
      <fs device="/dev/RHEVM/var-lib-ovirt-engine" fsid="38611" fstype="ext4" mountpoint="/var/lib/ovirt-engine" name="var-lib-ovirt-engine" self_fence="1"/>
      <fs device="/dev/RHEVM/var-lib-pgsql" fsid="47186" fstype="ext4" mountpoint="/var/lib/pgsql" name="var-lib-pgsql" self_fence="1"/>
      <script file="/etc/init.d/postgresql" name="postgresql"/>
      <script file="/etc/init.d/ovirt-engine" name="ovirt-engine-service"/>
      <apache config_file="conf/httpd.conf" name="apache service" server_root="/etc/httpd" shutdown_wait="5"/>
      <fs device="/dev/RHEVM/etc-pki-ovirt-engine" fsid="8507" fstype="ext4" mountpoint="/etc/pki/ovirt-engine" name="etc-pki-ovirt-engine" self_fence="1"/>
    </resources>
    <service domain="ovirt_failover_domain" name="ovirt-ha-cluster" recovery="relocate">
      <ip ref="192.168.144.30/24"/>
      <lvm ref="ovirt ha lvm"/>
      <fs ref="etc-ovirt-engine"/>
      <fs ref="etc-pki-ovirt-engine"/>
      <fs ref="usr-share-ovirt-engine"/>
      <fs ref="usr-share-ovirt-engine-wildfly"/>
      <fs ref="var-lib-ovirt-engine"/>
      <fs ref="var-lib-pgsql"/>
      <script ref="postgresql"/>
      <script ref="ovirt-engine-service"/>
      <apache ref="apache service"/>
    </service>
  </rm>
</cluster>
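The post drives all of this through luci, but if you edit cluster.conf by hand instead, a rough sketch of how to validate and distribute the change (my addition, not shown in the original post) would be:

[root@node1 ~]# ccs_config_validate
[root@node1 ~]# cman_tool version -r

ccs_config_validate checks the file against the cluster schema, and cman_tool version -r pushes the new configuration to the other node (remember to bump config_version first).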

[root@node1 ~]# clusvcadm -r ovirt-ha-cluster

Trying to relocate service:ovirt-ha-cluster...Success

[root@node1 ~]# clustat

Cluster Status for rhevm-cluster @ Wed May 18 15:54:49 2016

Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 node1.test.dom                         1 Online, Local, rgmanager
 node2.test.dom                         2 Online, rgmanager

 Service Name                    Owner (Last)                    State
 ------- ----                    ----- ------                    -----
 service:ovirt-ha-cluster        node2.test.dom                  started

### Cluster relocation test. If the output looks like this, it succeeded. ###
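A few other rgmanager commands are handy for this kind of test; these are not from the original post, just a sketch of the usual checks:

# relocate the service to a specific node
[root@node1 ~]# clusvcadm -r ovirt-ha-cluster -m node1.test.dom
# stop and restart the whole service group
[root@node1 ~]# clusvcadm -d ovirt-ha-cluster
[root@node1 ~]# clusvcadm -e ovirt-ha-cluster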

 


ovirt-engine HA (active-backup) cluster setup, Part 2

This is Part 2. It seems the whole write-up would not upload in one post, probably because of a size limit.

 

[root@node1 ~]# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm

[root@node1 ~]# yum install -y ovirt-engine-setup

[root@node1 ~]# engine-setup

[ INFO ] Stage: Initializing

[ INFO ] Stage: Environment setup

Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']

Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20160518012835-iqr01z.log

Version: otopi-1.4.1 (otopi-1.4.1-1.el6)

[ INFO ] Stage: Environment packages setup

[ INFO ] Stage: Programs detection

[ INFO ] Stage: Environment setup

[ INFO ] Stage: Environment customization

--== PRODUCT OPTIONS ==--

Configure Engine on this host (Yes, No) [Yes]:

Configure VM Console Proxy on this host (Yes, No) [Yes]:

Configure WebSocket Proxy on this host (Yes, No) [Yes]:

--== PACKAGES ==--

[ INFO ] Checking for product updates...

[ INFO ] No product updates found

--== ALL IN ONE CONFIGURATION ==--

--== NETWORK CONFIGURATION ==--

Host fully qualified DNS name of this server [localhost]: node0.test.dom

### Be sure to use the virtual IP's hostname (node0.test.dom) here; the CA certificates are generated based on this name ###

[WARNING] Failed to resolve node0.test.dom using DNS, it can be resolved only locally

Setup can automatically configure the firewall on this system.

Note: automatic configuration of the firewall may overwrite current settings.

Do you want Setup to configure the firewall? (Yes, No) [Yes]: no

[WARNING] Failed to resolve node0.test.dom using DNS, it can be resolved only locally

[WARNING] Failed to resolve node0.test.dom using DNS, it can be resolved only locally

--== DATABASE CONFIGURATION ==--

Where is the Engine database located? (Local, Remote) [Local]:

Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.

Would you like Setup to automatically configure postgresql and create Engine database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

--== OVIRT ENGINE CONFIGURATION ==--

Application mode (Virt, Gluster, Both) [Both]:

Engine admin password:

Confirm engine admin password:

[WARNING] Password is weak: it is based on a dictionary word

Use weak password? (Yes, No) [No]: yes

--== STORAGE CONFIGURATION ==--

Default SAN wipe after delete (Yes, No) [No]:

--== PKI CONFIGURATION ==--

Organization name for certificate [test.dom]:

--== APACHE CONFIGURATION ==--

Setup can configure apache to use SSL using a certificate issued from the internal CA.

Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.

Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:

--== SYSTEM CONFIGURATION ==--

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]: no

--== MISC CONFIGURATION ==--

--== END OF CONFIGURATION ==--

[ INFO ] Stage: Setup validation

[WARNING] Cannot validate host name settings, reason: resolved host does not match any of the local addresses

[WARNING] Less than 16384MB of memory is available

--== CONFIGURATION PREVIEW ==--

Application mode : both

Default SAN wipe after delete : False

Update Firewall : False

Host FQDN : node0.test.dom

Engine database secured connection : False

Engine database host : localhost

Engine database user name : engine

Engine database name : engine

Engine database port : 5432

Engine database host name validation : False

Engine installation : True

PKI organization : test.dom

Configure local Engine database : True

Set application as default page : True

Configure Apache SSL : True

Configure VMConsole Proxy : True

Engine Host FQDN : node0.test.dom

Configure WebSocket Proxy : True

Please confirm installation settings (OK, Cancel) [OK]:

### node1 engine-setup ###

[root@node1 ~]# cat ser

postgresql

ovirt-engine

httpd

[root@node1 ~]# for i in `cat ser`; do service $i stop; chkconfig $i off; done

[root@node2 ~]# cat ser

postgresql

ovirt-engine

httpd

[root@node2 ~]# for i in `cat ser`; do service $i stop; chkconfig $i off; done

### service stop; chkconfig off ###

[root@node1 nodes]# scp -r /etc/httpd/ node2:/etc/

### apache httpd.conf sync ###

[root@node1 nodes]# lvmconf --disable-cluster

[root@node1 nodes]# grep '[[:space:]] locking_type' /etc/lvm/lvm.conf

locking_type = 1

[root@node1 ~]# grep '[[:space:]] volume_list' /etc/lvm/lvm.conf

volume_list = [ "vg_node1", "@node1.test.dom" ]

[root@node1 ~]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

[root@node2 ~]# lvmconf --disable-cluster

[root@node2 ~]# grep '[[:space:]] volume_list' /etc/lvm/lvm.conf

volume_list = [ "vg_node2", "@node2.test.dom" ]

[root@node2 ~]# grep '[[:space:]] locking_type' /etc/lvm/lvm.conf

locking_type = 1

[root@node2 ~]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

### HA-LVM configuration on node1 and node2 ###
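Leaving the RHEVM VG out of volume_list means the boot process never activates it; only rgmanager's lvm resource agent does, by tagging the VG with the owning node's hostname (classic tag-based HA-LVM). A quick check I would add here (not in the original post), once the service is running:

[root@node1 ~]# vgs -o vg_name,vg_tags RHEVM
### the VG Tags column should show the hostname of the node currently running the service ###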

Continued in Part 3... Maybe it is the screenshots, but it just will not upload in one piece.

 

ovirt-engine HA (active-backup) cluster setup, Part 1

RHEVM(ovirt-engine) HA test

References

Based on the document at https://access.redhat.com/articles/216973.

The 3.0 version of that document is better than the 3.1 version.

I spent two weeks struggling with the 3.1 document; after finally getting everything working I read the 3.0 version and found that all the things I had discovered the hard way were already spelled out there.

Following the 3.1 document as written simply will not work.

Environment

KVM instances : 2

OS version : CentOS 6.7

oVirt version : 3.6

CPU : 2

Memory : 4096 MB

Storage : 20 GB HDD

IP addresses

192.168.144.31 node1.test.dom

192.168.144.32 node2.test.dom

192.168.144.30 node0.test.dom (cluster resource virtual IP)
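DNS does not resolve these names in this lab (engine-setup warns about it later), so presumably all three entries live in /etc/hosts on both nodes. A sketch of what that file would contain:

192.168.144.31   node1.test.dom   node1
192.168.144.32   node2.test.dom   node2
192.168.144.30   node0.test.dom   node0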

After the OS installation is finished:

[root@node1 ~]# cat /etc/selinux/config |grep -v ^# | grep -v ^$

SELINUX=permissive

SELINUXTYPE=targeted

### selinux permissive ###
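The config file change only applies at the next boot; to switch the running system to permissive right away (my addition, not in the original post):

[root@node1 ~]# setenforce 0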

[root@node1 ~]# chkconfig iptables off

[root@node1 ~]# service iptables stop

### The iptables service is turned off for convenience during testing ###

[root@node1 ~]# yum update -y && yum groupinstall 'High Availability' && reboot

### Run yum update and install the High Availability group, then reboot so the updated kernel is used ###

####### Example iptables configuration ######

iptables is turned off in this post for convenience; if you keep it enabled, open the cluster ports like this:

# iptables -N RHCS

# iptables -I INPUT 1 -j RHCS

# iptables -A RHCS -p udp --dst 224.0.0.0/4 -j ACCEPT

# iptables -A RHCS -p igmp -j ACCEPT

# iptables -A RHCS -m state --state NEW -m multiport -p tcp --dports \

40040,40042,41040,41966,41967,41968,41969,14567,16851,11111,21064,50006,\

50008,50009,8084 -j ACCEPT

# iptables -A RHCS -m state --state NEW -m multiport -p udp --dports \

6809,50007,5404,5405 -j ACCEPT

# service iptables save

[root@node1 ~]# chkconfig ricci on && service ricci start

[root@node1 ~]# echo test123 | passwd ricci --stdin

Changing password for user ricci.

passwd: all authentication tokens updated successfully.

### Start the ricci service at boot and set the ricci user's password to test123 ###

[root@node1 ~]# yum install -y luci && chkconfig luci on && service luci start

### Install luci. It is better to run luci on a separate machine (another VM or elsewhere) rather than on a cluster member; having it on a member is not a fatal problem, but I did see some minor misbehavior. ###

Connect to luci from a web browser:

https://node1.test.dom:8084

Creating the cluster

Click Manage Clusters > Create and fill in the cluster details as shown below.

(screenshot: luci Create Cluster dialog)
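luci is used throughout this post, but the same cluster could also be created from the command line with ccs; a rough equivalent (my addition, assuming the ricci password set earlier) would be:

[root@node1 ~]# ccs -h node1.test.dom --createcluster rhevm-cluster
[root@node1 ~]# ccs -h node1.test.dom --addnode node1.test.dom
[root@node1 ~]# ccs -h node1.test.dom --addnode node2.test.dom
[root@node1 ~]# ccs -h node1.test.dom --sync --activate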

Creating an xvm fence device

Click Manage Clusters > rhevm-cluster > Fence Devices.

The xvm fence device was created and registered by following https://access.redhat.com/solutions/293183. Strictly speaking, fencing works even without registering it here.

(screenshots: xvm fence device configuration in luci)

It was registered as shown above.
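As a quick sanity check (my addition, assuming fence_virtd on the KVM host is already configured per the article above), each guest should be able to list the host's domains over the fence_xvm multicast channel:

[root@node1 ~]# fence_xvm -o list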

iSCSI setup

On the iSCSI server (a separate machine, not node1 or node2), create a volume as follows.

The volume was created with targetcli on RHEL 7.

/iscsi/iqn.20…u-test-domain> ls

o- iqn.2016-04.test.dom:youngju-test-domain ……………………………… [TPGs: 1]

o- tpg1 ………………………………………………… [no-gen-acls, no-auth]

o- acls ………………………………………………………….. [ACLs: 2]

| o- iqn.2016-04.test.dom:nestedkvm …………………………… [Mapped LUNs: 1]

| | o- mapped_lun0 ………………………………….. [lun0 block/youngju (rw)]

| o- iqn.2016-05.dom.test:ovirt1 ……………………………… [Mapped LUNs: 1]

| | o- mapped_lun0 …………………………………. [lun1 block/youngju2 (rw)]

| o- iqn.2016-05.dom.test:ovirt2 ……………………………… [Mapped LUNs: 1]

| | o- mapped_lun0 …………………………………. [lun1 block/youngju2 (rw)]

| o- iqn.2016-05.dom.test:rhevm1 ……………………………… [Mapped LUNs: 1]

| | o- mapped_lun2 …………………………………. [lun2 block/youngju3 (rw)]

| o- iqn.2016-05.dom.test:rhevm2 ……………………………… [Mapped LUNs: 1]

| o- mapped_lun2 …………………………………. [lun2 block/youngju3 (rw)]

o- luns ………………………………………………………….. [LUNs: 3]

| o- lun0 …………………………….. [block/youngju (/dev/vg_iscsi/youngju1)]

| o- lun1 ……………………………. [block/youngju2 (/dev/vg_iscsi/youngju2)]

| o- lun2 ……………………………. [block/youngju3 (/dev/vg_iscsi/youngju3)]

o- portals …………………………………………………….. [Portals: 1]

o- 0.0.0.0:3260 ……………………………………………………… [OK]

/iscsi/iqn.20…u-test-domain>
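The post only shows the finished layout. As a rough sketch, the LUN used by the two RHEV-M nodes could have been created with targetcli commands along these lines (the backstore name and LV path are taken from the listing above; everything else, including the exact LUN mapping, is assumed):

/backstores/block create name=youngju3 dev=/dev/vg_iscsi/youngju3
/iscsi/iqn.2016-04.test.dom:youngju-test-domain/tpg1/luns create /backstores/block/youngju3
/iscsi/iqn.2016-04.test.dom:youngju-test-domain/tpg1/acls create iqn.2016-05.dom.test:rhevm1
/iscsi/iqn.2016-04.test.dom:youngju-test-domain/tpg1/acls create iqn.2016-05.dom.test:rhevm2
saveconfig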

[root@node1 ~]# yum install -y iscsi-initiator-utils

[root@node1 ~]# cat /etc/iscsi/initiatorname.iscsi |grep -iv ^#

InitiatorName=iqn.2016-05.dom.test:rhevm1

[root@node1 ~]# iscsiadm -m discovery -t st -p 10.10.10.10

10.10.10.10:3260,1 iqn.2016-04.test.dom:youngju-test-domain

[root@node1 nodes]# iscsiadm -m node -l

[root@node2 ~]# yum install -y iscsi-initiator-utils

[root@node2 ~]# cat /etc/iscsi/initiatorname.iscsi |grep -iv ^#

InitiatorName=iqn.2016-05.dom.test:rhevm2

[root@node2 ~]# iscsiadm -m discovery -t st -p 10.10.10.10

10.10.10.10:3260,1 iqn.2016-04.test.dom:youngju-test-domain

[root@node2 ~]# iscsiadm -m node -l

### Install the iSCSI initiator and log in to the target ###
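One detail worth adding (not in the original post): for the target login to come back after a reboot, the iSCSI init scripts should also be enabled on both nodes:

[root@node1 ~]# chkconfig iscsid on && chkconfig iscsi on
[root@node2 ~]# chkconfig iscsid on && chkconfig iscsi on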

[root@node1 ~]# lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

vda 253:0 0 20G 0 disk

├─vda1 253:1 0 500M 0 part /boot

├─vda2 253:2 0 1G 0 part [SWAP]

└─vda3 253:3 0 18.5G 0 part /

sr0 11:0 1 1024M 0 rom

sda 8:0 0 50G 0 disk

└─sda1 8:1 0 50G 0 part

[root@node1 ~]# lvs
  LV                             VG    Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  etc-ovirt-engine               RHEVM -wi------- 1.00g
  etc-pki-ovirt-engine           RHEVM -wi------- 1.00g
  usr-share-ovirt-engine         RHEVM -wi------- 1.00g
  usr-share-ovirt-engine-wildfly RHEVM -wi------- 2.00g
  var-lib-ovirt-engine           RHEVM -wi------- 5.00g
  var-lib-pgsql                  RHEVM -wi------- 5.00g

### Create the LVM volumes ###
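The post only shows the result. Assuming the 50G iSCSI disk is /dev/sda with the single partition /dev/sda1 seen in lsblk, the volumes above could have been created roughly like this (sizes from the lvs output; the mkfs step is implied by the ext4 fs resources in cluster.conf):

[root@node1 ~]# pvcreate /dev/sda1
[root@node1 ~]# vgcreate RHEVM /dev/sda1
[root@node1 ~]# lvcreate -L 1G -n etc-ovirt-engine RHEVM
[root@node1 ~]# lvcreate -L 1G -n etc-pki-ovirt-engine RHEVM
[root@node1 ~]# lvcreate -L 1G -n usr-share-ovirt-engine RHEVM
[root@node1 ~]# lvcreate -L 2G -n usr-share-ovirt-engine-wildfly RHEVM
[root@node1 ~]# lvcreate -L 5G -n var-lib-ovirt-engine RHEVM
[root@node1 ~]# lvcreate -L 5G -n var-lib-pgsql RHEVM
[root@node1 ~]# for lv in /dev/RHEVM/*; do mkfs.ext4 $lv; done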

[root@node1 ~]# cat clusterdir

/etc/ovirt-engine

/etc/pki/ovirt-engine

/usr/share/ovirt-engine

/usr/share/ovirt-engine-wildfly

/var/lib/ovirt-engine

/var/lib/pgsql

[root@node1 ~]# for i in `cat clusterdir` ; do mkdir -p $i; done

### Create the mount points for the LVM volumes ###

mount /dev/RHEVM/etc-ovirt-engine /etc/ovirt-engine

mount /dev/RHEVM/etc-pki-ovirt-engine /etc/pki/ovirt-engine

mount /dev/RHEVM/usr-share-ovirt-engine /usr/share/ovirt-engine

mount /dev/RHEVM/usr-share-ovirt-engine-wildfly /usr/share/ovirt-engine-wildfly

mount /dev/RHEVM/var-lib-ovirt-engine /var/lib/ovirt-engine

mount /dev/RHEVM/var-lib-pgsql /var/lib/pgsql

### lvm volume mount ###

Continued in Part 2...