
Integrity VM - "MSE" question

Posted: Thu Apr 12, 2012 10:14 am
YAN
Question regarding “Multi-Server Environments” as a construct with HPVM v6.1 . . .


Although the documentation really touts it as a requirement for Serviceguard-packaged guests, the concept of an MSE seems to be decoupled from Serviceguard. Unfortunately, the only detailed description of configuring an MSE is in hpvmdevmgmt(1M).

I’m looking for more information on whether/how I can operationalize an MSE w/o Serviceguard . . .

Per the output below, my two lab systems have an MSE established between them. I believe the “RHost” column is produced by combining two or more VSPs (in this case, HPVM v4.2.5 hosts) into a Multi-Server Environment . . not necessarily by Serviceguard being in the mix (which it is). I have a customer who would like to be able to determine where guests are as they are migrated from one VSP to another, and I hope that setting up an MSE w/o Serviceguard will provide this information. Below is a quick look at the “hpvmstatus” command from my lab and the entries required in the HPVM device database for MSE enablement (with a quick RHost-to-hostname mapping sketch after the output):

Code: Select all

root@sdd-unx1> hpvmstatus
[Virtual Machines]
Virtual Machine Name  VM #  OS Type State     #VCPUs #Devs #Nets Memory  RHost
===================== ===== ======= ========= ====== ===== ===== ======= =====
goldenvm                  1 HPUX    On (OS)        1     1     1    2 GB    11
orarac0                   5 HPUX    Off (NR)       1     4     3    3 GB     -
orarac1                   6 HPUX    Off            1     4     3    3 GB     -
tony0                    11 HPUX    On (OS)        1     1     1    2 GB    11
gary0                     8 HPUX    On (RMT)       1     1     1    2 GB    10
keith0                   10 HPUX    On (OS)        1     1     1    2 GB    11

Code: Select all

root@sdd-unx1> hpvmdevmgmt -l env
HPVM_MSE_GROUP_ENTRY:CONFIG=env,EXIST=NO,DEVTYPE=UNKNOWN,SHARE=NO,UUID=4da2f970-8634-11df-9101-00306ef3ab94,GROUPNAME=HPVM-SG-hpvm_mse::WWID_NULL

Code: Select all

root@sdd-unx1> hpvmdevmgmt -l server
sdd-unx1:CONFIG=server,SERVERADDR=10.0.52.11,SERVERID=11,UUID=8b6f9569-e915-11d6-9329-71c146dcb688,PHYSUUID=8b6f9569-e915-11d6-9329-71c146dcb688::WWID_NULL
sdd-unx0:CONFIG=server,SERVERADDR=10.0.52.10,SERVERID=10,UUID=9ac90640-d8ba-11d8-aef3-039e3a3c57ab,PHYSUUID=9ac90640-d8ba-11d8-aef3-039e3a3c57ab::WWID_NULL
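For reference, here’s a quick sketch (untested beyond my lab) that maps the RHost numbers back to VSP hostnames by matching the SERVERID field; it assumes the output layouts shown above:

Code: Select all

# Hypothetical helper: correlate hpvmstatus "RHost" values with the
# SERVERID entries from the HPVM device database.
for id in $(hpvmstatus | awk 'NR > 3 && $NF ~ /^[0-9]+$/ {print $NF}' | sort -u)
do
    host=$(hpvmdevmgmt -l server | awk -F: -v id="$id" '$0 ~ "SERVERID=" id "," {print $1}')
    echo "RHost $id -> $host"
done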


Unfortunately, the customer’s “hpvmstatus” output doesn’t show this extra column, although they supposedly have the MSE set up correctly. Will/should this work as I hope? . . In other words, before I start troubleshooting, I want to confirm my surmise that this reporting appears once VSPs are coupled into an MSE. Perhaps the guests need to be toggled as “distributed” . . i.e. “hpvmmodify -P {guest} -j 1” on each VSP?
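Something like this on each VSP, maybe (untested guesswork on my part; guest names taken from my lab output above):

Code: Select all

# Guesswork: per hpvmmodify(1M), -j 1 sets the "distributed"
# attribute on a guest; repeat on each VSP in the MSE.
for g in goldenvm orarac0 orarac1 tony0 gary0 keith0
do
    hpvmmodify -P "$g" -j 1
done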

Thanks a ton for any direction,

Re: Integrity VM - "MSE" question

Posted: Thu Apr 12, 2012 10:52 am
HEUNG
I have not had a chance to play with MSE without Serviceguard.

It seems that the option to enable it is tied to a Serviceguard instance
or to a machine managed by gWLM:

-i package-name
Specifies whether the virtual machine is managed by Serviceguard or gWLM (or both).
For the argument, specify the Serviceguard package name or gWLM, both, or NONE.

# hpvmmodify -P <vminst> -i <package-name> -j 1
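For a guest that is under neither Serviceguard nor gWLM, my guess (untested) is that NONE is the value to pass along with the distributed flag:

Code: Select all

# Speculation: -i NONE per the option description above, plus -j 1
# to set the distributed attribute.
hpvmmodify -P <vminst> -i NONE -j 1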

I would expect the following command to tell us if MSE is successfully configured:

# hpvmstatus -m
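To compare both sides at once, maybe something like this (assuming ssh, or remsh, is set up between the VSPs; hostnames taken from your output):

Code: Select all

# Run the MSE status query on both VSPs.
for h in sdd-unx0 sdd-unx1
do
    echo "== $h =="
    ssh "$h" hpvmstatus -m
done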

I am very interested to hear from others who have tried MSE without Serviceguard as well.