If your Windows 10 PC has both VMware Workstation and VMware Remote Console installed, then when you open the console of a virtual machine running on vCenter from the vSphere Web Client, VMware Workstation may connect to the console instead of VMware Remote Console. Congratulations: your VMware Remote Console has been hijacked! I call this a "hijack" because the problem is fairly involved to fix.
- Wed, Nov 14, 2018 11:05
VMware Workstation automatically opens the console of vCenter virtual machines
- Wed, Aug 15, 2018 15:35
The flow of an IO request from a VMware virtual machine down to the underlying storage

vSphere Pluggable Storage Architecture (PSA)
- Thu, Aug 17, 2017 10:00
Migrating the vmkernel service console from a standard switch to a distributed switch
- Tue, Apr 01, 2014 16:14
Difference Between vSphere 5.1 and vSphere 5.5
- Embedded database: the maximum number of hosts and virtual machines supported increases in 5.5
- Guest OS support: 5.5 supports newer guest operating systems
- Host power management: 5.5 can also run the processor at a lower frequency and voltage, providing additional power savings and increased performance
- AHCI (Advanced Host Controller Interface) support: new in 5.5, raising the number of disk devices per VM
- Distributed switch: enhanced in 5.5
- vSphere Replication retention: 5.1 keeps only the most recent copy of a virtual machine, while 5.5 can keep up to 24 historical snapshots
- Sun, Feb 16, 2014 22:16
Disabling the VAAI functionality in ESXi/ESX (1033665)
Purpose
This article provides steps to disable the vStorage APIs for Array Integration (VAAI) functionality in ESXi/ESX. You may want to disable VAAI if the storage array devices in the environment do not support the hardware acceleration functionality or are not responding correctly to VAAI primitives.
For information on VAAI support in a given storage array or required firmware levels, contact the storage array vendor.
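On ESXi 5.x, the VAAI primitives are controlled by three advanced host options, which the KB describes setting to 0. A sketch of the esxcli form (run in the ESXi shell of a host you can safely reconfigure; the same settings can also be changed in the vSphere Client):

```shell
# Disable the three VAAI primitives by setting their advanced options to 0
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove   # Full Copy (XCOPY)
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit   # Block Zeroing (WRITE SAME)
esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking    # ATS locking

# Verify the current value of one of the options
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove
```

Setting the values back to 1 re-enables hardware acceleration; no reboot is required.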
- Sat, Nov 23, 2013 21:51
UNMAP support in vSphere 5.x: use with caution
- Although simplified, UNMAP is still a manual process you have to trigger on the command line. It is still not an inline feature!
- It puts a very high load on your storage subsystem, so really think twice before reclaiming space (this feature should probably only be used during maintenance windows).
- Space reclamation, of course, only makes sense if you delete whole VMs/virtual disks on a datastore. It won't work for files deleted inside the guest OSes.
- As an alternative, always consider moving VMs to other datastores with Storage vMotion and deleting the emptied datastores entirely. This achieves the same result for space reclamation, but probably with less impact on storage performance.
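The manual trigger mentioned above can be sketched as follows (hedged: the vmkfstools -y syntax applies to ESXi 5.0 U1/5.1, the esxcli form to ESXi 5.5; "MyDatastore" is a hypothetical datastore name):

```shell
# ESXi 5.0 U1 / 5.1: run from inside the datastore's root directory;
# the argument is the percentage of free space to reclaim per pass
cd /vmfs/volumes/MyDatastore   # hypothetical datastore name
vmkfstools -y 60

# ESXi 5.5: esxcli form, no need to cd into the datastore
esxcli storage vmfs unmap -l MyDatastore
```

Either form issues SCSI UNMAP commands for the reclaimed blocks, which is exactly the heavy storage load the warnings above refer to.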
- Sat, Aug 31, 2013 17:00
VMware VSAN for vSphere

At VMworld 2013 this year, VMware announced a new feature: VSAN.
It looks like a blow to storage vendors, but the actual impact is limited,
because VSAN is still aimed mainly at small and medium deployments. That said, it lets users save the cost of buying physical storage arrays while still getting features such as vMotion, DRS, and HA through VSAN.
- Thu, May 09, 2013 17:14
Recovering a modified VMFS partition

The test procedure was as follows:
Using SSV, assign a virtual disk to an ESX 5.0 host.
Format it as VMFS5 and write some data to it.
Next, detach the virtual disk from ESX 5.0 and map it to a Windows host, then delete the volume there, which destroys the VMFS ID.
Map it back to ESX 5.0: the VMFS datastore now looks brand new, and the data is of course gone.
Note that you must delete the second partition first!
partedUtil delete "/vmfs/devices/disks/mpx.vmhba0:C0:T0:L0" 2
Then repair the first partition.
Fixed!
=================================================================================================================
Detailed steps to enable it, plus the commands and reference links:
Enabling Troubleshooting mode on ESXi 5:
F2 --> select Troubleshooting Options -->
Enable ESXi Shell
Enable SSH
Modify ESXi Shell timeout --> 30 seconds
Press Alt+F1 to switch to the console, then enter the username and password.
ls /vmfs/devices/disks/
mpx.vmhba0:C0:T0:L0 <-- disk device
mpx.vmhba0:C0:T0:L0:1 <-- partition 1
mpx.vmhba0:C0:T0:L0:2 <-- partition 2
mpx.vmhba0:C0:T0:L0:3 <-- partition 3
mpx.vmhba0:C0:T0:L0:5 <-- partition 5
naa.60060160205010004265efd36125df11 <-- disk device
naa.60060160205010004265efd36125df11:1 <-- partition 1
Show the partition table:
partedUtil getptbl "/vmfs/devices/disks/DeviceName"
gpt
13054 255 63 209715200
| | | |
| | | \----- quantity of sectors
| | \-------- quantity of sectors per track
| \------------ quantity of heads
\------------------ quantity of cylinders (13054 cylinders here corresponds to 100 GB)
1 2048 209712509 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
| | | | |
| | | | \--- attribute
| | | \------- type
| | \----------------- ending sector
| \------------------------- starting sector
\--------------------------- partition number
partedUtil showGuids
A: the volume was created as VMFS 3 and was later updated to VMFS 5
B: the volume was freshly formatted and partitioned as VMFS 5
A:
partedUtil setptbl "/vmfs/devices/disks/eui.3238373535393136" gpt "1 128 209712509 AA31E02A400F11DB9590000C2911D1B8 0"
B:
partedUtil setptbl "/vmfs/devices/disks/eui.3238373535393136" gpt "1 2048 209712509 AA31E02A400F11DB9590000C2911D1B8 0"
This link is somewhat dated:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1036609
Using partedUtil to recover an esxi5 partition table (a useful link found via an IBM article)
Problem(Abstract)
Partition tables have at times been overwritten causing VMFS volumes
to become inaccessible. In esxi5 the partedUtil needs to be used to
recover/recreate the partitions.
Resolving the problem
Besides the difference in using the partedUtil instead of fdisk as in the past, the
biggest challenge is knowing what the end sector must be. fdisk was more helpful
in finding the ending sector. This procedure assumes the full LUN is being used for the partition.
The last sector of a VMFS volume must end on a cylinder boundary. The following
formula can be used to find that sector:
end sector = (C * H * S) -1
C = cylinders
H = Heads
S = Sectors/track
To get the cylinder/heads/sectors information execute the following query:
# partedUtil get /vmfs/devices/disks/mpx.vmhba1:C0:T4:L0
93990 255 63 1509949440
Collect the info and using the formula:
Cylinders = 93990
Heads = 255
Sectors/track = 63
(93990 * 255 * 63) -1 = 1509949349
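The formula and the worked numbers above are easy to double-check with plain shell arithmetic:

```shell
# Geometry reported by: partedUtil get /vmfs/devices/disks/mpx.vmhba1:C0:T4:L0
C=93990   # cylinders
H=255     # heads
S=63      # sectors per track

# Last sector on a cylinder boundary: (C * H * S) - 1
END_SECTOR=$(( C * H * S - 1 ))
echo "$END_SECTOR"   # prints 1509949349
```

The same check on the earlier 100 GB example (13054 cylinders) yields 209712509, matching the ending sector shown in its partition table.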
The VMFS GUID is AA31E02A400F11DB9590000C2911D1B8
The command to recreate would be the following....
partedUtil setptbl "/vmfs/devices/disks/mpx.vmhba1:C0:T4:L0" gpt "1 2048 1509949349 AA31E02A400F11DB9590000C2911D1B8 0"
To print the table:
partedUtil getptbl "/vmfs/devices/disks/mpx.vmhba1:C0:T4:L0"
The above information was derived from the following web link:
http://communities.vmware.com/message/1990824
======================================================
In addition, I found the following information at this link:
http://www.virtuallyghetto.com/2011/07/how-to-format-and-create-vmfs-volume.html
This information explains different offsets you may encounter. Along with this information
I also observed in the lab that VMFS3 filesystems created in esxi5 have an offset of "2048".
The reference disk used in the web link:
# partedUtil get /dev/disks/eui.3238373535393136
254803 255 63 4093411328
The two kinds of examples demonstrated in the link:
A: the volume was created as VMFS 3 and was later updated to VMFS 5
B: the volume was freshly formatted and partitioned as VMFS 5
The two command examples....
A:
partedUtil setptbl "/vmfs/devices/disks/eui.3238373535393136" gpt "1 128 4093410194 AA31E02A400F11DB9590000C2911D1B8 0"
B:
partedUtil setptbl "/vmfs/devices/disks/eui.3238373535393136" gpt "1 2048 4093410194 AA31E02A400F11DB9590000C2911D1B8 0"
======================================================
Some additional information links:
http://kb.vmware.com/kb/1036609 for partedUtil information....and GUID information
http://kb.vmware.com/kb/1009829 for some pre-5.0 information.....
- Thu, May 09, 2013 17:14
VMFS Volume is Locked (1009570)
Details
The event indicates that a VMFS volume on the ESX/ESXi host is currently locked due to an I/O error.
For example, if naa.60060160b3c018009bd1e02f725fdd11:1 represents one of the partitions used in a VMFS volume, then the following is displayed if the VMFS volume is inaccessible:
If this occurs, the VMFS volume (and the virtual machines residing on the affected volume) are unavailable to the ESX/ESXi host
Solution
- For information on how to login to ESXi 4.1 and 5.x hosts refer to Using Tech Support Mode in ESXi 4.1 and ESXi 5.0 (1017910).
- For information on how to login to ESXi 4.0 refer to Tech Support Mode for Emergency Support (1003677)
Log in to the terminal of the VMware ESX or ESXi host and run the following commands:
- Break the existing LVM lock on the datastore:
# vmkfstools -B <vmfs device>
Note: You can also use the parameter --breaklock instead of -B with the vmkfstools command.
From the error message above, the following command is used:
# vmkfstools -B /vmfs/devices/disks/naa.60060160b3c018009bd1e02f725fdd11:1
The following output will be displayed:
VMware ESX Question:
LVM lock on device /vmfs/devices/disks/naa.60060160b3c018009bd1e02f725fdd11:1 will be forcibly broken. Please consult vmkfstools or ESX documentation to understand the consequences of this.
Please ensure that multiple servers aren't accessing this device.
Continue to break lock?
0) Yes
1) No
Please choose a number [0-1]:
- Enter 0 to break the lock.
- Re-read and reload VMFS datastore metadata to memory:
# vmkfstools -V
- From the vSphere UI, refresh the Storage Datastores view under the Configuration tab.
- Thu, May 02, 2013 22:08
Storage IO Control (SIOC)
When you set limits on multiple virtual disks of a virtual machine, all of those limits are added up, and that sum becomes the threshold for the whole VM. In other words:
Disk01 – 50 IOps limit
Disk02 – 200 IOps limit
Combined total: 250 IOps limit
If Disk01 only uses 5 IOps then Disk02 can use 245 IOps!
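The addition above can be sketched in shell arithmetic (values taken from the example; only the sum is enforced, and the per-disk split is decided at runtime):

```shell
# Per-disk IOPS limits from the example
DISK01_LIMIT=50
DISK02_LIMIT=200

# The VM-wide threshold is the sum of all per-disk limits
VM_LIMIT=$(( DISK01_LIMIT + DISK02_LIMIT ))
echo "$VM_LIMIT"   # prints 250

# So if Disk01 only consumes 5 IOPS, Disk02 may consume up to:
DISK02_AVAILABLE=$(( VM_LIMIT - 5 ))
echo "$DISK02_AVAILABLE"   # prints 245
```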

