Archive for November, 2014


VMware Infrastructure 3……

November 26, 2014


Virtual machines are just like physical machines. You can log on to them, and they have BIOS, hard disks, memory, CPUs, operating systems and applications. In fact, if you connect remotely to a machine, you'll never know that it's virtual unless someone tells you. Virtual machines work and behave just like physical machines; the machines themselves don't even know they are virtual.

Got a book from the National Library (PNM) about VMware Infrastructure 3 For Dummies. To fully understand your storage options and make informed decisions, you need to understand SCSI. Like the OSI networking model, SCSI makes use of several layers offering different functionality. The official name for this model is the SCSI Architecture Model (SAM). You'll need to decide whether or not you want to boot ESX from a SAN before you install the server. You can boot from SAN with both Fibre Channel and iSCSI. The only iSCSI catch is that you need to use a hardware initiator instead of a software initiator.

Switched Fibre Channel SANs were the first SAN technology fully supported by VMware. iSCSI is a newer technology. However, like Fibre Channel, iSCSI puts SCSI commands and data blocks into network frames. The difference is that iSCSI uses TCP/IP instead of Fibre Channel Protocol (FCP). To frame data, iSCSI needs a protocol initiator. This can be software based (Microsoft's iSCSI Software Initiator) or hardware based (a TOE card). iSCSI nodes are iSCSI devices; they can be initiators, targets, or both.

A NAS device is typically a plug-and-play storage device that supports Network File System (NFS), which is an open standard, or Server Message Block (SMB), which is Windows networking. VMware uses the NFS protocol because it's more of an open standard than SMB. Most of VMware's datacenter features, such as VMotion and DRS, work with NAS. However, VCB does not.
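On ESX 3.x you can attach such an NFS export as a datastore from the Service Console with the esxcfg-nas tool. A minimal sketch — the NAS host name, export path, and datastore label here are placeholders, not real systems:

```shell
# Add an NFS export as a NAS datastore (placeholder host/share/label)
esxcfg-nas -a -o nas01.example.com -s /vols/vmstore nfs-datastore1

# List configured NAS datastores to confirm the mount
esxcfg-nas -l
```

This is a host configuration fragment; it only runs on an ESX Service Console with reachable NFS storage.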

The VMkernel loads in high memory and controls all your hardware. It is an abstraction layer (hiding the implementation details of hardware sharing) that virtualizes hardware. The VMkernel assumes that all the hardware in your system is functioning properly. Any faulty hardware can cause it to crash, yielding the Purple Screen of Death (PSOD). Additionally, the VMkernel controls all scheduling for the ESX machine, including the virtual machines and the Service Console. You can install ESX on an Intel Xeon processor or later, or an AMD Opteron processor in 32-bit mode. You also need at least 2GB of RAM and 560MB of disk space. Of course, the more CPUs, RAM, and disk storage you have, the more virtual machines you can support. VMware High Availability (HA) is supported only experimentally in ESXi. If it does not work, you will need to manually start virtual machines on another ESX or ESXi host if the one they were running on fails. VMware Infrastructure Client (VIC) is your one-stop shop for all your VMware Infrastructure 3 needs. VIC can log in to and manage ESX hosts directly, or as a proxy through VirtualCenter.

Your virtual machines connect to virtual switches. Virtual switches, in turn, connect to NICs in your ESX host. And the NICs connect to your physical network. Virtual switches perform three different functions for an ESX host. Each function is considered a different connection type or port:

1. Virtual machines.

2. VMkernel.

3. Service Console.

Load balancing offers three different ways to pick which uplink adapter to use for outgoing traffic: a virtual port-based algorithm, a MAC address-based algorithm, or an IP address-based algorithm. You can also set an explicit failover order instead. Each has its tradeoffs.
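The IP address-based policy, for example, derives the uplink choice from the source and destination IPs, so a given conversation always leaves through the same NIC. Here is a simplified Python sketch of that idea — the XOR-then-modulo scheme below is an illustration of the hashing concept, not ESX's exact implementation:

```python
import ipaddress

def pick_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink index from a source/destination IP pair.

    Simplified IP-hash: XOR the two 32-bit addresses and take the
    result modulo the number of physical uplinks. A given src/dst
    pair therefore always maps to the same NIC, while different
    conversations spread across all uplinks.
    """
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % num_uplinks

# The same conversation is always pinned to the same uplink:
first = pick_uplink("192.168.1.10", "10.0.0.5", 2)
again = pick_uplink("192.168.1.10", "10.0.0.5", 2)
assert first == again
```

Note the tradeoff the book alludes to: per-conversation hashing balances many flows well, but a single heavy flow can never use more than one uplink's bandwidth.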

p/s:- This is an excerpt taken from the book VMware Infrastructure 3 For Dummies by William J. Lowe, Wiley Publishing Inc.

– Just to make a note…PIKOM PC Fair will be held at KL Convention Center on 19th–21st December 2014…I will be attending the PC Fair next month….