
VMware Integration

BRKDCT-2868


Virtualization

[Diagram: three virtualization approaches compared side by side: VMware, Microsoft, and Xen (a.k.a. paravirtualization). In each stack, Apps run in VMs on Guest OSes over a hypervisor (with or without a Host OS) on the CPU; the paravirtualization stack uses modified Guest OSes on a modified, stripped-down OS with hypervisor.]

Migration
VMotion, a.k.a. VM Migration, allows a VM to be relocated onto different hardware without having to interrupt service. Downtime is on the order of a few milliseconds to a few minutes, not hours or days. It can be used to perform maintenance on a server and to shift workloads more efficiently. Two types of migration:
VMotion Migration
Regular Migration
[Diagram: a VM (App/OS) moves between two ESX hosts, each with a Console OS and the VMware Virtualization Layer (hypervisor) running on its own CPU.]

VMware Architecture in a Nutshell
[Diagram: an ESX Server host runs the Virtual Machines (App/OS) and the Console OS on the VM virtualization layer over the physical hardware (CPU); three networks attach to the host: the Mgmt Network (Console OS), the Production Network (VMs), and the VM Kernel Network.]


VMware HA Clustering

[Diagram: a VMware HA cluster of three ESX hosts (ESX Host 1, 2, 3), each with its own hypervisor and CPU; guest VMs App1–App5 run across the hosts, and App1/App2 are restarted on a surviving host after a failure.]


Application-Level HA Clustering (Provided by MSCS, Veritas, etc.)

[Diagram: the same three ESX hosts, but application-level clustering (MSCS, Veritas, etc.) runs inside the guests: App1 and App2 each have a clustered peer VM on another ESX host.]


Agenda
VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs. LAN Switch
Cisco/VMware DC Designs
SAN Designs


VMware Networking Components
Per-ESX-server configuration: VMs connect through their vNICs to virtual ports on a vSwitch; the VMNICs (physical NICs) are the vSwitch uplinks.
[Diagram: VMs VM_LUN_0005 and VM_LUN_0007, each with a vNIC, attach to virtual ports on vSwitch0, which uplinks through vmnic0 and vmnic1.]

vNIC MAC Address
VM’s MAC address is automatically generated
Mechanisms exist to avoid MAC collisions
The VM’s MAC address doesn’t change with migration
VM MAC addresses can be made static by modifying the configuration file:
ethernetN.address = 00:50:56:XX:YY:ZZ

/vmfs/volumes/46b9d79a2de6e23e-929d001b78bb5a2c/VM_LUN_0005/VM_LUN_0005.vmx:
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:b0:5f:24"

ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:00:06"


vSwitch Forwarding Characteristics
Forwarding is based on MAC address (no learning): if traffic doesn’t match a VM MAC, it is sent out to a vmnic
VM-to-VM traffic stays local
vSwitches TAG traffic with an 802.1q VLAN ID
vSwitches are 802.1q capable
vSwitches can create EtherChannels


vSwitch Creation
[Screenshot callouts: YOU DON’T HAVE TO SELECT A NIC; the name entered is just a label; the vNICs attach to the vswitch; VMs select the Port-Group by specifying the NETWORK LABEL.]
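Not from the slide, but as a minimal sketch: the same vSwitch and Port-Group creation can be scripted from the ESX 3.x service console with esxcfg-vswitch; the switch name, Network Label, and VLAN ID below are made-up examples.

esxcfg-vswitch -a vSwitch1                          # create the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1                   # attach a physical uplink (vmnic)
esxcfg-vswitch -A "VM_Production" vSwitch1          # add a Port-Group; this is the Network Label VMs select
esxcfg-vswitch -p "VM_Production" -v 100 vSwitch1   # optionally tag the Port-Group with a VLAN ID
esxcfg-vswitch -l                                   # list vSwitches, Port-Groups, and uplinks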


[Diagram: a VM’s vNIC connects to a Port-Group, which is defined on a vSwitch.]


VLANs - External Switch Tagging - EST
VLAN tagging and stripping is done by the physical switch
No ESX configuration is required, as the server is not tagging
The number of VLANs supported is limited to the number of physical NICs in the server
[Diagram: VM1, VM2, the Service Console, and the VMkernel attach through their virtual NICs to vSwitch A and vSwitch B in the ESX Server; the untagged physical NICs connect to physical switch ports in VLAN 100 and VLAN 200.]
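For illustration only (interface and VLAN numbers assumed): with EST the Catalyst ports facing the ESX NICs are plain access ports, since the physical switch does all the tagging and stripping.

interface GigabitEthernet1/0/1
 description ESX vmnic - EST, switch does the tagging
 switchport
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast
!
interface GigabitEthernet1/0/2
 description ESX vmnic - EST, switch does the tagging
 switchport
 switchport mode access
 switchport access vlan 200
 spanning-tree portfast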

VLANs - Virtual Switch Tagging - VST
The vSwitch tags outgoing frames with the VLAN ID
The vSwitch strips any dot1Q tags before delivering to the VM
Physical NICs and the switch port operate as a trunk
The number of VLANs per VM is limited to the number of vNICs
[Diagram: the ESX Server’s vSwitch A carries VM1, VM2, the Service Console, and the VMkernel; the physical NICs run an 802.1q (dot1Q) trunk to the physical switches for VLAN 100 and VLAN 200.]


No VTP or DTP; all static configuration. Prune VLANs so ESX doesn’t have to process unnecessary broadcasts.
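A minimal sketch of the two sides of VST (names and VLAN IDs assumed): the Port-Group carries the VLAN ID on the ESX host, and the physical port is a static 802.1q trunk, as in the switchport template shown later in this session.

ESX service console:
esxcfg-vswitch -p "VM_VLAN100" -v 100 vSwitch0   # Port-Group tagged with VLAN 100

Catalyst:
interface GigabitEthernet1/0/1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100,200
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk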

VLANs - Virtual Guest Tagging - VGT
Portgroup VLAN ID set to 4095
Tagging and stripping of VLAN IDs happens in the guest VM; this requires an 802.1Q driver in the guest
The guest can send/receive any tagged VLAN frame
The number of VLANs per guest is not limited to the number of vNICs
[Diagram: dot1Q tagging is applied by the VM itself; the vSwitch and the physical 802.1q trunk pass the tagged frames through to VLAN 100 and VLAN 200.]

VMware does not ship the 802.1Q driver with the guest OS: use the E1000 driver on Windows or the dot1q module on Linux
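As a one-line illustration (Port-Group name assumed): setting the Port-Group VLAN ID to 4095 is what puts it in VGT mode, passing all tags up to the guest’s own 802.1Q driver.

esxcfg-vswitch -p "VGT_Guests" -v 4095 vSwitch0   # 4095 = pass VLAN tags through to the VM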

Agenda
VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs. LAN Switch
Cisco/VMware DC Designs
SAN Designs


Meaning of NIC Teaming in VMware (1)
In VMware, NIC Teaming means teaming the vSwitch uplinks, i.e. the ESX server NIC cards (vmnic0, vmnic1, vmnic2, vmnic3); bundling the VMs’ vNICs is NOT NIC Teaming.
[Diagram: vmnic0/vmnic1 and vmnic2/vmnic3 form NIC teams as vSwitch uplinks on the ESX Server Host; the vNICs attached to the VMs do not form a team.]


Meaning of NIC Teaming in VMware (2)
Teaming is configured at the vmnic level
[Screenshot callout: this is NOT Teaming]

Design Example
2 NICs, VLAN 1 and 2, Active/Standby

[Diagram: vSwitch0 in the ESX Server has two uplinks, vmnic0 and vmnic1, each an 802.1q trunk for VLANs 1 and 2. Port-Group 1 (VLAN 2) carries VM1 and VM2; Port-Group 2 (VLAN 1) carries the Service Console.]


Active/Standby per-Port-Group
[Diagram: vSwitch0 uplinks VMNIC0 and VMNIC1 connect to CBS-left and CBS-right. Port-Group1 (VMs .5 and .7) and Port-Group2 (VMs .4 and .6) can each use a different active/standby uplink order, so both uplinks carry traffic while every Port-Group keeps a standby path.]

Port-Group Overrides vSwitch Global Configuration


Active/Active

[Diagram: both ESX server NIC cards (vmnic0, vmnic1) are active uplinks of the vSwitch Port-Group; the traffic of VM1–VM5 is distributed across them.]


Active/Active
IP-Based Load Balancing
Works with Channel-Group mode ON
LACP is not supported (see below):
9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/14, changed state to up
9w0d: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/13, changed state to up
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/14 suspended: LACP currently not enabled on the remote port.
9w0d: %EC-5-L3DONTBNDL2: Gi1/0/13 suspended: LACP currently not enabled on the remote port.
[Diagram: VM1–VM4 on a vSwitch Port-Group in the ESX server; vmnic0 and vmnic1 are port-channeled to the physical switch.]
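A hedged example of the matching Catalyst side (interface and channel-group numbers assumed): the bundle must be static (mode on), and hashing on source/destination IP lines up with the vSwitch’s IP-based policy.

port-channel load-balance src-dst-ip
!
interface GigabitEthernet1/0/13
 switchport trunk encapsulation dot1q
 switchport mode trunk
 ! static EtherChannel; LACP (mode active/passive) is not supported by the vSwitch
 channel-group 1 mode on
!
interface GigabitEthernet1/0/14
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode on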


Agenda
VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs. LAN Switch
Cisco/VMware DC Designs
SAN Designs


All Links Active, No Spanning-Tree: Is There a Loop?
[Diagram: ESX Server with vSwitch1 and four uplinks NIC1–NIC4 split between CBS-left and CBS-right; Port-Group1 carries VMs .5 and .7, Port-Group2 carries VMs .4 and .6. All links are active and the vSwitch runs no Spanning-Tree.]

Broadcast/Multicast/Unknown Unicast Forwarding in Active/Active (1)

[Diagram: ESX Server with vSwitch0 Port-Group 1 (VLAN 2) holding VM1 and VM2; both uplinks vmnic0 and vmnic1 are 802.1q trunks for VLANs 1 and 2.]

Broadcast/Multicast/Unknown Unicast Forwarding in Active/Active (2)

[Diagram: ESX Host vSwitch with VM1–VM3 and two 802.1q trunk uplinks, NIC1 and NIC2.]

Can the vSwitch Pass Traffic Through?

E.g. HSRP?

[Diagram: a vSwitch with uplinks NIC1 and NIC2 and VMs VM1/VM2; can traffic such as HSRP hellos entering on one uplink pass through to the other?]

Is This Design Possible?
[Diagram: ESX server1 with uplinks VMNIC1 and VMNIC2 connected over 802.1q trunks (links 1 and 2) to Catalyst1 and Catalyst2; VMs .5 and .7 sit on the vSwitch.]

vSwitch Security
Promiscuous Mode = Reject prevents a port from capturing traffic whose address is not the VM’s address
MAC Address Change = Reject prevents the VM from modifying the vNIC address
Forged Transmits = Reject prevents the VM from sending out traffic with a different MAC (e.g. NLB)


vSwitch vs LAN Switch
Similarly to a LAN Switch:
Forwarding based on MAC address
VM-to-VM traffic stays local
vSwitches TAG traffic with an 802.1q VLAN ID
vSwitches are 802.1q capable
vSwitches can create EtherChannels
Preemption configuration (similar to Flexlinks, but no delay preemption)

Differently from a LAN Switch
No learning
No Spanning-Tree protocol
No dynamic trunk negotiation (DTP)
No 802.3ad LACP
Two EtherChannels backing up each other is not possible
No SPAN/mirroring capabilities: traffic capturing is not the equivalent of SPAN
Port Security limited


Agenda
VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs. LAN Switch
Cisco/VMware DC Designs
SAN Designs


vSwitch and NIC Teaming Best Practices
Q: Should I use multiple vSwitches or multiple Port-Groups to isolate traffic?
A: We didn’t see any advantage in using multiple vSwitches; multiple Port-Groups with different VLANs give you enough flexibility to isolate servers
Q: Should I use EST or VST?
A: Always use VST, i.e. assign the VLAN from the vSwitch
Q: Can I use the native VLAN for VMs?
A: Yes you can, but to make it simple, don’t. If you do, do not TAG VMs with the native VLAN
Q: Which NIC Teaming configuration should I use?
A: Active/Active, Virtual Port-ID based
Q: Do I have to attach all NICs in the team to the same switch or to different switches?
A: With Active/Active Virtual Port-ID based, it doesn’t matter
Q: Should I use Beaconing?
A: No
Q: Should I use Rolling Failover (i.e. no preemption)?
A: No, the default is good; just enable trunkfast on the Cisco switch


Cisco Switchport Configuration
Make it a Trunk
Enable Trunkfast
Can the Native VLAN be used for VMs? Yes, but IF you do, you have two options:
Configure VLAN ID = 0 for the VMs that are going to use the native VLAN (preferred)
Configure “vlan dot1q tag native” on the 6k (not recommended)

interface GigabitEthernetX/X
 description <<** VM Port **>>
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk native vlan <id>
 switchport trunk allowed vlan xx,yy-zz
 switchport mode trunk
 switchport nonegotiate
 no cdp enable
 spanning-tree portfast trunk
!

Do not enable Port Security (see next slide)
Make sure that “teamed” NICs are in the same Layer 2 domain
Provide a redundant Layer 2 path

Typically: SC, VMKernel, VM Production

Configuration with 2 NICs
SC, VMKernel, Production Share NICs

[Diagram: ESX Server with vSwitch 0 and two uplinks (VMNIC1, VMNIC2) in an Active/Active NIC team; both uplinks are 802.1q trunks carrying the Production VLANs, Service Console, and VM Kernel. Port-Group 1 holds the VMs (global Active/Active, VST); the Service Console and VM Kernel Port-Groups (Port-Groups 2 and 3) use opposite Active/Standby uplink orders (vmnic1/vmnic2 and vmnic2/vmnic1). HBA1 and HBA2 provide storage access.]


Configuration with 2 NICs
Dedicated NIC to SC, VMKernel, Separate NIC for Production

[Diagram: same ESX Server and vSwitch 0, but the VM Port-Group is set to Active/Standby vmnic1/vmnic2 (VST) while the Service Console and VM Kernel Port-Groups are set to Active/Standby vmnic2/vmnic1, dedicating one NIC to production and the other to SC/VMKernel while each keeps a redundant path.]


Network Attachment (1)
root (Catalyst1), secondary root (Catalyst2), Rapid PVST+
Trunkfast and BPDU guard on the server-facing ports
802.1q trunks carrying Production, SC, and VMKernel
No blocked port, no loop
All NICs are used; traffic is distributed on all links
[Diagram: ESX server1 and ESX server 2 each connect VMNIC1 and VMNIC2 from their vSwitch to Catalyst1 and Catalyst2 over 802.1q trunks (links 1–4).]
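A sketch of the spanning-tree side of this attachment (VLAN and interface numbers assumed), matching the root/secondary-root, trunkfast, and BPDU-guard recommendations above.

Catalyst1:
spanning-tree mode rapid-pvst
spanning-tree vlan 10,20,30 root primary

Catalyst2:
spanning-tree mode rapid-pvst
spanning-tree vlan 10,20,30 root secondary

ESX-facing ports on both switches:
interface GigabitEthernet1/0/1
 switchport mode trunk
 spanning-tree portfast trunk
 spanning-tree bpduguard enable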


Network Attachment (2)
root (Catalyst1), secondary root (Catalyst2), Rapid PVST+
Trunkfast and BPDU guard; typical Spanning-Tree V-shape topology
802.1q trunks carrying Production, SC, and VMKernel
All NICs are used; traffic is distributed on all links
[Diagram: ESX server1 and ESX server 2 each connect VMNIC1 and VMNIC2 to the two Catalysts (links 1–4); the inter-switch 802.1q link closes the V-shape topology.]

Configuration with 4 NICs
Dedicated NICs for SC and VMKernel
Dedicated NIC for SC, dedicated NIC for VMKernel, and a redundant pair of NICs (Active/Active vmnic1/vmnic2) for the Production VLANs
[Diagram: ESX Server with one vswitch; VMNIC1/VMNIC2 carry Port-Group 1 (Production VMs), VMNIC3 the Service Console, VMNIC4 the VM Kernel; HBA1 and HBA2 provide storage access.]
How good is this design? It isolates management access and isolates the VMKernel, but neither has a redundant uplink:
If the Service Console uplink fails, VirtualCenter cannot control the ESX Host and management access is lost: this is the worst possible failure, and recovery is very complicated
If the host is part of an HA cluster, it is considered isolated and its VMs are powered down
If using iSCSI, iSCSI access is lost and the VMs become completely isolated
VMotion can’t run
If the host is part of a DRS cluster, it prevents automatic migration

Configuration with 4 NICs
Redundant SC, VMKernel, and Production connectivity
Production VLANs: Active/Active on vmnic1/vmnic3; SC and VMKernel VLANs: Active/Standby on vmnic2/vmnic4 and vmnic4/vmnic2
All links are used, and the “dedicated NICs” for SC and VMKernel are preserved
HA is augmented by teaming across different NIC chipsets, so Production and Management each have a path through each chipset
On a NIC or chipset failure, Production traffic continues on vmnic1 or moves to vmnic3, the Service Console swaps to vmnic4, the VMKernel swaps to vmnic2, and VirtualCenter can still control the host
[Diagram: ESX Server with one vswitch, uplinks VMNIC1–VMNIC4, Port-Group 1 for the VMs, Service Console and VM Kernel Port-Groups, and HBA1/HBA2 for storage.]

Network Attachment (1)
root (Catalyst1), secondary root (Catalyst2), Rapid PVST+
Trunkfast and BPDU guard
No blocked port, no loop
[Diagram: each ESX server dual-homes to Catalyst1 and Catalyst2 (links 1–8) with one 802.1q trunk for the Production VLANs and one 802.1q trunk for SC and VMKernel.]

Network Attachment (2)
root (Catalyst1), secondary root (Catalyst2), Rapid PVST+
Trunkfast and BPDU guard; typical Spanning-Tree V-shape topology
[Diagram: same 4-NIC ESX servers dual-homed to Catalyst1 and Catalyst2 (links 1–8), with an 802.1q Production trunk and an 802.1q SC/VMKernel trunk; the inter-switch link gives the V-shape topology.]

How About?
root (Catalyst1), secondary root (Catalyst2)
Trunkfast and BPDU guard; typical Spanning-Tree V-shape topology
[Diagram: a variant of the 4-NIC attachment in which the 802.1q Production trunk and the 802.1q SC/VMKernel trunk are homed differently across Catalyst1 and Catalyst2 (links 1–8).]


4 NICs with Etherchannel
“Clustered” switches

[Diagram: with the “clustered” switches acting as one logical switch, each ESX server bundles its uplinks into EtherChannels: one 802.1q channel for Production and one 802.1q channel for SC and VMKernel (links 1–8).]


VMotion Migration Requirements


VMKernel Network can be routed
[Diagram: the ESX Server host’s VM Kernel Network can sit on a different, routed subnet from the Mgmt and Production Networks used by the Virtual Machines.]
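For illustration (addresses assumed): on ESX 3.x the VMkernel stack has its own default gateway, separate from the Service Console’s, which is what allows the VMkernel network to be routed.

esxcfg-vmknic -l           # list VMkernel NICs and their IP addresses
esxcfg-route -l            # show the current VMkernel default gateway
esxcfg-route 10.0.200.1    # set the VMkernel default gateway (address assumed)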

VMotion L2 Design

[Diagram: VMotion L2 design: ESX Host 1 (Rack1) and ESX Host 2 (Rack10) each run vSwitch0 (Service Console), vSwitch1 (VMs VM4–VM6), and vSwitch2 (vmkernel) on separate vmnics; the vmkernel and production VLANs are extended at Layer 2 between the racks so VMs can be migrated between the hosts.]


HA clustering (1)
EMC/Legato AAM based; an HA agent runs in every host
Heartbeats: unicast UDP port ~8042 (4 UDP ports opened)
Heartbeats run on the Service Console ONLY
When a failure occurs, the ESX Host pings the gateway (on the SERVICE CONSOLE ONLY) to verify network connectivity
If the ESX Host is isolated, it shuts down the VMs, thus releasing locks on the SAN
Recommendations:
Have 2 Service Consoles on redundant paths
Avoid losing SAN access (e.g. via iSCSI)
Make sure you know beforehand if DRS is activated too!

Caveats:
Losing Production VLAN connectivity only ISOLATES the VMs (there’s no equivalent of uplink tracking on the vswitch)

Solution:
NIC TEAMING


HA clustering (2)

[Diagram: ESX1 and ESX2 Server Hosts, each uplinked via vmnic0 into three networks: COS 10.0.2.0, Production 10.0.100.0, and iSCSI access/VMkernel 10.0.200.0; when ESX1 fails, VM1 and VM2 are restarted on ESX2.]

Agenda
VMware LAN Networking
vSwitch Basics
NIC Teaming
vSwitch vs. LAN Switch
Cisco/VMware DC Designs
SAN Designs


Multiple ESX Servers—Shared Storage


VMFS
VMFS is a high-performance cluster file system for virtual machines
Stores the entire virtual machine state in a central location
Supports heterogeneous storage arrays
[Diagram: multiple ESX Servers concurrently access shared VMFS volumes on central storage.]
Adds more storage to a VMFS volume dynamically
Allows multiple ESX Servers to access the same virtual machine storage concurrently
Enables virtualization-based distributed infrastructure services such as VMotion, DRS, and HA


The Storage Stack in VI3
[Diagram: ESX 1 and ESX 2 run VM1–VM5 over virtual disks VD1–VD5; each host’s stack is VSCSI / Disklib / ESX Storage Stack / VMFS / LVM, sitting on LUN 1–3 behind a SAN switch.]
Aggregates physical volumes and provisions logical containers
Provides services such as snapshots
Selectively presents logical containers (virtual disks) to VMs
A clustered, host-based volume manager and filesystem: analogous to how VI3 virtualizes servers, it looks like a SAN to the VMs
A network of LUNs presented to a network of VMs

Standard Access of Virtual Disks on VMFS
[Diagram: ESX1 (VM1, VM2), ESX2 (VM3, VM4), and ESX3 (VM5, VM6) all access VMFS1 on LUN1.]
The LUN(s) are presented to an ESX Server cluster via standard LUN masking and zoning
VMFS is a clustered volume manager and filesystem that arbitrates access to the shared LUN
Data is still protected so that only the right application has access. The point of control moves from the SAN to the vmkernel, but there is no loss of security.

ESX Server creates virtual machines (VMs), each with their own virtual disk(s)
The virtual disks are really files on VMFS
Each VM has a virtual LSI SCSI adapter in its virtual HW model
Each VM sees its virtual disk(s) as local SCSI targets, whether the virtual disk files sit on local storage, iSCSI, or Fibre Channel
VMFS makes sure that only one VM is accessing a virtual disk at one time

With VMotion, CPU state and memory are transferred from one host to another but the virtual disks stay still
VMFS manages the transfer of access from source to destination ESX Server

Three Layers of the Storage Stack
[Diagram: the three layers: virtual disks (VMDK) seen by the Virtual Machine, datastores (VMFS volumes/LUNs) seen by the ESX Server, and physical disks in the Storage Array.]


ESX Server View of SAN
FibreChannel disk arrays appear as SCSI targets (devices), which may have one or more LUNs
On boot, ESX Server scans for all LUNs by sending an inquiry command to each possible target/LUN number
The Rescan command causes ESX Server to scan again, looking for added or removed targets/LUNs
ESX Server can send normal SCSI commands to any LUN, just like a local disk
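As a small illustration (adapter name assumed), the rescan can also be triggered from the service console rather than the VI Client.

esxcfg-rescan vmhba1   # rescan one storage adapter for added or removed targets/LUNs
esxcfg-mpath -l        # list the LUNs and paths seen after the rescan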


ESX Server View of SAN (Cont.)
Built-in locking mechanism to ensure multiple hosts can access same disk on SAN safely
VMFS-2 and VMFS-3 are distributed file systems that do the appropriate on-disk locking to allow many ESX Servers to access the same VMFS

Storage is a resource that must be monitored and managed to ensure performance of VM’s
Leverage 3rd-party systems and storage management tools
Use VirtualCenter to monitor storage performance from the virtual infrastructure point of view


Choices in Protocol
FC, iSCSI or NAS?
Best practice is to leverage the existing infrastructure and not introduce too many changes all at once
Virtual environments can leverage all types; you can choose what fits best and even mix them
Common industry perceptions and trade-offs still apply in the virtual world
What works well for one does not work for all


Which Protocol to Choose?
Leverage the existing infrastructure when possible
Consider customer expertise and ability to learn
Consider the costs (dollars and performance)
What does the environment need in terms of throughput?
Size for aggregate throughput before capacity

What functionality is really needed for Virtual Machines
VMotion, HA, DRS (works on both NAS and SAN)
VMware Consolidated Backup (VCB)
ESX boot from disk
Future scalability
DR requirements


FC SAN—Considerations
Leverage multiple paths for high availability
Manually distribute I/O-intensive VMs on separate paths
Block access provides optimal performance for large, high-transactional-throughput workloads
Considered the industrial-strength backbone for most large enterprise environments
Requires expertise in the storage management team
Expensive price per port of connectivity
Increasing to 10 Gb throughput (soon)

iSCSI—Considerations
Uses standard NAS infrastructure
Best practice is to have a dedicated LAN/VLAN to isolate iSCSI from other network traffic
Use GbE or faster network
Use multiple NICs or iSCSI HBAs
Use an iSCSI HBA for performance environments
Use the SW initiator for cost-sensitive environments

Supports all VI 3 features
VMotion, DRS, HA
ESX boot from HW initiator only
VCB is in experimental support today; full support shortly
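A minimal sketch of enabling the ESX 3.x software initiator from the service console (the target discovery address is then entered in the VI Client); the command names are standard ESX 3.x tools, but treat the exact usage as an assumption to verify.

esxcfg-firewall -e swISCSIClient   # open the software iSCSI client port in the service console firewall
esxcfg-swiscsi -e                  # enable the software iSCSI initiator
esxcfg-swiscsi -s                  # rescan the software iSCSI bus after adding the target in the VI Client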

NFS—Considerations
Has more protocol overhead but less filesystem overhead than VMFS, as the NAS filesystem lives on the NAS head
Simple to define in ESX by providing:
the NFS server hostname or IP
the NFS share
the ESX local datastore name

No tuning required for ESX as most are already defined
No options for rsize or wsize; the version is v3 and the protocol is TCP

Max mount points = 8 by default
Can be increased to a hard limit of 32

Supports almost all VI3 features except VCB
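As a concrete illustration (hostname, export, and datastore label assumed), an NFS datastore is defined from the ESX 3.x service console as follows; raising the NFS.MaxVolumes advanced setting is what moves the 8-mount default toward the hard limit of 32.

esxcfg-nas -a -o nfs01.example.com -s /vol/vm_datastore VM_NFS01   # add the NFS datastore
esxcfg-nas -l                                                      # list configured NFS datastores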

Summary of Features Supported
Protocol            VMotion, DRS & HA   VCB    ESX boot from disk
FC SAN              Yes                 Yes    Yes
iSCSI SAN HW init   Yes                 Soon   Yes
iSCSI SAN SW init   Yes                 Soon   No
NFS                 Yes                 No     No


Choosing Disk Technologies
Traditional performance factors
Capacity / price
Disk types (SCSI, FC, SATA/SAS)
Access time; IOPS; sustained transfer rate
Drive RPM to reduce rotational latency
Seek time
Reliability (MTBF)

VM performance is gated ultimately by IOPS density and storage space
IOPS density = number of read IOPS per GB
Higher = better

The Choices One Needs to Consider
FS vs. Raw
VMFS vs. RDM (when to use)

NFS vs. Block
NAS vs. SAN (why use each)

iSCSI vs. FC
What is the trade off?

Boot from SAN
Sometimes needed for diskless servers

Recommended Size of LUN
it depends on application needs…

File system vs. LUN snapshots (host or array vs. VMware VMFS snapshots): which to pick?
Scalability (factors to consider)
# hosts, dynamic adding of capacity, practical vs. physical limits

Trade Offs to Consider
Ease of provisioning
Ease of ongoing management
Performance optimization
Scalability: headroom to grow
Function of 3rd-party services
Remote mirroring
Backups
Enterprise systems management

Skill level of the administration team
How many shared vs. isolated storage resources

Isolate vs. Consolidate Storage Resources
RDMs map a single LUN to one VM
One can also dedicate a single VMFS volume to one VM
When comparing VMFS to RDMs, these are the two configurations that should be compared
The bigger question is how many VMs can share a single VMFS volume without contention causing pain
The answer is that it depends on many variables:
Number of VMs and their workload type
Number of ESX servers those VMs are spread across
Number of concurrent requests to the same disk sector/platter

Isolate vs. Consolidate

Isolate: poor utilization, islands of allocations, more management
Consolidate: increased utilization, easier provisioning, less management

Where Have You Heard This Before
Remember the DAS-to-SAN migration

Convergence of LAN and NAS
All the same concerns have been raised before:
What if the workload of some causes problems for all?
How will we know who is taking the lion’s share of the resource?
What if it does not work out?

Our Biggest Obstacle Is Conventional Wisdom!
“The Earth Is Flat!” “If Man Were Meant to Fly, He Would Have Wings”

VMFS vs. RDM—RDM Advantages
Virtual machine partitions are stored in the native guest OS file system format, facilitating “layered applications” that need this level of access
As there is only one virtual machine on a LUN, you have much finer-grained characterization of the LUN, and no I/O or SCSI reservation lock contention; the LUN can be designed for optimal performance
With “Virtual Compatibility” mode, virtual machines have many of the features of being on a VMFS, such as file locking to allow multiple access, and snapshots


VMFS vs. RDM—RDM Advantages
With “Physical Compatibility” mode, a virtual machine can send almost all “low-level” SCSI commands to the target device, including command and control to a storage controller, such as through SAN management agents in the virtual machine
Dynamic Name Resolution: stores unique information about the LUN regardless of changes to its physical address due to hardware or path changes


VMFS vs. RDM—RDM Disadvantages
Not available for block or RAID devices that do not report a SCSI serial number
No snapshots in “Physical Compatibility” mode; only available in “Virtual Compatibility” mode
Can be very inefficient in that, unlike VMFS, only one VM can access an RDM


RDMs and Replication
RDM-mapped RAW LUNs can be replicated to the remote site
RDMs reference the RAW LUNs via:
the LUN number
the LUN ID

VMFS3 volumes on the remote site will have an unusable RDM configuration if either property changes
Remove the old RDMs and recreate them:
Must correlate RDM entries to the correct RAW LUNs
Use the same RDM file name as the old one to avoid editing the vmx file

Storage—Type of Access
RAW:
RAW may give better performance
RAW means more LUNs
More provisioning time

VMFS:
Leverage templates and quick provisioning
Fewer LUNs means you don’t have to watch the heap
Scales better with Consolidated Backup
Preferred method

Advanced features still work


Storage—How Big Can I Go?
One Big Volume or Individual?
Will you be doing replication? More granular slices will help
High-performance applications? Individual volumes could help
With Virtual Infrastructure 3, the VMDK, swap, config files, log files, and snapshots all live on VMFS


What Is iSCSI?
A SCSI transport protocol, enabling access to storage devices over standard TCP/IP networks
Maps SCSI block-oriented storage over TCP/IP
Similar to mapping SCSI over Fibre Channel

“Initiators”, such as an iSCSI HBA in an ESX Server, send SCSI commands to “targets”, located in iSCSI storage systems
Block storage
IP


VMware iSCSI Overview
VMware added iSCSI as a supported option in VI3
Block-level I/O over TCP/IP using the SCSI-3 protocol
Supports both Hardware and Software Initiators
GigE NICs MUST be used for SW Initiators (no 100 Mb NICs)
iSCSI HBAs (HW init) and NICs for SW init are the only supported options today
Check the HCL for supported HW Initiators and SW NICs

What we do not support in ESX 3.0.1
10 GigE
Jumbo Frames
Multiple Connections per Session (MCS)
TCP Offload Engine (TOE) cards

VMware ESX Storage Options
[Diagram: ESX storage options: FC, iSCSI/NFS, and DAS (local SCSI), each serving groups of VMs.]
80%+ of the install base uses FC storage
iSCSI is popular in the SMB market
DAS is not popular because it prohibits VMotion


Virtual Servers Share a Physical HBA
A zone includes the physical HBA and the storage array
Access control is delegated to the storage array (“LUN masking and mapping”); it is based on the physical HBA pWWN and is the same for all VMs
The hypervisor is in charge of the mapping; errors may be disastrous
[Diagram: the virtual servers sit behind the hypervisor and share one physical HBA (pWWN-P); there is a single login on a single point-to-point connection to the fabric, the MDS 9000 FC name server sees only pWWN-P, the zone contains the physical HBA and the storage array, and LUN mapping/masking on the array is applied per pWWN-P.]

NPIV Usage Examples
Virtual Machine Aggregation ‘Intelligent Pass-thru’

[Diagram: two NPIV usage examples. Virtual Machine Aggregation: an NPIV-enabled HBA presents multiple virtual ports over one physical link to the switch F_Port. ‘Intelligent Pass-thru’: the switch becomes an HBA concentrator, connecting its attached hosts upstream through an NP_Port to an F_Port.]


Raw Device Mapping
RDM allows direct read/write access to a disk
Block mapping is still maintained within a VMFS file
Rarely used, but important for clustering (MSCS supported)
Used with NPIV environments

[Diagram: VM1 and VM2 reach FC LUNs through RDMs; the RDM mapping files live on a VMFS volume.]


Storage Multi-Pathing
No storage load balancing, strictly failover
Two modes of operation dictate behavior: Fixed and Most Recently Used
Fixed mode:
Allows definition of preferred paths
If the preferred path fails, a secondary path is used
If the preferred path reappears, it will fail back
Most Recently Used:
If the current path fails, a secondary path is used
If the previous path reappears, the current path is still used
Supports both Active/Active and Active/Passive arrays
Auto-detects multiple paths
[Diagram: each VM’s storage traffic reaches the array over one of two FC paths.]

Q and A


Recommended Reading


Recommended Reading


Complete Your Online Session Evaluation
Give us your feedback and you could win fabulous prizes. Winners announced daily. Receive 20 Passport points for each session evaluation you complete. Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.
Don’t forget to activate your Cisco Live virtual account for access to all session material on-demand and return for our live virtual event in October 2008. Go to the Collaboration Zone in World of Solutions or visit www.cisco-live.com.
