Hands-on lab: Introduction to the Cisco Nexus 1000V

The next-generation virtual data center from VMware enables efficient collaboration between network
administrators and VMware administrators through the use of vNetwork Distributed Switches.
By replacing the existing virtual switch with the Cisco Nexus 1000V and offering the familiar Cisco NX-OS,
the Cisco Nexus 1000V preserves the traditional boundaries between server and network administrators while
allowing network administrators to manage virtual switches as well. This lab builds on that foundation with a
considerable amount of hands-on experience.


Contents
Hands-on lab: Introduction to the Cisco Nexus 1000V
Lab Overview
    Objectives
    Cisco CloudLab
    Lab Exercises
    Network Admin vs. Server Admin
Lab Topology and Access
    Logical Topology
    Access
    Connecting via the vSphere Client
Deployment
    Connect to the Cisco Nexus 1000V Virtual Supervisor Module (VSM)
    Creating the VLANs for the different traffic types
    Creating an uplink port profile for the Management Traffic
    Adding an ESX host to the Distributed Virtual Switch
Attaching a Virtual Machine to the Network
    Creating a port profile for virtual machines
    Verify the successful creation of the port-group
    Network Administrator view of Virtual Machine connectivity
VMotion and Visibility
    VMotion Configuration
    Network Administrator's view of VMotion
    Perform a VMotion
    Verify the new Network Administrator's view of the Virtual Machine
Policy-based virtual machine connectivity
    Verify open ports within your virtual machine
    Configuration of an IP-based access list
    Verify the application of the IP-based access list
Mobile VM Security
    Private VLANs
    Removing the Private VLAN configuration
Traffic Inspection of individual Virtual Machines
    Configure an ERSPAN monitor session
    Create an ERSPAN Session on the Nexus 1000V
    Configuring a VMkernel Interface to transport the ERSPAN Session
    Test the session and VMotion the VM
Conclusion
Feedback
Lab proctors


Hands-on lab: Introduction to the Cisco Nexus 1000V
Lab Summary
In this self-paced lab, participants will discover how the Cisco Nexus 1000V software switch for VMware
vSphere enables organizations to unleash the true power and flexibility of server virtualization by offering a
set of network features, management tools, and diagnostic capabilities that are consistent with the customer's
existing physical Cisco network infrastructure and enhanced for the virtual world.
Some of the features of the Cisco Nexus 1000V that will be covered include:

• Policy-based virtual machine (VM) connectivity
• Mobility of security and network properties
• Non-disruptive operational model for both server and network administrators

In the highly agile VMware environment, the new Cisco Virtual Network Link (VN-Link) technology on the
Nexus 1000V will integrate with VMware's vNetwork Distributed Switch framework to create a logical network
infrastructure across multiple physical hosts that will provide full visibility, control and consistency of the
network.

Key Benefits of the Cisco Nexus 1000V
Policy-based virtual machine (VM) connectivity
• Provides real-time coordinated configuration of network and security services
• Maintains a virtual machine-centric management model, enabling the server administrator to increase
  both efficiency and flexibility

Mobile VM security and network policy
• Policy moves with a virtual machine during live migration, ensuring persistent network, security, and
  storage compliance
• Ensures that live migration won't be affected by disparate network configurations
• Improves business continuance, performance management, and security compliance

Non-disruptive operational model for server virtualization and networking teams
• Aligns the management and operations environment for virtual machines and physical server connectivity
  in the data center
• Maintains the existing VMware operational model
• Reduces total cost of ownership (TCO) by providing operational consistency and visibility throughout
  the network


Lab Overview
Objectives
The goal of this lab guide is to give you hands-on experience with a subset of the features of the
Cisco Nexus 1000V Distributed Virtual Switch (DVS). The Cisco Nexus 1000V introduces many new features
and capabilities. This lab will give you an overview of these features and introduce you to the main concepts.

Cisco CloudLab
This lab is hosted in Cisco's cloud-based hands-on and demo lab. Within this cloud you are provided with your
personal dedicated virtual pod (vPod). You connect via RDP to a so-called "control center" host within this vPod
and walk through the lab steps below. All necessary tools to complete this lab can be found in the "control center".
Refer to the separate Cisco CloudLab documentation for details on how to reach the "control center"
within your vPod.
Figure 1. Logical Lab Topology

The username and password to access the Control Center of this vPod are listed below:
User Name: VPOD\administrator
Password: <Refer to the CloudLab Portal>


Lab Exercises
This lab was designed to be completed in sequential order. As some steps rely on the successful completion of previous steps, you are required to complete all steps before moving on. The individual lab steps are:
• Cisco Nexus 1000V deployment
• Attaching Virtual Machines to the Cisco Nexus 1000V
• VMotion and Visibility
• Policy-based Virtual Machine connectivity
• Traffic Inspection of a Virtual Machine
• Quality of Service (QoS) for Virtual Machines

Network Admin vs. Server Admin
One of the key features of the Cisco Nexus 1000V is the non-disruptive operational model for both Network and Server administrators. This means that in a real-world deployment scenario of this product, both the Network administrator and the VMware administrator would have their own management perspectives with different views and tools. This lab purposely exposes you to both of these perspectives: the Network administrator perspective with the Cisco NX-OS Command Line Interface (CLI) as the primary management tool, and the VMware administrator perspective with vCenter as the primary management tool. Even if you won't be exposed to "the other side" during your regular job, it might be a good idea to understand the overall operation and handling of the Nexus 1000V.

Lab Topology and Access
The lab represents a typical VMware setup with two physical ESX hosts offering services to virtual machines and a vCenter to coordinate this behavior. Furthermore, a Cisco Nexus 1000V will be used to provide network services to the two physical ESX hosts as well as to the virtual machines residing on them.

Logical Topology
The diagram below represents the logical lab setup of a vPod as it pertains to the Cisco Nexus 1000V.

Figure 2. Logical Pod Design

Your pod consists of:
• Two physical VMware ESX servers. They are called esx01.vpod.local and esx02.vpod.local.
• One VMware vCenter, reachable at vcenter.vpod.local via the vSphere client.
• One Cisco Nexus 1000V Virtual Supervisor Module, reachable at vsm.vpod.local via SSH.
• One pre-configured upstream switch to which you do not have access.

Access
During this lab, configuration steps need to be performed on the VMware vCenter as well as on the Cisco Nexus 1000V Virtual Supervisor Module (VSM) within the CloudLab Virtual Pod. The VMware vCenter is accessible through the vSphere Client application; the VSM is accessible through an SSH connection. All necessary applications used within this lab are available on the desktop of the control center machine to which you are connected via Remote Desktop Protocol (RDP). Use the usernames and passwords listed below for accessing your vPod's elements.

Usernames and Passwords
vCenter           Login: VPOD\Administrator    Password: Cisco123
                  (Use the vSphere Client feature "Use Windows session credentials" for easier login.)
Nexus 1000V VSM   Login: admin                 Password: Cisco123

Connecting via the vSphere Client
Start the VMware vSphere Client by double-clicking on the VMware vSphere Client icon on the desktop. The following figure shows the vSphere Client login screen.

Figure 1: vSphere Client login screen

Tick "Use Windows session credentials" and click on "Login" for vSphere Client authentication. After a successful login you'll see the following vSphere Client application screen.

Figure 2: vSphere Client application screen

Deployment
While the Nexus 1000V has already been registered in vCenter, it is still necessary to connect the different ESX hosts to the Nexus 1000V. In order to automatically install the necessary Virtual Ethernet Module (VEM) of the Cisco Nexus 1000V on the ESX hosts, we will be using VMware Update Manager (VUM). In a vSphere setup VUM is used to stage and apply patches and updates to ESX hosts. It has been preconfigured to connect to the correct VSM module vsm.vpod.local.

In this lab you will:
• Create an uplink port-profile and apply it on the uplink interface of the ESX hosts
• Add the two hosts to the Nexus 1000V switch

Lab Setup
Each pod is composed of 2 ESX hosts, 1 Virtual Supervisor Module and one Virtual Center. Both ESX hosts are connected to an upstream switch using 4 different NICs. Out of these NICs, one will be used with the Nexus 1000V to carry all the management, VMotion and application traffic coming from the VM.

The goal of this step consists of adding the two hosts to the Nexus 1000V. In order to add a new host to the Distributed Switch, we need to create a port-profile to enable the communication between the Virtual Supervisor Module and the different Virtual Ethernet Modules. On top of that we want to enable the VMotion traffic on the same interface.

Connect to the Cisco Nexus 1000V Virtual Supervisor Module (VSM)
Use the following credentials to connect via SSH to the Cisco Nexus 1000V Virtual Supervisor Module (VSM). The SSH client software PuTTY can be found on the desktop of your vCenter host.

Hostname: vsm.vpod.local
Username: admin
Password: Cisco123
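Once you are logged in to the VSM, you can optionally confirm that it is indeed registered with vCenter before you continue. The show svs connections command exists on the Nexus 1000V; the sketch below is only indicative, and the IP address and datacenter name shown are placeholders, not values from your pod.

Nexus1000V# show svs connections
connection vcenter:
    ip address: <vCenter IP of your pod>
    protocol: vmware-vim https
    vmware dvs datacenter-name: <your datacenter>
    config status: Enabled
    operational status: Connected

As long as the operational status reports Connected, the VSM is talking to vCenter and the following steps will be pushed through to the vSphere side automatically.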

Creating the VLANs for the different traffic types
In order to configure the communication between the VSM and the VEM, as well as the communication for the Virtual Machines, we will use different VLANs to segregate the different types of traffic. You will utilize 4 different VLANs:
• Control VLAN 10: VLAN used to allow the communication between the VSM and the VEM
• Packet VLAN 10: VLAN used to exchange some specific packets – e.g. CDP – between the VSM and the VEM
• Virtual Machine VLAN 11: VLAN used for the application traffic
• VMotion VLAN 12: VLAN used for VMotion traffic
• Private VLAN – Secondary VLAN 111: Secondary VLAN for the Private VLAN lab step

Specify the VLANs for later usage:

Nexus1000V# conf t
Nexus1000V(config)# vlan 10
Nexus1000V(config-vlan)# name N1KV_Control_Packet
Nexus1000V(config-vlan)# vlan 11
Nexus1000V(config-vlan)# name VM_Network
Nexus1000V(config-vlan)# vlan 12
Nexus1000V(config-vlan)# name VMotion
Nexus1000V(config-vlan)# vlan 111
Nexus1000V(config-vlan)# name PVLAN_Secondary
Nexus1000V(config-vlan)# end
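A quick way to confirm that the four VLANs were created with the intended names is shown below. This is an optional check; the sketch assumes the names from the list above, and the Ports column will simply be empty at this point.

Nexus1000V# show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- ------------------------------
1    default                          active
10   N1KV_Control_Packet              active
11   VM_Network                       active
12   VMotion                          active
111  PVLAN_Secondary                  active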

Creating an uplink port profile for the Management Traffic
In this part you will learn how to configure a port-profile that will be applied on an uplink interface. A port-profile can be compared to a template that contains all the networking information that will be applied on different interfaces. If the port-profile is configured as type ethernet, it is targeted to be applied on a physical interface. If not, it will be applied on a Virtual Machine interface. We will use this port-profile for all the management VLANs, for VMotion as well as for productive VM traffic.

Nexus1000V# conf t
Nexus1000V(config)# system update vem feature level 2
Old feature level: 4.0(4)SV1(1)
New feature level: 4.0(4)SV1(3)
Nexus1000V(config)# port-profile type ethernet Uplink
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode private-vlan trunk promiscuous
Nexus1000V(config-port-prof)# switchport private-vlan trunk allowed vlan 10-12,111
Nexus1000V(config-port-prof)# channel-group auto mode on mac-pinning
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# system vlan 10,12
Nexus1000V(config-port-prof)# state enabled

Note: The uplink port-profile already includes a configuration line for private VLANs. This configuration is necessary for a later lab step and will be explained in the corresponding section. It already has to be included at this stage as certain configurations cannot be altered once the uplink port profile is in use.

Some special characteristics of the uplink port profile should be pointed out at this stage:
• type ethernet: This configuration line means that the corresponding port-profile can only be applied to a physical Ethernet port. This is also indicated through a special icon in the vSphere Client.
• channel-group auto: This configuration line activates the feature virtual port-channel host mode (vPC-HM). It allows the Nexus 1000V to form a port-channel with upstream switches that do not support multichassis EtherChannel.

Congratulations, you just configured your first port-profile!
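Before attaching the hosts, you can optionally review what you just configured from the CLI. The command below is available on the Nexus 1000V; the output sketch is abbreviated and only meant to show the kind of information to look for (the ethernet type, the allowed VLANs and the system VLANs).

Nexus1000V# show port-profile name Uplink
port-profile Uplink
  type: ethernet
  status: enabled
  system vlans: 10,12
  port-group: Uplink
  config attributes:
    switchport mode private-vlan trunk promiscuous
    switchport private-vlan trunk allowed vlan 10-12,111
    channel-group auto mode on mac-pinning
    no shutdown
  assigned interfaces:

The assigned interfaces list is still empty; it will be populated as soon as the ESX uplinks are added in the next step.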

Adding an ESX host to the Distributed Virtual Switch
We will now add the two ESX hosts of your pod to the Nexus 1000V DVS and apply the port-profile that we just created to the uplink interfaces of the different hosts. The VEM component has already been pre-installed on the ESX hosts. An alternative would be the usage of VMware Update Manager (VUM), which would make the integration of the ESX host into the Nexus 1000V completely automated and transparent.

Consistent network configuration across hosts is required for successful VMotion. Utilizing the traditional non-distributed vSwitches requires multiple manual steps to keep hosts consistent and is therefore time-consuming and error-prone. Adding a host to the Distributed Virtual Switch is done by assigning some or all of the physical NICs of an ESX host to the DVS and assigning the previously created uplink port-profile to these NICs.

1. Navigate to the Networking view by clicking on the Home -> Inventory -> Networking tab. To reach this view click on the arrow to the right of "Inventory" and pick "Networking" from the list being displayed.
2. Right-click on your DVS and choose Add Host...
3. You are presented with all hosts that are part of the data center but not part of the DVS.

4. Select the hosts and the NICs that will be assigned to the DVS. Currently vmnic0 is already in use by the traditional vSwitch to enable the initial management of your ESX hosts, while vmnic1 is used for iSCSI storage traffic and vmnic2 provides network access to the existing VMs through a vSwitch. Please only choose vmnic3 to become part of the Cisco Nexus 1000V DVS. Assign the uplink port profile "Uplink" that you created in the previous step to vmnic3 on host esx01.vpod.local and click on "Next".

Note: In real-life scenarios uplink port-profiles are configured by the network administrator to match the settings of the physical upstream switches. This ensures that there is no misconfiguration between the physical network and the virtual network. It also enables network administrators to use features for this uplink that are available on other Cisco switches (e.g. QoS, EtherChannel, ...).

5. The next screen offers you the possibility to migrate existing VMkernel ports to the Nexus 1000V. In a real-life scenario it is possible to even migrate the service console to the Cisco Nexus 1000V and thereby completely decommission the VMware vSwitch. But this lab has not been prepared to do so. For the purpose of this lab do not choose to migrate any VMkernel ports and click on "Next".

Note: Migrating the Management Network and/or iSCSI will result in a loss of management and storage connectivity of the hosts. Therefore under no circumstances choose vmnic0 and/or vmnic1 to become part of the Cisco Nexus 1000V DVS.

6. Similar to the previous screen, this next screen allows you to migrate existing Virtual Machine Networks to the Nexus 1000V. For the purpose of this lab do not choose to migrate any existing Virtual Machine Networks and click "Next".

7. You are presented with an overview of the uplink ports that are created. By default VMware creates 32 uplink ports per host and leaves it to the Nexus 1000V VSM to map them to useful physical ports. Acknowledge these settings by clicking on Finish. After a few seconds the ESX host esx01.vpod.local will appear in the Hosts view of the Distributed Virtual Switch.

8. Repeat the same steps to add the host esx02.vpod.local to the Cisco Nexus 1000V.
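At this point both hosts should be visible from the network side as well. If you want an early confirmation from the VSM, you can already issue show module; modules 3 and 4 should appear as Virtual Ethernet Modules in state ok. The sketch below is abbreviated; the full output of this command is examined in more detail in a later section of this lab.

Nexus1000V# show module
Mod  Ports  Module-Type                       Model         Status
---  -----  --------------------------------  ------------  ----------
1    0      Virtual Supervisor Module         Nexus1000V    active *
3    248    Virtual Ethernet Module           NA            ok
4    248    Virtual Ethernet Module           NA            ok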

Attaching a Virtual Machine to the Network
The next step demonstrates how the Network Administrator and the VMware Administrator work hand in hand to provide network connectivity for virtual machines. The workflow to attach a virtual machine to the network consists of the following steps:
• The network admin creates a port profile, which can be considered a configuration template for virtual Ethernet ports.
• The port profile is translated into a port-group and appears in Virtual Center.
• The ESX admin assigns a virtual machine to a port-group.
• The Nexus 1000V creates a virtual Ethernet port (Veth) to connect the VM and configures the port based on the port profile that was tied to the port group chosen by the ESX admin.

This lab step consists of:
• Configure a port profile for Virtual Machines (Network Administrator)
• Assign a VM to a port profile (VMware Administrator)

Creating a port profile for virtual machines
On the CLI, create the port profile VM-Client by typing the configuration commands shown below:

Nexus1000V# conf t
Nexus1000V(config)# port-profile VM-Client
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode access
Nexus1000V(config-port-prof)# switchport access vlan 11
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# state enabled
Nexus1000V(config-port-prof)# exit
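As with the uplink profile, you can optionally double-check the new port profile from the CLI before switching to the vSphere Client. The sketch below is abbreviated; the important points are the vethernet type, access VLAN 11 and the enabled state.

Nexus1000V# show port-profile name VM-Client
port-profile VM-Client
  type: vethernet
  status: enabled
  port-group: VM-Client
  config attributes:
    switchport mode access
    switchport access vlan 11
    no shutdown
  assigned interfaces: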

Verify the successful creation of the port-group:
1. Navigate to the Networking view by choosing the Home -> Inventory -> Networking tab at the top of the screen.
2. Verify that the port-profile with the name VM-Client appears in the resource tree view under the distributed virtual switch called Nexus1000V, as well as in the Networks tab. Choose the Nexus 1000V Distributed Virtual Switch object called Nexus1000V in order to gain the same insight under the Networks tab.

Assign a Virtual Machine to a port profile
As you can see under the Home -> Inventory -> Hosts and Clusters tab, your lab pod already includes a virtual machine named "Windows 7 – A". To reach this tab, click on the arrow to the right of "Inventory" and pick "Hosts and Clusters" from the dropdown menu. This VM initially uses the vSwitch port-group labeled VM Network. Attach the vNIC of the VM inside your pod to the Nexus 1000V by associating it to the port-group VM-Client.

1. In VMware Virtual Center open the settings dialog of the first VM by clicking on Edit Settings. Navigate to the Virtual NIC section, choose the port group VM-Client for the network label and finalize by clicking on OK.

2. Verify that the Virtual Machine is using the port-group VM-Client.
3. Open the Virtual Machine Console for the VM "Windows 7 – A".
4. Click on the "Cisco Systems, Inc." link, which you can find on the desktop inside the VM. This opens the web page www.cisco.com with the internet browser and verifies the network connectivity of the VM.

5. Close the Virtual Machine Console.
6. Repeat steps 1 to 4 for the Virtual Machine "Windows 7 – B".

Congratulations, you successfully configured the network connectivity for a virtual machine! This step demonstrated that the workflow introduced by the Cisco Nexus 1000V is much more efficient than the traditional approach using vSwitches: the network team configures the network for the server team, and the server team only needs to apply the prepared settings.

Network Administrator view of Virtual Machine connectivity
Now that the Nexus 1000V is up and ready, you can take some time to explore more details of the virtual switch.

1. Connect to the Cisco Nexus 1000V Virtual Supervisor Module through an SSH connection. The correct host and access credentials are already set up for you.
2. Issue the command show module

Nexus1000V# show module
Mod  Ports  Module-Type                       Model         Status
---  -----  --------------------------------  ------------  ----------
1    0      Virtual Supervisor Module         Nexus1000V    active *
3    248    Virtual Ethernet Module           NA            ok
4    248    Virtual Ethernet Module           NA            ok

Mod  Sw              Hw
---  --------------  ------
1    4.0(4)SV1(3a)   0.0
3    4.0(4)SV1(3a)   2.0
4    4.0(4)SV1(3a)   2.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP       Server-UUID                           Server-Name
---  --------------  ------------------------------------  -------------------
1    10.2.11.5       NA                                    NA
3    10.2.11.11      422c745e-64b2-09d0-e470-dcc9cdacb560  esx02.vpod.local
4    10.2.11.12      422c9ae2-9381-e104-6a91-2f2815f5028d  esx01.vpod.local

* this terminal session
Nexus1000V#

In the output of the show module command you can see different familiar components:
• Module 1 and module 2 are reserved for the Virtual Supervisor Module (VSM). The Cisco Nexus 1000V supports a model where the supervisor can run in an active/standby high-availability mechanism. Your lab's pod is only equipped with a primary VSM, but not a secondary VSM.
• Module 3 and module 4 represent a Virtual Ethernet Module (VEM). As shown at the bottom of the screen, each VEM corresponds to a physical ESX host, identified by the server IP address and name. This mapping of a virtual line-card to a physical server eases the communication between the network and server team.

3. Let's have a look at the interfaces next by using the show interface brief command

Nexus1000V# show interface brief

--------------------------------------------------------------------------------
Port   VRF          Status IP Address                              Speed    MTU
--------------------------------------------------------------------------------
mgmt0  --           up     10.2.11.5                               1000     1500

--------------------------------------------------------------------------------
Ethernet      VLAN   Type Mode   Status  Reason                  Speed     Port
Interface                                                                  Ch #
--------------------------------------------------------------------------------
Eth3/4        1      eth  trunk  up      none                     1000(D)    1
Eth4/4        1      eth  trunk  up      none                     1000(D)    2

--------------------------------------------------------------------------------
Port-channel  VLAN   Type Mode   Status  Reason                  Speed   Protocol
Interface
--------------------------------------------------------------------------------
Po1           1      eth  trunk  up      none                    a-1000(D)  none
Po2           1      eth  trunk  up      none                    a-1000(D)  none

--------------------------------------------------------------------------------
Interface     VLAN   Type Mode   Status  Reason                   MTU
--------------------------------------------------------------------------------
Veth1         11     virt access up      none                     1500
Veth2         11     virt access up      none                     1500

--------------------------------------------------------------------------------
Port   VRF          Status IP Address                              Speed    MTU
--------------------------------------------------------------------------------
ctrl0  --           up     --                                      1000     1500

Nexus1000V#

The output of the command show interface brief shows you the different interface types that are used within the Cisco Nexus 1000V:
• Mgmt0: This interface is used for out-of-band management and corresponds to the second vNIC of the VSM.
• Ethernet interfaces: These are physical Ethernet interfaces and correspond to the physical NICs of the ESX hosts. The numbering scheme lets you easily identify the corresponding module and NIC.
• Port-Channels: Ethernet interfaces can be bound manually or automatically through vPC-HM into port channels. When using the uplink port-profile configuration mac-pinning there is no need for the configuration of a traditional port-channel on the upstream switch(es). Nonetheless, on the Nexus 1000V a virtual port-channel is still formed.

• Veths: Virtual Ethernet interfaces connect to VMs and are independent of the host that the VM runs on. The numbering scheme therefore does not include any module information. The Veth identifier remains with the VM during its entire lifetime – even while the VM is powered down.

4. Verify on the Nexus 1000V CLI that the corresponding Virtual Ethernet interface has been created for the two virtual machines by issuing the command show interface virtual.

Nexus1000V# show interface virtual
-------------------------------------------------------------------------------
Port        Adapter          Owner                    Mod  Host
-------------------------------------------------------------------------------
Veth1       Net Adapter 1    Windows 7 - A            3    esx01.vpod.local
Veth2       Net Adapter 1    Windows 7 - B            4    esx02.vpod.local
Nexus1000V#

The output of the above command gives you a mapping of the VM name to its Veth interface.

5. On top of that, the Network Administrator can see at any given time which VM is in use and which port-profile is attached to it by using the show port-profile usage command.

Nexus1000V# show port-profile usage
-------------------------------------------------------------------------------
Port Profile          Port            Adapter          Owner
-------------------------------------------------------------------------------
Uplink                Po1
                      Po2
                      Eth3/4          vmnic3           esx01.vpod.local
                      Eth4/4          vmnic3           esx02.vpod.local
VM-Client             Veth1           Net Adapter 1    Windows 7 - A
                      Veth2           Net Adapter 1    Windows 7 - B
Nexus1000V#

Note: The Network administrator can manage the shown virtual Ethernet interfaces the same way as a physical interface on a Cisco switch.

Congratulations! You have successfully added Virtual Machines to the Nexus 1000V distributed virtual switch! As a result the network team now has complete insight into the network part of the server virtualization infrastructure.
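Since the uplinks came up as automatically created port-channels (one per host, thanks to vPC-HM with mac-pinning), you can also inspect them like any other NX-OS port-channel. The sketch below uses the standard show port-channel summary command; the exact flags legend depends on the software release, but the member interfaces in your pod should be Eth3/4 and Eth4/4 as seen above.

Nexus1000V# show port-channel summary
Flags:  D - Down   P - Up in port-channel (members)   U - Up (port-channel)
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      NONE      Eth3/4(P)
2     Po2(SU)     Eth      NONE      Eth4/4(P)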

VMotion and Visibility
The next section demonstrates the configuration of the VMkernel VMotion interface in order to perform a successful VMotion. VMotion is a well-known feature of VMware which allows users to move a Virtual Machine from one physical host to another while the VM remains operational. Therefore this feature is also called live migration. In the second step the continuous visibility of virtual machines during VMotion is demonstrated.

This lab step consists of the following:
• Configure a VMotion network connection
• Perform a VMotion and note the Veth mapping

VMotion Configuration
You will now create a VMkernel interface that will be used for VMotion. In this step you will configure the VMkernel VMotion interface for both servers.

1. The first step is to provision a port-profile for the VMotion interface. Let's call this port-profile "VMotion".

Nexus1000V# conf t
Nexus1000V(config)# port-profile VMotion
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode access
Nexus1000V(config-port-prof)# switchport access vlan 12
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# system vlan 12
Nexus1000V(config-port-prof)# state enabled

2. Go to the Home -> Inventory -> Hosts and Clusters tab and choose the first server esx01 of your pod.

3. Click on the Configuration tab and within the Hardware area on Networking. Under View choose Distributed Virtual Switch. In order to add the VMkernel VMotion interface choose Manage Virtual Adapters..., and afterwards click on Add within the Manage Virtual Adapters dialog.

4. In the Add Virtual Adapter Wizard choose to create a New Virtual Adapter, and then click on the Next button.

5. As Virtual Adapter Type you can only choose VMkernel. Click Next.

6. Choose VMotion as the port group name. Also check the box right next to Use this virtual NIC for VMotion to enable VMotion on this interface. Click Next.

7. Configure the IP settings for the VMotion interface. For the host esx01 choose the IP address 192.168.12.11 and for host esx02 the IP address 192.168.12.12. For both hosts choose the Subnet Mask of 255.255.255.0. Do not change the VMkernel Default Gateway and click on the Next button.

8. Before finishing the Wizard you are presented with an overview of your settings. Verify the correctness of these settings and choose Finish. You have now successfully added the VMkernel VMotion interface.

9. Close the "Manage Virtual Adapters" window.

10. Repeat steps 3 to 8 to configure the VMkernel VMotion interface on the second host esx02.

Congratulations! You successfully configured the VMkernel VMotion interface leveraging the Cisco Nexus 1000V.
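If you want to cross-check the result from the VSM before performing the actual VMotion, the port-profile view gives a compact summary. The sketch below assumes the two VMkernel interfaces were created as described above; the Veth numbers on your pod may differ, and the key detail to look for is that VLAN 12 is listed as a system VLAN and that two interfaces are assigned.

Nexus1000V# show port-profile name VMotion
port-profile VMotion
  type: vethernet
  status: enabled
  system vlans: 12
  port-group: VMotion
  config attributes:
    switchport mode access
    switchport access vlan 12
    no shutdown
  assigned interfaces:
    Veth3
    Veth4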

Network Administrator's view of VMotion
An important attribute of the Nexus 1000V with regards to VMotion is that the VM keeps its virtual connection identifier throughout the VMotion process. This way a VMotion does not influence the interface policies, network management capabilities or traceability for a VM from the perspective of the Network Administrator. Instead, the Virtual Machine keeps its Veth identifier across the VMotion process.

1. Before VMotioning your pod's Virtual Machine, make note of the current Veth for the given Virtual Machine.
2. Prior to the VMotion perform a lookup of the used Virtual Interfaces with the command show interface virtual. This yields the following or similar results:

Nexus1000V# show interface virtual
-------------------------------------------------------------------------------
Port        Adapter          Owner                    Mod  Host
-------------------------------------------------------------------------------
Veth1       Net Adapter 1    Windows 7 - A            3    esx01.vpod.local
Veth2       Net Adapter 1    Windows 7 - B            4    esx02.vpod.local
Veth3       vmk2             VMware VMkernel          3    esx01.vpod.local
Veth4       vmk2             VMware VMkernel          4    esx02.vpod.local
Nexus1000V#

Make note of the associated Veth port as well as the Module and the ESX hostname currently associated to the Virtual Machine.

Perform a VMotion
Test your previous VMotion configuration by performing a VMotion process.
1. Go to the Home -> Inventory -> Hosts and Clusters tab.
2. Drag & drop the Virtual Machine "Windows 7 – A" from the first ESX host of your setup to your second ESX host.
3. Walk through the appearing VMotion wizard by leaving the default settings, clicking on Next and finally Finish.

4. Wait for the VMotion to successfully complete.
5. Open the Virtual Machine Console again and verify that the Virtual Machine still has network connectivity by reloading the default webpage.

Verify the new Network Administrator's view of the Virtual Machine
After a successful VMotion the expected behavior is that the Virtual Machine can be seen and managed by the network administrator through the same virtual Ethernet port, even while the VM attached to it is live migrated. Verify that this is the case.

1. Again use the show interface virtual command to perform a lookup of the used Virtual Interfaces. The resulting output shows you the current mapping of a Veth port to the Virtual Machine.

Nexus1000V# show interface virtual
-------------------------------------------------------------------------------
Port        Adapter          Owner                    Mod  Host
-------------------------------------------------------------------------------
Veth1       Net Adapter 1    Windows 7 - A            4    esx02.vpod.local
Veth2       Net Adapter 1    Windows 7 - B            4    esx02.vpod.local
Veth3       vmk2             VMware VMkernel          3    esx01.vpod.local
Veth4       vmk2             VMware VMkernel          4    esx02.vpod.local
Nexus1000V#

2. By comparing the output before and after the VMotion process, you can notice that the Virtual Machine still uses the same Veth port, while the output for Module and Host changes. On top of that, all the configuration and statistics follow the VM across the VMotion process. The Cisco Nexus 1000V provides all the monitoring capabilities that the network team is used to for a virtual Ethernet port.

Congratulations! You are now able to trace a VM moving across physical ESX hosts via VMotion. Please migrate the Virtual Machine "Windows 7 – A" back to the host "esx01.vpod.local" before progressing to the next lab step.

Policy-based virtual machine connectivity
After the basic functionality of the Cisco Nexus 1000V distributed virtual switch has been demonstrated, it is time to explore some of the more advanced features. This section will demonstrate the policy-based virtual machine capabilities in the form of IP-based filtering.

The steps of this section include:
• Configure an IP-based access list
• Apply the access list to a port-group
• Verify the functionality of the access list

Verify open ports within your virtual machine
In a previous section of this lab guide it was already demonstrated that the Virtual Machine inside your pod, which is connected to the Cisco Nexus 1000V switch, has basic connectivity to the upstream network. This could be seen by opening the webpage www.cisco.com. At the same time this also means that the VM is accessible by hosts on the upstream network and might be at risk for various network-based attacks. To demonstrate this, the Virtual Machine inside your pod has two Windows-specific ports open which might be used for attacks. Before configuring the access list to block access, verify that your Virtual Machine currently has two open ports:

1. Open the Virtual Machine Console of the VM "Windows 7 – A" inside your pod.

2. Click on the Cisco Systems, Inc. icon to load the default webpage and choose the link for the Host Port-Status Analyzer.
3. Verify that port 135 (Windows RPC) and 445 (Windows CIFS) are open.

Configuration of an IP-based access list
In this lab step you will create an IP-based access list which blocks access to these two ports. The name ProtectVM is chosen for this access list.

1. Using the CLI, create an access list within the Cisco Nexus 1000V VSM.

Nexus1000V# conf t
Nexus1000V(config)# ip access-list ProtectVM
Nexus1000V(config-acl)# deny tcp any any eq 135
Nexus1000V(config-acl)# deny tcp any any eq 445
Nexus1000V(config-acl)# permit ip any any

This access list denies all TCP traffic to port 135 (Windows RPC) and 445 (Windows CIFS) while permitting any other IP traffic.

2. You will now apply the access list ProtectVM as an outbound rule to the virtual Ethernet interfaces (Veth) of the existing VMs running Windows 7. Here the concept of port-profiles comes in very handy in simplifying the work. As the Veth interfaces of the Windows 7 VMs leverage the port profile VM-Client, adding the access list to this port profile will automatically update all associated Veth interfaces and assign the access list to them.

Nexus1000V(config-acl)# port-profile VM-Client
Nexus1000V(config-port-prof)# ip port access-group ProtectVM out

Note: The directions "in" and "out" of an ACL have to be seen from the perspective of the Virtual Ethernet Module (VEM), not the Virtual Machine. Thus "in" specifies traffic flowing in to the VEM from the VM, while "out" specifies traffic flowing out from the VEM to the VM.

As a result, access to both open ports within your Virtual Machine has been blocked.
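You can also verify the same result from the VSM before testing it from inside the VM. Both commands below exist on NX-OS; the sequence numbers are assigned automatically, and the output sketch is abbreviated.

Nexus1000V# show ip access-lists ProtectVM
IP access list ProtectVM
        10 deny tcp any any eq 135
        20 deny tcp any any eq 445
        30 permit ip any any
Nexus1000V# show running-config port-profile VM-Client

The second command should list the ip port access-group ProtectVM out line under the VM-Client profile, confirming that the filter is now part of the template applied to every attached Veth.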

Verify the application of the IP-based access list
Verify that both ports that were open before have been blocked:
1. Again, open the Virtual Machine Console.
2. Click on the Cisco.com icon to load the default webpage and choose the link for the Host Port-Security Analyzer.
3. Verify that port 135 (Windows RPC) and 445 (Windows CIFS) are filtered.

Congratulations! You have successfully created, applied and verified an IP-based access list. This exercise demonstrated that all the features usually used on a physical switch interface can now be applied on the Veth, and that the concept of port-profiles makes the network configuration much easier: changes to a port-profile are propagated on the fly to all the VMs using it.
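The propagation works in the removal direction as well: taking the ACL off the port profile immediately re-opens the ports on every attached VM. This is not required for the remaining lab steps, but a possible sketch, using the names from this section, would be:

Nexus1000V# conf t
Nexus1000V(config)# port-profile VM-Client
Nexus1000V(config-port-prof)# no ip port access-group ProtectVM out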

Mobile VM Security
Another key differentiator of the Cisco Nexus 1000V over the VMware DVS is the advanced Private VLAN capability. This section demonstrates the capabilities of Private VLANs by placing individual VMs in a Private VLAN while utilizing the uplink port as a promiscuous PVLAN trunk. Thus VMs will not be able to communicate among each other, but can only communicate with the default gateway and any other peer beyond the default gateway. This can for example be used to deploy server virtualization within a DMZ. In order to avoid the requirement of configuring the PVLAN merging on the upstream switch, the new feature of promiscuous PVLAN trunks is showcased on the uplink port. This means that the primary and secondary VLAN will be merged before leaving the uplink port; the upstream switch does not need to be configured for that.

The content of this step includes:
• Configure Private VLANs
• Removing the Private VLAN configuration

Private VLANs
This section demonstrates the configuration of a Private VLAN towards the connected VMs. First, you will prepare the primary and secondary VLAN on the VSM. Then we will configure the VM and uplink port-profiles to do the translation between the isolated and the promiscuous VLAN.

Note: When a VLAN is specified to be a primary VLAN for usage with private VLANs it instantly becomes unusable as a regular VLAN. As your Virtual Machines are still using VLAN 11 for network connectivity, your VMs will encounter connectivity issues while you perform the configuration steps below. It is therefore recommended not to change an in-use VLAN from non-PVLAN usage to PVLAN usage in a production environment.

1. First we will update the VLAN to run in isolated mode.

Nexus1000V# conf t
Nexus1000V(config)# vlan 11
Nexus1000V(config-vlan)# private-vlan primary
Nexus1000V(config-vlan)# vlan 111
Nexus1000V(config-vlan)# private-vlan isolated
Nexus1000V(config-vlan)# vlan 11
Nexus1000V(config-vlan)# private-vlan association add 111

2. You can check that the configuration has been successfully applied by issuing the show vlan private-vlan command

Nexus1000V# show vlan private-vlan
Primary  Secondary  Type             Ports
-------  ---------  ---------------  ------------------------------------------
11       111        isolated

3. As a next step configure the uplink port profile as a promiscuous PVLAN trunk with the primary VLAN 11 and the secondary VLAN 111. The promiscuous trunk mode itself has already been configured during the creation of the uplink port-profile, so it is not necessary to configure it again.

Nexus1000V(config)# port-profile type ethernet Uplink
Nexus1000V(config-port-prof)# switchport private-vlan mapping trunk 11 111

4. After this step has been completed, configure the port profile VM-pvlan – which connects the Virtual Machines – as a private VLAN in host mode, thus isolating the individual VMs from each other.

Nexus1000V(config)# port-profile VM-pvlan
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode private-vlan host
Nexus1000V(config-port-prof)# switchport private-vlan host-association 11 111
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# state enabled

Apply the port-profile on both "Windows 7 – A" and "Windows 7 – B".

5. Verify the current Veth-mapping of the VMs and the usage of the PVLAN.

Note: After applying a new port-profile to a Virtual Machine, a new Veth interface is created. Therefore the VMs "Windows 7 – A" and "Windows 7 – B" will no longer be connected to Veth1 and Veth2 respectively, as shown in a previous lab step.

Nexus1000V(config-port)# show interface virtual
-------------------------------------------------------------------------------
Port        Adapter          Owner                    Mod  Host
-------------------------------------------------------------------------------
Veth3       vmk2             VMware VMkernel          3    esx01.vpod.local
Veth4       vmk2             VMware VMkernel          4    esx02.vpod.local
Veth5       Net Adapter 1    Windows 7 - A            3    esx01.vpod.local
Veth6       Net Adapter 1    Windows 7 - B            4    esx02.vpod.local

Nexus1000V(config-port)# show interface brief
--------------------------------------------------------------------------------
Port   VRF          Status IP Address                              Speed    MTU
--------------------------------------------------------------------------------
mgmt0  --           up     10.2.11.5                               1000     1500

--------------------------------------------------------------------------------
Ethernet      VLAN   Type Mode   Status  Reason                  Speed     Port
Interface                                                                  Ch #
--------------------------------------------------------------------------------
Eth3/4        1      eth  trunk  up      none                     1000(D)    1
Eth4/4        1      eth  trunk  up      none                     1000(D)    2

--------------------------------------------------------------------------------
Port-channel  VLAN   Type Mode   Status  Reason                  Speed   Protocol
Interface
--------------------------------------------------------------------------------
Po1           1      eth  trunk  up      none                    a-1000(D)  none
Po2           1      eth  trunk  up      none                    a-1000(D)  none

--------------------------------------------------------------------------------
Interface     VLAN   Type Mode   Status  Reason                   MTU
--------------------------------------------------------------------------------
Veth1         11     virt access down    nonParticipating         1500
Veth2         11     virt access down    nonParticipating         1500
Veth3         12     virt access up      none                     1500
Veth4         12     virt access up      none                     1500
Veth5         111    virt pvlan  up      none                     1500
Veth6         111    virt pvlan  up      none                     1500

--------------------------------------------------------------------------------
Port   VRF          Status IP Address                              Speed    MTU
--------------------------------------------------------------------------------
ctrl0  --           up     --                                      1000     1500

The expected behavior of the above configuration is that the first two virtual machines of your pod should both still be able to reach the default gateway and all hosts beyond this gateway. However they should not be able to reach each other.

6. This can be verified by pinging the default gateway 192.168.1.1 from "Windows 7 – A". To do so, log in to one of the Windows 7 VMs and open the console. Click on the Command Prompt icon on the desktop within the VM and issue the command ping 192.168.1.1.

7. Try now to ping "Windows 7 – B" from "Windows 7 – A". The IP address of "Windows 7 – B" is 192.168.1.12. Issue the command ping 192.168.1.12. As expected, the ping times out.

8. You can now change the isolated VLAN to a community VLAN. Ports in a community VLAN can talk to each other as well as to the promiscuous port. However they cannot talk to an isolated port.

Nexus1000V(config-port)# vlan 111
Nexus1000V(config-vlan)# private-vlan community

Note: The Virtual Machines using the port-profile "VM-pvlan" will lose network connectivity for a brief moment (interface flap) when changing the PVLAN mode.

Again try to ping the second VM from the first. This time the ping will work.

Feel free to move the VMs around the two ESX hosts via VMotion. You will notice that no matter where the two VMs reside, the network policies are enforced the same way.

Congratulations, you have successfully configured a Private VLAN with a promiscuous PVLAN trunk on the uplink! This feature allows you to utilize server virtualization in new areas, such as in the deployment of a DMZ.

Removing the Private VLAN configuration
Before continuing with further lab steps, please remove the Private VLAN configuration from VLAN 11 again. The previously created port-profile "VM-pvlan" will become unusable and your VMs would therefore lose connectivity if they kept using it.

1. Remove the configuration of VLAN 11 as a primary PVLAN

Nexus1000V# conf t
Nexus1000V(config)# vlan 11
Nexus1000V(config-vlan)# no private-vlan primary
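Only the primary flag on VLAN 11 interferes with normal operation, so the step above is all that is strictly required. If you prefer to also remove the remaining private-VLAN statement on the secondary VLAN from this exercise, a possible cleanup sketch (assuming the VLAN numbers used in this section) is:

Nexus1000V(config-vlan)# vlan 111
Nexus1000V(config-vlan)# no private-vlan community
Nexus1000V(config-vlan)# end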

Traffic Inspection of individual Virtual Machines
One of the main drawbacks of server virtualization up until today was the lack of visibility into the VM from a network perspective. Especially features such as VMotion added to this lack of visibility. Advanced features of the Cisco Nexus 1000V, such as ERSPAN, give the network administrator back the capability to inspect the traffic of a virtual machine within the virtual network infrastructure.

In this lab step, you will configure an ERSPAN session to inspect the traffic of a virtual Ethernet interface connected to a certain VM. The ERSPAN session will terminate in another virtual machine running Wireshark. In a second step you will then live-migrate the VM using VMotion and observe how the monitor session is still spanning the traffic to Wireshark.

The different steps include:
• Configure an ERSPAN-type monitor session
• Create a port-profile to enable the SPAN traffic to be sent to a Virtual Machine containing our sniffing application (Wireshark)
• Verify the configuration of the ERSPAN session
• Verify that the Wireshark VM receives the traffic

Configure an ERSPAN monitor session
1. Apply the VM-Client port-profile back on "Windows 7 – A" and "Windows 7 – B".

Note: After configuring the VMs to use the original port-profile VM-Client again, the Veth mapping will correspond again to the original mapping as outlined in the "Attaching a Virtual Machine to the Network" lab guide step.

Create an ERSPAN Session on the Nexus 1000V
As the Cisco Nexus 1000V VSM is running NX-OS, the configuration of an ERSPAN session is equivalent to the configuration of this feature on other products of the Cisco Nexus platform. The only difference is the ability to select a Veth interface as a source.

1. Before creating the ERSPAN session, identify the Veth port of the VM to be spanned by using show interface virtual

Nexus1000V# show interface virtual
-------------------------------------------------------------------------------
Port        Adapter          Owner                    Mod  Host
-------------------------------------------------------------------------------
Veth1       vmk2             VMware VMkernel          4    esx01.vpod.local
Veth2       vmk2             VMware VMkernel          3    esx02.vpod.local
Veth3       Net Adapter 1    Windows 7 - A            3    esx02.vpod.local
Veth4       Net Adapter 1    Windows 7 - B            3    esx02.vpod.local
Nexus1000V#

Find out which Veth interface is being used by the VM named "Windows 7 – A". In the above example it is associated with Veth3.

Note: Changing the association of a Virtual Machine to a port-group will create a new Veth interface for this VM. Should you change the port-group, you would therefore have to go through the following steps again and update the ERSPAN configuration with the new Veth interface information.

2. In the VSM configure a new ERSPAN session by issuing the commands below. Note that vethZZ corresponds to the Veth number of "Windows 7 – A" as identified in step 1. In the above case ZZ would be replaced by 3.

Nexus1000V# conf t
Nexus1000V(config)# monitor session 1 type erspan-source
Nexus1000V(config-erspan-src)# description "Monitor Windows 7 - A VM"
Nexus1000V(config-erspan-src)# source interface vethZZ both
Nexus1000V(config-erspan-src)# destination ip 192.168.1.12
Nexus1000V(config-erspan-src)# erspan-id 999
Nexus1000V(config-erspan-src)# mtu 128
Nexus1000V(config-erspan-src)# no shut

192.168.1.12 is the IP address of "Windows 7 – B", where the packet sniffer is installed. We will use this VM as our ERSPAN target.

Note: One of the powerful features of the Nexus 1000V, since it is a software switch, is the ability to use truncated ERSPAN. Unlike any other switch, the Nexus 1000V can change the size of the ERSPAN packets so that only the useful information desired by the network administrator is received. By changing the MTU to 128, it will only send the GRE header plus part of the packet header and will not saturate the link by sending too much information.

Configuring a VMkernel Interface to transport the ERSPAN Session
The Nexus 1000V leverages a VMkernel interface to transport the SPAN traffic when using ERSPAN. In this lab step, define a new port-profile which will be used by the VMkernel interface to send the ERSPAN traffic. We could configure the interface directly, but leveraging the port-profile concept is a more scalable approach: in case you need to, for example, update the VLAN used for the ERSPAN traffic, this change can easily be accomplished.

1. Configure a new port-profile for the VMkernel interface used for ERSPAN.

Nexus1000V# conf t
Nexus1000V(config)# port-profile ERSPAN
Nexus1000V(config-port)# vmware port-group
Nexus1000V(config-port)# capability l3control
Nexus1000V(config-port)# switchport mode access
Nexus1000V(config-port)# switchport access vlan 11
Nexus1000V(config-port)# no shutdown
Nexus1000V(config-port)# system vlan 11
Nexus1000V(config-port)# state enabled

Note: The keyword capability l3control indicates to the Cisco Nexus 1000V that the interface will be used to carry L3 traffic.

2. Create a new VMkernel interface using Virtual Center and apply the newly created port-profile.

3. Choose VMkernel as Virtual Adapter Type.
4. Select the ERSPAN port-profile that you created before.
5. Configure the IP settings for the VMkernel ERSPAN interface. Use the IP address 192.168.1.101 with a Subnet Mask of 255.255.255.0 on the host esx01 and the IP address 192.168.1.102 with the same Subnet Mask of 255.255.255.0 on the host esx02.
6. Click on Next and Finish.
7. Repeat steps 2 to 4 to add the new VMkernel ERSPAN interface on server 2 as well.

Congratulations! You configured your first ERSPAN session. Now you can monitor and troubleshoot the traffic of a particular Virtual Machine. As the source of the ERSPAN session is a Veth interface, you will still be able to span traffic even if the VM moves to another host due to a VMotion.

Test the session and VMotion the VM
1. You can issue the command show monitor session 1 to verify if the ERSPAN session is up and working

Nexus1000V# show monitor session 1
   session 1
---------------
description       : "Monitor Windows 7 - A VM"
type              : erspan-source
state             : up
source intf       :
    rx            : Veth3
    tx            : Veth3
    both          : Veth3
source VLANs      :
    rx            :
    tx            :
    both          :
filter VLANs      : filter not specified
destination IP    : 192.168.1.12
ERSPAN ID         : 999
ERSPAN TTL        : 64
ERSPAN IP Prec.   : 0
ERSPAN DSCP       : 0
ERSPAN MTU        : 128
ERSPAN Header Type: 2

2. From the "Windows 7 – A" Console, issue a continuous ping to the default gateway at 192.168.1.1. To do so type ping -t 192.168.1.1.

3. Open the console to control the VM called "Windows 7 – B".
4. Start Wireshark by double-clicking on the icon on the desktop. Click on Intel(R) PRO 1000MT Network Connection under Interface List to start capturing packets.

5. You will see various kinds of traffic received by the sniffer. Fine-tune the selection of traffic by applying the filter erspan.spanid == 999 && (icmp.type == 0 || icmp.type == 8). As a result of the filter you will only see the ICMP requests and replies received via ERSPAN.

6. Initiate a VMotion of "Windows 7 – A" from one ESX host to the other one by dragging the VM icon to the new ESX host. Observe that even during the VMotion Wireshark keeps receiving the spanned traffic. Only while the VM named "Windows 7 – A" is stunned (at around 78% progress) for a very brief moment as part of the VMotion will you lose a minimal amount of packets (1-2). This is the moment when VMware briefly halts (stuns) all components – such as CPU and I/O (NICs) – and transfers control from the original VM to the VMotioned VM.

Congratulations! You have successfully monitored the traffic of a particular VM using ERSPAN. Furthermore, you saw that you can do this even across a VMotion.
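When you are done observing the mirrored traffic, you can optionally remove the monitor session again; this is not required to finish the lab. A minimal sketch, assuming session 1 as configured earlier in this section:

Nexus1000V# conf t
Nexus1000V(config)# no monitor session 1
Nexus1000V(config)# end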

Conclusion
You are now familiar with the Nexus 1000V. As you have experienced during the lab, the Nexus 1000V is based on three important pillars:
- Security
- Mobility of the network
- Non-disruptive operational model

In this lab you:
• Got familiar with the Cisco Nexus 1000V Distributed Virtual Switch for VMware ESX
  o Installed and configured the Nexus 1000V
  o Added physical ESX hosts to the DVS
  o Attached a Virtual Machine to the Distributed Virtual Switch
  o Tested the VMotion capability
• Familiarized yourself with advanced features of the Cisco Nexus 1000V
  o IP-based access lists
  o Configured an ERSPAN session to troubleshoot the VM traffic
  o Configured Private VLANs

Feedback
We would like to improve this lab to better suit your needs. To do so, we need your feedback. Please take 5 minutes to complete the online feedback for this lab. Just click on the link below and answer the online questionnaire.

Online Feedback

Page 47 . All rights reserved.Lab proctors    Christian Elsen Kishan Pallapothu Cuong Tran ©2011 Cisco | VMware.

For More Information
For more information about the Cisco Nexus 1000V, go to: http://www.cisco.com/go/nexus1000v or contact your local Cisco account representative.
For more information about the VMware vNetwork capabilities augmenting physical networking, visit http://www.vmware.com/technology/virtual-datacenter-os/infrastructure/vnetwork.html

Revision: 1.1

Cisco Systems, Inc., 170 West Tasman Drive, San Jose, CA 95134-1706 USA, www.cisco.com, Tel: 408 526-4000 / 800 553-NETS (6387), Fax: 408 527-0883
VMware, Inc., 3401 Hillview Ave, Palo Alto, CA 94304 USA, www.vmware.com, Tel: 1-877-486-9273 or 650-427-5000, Fax: 650-427-5001

Copyright © 2008. Cisco, the Cisco logo, and Cisco Systems are registered trademarks or trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries. All other trademarks mentioned in this document or Website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0807R) 09/08
VMware products are protected by one or more U.S. patents and patents pending.