Network Design Considerations in a Blade Server Environment - LAN & Storage

Session Number: BRKDCT-2869


Session Abstract
This session will cover design considerations for blade-server-centric deployments, including interconnect considerations and topology options for both LAN and storage technologies. Key blade-server-specific features will be covered, along with installation instructions. Various deployment scenarios will be described, with the pros and cons of each design. Tools for simplifying deployment will be described and demonstrated.


Session Objectives
After completing this session, you will be able to:
– Understand the architecture and deployment of the blade switches
– Understand topology options
– Identify key edge features and IOS commands
– Manage and monitor Ethernet blade switches via Device Manager and CNA
– Understand key Fibre Channel edge features and architecture


Outline
Network Design for Blade Servers
– DC Access Options for Blade Switches: Pass-thru, ToR, or Integrated Switch
– L2 vs. L3 inside the Blade Enclosure
– Single vs. Two-tier Access
– Services in a Blade Server deployment
– NIC teaming designs
– Virtual Server Deployment on blade servers

Architectural Considerations for VBS Technology
– Overview & Benefits of Virtualization Technology
– Deployment scenarios
– Pros & Cons

SAN
– Key issues faced by customers
– Solutions: NPV, FlexAttach


View of Data Center Networking
The BIG Picture
[Diagram: the big picture of data center networking – enterprise LAN/MAN/WAN switching (now inside the BladeSystem), enterprise SAN switching, and Topspin server fabric switching, front-ending Tier 1/2/3 servers plus high-end disk, mid-tier, and tape storage]

Integrated Network Services: server load balancing, VPN termination, SSL termination, firewall services, intrusion detection
Integrated Storage Services: virtual fabrics (VSANs), storage virtualization, data replication services, fabric routing services, multiprotocol gateway services
Integrated Virtualization Services: server virtualization, virtual I/O, grid/utility computing, low-latency RDMA services, clustering


Design with Pass-Thru Module and Modular Access Switch
High cable density. Rack example:
– Four enclosures per rack
– Up to 16 servers per enclosure
– 32 LOMs + 16 service NICs per enclosure
– 192 available access ports
– Requires structured cabling to support 192 connections per blade rack
– A single pair of modular access switches supports 12 blade server enclosures (three racks)

[Diagram: blade server rack with Gigabit Ethernet connections to a pair of modular access switches]

Design with Pass-Thru Module and Modular Access Switch

Does this look manageable?

How do I find and replace a bad cable?


Design with Pass-Thru Module and Top of the Rack (TOR) Switches
High cable density within the rack. High-capacity uplinks provide aggregation-layer connectivity. Rack example:
– Up to four blade enclosures per rack
– Up to 128 cables for server traffic
– Up to 64 cables for server management
– Up to four rack switches support the local blade servers
– Up to two switches for server management ports
– Requires up to 192 cables within the rack

[Diagram: ToR switches with 10 GigE uplinks to the aggregation layer]

Design with Blade Switches
Reduces cables within the rack. High-capacity uplinks provide aggregation-layer connectivity. Rack example:
– Up to four blade enclosures per rack
– Two switches per enclosure
– Either 8 GE uplinks or one 10GE uplink per switch
– Between 8 and 64 cables/fibers per rack
– Reduces the number of cables within the rack but increases the number of uplinks compared to the ToR solution
– Based on cable cost, 10GE from the blade switch is the better option

[Diagram: blade switches with 10 GigE or GE uplinks to the aggregation layer]


Design with Virtual Blade Switches
Removes cables from the rack. High-capacity uplinks provide aggregation-layer connectivity. Rack example:
– Up to four blade enclosures per rack
– Up to 64 servers per rack
– Two switches per enclosure
– One Virtual Blade Switch per enclosure
– Two or four 10GE uplinks per rack
– Reduces the number of access-layer switches by a factor of 8
– Allows local rack traffic to stay within the rack

[Diagram: Virtual Blade Switch with 10 GigE or GE uplinks to the aggregation layer]

Networking in a Data Center
Each Layer Has a Distinct Role in the Network
Core:
• Layer 3 core
• High speed / bandwidth
• High availability

Aggregation:
• Layer 2/3 edge
• Data center service integration:
– SSL termination, load balancer
– Firewall, IPS/IDS, DoS mitigation
– Caching
– Traffic analysis

Access:
• High-performance server connectivity
• Port density
• Partitioning with VLANs
• Layer 2 resiliency with STP


Layer 2 Access and Layer 3 Access Compared
[Diagram: Layer 2 access – Rapid PVST+ or PVST+ with trunks to the aggregation layer – versus Layer 3 access – OSPF or EIGRP with L3 links to the aggregation layer]

The choice of one design versus the other has to do with:
– Layer 2 loops being more difficult to manage than Layer 3 loops
– Convergence time, link utilization, and specific application requirements
– Requirements of NIC teaming and clustering

When is Layer 2 Adjacency Required?
Meeting Server Farm Application Requirements

– Clustering: applications often execute on multiple servers clustered to appear as a single device; common for HA, load-balancing, and high-performance computing requirements (Windows 2003 Advanced Server clustering, Linux Beowulf or proprietary HPC clustering)
– NIC teaming software (AFT/SFT/ALB modes) typically requires Layer 2 adjacency

Blade NIC Teaming Configurations
Network Fault Tolerance (NFT)
– Typically referred to as active/standby: the primary adapter transmits and receives; the secondary stands by
– Used when the server sees two or more upstream switches
– NIC connectivity is PREDEFINED with built-in switches and may limit NIC configuration options

Transmit Load Balancing (TLB)
– Primary adapter transmits and receives; secondary adapters transmit only
– Rarely used

Switch Assisted Load Balancing (SLB)
– Often referred to as active/active
– Server must see the same switch on all member NICs (GEC/802.3ad)
– Increased throughput
– Available with VBS switches (a switch-side sketch follows below)
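Because SLB requires every member NIC to terminate on the same logical switch, a VBS lets an active/active team span two physical blade switches in the same ring. A minimal switch-side sketch, assuming the two server-facing ports are gig1/0/1 and gig2/0/1 on two VBS members and the server VLAN is 10:

! Hedged sketch: 802.3ad (LACP) channel for an SLB NIC team on a VBS
interface range gig1/0/1, gig2/0/1
 switchport access vlan 10
 channel-group 10 mode active    ! LACP; use "mode on" for static GEC teams
!
interface port-channel 10
 switchport access vlan 10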

Datacenter Access Evolution
Customers have two options with blade servers:
– Create a two-level access layer (blade switches below the existing access layer), or
– Maintain a single access layer (blade switches as the access layer)

Decision based on:
– Existing investment/capacity in the aggregation/access layer
– Size of the spanning-tree domain
– Latency

Blade Server Access Topologies
Different alternatives:
– V-topology: very popular; some bandwidth not available
– U-topology: not as popular
– Trunk-failover topology: maximum bandwidth available; needs NIC teaming

Reducing STP Complexity with Integrated Switching
Higher Resiliency with “Layer 2 Trunk Failover”
Typical blade network topologies: Cisco blade switches in the blade server chassis, with the uplink EtherChannel and the server downlinks bound into Link State Group 1 toward the L3 switches.

FEATURE
– Maps an uplink EtherChannel to downlink ports (a link state group)
– If all uplinks fail, the downlink ports are shut down instantly
– The server is notified and starts using its backup NIC/switch

CUSTOMER BENEFIT
– Higher resiliency / availability
– Reduced STP complexity

A configuration sketch follows below.
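A minimal sketch of Layer 2 trunk failover (link-state tracking) on the blade switch; the group number and port ranges are assumptions:

link state track 1
!
interface port-channel 1
 description EtherChannel uplinks to the aggregation layer
 link state group 1 upstream
!
interface range gig1/0/1 - 16
 description Server downlinks: shut down if all group-1 uplinks fail
 link state group 1 downstream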

Flexlink Overview
– Achieve Layer 2 resiliency without using STP
– Access switches have backup links to aggregation switches
– Target of sub-100 ms convergence upon forwarding-link failover
– Convergence time independent of the number of VLANs and MAC addresses
– Interrupt-based link detection for Flexlink ports; link-down detected within a 24 ms poll
– No STP instance for Flexlink ports
– Forwarding on all VLANs on the up Flexlink port occurs with a single update operation (low cost)


Data Center / 3-Tier Network
[Diagram: three-tier data center network – Catalyst 6500s at the core and aggregation layers, access switches below]

MMN (MAC Address Move Notification) Overview
– Achieves near sub-100 ms downtime for downstream traffic as well, upon Flexlink switchover
– Lightweight protocol: sends an MMN packet listing [(VLAN1: MAC1, MAC2, ...), (VLAN2: MAC1, MAC2, ...)] to the distribution network
– The receiver parses the MMN packet and learns or moves the contained MAC addresses; alternatively, it can flush the MAC address table for those VLANs
– The receiver forwards the packet to other switches

A configuration sketch follows below.
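A minimal sketch of enabling MAC address-table move updates, assuming the Flexlink pair lives on the blade/access switch and the distribution switches act as receivers:

! On the access switch that owns the Flexlink pair:
mac address-table move update transmit
! On the upstream distribution switches:
mac address-table move update receive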


Flexlink MMN Performance – Timings
[Chart: measured Flexlink/MMN convergence timings – not reproduced]


Flexlink Preemption
Flexlink is enhanced to provide flexibility in choosing the forwarding (FWD) link, optimizing available bandwidth utilization.

The user can configure what a Flexlink pair does when the previous FWD link comes back up:
– Preemption mode Off: the current FWD link continues forwarding
– Preemption mode Forced: the previous FWD link preempts the current one and forwards instead
– Preemption mode Bandwidth: the higher-bandwidth interface preempts the other and forwards

Note: by default, Flexlink preemption mode is Off. When configuring preemption delay, the user can specify a delay of 0 to 300 seconds; the default is 35 seconds.

Preemption delay time: once the switch identifies a Flexlink preemption case, it waits <preemption delay> seconds before preempting the currently forwarding Flexlink interface.

A sketch of the preemption knobs follows below.
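A minimal sketch, assuming a Po1/Po2 Flexlink pair as in the configuration example on the next slide:

interface port-channel 1
 switchport backup interface po2
 switchport backup interface po2 preemption mode bandwidth
 switchport backup interface po2 preemption delay 60    ! wait 60 s before preempting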


Flexlink Configuration Commands
CBS3120-VBS-TOP# config t
Enter configuration commands, one per line. End with CNTL/Z.
CBS3120-VBS-TOP(config)# int po1
CBS3120-VBS-TOP(config-if)# switchport backup int po2
CBS3120-VBS-TOP(config-if)# end

CBS3120-VBS-TOP# show interface switchport backup detail

Switch Backup Interface Pairs:

Active Interface        Backup Interface        State
------------------------------------------------------------------------
Port-channel1           Port-channel2           Active Up/Backup Down
        Preemption Mode : off
        Bandwidth : 20000000 Kbit (Po1), 10000000 Kbit (Po2)
        Mac Address Move Update Vlan : auto

CBS3120-VBS-TOP#


Outline
Network Design for Blade Servers
– DC Access Options for Blade Switches: Pass-thru, ToR, or Integrated Switch
– L2 vs. L3 inside the Blade Enclosure
– Single vs. Two-tier Access
– Services in a Blade Server deployment
– NIC teaming designs
– Virtual Server Deployment on blade servers

Architectural Considerations for VBS Technology
– Overview & Benefits of Virtualization Technology
– Deployment scenarios
– Pros & Cons

SAN
– Key issues faced by customers
– Solutions: NPV, FlexAttach


Topology Highlighting Key Benefits
Access layer (Cisco Catalyst Virtual Blade Switch) to distribution layer:
– Mix-and-match GE and 10GE switches
– Local traffic doesn't go to the distribution switch
– Higher resiliency with EtherChannel
– Single switch/node (for spanning tree, Layer 3, and management)
– With VSS on the Catalyst 6500, all links are utilized
– Greater server bandwidth via active/active server connectivity


“Multiple Deployment Options for Customers”
Caters to Different Customer Needs
Common scenario: a single Virtual Blade Switch per rack.

Benefits:
– Cost-effective
– The entire rack can be deployed with as few as two 10 GE uplinks or two GE EtherChannels
– Allows active/active NIC teams
– Creates a single router for the entire rack if deploying L3 at the edge
– Keeps rack traffic in the rack

Design considerations:
– Ring is limited to 64 Gbps
– May cause oversubscription

“Multiple Deployment Options for Customers”
Caters to Different Customer Needs
Benefits:
– Separate VBS for the left and right switches
– More resilient
– More ring capacity, since there are two rings per rack

Design considerations:
– Requires more uplinks per rack
– Servers cannot form active/active NIC teams


“Multiple Deployment Options for Customers”
Caters to Different Customer Needs
Benefits:
– Allows four NICs per server
– All four NICs can be teamed active/active
– More server bandwidth

Design considerations:
– Creates smaller rings
– Requires more uplinks
– May increase traffic on each ring


Additional Options
By combining the above three scenarios, the user can:
– Deploy up to 8 switches per enclosure
– Build smaller rings with fewer switches
– Split a VBS between LAN-on-Motherboard (LOM) and daughter-card Ethernet NICs
– Split a VBS across racks (see next slide)
– Connect unused uplinks to other devices, such as additional rack servers or appliances such as storage


Proper VBS Ring Configuration
Each option offers a full ring, can be built with 1-meter cables, and looks similar. But certain designs can lead to a split ring if an entire enclosure is powered down.
In the four-enclosure "No" example, if enclosure 3 had power removed you would end up with two rings: one made up of the switches in enclosures 1 and 2, and one made up of the switches in enclosure 4. At a minimum, this would leave each VBS contending for the same IP address, and remote switch management would become difficult.
The "Yes" examples also have a better chance of maintaining connectivity for the servers in the event a ring does get completely split due to multiple faults.
Cable lengths are 0.5, 1.0, and 3.0 meters; the 1.0-meter cable ships standard.

[Diagram: "Yes" and "No" ring-cabling examples across enclosures 1-4 – interleave ring cables across non-adjacent enclosures so that powering off one enclosure cannot split the ring]

Ring health can be verified as shown below.
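A quick way to verify ring health from the VBS master (a sketch; output varies by IOS release):

CBS3120-VBS-TOP# show switch              ! members, roles, priorities
CBS3120-VBS-TOP# show switch stack-ports  ! both stack ports "Ok" on every member = full ring
CBS3120-VBS-TOP# show switch stack-ring speed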


Virtual Blade Switch across Racks
– VBS cables are limited to a maximum of 3 meters
– Ensure that switches are not isolated in case of a switch or enclosure failure
– May require cutting holes through the side walls of cabinets/racks (adjacent racks sit roughly 2 ft apart)


Most Common Deployment Scenario
Straightforward configuration:
– Ensure uplinks are spread across switches and enclosures
– If using EtherChannel, make sure members are not in the same enclosure
– By using RSTP and EtherChannel, recovery time on failure is minimized
– Make sure the master switch (and alternates) are not uplink switches
– Use Flexlinks if STP is not desired

[Diagram: VBS racks uplinked to the aggregation layer, which connects to the core layer]


Deployment Example
– Switch numbering 1 to 8, left to right, top to bottom
– Master switch is member 1; alternate masters will be 3, 5, 7
– Uplink switches will be members 2, 4, 6, 8
– 10 GE EtherChannels from members 2,4 and 6,8 will be used
– RSTP will be used
– User data VLANs will be interleaved


Configuration Commands
switch 1 priority 15           ! Sets switch 1 as primary master
switch 3 priority 14           ! Sets switch 3 as secondary master
switch 5 priority 13           ! Sets switch 5 as third master
switch 7 priority 12           ! Sets switch 7 as fourth master
spanning-tree mode rapid-pvst  ! Enables Rapid STP
vlan 1-10
 state active                  ! Configures VLANs
interface range gig1/0/1 - gig1/0/16
 switchport access vlan xx     ! Assign ports to VLANs


Configuration Commands
interface range ten2/0/1, ten4/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1-10
 channel-group 1 mode active
interface range ten6/0/1, ten8/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1-10
 channel-group 2 mode active
interface po1
 spanning-tree vlan 1,3,5,7,9 port-priority 0
 spanning-tree vlan 2,4,6,8,10 port-priority 16
interface po2
 spanning-tree vlan 1,3,5,7,9 port-priority 16
 spanning-tree vlan 2,4,6,8,10 port-priority 0


7 Rules to Live by for EtherChanneling
1. Split links across line cards on the 6500 side – protects against a line-card outage
2. Split across 6500s if using VSS – protects against a chassis outage
3. Split links across members on the blade side if using VBS – protects against a blade switch outage
4. Split links across blade enclosures if possible – protects against an enclosure outage
5. Split VLANs across ECs for load balancing – prevents idle ECs
6. Choose an appropriate EC load-balancing algorithm – for example, blade servers generally have even-numbered MAC addresses
7. Last but not least, monitor your ECs – it is the only way to know whether you need more bandwidth or a better EC load balance

A sketch for rules 6 and 7 follows below.
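For rules 6 and 7, a minimal sketch: pick a hash that includes IP addresses (so the even-numbered MAC pattern of blade servers does not skew traffic onto one link), then verify and monitor:

port-channel load-balance src-dst-ip   ! global setting; applies to all ECs on the switch
!
show etherchannel load-balance         ! verify the configured hash
show etherchannel summary              ! bundle and member status
show etherchannel 1 port-channel       ! per-EC detail for Po1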


Device Manager Screen Shot
• Embedded HTML server
• Built into each Ethernet blade switch
• Provides initial configuration
• Simple monitoring tool


CNA Screenshot – Topology View


CNA Screenshot – Front Panel View


Catalyst 6500 – Virtual Switching
What are the benefits?

VSS benefits:
• Single point of configuration and management
• Multichassis EtherChannel (MEC) for active/active uplinks (no STP loops)
• The virtual switch presents itself as a single device, consistently, upstream and downstream
• All PFC-based features handled in hardware (multicast and unicast)
• Fully functional at either L2 or L3
• 50% reduction in routing-protocol neighbors = better scalability!
• VBS allows point-to-point connectivity with a single EC

[Diagram: logical network view – the blade switch access layer connects via MEC to the L3 distribution VSS]

Catalyst 6500 – Virtual Switch Link (VSL) Hardware / Software Requirements
Minimum requirements:
• Supervisor: Sup720-10GE in each chassis
• VSL interconnect: 10GE ports on the Sup720-10GE or WS-X6708-10GE (DFC3C/CXL)
• Software: >= 12.2 RLS6
• Access layer equipment: any device with EtherChannel support

[Diagram: physical view – two L3 distribution chassis joined by the VSL, providing user/server access]

A conversion sketch follows below.
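A minimal VSS conversion sketch for the Catalyst 6500 pair; the domain number and interface numbers are assumptions:

! On chassis 1:
switch virtual domain 100
 switch 1
interface port-channel 10
 switch virtual link 1
interface tengigabitethernet 5/4
 channel-group 10 mode on
! On chassis 2: the same, with "switch 2", "switch virtual link 2", and Po20
! Then, on both chassis (each reboots into the virtual switch):
switch convert mode virtual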

Outline
Network Design for Blade Servers
– DC Access Options for Blade Switches: Pass-thru, ToR, or Integrated Switch
– L2 vs. L3 inside the Blade Enclosure
– Single vs. Two-tier Access
– Services in a Blade Server deployment
– NIC teaming designs
– Virtual Server Deployment on blade servers

Architectural Considerations for VBS Technology
– Overview & Benefits of Virtualization Technology
– Deployment scenarios
– Pros & Cons

SAN
– Key issues faced by customers
– Solutions: NPV, FlexAttach


SAN Storage Topology

[Diagram: blade servers with integrated blade switches connecting over FC to director-class SAN switches and storage]


Key issues faced by customers
– Not enough domain IDs
– World Wide Name (WWN) virtualization


What is FlexAttach
Flexibility for Adds, Moves, and Changes
FlexAttach (based on WWN NAT):
– Each blade switch F-port is assigned a virtual WWN
– The blade switch performs NAT operations on the real WWN of the attached server
– When a new blade is inserted: no blade switch config change, no switch zoning change, no array configuration change

Benefits:
– No SAN reconfiguration required when a new blade server attaches to a blade switch port
– Provides flexibility for the server administrator by eliminating the need to coordinate change management with the networking team
– Reduces downtime when replacing failed blade servers

[Diagram: blade chassis (blades 1 to N, plus a new blade) behind a FlexAttach/NPV blade switch, connecting to the SAN and storage]

What is FlexAttach?
Creates a virtual pWWN for host initiators. Allows server administrative tasks with minimal involvement of storage administrators:
– Pre-configure a server for SAN access
– Replace server HBA(s) on the same port
– Move a blade server around the fabric: to another slot in the same chassis, or to another slot in another chassis (which has to be in the same physical SAN fabric)


NPV Brief Overview
Login process for the NP-port:
– The NP-port first does FLOGI and PLOGI into the core to register in the FC name server (with the Cisco pWWN)
– Any subsequent login from the NPV switch does FDISC to the FC name server

VSAN membership:
– The core switch interface and the NP switch interface must have matching VSANs
– Servers on the NPV switch must reside on a VSAN that matches one or more NP uplinks' VSAN

Login process for end devices:
– The server logs in with FLOGI
– The NPV switch converts the FLOGI to FDISC
– The pWWN of the server gets registered in the FC name server

[Diagram: core switch (must be NPIV capable) with F-ports facing the NP-ports of the NPV edge switch; servers PWWN1 and PWWN2 attach to F-ports on the edge switch]

A configuration sketch follows below.
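A minimal sketch of both sides in MDS-style CLI; interface numbers are assumptions:

! On the NPV edge (blade) switch -- enabling NPV erases the config and reboots:
npv enable
interface fc1/17
  switchport mode NP
  no shutdown
! On the NPIV-capable core switch:
feature npiv            ! "npiv enable" on older SAN-OS releases
interface fc2/1
  switchport mode F
  no shutdown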


FlexAttach – How does it work?
1. Interface fc1/1 is FlexAttach-enabled and assigned vpwwn1
2. Server S1 (port WWN pwwn1) does FLOGI to interface fc1/1
3. The pwwn1 FLOGI is rewritten to use vpwwn1
4. The vpwwn1 FLOGI is converted by NPV to FDISC and entered into the FC name server

Server S1 is therefore known by vpwwn1 in the SAN.

[Diagram: server S1 (pwwn1) attaches via its N-port to F-port fc1/1 on the NPV FlexAttach blade switch, which applies the pwwn rewrite rule pwwn1 -> vpwwn1 and uplinks to the core switch, an MDS or a third-party switch with NPIV support]

An enablement sketch follows below.
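A sketch of enabling FlexAttach from the CLI of the NPV blade switch; the exact syntax varies by release (FlexAttach can also be configured through Fabric Manager), so treat the commands below as an assumption:

! Automatically assign a virtual pWWN to a server-facing port:
flex-attach virtual-pwwn auto interface fc1/1
! Verify the virtual pWWN mapping:
show flex-attach virtual-pwwn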

A Typical Virtualized Data Center
Network Is Even More Important
– Mobility of VMs (VMotion) is possible ONLY through the network
– Virtualization enables SOA, which requires dynamic server and network provisioning
– Multiple network/SAN connections per ESX server
– Centralized storage for OS and data (SAN or NAS)
Hence, the network needs to be highly resilient, secure, agile, and manageable.

[Diagram: VirtualCenter management server and ESX servers attached to the Ethernet network (VMotion) and to the storage network with SAN and NAS storage]

Cisco VFrame DC Enables Dynamic Provisioning for Virtualized Environments
Management and monitoring on top:
– Business service management: Mercury, Tideway, BMC
– Monitoring: IBM Tivoli, HP OpenView, BMC Patrol, CA Unicenter

Cisco VFrame Data Center (network-driven service orchestration; the SOI control layer):
– Orchestrates across infrastructure resources
– Platform for service abstraction
– Integrates with other management systems
– Virtualization managers: VMware VirtualCenter
– Element managers: Cisco Fabric Manager, VMS, CiscoWorks, ANM

Data center networked infrastructure underneath: server pool, network pool, storage pool (SAN, NAS)

VFrame Screenshot – Server Provisioning


VFrame Screenshot – Network Provisioning


VFrame Screenshot – VMWare Example


In Summary
– Blade server and switch architectures simplify your data center design
– Blade switches provide the same rich feature set as the aggregation and core switches
– Many options are available for integrating blade servers into the data center, for both storage and IP connectivity
– Virtualization is a key part of the next-generation data center


Q and A


Other Data Center Sessions
BRKDCT-2823  The Server Switch, Demultiplexing Networks with a Unified Fabric
BRKDCT-2825  Nexus Architecture
BRKDCT-2840  Data Center Networking – Taking risk out of Layer 2 Interconnects
BRKDCT-2863  DC Migration & Consolidation Discovery Methodology
BRKDCT-2866  Data Center Architecture Strategy and Planning
BRKDCT-2867  Data Center Facilities Consideration in Designing and Building Networks
BRKDCT-2868  Network Integration of Server Virtualization - LAN & Storage
BRKDCT-2870  Data Center Virtualization Overview/Concepts
BRKDCT-3831  Advanced Data Center Virtualization
BRKRST-3470  Cisco Nexus 7000 Switch Architecture
BRKRST-3471  Cisco NXOS Software - Architecture and Deployment
LABDCT-2870  Cisco Nexus 7000 Series Lab
LABDCT-2871  I/O Consolidation using Fibre Channel over Ethernet
TECDCT-2887  Architecting Distributed, Resilient Data Centers
TECDCT-3873  Data Center Design Power Session
TECRST-2003  Cisco Nexus 7000 Series Technical Deep Dive

Product Pages
For More information on Cisco Ethernet or FC Blades:
www.cisco.com/go/bladeswitch

For More information on VFrame Datacenter:
http://www.cisco.com/en/US/products/ps8463/index.html

For More information on Blade Server Partners:
DELL: http://www.dell.com/content/products/compare.aspx/blade?c=us&cs=555&l=en&s=biz&~ck=mn
HP: http://h18004.www1.hp.com/products/blades/components/c-class-components.html
IBM: http://www-03.ibm.com/systems/bladecenter/


Recommended Reading
Continue your Cisco Live learning experience with further reading from Cisco Press. Check the Recommended Reading flyer for other suggested books.

Available Onsite at the Cisco Company Store

http://www.ciscopress.com/bookstore/browse.asp?st=60113


Complete Your Online Session Evaluation
Cisco values your input. Give us your feedback; we read and carefully consider your scores and comments, and incorporate them into the content program year after year. Go to the Internet stations located throughout the Convention Center to complete your session evaluations. Thank you!