
Data Center Architecture

Strategy Update

BRKDCT-2866

Depth vs. Breadth
Quick level set

This session is a breadth session; the following sessions go into depth:
 DCT-2703 Implementing DC Services
 DCT-2840 DC L2 Interconnect
 DCT-2867 DC Facilities
 DCT-2868 DC Virtualization
 DCT-2825 Nexus 5000 Architecture
 RST-3470 Nexus 7000 Architecture
 RST-3471 Nexus Software Architecture
 SAN-2701 SAN Design
 …Many, many more
The Data Center Dilemma

EFFICIENCY: Increased Utilization, Consolidation, ‘Green’
AGILITY: Demand Capacity, Globalization, Availability

How do I align my Data Center strategy?

How can Cisco help me accomplish this?
Agenda
• Trends
• Architecture Strategy
• Architecture Evolution

Agenda
• Trends
  Consolidation
  Network Technology
  Software Services
• Architecture Strategy
• Architecture Evolution

Trends:
Consolidation

WHAT IT MEANS
CONSOLIDATION BY DEFAULT WILL BEGET ORGANIC IT’s FRUITION

“As IT consolidation solidifies into standard procedure for infrastructure
management, we will see operational benefits and technical innovations arise that will
deliver fundamentally better efficiency than can be achieved today. Forrester expects
that by 2010, nearly all Intel/AMD-based servers will ship with a pre-installed
hypervisor and that the default allocation of any service will be the partition. This will
allow the use of new management and HA tools that act at the hypervisor layer,
allowing true Organic IT: dynamic, policy-driven reallocation of running production
workloads to drive greater power efficiency, accelerate business change, and drive
down operational costs. Through these tools will come abstraction between the
infrastructure, the application, and even the data center itself. Such a change will give
IT professionals new degrees of freedom, allowing services to be deployed where,
when, and however needed to best meet the businesses’ objectives.”

The IT Consolidation Imperative: Out Of Space, Out Of Power, Out Of Money
© 2007, Forrester Research, Inc

Data Center Consolidation
Reducing operational costs & improving manageability

Reduce:
  Number of distributed server farms
  Operational costs
Increase:
  Flexibility on application rollouts
  Uptime
Standardize:
  Physical requirements
  Operational best practices
  Server platform
Establish:
  Future DC architecture
  Initial phase network design
  Technology adoption strategy
  Migration strategy

Network Implications:
  Higher server farm density
  Higher average traffic loads
  Higher number of network-based services
  Larger & flatter networks
  At least N+1 redundancy
Facilities Implications:
  Higher power demands
  Higher cooling demands
  Higher square footage
Requirements:
  Future DC architecture
  Initial phase network design
  Technology adoption strategy
  Migration strategy

Server Consolidation
Reducing capital costs & improving efficiency

Reduce:
  Number of OSs
  Server idle time
  Costs per RU
Increase:
  Application performance
  Application uptime
  Server density
  I/O, MEM and CPU capacity per RU
Standardize:
  SW architecture
  NG HW platforms (bound to tiers)
  I/O (capacity, cabling)
Establish:
  Server architecture direction
  Facilities support strategy
  Migration strategy

Network Implications:
  Higher uplink capacity
  Increased throughput per server
  Larger & flatter networks
  At least N+1 redundancy
  Availability beyond a single DC
Facilities Implications:
  Higher power demands
  Higher cooling demands
  Higher square footage
  Closer integration with DC architecture
Requirements:
  Scalability of DC architecture
  Initial phase network design
  Technology adoption strategy
  Migration strategy

Server & Infrastructure Virtualization
Improving utilization and agility

Reduce:
  Idle CPU cycles
  Server proliferation
  Power and cooling demands
Increase:
  Workload mobility
  Server rollout flexibility
  Average server CPU utilization
  I/O, MEM and CPU capacity per server
Standardize:
  Virtual server SW infrastructure
  NG HW platforms (bound to tiers)
  I/O and MEM capacity
Establish:
  Server architecture direction
  Server support strategy
  Migration strategy
  Provisioning/management strategy

Network Implications:
  Higher number of uplinks
  Increased throughput per server
  L2 adjacency (larger & flatter)
  Availability beyond a single DC
  Server trunking
  More VLANs & IP subnets
  10GE in the access
Facilities Implications:
  Higher power/cooling draw per server
  Lower power/cooling overall (fewer servers)
  Cabling to match access requirements
Requirements:
  Scalability of DC architecture
  Broad L2 adjacency
  Well-defined access layer strategy
  Migration strategy

Green-Field Data Centers
Addressing growth and consolidation requirements

Reduce:
  Wasted rack space
  After-the-fact cabling
  Power or cooling retrofitting
Increase:
  Per-rack server density
  Data center longevity
  DC space utilization
Standardize:
  High and low density areas
  Power to server and network racks
  Cabling
Establish:
  Server farm growth potential
  Environmentals control strategy
  Usability strategy
  Provisioning/management strategy

Network Implications:
  Predictable scalability
    Physical: ports, slots, boxes
    Logical: table sizes
  Well-identified access model
  Specific oversubscription targets
    Server & network oversubscription
Facilities Implications:
  Per server, per rack and per pod:
    Power requirements
    Cooling capacity
    Cabling selection
Requirements:
  4-5 year architecture strategy
  Migration strategy to the new architecture
  Good handle on growth
    Servers and storage
    I/O interfaces and capacity

Trends:
Network Technology

Ethernet Standards
Applicable to Data Center Environments

802.3 – 10GE
  10GBase-T (IEEE 802.3an, ratified)
  10GBase-CX4 (IEEE 802.3ak)
  10GBase-*X (IEEE 802.3ae)
  The 802.3ak 10GbE standard defines copper categories
  The 802.3ae 10GbE standard defines MM and SM fiber categories

802.3 HSSG (Higher Speed Study Group) – 40-100 GE (Project Authorization Request has been agreed)
  Support full-duplex operation only
  Preserve the 802.3 Ethernet frame format
  Preserve minimum and maximum frame size
  Support BER equal to or better than 10^-12
  Support Optical Transport Networks
  40G:
    At least 100m over OM3 MMF
    10m over copper
  100G:
    At least 40km over SMF, 10km over SMF, 100m over OM3 MMF
    10m over copper

Demand for 40GE & 100GE in the DC

 100GE in the 2010+ timeframe for switch interconnects
 Switch platforms need to be architected for delivery of capacity in
excess of 200 Gbps per slot
 DC facilities environmental specifications need to accommodate the
higher speed technology requirements:
  Class 1: hazard level does not warrant special precaution
  40/100 GE MMF may not meet Class 1
  Relax to Class 1M (current proposal to IEEE):
    Good for restricted locations, which include DC facilities
More information at http://www.ieee802.org/3/ba/public/mar08/petrilla_02_0308.pdf

Ethernet Interface Evolution:
40G and 100G

Feature                   | 40G Muxed                   | 40G Native               | 100G Native
IEEE Standard             | None                        | None                     | Call for interest: July 2006; ratification expected 2010-2011
Increased BW vs. 10GE     | No, 4 x 10GE muxed solution | Yes, true 40G/interface  | Yes, true 100G/interface
EtherChannel              | 2 links                     | 8 links                  | 8 links
Fiber savings             | Yes                         | Yes                      | Yes
Approximate availability  | 2008                        | 2009                     | 2010-11
Estimated FCS cost        | 2-3 x 10GE                  | 10 x 10GE                | At least 10 x 10GE

Emerging Standards
All applicable to Data Center Environments

L2 Multipathing
 IETF TRILL WG
  Proposal to solve L2 STP forwarding limitations
 IEEE 802.1aq
  Enhancement to 802.1Q to provide Shortest Path Bridging (optimal bridging)
  in L2 Ethernet topologies

Data Center Bridging
 IEEE 802.1Qbb – Priority-based Flow Control
  Intended to specify protocols, procedures and managed objects that support
  flow control per traffic class as identified by the VLAN tag encoded
  priority code point
 IEEE 802.1Qaz – Enhanced Transmission Selection
  Specifies enhancement of transmission selection to support allocation of
  bandwidth amongst traffic classes
 DCBX – Discovery and Capability Exchange Protocol
  Identify the DCB cloud nodes and their capabilities
 IEEE 802.1Qau – Congestion Notification (Congestion Management)
  Signal congestion information to end stations to avoid frame loss
  .1Q tag encoded priority values to segregate flows
  Support higher layer protocols that are loss sensitive

Unified I/O
 T11 FCoE – FC-BB-5

Data Center Ethernet Features
Enhanced Ethernet Standards

Feature                                | Benefit
Priority-based Flow Control (PFC)      | Provides class-of-service flow control; ability to support storage traffic
CoS-Based BW Management                | Grouping classes of traffic into “Service Lanes”; IEEE 802.1Qaz, CoS-based Enhanced Transmission Selection
Congestion Notification (BCN/QCN)      | End-to-end congestion management for the L2 network
Data Center Bridging Capability        | Auto-negotiation for Enhanced Ethernet capabilities
Exchange Protocol, DCBCXP (switch to NIC) |
L2 Multi-path for Unicast & Multicast  | Eliminate Spanning Tree for L2 topologies; utilize full bi-sectional bandwidth with ECMP
Lossless Service                       | Provides ability to transport various traffic types (e.g. Storage, RDMA)

Evolution of Ethernet
Physical layer enabling these technologies

Mid 1980s: 10Mb (UTP Cat 3)
Mid 1990s: 100Mb (UTP Cat 5)
Early 2000s: 1Gb (UTP Cat 5, SFP Fiber)
Late 2000s: 10Gb (X2, SFP+ Cu, SFP+ Fiber, Cat 6/7 ??)

Technology                 | Cable     | Distance | Power (each side) | Transceiver Latency (link)
SFP+ CU copper             | Twinax    | 10m      | ~0W               | ~0.1µs
SFP+ USR ultra short reach | MM OM2    | 10m      | 1W                | ~0
                           | MM OM3    | 100m     |                   |
SFP+ SR short reach        | MM 62.5µm | 82m      | 1W                | ~0
                           | MM 50µm   | 300m     |                   |
10GBASE-T                  | Cat6      | 55m      | ~8W               | 2.5µs
                           | Cat6a/7   | 100m     | ~8W               | 2.5µs
                           | Cat6a/7   | 30m      | ~4W               | 1.5µs
(Latency normalized to copper.)

Trends:
Software Services

SaaS

 SaaS (Software as a Service): an alternative
application/application suite built entirely
on Web Services
– Hosted and supported by the software vendor
– Per-seat, monthly pricing
– Available via the Internet
– Multi-tenant structure
– APIs available for integration with other
business applications
– Known for scalability and availability
– Broad portfolios and application categories
that meet the needs of the smallest
business to the largest (www.saas-showplace.com)

SaaS Growth Predictions

 AMR found that 40% of all companies are currently using hosted applications, and
49% will use them within the next 12 months
 Gartner forecasts large companies will fulfill 25% of their application demands with
hosted software by 2010
 IDC predicts the SaaS market will grow at a 21% compound annual growth rate
(CAGR) during the next four years, reaching $10.7B worldwide in 2009
 Forrester Research predicts the market for traditional on-premise enterprise
applications will only grow 4% through 2008
http://thinkstrategies.icentera.com/portals/file_getfile.asp?method=1&uid=11753&docid=5045&filetype=pdf

“The new estimate calls for an average annual growth rate of 22.1 percent, with the
estimate for 2007 to come in at around 21 percent, ultimately becoming an $11.5
billion market by 2011.”
http://www.formtek.com/blog/?p=380

Salesforce.com published growth: http://www.salesforce.com/company/

What is Cloud Computing?

Grid computing? Parallel computing? Cluster computing?
Utility computing? Stateless computing?
SaaS? XaaS? A new development platform?
Can all applications be cloud enabled?

Cloud Computing
No real simple answer…

 Users: the cloud appears as a single application or “service”
  Transparent on geography
  Transparent to a specific server – it could be one or many
 Cloud manager:
  Applications are provisioned dynamically in server clusters
  Clusters can be clustered or geo-diverse for availability purposes
  Goal is to provide a simpler, scalable solution for large applications
  (allow server upgrade and refresh, simpler provisioning, reduce patch
  management)

(Diagram: many servers for the cloud, dynamically provisioned behind a
communications network cloud.)

The Latest Evolution Of Hosting

Source: Forrester, “Is Cloud Computing Ready For The Enterprise?”, March 2008

A Map of the Players in the Cloud Computing,
SaaS and PaaS Markets

Source: http://dev2dev.bea.com/blog/plaird/archive/2008/05/understanding_t.html

Data Center Trends
Summary

 Consolidation
Data Centers, Servers, & Infrastructure

 Virtualization
Servers, Storage, & Networks

 Understand evolution of Ethernet technologies
10 Gig, 40 Gig, 100 Gig, DCE, & FCoE

 Plan for heterogeneous application environment
Internal/External hosting, SaaS/XaaS, & Cloud

Agenda

• Trends
• Data Center Strategy
  Deployment Strategy
  Technology Strategy
• Architecture Evolution

Data Center Strategy:
Deployment Strategy

Topics in the Minds of Data Center Architects

 Application deployment: server farm, VM, XaaS, cloud
 ‘Green’: power, cooling
 Deployment agility: automated provisioning, lights-out management
 Security
 Service integration
 Management: virtualization, role-based access
 I/O consolidation: Ethernet, FC, FCoE
 Facilities: consolidation, greenfield
 Access layer: End-of-Row, Top-of-Rack, blade switch; 1/10/40/100 Gbps

Data Center Strategy

(Diagram: interdependent strategy areas)
 Applications
 External hosting service
 Virtualized external compute
 Internal compute
 Network infrastructure
 Storage resources
 Data center facilities
 Management: provisioning, operations

Data Center Strategy
Utility Deployment Strategy

 All areas are interdependent
 Complex evaluation
Do applications dictate facilities?
Do facilities dictate hosting alternatives?

 Consider consistent user experience - SLAs
 Budgetary and costing model considerations
 Management and operational aspects
 Requires broad cross-functional collaboration

Data Center Strategy
Application Architecture

• Key Considerations
  - Application architecture
    - Monolithic
    - N-Tier
    - Web 2.0 / mash-up
  - Core business
    - Off the shelf
    - Custom application
  - Security considerations
  - Data warehousing
  - SaaS/XaaS
    - Internally/externally hosted
    - Cloud
  - Application redundancy
    - At server level – backup server
    - Single DC, multiple DC

• Determining ‘Care-Abouts’
  - Utility environment
  - Demand capacity
  - Business economics
  - Application RPO/RTO
  - Projected longevity
  - Service level requirements
  - Anticipated annual growth

RPO – Recovery Point Objective
RTO – Recovery Time Objective

Data Center Strategy
Compute Infrastructure

• Key Considerations
  - % of traffic patterns
    - Client to server
    - Server to server
    - Server to storage
    - Storage to storage
  - Server capacity
    - Server bus capacity (mem/CPU)
    - # of Ethernet I/O interfaces
    - # of FC I/O interfaces
    - # of servers per application
    - Expected outbound load
    - Size of subnet/VLAN
  - Server redundancy
    - NIC teaming
    - Clustering

• Determining ‘Care-Abouts’
  - Utility server infrastructure
  - Virtualization
  - Provisioning
  - % of server annual growth
  - % of virtual server annual growth

Data Center Strategy
Storage Resources

• Key Considerations
  - Storage capacity
  - Internal/external resources
  - Application requirements
  - Host access model
    - Fibre Channel
    - FC over Ethernet – FCoE
    - iSCSI
  - Oversubscription
  - Sync/async replication
  - SAN interconnect
  - Data RPO/RTO
  - Data security
  - Data growth & migration
  - SAN topology

• Determining ‘Care-Abouts’
  - Storage virtualization
    - N-Port Virtualization (NPV)
    - N-Port ID Virtualization (NPIV)
    - Volume virtualization
  - Number of storage racks
  - Number of SAN devices to manage
  - Number of physical SAN interfaces

Data Center Strategy
Network Infrastructure

• Key Considerations
  - Type of access model
    - Modular, ToR, blade switches
  - Number of servers per rack
  - Number of racks
  - Topology
    - # of access switches & uplinks
    - L2 adjacency boundaries
  - Number of network devices to manage
  - Number of physical interfaces per server
  - Consolidated I/O
  - Oversubscription
    - Server
    - Access to aggregation
    - Aggregation to core
  - L2 adjacency
  - Subnets/VLANs scope

• Determining ‘Care-Abouts’
  - Fault isolation & recovery
  - Services insertion
  - Data Center Interconnect
  - L3 features
  - L2 features

Data Center Strategy
Data Center Facilities

• Key Considerations
  - Total DC power capacity
  - Total DC space
  - Per rack
    - Power capacity
    - Servers
    - Cabling
  - Number of racks per pod
    - Power
    - Cooling
    - Racks of network equipment
    - Cabling
  - Number of pods per area
  - Number of areas per DC

• Determining ‘Care-Abouts’
  - DC tier target
  - Disaster recovery
  - ‘Green’ – efficiency
    - Power
    - Airflow
    - Cooling
  - Cable routes
  - Power routes

Data Center Strategy
Management, Provisioning, Operations

• Key Considerations
  - Management
    - Monitoring
    - Measuring
    - Tracking
  - Provisioning
    - Internal compute
    - External compute
    - Service insertion
    - Network
  - Operations
    - Power & cooling
    - Servers (internal & external)
    - Cabling

• Determining ‘Care-Abouts’
  - Performance criteria
  - RPO/RTO
  - Monitoring – NetFlow
  - Fault isolation & recovery
  - Testing

Data Center Strategy:
Technology Strategy

Data Center Strategy in Action
Physical Facilities

(Diagram: a DC divided into zones; each zone contains pods laid out with
hot and cold aisles; each pod holds network, server and storage racks.)

4-6 zones per DC; 6-15 MW per DC
60,000-80,000 sq ft per zone; 1-3 MW per zone
200-400 racks/cabinets per zone
Cooling and power provisioned per pod (per pair of rack rows)
8-48 servers per rack/cabinet; 1-1.5 kW per cabinet
  (it all depends on server types and the network access layer model)
2-11 interfaces per server
2,500-30,000 servers per DC
4,000-120,000 ports per DC
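
A quick back-of-the-envelope sketch of these ranges in Python; the specific
example values below are illustrative assumptions chosen from within the
quoted ranges:

    # DC capacity sketch using the per-rack / per-server ranges above.
    def access_ports(servers: int, interfaces_per_server: int) -> int:
        """One access-layer port per server interface."""
        return servers * interfaces_per_server

    def racks_needed(servers: int, servers_per_rack: int) -> int:
        """Racks required at a given density (ceiling division)."""
        return -(-servers // servers_per_rack)

    servers = 10_000        # within 2,500-30,000 servers per DC
    interfaces = 4          # within 2-11 interfaces per server
    density = 24            # within 8-48 servers per rack

    print(access_ports(servers, interfaces))   # 40,000 ports (4,000-120,000 range)
    print(racks_needed(servers, density))      # 417 racks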
Reference Physical Topology
Network Equipment and Zones

(Diagram: a DC divided into zones; each zone contains Modules 1…N of pods
laid out with hot and cold aisles; each pod holds network racks, server
racks and storage racks.)

Pod Concept
Network Zones and Pods

DC Sizing
• DC: a group of zones (or clusters, or areas)
• Zone: typically mapped to an aggregation pair
• Not all use hot-cold aisle design
• Predetermined cable/power/cooling capacity

Pod/Module Sizing
▪ Typically mapped to access topology
▪ Size determined by distance and density
 ▪ Cabling distance from server racks to network racks:
  ▪ 100m copper
  ▪ 200-500m fiber
 ▪ Cable density: # of servers by I/Os per server
▪ Racks
 ▪ Server: 6-30 servers per rack
 ▪ Network (based on access model)
 ▪ Storage: special cabinets

Network Equipment Distribution
End of Row and Middle of Row

End of Row
▪ Traditionally used
▪ Copper from server to access switches
▪ Poses challenges on highly dense server farms
 ▫ Distance from farthest rack to access point
 ▫ Row length may not lend itself well to switch port density

Middle of Row
▪ Use is starting to increase given EoR challenges
▪ Copper from servers to access switches
▪ Addresses aggregation requirements for ToR access environments
▪ Fiber may be used to aggregate ToR

Common Characteristics
▪ Typically used for modular access
▪ Cabling is done at DC build-out
▪ Model evolving from EoR to MoR
▪ Lower cabling distances (lower cost)
▪ Allows denser access (better flexibility)
▪ 6-12 multi-RU servers per rack
▪ 4-6 kW per server rack, 10-20 kW per network rack
▪ Subnets and VLANs: one or many per switch; subnets tend to be
  medium and large: /24, /23

(Diagrams: rows of server racks with patch panels and cross-connects,
copper within the row and fiber uplinks, feeding network access points
A-B and C-D at the end or middle of the row.)

Network Equipment Distribution
Top of Rack

ToR
▪ Used in conjunction with dense access racks (1U servers)
▪ Typically one access switch per rack
▪ Some customers are considering two + cluster
▪ Use of either side of rack is gaining traction
▪ Cabling:
 ▪ Within rack: copper from server to access switch
 ▪ Outside rack (uplink):
  ▪ Copper (GE): needs a MoR model for fiber aggregation
  ▪ Fiber (GE or 10GE): more flexible, but also requires an
    aggregation model (MoR)
▪ Subnets and VLANs:
 ▪ One or many subnets per access switch
 ▪ Subnets tend to be small: /24, /25, /26

(Diagram: one ToR switch per server rack with patch panels and
cross-connects, uplinked to network aggregation points A-B and C-D.)

Network Equipment Distribution
Blade Chassis

Switch to Switch
▪ Potentially higher oversubscription
▪ Scales well for blade server racks (~3 blade chassis per rack)
▪ Most current uplinks are copper, but the newer switches offer fiber
▪ Migration from GE to 10GE uplinks is taking place

Pass-through
▪ Scales well for pass-through blade racks
▪ Copper from servers to access switches

ToR
▪ Have not seen it used in conjunction with blade switches
▪ May be a viable option in pass-through environments if the access
  port count is right
▪ Efficient when used with Blade Virtual Switch environments

(Diagrams: racks of blade chassis with integrated switches or
pass-through modules, cross-connected to network aggregation points
A-B-C-D.)

Network Equipment Distribution
End of Row, Top of Rack & Blade Switches

                    | End of Row                     | ToR                            | Blade Switches
Network component   | Modular switch at the end of a | Low-RU, lower port density     | Switches integrated into blade
& location          | row of server racks            | switch per server rack         | enclosures per server rack
Cabling             | Typically copper from server   | Copper from server to ToR      | Servers have intra-blade-chassis
                    | to access switches and fiber   | switch and fiber from ToR to   | connections to internal switches;
                    | from access to aggregation     | aggregation switches           | switches use copper (and fiber)
                    | switches                       |                                | to aggregation switches
Port density        | 240-336 ports                  | 40-48 ports                    | 14-16 servers (dual-homed)
Server density      | 6-12 multi-RU servers per rack | 8-30 1-RU servers per rack     | 3-4 blade enclosures per rack
VLANs & subnets     | One or more subnets/VLANs per  | One smaller VLAN/subnet per    | A subnet/VLAN is shared across
                    | access switch                  | access switch                  | multiple access switches

Reference Network Topology
Hierarchical Architecture

(Diagram: L3 core layer; aggregation layer at the L3/L2 boundary; L2
access layer carrying VLANs A-E across Modules 1 and 2.)

• Hierarchical design
• Triangle and square topologies
• Multiple access models: modular, blade switches and ToR
• Multiple oversubscription targets
• Highly scalable

Data Center Topology
Scalable Server Architecture

(Diagram: core, aggregation and access layers scaling across designs,
from up to 192 servers, through 192-1,500 and 1,500-4,000, to
4,000-10,000 servers for small-medium, medium-large and large-very
large data centers.)

Server Oversubscription
What is the right number?

1. Single-homed servers (GE or 10GE NIC):
 GE NIC, capacity 1 Gbps:
  100 Mbps average load – 10:1
  200 Mbps average load – 5:1
  500 Mbps average load – 2:1
 10GE NIC, capacity 10 Gbps:
  500 Mbps average load – 20:1
  1 Gbps average load – 10:1
  2 Gbps average load – 5:1
  4 Gbps average load – 2.5:1
  5 Gbps average load – 2:1

2. Multi-homed & virtual servers:
 4 x GE NIC – 4 times the capacity per system
 2 x 10GE NIC – 2 times the capacity per system
 Active-standby teaming – 1.x times per system

The right number? It depends…
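
As a sanity check, the ratios above are just NIC capacity divided by
average offered load; a tiny Python helper reproduces them (the loads are
the slide's own example figures):

    def oversubscription(nic_gbps: float, avg_load_gbps: float) -> float:
        """Oversubscription ratio a NIC tolerates at a given average load."""
        return nic_gbps / avg_load_gbps

    for load in (0.1, 0.2, 0.5):                   # GE NIC examples
        print(f"GE   @ {load:.1f} Gbps -> {oversubscription(1, load):.0f}:1")
    for load in (0.5, 1, 2, 4, 5):                 # 10GE NIC examples
        print(f"10GE @ {load} Gbps -> {oversubscription(10, load):.1f}:1")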

Server Oversubscription
What to do…

1st – Understand applications and their traffic patterns:
 Client to server: low bandwidth
 Server to server: high bandwidth
 Server to storage: bulk
 Storage to storage: bulk

2nd – Consider peak-time behavior:
 Maximum server peak: single server max capacity
 Average server peak: likely to be seen across the server farm
 Aggregate server peak

3rd – Plan network oversubscription based on peak loads:
 Consider server bus capacity
 Consider server growth
 Consider steady vs. failover states

4th – Set network oversubscription per layer (access, aggregation, core):
 Factor in I/O module oversubscription
 Factor in server oversubscription
 Typical ranges: 1:1 – 1:20, increasing over time
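
Oversubscription compounds across layers, so per-layer targets have to be
planned together. A minimal sketch; all figures below are assumed,
illustrative values, not a recommendation:

    def layer_oversub(down_gbps: float, up_gbps: float) -> float:
        """Oversubscription of one layer: downstream vs. upstream capacity."""
        return down_gbps / up_gbps

    # Assumed example: 48 GE-attached servers per access switch with
    # 2 x 10GE uplinks; 8 such switches per aggregation pair, 4 x 10GE to core.
    access = layer_oversub(48 * 1, 2 * 10)        # 2.4:1
    agg    = layer_oversub(8 * 2 * 10, 4 * 10)    # 4:1
    print(f"end-to-end {access * agg:.1f}:1")     # 9.6:1, inside the 1:1-1:20 range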

Server Virtualization
Design Considerations

VLAN maps to subnet… what size of subnet?
VM mobility – within L2 boundaries
Is the VM cluster limited by:
 A VLAN on a single switch?
 A VLAN across multiple switches?
 A VLAN across all access switches in a single module?
How many clusters?

Hypothetical Example
1,000 servers, each using a single IP/MAC pair
Virtualized using 20 VMs per server
(1,000 x 20) + 1,000 = 21,000 IP/MAC pairs, i.e. 20,000 new pairs
20,000 / 250 (hosts per /24 subnet) = 80 new subnets/VLANs
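
The same arithmetic in Python (250 usable hosts per /24 is the slide's
rounding):

    physical = 1_000
    vms_per_server = 20

    new_pairs = physical * vms_per_server        # 20,000 new IP/MAC pairs
    total_pairs = new_pairs + physical           # 21,000 including physical NICs
    new_subnets = new_pairs // 250               # 80 new /24 subnets / VLANs
    print(total_pairs, new_subnets)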

Data Center Strategy
Summary

 Complex interdependencies
 Focus on Applications <-> User experience
 Identify key objectives for each aspect of infrastructure
 Map physical and logical topologies
 Consider I/O options and requirements
 Evaluate the Network impact of Virtualization

Agenda
• Trends
• Architecture Strategy
• Architecture Evolution

Architecture Evolution

Data Center Architecture
Mapping Initiatives to Architecture

IT Initiatives
 Application flexibility: SaaS, internal/external compute, virtualized images
 Server consolidation: faster CPUs, multi-core, more memory, higher I/O capacity
 Server virtualization: application availability and scalability, server utilization
 Application availability: lower RPO/RTO, better stability
 Workload management: faster application rollout, dynamic server movement
 Automated provisioning: template-driven configuration & dynamic provisioning

Architectural Goals (a common systems architecture)
 Improved efficiency
 Scalable bandwidth
 Simplified I/O
 Improved robustness
 Integrated services

Network Architecture
Mapping Architecture to Technology

Architectural Goals
 Improved efficiency
 Scalable bandwidth
 Simplified I/O
 Improved robustness
 Integrated services

Technology Requirements
 Scalable 10G infrastructure
 Efficient L2 pathing
 I/O consolidation
 Increased STP stability
 Virtual switch partitioning and isolation
 Scalable DC services

Cisco Technology Alignment
 10G Ethernet
 FCoE
 Virtual switching

Dense 10GE Network Topology
10G Ethernet
High Density 10GE Aggregation

(Diagram: core1/core2 at L3; aggregation pairs agg1/agg2 and aggX/aggX+1
at the L3/L2 boundary with 10GE uplinks; access switches acc1…accN+1
carrying VLANs A-E within Modules 1 and 2.)

Common Topology – Starting Point
 Nexus at core and aggregation layers
 2-tier L2 topology
 VLANs contained within an aggregation module

Topology Highlights
 Lower oversubscription
 Higher density 10GE at core and aggregation layers

10GE Server Farms
10G Ethernet
10GE Access and Aggregation

(Diagram: two aggregation pairs, each with 64 10GE ports, of which 52 are
usable for access; these support 8-12 ToR switches at 4-8 uplinks each
(40-44 server ports), or 4-12 modular switches (192 server ports);
VLANs A-C.)

10GE in the Access
 Positioned for I/O consolidation
 Using ToR – lower oversubscription: 3.3:1
 Using modular – higher oversubscription: 12:1
 ToR uses Twinax cable; modular uses fiber
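
The quoted ratios follow directly from port counts; the exact splits below
(server-facing ports vs. uplinks) are assumptions chosen to reproduce the
slide's figures:

    def oversub(server_ports: int, uplinks: int) -> float:
        """All ports are 10GE, so the ratio reduces to a port-count ratio."""
        return server_ports / uplinks

    print(f"ToR:     {oversub(40, 12):.1f}:1")   # 3.3:1
    print(f"Modular: {oversub(192, 16):.0f}:1")  # 12:1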

10GE Server Access
10G Ethernet
10 Gig Ethernet End-Host Mode

End-Host Mode, Switch Perspective
 MAC-based uplink selection
 Active-active uplinks using different MACs
 No STP on the access device
 BPDUs are not processed – they are dropped
 Separate loop-avoidance mechanisms

Host Perspective
 Active-standby only

Network Environment
 STP is not fully removed: some switches would run it, some would not
 Looped conditions have to be considered without STP
 Path to service devices is challenging
 Virtual port channels solve most of these issues

(Diagram: L3 core over L2 access with VLANs A-C; an STP cloud alongside a
no-STP cloud; Ethernet and DCE links.)

I/O Consolidation in the Network
I/O Consolidation

(Diagram: a server's processor and memory with separate I/O adapters
attaching to three distinct networks – storage, LAN and IPC – versus a
consolidated I/O subsystem carrying all three.)

I/O Consolidation in the Host
I/O Consolidation

 Fewer CNAs (Converged Network Adapters) instead of NICs, HBAs and HCAs
 Limited number of interfaces for blade servers

(Diagram: today a host carries FC HBAs for FC traffic, NICs for LAN and
management traffic, and HCAs for IPC traffic; with consolidation, all
traffic goes over 10GE through a pair of CNAs.)

What Is FCoE?
FCoE
Fibre Channel over Ethernet

 From a Fibre Channel standpoint it’s
FC connectivity over a new type of cable called… an Ethernet cloud

 From an Ethernet standpoint it’s
Yet another ULP (Upper Layer Protocol) to be transported, but…
a challenging one!

 And technically…
FCoE is an extension of Fibre Channel
onto a lossless Ethernet fabric

Fibre Channel over Ethernet
FCoE
Brief Look at the Technology

 A method for direct mapping of FC frames over Ethernet
  Seamlessly connects to FC networks
  Extends FC in the data center over Ethernet
  FCoE appears as FC to the host and the SAN
  Preserves current FC infrastructure and management
  The FC frame is unchanged
  Can operate over standard switches (with jumbo frames)
  Priority Flow Control guarantees no-drops; mimics the FC
  buffer-credit system
  Avoids TCP; does not require expensive off-loads

(Diagram: Ethernet and Fibre Channel traffic sharing one link.)
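
One concrete consequence of “the FC frame is unchanged”: a full-size FC
frame exceeds the standard 1500-byte Ethernet payload, which is why jumbo
frames are required. A rough size check (the encapsulation overhead figure
is an approximation, not the exact FC-BB-5 layout):

    FC_MAX_DATA   = 2112    # FC frame data field, bytes
    FC_HEADER     = 24      # FC frame header
    FC_CRC        = 4
    FCOE_OVERHEAD = 18      # approx. FCoE version/SOF/EOF/padding

    needed = FC_MAX_DATA + FC_HEADER + FC_CRC + FCOE_OVERHEAD
    print(needed)           # ~2158 bytes > 1500: needs a ~2.5 KB "baby jumbo" MTU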

Discrete Network Fabrics
FCoE
Typical Ethernet and Storage Topology

(Diagram: a three-tier Ethernet network – L3 core, L3/L2 aggregation, L2
access with VLANs A-C – alongside dual SAN fabrics A and B carrying
VSANs 2 and 3; Ethernet and FC links shown separately.)

Single Ethernet Network Fabric
 Typically 3 tiers
 Access switches are dual-homed
 Servers are single- or multi-homed

Dual Storage Fabrics
 Typically 2 tiers
 Edge switches are dual-homed
 Servers are dual-homed to different fabrics

Unified Fabric: Phase I – DCE
FCoE Server Access

(Diagram: the same L3 core, aggregation and dual SAN fabrics, but servers
now attach to the access layer through CNAs over DCE links that carry
both Ethernet – VLANs A-D – and FC traffic into fabrics A and B.)

CNA – Converged Network Adapter

Unified Network Fabric
10G Ethernet / I/O Consolidation / FCoE
Benefits to Customers

 Fewer interfaces and cables: FC and Ethernet traffic share FCoE links
 Same SAN management as native FC: SAN A / SAN B separation is preserved
 No gateway: FCoE adapter, FCoE switch, FC switch and FC storage
interoperate directly
 Less power and cooling

N-Port Virtualization (NPV)
Virtual Switching
Solves Domain-ID Explosion

(Diagram: NPV-core FC switches with domain IDs 10 and 20 present F-ports
to the NP-ports of a Nexus 5000 running in NPV mode, with uplinks on
VSAN 1 and VSAN 15; attached initiators and targets receive FCIDs – e.g.
10.5.2, 10.5.7, 20.5.1 – from the core switches’ domains.)

 An NPV device uses the same domain(s) as its NPV-core switch(es)
 It can have multiple uplinks, on different VSANs

Virtual Ethernet Switching
Virtual Switching
Improving Management and Pathing

• Virtual switches: logical instances of physical switches
 - Many to one: grouping of multiple physical switches
   Reduces management overhead (single switch) and simplifies configuration (single switch config)
 - One to many: partitioning of physical switches
   Isolates the control plane and control-plane protocols
• Virtual PortChannels: EtherChannel across multiple chassis
 - Simplify L2 pathing by supporting non-blocking, cross-chassis, concurrent L2 paths
 - Lessen reliance on STP (loop-free L2 paths are not established by STP)
• Virtual switching implementations
 - Virtual Switching System – VSS: Catalyst 6500
 - Virtual Blade Switch – VBS: 10GE-based blade switches
 - Virtual Device Context – VDC: Nexus 7000
 - Virtual PortChannel – vPC: Catalyst 6500, Nexus family

Virtual Switch – VSS
Virtual Switching
Two to One

(Diagram: two physical switches A1 and A2, each running its own OSPF,
SNMP, STP and HSRP instances, combine into a single virtual switch A
with one instance of each.)

Two Physical Switches into One Virtual
 Two switches look like one: two physical switches, one virtual switch
 All ports appear to be on the same physical switch
 Single point of management, single configuration
 Single IP/MAC
 Single control-plane protocol instance

Benefits
 Simplified infrastructure management
 L2 DC interconnect
 High availability

Virtual Blade Switch – VBS
Virtual Switching
Many to One

(Diagram: blade switches A1-A8 combine into a single virtual switch A.)

Many to One
 Many switches look like one: up to eight physical switches, one virtual switch
 All ports appear to be on the same physical switch
 Single point of management, single configuration
 Single IP/MAC

Benefits
 Simplified infrastructure management: a single switch to manage

Virtual Switching – VDC
Virtual Switching
One to Many

(Diagram: one physical switch A partitions into virtual device contexts
A1-A4, each running its own OSPF, IGMP, STP and HSRP instances.)

One to Many
 One switch looks like many: one physical switch, many logical switches
 Switch ports only exist on a single logical instance
 Per-virtual-switch point of management
 Per-virtual-switch configuration
 Per-virtual-switch IP/MAC
 Per-virtual-switch control-plane protocol instances

Benefits
 Control-plane isolation
 Control-protocol isolation

Isolating Collapsed L2 Domains
Virtual Switching
Through Virtual Device Contexts

VDCs at the Aggregation Layer
 STP topology per VDC environment
 Each access switch belongs to only one VDC
 VLAN instances per VDC per access switch
 One STP process per access switch

VDCs at Aggregation and Access Layers
 STP topology per VDC environment
 Access switches support VDCs as well
 VLAN instances per VDC
 Two STP processes per access switch

(Diagrams: Module 1 with aggregation VDCs agg1-agg4 above access
switches acc1-accN+1; VLAN C instantiated separately in VDC1 and VDC2.)

Virtual PortChannels – vPC
Virtual Switching
L2 Topology

(Diagram: aggregation pair AG1/AG2 appears as a single logical switch A
to devices attached via cross-chassis port channels.)

Two to One
 Two physical switches present a single logical switch
 Devices connect to the single “logical” switch
 Connections are treated as a port channel

Virtual PortChannel
 Ports to the virtual switch can form a cross-chassis port channel
 A virtual port channel behaves like a regular EtherChannel

Benefits
 Provides non-blocking L2 paths
 Lessens reliance on STP

Simplifying the Topology
Virtual Switching
Through Virtual PortChannels

(Diagram: core1/core2 and agg1/agg2 interconnected by vPCs; access
switches acc1-accX dual-homed through port channels; VLANs A-E.)

Simplify the Network Topology
 Build loop-free topologies without STP
 Take advantage of all available L2 paths
 Use all available network bandwidth capacity
 STP is still used as a fail-safe mechanism

Simplify Server-to-Network Connectivity
 Servers are also able to use more than one interface concurrently
 NIC teaming is no longer necessary
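
The bandwidth point in a nutshell (illustrative figures, assuming two
10GE uplinks per access switch):

    uplinks, gbps = 2, 10
    stp_usable = 1 * gbps        # STP blocks the redundant uplink: 10G usable
    vpc_usable = uplinks * gbps  # vPC bundles both into one port channel: 20G
    print(stp_usable, vpc_usable)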
Overlaying Stateful Services
Leveraging Virtual PortChannels
Virtual Switching

Service appliances or service switches leverage virtual port channels:
 Non-blocking path to the STP root/HSRP primary

Service Integration
 Services switches housing service devices (modules or appliances)
 Service appliances
 Most services support 10GE connections

(Diagram: the same vPC topology, with service switches svcs1/svcs2
attached to agg1/agg2.)

Architecture Evolution:
Summary

Data Center Architecture Summary

(Diagram: L3 core over L2 aggregation and access, with SAN fabrics A and
B; VLAN scope ranges from pod-wide to aggregation-wide to DC-wide.)

Topology Layers
 Core layer: support high-density L3 10GE aggregation
 Aggregation layer: support high-density L2/L3 10GE aggregation
 Access layer: support EoR/MoR, ToR & blade for 1GE, 10GE, DCE & FCoE attached servers

Topology Services
 Services through service switches attached at the L2/L3 boundary

Topology Flexibility
 Pod-wide VLANs, aggregation-wide VLANs or DC-wide VLANs
 Trade-off between flexibility and fault-domain size

Architecture Evolution
Summary

 10 Gig core, aggregation & access (10G Ethernet)
 DCE: Ethernet enhancements, I/O consolidation, Unified Fabric (FCoE)
 Virtualization (virtual switching):
  N-Port Virtualization – NPV
  Virtual Switch – VSS
  Virtual Blade Switch – VBS
  Virtual Device Context – VDC
  Virtual PortChannel – vPC

Cisco technology alignment: 10G Ethernet, I/O consolidation, FCoE,
virtual switching

Additional Resources 
URLs
• VSS Independent Testing
http://www.networkworld.com/reviews/2008/010308-cisco-virtual-switching-test.html
• 6500 Cabinet Information: http://wwwin.cisco.com/dss/isbu/6500/enviro/index.shtml
• Panduit http://www.panduit.com/default.asp
• Chatsworth Cabinets http://www.chatsworth.com/common/n-series
• TIA – Telecommunications Industry Association http://www.tiaonline.org/
• ASHRAE – American Society of Heating, Refrigerating and Air-Conditioning
Engineers http://www.ashrae.org/
• Uptime Institute http://uptimeinstitute.org/
• Government work on server and DC Energy Efficiency:
http://www.energystar.gov/index.cfm?c=prod_development.server_efficiency

Useful Standards Efforts Resources
 http://www.ietf.org/html.charters/trill-charter.html
 http://www.ietf.org/internet-drafts/draft-ietf-trill-prob-01.txt
 http://www.ietf.org/internet-drafts/draft-ietf-trill-rbridge-protocol-02.txt
--- o ---
 http://www.ieee802.org/1/files/public/docs2005/aq-nfinn-shortest-path-0905.pdf
 http://www.ieee802.org/1/files/public/docs2006/aq-nfinn-shortest-path-2-0106.pdf
 http://www.ieee802.org/1/pages/802.1au.html
 http://www.ieee802.org/3/ar/public/0503/wadekar_1_0503.pdf
 http://www.ieee802.org/1/files/public/docs2007/au-bergamasco-ecm-v0.1.pdf
--- o ---
 http://grouper.ieee.org/groups/802/3/hssg/
--- o ---
 http://www.t11.org/index.html

Q and A

Recommended Reading for BRKDCT-2866
 Data Center Fundamentals

 Storage Networking Protocol
Fundamentals

 Storage Networking
Fundamentals: An Introduction
to Storage Devices,
Subsystems, Applications,
Management, and File Systems

Available Onsite at the Cisco Company Store
Complete Your Online
Session Evaluation

 Cisco values your input
 Give us your feedback—we read
and carefully consider your scores
and comments, and incorporate
them into the content program
year after year
 Go to the Internet stations located
throughout the Convention Center
to complete your session
evaluations
 Thank you!
