
Data Center Deployment Guide

February 2013 Series

Preface
Who Should Read This Guide

This Cisco® Smart Business Architecture (SBA) guide is for people who fill a variety of roles:

• Systems engineers who need standard procedures for implementing solutions
• Project managers who create statements of work for Cisco SBA implementations
• Sales partners who sell new technology or who create implementation documentation
• Trainers who need material for classroom instruction or on-the-job training

In general, you can also use Cisco SBA guides to improve consistency among engineers and deployments, as well as to improve scoping and costing of deployment jobs.

Release Series

Cisco strives to update and enhance SBA guides on a regular basis. As we develop a series of SBA guides, we test them together, as a complete system. To ensure the mutual compatibility of designs in Cisco SBA guides, you should use guides that belong to the same series.

The Release Notes for a series provide a summary of additions and changes made in the series.

All Cisco SBA guides include the series name on the cover and at the bottom left of each page. We name each series for the month and year that we release it, as follows:

month year Series

For example, the series of guides that we released in February 2013 is the “February 2013 Series”.

How to Read Commands

Many Cisco SBA guides provide specific details about how to configure Cisco network devices that run Cisco IOS, Cisco NX-OS, or other operating systems that you configure at a command-line interface (CLI). This section describes the conventions used to specify commands that you must enter.

Commands to enter at a CLI appear as follows:

configure terminal

Commands that specify a value for a variable appear as follows:

ntp server 10.10.48.17

Commands with variables that you must define appear as follows:

class-map [highest class name]

Commands shown in an interactive example, such as a script or when the command prompt is included, appear as follows:

Router# enable

Long commands that line wrap are underlined. Enter them as one command:

wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100

Noteworthy parts of system output or device configuration files appear highlighted, as follows:

interface Vlan64
  ip address 10.5.204.5 255.255.255.0

Comments and Questions
If you would like to comment on a guide or ask questions, please use the
SBA feedback form.
If you would like to be notified when new comments are posted, an RSS feed
is available from the SBA customer and partner pages.

You can find the most recent series of SBA guides at the following sites:
Customer access: http://www.cisco.com/go/sba
Partner access: http://www.cisco.com/go/sbachannel


Table of Contents

What’s In This SBA Guide
  Cisco SBA Data Center
  About This Guide
  Route to Success
Introduction
  Business Overview
  Design Goals
  Technology Overview
Physical Environment
  Business Overview
  Technology Overview
Ethernet Infrastructure
  Technology Overview
  Deployment Details
    Configuring the Data Center Core Setup and Layer 2 Ethernet
    Configuring the Data Center Core IP Routing
Storage Infrastructure
  Business Overview
  Technology Overview
  Deployment Details
    Configuring Fibre Channel SAN on Cisco Nexus 5500UP
    Configuring Cisco MDS 9148 Switch SAN Expansion
    Configuring FCoE Host Connectivity
Compute Connectivity
  Technology Overview
    Single-Homed Server Connectivity
    Server with Teamed Interface Connectivity
    Enhanced Fabric Extender and Server Connectivity
    Cisco Nexus Virtual Port Channel
    Cisco Nexus Fabric Extender
    Cisco UCS System Network Connectivity
    Third-Party Blade Server System Connectivity
  Summary
  Deployment Details
    Configuring Fabric Extender Connectivity
    Configuring Ethernet Out-of-Band Management
    Configuring Connectivity to the Data Center Core Switches
Application Resiliency
  Business Overview
  Technology Overview
  Deployment Details
    Configuring the Cisco ACE Network
    Setting Up Load Balancing for HTTP Servers
    Load Balancing and SSL Offloading for HTTPS Servers
Network Security
  Business Overview
  Technology Overview
  Deployment Details
    Configuring Cisco ASA Firewall Connectivity
    Configuring the Data Center Firewall
    Configuring Firewall High Availability
    Evaluating and Deploying Firewall Security Policy
    Deploying Firewall Intrusion Prevention Systems (IPS)
Appendix A: Product List
Appendix B: Changes

What’s In This SBA Guide

Cisco SBA Data Center

Cisco SBA helps you design and quickly deploy a full-service business network. A Cisco SBA deployment is prescriptive, out-of-the-box, scalable, and flexible. Cisco SBA incorporates LAN, WAN, wireless, security, data center, application optimization, and unified communication technologies—tested together as a complete system. This component-level approach simplifies system integration of multiple technologies, allowing you to select solutions that solve your organization’s problems—without worrying about the technical complexity.

Cisco SBA Data Center is a comprehensive design that scales from a server room to a data center for networks with up to 10,000 connected users. This design incorporates compute resources, security, application resiliency, and virtualization.

About This Guide

This deployment guide contains one or more deployment chapters, which each include the following sections:

• Business Overview—Describes the business use case for the design. Business decision makers may find this section especially useful.
• Technology Overview—Describes the technical design for the business use case, including an introduction to the Cisco products that make up the design. Technical decision makers can use this section to understand how the design works.
• Deployment Details—Provides step-by-step instructions for deploying and configuring the design. Systems engineers can use this section to get the design up and running quickly and reliably.

You can find the most recent series of Cisco SBA guides at the following sites:
Customer access: http://www.cisco.com/go/sba
Partner access: http://www.cisco.com/go/sbachannel

Route to Success

To ensure your success when implementing the designs in this guide, you should first read any guides that this guide depends upon—shown to the left of this guide on the route below. As you read this guide, specific prerequisites are cited where they are applicable.

(Route diagram: the prerequisite Data Center Design Overview guide precedes this Data Center Deployment Guide; additional deployment guides depend on it.)

Introduction

The Cisco Smart Business Architecture (SBA) data center foundation is a comprehensive architecture designed to provide data center Ethernet and storage networking, security, and load balancing for up to 300 server ports with a mix of physical and logical servers. The data center foundation must be resilient, scalable, and flexible to support data center services. This section focuses on building a central connection point for the application servers that drive the organization and the services that surround them. The ultimate goal of the design is to support the user services that drive the organization’s success.

The Cisco SBA program follows a consistent design process of building a network based on layers of services. The primary building block is the foundation layer upon which all other services rely. The architecture for the Cisco SBA data center builds upon the server room deployment detailed in the Server Room Deployment Guide. The Cisco SBA—Data Center Deployment Guide incorporates Ethernet, storage network, security, and application resiliency technologies, tested together as a solution. This solution-level approach to building out an architecture simplifies the system integration normally associated with multiple technologies, allowing you to select the modules that meet your organization’s requirements rather than worrying about matching components and interoperability. Figure 1 illustrates the Cisco SBA data center design layered services.

This architecture:
• Provides a solid foundation
• Makes deployment fast and easy
• Accelerates your ability to easily deploy new servers and additional services
• Avoids the need to reengineer the network as your organization grows

This guide includes the following chapters:
• The first chapter covers elements of the data center design regarding the physical environment. The aspects of power, cooling, and space required are outlined for consideration in your data center design.
• The “Ethernet Infrastructure” chapter establishes the foundation connectivity for your data center network as it outgrows the server farm size, and explains how to configure Layer 2 and Layer 3 connectivity in the data center and the communications path to the rest of the organization.
• The “Storage Infrastructure” chapter shows how the foundation Ethernet design accommodates IP-based network storage for network attached storage (NAS), and shows in depth how to deploy a Fibre Channel storage area network (SAN) using the Cisco Nexus 5500UP switches as the SAN core.
• The “Compute Connectivity” chapter explains the various host connectivity options that you can use in the data center. The chapter covers dual-homed and single-homed servers, and blade server systems’ connectivity to the network.
• The “Application Resiliency” chapter shows how server load balancing can be used to quickly grow server application farms, monitor server and application operation, and balance loads across multiple servers for better performance, which improves application availability.
• The “Network Security” chapter focuses on the deployment of firewalls to protect the critical and sensitive information assets of your organization. The intrusion prevention system (IPS) section explains how to deploy Cisco IPS to monitor your network for intrusions and attacks.
• The appendices provide the complete list of products used in the lab testing of this architecture, as well as the software revisions used on the products, and a list of major changes to the guide since it was last published.

To enhance the architecture, there are also a number of supplemental guides that address specific functions, technologies, or features from Cisco and Cisco partners that may be important to solving your organization’s requirements. Cisco SBA was designed to be easy to configure, deploy, and manage. This out-of-the-box approach is simple, easy to use, scalable, and flexible.

Design Goals

The Cisco SBA deployment guides are all designed to use a modular concept of building out a network. Each module is focused on the following principles:

• Ease of use—A top requirement was to develop a design that could be deployed with the minimal amount of configuration and day-two management.
• Cost-effective—Another critical requirement in the selection of products was to align with the requirements for a data center that scales to up to 300 server ports.
• Flexibility and scalability—As the company grows, so too must its infrastructure. Products selected needed to have the ability to grow or be repurposed within the architecture.
• Reuse—The goal, when possible, was to reuse the same products throughout the various modules to minimize the number of products required for spares.

Figure 1 - Cisco SBA data center pyramid of service layers (user services such as voice, email, CRM, and ERP ride on data center services, which in turn ride on a data center foundation of routing, switching, security, application resiliency, compute, and storage)

Business Overview

Organizations encounter many challenges as they work to scale their information-processing capacity to keep up with demand. In a new organization, a small group of server resources may be sufficient to provide necessary applications such as file sharing, email, database applications, and web services. Over time, demand for increased processing capacity, storage capacity, and distinct operational control over specific servers can cause a growth explosion commonly known as “server sprawl.”

The Cisco SBA data center design provides an evolution from the basic “server room” infrastructure. An organization can then use some of the same data center technologies that larger organizations use to meet expanding business requirements in a way that keeps capital and operational expenses in check. The architecture outlined in this guide is designed to allow the organization to smoothly scale the size of the server environment and network topology as business requirements grow.

The Cisco SBA data center design is designed to address five primary business challenges:
• Supporting rapid application growth
• Managing growing data storage requirements
• Optimizing the investment in server processing resources
• Increasing application availability
• Securing the organization’s critical data

Supporting Rapid Application Growth

As applications scale to support a larger number of users, or new applications are deployed, the number of servers required to meet the needs of the organization often increases. The first phase of the server room evolution is often triggered when the organization outgrows the capacity of the existing server room network. Many factors can limit the capacity of the existing facility, including rack space, power, cooling, switching throughput, or basic network port count to attach new servers.

Managing Growing Data Storage Requirements

As application requirements grow, the need for additional data storage capacity also increases. This can initially cause issues when storage requirements for a given server increase beyond the physical capacity of the server hardware platform in use. As the organization grows, the investment in additional storage capacity is most efficiently managed by moving to a centralized storage model.

A dedicated storage system provides multiple benefits beyond raw disk capacity. Centralized storage systems can increase the reliability of disk storage. A centralized storage system can also provide disk capacity across multiple applications and servers, providing greater scalability and flexibility in storage provisioning. Storage systems allow an organization to provide increased capacity to a given server over the network without needing to physically attach new devices to the server itself.

More sophisticated backup and data replication technologies are available in centralized storage systems, which helps protect the organization against data loss and application outages.

Optimizing the Investment in Server Processing Resources

Frequently, physical servers are dedicated to single applications to increase stability and simplify troubleshooting. However, these servers do not operate at high levels of processor utilization for much of the day. Underutilized processing resources represent an investment by the organization that is not being leveraged to its full potential.

Server virtualization technologies allow a single physical server to run multiple virtual instances of a “guest” operating system, creating virtual machines (VMs). Running multiple VMs on server hardware helps to more fully utilize the organization’s investment in processing capacity. Server virtualization and centralized storage technologies complement one another, allowing rapid deployment of new servers and reduced downtime in the event of server hardware failures. VMs can be stored completely on the centralized storage system, which decouples the identity of the VM from any single physical server. This allows the organization great flexibility when rolling out new applications or upgrading server hardware. The architecture defined in this guide is designed to facilitate easy deployment of server virtualization, and supplemental guides that focus specifically on server virtualization are included with this series.

Increasing Application Availability

With the expanding global presence and around-the-clock operations of organizations, key applications that drive the business must be available when the workforce needs them. Application availability drives productivity and customer satisfaction, which drives the bottom line of an organization. Availability of applications can be threatened by overloaded servers and server or application failure. Unbalanced utilization can drive unacceptable response times for some users and satisfactory operation for others, making it difficult for IT teams to diagnose. IT organizations require the ability to monitor beyond simple server availability to application availability, and to be able to add more servers to an application server farm quickly and transparently.

Figure 2 - Application server farm in various states of operation

Securing the Organization’s Critical Data

With communication and commerce in the world becoming increasingly Internet-based, network security quickly becomes a primary concern in a growing organization. Often organizations will begin by securing their Internet edge connection, considering the internal network a trusted entity. However, an Internet firewall is only one component of building security into the network infrastructure. Frequently, threats to an organization’s data may come from within the internal network. This may come in the form of onsite vendors, contaminated employee laptops, or existing servers that have already become compromised and may be used as a platform to launch further attacks. With the centralized repository of the organization’s most critical data typically being the data center, security is no longer considered an optional component of a complete data center architecture plan.

The Cisco SBA data center design illustrates how to cleanly integrate network security capabilities such as firewall and intrusion prevention, protecting areas of the network housing critical server and storage resources, while still providing support for the existing installed base of equipment. The architecture provides the flexibility to secure specific portions of the data center or insert firewall capability between tiers of a multi-tier application according to the security policy agreed upon by the organization. In virtual desktop deployments, where the user’s desktop is hosted on a server located in the data center, the data center firewall can provide policy isolation from the production servers located in the same data center domain, while still allowing each VM to be viewed independently from a security, configuration, and troubleshooting perspective.

Technology Overview

The Cisco SBA data center design is designed to allow organizations to take an existing server room environment to the next level of performance, flexibility, and security. Figure 3 provides a high-level overview of this architecture.

Figure 3 - Cisco SBA data center design (Cisco UCS blade servers with fabric interconnects, Cisco UCS C-Series and third-party rack servers, and Nexus 2200 Series Fabric Extenders connect to a redundant Nexus 5500 Layer 2/3 Ethernet and SAN fabric, with Cisco ASA firewalls with IPS toward the LAN core, Cisco ACE server load balancing, and dual SAN-A/SAN-B Fibre Channel fabrics expanded with Cisco MDS 9100 switches)

The following technology areas are included within this reference architecture.

Ethernet Infrastructure

The Ethernet infrastructure forms the foundation for resilient Layer 2 and Layer 3 communications in the data center. This layer provides the ability to migrate from your original server farm to a scalable architecture capable of supporting Fast Ethernet, Gigabit Ethernet, and 10-Gigabit Ethernet connectivity for hundreds of servers in a modular approach.

The core of the Cisco SBA data center is built on the Cisco Nexus 5500UP series switches. Cisco Nexus 5500UP series is a high-speed switch capable of Layer 2 and Layer 3 switching, with the Layer 3 daughter card tested in this design. The data center core switches are redundant, with sub-second failover, so that a device failure or maintenance does not prevent the network from operating. The Cisco SBA data center design is designed to stand alone or, if deployed at an offsite facility, to connect to one of the Cisco SBA Layer-3 Ethernet core solutions as documented in Cisco SBA—Borderless Networks LAN Design Overview.

The Cisco Nexus 5500UP series features Virtual Port Channel (vPC) technology, which provides a loop-free approach to building out the data center in which any VLAN can appear on any port in the topology without spanning-tree loops or blocking links. Cisco Nexus 5500UP also supports Fabric Extender (FEX) technology, which provides a remote line card approach for fan-out of server connectivity to top of rack for Fast Ethernet, Gigabit Ethernet, and 10-Gigabit Ethernet requirements. The physical interfaces on the Cisco FEX are programmed on the Cisco Nexus 5500UP switches, simplifying the task of configuration by reducing the number of devices you have to touch to deploy a server port.

Storage Infrastructure

Storage networking is key to solving the growing amount of data storage that an organization has to struggle with. Centralized storage reduces the amount of disk space trapped on individual server platforms and eases the task of providing backup to avoid data loss.

The Cisco SBA data center design uses Cisco Nexus 5500UP series switches as the core of the network. The importance of this switch model is that it has universal port (UP) capabilities. A universal port is capable of supporting Ethernet, Fibre Channel, and Fibre Channel over Ethernet (FCoE) on any port. This allows the data center core to support multiple storage networking technologies, such as Fibre Channel storage area network (SAN), Internet Small Computer System Interface (iSCSI), and network attached storage (NAS), on a single platform type. This not only reduces the cost to deploy the network but saves rack space in expensive data center hosting environments. The Cisco Nexus 5500UP series 48-port model is used in this design, with an available 96-port model for higher density requirements.

Cisco Nexus 5500UP Fibre Channel capabilities are based on the Cisco NX-OS operating system and seamlessly interoperate with the Cisco MDS Series SAN switches for higher-capacity Fibre Channel requirements. This guide includes procedures for interconnecting the Cisco Nexus 5500UP series and Cisco MDS series for Fibre Channel SAN. Cisco MDS series can provide an array of advanced services for Fibre Channel SAN environments where high-speed encryption, inter-VSAN routing, tape services, or Fibre Channel over IP extension might be required.

Compute Connectivity

There are many ways to connect a server to the data center network for Ethernet and Fibre Channel transport. This chapter provides an overview of connectivity ranging from single-homed Ethernet servers to a dual-homed Fabric Extender, and dual-homed servers that might use active/standby network interface card (NIC) teaming or EtherChannel for resiliency. Servers that use 10-Gigabit Ethernet can collapse multiple Ethernet NICs and Fibre Channel host bus adapters (HBAs) onto a single wire by using converged network adapters (CNAs) and FCoE. Dual-homing the 10-Gigabit Ethernet servers with FCoE provides resilient Ethernet transport and Fibre Channel connections to SAN-A/SAN-B topologies. This chapter also provides an overview of how the integrated connectivity of Cisco Unified Computing System (UCS) blade server systems works, and considerations for connecting a non–Cisco blade server system to the network.

Network Security

Within a data center design, there are many requirements and opportunities to include or improve security for customer confidential information and the organization’s critical and sensitive applications. The data center design is tested with the Cisco ASA 5585-X series firewall. Cisco ASA 5585-X provides high-speed processing for firewall rule sets and high-bandwidth connectivity, with multiple 10-Gigabit Ethernet ports for resilient connectivity to the data center core switches. Cisco ASA 5585-X also has a slot for services and in this design provides an IPS module to inspect application-layer data, to detect attacks and snooping, and to block malicious traffic based on the content of the packet or the reputation of the sender. The Cisco ASA 5585-X firewalls with IPS modules are deployed in a pair, which provides an active/standby resiliency to prevent downtime in the event of a failure or platform maintenance.

Application Resiliency

As organizations expand to do business in a 24-hour, globally available environment, they find it even more important to make sure that critical applications are operating at peak performance, which drives the bottom line of an organization. This architecture includes Cisco Application Control Engine (ACE) to provide the latest technology for Layer 4 through Layer 7 switching and server load balancing (SLB). Server load balancers can spread the load across multiple servers for an application, and actively probe the servers and applications for load and health statistics to prevent overload and application failures. Cisco ACE also offers TCP processing offload, Secure Sockets Layer (SSL) offload, compression, and various other acceleration technologies. The Cisco ACE 4710 appliances used in this architecture are scalable to multigigabit operation and are deployed as an active/standby pair to prevent outage from device failure or maintenance.

This approach allows you to take advantage of some of the newer technologies being used in the data centers of very large organizations without encountering a steep learning curve for the IT staff. This architecture is designed to allow an organization to position its network for growth while controlling both equipment costs and operational costs. Although this architecture has been designed and validated as a whole, the modular nature of this guide allows you to perform a gradual migration by choosing specific elements of the architecture to implement first. The deployment processes documented in this guide provide concise, step-by-step instructions for completing the configuration of the components of the architecture to get your network up and running.
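The vPC approach described for the Ethernet infrastructure can be sketched at a high level with a minimal Cisco NX-OS fragment. This is an illustrative sketch only: the domain number, keepalive addresses, and port-channel numbers shown here are assumptions, not the values validated in this design, which appear in the Ethernet Infrastructure deployment chapter.

```
feature vpc
vpc domain 10
  peer-keepalive destination 10.255.0.2 source 10.255.0.1
!
interface port-channel 10
  description vPC peer link between the two data center core switches
  switchport mode trunk
  vpc peer-link
!
interface port-channel 20
  description Dual-homed link toward a FEX or server
  switchport mode trunk
  vpc 20
```

Because both core switches present port-channel 20 as a single logical link, the attached device can dual-home with all links forwarding, which is how vPC avoids spanning-tree blocked ports.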

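The server load-balancing and health-probing behavior described for Cisco ACE can be illustrated with a small configuration sketch. The probe, server, and farm names and addresses below are hypothetical examples for illustration; the Application Resiliency chapter contains the validated procedure.

```
probe http PROBE-HTTP
  interval 15
  expect status 200 200
!
rserver host webserver1
  ip address 10.4.49.111
  inservice
rserver host webserver2
  ip address 10.4.49.112
  inservice
!
serverfarm host FARM-WEB
  probe PROBE-HTTP
  rserver webserver1
    inservice
  rserver webserver2
    inservice
```

The probe periodically requests a page from each server and expects an HTTP 200 response; a server that stops answering is taken out of the farm, so new connections are spread only across healthy servers.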
Physical Environment

Business Overview

When building or changing a network, a switch closet, or even a data center, you have to carefully consider the location where you will install the equipment. When building a server room or data center, take three things into consideration: power, cooling, and racking. Know your options in each of these categories, and you will minimize surprises and moving of equipment later on.

Technology Overview

The Cisco SBA data center design provides a resilient environment with redundant platforms and links; however, this cannot protect your data center from a complete failure resulting from a total loss of power or cooling. When designing your data center, you must consider how much power you will require, how you will provide backup power in the event of a loss of your power feed from your provider, and how long you will retain power in a backup power event.

Power

Know what equipment will be installed in the area. You cannot plan electrical work if you do not know what equipment is going to be used. Some equipment requires standard 110V outlets that may already be available. Other equipment might require much more power. Does the power need to be on all the time? In most cases where servers and storage are involved, the answer is yes. Applications don't react very well when the power goes out, causing business downtime and possible data loss. To prevent power outages, you need an uninterruptable power supply (UPS). During a power interruption, the UPS will switch over the current load to a set of internal or external batteries. Some UPSs are online, which means the power is filtered through the batteries all the time; others are switchable, meaning they use batteries only during power loss. UPSs vary by how much load they can carry and for how long. Careful planning is required to make sure the correct UPS is purchased, installed, and managed correctly. Most UPSs provide for remote monitoring and the ability to trigger a graceful server shutdown for critical servers if the UPS is going to run out of battery.

Distributing the power to the equipment can change the power requirements as well. There are many options available to distribute the power from the outlet or UPS to the equipment. One example would be using a power strip that resides vertically in a cabinet that usually has an L6-30 input and then C13/C19 outlets with the output voltage in the 200-240V range. Short C13/C14 and C19/C20 power cords can be used instead of much longer cords to multiple 110V outlets or multiple 110V power strips. These vertical strips also assist in proper cable management of the power cords. These strips should be, at a minimum, metered so one does not overload the circuits. The meter provides a current reading of the load on the circuit. This is critical, because a circuit breaker that trips due to being overloaded will bring down everything plugged into it with no warning. For complete remote control, power strips are available with full remote control of each individual outlet from a web browser.

Cooling

With power comes the inevitable conversion of power into heat. To put it simply: power in equals heat out. You also need to consider that servers, networking equipment, and appliances in your data center dissipate heat as they operate. Planning for cooling of one or two servers and a switch with standard building air conditioning may work. Multiple servers and blade servers (along with storage, switches, etc.) need more than building air conditioning for proper cooling, which requires that you develop a proper cooling design that includes locating equipment racks to prevent hotspots. Many options are available, including in-row cooling, raised floor with underfloor cooling, overhead cooling, and wall-mounted cooling. Be sure to at least plan with your facilities team what the options are for current and future cooling.

Equipment Racking

It's important to plan where to put the equipment. After you have evaluated power and cooling, you need to install racking or cabinets. Servers tend to be fairly deep and take up even more space with their network connections and power connections. Most servers will fit in a 42-inch deep cabinet, and deeper cabinets give more flexibility for cable and power management within the cabinet. Proper placement and planning allow for easy growth.

Be aware of what rails are required by your servers. Most servers now come with rack mounts that use the square hole-style vertical cabinet rails, so data center racks should use the square rail mounting options in the cabinets. Cage nuts can be used to provide threaded mounts for such things as routers, switches, shelves, etc., that you may need. Not having the proper rails can mean that you have to use adapters or shelves, which makes managing servers and equipment difficult if not sometimes impossible without removing other equipment or sacrificing space.

Summary

The physical environmental requirements for a data center require careful planning to provide for efficient use of space, scalability, cooling, and ease of operational maintenance. Working toward deployment of Cisco SBA allows you to plan the physical space for your data center with a vision towards the equipment you will be installing over time, even if you begin on a smaller scale. For additional information on data center power, cooling, and equipment racking, contact Cisco partners in the area of data center environmental products such as Panduit and APC.

Ethernet Infrastructure

Business Overview

As your organization grows, you may outgrow the capacity of the basic "server-room" Ethernet switching stack illustrated in the Cisco SBA—Data Center Design Overview. As the physical environment housing the organization's servers grows to multiple racks, it also becomes more challenging to elegantly manage the cabling required to attach servers to the network. Multi-tier applications often divide browser-based client services, business logic, and database layers into multiple servers, increasing the amount of server-to-server traffic and driving performance requirements higher. It is important to be prepared for the ongoing transition of available server hardware from 1-Gigabit Ethernet attachment to 10-Gigabit Ethernet. Using 10-Gigabit Ethernet connections helps to improve overall network performance and also reduces the number of physical links required to provide the bandwidth.

In some organizations, the data center may be located at a facility other than the headquarters building. Some organizations will locate their data center at a remote facility where power or cooling more suitable for a data center is located; others may rent floor space, racks, and power from a communications service provider to lower their capital costs. The ability to locate the data center in a number of different locations requires a data center architecture that is flexible to adapt to different locations while still providing the core elements of the architecture. The Cisco SBA data center is designed to allow easy migration of servers and services from your original server room to a data center that can scale with your organization's growth.

Technology Overview

The foundation of the Ethernet network in the Cisco SBA data center is a resilient pair of Cisco Nexus 5500UP Series switches. These switches offer the ideal platform for building a scalable, high-performance data center supporting both 10-Gigabit and 1-Gigabit Ethernet attached servers. The Cisco SBA data center design leverages many advanced features of the Cisco Nexus 5500UP Series switch family to provide a central Layer 2 and Layer 3 switching fabric for the data center environment:

• The Layer 3 routing table can accommodate up to 8000 IPv4 routes.
• The Layer 3 engine supports up to 8000 adjacencies or MAC addresses for the Layer 2 domain.
• The solution provides for up to 1000 IP Multicast groups when operating in the recommended Virtual Port Channel (vPC) mode.

A second generation of the Layer 3 engine for the Cisco Nexus 5548 and 5596 switches is now available. This second generation hardware version of the Layer 3 module doubles the scalability for routing and adjacencies when you are running Cisco NX-OS software release 5.2(1)N1(1) or later.

Reader Tip: More specific scalability design numbers for the Cisco Nexus 5500 Series platform can be found at: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration_limits/limits_521/nexus_5000_config_limits_521.html#wp327738

The Cisco Nexus 5500UP switches with universal port (UP) capabilities provide support for Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel ports on a single platform. The Nexus 5500UP can act as the Fibre Channel SAN for the data center and connect into an existing Fibre Channel SAN.

The Cisco Nexus 5000 Series also supports the Cisco Nexus 2000 Series Fabric Extenders. Fabric Extenders allow the switching fabric of the resilient switching pair to be physically extended to provide port aggregation in the top of multiple racks, reducing cable management issues as the server environment expands.
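On Cisco NX-OS, capabilities like those listed above are disabled until explicitly enabled. As a hedged illustration only (the exact feature set shown is an assumption, not a procedure from this guide), the building blocks named in this chapter map to discrete feature commands on the Nexus 5500UP:

```
! illustrative feature set for a Nexus 5500UP data center core
feature interface-vlan   ! Layer 3 switched virtual interfaces
feature hsrp             ! default gateway resiliency
feature pim              ! IP Multicast routing
feature lacp             ! IEEE 802.3ad port channels
feature vpc              ! virtual port channel
feature fex              ! Nexus 2000 fabric extenders
feature fcoe             ! unified fabric on universal ports
```

Enabling a feature makes its configuration commands available; the later deployment procedures assume the relevant features are already active.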

This section provides an overview of the key features used in this topology and illustrates the specific physical connectivity that applies to the example configurations provided in the "Deployment Details" section.

The Layer 3 data center core connects to the Layer 3 LAN core designed in the Cisco SBA—Borderless Networks LAN Deployment Guide, as shown in Figure 4. The result of using Layer 3 to interconnect the two core layers is:

• A resilient Layer 3 interconnect with rapid failover.
• A data center core that provides interconnect for all data center servers and services.
• A data center that has a logical separation point for moving to an offsite location while still providing core services without redesign.
• A LAN core that provides a scalable interconnect for LAN, WAN, and Internet Edge.
• A logical separation of change control for the two core networks.
• Intra-data center Layer 2 and Layer 3 traffic flows between servers and appliances that are switched locally on the data center core.

Figure 4 - Data center core and LAN core change control separation (data center servers and services behind the resilient data center core on one side; the LAN, Internet and DMZ, WAN, and LAN distribution layer or collapsed core on the other, in separate change control domains)

Resilient Data Center Core

The data center needs to provide a topology where any data center VLAN can be extended to any server in the environment to accommodate new installations without disruption, and also the ability to move a server load to any other physical server in the data center. Traditional Layer 2 designs with LAN switches use spanning tree, which creates loops when a VLAN is extended to multiple access layer switches. Spanning Tree Protocol blocks links to prevent looping, as shown in Figure 5.

Figure 5 - Traditional design with spanning tree blocked links (a spanning tree root switch in the data center core forwarding VLAN 148, with a spanning tree blocked link toward the second switch)

The Cisco Nexus 5500UP Series switch pair providing the central Ethernet switching fabric for the Cisco SBA data center is configured using vPC. The vPC feature allows links that are physically connected to two different Cisco Nexus switches to appear to a third downstream device to be coming from a single device, as part of a single Ethernet port channel. The third device can be a server, switch, or any other device or appliance that supports IEEE 802.3ad port channels. This capability allows the two data center core switches to build resilient, loop-free Layer 2 topologies that forward on all connected links instead of requiring Spanning Tree Protocol blocking for loop prevention, allowing VLANs to be extended across the data center while maintaining a resilient architecture.

Cisco NX-OS Software vPC used in the data center design and Cisco Catalyst Virtual Switching Systems (VSS) used in the Cisco SBA—Borderless Networks LAN Design Overview are similar technologies in that they allow the creation of Layer 2 port channels that span two switches. For Cisco EtherChannel technology, the term multichassis EtherChannel (MCEC) refers to either technology interchangeably. MCEC links from a device connected using vPC to the data center core provide spanning-tree loop-free topologies.

A vPC consists of two vPC peer switches connected by a peer link. Of the vPC peers, one is primary and one is secondary. The system formed by the switches is referred to as a vPC domain.
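A vPC domain of this kind is configured symmetrically on both core switches. The following is a minimal sketch in NX-OS CLI; the domain number, port-channel numbers, and keepalive addresses are illustrative assumptions, not values taken from this guide's procedures:

```
feature vpc
feature lacp
!
vpc domain 10
  ! keepalive runs over the out-of-band mgmt0 interfaces
  peer-keepalive destination 10.4.63.11 source 10.4.63.10 vrf management
!
! peer link between the two vPC peer switches
interface port-channel 10
  switchport mode trunk
  vpc peer-link
!
! MCEC toward a downstream device; use the same vpc number on both peers
interface port-channel 20
  switchport mode trunk
  vpc 20
```

A downstream switch or server then simply bundles its two uplinks, one to each core switch, into a single IEEE 802.3ad port channel and sees one logical upstream device.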

Figure 6 - Cisco NX-OS vPC design (two data center core switches joined by a vPC peer link, with a vPC peer keepalive path, forming a single vPC domain; Layer 2 EtherChannels carry VLAN 148 to downstream devices)

This feature enhances ease of use and simplifies configuration for the data center-switching environment.

The Cisco SBA data center design uses Hot Standby Router Protocol (HSRP) for IP default gateway resiliency for data center VLANs. When combining HSRP with vPC, there is no need for aggressive HSRP timers to improve convergence, because both gateways are always active and traffic to either data center core will be locally switched for improved performance and resiliency.

Reader Tip: For more information on vPC technology and design, refer to the documents "Cisco NX-OS Software Virtual PortChannel: Fundamental Concepts" and "Spanning-Tree Design Guidelines for Cisco NX-OS Software and Virtual PortChannels," here: www.cisco.com

Ethernet Fabric Extension

The Cisco Nexus 2000 Series Fabric Extender (FEX) delivers cost-effective and highly scalable 1-Gigabit Ethernet and 10-Gigabit Ethernet environments. Fabric extension allows you to aggregate a group of physical switch ports at the top of each server rack, which provides a centralized point to configure all connections for ease of use. The Cisco FEX behaves as a remote line card to the Cisco Nexus 5500UP switches. All configuration for Cisco FEX–connected servers is done on the data center core switches, without needing to manage these ports as a separate logical switch. Each Cisco FEX includes dedicated fabric uplink ports that are designed to connect to upstream Cisco Nexus 5500UP Series switches for data communication and management. Any 10-Gigabit Ethernet port on the Cisco Nexus 5500UP switch may be used for a Cisco FEX connection.

You can provide network resiliency by dual-homing servers into two separate fabric extenders, each of which is single-homed to one member of the Cisco Nexus 5500UP Series switch pair. To provide high availability for servers that only support single-homed network attachment, the Cisco FEX itself may instead be dual-homed using vPC into the two members of the data center core switch pair. Because the Cisco FEX acts as a line card on the Cisco Nexus 5500UP switch, extending VLANs to server ports on different Cisco FEXs does not create spanning-tree loops across the data center. Both the single-homed and dual-homed topologies provide the flexibility to have a VLAN appear on any port without loops or spanning-tree blocked links.

Figure 7 - Cisco FEX and vPC combined (single-homed and dual-homed FEXs, each carrying VLAN 148, attached to the Nexus 5500UP Ethernet vPC switch fabric over the vPC peer link and vPC peer keepalive)

Our reference architecture example shown in Figure 8 illustrates single-homed and dual-homed Cisco FEX configurations with connected servers.

Figure 8 - Ethernet switching fabric physical connections (single-homed and dual-homed servers attached to a single-homed Cisco FEX 2232 and dual-homed Cisco FEX 2248 units, which connect to the Nexus 5500UP Ethernet vPC switch fabric and on to the Cisco SBA LAN core)

Tech Tip: When the Cisco Nexus 5500UP Series switches are configured for Layer 3 and vPC operation, they support up to sixteen connected Cisco FEXs as of Cisco NX-OS release 5.2(1)N1(1).

The Cisco FEX will support up to four or eight uplinks to the Cisco Nexus 5500UP parent switches, depending on the model of Cisco FEX in use and the level of oversubscription you want in your design. It is recommended, when possible, to configure the maximum number of fabric uplinks leveraging either twinax (CX-1) cabling or the Fabric Extender Transceiver (FET) and OM3 multimode fiber. At least two Cisco FEX uplinks to the data center core are recommended for minimum resiliency, with suitable aggregated link bandwidth available to mitigate oversubscription situations.

Quality of Service

To support the lossless data requirement of FCoE on the same links as IP traffic, the Nexus 5500 switches and the Nexus 2000 fabric extenders as a system implement an approach that uses Quality of Service (QoS) with a data center focus. Much of the QoS for classification and marking in the system is constructed through the use of the IEEE 802.1Q Priority Code Point, also known as Class of Service (CoS) bits, in the header of the Layer 2 frame from hosts supporting FCoE and other trunked devices. As IP traffic arrives at an Ethernet port on the Cisco Nexus 5500 Series switch, it can also be classified at Layer 3 by differentiated services code point (DSCP) bits and IP access control lists (ACLs).

The traffic classifications are used for mapping traffic into one of six hardware queues, each appropriately configured for desired traffic handling. One queue is predefined for default traffic treatment, while one hardware queue is assigned for use by lossless FCoE traffic. The remaining four queues are available for use to support queuing consistent with the rest of Cisco SBA. For example, a priority queue will be defined for jitter-intolerant multimedia services in the data center.

Lacking the guarantee that all non-FCoE devices in the data center can generate an appropriate CoS marking required for application of QoS policy at ingress to a FEX, the Cisco SBA data center deployment takes the following QoS approach:

• FCoE traffic, as determined by Data Center Bridging Exchange (DCBX) negotiation with hosts, is given priority and lossless treatment end-to-end within the data center.
• Non-FCoE traffic without CoS classification for devices connected to a FEX is given default treatment over available links on ingress toward the Cisco Nexus 5500 switch. Traffic in the reverse direction toward the FEX is handled by the QoS egress policies on the Cisco Nexus 5500 switch.
• Classification by DSCP is configured at the port level and applied to IP traffic on ingress to the Cisco Nexus 5500 switch, either directly or after traversing a FEX connection. This classification is used to map traffic into the default queue or into one of the four non-FCoE internal queues to offer a suitable QoS per-hop behavior.
• To ensure consistent policy treatment for traffic directed through the Layer 3 engine, a CoS marking is also applied per Cisco Nexus 5500 internal queue. The CoS marking is used for classification of traffic ingress to the Layer 3 engine, allowing application of system queuing policies.

Non-FCoE devices requiring DSCP-based classification with guaranteed queuing treatment can be connected directly to the Cisco Nexus 5500 switch, versus taking the default uplink treatment when connected to a Cisco FEX port.
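As a rough sketch of the port-level DSCP classification described above, in NX-OS CLI (the class and policy names, qos-group number, and interface are illustrative assumptions rather than this guide's actual policy):

```
! classify voice by DSCP EF and steer it to an internal queue (qos-group)
class-map type qos match-any PRIORITY-CLASS
  match dscp 46
!
policy-map type qos INGRESS-CLASSIFY
  class PRIORITY-CLASS
    set qos-group 5
!
interface Ethernet1/20
  service-policy type qos input INGRESS-CLASSIFY
```

The qos-group value selects one of the internal queues; separate queuing and network-qos policies then define the per-queue bandwidth and drop behavior.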

The QoS policy is also the method for configuring jumbo frame support on a per-class basis, as opposed to the port-based MTU configuration typical of devices used outside of the data center. Increasing MTU size can increase performance for bulk data transfers. Consistent per-CoS maximum transmission unit (MTU) requirements are applied system-wide for FCoE.

Deployment Details

The following configuration procedures are required to configure the Ethernet switching fabric for the Cisco SBA data center design.

Process: Configuring Ethernet Out-of-Band Management

1. Configure platform-specific switch settings
2. Configure switch universal settings
3. Apply the switch global configuration
4. Configure switch access ports
5. Configure switch links to the Layer 3 core

An increasing number of switching platforms, appliances, and servers utilize discrete management ports for setup, monitoring, and keepalive processes. The typical mid-tier data center is an ideal location for an Ethernet out-of-band management network, because the equipment is typically contained within a few racks and does not require fiber-optic interconnect to reach far-away platforms.

This design uses a fixed-configuration Layer 2 switch for the out-of-band Ethernet management network. A switch like Cisco Catalyst 3560X is ideal for this purpose because it has dual power supplies for resiliency.

The out-of-band network provides:

• A Layer 2 path, independent of the data path of the Cisco Nexus 5500UP data center core switches, for vPC keepalive packets running over the management interface
• A path for configuration synchronization between Cisco Nexus 5500UP switches via the management interfaces
• A common connection point for data center appliance management interfaces like firewalls and load balancers
• A connectivity point for management ports on servers

Although the Layer 2 switch does provide a common interconnect for packets inside the data center, it needs to provide the ability for IT management personnel outside of the data center to access the data-center devices. The options for providing IP connectivity depend on the location of your data center. If your data center is at the same location as your headquarters LAN, the core LAN switch can provide Layer 3 connectivity to the data center management subnet. If your data center is located at a facility separate from a large LAN, the WAN router can provide Layer 3 connectivity to the data center management subnet.

Figure 9 - Core LAN switch providing Layer 3 connectivity (the data center core switch mgmt 0 ports connect to the out-of-band Ethernet switch, which reaches the rest of the network through the core LAN switch)
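The per-class jumbo frame support described above is expressed on the Cisco Nexus 5500 through a network-qos policy applied under system qos, rather than per interface. A minimal sketch follows; the policy name and MTU value are illustrative assumptions, not this guide's exact policy:

```
! raise MTU for the default class system-wide to permit jumbo frames
policy-map type network-qos JUMBO-FRAMES
  class type network-qos class-default
    mtu 9216
!
system qos
  service-policy type network-qos JUMBO-FRAMES
```

Because the MTU is carried per class, FCoE and other classes retain their own consistent MTU requirements while bulk-data classes can use jumbo frames.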

Figure 10 - WAN router providing Layer 3 connectivity (the data center core switch mgmt 0 ports connect to the out-of-band Ethernet switch, which reaches the management subnet through the WAN router)

A third option for providing Layer 3 connectivity to the data center management subnet is to use the data center core Cisco Nexus 5500UP switches, as illustrated in Figure 11. The Layer 3 switched virtual interface (SVI) will provide connectivity for access outside of the data center. This is the configuration described in this guide.

Figure 11 - Providing Layer 3 connectivity by using core Cisco Nexus 5500UP switches (the out-of-band Ethernet switch connects to the core switch mgmt 0 ports and to an SVI on the data center core)

Tech Tip: When you use the data center core Cisco Nexus 5500UP switches for Layer 3 connectivity, the management ports are in the same IP subnet, so they do not need a Layer 3 switch for packets between the data center core switches. Also, because the Nexus 5500UP management ports are in a separate management Virtual Routing and Forwarding (VRF) path than the global packet switching of the Cisco Nexus 5500UP switches, the Layer 2 path for vPC keepalive packets will use the Ethernet out-of-band switch.

Procedure 1: Configure platform-specific switch settings

Step 1: Configure the Catalyst 2960-S and 3750-X platform. When there are multiple Cisco Catalyst 2960-S or Cisco Catalyst 3750-X Series switches configured in a stack, one of the switches controls the operation of the stack and is called the stack master. When three or more switches are configured in a stack, choose a switch that does not have uplinks configured to configure as the stack master.

switch [switch number] priority 15

Step 2: Ensure the original master MAC address remains the stack MAC address after a failure.

stack-mac persistent timer 0

The default behavior when the stack master switch fails is for the newly active stack master switch to assign a new stack MAC address. This new MAC address assignment can cause the network to have to reconverge, because Link Aggregation Control Protocol (LACP) and many other protocols rely on the stack MAC address and must restart. As such, the stack-mac persistent timer 0 command should be used to ensure that the original master MAC address remains the stack MAC address after a failure.

Because AutoQoS may not be configured on this device, you need to manually configure the global QoS settings by defining a macro that you will use in later procedures to apply the platform-specific QoS configuration.

Option 1. Configure QoS for Cisco Catalyst 3750-X and 3560-X

Step 1: Define a macro that you can use later to apply the platform-specific QoS configuration for Cisco Catalyst 3750-X and 3560-X switches.

mls qos map policed-dscp 0 10 18 24 46 to 8
mls qos map cos-dscp 0 8 16 24 32 46 48 56
mls qos srr-queue input bandwidth 70 30
mls qos srr-queue input threshold 1 80 90
mls qos srr-queue input priority-queue 2 bandwidth 30
mls qos srr-queue input cos-map queue 1 threshold 2 3
mls qos srr-queue input cos-map queue 1 threshold 3 6 7
mls qos srr-queue input cos-map queue 2 threshold 1 4
mls qos srr-queue input dscp-map queue 1 threshold 2 24
mls qos srr-queue input dscp-map queue 1 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue input dscp-map queue 1 threshold 3 56 57 58 59 60 61 62 63
mls qos srr-queue input dscp-map queue 2 threshold 3 32 33 40 41 42 43 44 45
mls qos srr-queue input dscp-map queue 2 threshold 3 46 47
mls qos srr-queue output cos-map queue 1 threshold 3 4 5
mls qos srr-queue output cos-map queue 2 threshold 1 2
mls qos srr-queue output cos-map queue 2 threshold 2 3
mls qos srr-queue output cos-map queue 2 threshold 3 6 7
mls qos srr-queue output cos-map queue 3 threshold 3 0
mls qos srr-queue output cos-map queue 4 threshold 3 1
mls qos srr-queue output dscp-map queue 1 threshold 3 32 33 40 41 42 43 44 45
mls qos srr-queue output dscp-map queue 1 threshold 3 46 47
mls qos srr-queue output dscp-map queue 2 threshold 1 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 2 threshold 1 26 27 28 29 30 31 34 35
mls qos srr-queue output dscp-map queue 2 threshold 1 36 37 38 39
mls qos srr-queue output dscp-map queue 2 threshold 2 24
mls qos srr-queue output dscp-map queue 2 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue output dscp-map queue 2 threshold 3 56 57 58 59 60 61 62 63
mls qos srr-queue output dscp-map queue 3 threshold 3 0 1 2 3 4 5 6 7
mls qos srr-queue output dscp-map queue 4 threshold 1 8 9 11 13 15
mls qos srr-queue output dscp-map queue 4 threshold 2 10 12 14
mls qos queue-set output 1 threshold 1 100 100 50 200
mls qos queue-set output 1 threshold 2 125 125 100 400
mls qos queue-set output 1 threshold 3 100 100 100 3200
mls qos queue-set output 1 threshold 4 60 150 50 200
mls qos queue-set output 1 buffers 15 25 40 20
mls qos
!
macro name EgressQoS
mls qos trust dscp
queue-set 1
srr-queue bandwidth share 1 30 35 5
priority-queue out
@
!

Option 2. Configure QoS for Cisco Catalyst 2960-S

Step 1: Define a macro that you can use later to apply the platform-specific QoS configuration for Cisco Catalyst 2960-S switches.

mls qos map policed-dscp 0 10 18 24 46 to 8
mls qos map cos-dscp 0 8 16 24 32 46 48 56
mls qos srr-queue output cos-map queue 1 threshold 3 4 5
mls qos srr-queue output cos-map queue 2 threshold 1 2
mls qos srr-queue output cos-map queue 2 threshold 2 3
mls qos srr-queue output cos-map queue 2 threshold 3 6 7
mls qos srr-queue output cos-map queue 3 threshold 3 0
mls qos srr-queue output cos-map queue 4 threshold 3 1
mls qos srr-queue output dscp-map queue 1 threshold 3 32 33 40 41 42 43 44 45
mls qos srr-queue output dscp-map queue 1 threshold 3 46 47
mls qos srr-queue output dscp-map queue 2 threshold 1 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 2 threshold 1 26 27 28 29 30 31 34 35
mls qos srr-queue output dscp-map queue 2 threshold 1 36 37 38 39
mls qos srr-queue output dscp-map queue 2 threshold 2 24
mls qos srr-queue output dscp-map queue 2 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue output dscp-map queue 2 threshold 3 56 57 58 59 60 61 62 63
mls qos srr-queue output dscp-map queue 3 threshold 3 0 1 2 3 4 5 6 7
mls qos srr-queue output dscp-map queue 4 threshold 1 8 9 11 13 15
mls qos srr-queue output dscp-map queue 4 threshold 2 10 12 14
mls qos queue-set output 1 threshold 1 100 100 50 200
mls qos queue-set output 1 threshold 2 125 125 100 400
mls qos queue-set output 1 threshold 3 100 100 100 3200
mls qos queue-set output 1 threshold 4 60 150 50 200
mls qos queue-set output 1 buffers 15 25 40 20
mls qos
!
macro name EgressQoS
mls qos trust dscp
queue-set 1
srr-queue bandwidth share 1 30 35 5
priority-queue out
@
!

Procedure 2: Configure switch universal settings

This procedure configures system settings that simplify and secure the management of the switch. The values and actual settings in the examples provided will depend on your current network configuration.

Table 1 - Common network services used in the deployment examples
  Domain name: cisco.local
  Active Directory, Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP) server: 10.4.48.10
  Cisco Access Control System (ACS) Server: 10.4.48.15
  Network Time Protocol (NTP) Server: 10.4.48.17

Step 1: Configure the device host name to make it easy to identify the device.

hostname [hostname]

Step 2: Configure VLAN Trunking Protocol (VTP) transparent mode. This deployment uses VTP transparent mode because the benefits of the alternative mode—dynamic propagation of VLAN information across the network—are not worth the potential for unexpected behavior that is due to operational error. VTP allows network managers to configure a VLAN in one location of the network and have that configuration dynamically propagate out to other network devices. However, in most cases, VLANs are defined once during switch setup with few, if any, additional modifications.

vtp mode transparent

Step 3: Enable Rapid Per-VLAN Spanning-Tree (PVST+). Rapid PVST+ provides an instance of RSTP (802.1w) per VLAN. Rapid PVST+ greatly improves the detection of indirect failures or linkup restoration events over classic spanning tree (802.1D). Although this architecture is built without any Layer 2 loops, you must still enable spanning tree. By enabling spanning tree, you ensure that if any physical or logical loops are accidentally configured, no actual Layer 2 loops will occur.

spanning-tree mode rapid-pvst

Step 4: Enable Unidirectional Link Detection (UDLD) Protocol. UDLD is a Layer 2 protocol that enables devices connected through fiber-optic or twisted-pair Ethernet cables to monitor the physical configuration of the cables and detect when a unidirectional link exists. When UDLD detects a unidirectional link, it disables the affected interface and alerts you. Unidirectional links can cause a variety of problems, including spanning-tree loops, black holes, and non-deterministic forwarding. In addition, UDLD enables faster link failure detection and quick reconvergence of interface trunks, especially with fiber-optic cables, which can be susceptible to unidirectional failures.

udld enable

Step 5: Set EtherChannels to use the traffic source and destination IP address when calculating which link to send the traffic across. This normalizes the method in which traffic is load-shared across the member links of the EtherChannel. EtherChannels are used extensively in this design because they contribute resiliency to the network.

port-channel load-balance src-dst-ip

Step 6: Configure DNS for host lookup. At the command line of a Cisco IOS device, it is helpful to be able to type a domain name instead of the IP address for a destination.

ip name-server 10.4.48.10

Step 7: Configure device management protocols. Secure HTTP (HTTPS) and Secure Shell (SSH) Protocol are secure replacements for the HTTP and Telnet protocols. They use Secure Sockets Layer (SSL) and Transport Layer Security (TLS) to provide device authentication and data encryption. The SSH and HTTPS protocols enable secure management of the LAN device. Both protocols are encrypted for privacy, and the unsecure protocols—Telnet and HTTP—are turned off. Specify the transport preferred none command on vty lines to prevent errant connection attempts from the CLI prompt. Without this command, if the IP name server is unreachable, long timeout delays may occur for mistyped commands.

ip domain-name cisco.local
ip ssh version 2
no ip http server
ip http secure-server
line vty 0 15
 transport input ssh
 transport preferred none

Step 8: Enable Simple Network Management Protocol (SNMP) in order to allow the network infrastructure devices to be managed by a network management system (NMS), and then configure SNMPv2c both for a read-only and a read/write community string.

snmp-server community cisco RO
snmp-server community cisco123 RW

Step 9: If network operational support is centralized in your network, you can increase network security by using an access list to limit the networks that can access your device. In this example, only devices on the 10.4.48.0/24 network will be able to access the device via SSH or SNMP.

access-list 55 permit 10.4.48.0 0.0.0.255
line vty 0 15
 access-class 55 in
!
snmp-server community cisco RO 55
snmp-server community cisco123 RW 55

Caution: If you configure an access list on the vty interface, you may lose the ability to use SSH to log in from one router to the next for hop-by-hop troubleshooting.

Step 10: Configure the local login and password. The local login account and password provide basic access authentication to a switch, which provides limited operational privileges. The enable password secures access to the device configuration mode. By enabling password encryption, you prevent the disclosure of plaintext passwords when viewing configuration files. By default, HTTPS access to the switch will use the enable password for authentication.

username admin password c1sco123
enable secret c1sco123
service password-encryption
aaa new-model

Step 11: If you want to reduce operational tasks per device, configure centralized user authentication by using the TACACS+ protocol to authenticate management logins on the infrastructure devices to the authentication, authorization, and accounting (AAA) server.

As networks scale in the number of devices to maintain, the operational burden to maintain local user accounts on every device also scales. A centralized AAA service reduces operational tasks per device and provides an audit log of user access for security compliance and root-cause analysis. When AAA is enabled for access control, all management access to the network infrastructure devices (SSH and HTTPS) is controlled by AAA.

TACACS+ is the primary protocol used to authenticate management logins on the infrastructure devices to the AAA server. A local AAA user database is also defined in Step 10 on each network infrastructure device to provide a fallback authentication source in case the centralized TACACS+ server is unavailable.

tacacs server TACACS-SERVER-1
 address ipv4 10.4.48.15
 key SecretKey
!
aaa group server tacacs+ TACACS-SERVERS
 server name TACACS-SERVER-1
!
aaa authentication login default group TACACS-SERVERS local
aaa authorization exec default group TACACS-SERVERS local
aaa authorization console
ip http authentication aaa

Reader Tip: The AAA server used in this architecture is Cisco ACS. For details about Cisco ACS configuration, see the Cisco SBA—Borderless Networks Device Management Using ACS Deployment Guide.

Step 12: Configure a synchronized clock by programming network devices to synchronize to a local NTP server in the network. The local NTP server typically references a more accurate clock feed from an outside source. Configure console messages, logs, and debug output to provide time stamps on output, which allows cross-referencing of events in a network.

ntp server 10.4.48.17
!
clock timezone PST -8
clock summer-time PDT recurring
!
service timestamps debug datetime msec localtime
service timestamps log datetime msec localtime

Procedure 3  Apply the switch global configuration

The out-of-band management network will use a single VLAN, VLAN 163, for device connectivity.

Step 1: Configure the management VLAN.

vlan [vlan number]
 name DC_ManagementVLAN

Step 2: Configure the switch with an IP address so that it can be managed via in-band connectivity, and assign an IP default gateway.

interface vlan [management vlan]
 ip address [ip address] [mask]
 no shutdown
ip default-gateway [default router]

Step 3: Configure bridge protocol data unit (BPDU) Guard globally to protect PortFast-enabled interfaces.

BPDU Guard protects against a user plugging a switch into an access port, which could cause a catastrophic, undetected spanning-tree loop. A PortFast-enabled interface receives a BPDU when an invalid configuration exists, such as when an unauthorized device is connected. The BPDU Guard feature prevents loops by moving a nontrunking interface into an errdisable state when a BPDU is received on an interface when PortFast is enabled.
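The clock commands above are written for US Pacific time. As a sketch of the same pattern for a different location, a switch in US Eastern time would use different offset values; these values illustrate the command form and are not part of the reference design:

```
! illustrative: US Eastern time instead of Pacific
clock timezone EST -5
clock summer-time EDT recurring
service timestamps log datetime msec localtime
```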

spanning-tree portfast bpduguard default

Figure 12 - Scenario that BPDU Guard protects against
[Figure: a user-installed, low-end switch mis-cabled to a Cisco SBA access-layer switch creates a loop that spanning tree does not detect because PortFast is enabled; BPDU Guard disables the interface if another switch is plugged into the port.]

Procedure 4  Configure switch access ports

To make configuration easier when the same configuration will be applied to multiple interfaces on the switch, use the interface range command. This command allows you to issue a command once and have it apply to many interfaces at the same time, which can save a lot of time because most of the interfaces in the access layer are configured identically. For example, the following command allows you to enter commands on all 24 interfaces (Gig 0/1 to Gig 0/24) simultaneously.

interface range Gigabitethernet 1/0/1-24

Step 1: Configure switch interfaces to support management console ports. This host interface configuration supports management port connectivity.

interface range [interface type] [port number]-[port number]
 switchport access vlan [vlan number]
 switchport mode access

Step 2: Configure the switch port for host mode. Because only end-device connectivity is provided for the Ethernet management ports, shorten the time it takes for the interface to go into a forwarding state by enabling PortFast, disable 802.1Q trunking, and disable channel grouping.

switchport host

Example: Procedures 3 and 4

vlan 163
 name DC_ManagementVLAN
!
interface vlan 163
 description in-band management
 ip address 10.4.63.5 255.255.255.0
 no shutdown
!
ip default-gateway 10.4.63.1
!
spanning-tree portfast bpduguard default
!
interface range GigabitEthernet 1/0/1-22
 switchport access vlan 163
 switchport mode access
 switchport host

Procedure 5  Configure switch links to the Layer 3 core

As described earlier, there are various methods to connect to Layer 3 for connectivity to the data center out-of-band management network. The following steps describe configuring an EtherChannel for connectivity to the data center core Cisco Nexus 5500UP switches.

[Figure: out-of-band Ethernet switch connecting to the data center core Mgmt 0 ports]
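Once the global command is applied, the BPDU Guard default state can be confirmed from the switch; this is a generic Catalyst verification step, not called out in the original text:

```
show spanning-tree summary
! the summary output should indicate that PortFast BPDU Guard
! Default is enabled
```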

Step 1: Configure two or more physical interfaces to be members of the EtherChannel, and set LACP to active on both sides. This forms a proper EtherChannel that does not cause any issues.

interface [interface type] [port 1]
 description Link to DC Core port 1
interface [interface type] [port 2]
 description Link to DC Core port 2
interface range [interface type] [port 1], [interface type] [port 2]
 channel-protocol lacp
 channel-group 1 mode active
 logging event link-status
 logging event trunk-status
 logging event bundle-status

Step 2: Configure the trunk. An 802.1Q trunk is used for the connection to this upstream device, which allows it to provide the Layer 3 services to all the VLANs defined on the management switch. The VLANs allowed on the trunk are pruned to only the VLANs that are active on the server room switch. The Catalyst 2960-S does not require the switchport trunk encapsulation dot1q command.

interface Port-channel1
 description Etherchannel Link to DC Core for Layer 3
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan [management vlan]
 switchport mode trunk
 logging event link-status
 no shutdown

Reader Tip: The configuration on the data center core Cisco Nexus 5500UP switches for Layer 3 connectivity to the out-of-band management network will be covered in Procedure 5, "Configure management switch connection," in the "Configuring the Data Center Core" process later in this chapter.

Step 3: Save your management switch configuration.

copy running-config startup-config

Example

interface range GigabitEthernet 1/0/23-24
 description Links to DC Core for Layer 3
 channel-protocol lacp
 channel-group 1 mode active
 logging event link-status
 logging event trunk-status
 logging event bundle-status
 no shutdown
!
interface Port-channel 1
 description Etherchannel to DC Core for Layer 3
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 163
 switchport mode trunk
 logging event link-status
 no shutdown
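After both sides are configured, the bundle state can be verified from the management switch. A generic Catalyst check, not part of the original procedure:

```
show etherchannel 1 summary
! a healthy bundle shows the port-channel with flags (SU)
! and each member link flagged (P)
```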

Process: Configuring the Data Center Core (Setup and Layer 2 Ethernet)

1. Establish physical connectivity
2. Perform initial device configuration
3. Configure QoS policies
4. Configure virtual port channel
5. Configure data center core global settings
6. Configure Spanning Tree

This guide refers to one of the two data center core Nexus 5500UP switches as the "first switch" or switch-A, and the other as the "second switch" or switch-B.

Cisco Nexus 5500UP Series offers a simplified software management mechanism based on software licenses. These licenses are enforceable on a per-switch basis and enable a full suite of functionalities. The data center core layer is characterized by a Layer 3 configuration, so the Cisco Nexus 5500UP Series switch requires the Layer 3 license, which enables full Enhanced Interior Gateway Routing Protocol (EIGRP) functionality. The Fibre Channel license will be required when running native Fibre Channel or FCoE.

Procedure 1  Establish physical connectivity

Complete the physical connectivity of the Cisco Nexus 5500UP Series switch pair according to the illustration below.

[Figure: Nexus 5500UP Ethernet vPC switch fabric, showing the vPC peer keepalive and vPC peer link connections, dual-homed and single-homed FEX attachments, and links to the Cisco SBA LAN core]

Step 1: Connect two available Ethernet ports between the two Cisco Nexus 5500UP Series switches. These ports will be used to form the vPC peer-link, which allows the peer connection to form and supports forwarding of traffic between the switches if necessary during a partial link failure of one of the vPC port channels. It is recommended that you use at least two links for vPC peer-link resiliency, although you can add more to accommodate higher switch-to-switch traffic.

Step 2: Connect two available Ethernet ports on each Cisco Nexus 5500UP Series switch to the Cisco SBA LAN core. Four 10-Gigabit Ethernet connections will provide resilient connectivity to the Cisco SBA LAN core with aggregate throughput of 40 Gbps to carry data to the rest of the organization.
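Whether the required license packages are installed can be checked from the switch CLI. This is a standard NX-OS command offered as a generic verification step; exact package names vary by platform and release:

```
show license usage
! lists the installed license packages (for example, the Layer 3
! and Fibre Channel feature packages) and whether each is in use
```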

Step 3: Connect to a dual-homed FEX. The Cisco SBA data center uses pairs of dual-homed FEX configurations for increased resilience and uniform connectivity. To support a dual-homed FEX with single-homed servers, connect fabric uplink ports 1 and 2 on each FEX to two available Ethernet ports, one on each Cisco Nexus 5500UP Series switch. These ports will operate as a port channel to support the dual-homed Cisco FEX configuration. Depending on the model Cisco FEX being used, up to four or eight ports can be connected to provide more throughput from the Cisco FEX to the core switch.

Step 4: Connect to a single-homed FEX. Support single-homed FEX attachment by connecting fabric uplink ports 1 and 2 on the Cisco FEX to available Ethernet ports on only one member of the Cisco Nexus 5500UP Series switch pair. These ports will be a port channel, but will not be configured as a vPC port channel because they have physical ports connected to only one member of the switch pair. Single-homed FEX configurations are beneficial when FCoE connected servers will be connected. Depending on the model Cisco FEX being used, up to four or eight ports can be connected to provide more throughput from the Cisco FEX to the core switch.

Step 5: Connect to the out-of-band management switch. The management ports provide out-of-band management access and transport for vPC peer keepalive packets, which are a part of the protection mechanism for vPC operation. This design uses a physically separate, standalone switch for connecting the management ports of the Cisco Nexus 5500 switches.

Procedure 2  Perform initial device configuration

This procedure configures system settings that simplify and secure the management of the solution. The values and actual settings in the examples provided will depend on your current network configuration.

Table 2 - Common network services used in the deployment examples

Service                                    Address
Domain name                                cisco.local
Active Directory, DNS, DHCP server         10.4.48.10
Cisco ACS server                           10.4.48.15
NTP server                                 10.4.48.17
EIGRP Autonomous System (AS)               100
Cisco Nexus 5500 Switch-A management       10.4.63.10
Cisco Nexus 5500 Switch-B management       10.4.63.11

Step 1: Connect to the switch console interface by connecting a terminal cable to the console port of the first Cisco Nexus 5500UP Series switch (switch-A), and then powering on the system in order to enter the initial configuration dialog box.

Step 2: Run the setup script and follow the Basic System Configuration Dialog for initial device configuration of the first Cisco Nexus 5500UP Series switch. This script sets up a system login password, SSH login, and the management interface addressing. Setup configures only enough connectivity for management of the system. Some setup steps will be skipped and covered in a later configuration step.

Do you want to enforce secure password standard (yes/no): y
Enter the password for "admin":
Confirm the password for "admin":

---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of the system.

February 2013 Series    Ethernet Infrastructure 23
Procedure 2 Perform initial device configuration This procedure configures system settings that simplify and secure the management of the solution.Common network services used in the deployment examples Depending on the model Cisco FEX being used.63.4. Table 2 . February 2013 Series Ethernet Infrastructure 23 .4. The values and actual settings in the examples provided will depend on your current network configuration.Basic System Configuration Dialog ---This setup utility will guide you through the basic configuration of the system. one on each Cisco Nexus 5500UP Series switch. which are a part of the protection mechanism for vPC operation.4. you can connect up to four or eight ports to provide more throughput from the Cisco FEX to the core switch. Single-homed FEX configurations are beneficial when FCoE connected servers will be connected. The management ports provide out-of-band management access and transport for vPC peer keepalive packets. Step 5: Connect to the out-of-band management switch. connect fabric uplink ports 1 and 2 on the Cisco FEX to an available Ethernet port. DHCP server 10. Step 2: Run the setup script and follow the Basic System Configuration Dialog for initial device configuration of the first Cisco Nexus 5500UP Series switch.

255.1 ssh key rsa 2048 force feature ssh no feature telnet no feature http-server ntp server 10.Please register Cisco Nexus 5000 Family devices promptly with your supplier. Failure to register may affect response times for initial service calls.63. Nexus devices must be registered to receive entitled support services.4.63.255.48.4.10 Mgmt0 IPv4 netmask : 255.0.0/0 10.255.4. Use ctrl-c at anytime to skip the remaining dialogs.48.0. Would you like to enter the basic configuration dialog (yes/ no): y Create another login account (yes/no) [n]: n Configure read-only SNMP community string (yes/no) [n]: n Configure read-write SNMP community string (yes/no) [n]: n Enter the switch name : dc5548ax Enable license grace period? (yes/no) [n]: y Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y Mgmt0 IPv4 address : 10. Press Enter at anytime to skip a dialog.0 no shutdown ip route 0.255.0 Configure the default gateway? (yes/no) [y]: y IPv4 address of the default gateway : 10.63.17 Configure default switchport interface state (shut/noshut) [shut]:shut February 2013 Series Configure best practices CoPP profile (strict/moderate/ lenient/skip) [strict]: moderate The following configuration will be applied: password strength-check switchname dc5548ax license grace-period interface mgmt0 ip address 10.1 Configure advanced IP options? (yes/no) [n]:n Enable the ssh service? (yes/no) [y]: y Type of ssh key you would like to generate (dsa/rsa)[rsa] : rsa Number of key bits <768-2048> : 2048 Enable the telnet service? (yes/no) [n]: n Enable the http-server? (yes/no) [y]: n Configure clock? (yes/no) [n]: n Configure timezone? (yes/no) [n]: n Configure summertime? (yes/no) [n]: n Configure the ntp server? 
(yes/no) [n]: y NTP server IPv4 address : 10.4.4.63.17 use-vrf management system default switchport system default switchport shutdown system default switchport trunk mode auto system default zone default-zone permit system default zone distribute full no system default zone mode enhanced policy-map type control-plane copp-system-policy Would you like to edit the configuration? (yes/no) [n]: n Use this configuration and save it? (yes/no) [y]: y [########################################] 100% dc5548ax login: Ethernet Infrastructure 24 .10 255.4.

Step 3: Enable and configure system features.

Because of the modular nature of Cisco NX-OS, processes are only started when a feature is enabled. As a result, commands and command chains only show up after the feature has been enabled. For licensed features, the feature-name command can only be used after the appropriate license is installed. Cisco Nexus 5500UP Series requires a license for Layer 3 operation, Fibre Channel storage protocols, and FCoE N-Port Virtualization (NPV) operation. For more information on licensing, consult the Cisco NX-OS Licensing Guide on www.cisco.com.

The example configurations shown in this guide use the following features.

feature udld
feature interface-vlan
feature lacp
feature vpc
feature eigrp
feature fex
feature hsrp
feature pim
feature fcoe

Tech Tip: Although it is not used in this design, if the Fibre Channel–specific feature NPV is required for your network, you should enable it prior to applying any additional configuration to the switch. The NPV feature is the only feature that when enabled or disabled will erase your configuration and reboot the switch, requiring you to reapply any existing configuration commands to the switch.

Step 4: Configure the name server command with the IP address of the DNS server for the network. At the command line of a Cisco IOS device, it is helpful to be able to type a domain name instead of the IP address.

ip name-server 10.4.48.10

Step 5: Set the local time zone for the device location. In the initial setup script, you set the NTP server address. NTP is designed to synchronize time across all devices in a network for troubleshooting. Now set the local time for the device location.

clock timezone PST -8 0
clock summer-time PDT 2 Sunday march 02:00 1 Sunday nov 02:00 60

Step 6: Define a read-only and a read/write SNMP community for network management.

snmp-server community cisco group network-operator
snmp-server community cisco123 group network-admin

Step 7: If you want to reduce operational tasks per device, configure centralized user authentication by using the TACACS+ protocol to authenticate management logins on the infrastructure devices to the AAA server.

As networks scale in the number of devices to maintain, the operational burden to maintain local user accounts on every device also scales. A centralized AAA service reduces operational tasks per device and provides an audit log of user access for security compliance and root-cause analysis. TACACS+ is the primary protocol used to authenticate management logins on the infrastructure devices to the AAA server. When AAA is enabled for access control, all management access to the network infrastructure devices (SSH and HTTPS) is controlled by AAA. A local AAA user database is also defined in the setup script on each Cisco Nexus 5500 switch to provide a fallback authentication source in case the centralized TACACS+ server is unavailable.

feature tacacs+
tacacs-server host 10.4.48.15 key SecretKey
aaa group server tacacs+ tacacs
 server 10.4.48.15
 use-vrf default
aaa authentication login default group tacacs
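After the features are enabled, their state can be confirmed before continuing. A standard NX-OS check, not part of the original procedure:

```
show feature | include enabled
! lists only the features currently enabled on the switch
```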

Step 8: If operational support is centralized in your network, you can increase network security by using an access list to limit the networks that can access your device. In this example, only devices on the 10.4.48.0/24 network will be able to access the device via SSH or SNMP.

ip access-list vty-acl-in
 permit tcp 10.4.48.0/24 any eq 22
line vty
 ip access-class vty-acl-in in
!
ip access-list snmp-acl
 permit udp 10.4.48.0/24 any eq snmp
snmp-server community cisco use-acl snmp-acl
snmp-server community cisco123 use-acl snmp-acl

Caution: If you configure an access list on the vty interface, you may lose the ability to use SSH to log in from one router to the next for hop-by-hop troubleshooting.

Step 9: Configure port operation mode. The Cisco Nexus 5500UP switch has universal ports that are capable of running Ethernet+FCoE or Fibre Channel on a per-port basis. By default, all switch ports are enabled for Ethernet operation. Fibre Channel ports must be enabled in a contiguous range and be the high numbered ports of the switch baseboard and/or the high numbered ports of a universal port expansion module. In this example, you enable ports 28 through 32 on a Cisco Nexus 5548UP switch as Fibre Channel ports.

slot 1
 port 28-32 type fc

[Figure: Slot 1 (baseboard) Ethernet and FC port ranges, and Slot 2 GEM Ethernet and FC port ranges]

Tech Tip: Changing port type to FC requires a reboot to recognize the new port operation. Ports will not show up in the configuration as FC ports if you did not enable the FCoE feature in Step 3.

Step 10: Save your configuration, and then reload the switch. Because the Cisco Nexus switch requires a reboot to recognize ports configured for Fibre Channel operation, this is a good point for you to reload the switch. If you are not enabling Fibre Channel port operation, you do not need to reload the switch at this point.

copy running-config startup-config
reload

Step 11: On the second Cisco Nexus 5500UP Series switch (switch-B), repeat all of the steps of this procedure (Procedure 2). In Step 2, use a unique device name (dc5548bx) and IP address (10.4.63.11) for the mgmt0 interface; otherwise, all configuration details are identical.

Procedure 3  Configure QoS policies

QoS policies have been created for the Cisco SBA data center to align with the QoS configurations in the Cisco SBA LAN and WAN to protect multimedia streams, control traffic, and FCoE traffic that flow through the data center. This is intended to be a baseline that you can customize to your environment if needed.

QoS policies in this procedure are configured for Cisco Nexus 5500 and 2200 systems globally. The Cisco Nexus 5500 ports can use Layer 2 CoS or Layer 3 DSCP packet marking for queue classification. Cisco Nexus FEX ports can use Layer 2 CoS markings for queuing. The system default FCoE policies are integrated into the overall Cisco SBA policies, to allow for the integration of FCoE-capable devices into the data center without significant additional configuration. At a minimum, it is recommended that FCoE QoS be configured to provide no-drop protected behavior in the data center. If there is not a current or future need to deploy FCoE in the data center, the QoS policy can be adapted to use the standard FCoE qos-group for other purposes.

The bandwidth assignment for FCoE queuing should be adapted to the deployment requirements to guarantee end-to-end lossless treatment. For example, reallocating bandwidths to allow FCoE to assign bandwidth percent 40 would be more appropriate for 4-Gbps Fibre Channel traffic over a 10-Gbps Ethernet link to a server or storage array.

The following configurations will be created:

• Overall system classification via class-map type qos and policy-map type qos configurations will be based on CoS to associate traffic with the system internal qos-groups.
• Interface classification will be based on Layer 3 DSCP values via class-map type qos and policy-map type qos configurations to associate specific IP traffic types with corresponding internal qos-groups.
• System queue scheduling, based on matching qos-group, will be applied to set a priority queue for jitter-sensitive multimedia traffic and to apply bandwidth to weighted round-robin queues via class-map type queuing and policy-map type queuing.
• System queue attributes based on matching qos-group are applied to set Layer 2 MTU and buffer queue-limit.
• System-wide QoS service-policy will be configured in the system QoS configuration.
• Interface QoS service-policy will be defined for later use when configuring Ethernet end points for connectivity.

Step 1: Configure class-map type qos classification for global use, to match specific CoS bits. The match cos is used to match inbound Layer 2 CoS marked traffic. There is an existing system class-default which will automatically match any unmarked packets, unmatched CoS values, and packets marked with a CoS of zero. The FCoE class-map type qos class-fcoe is pre-defined and will be used in the policy map for FCoE traffic to ensure correct operation. Apply the same QoS map to both data center core switches.

class-map type qos match-any PRIORITY-COS
 match cos 5
class-map type qos match-any CONTROL-COS
 match cos 4
class-map type qos match-any TRANSACTIONAL-COS
 match cos 2
class-map type qos match-any BULK-COS
 match cos 1

Step 2: Configure policy-map type qos policy for global use. This creates the CoS-to-internal-qos-group mapping.

policy-map type qos DC-FCOE+1P4Q_GLOBAL-COS-QOS
 class type qos PRIORITY-COS
  set qos-group 5
 class type qos CONTROL-COS
  set qos-group 4
 class type qos class-fcoe
  set qos-group 1
 class type qos TRANSACTIONAL-COS
  set qos-group 2
 class type qos BULK-COS
  set qos-group 3

Step 3: Configure class-map type qos classification for Ethernet interface use. This allows for the mapping of traffic based on IP DSCP into the internal qos-groups of the Cisco Nexus 5500 switch, and also to map traffic destined for the Cisco Nexus 5500 Layer 3 engine for traffic prioritization. All nonmatched traffic will be handled by the system-defined class-default queue.

class-map type qos match-any PRIORITY-QUEUE
 match dscp ef
 match dscp cs5 cs4
 match dscp af41
 match cos 5
class-map type qos match-any CONTROL-QUEUE
 match dscp cs3
 match cos 4
class-map type qos match-any TRANSACTIONAL-QUEUE
 match dscp af21 af22 af23
 match cos 2
class-map type qos match-any BULK-QUEUE
 match dscp af11 af12 af13
 match cos 1

February 2013 Series    Ethernet Infrastructure 27
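Once these classification policies are attached to interfaces later in the procedure, per-class match counters can be inspected. A generic NX-OS check, not from the original text; the interface identifier is an example:

```
show policy-map interface ethernet 1/1
! displays the classes applied on the interface and their
! matched-packet counters
```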

Step 4: Configure policy-map type qos policy to be applied to interfaces, for mapping DSCP classifications into internal qos-groups. Interface policies are created to classify incoming traffic on Ethernet interfaces which are not members of a port-channel. These policies will also be assigned to port-channel virtual interfaces, but not the port-channel member physical interfaces. The internal qos-group number is arbitrarily assigned and does not necessarily match an equivalent CoS value. Five internal qos groups are available for assignment, plus an additional system qos-group 0 which is automatically created for default CoS traffic. The system-defined qos-group 0 does not require definition.

policy-map type qos DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
 class PRIORITY-QUEUE
  set qos-group 5
 class CONTROL-QUEUE
  set qos-group 4
 class TRANSACTIONAL-QUEUE
  set qos-group 2
 class BULK-QUEUE
  set qos-group 3

Step 5: Configure class-map type queuing classification for global use. This matches traffic for queue scheduling on a system-wide basis, based on matching qos-group. The FCoE class-map type queuing class-fcoe is pre-defined and will be used in the policy map for FCoE traffic to ensure correct operation.

class-map type queuing PRIORITY-GROUP
 match qos-group 5
class-map type queuing CONTROL-GROUP
 match qos-group 4
class-map type queuing TRANSACTIONAL-GROUP
 match qos-group 2
class-map type queuing BULK-GROUP
 match qos-group 3

Step 6: Configure policy-map type queuing policy for global use. This creates appropriate system-wide qos-group attributes of bandwidth, priority, or weight, and FCoE lossless scheduling.

policy-map type queuing DC-FCOE+1P4Q_GLOBAL-GROUP-QUEUING
 class type queuing PRIORITY-GROUP
  priority
 class type queuing CONTROL-GROUP
  bandwidth percent 10
 class type queuing class-fcoe
  bandwidth percent 20
 class type queuing TRANSACTIONAL-GROUP
  bandwidth percent 25
 class type queuing BULK-GROUP
  bandwidth percent 20
 class type queuing class-default
  bandwidth percent 25

Step 7: Configure class-map type network-qos class-maps for global use, in order to match to a specific internal qos-group for setting queue attributes. As with the type queuing class-maps, the type network-qos class-maps can use one of five internal groups, along with an additional system configured qos-group 0 which is automatically created for default CoS. The FCoE class-map type network-qos class-fcoe is pre-defined and will be used in the policy map for FCoE traffic to ensure correct operation.

class-map type network-qos PRIORITY-SYSTEM
 match qos-group 5
class-map type network-qos CONTROL-SYSTEM
 match qos-group 4
class-map type network-qos TRANSACTIONAL-SYSTEM
 match qos-group 2
class-map type network-qos BULK-SYSTEM
 match qos-group 3

February 2013 Series    Ethernet Infrastructure 28
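The earlier suggestion of giving FCoE a 40 percent share for 4-Gbps Fibre Channel traffic can be sketched as an alternate queuing policy. The policy name and the reduced percentages for the other queues are illustrative values chosen so the weights still total 100; they are not values from the reference design:

```
! hypothetical reallocation giving FCoE 40 percent (weights sum to 100)
policy-map type queuing DC-FCOE40_GLOBAL-GROUP-QUEUING
 class type queuing PRIORITY-GROUP
  priority
 class type queuing CONTROL-GROUP
  bandwidth percent 10
 class type queuing class-fcoe
  bandwidth percent 40
 class type queuing TRANSACTIONAL-GROUP
  bandwidth percent 20
 class type queuing BULK-GROUP
  bandwidth percent 15
 class type queuing class-default
  bandwidth percent 15
```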

Step 8: Configure a policy-map type network-qos policy for global use. This applies system-wide queue attributes by matching qos-group, with two exceptions: the BULK-SYSTEM queue is assigned additional buffer space and a jumbo MTU of 9216 to improve performance for iSCSI and large data transfer traffic, and the required FCoE queue behavior is configured with the recommended MTU of 2158, no-drop treatment, and the default buffer size of 79,360 bytes. By default, the class-default queue is assigned all remaining buffer space. The remaining queues take the default queue-limit of 22,720 bytes with an MTU of 1500.

The Layer 3 routing engine requires CoS bits to be set for QoS treatment on ingress to and egress from the engine. Setting CoS ensures that traffic destined through the engine to another subnet is handled consistently, and the network-qos policy is where the CoS marking by system qos-group is accomplished.

policy-map type network-qos DC-FCOE+1P4Q_GLOBAL-SYSTEM-NETWORK-QOS
 class type network-qos PRIORITY-SYSTEM
  set cos 5
 class type network-qos CONTROL-SYSTEM
  set cos 4
 class type network-qos class-fcoe
  pause no-drop
  mtu 2158
 class type network-qos TRANSACTIONAL-SYSTEM
  set cos 2
 class type network-qos BULK-SYSTEM
  mtu 9216
  queue-limit 128000 bytes
  set cos 1
 class type network-qos class-default
  multicast-optimize
  set cos 0

Step 9: Apply the created global policies. The output queuing applied with system qos defines how the bandwidth is shared among different queues for Cisco Nexus 5500 and Cisco Nexus FEX interfaces, and also defines how the bandwidth is shared among different queues on the Cisco Nexus 5500 Layer 3 engine.

system qos
 service-policy type qos input DC-FCOE+1P4Q_GLOBAL-COS-QOS
 service-policy type queuing input DC-FCOE+1P4Q_GLOBAL-GROUP-QUEUING
 service-policy type queuing output DC-FCOE+1P4Q_GLOBAL-GROUP-QUEUING
 service-policy type network-qos DC-FCOE+1P4Q_GLOBAL-SYSTEM-NETWORK-QOS

Step 10: If iSCSI is being used, additional classification and queuing can be added to map iSCSI storage traffic into the appropriate queue for bulk data. Classification of iSCSI traffic can be matched by well-known TCP ports through an ACL. The iSCSI class of traffic can then be added to the existing policy map to put the traffic into the correct qos-group.

ip access-list ISCSI
 10 permit tcp any eq 860 any
 20 permit tcp any eq 3260 any
 30 permit tcp any any eq 860
 40 permit tcp any any eq 3260
!
class-map type qos match-all ISCSI-QUEUE
 match access-group name ISCSI
policy-map type qos DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
 class ISCSI-QUEUE
  set qos-group 3

February 2013 Series    Ethernet Infrastructure 29
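After the system policies are applied, per-queue behavior can be checked on any interface. This mirrors the show queuing interface command the guide mentions later; the interface identifier here is only an example:

```
show queuing interface ethernet 1/1
! shows per-qos-group MTU, queue-limit, bandwidth share,
! and drop or no-drop policy
```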

Tech Tip

Use only permit actions in the ACLs for matching traffic for QoS policies on Cisco Nexus 5500.

Use the show queuing interface command to display QoS queue statistics. For more details on configuring QoS policies on Cisco Nexus 5500, please refer to:
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/qos/521_n1_1/b_5k_QoS_Config_521N11_chapter_011.html#task_1135158

The interface QoS service-policy DC-FCOE+1P4Q_INTERFACE-DSCP-QOS created in Step 4 will be assigned later in this guide to:

• Non-FEX Ethernet interfaces on Cisco Nexus 5500.

Example:
interface Ethernet1/1-27
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS

• Ethernet port-channel interfaces on Cisco Nexus 5500. The port-channel member physical links do not require the policy; they will inherit the service policy from the logical port-channel interface. This service policy is not required on port-channels connected to FEX network uplinks.

Example:
interface port-channel 2-3, port-channel 5, port-channel 9
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS

• FEX host port Ethernet interfaces, which are not port-channel members.

Example:
interface Ethernet102/1/1-48, Ethernet104/1/1-32, Ethernet105/1/1-32
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS

Step 11: On the second Cisco Nexus 5500UP Series switch (switch-B), apply the same QoS configuration, Step 1 through Step 10.

Procedure 4 Configure virtual port channel

Before you can add port channels to the switch in virtual port channel (vPC) mode, basic vPC peering must be established between the two Cisco Nexus 5500UP Series switches. The vPC peer link provides a communication path between the data center core switches that allows devices that connect to each core switch for resiliency to do so over a single Layer 2 EtherChannel. The peer link is the primary link for communications and for forwarding of data traffic to the peer switch.

(Figure: the vPC peer link between the two data center core switches, with the vPC peer keepalive carried over the mgmt 0 interfaces)

Step 1: Define a vPC domain number on switch-A. This identifies the vPC domain to be common between the switches in the pair.

vpc domain 10

Step 2: Define a lower role priority for switch-A, the vPC primary switch.

role priority 16000

The vPC secondary switch, switch-B, will be left at the default value of 32667. The switch with the lower priority will be elected as the vPC primary switch. If the vPC primary switch is alive and the vPC peer link goes down, the vPC secondary switch will suspend its vPC member ports to prevent potential looping, while the vPC primary switch keeps all of its vPC member ports active. If the peer link fails, the vPC peer will detect the peer switch's failure through the vPC peer keepalive link.
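The role election behavior described in this procedure can be sketched with a short illustrative Python model. This is not device code; it only demonstrates the "lower priority wins" rule, with 32667 as the default role priority an unconfigured peer uses.

```python
# Illustrative sketch: vPC role election. The peer with the numerically lower
# role priority becomes the vPC primary switch; a switch with no configured
# priority uses the default of 32667.
DEFAULT_ROLE_PRIORITY = 32667

def elect_vpc_primary(peers):
    """peers: list of (name, priority_or_None); returns the primary's name.
    The real protocol breaks exact ties on system MAC; this sketch assumes
    distinct effective priorities."""
    return min(
        peers,
        key=lambda p: p[1] if p[1] is not None else DEFAULT_ROLE_PRIORITY,
    )[0]

# switch-A is configured with role priority 16000; switch-B keeps the default.
print(elect_vpc_primary([("switch-A", 16000), ("switch-B", None)]))  # switch-A
```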

Step 3: Configure vPC peer keepalive on Cisco Nexus 5500 switch-A. The peer-keepalive is ideally an alternate physical path between the two Cisco Nexus 5500UP switches running vPC, to ensure that they are aware of one another's health even in the case where the main peer link fails. The destination address is the mgmt0 interface on the vPC peer. The peer-keepalive source IP address should be the address being used on the mgmt0 interface of the switch currently being configured.

vpc domain 10
  peer-keepalive destination 10.4.63.11 source 10.4.63.10

Step 4: Configure the following vPC commands in the vPC domain configuration mode. This will increase resiliency, optimize performance, and reduce disruptions in vPC operations.

delay restore 360
auto-recovery
graceful consistency-check
peer-gateway
ip arp synchronize

The auto-recovery command has a default timer of 240 seconds. This time can be extended, if required, by adding the reload-delay variable with time in seconds. The auto-recovery feature for vPC recovery replaces the need for the original peer-config-check-bypass feature.

Step 5: Create a port channel interface on switch-A to be used as the peer link between the two vPC switches.

interface port-channel 10
  switchport mode trunk
  vpc peer-link
  spanning-tree port type network
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS

Step 6: On Cisco Nexus 5500UP switch-A, configure the physical interfaces that connect the two Cisco Nexus 5500 switches together to the port channel. A minimum of two physical interfaces is recommended for link resiliency. The channel-group number must match the port-channel number used in the previous step. Different 10-Gigabit Ethernet ports (as required by your specific implementation) may replace the interfaces shown in the example.

interface Ethernet1/17
  description vpc peer link
  switchport mode trunk
  channel-group 10 mode active
  no shutdown
interface Ethernet1/18
  description vpc peer link
  switchport mode trunk
  channel-group 10 mode active
  no shutdown

Step 7: Configure the corresponding vpc commands on Cisco Nexus 5500UP switch-B. Change the destination and source IP addresses for Cisco Nexus 5500UP switch-B.

vpc domain 10
  peer-keepalive destination 10.4.63.10 source 10.4.63.11
  delay restore 360
  auto-recovery
  graceful consistency-check
  peer-gateway
  ip arp synchronize
!
interface port-channel 10
  switchport mode trunk
  vpc peer-link
  spanning-tree port type network
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
!
interface Ethernet1/17
  description vpc peer link
  switchport mode trunk
  channel-group 10 mode active
  no shutdown
interface Ethernet1/18
  description vpc peer link
  switchport mode trunk
  channel-group 10 mode active
  no shutdown

Step 8: Ensure that the vPC peer relationship has formed successfully by using the show vpc command. If the status does not indicate success, double-check the IP addressing assigned for the keepalive destination and source addresses, as well as the physical connections.

dc5548ax# show vpc
Legend:
  (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 10
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : primary
Number of vPCs configured         : 55
Peer Gateway                      : Enabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled

vPC Peer-link status
---------------------------------------------------------------------
id   Port   Status Active vlans
--   ----   ------ ------------------------------------------------
1    Po10   up     1

Step 9: Verify successful configuration by looking for the peer status of "peer adjacency formed ok" and the keepalive status of "peer is alive".

Tech Tip

Do not be concerned about the "(*) - local vPC is down, forwarding via vPC peer-link" statement at the top of the command output at this time. After you have defined vPC port channels, and if one of their member links is down or not yet configured, this information becomes a legend that shows the meaning of an asterisk next to your port channel in the listing.

Procedure 5 Configure data center core global settings

The data center core requires basic core operational configuration beyond the setup script. The VLAN assignments build on the assignments in the Server Room Deployment Guide; the VLAN usage remains the same for Servers and Firewall. However, as you review configuration guidance, you will notice a difference in the IP address ranges used.

Table 3 - Cisco SBA data center VLANs

VLAN  VLAN name      IP address     Comments
148   Servers_1      10.4.48.0/24   General network server use
149   Servers_2      10.4.49.0/24   General server use
150   Servers_3      10.4.50.0/24   Used in the "Application Resiliency" chapter for the server load balancing VLAN
153   FW_Outside     10.4.53.0/25   Used for firewall outside interface routing
154   FW_Inside_1    10.4.54.0/24   Used in the "Network Security" chapter for firewall-protected servers
155   FW_Inside_2    10.4.55.0/24   Used in "Network Security" for firewall+IPS protected servers
156   PEERING_VLAN   10.4.56.0/30   Cisco Nexus 5500 intra-data center Layer 3 peering link
161   VMotion        10.4.61.0/24   Reserved for VMware VMotion traffic (future use)
162   iSCSI          10.4.62.0/24   Reserved for iSCSI storage traffic
163   DC-Management  10.4.63.0/24   Out-of-band data center management VLAN

In this deployment guide, we have used the third octet of the IP address and added 100 in order to determine the VLAN number for easier reference. Adding 100 prevents a VLAN number from being one or zero, which can be a problem on some devices, while still making the VLAN ID easy to remember.
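The "third octet plus 100" VLAN numbering convention can be expressed as a one-line mapping. The sketch below is illustrative only, assuming subnets follow the 10.4.x.0 addressing used in this design.

```python
# Illustrative sketch of the VLAN numbering convention: VLAN ID equals the
# third octet of the server subnet plus 100, keeping VLANs 0 and 1 out of
# the range while staying easy to reverse by inspection.
def vlan_for_subnet(subnet: str) -> int:
    third_octet = int(subnet.split(".")[2])
    return third_octet + 100

print(vlan_for_subnet("10.4.48.0"))  # 148 (Servers_1)
print(vlan_for_subnet("10.4.63.0"))  # 163 (DC-Management)
```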
Adding 100 prevents a VLAN number from being one or zero.0/30 Cisco Nexus 5500 intra-data center Layer 3 peering link 161 VMotion 10. However. After you have defined vPC port channels and if one of its member links is down or not yet configured. this information becomes a legend that shows the meaning of an asterisk next to your port channel in the listing.49. Table 3 . while still making the VLAN ID easy to remember.local vPC is down.4. In this deployment guide. The VLAN assignments build on the assignments in the Server Room Deployment Guide. Tech Tip Do not be concerned about the “(*) . dc5548ax# show vpc Legend: (*) . As you review configuration guidance. the VLAN usage remains the same for Servers and Firewall. forwarding via vPC peer-link” statement at the top of the command output at this time.0/24 Out-of-band data center management VLAN Ethernet Infrastructure 32 .0/24 Used in the “Application Resiliency” chapter for the server load balancing VLAN 150 Servers_3 10.

Step 1: Create the necessary VLANs for data center operation.

vlan [vlan number]
  name [vlan name]

Step 2: Configure an in-band management interface. This example uses an IP address out of the data center core addressing with a 32-bit address (host) mask.

interface loopback 0
  ip address 10.4.56.254/32
  ip pim sparse-mode

The loopback interface is a logical interface that is always reachable as long as the device is powered on and any IP interface is reachable to the network. Because of this capability, the loopback address is the best way to manage the switch in-band, and it provides an additional management point to the out-of-band management interface. Layer 3 processes and features are also bound to the loopback interface to ensure process resiliency. The loopback interface for Cisco Nexus 5500UP switch-B will be 10.4.56.253/32.

Step 3: Bind the device process for TACACS+ to the loopback address for optimal resilience.

aaa group server tacacs+ tacacs
  source-interface loopback 0

Step 4: Configure EtherChannel port channels to use Layer 3 IP address and Layer 4 port number for load-balance hashing. This optimizes load balancing on EtherChannel links and improves throughput to the Layer 3 routing engine in the Cisco Nexus 5500UP switch.

port-channel load-balance ethernet source-dest-port

Procedure 6 Configure Spanning Tree

Although this architecture is built without any Layer 2 loops, it is a good practice to assign spanning-tree root to the core switches. For data center environments where Layer 2 switches running spanning tree are connected to the data center core, this design assigns spanning-tree root for a range of VLANs that may be contained in the data center.

Cisco Nexus 5500UP runs Rapid PVST+ by default. Rapid Per-VLAN Spanning-Tree (PVST+) provides an instance of RSTP (802.1w) per VLAN, and greatly improves the detection of indirect failures or link-up restoration events over classic spanning tree (802.1D).

Cisco NX-OS version 5.2(1)N1(1) introduced vPC peer-switch, a new spanning tree feature which can reduce spanning tree convergence. This feature allows a pair of Cisco Nexus 5500 Series devices to appear as a single spanning tree root in the Layer 2 topology, eliminates the need to pin the spanning tree root to the vPC primary switch, and improves vPC convergence if the vPC primary switch fails. In vPC peer switch mode, spanning tree bridge protocol data units (BPDUs) are sent from both vPC peer devices for the same vPC in order to avoid issues related to spanning tree BPDU timeout on the downstream switches, which can cause traffic disruption. To avoid loops, the vPC peer link is excluded from the spanning tree computation.

BPDU Guard

Spanning tree edge ports are interfaces which are connected to hosts and can be configured as either an access port or a trunk port. An edge-port-configured interface immediately transitions to the forwarding state, without moving through the spanning tree blocking or learning states. (This immediate transition is also known as Cisco PortFast.) An edge-port-configured interface receives a BPDU when an invalid configuration exists, such as when an unauthorized device is connected. The BPDU Guard feature prevents loops by moving a nontrunking interface into an errdisable state when a BPDU is received on an interface with PortFast enabled. BPDU Guard protects against a user plugging a switch into an access port, which could cause a catastrophic, undetected spanning-tree loop.

Layer 2 switches connected to the data center core

Although not required in the Cisco SBA data center design, some data centers will have the need to connect some existing Layer 2 switches to their data center core switches.
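The BPDU Guard behavior on edge ports can be summarized in a tiny illustrative sketch; this is a conceptual Python model of the state change, not switch code.

```python
# Illustrative sketch of BPDU Guard on a PortFast edge port: the port begins
# forwarding immediately, but if a BPDU ever arrives (meaning a switch, not a
# host, is attached), the port is moved to the errdisable state.
def edge_port_state(bpdu_received: bool) -> str:
    return "errdisable" if bpdu_received else "forwarding"

print(edge_port_state(False))  # forwarding: a host is attached, no BPDUs seen
print(edge_port_state(True))   # errdisable: an unauthorized switch sent a BPDU
```

An errdisabled port stays down until an operator intervenes (or an errdisable recovery timer, if configured, re-enables it), which is what makes the guard an effective protection against accidental loops.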

using a Ethernet port channel from the Layer 2 switch to the core as show in the figure below. Configure spanning tree with vPC peer switch Step 1: Configure Cisco Nexus 5500UP switch-A for peer-switch in the vpc domain that you configured earlier in Procedure 4. “Configure virtual port channel. spanning-tree vlan 1-1000 root primary Ethernet Infrastructure 34 . the peer-switch feature is not supported and you should follow Option 2: “Configure standard spanning tree operation” below. Disable the interface if another switch is plugged into the port. In this scenario. Option 1. February 2013 Series Step 1: Configure Cisco Nexus 5500UP switch-A as the spanning tree primary root. Please make sure to configure spanning tree “bridge” priority as per recommended guidelines to make vPC peer-switch operational. Step 2: Configure Cisco Nexus 5500UP switch-A spanning tree priority. the vPC peers use the same STP root ID as well same bridge ID. If all Layer 2 switches connected to the data center core are vPC connected.e. i. then you can use the vPC peer-switch Option 1: “Configure spanning tree with vPC peer switch” below. spanning-tree vlan 1-1000 priority 8192 Step 3: Configure BPDU Guard globally to protect spanning tree port type edge configured interfaces. there is no impact on north/south traffic but east-west traffic will be lost (black-holed). With the peer link failed. This design assigns spanning-tree root for a range of VLANs that may be contained in the data center.vPC peer switch can be used with the pure peer switch topology in which the devices all belong to the vPC.. Configure standard spanning tree operation This design assigns primary and secondary spanning-tree root for a range of VLANs that may be contained in the data center. you can lose traffic.” vpc domain 10 peer-switch exit You will receive a warning message on your console: “%STP-2-VPC_ PEERSWITCH_CONFIG_ENABLED: vPC peer-switch configuration is enabled. 
VLAN 148 VLAN 148 Layer 2 EtherChannels vPC Peer Keepalive vPC Domain 2217 vPC Peer Link If you will have a hybrid of vPC and spanning-tree-based redundancy connection. The access switch traffic is split in two with half going to the first vPC peer and the other half to the second vPC peer. vpc domain 10 peer-switch exit spanning-tree vlan 1-1000 priority 8192 spanning-tree port type edge bpduguard default Option 2. as shown in the figure below. If the vPC peer-link fails in a hybrid peer-switch configuration. spanning-tree port type edge bpduguard default Step 4: Configure Cisco Nexus 5500UP switch-B to match.

Step 2: Configure Cisco Nexus 5500UP switch-B as the spanning tree secondary root.

spanning-tree vlan 1-1000 root secondary

Step 3: Configure BPDU Guard globally to protect spanning tree port type edge configured interfaces. Disable the interface if another switch is plugged into the port.

spanning-tree port type edge bpduguard default

Process: Configuring the Data Center Core IP Routing

1. Configure the IP routing protocol
2. Configure IP routing for VLANs
3. Configure IP Multicast routing
4. Configure connectivity to the SBA LAN core
5. Configure management switch connection
6. Configure vPC object tracking

The Cisco SBA data center design configures IP routing on the Cisco Nexus 5500 core switches to allow the core to provide Layer 2 and Layer 3 switching for the data center servers and services.

Procedure 1 Configure the IP routing protocol

Step 1: Configure EIGRP as the IP routing protocol. EIGRP is the IP routing protocol used in the data center to be compatible with the Cisco SBA foundation LAN core and WAN. This example uses the same routing process ID so that routes can be exchanged with the LAN core. In this configuration, the only parameter configured under the EIGRP process (router eigrp 100) is the router-ID. The loopback 0 IP address is used for the EIGRP router ID.

router eigrp 100
  router-id 10.4.56.254

The router ID for Cisco Nexus 5500UP switch-B will be 10.4.56.253.

Step 2: Configure EIGRP on Layer 3 interfaces. Cisco NX-OS routing configuration follows an interface-centric model. Instead of adding networks to be advertised via network statements, EIGRP is enabled on a per-interface basis. Each Layer 3 interface that carries a network that may be advertised via EIGRP requires the ip router eigrp statement.

interface loopback 0
  ip router eigrp 100

Step 3: Configure the core Layer 3 peering link. To pass EIGRP routing updates between routing peers, EIGRP must be enabled on each end of a Layer 3 link. To avoid unnecessary EIGRP peering between the core data center switches across all data center VLAN-switched virtual interfaces, a single link will be used for active EIGRP peering in the data center core.

interface Vlan 156
  ip address 10.4.56.1/30
  ip router eigrp 100
  ip pim sparse-mode
  no shutdown

The peer Cisco Nexus 5500UP switch-B will use IP address 10.4.56.2/30.

Configure a priority greater than 100 for the primary HSRP peer. Step 1: Configure the SVI.1 no shutdown description Servers_1 Ethernet Infrastructure 36 .4. To avoid unnecessary EIGRP peer processing. It is recommended that Internet Control Message Protocol (ICMP) IP redirects in vPC environments be disabled on SVIs for correct operation. no ip redirects Step 4: Configure the EIGRP process number on the interface. interface Vlan148 no ip redirects ip address 10. Step 2: Configure the IP address for the SVI interface. hsrp [group number] priority [priority] ip [ip address of hsrp default gateway] February 2013 Series • The following is an example configuration for the Cisco Nexus 5500UP switch-A. ip router eigrp 100 Step 5: Configure passive mode EIGRP operation. ip address [ip address]/mask Step 3: Disable IP redirect on the SVI. For ease of use. configure server VLANs as passive.48. interface Vlan [vlan number] Tech Tip Both data center core Cisco Nexus 5500UP switches can process packets for the assigned ip address of their SVI and for the HSRP address. This advertises the subnet into EIGRP.3/24 ip router eigrp 100 ip passive-interface eigrp 100 ip pim sparse-mode hsrp 148 ip 10. interface Vlan148 no ip redirects ip address 10. In a vPC environment. and leave the second switch at the default priority of 100.2/24 ip router eigrp 100 ip passive-interface eigrp 100 ip pim sparse-mode hsrp 148 priority 110 ip 10.48.Procedure 2 Configure IP routing for VLANs Every VLAN that needs Layer 3 reachability between VLANs or to the rest of the network requires a Layer 3 switched virtual interface (SVI) to route packets to and from the VLAN. a packet to either switch destined for the default gateway (HSRP) address is locally switched and there is no need to tune aggressive HSRP timers to improve convergence time.4.1 no shutdown description Servers_1 • The following is an example configuration for the peer Cisco Nexus 5500UP switch-B.48.48. 
ip passive-interface eigrp 100 Step 6: Configure HSRP.4. number the HSRP group number the same as the SVI VLAN number. The Cisco Nexus 5500UP switches use HSRP to provide a resilient default gateway in a vPC environment.4.

4.40.40.4.4.58 10.40.61 — Te4/7 dc5500-B Table 5 .58 10.50 10.4.58 10. Configure connectivity to the SBA LAN core Table 6 . If your design has a single resilient Cisco Catalyst 4500 with redundant supervisors and redundant line cards.40.49 Te1/4/6 e1/20 10.4. you will instead connect each data center Cisco Nexus 5500UP switch to each of the redundant line cards.4.Example data center to collapsed LAN core with Catalyst 4500 Data Center Core LAN Core Switch Port IP Address IP Address C4500 dc5500-A e1/19 10.4. The configuration of IP Multicast for the rest of the network can be found in the Cisco SBA— Borderless Networks LAN Deployment Guide. Step 1: Configure the data center core switches to discover the IP Multicast rendezvous point (RP) from the Cisco SBA LAN core. It must not be defined in the VLAN database commands and does not get included in the VLAN allowed list for the vPC core.Example data center to LAN core with standalone Catalyst 6500 switches Data Center Core Port IP Address IP Address C6500-1 C6500-2 dc5500-A e1/19 10. ip pim auto-rp forward listen Step 2: Configure an unused VLAN for IP Multicast replication synchronization between the core Cisco Nexus 5500UP switches. The ip pim auto-rp forward listen command allows for discovery of the RP across ip pim sparse-mode links.57 Te2/4/6 e1/19 10.62 10. It will automatically program packet replication across the vPC peer link trunk when needed.40.40.4. Table 4 .53 Te1/4/8 e1/20 10.4.Example data center to collapsed LAN core with Catalyst 6500 VSS pair Data Center Core no ip igmp snooping mrouter vpc-peer-link Step 4: Configure all Layer 3 interfaces for IP Multicast operation with the pim sparse-mode command.40.40.40.4.61 Te2/7 dc5500-B Ethernet Infrastructure 37 .4.40.4.4.62 10.57 — Te4/7 e1/19 10.40.4.40.4.53 Te1/7 e1/20 10.4.40.49 Te1/4 e1/20 10. 
This design will use dual-homed point-to-point Layer 3 interfaces between each data center core Cisco Nexus 5500UP switch to each Cisco Catalyst 6500 core LAN switch for data to and from the data center to the rest of the network.49 Te 4/7 — e1/20 10.40.40.53 Te4/8 — e1/20 10.4.40.40.40.40.4.50 10.57 Te2/4 e1/19 10.40.4.4.4.54 10.40.61 Te2/4/8 dc5500-B Step 3: Configure IP Multicast to only be replicated across the vPC peer link when there is an orphan port of a vPC. ip pim sparse-mode It is not necessary to configure IP Multicast on the management VLAN interface (interface vlan 163).Procedure 3 Configure IP Multicast routing The Cisco SBA Foundation LAN network enables IP Multicast routing for the organization by using pim sparse-mode operation.54 10.40. Every Layer 3 switch and router must be configured to discover the IP Multicast RP. vpc bind-vrf default vlan 900 Procedure 4 Virtual Port Channel does not support peering to another Layer 3 router across a vPC.62 10.40.4.4.50 10. February 2013 Series LAN Core Switch Port IP Address IP Address C6500VSS dc5500-A e1/19 10.54 10.4.40. LAN Core Switch Tech Tip The VLAN used for the IP Multicast bind-vrf cannot appear anyplace else in the configuration of the Cisco Nexus 5500UP switches.

4.40.It is recommended you have at least two physical interfaces from each switch connected to the network core.62/30 ip router eigrp 100 ip pim sparse-mode service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS Step 3: On the Cisco SBA LAN Core 6500 switches. configure two point-to-point Layer 3 interfaces.252 ip pim sparse-mode macro apply EgressQoSTenOrFortyGig no shutdown interface TenGigabitEthernet4/8 description DC5548b Eth1/19 no switchport ip address 10. Each link will be configured as a point-to-point Layer 3 link with IP multicast.40.255.255.54/30 ip router eigrp 100 ip pim sparse-mode service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS Cisco SBA LAN Core 6500s 2227 Layer 3 Links Step 1: On data center core Cisco Nexus 5500UP switch-A. Cisco SBA Data Center Core Step 2: On data center core Cisco Nexus 5500UP switch-B.4. and QoS. interface TenGigabitEthernet4/7 description DC5548a Eth1/19 no switchport ip address 10.53 255.255.49 255. configure two point-to-point Layer 3 interfaces.50/30 ip router eigrp 100 ip pim sparse-mode service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS interface Ethernet1/20 description Core-2 Ten4/7 no switchport ip address 10. configure two links.40. for a total port channel of four resilient physical 10-Gigabit Ethernet links and 40 Gbps of throughput.4.40. EIGRP routing.40. interface Ethernet1/19 description Core-1 Ten4/8 no switchport ip address 10.252 ip pim sparse-mode macro apply EgressQoSTenOrFortyGig no shutdown Ethernet Infrastructure 38 .40.255.4.4. interface Ethernet1/19 description Core-1 Ten4/7 no switchport ip address 10. • On the first Cisco Catalyst LAN core switch.4. configure the four corresponding point-to-point Layer 3 links.58/30 ip router eigrp 100 ip pim sparse-mode service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS February 2013 Series interface Ethernet1/20 description Core-2 Ten4/8 no switchport ip address 10.

For resiliency. the Ethernet out-of-band management switch will be dualhomed to each of the data center core switches by using a vPC port channel.40. you should be able to see the IP routes from the rest of the network on the core Cisco Nexus 5500UP switches. interface Ethernet1/21 description Link to Management Switch for VL163 Routing switchport mode trunk switchport trunk allowed vlan 163 speed 1000 channel-group 21 mode active Ethernet Infrastructure 39 .255.4.255. you configured the switch for Layer 2 operation and uplinks to the data center core as the option of providing Layer 3 access to the management VLAN to provide access beyond the data center. vlan 163 name DC_Management Step 2: Configure vPC port channel to the Ethernet management switch.4. configure two links. If you have selected this option to provide Layer 3 access to the out-of-band Ethernet VLAN.40. You will configure the same values on each data center core Cisco Nexus 5500UP switch. Procedure 5 Configure management switch connection The first process of this “Ethernet Infrastructure” chapter covered deploying an out-of-band Ethernet management switch.61 255. interface port-channel21 description Link to Management Switch for VL163 switchport mode trunk switchport trunk allowed vlan 163 speed 1000 vpc 21 service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS Step 3: Configure the physical ports to belong to the port channel.• On the second Cisco Catalyst LAN core switch. You will configure the same values on each data center core Cisco Nexus 5500UP switch.252 ip pim sparse-mode macro apply EgressQoSTenOrFortyGig no shutdown Step 1: Configure the Ethernet out-of-band management VLAN. February 2013 Series Out of Band Ethernet Switch Mgmt 0 Mgmt 0 2222 interface TenGigabitEthernet4/7 description DC5548a Eth1/20 no switchport ip address 10. In that process.57 255.255. You will configure the same values on each data center core Cisco Nexus 5500UP switch. 
follow this procedure to program the uplinks and the Layer 3 SVI on the Cisco Nexus 5500UP switches. interface TenGigabitEthernet4/8 description DC5548b Eth1/20 no switchport ip address 10.255.252 ip pim sparse-mode macro apply EgressQoSTenOrFortyGig no shutdown At this point.

2/24 ip router eigrp 100 ip passive-interface eigrp 100 hsrp 163 priority 110 ip 10. that switch will relinquish vPC domain control to the peer data center core Nexus 5500 switch. track 1 interface port-channel10 line-protocol track 2 interface Ethernet1/19 line-protocol track 3 interface Ethernet1/20 line-protocol Step 2: Configure a track list on each data center core switch that contains all of the objects being tracked in the previous step.63. vpc domain 10 track 10 Ethernet Infrastructure 40 .4.63.63.4.4. February 2013 Series SBA Data Center Core Nexus 5500s vPC Peer Link e 1/20 Point-to-Point Layer 3 Routed Links to Core • Configure data center core Cisco Nexus 5500UP switch-B. Use a boolean or condition in the command to indicate that all three objects must be down for the action to take effect. you can track the state of the Layer 3 links to the Cisco SBA LAN core and the vPC peer link port channel. vPC Primary vPC Peer Keepalive X Port-Channel 10 X e 1/19 X X X Procedure 6 Configure vPC object tracking If you want to provide the ability to monitor the state of critical interfaces on a Cisco Nexus 5500 data center core switch in order to influence vPC operations and prevent potential outages.3/24 ip router eigrp 100 ip passive-interface eigrp 100 hsrp 163 ip 10. Data Center Servers and Services • Configure the data center core Cisco Nexus 5500UP switch-A.1 vPC Secondary SBA LAN Core 2284 interface Vlan163 description DC-Management no ip redirects ip address 10.4. interface Vlan163 description DC-Management no ip redirects ip address 10.Step 4: Configure an SVI interface for VLAN 163. The signaling of vPC peer switchover requires the vPC peer keepalive link between the Nexus 5500 switches to remain operational in order to communicate vPC peer state. in the figure below.63.1 SBA Rest of Network Step 1: Configure the interfaces to track on each data center core switch. you can track interfaces and enable an action. 
You can then program the switch such that if all three of these tracked interfaces on the switch are in a down state at the same time on a data center core Nexus 5500 switch. track 10 object object object list boolean or 1 2 3 Step 3: Configure the vPC process on each data center core switch to monitor the track list created in the previous step. As an example.
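The boolean-or evaluation of the track list can be illustrated with a short Python sketch. This is a conceptual model, not device code: the list stays up while any member is up, and only goes down (triggering the vPC action) when every tracked interface is down.

```python
# Illustrative sketch of "track list ... boolean or": the list is up if ANY
# tracked object is up, so the vPC switchover action fires only when all of
# the tracked interfaces (peer link plus both core uplinks) are down at once.
def track_list_up(member_states):
    """member_states: iterable of booleans, one per tracked interface."""
    return any(member_states)

# Peer link down but one Layer 3 core uplink still up: no action taken.
print(track_list_up([False, True, False]))   # True (list still up)
# Peer link and both core uplinks down: the list goes down, action fires.
print(track_list_up([False, False, False]))  # False
```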

Configure Fabric Extender connections 2. Configure end node ports • Cisco FEX ports do not support connectivity to LAN switches that generate spanning-tree BPDU packets. There are some design rules to be aware of when connecting devices to Cisco FEX ports: Procedure 1 Configure Fabric Extender connections When assigning Cisco FEX numbering. interface Ethernet1/13 channel-group 102 ! interface Ethernet1/14 channel-group 102 Ethernet Infrastructure 41 . it will shut down with an Error Disable status. such as a rack number that is specific to your environment. Every end node or server connected to a dual-homed FEX is logically dual homed to each of the Cisco Nexus 5500 core switches and will have a vPC automatically generated by the system for the Ethernet FEX edge port. February 2013 Series 2228 Cisco Fabric Extender (FEX) ports are designed to support end host connectivity. • Cisco Fabric Extender connections are also configured as port channel connections on Cisco Nexus 5500 Series for uplink resiliency and load sharing. Option 1. • If the Cisco FEX is to be dual-homed to both members of the vPC switch pair to support single-homed servers or for increased resiliency. • Cisco FEX ports do not support connectivity to Layer 3 routed ports where routing protocols are exchanged with the Layer 3 core. If a Cisco FEX port receives a BPDU packet. Step 1: Assign the physical interfaces on the connected Cisco Nexus 5500 switch to the port channels that are the supporting Cisco FEX attachment. Configure single-homed FEX A single-homed FEX requires configuration for the FEX and the uplinks on the Cisco Nexus 5500 switch it is connected to. • If the Cisco FEX is to be single-homed to only one member of the switch pair. • The Cisco Nexus 5500UP switch running Layer 3 routing supports a maximum of sixteen connected Cisco FEX on a switch. 
it is configured as a standard port channel.Single-homed FEX 102 Process Configuring Fabric Extender Connectivity PoCh-102 Dual-homed FEX 104 Single-homed FEX 103 PoCh-104 vPC-104 PoCh-103 1. it is configured as a vPC on the port channel. These Ethernet interfaces form the uplink port channel to the connected FEX. you have the flexibility to use a numbering scheme (different from the example) that maps to some other identifier. they are only for Layer 2–connected end hosts or appliances.

interface Ethernet1/25 channel-group 104 ! interface Ethernet1/26 channel-group 104 February 2013 Series Ethernet Infrastructure 42 . interface port-channel104 description dual-homed 2232 switchport mode fex-fabric fex associate 104 vpc 104 no shutdown Step 3: Repeat the configuration on the second connected Cisco Nexus 5500 switch. The switchport mode fex-fabric command informs the Cisco Nexus 5500UP Series switch that a fabric extender should be at the other end of this link. The switchport mode fex-fabric command informs the Cisco Nexus 5500UP Series switch that a fabric extender should be at the other end of this link. interface Ethernet1/13 channel-group 103 ! interface Ethernet1/14 channel-group 103 ! interface port-channel103 description single-homed 2248 switchport mode fex-fabric fex associate 103 no shutdown Option 2. The vpc command creates the dual-homed port-channel for the dualhomed FEX. These Ethernet interfaces form the uplink port channel to the connected FEX. interface Ethernet1/25 channel-group 104 ! interface Ethernet1/26 channel-group 104 ! interface port-channel104 description dual-homed 2232 switchport mode fex-fabric fex associate 104 vpc 104 no shutdown Step 1: Assign the physical interfaces on the first connected Cisco Nexus 5500 switch to the port channels that are the supporting Cisco FEX attachment. Step 2: Configure port channel interfaces on the first connected Cisco Nexus 5500 switch to support the dual-homed Cisco FEX attachment. Configure dual-homed FEX A dual-homed FEX requires configuration for the FEX and the uplinks on both of the Cisco Nexus 5500 switches it is connected to.Step 2: Configure port channel interfaces to support the single-homed FEX attachment. interface port-channel102 description single-homed 2248 switchport mode fex-fabric fex associate 102 no shutdown Step 3: Configure the second single-homed FEX to the second Cisco Nexus 5500 switch.

After configuration is completed for either FEX attachment model, you can power up the FEX and verify the status of the fabric extender modules by using the show fex command and then looking for the state of "online" for each unit.

dc5548ax# show fex
FEX     FEX          FEX     FEX
Number  Description  State   Model             Serial
---------------------------------------------------------
102     FEX0102      Online  N2K-C2248TP-1GE   SSI14140643
104     FEX0104      Online  N2K-C2232PP-10GE  SSI142602QL

Tech Tip

It may take a few minutes for the Cisco FEX to come online after it is programmed, because the initial startup of the Cisco FEX downloads operating code from the connected Cisco Nexus switch.

Procedure 2 Configure end node ports

When configuring Cisco Nexus FEX Ethernet ports for server or appliance connectivity, you must configure the port on one or both of the Cisco Nexus 5500UP core switches, depending on the FEX connection (single-homed or dual-homed).

Option 1. Single-homed server to dual-homed FEX

Because the server is connected to a dual-homed FEX, this configuration must be done on both Cisco Nexus 5500UP data center core switches. The spanning-tree mode, VLAN list, and other characteristics for the Ethernet port should be identically programmed on each Cisco Nexus 5500UP switch. Failure to configure the port on both Nexus 5500 switches with matching VLAN assignment will prevent the Ethernet interface from being activated.

Step 1: When connecting a single-homed server to a dual-homed Cisco FEX, assign physical interfaces to support servers or devices that belong in a single VLAN as access ports. Setting the spanning-tree port type to edge allows the port to provide immediate connectivity on the connection of a new device. Enable QoS classification for the connected server or end node as defined in Procedure 3, "Configure QoS policies."

Example

interface Ethernet103/1/1
  switchport access vlan 148
  spanning-tree port type edge
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS

Tech Tip

You must assign the Ethernet interface configuration on both data center core Cisco Nexus 5500UP switches, because the host is dual homed as a result of being on a dual-homed Cisco FEX.

Step 2: When connecting a single-homed server to a dual-homed Cisco FEX, assign physical interfaces to support servers or devices that require a VLAN trunk interface to communicate with multiple VLANs. Most virtualized servers will require trunk access to support management access plus user data for multiple virtual machines. Setting the spanning-tree port type to edge allows the port to provide immediate connectivity on the connection of a new device. Enable QoS classification for the connected server or end node as defined in Procedure 3, "Configure QoS policies."

Example

interface Ethernet103/1/2
  switchport mode trunk
  switchport trunk allowed vlan 148-163
  spanning-tree port type edge trunk
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
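Because a port on a dual-homed FEX must be programmed identically on both core switches, it helps to compare the resulting configuration on each peer. A verification sketch using the example interfaces above:

```
show running-config interface ethernet 103/1/1   ! access VLAN and QoS policy should match on both peers
show interface ethernet 103/1/2 trunk            ! allowed VLAN list 148-163 should match on both peers
```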

Option 2. Dual-homed server using EtherChannel to two single-homed FEX

Because the server is dual-homed using vPC EtherChannel, this configuration must be done on both Cisco Nexus 5500UP data center core switches.

[Figure: a dual-homed server connects with port channel 600 (vPC-600) through single-homed FEX 102 (PoCh-102) and single-homed FEX 103 (PoCh-103) to the Nexus 5500UP Ethernet vPC switch fabric]

When connecting a dual-homed server that is using IEEE 802.3ad EtherChannel from the server to a pair of single-homed Cisco FEX, you must configure the Cisco FEX Ethernet interface as a port channel and assign a vPC interface to the port channel to talk EtherChannel to the attached server.

Tech Tip

When connecting ports via vPC, Cisco NX-OS does consistency checks to make sure that the VLAN list, spanning-tree mode, and other characteristics match between the ports configured on each switch that make up a vPC. If the configuration for each port is not identical with the other, the port will not come up.

Example

Step 1: On Cisco Nexus 5500 switch-A.

interface ethernet102/1/1-2
  switchport mode trunk
  switchport trunk allowed vlan 148-163
  spanning-tree port type edge trunk
  channel-group 600
  no shutdown
!
interface port-channel 600
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
  vpc 600
  no shutdown

Step 2: On Cisco Nexus 5500 switch-B.

interface ethernet103/1/1-2
  switchport mode trunk
  switchport trunk allowed vlan 148-163
  spanning-tree port type edge trunk
  channel-group 600
  no shutdown
!
interface port-channel 600
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
  vpc 600
  no shutdown

Option 3. Dual-homed server using EtherChannel to two dual-homed FEX

This connectivity option, referred to as enhanced vPC, requires Cisco NX-OS release 5.1(3)N1(1) or later for the Cisco Nexus 5500 switches. Dual-homing a server with EtherChannel to a dual-homed FEX is not supported on the older Cisco Nexus 5000 switches.

When connecting a dual-homed server that is using IEEE 802.3ad EtherChannel from the server to a pair of dual-homed Cisco FEX, you must configure the Ethernet interface on each of the Cisco FEX interfaces as a port channel but not as a vPC. The Cisco Nexus 5500 switches will automatically create a vPC to track the dual-homed port channel.
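Since NX-OS suspends a vPC member port when the two peers' settings disagree, checking vPC consistency after configuring both switches can save troubleshooting time. A sketch for the vPC 600 example above:

```
show vpc 600                              ! status should be up with a successful consistency check
show vpc consistency-parameters vpc 600   ! lists per-parameter values from both peers to spot mismatches
```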

In this configuration option, you use FEX numbers 106 and 107. Both FEX would have to be configured as dual-homed to the Cisco Nexus 5500 data center core switches as defined in Option 2, "Configure dual-homed FEX."

[Figure: a dual-homed server connects with port channel 1002 through dual-homed FEX 106 (PoCh-106, vPC-106) and dual-homed FEX 107 (PoCh-107, vPC-107) to the Nexus 5500UP Ethernet vPC switch fabric]

Step 1: Configure the Ethernet interfaces of the first dual-homed FEX on Cisco Nexus 5500 switch-A for a port channel to the server.

interface ethernet106/1/3-4
  channel-group 1002

Step 2: Configure the Ethernet interfaces of the second dual-homed FEX on Cisco Nexus 5500 switch-A for a port channel to the server.

interface ethernet107/1/3-4
  channel-group 1002

Step 3: Configure the port channel for the VLANs to be supported.

interface port-channel 1002
  switchport mode trunk
  switchport trunk allowed vlan 148-163
  spanning-tree port type edge trunk
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS

Step 4: Repeat the commands on Cisco Nexus 5500 switch-B with the same settings, because the server and the FEX are dual-homed.
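With enhanced vPC, the switches create and track the vPC for the server port channel automatically, so verification again comes down to port-channel and vPC state. A sketch based on the FEX 106/107 example; output formats vary by release.

```
show port-channel summary   ! Po1002 should list ethernet106/1/3-4 and ethernet107/1/3-4 as (P) members
show vpc                    ! the automatically created vPC for Po1002 appears in the vPC table
```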

Storage Infrastructure

Business Overview

There is a constant demand for more storage in organizations today. Storage for servers can be physically attached directly to the server or connected over a network. Direct attached storage (DAS) is physically attached to a single server and is difficult to use efficiently because it can be used only by the host attached to it. Storage area networks (SANs) allow multiple servers to share a pool of storage over a Fibre Channel or Ethernet network. This capability allows storage administrators to easily expand capacity for servers supporting data-intensive applications.

Technology Overview

IP-based Storage Options

Many storage systems provide the option for access using IP over the Ethernet network. This approach allows a growing organization to gain the advantages of centralized storage without needing to deploy and administer a separate Fibre Channel network. Options for IP-based storage connectivity include Internet Small Computer System Interface (iSCSI) and network attached storage (NAS).

iSCSI is a protocol that enables servers to connect to storage over an IP connection and is a lower-cost alternative to Fibre Channel. iSCSI services on the server must contend for CPU and bandwidth along with other network applications, so you need to ensure that the processing requirements and performance are suitable for a specific application. iSCSI has become a storage technology that is supported by most server, storage, and application vendors. iSCSI provides block-level storage access to raw disk resources, similar to Fibre Channel. NICs also can provide support to offload iSCSI to a separate processor to increase performance.

Network attached storage (NAS) is a general term used to refer to a group of common file access protocols; the most common implementations use Common Internet File System (CIFS) or network file server (NFS). CIFS originated in the Microsoft network environment and is a common desktop file-sharing protocol. NFS is a multi-platform protocol that originated in the UNIX environment and can be used for shared hypervisor storage. Both NAS protocols provide file-level access to shared storage resources.

Most organizations will have applications for multiple storage access technologies; for example, Fibre Channel for the high-performance database and production servers, and NAS for desktop storage access.

Fibre Channel Storage

Fibre Channel allows servers to connect to storage across a fiber-optic network, across a data center, or even across a WAN by using Fibre Channel over IP. Multiple servers can share a single storage array.

This Cisco SBA data center design uses the Cisco Nexus 5500UP series switches as the core that provides Fibre Channel and Fibre Channel over Ethernet (FCoE) SAN switching. The Cisco Nexus 5500UP offers the density required for collapsed Fibre Channel connectivity requirements by supporting both Fibre Channel and FCoE servers and storage arrays. The Cisco MDS 9148 Multilayer Fabric Switch is ideal for a larger SAN fabric, providing 48 line-rate 8-Gbps Fibre Channel ports and cost-effective scalability. The Cisco MDS family of Multilayer SAN Fabric Switches also offers options like hardware-based encryption services, tape acceleration, and Fibre Channel over IP for longer-distance SAN extension.

In a SAN, a fabric consists of servers and storage connected to a Fibre Channel switch (Figure 13). It is standard practice in SANs to create two completely separate physical fabrics, providing two distinct paths to the storage. Fibre Channel fabric services operate independently on each fabric, so when a server needs resilient connections to a storage array, it connects to two separate fabrics. This design prevents failures or misconfigurations in one fabric from affecting the other fabric.

VSANs

The virtual storage area network (VSAN) is a technology created by Cisco that is modeled after the virtual local area network (VLAN) concept in Ethernet networks. VSANs provide the ability to create many logical SAN fabrics on a single Cisco MDS 9100 Family switch. Each VSAN has its own set of services and address space, which prevents an issue in one VSAN from affecting other VSANs. In the past, it was a common practice to build physically separate fabrics for production, backup, lab, and departmental environments. VSANs allow all of these fabrics to be created on a single physical switch with the same amount of protection provided by separate switches.

Zoning

The terms target and initiator will be used throughout this section. Initiators are servers or devices that initiate access to disk or tape. Targets are disk or tape devices. Zoning provides a means of restricting visibility and connectivity among devices connected to a SAN. The use of zones allows an administrator to control which initiators can see which targets. It is a service that is common throughout the fabric, and any changes to a zoning configuration are disruptive to the entire connected fabric.

Initiator-based zoning allows for zoning to be port-independent by using the world wide name (WWN) of the end host. If a host's cable is moved to a different port, it will still work if the port is a member of the same VSAN.

Device Aliases

Each server or host on a SAN connects to the Fibre Channel switch with a multi-mode fiber cable from a host bus adapter (HBA). For resilient connectivity, each host connects a port to each of the fabrics. Each port has a port worldwide name (pWWN), which is the port's address that uniquely identifies it on the network. An example of a pWWN is: 10:00:00:00:c9:87:be:1c. In data networking this would compare to a MAC address for an Ethernet adapter.

When configuring features such as zoning, quality of service (QoS), and port security on a Cisco MDS 9000 Family switch, WWNs must be specified. The WWN naming format is cumbersome, and manually typing WWNs is error prone. Device aliases provide a user-friendly naming format for WWNs in the SAN fabric (for example: "p3-c210-1-hba0-a" instead of "10:00:00:00:c9:87:be:1c"). Use a naming convention that makes initiator and target identification easy. For example, p3-c210-1-hba0-a in this setup identifies:
• Rack location: p3
• Host type: c210
• Host number: 1
• HBA number: hba0
• Port on HBA: a

Figure 13 - Dual fabric SAN with a single disk array
[Figure: Server 1 and Server 2 each connect to Fabric A (SAN A) and Fabric B (SAN B), which both connect to the storage array]

Storage Array Tested

The storage arrays used in the testing and validation of this deployment guide are the EMC VNX-5300 and the NetApp FAS3200. The specific storage array configuration may vary. Please consult the installation instructions from the specific storage vendor. The Cisco interoperability support matrix for Fibre Channel host bus adapters and storage arrays can be found at:
http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/intmatrx.html

Deployment Details

Deployment examples documented in this section include:
• Configuration of Cisco Nexus 5500UP–based SAN network to support Fibre Channel–based storage.
• FCoE access to storage from Cisco UCS C-Series servers using Cisco Nexus 5500.
• Configuration of a Cisco MDS SAN switch for higher-density Fibre Channel environments.

Process

Configuring Fibre Channel SAN on Cisco Nexus 5500UP
1. Configure Fibre Channel operation
2. Configure VSANs
3. Configure Fibre Channel ports
4. Configure device aliases
5. Configure zoning
6. Verify the configuration

Complete each of the following procedures to configure the Fibre Channel SAN on the data center core Cisco Nexus 5500UP switches.

Tech Tip

Specific interfaces, addresses, and device aliases are examples from the lab. Your WWN addresses, interfaces, and device aliases will likely be different.

Procedure 1 Configure Fibre Channel operation

The Cisco Nexus 5500UP switch has universal ports that are capable of running Ethernet+FCoE or Fibre Channel on a per-port basis. By default, all switch ports are enabled for Ethernet operation. Fibre Channel ports must be enabled in a contiguous range and be the high numbered ports of the switch baseboard and/or the high numbered ports of a universal port expansion module.

[Figure: Slot 1 (baseboard) Ethernet ports with FC ports at the high end of the port range; Slot 2 GEM Ethernet and FC ports]

In this design, you enable ports 28 through 32 on the Cisco Nexus 5548UP switch as Fibre Channel ports.

Reader Tip

The first part of this procedure was outlined in Procedure 2 in the "Configuring the Data Center Core" process in the "Ethernet Infrastructure" chapter of this deployment guide. If you have already configured ports for Fibre Channel operation, you can skip Step 1 through Step 3 of this procedure.

Step 1: Configure universal port mode for Fibre Channel.

slot 1
  port 28-32 type fc

Tech Tip

Changing port type to fc requires a reboot in Cisco Nexus 5500UP version 5.2(1)N1(1) software to recognize the new port operation. This is subject to change in later releases of software. Ports will not show up in the configuration as fc ports if you did not previously enable the FCoE feature.
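After the reboot, you can confirm from the CLI that the universal ports came up in Fibre Channel mode. A verification sketch for the example port range:

```
show interface brief | include fc1/   ! fc1/28-32 should now be listed as fc interfaces
```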

cisco. connect to the switch’s management IP address. Step 3: Using Device Manager. and in the name box. all ports are assigned to VSAN 1 at initialization of the switch. The Create VSAN General window appears. save your configuration and reboot the switch so that the switch recognizes the new fc port type operation. The CLI can also be used to configure Fibre Channel operation. Step 1: Install Cisco DCNM for SAN Essentials. The example below describes creating two VSANs. The SAN fabrics operate in parallel. enable FCOE operation. feature npiv feature fport-channel-trunk Reader Tip More detail for connecting to a Cisco UCS B-Series fabric interconnect for Fibre Channel operation can be found in the Cisco SBA—Data Center Unified Computing System Deployment Guide. Step 4: In the VSAN id list. type General-Storage. DCNM for SAN Essentials includes Cisco MDS Device Manager and Cisco SAN Fabric Manager. By not using VSAN 1. Fibre Channel hosts and targets connect to both fabrics for redundancy. If you have already done this. To manage a switch with Cisco DCNM Device Manager. It is a best practice to create a separate VSAN for production and to leave VSAN 1 for unused ports. and should be installed on your desktop before accessing either application. connect to Cisco Nexus data center core switch-A IP address (10. Procedure 2 Configure VSANs Cisco Data Center Network Manager (DCNM) for SAN Essentials Edition is a no-cost application to configure and manage Cisco MDS and Cisco Nexus SAN switches. there is no need to reboot. By default. choose 4. available for download from www. Java runtime environment (JRE) is required to run Cisco DCNM Fabric Manager and Device Manager. click FC > VSANS. You can use the CLI or Device Manager to create a VSAN. where you create two separate SAN fabrics. one on each data center core Cisco Nexus 5500UP switch. which enables both native Fibre Channel and FCoE operation.4. February 2013 Series Storage Infrastructure 49 . 
Step 3: If you have not done so.10). Step 2: Using DCNM Device Manager.com.Step 2: If you are changing the port type at this time. you can avoid future problems with merging of VSANs when combining other existing switches that may be set to VSAN 1. feature fcoe Step 4: Enable SAN port-channel trunking operation and Fibre Channel N-Port ID Virtualization for connecting to Cisco UCS fabric interconnects. Managing more than one switch at the same time requires a licensed version.63. Fibre Channel operates in a SAN-A and SAN-B approach.
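Whether the VSAN is created from Device Manager or the CLI, the result can be confirmed from the switch. A verification sketch for the VSAN 4 example:

```
show vsan 4            ! the name should read General-Storage
show vsan membership   ! shows which fc interfaces belong to each VSAN
```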


the initiator or target WWN is made known to the fabric. Until you have storage arrays or servers with active HBAs plugged into the switch on Fibre Channel ports. An incorrect device name may cause unexpected results. you can use this as another way to assign a port to a VSAN. This enables the port. February 2013 Series Storage Infrastructure 51 . QoS. Upon login. depending on which switch you are working on.” If you have already created VSANs. Procedure 4 Configure device aliases Device aliases map the long WWNs for easier zoning and identification of initiators and targets. Reader Tip For more information about preparing Cisco UCS B-Series and C-Series servers for connecting to the Fibre Channel network see the Cisco SBA—Data Center Unified Computing System Deployment Guide. Step 2: Next to Status Admin. Example dc5548ax# show flogi database --------------------------------------------------------------------------INTERFACE VSAN FCID PORT NAME NODE NAME --------------------------------------------------------------------------fc1/29 4 0xbc0002 20:41:00:05:73:a2:b2:40 20:04:00:05:73:a2:b2:41 fc1/29 4 0xbc0005 20:00:00:25:b5:77:77:9f 20:00:00:25:b5:00:77:9f fc1/30 4 0xbc0004 20:42:00:05:73:a2:b2:40 20:04:00:05:73:a2:b2:41 vfc1 4 0xbc0000 20:00:58:8d:09:0e:e0:d2 10:00:58:8d:09:0e:e0:d2 vfc27 4 0xbc0006 50:0a:09:81:89:3b:63:be 50:0a:09:80:89:3b:63:be Total number of flogi = 5. you will not see entries in the FLOGI database to use for device alias configuration. This changes the VSAN and activates the ports. Tech Tip When the initiator or target is plugged in or starts up. select up. “Configure VSANs. Step 3: In the PortVSAN list. and show commands. vsan database vsan 4 interface fc1/28 This step assigns ports to a VSAN similar to Step 5 in the previous procedure. Device aliases can be used for zoning.You can see in the preceding figure that the PortVSAN assignment is listed in the top left of the General tab. Step 4: Connect Fibre Channel devices to ports. 
Step 5: Display fabric login (FLOGI) by entering the show flogi database on the switch CLI. you will not see entries in the FLOGI database. and then click Apply. You can configure device aliases via Device Manager or CLI. choose 4 or 5. port-security. it automatically logs into the fabric. The preceding steps apply this configuration in CLI. Tech Tip Until you have storage arrays or servers with active HBAs plugged into the switch on Fibre Channel ports.

Option 1. Configure device aliases by using Device Manager

Step 1: In Device Manager, access the Device Alias window by navigating to FC > Advanced > Device Alias.

Step 2: Click Create.

Step 3: In the Alias box, enter a name, and in the WWN box, paste in or type the WWN of the host, and then click Create.

Step 4: After you have created your device aliases, click CFS > Commit. The changes are written to the database.

Option 2. Configure device aliases by using CLI

Step 1: Enter device alias database configuration mode.

device-alias database

Step 2: Enter device alias names mapping to a pWWN from the FLOGI database above. As an example:

device-alias name emc-a0-fc pwwn 50:06:01:61:3c:e0:30:59
device-alias name emc-2-a0-fc pwwn 50:06:01:61:3c:e0:60:e2
device-alias name Netapp-e2a-FCOE pwwn 50:0a:09:82:89:ea:df:b1
device-alias name NetApp2-e2a-FCOE pwwn 50:0a:09:81:89:3b:63:be
device-alias name p12-c210-27-vhba3 pwwn 20:00:58:8d:09:0e:e0:d2

Step 3: Exit device alias configuration mode.

exit

Step 4: Commit the changes.

device-alias commit

Step 5: Enter the show flogi database command. Aliases are now visible.

dc5548ax# show flogi database
---------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
---------------------------------------------------------------------------
fc1/29     4     0xbc0002  20:41:00:05:73:a2:b2:40  20:04:00:05:73:a2:b2:41
fc1/29     4     0xbc0005  20:00:00:25:b5:77:77:9f  20:00:00:25:b5:00:77:9f
fc1/30     4     0xbc0004  20:42:00:05:73:a2:b2:40  20:04:00:05:73:a2:b2:41
vfc1       4     0xbc0000  20:00:58:8d:09:0e:e0:d2  10:00:58:8d:09:0e:e0:d2
                           [p12-c210-27-vhba3]
vfc27      4     0xbc0006  50:0a:09:81:89:3b:63:be  50:0a:09:80:89:3b:63:be
                           [NetApp2-e2a-FCOE]
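Once committed, the alias database itself can be reviewed, which is a quick sanity check before the aliases are used in zoning. A sketch using one of the example pWWNs:

```
show device-alias database                       ! lists every committed alias-to-pWWN mapping
show device-alias pwwn 50:0a:09:81:89:3b:63:be   ! looks up the alias for a single pWWN
```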

Procedure 5 Configure zoning

Zoning can be configured from the CLI and from Cisco DCNM for SAN Fabric Manager.

Leading practices for zoning:
• Configure zoning between a single initiator and a single target per zone.
• You can also configure a single initiator to multiple targets in the same zone.
• Limit zoning to a single initiator with a single target or multiple targets to help prevent disk corruption and data loss.
• Zone naming should follow a simple naming convention of initiator_x_target_x: p12-ucs-b-fc0-vhba1_emc-2

[Figure: Server 1 and Server 2 connect through Fabric A (SAN A) and Fabric B (SAN B) to the storage array, with zone "Server 1-to-Array" and zone "Server 2-to-Array"]

Tech Tip

Until you have storage arrays or servers with active HBAs plugged into the switch on Fibre Channel ports, you will not see entries in the FLOGI database to use for zone configuration.

Option 1. Configure a zone by using CLI

Step 1: In configuration mode, enter the zone name and VSAN number.

zone name p12-ucs-b-fc0-vhba1_emc-2 vsan 4

Step 2: Specify device members by WWN or device alias.

member device-alias emc-2-a0-fc
member pwwn 20:00:00:25:b5:77:77:9f

Step 3: Create and activate a zoneset. A zoneset is a collection of zones; zones are members of a zoneset. After you add all the zones as members, you must activate the zoneset. There can only be one active zoneset per VSAN.

zoneset name SAN_4 vsan 4

Step 4: Add members to the zoneset.

member p12-ucs-b-fc0-vhba1_emc-2
member p12-c210-27-vhba3_netapp-2-e2a

Step 5: After all the zones for VSAN 4 are created and added to the zoneset, activate the configuration.

zoneset activate name SAN_4 vsan 4
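Before and after activation, the configured zones and zoneset membership can be reviewed from the CLI. A sketch for the VSAN 4 examples above:

```
show zone vsan 4      ! lists each configured zone and its members
show zoneset vsan 4   ! shows zoneset SAN_4 and the zones it contains
```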

Step 6: Distribute the zone database to other switches in the SAN. This prepares for expanding your Fibre Channel SAN beyond a single switch.

zoneset distribute full vsan 4

Step 7: Save your switch configuration.

copy running-config startup-config

Option 2. Configure a zone by using Cisco DCNM

Step 1: Launch the Cisco DCNM for SAN Fabric Manager installed in Step 1 of the previous procedure, "Configure VSANs."

Step 2: Log in to DCNM-SAN manager. The default username is admin and the password is password.

Step 3: Choose a seed switch by entering the IP address of Cisco Nexus 5500UP switch-A (for example, 10.4.63.10), and then choosing Cisco Nexus 5500UP from the list.

Step 4: From the DCNM-SAN menu, choose Zone, and then click Edit Local Full Zone Database.

Step 5: In the Zone Database window, in the left pane, right-click Zones, and then click Insert. This creates a new zone.

Step 6: In the Zone Name box, enter the name of the new zone, and then click OK.

Step 7: Select the new zone, and then, from the bottom of the right-hand side of the database window, choose the initiator or targets you want to add to the zone, and then click Add to Zone.

Step 8: Right-click Zoneset to insert a new zoneset.

Step 9: Drag the zones you just created from the zone box to the zoneset folder that you created.

Step 10: Click Activate. This activates the configured zoneset.

Step 11: On the Save Configuration dialog box, select Save Running to Startup Configuration, and then click Continue Activation.

Step 12: Configure SAN B the same way by using the procedures in this process to create VSAN 5 on data center core Cisco Nexus 5500UP switch-B.

Procedure 6 Verify the configuration

Step 1: Verify the Fibre Channel login. In a Fibre Channel fabric, each host or disk requires a Fibre Channel ID (FC ID). When a fabric login (FLOGI) is received from the device, this ID is assigned by the fabric. If the required device is displayed in the FLOGI table, the fabric login is successful.

dc5548ax# show flogi database
---------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
---------------------------------------------------------------------------
fc1/29     4     0xbc0002  20:41:00:05:73:a2:b2:40  20:04:00:05:73:a2:b2:41
fc1/29     4     0xbc0005  20:00:00:25:b5:77:77:9f  20:00:00:25:b5:00:77:9f
fc1/30     4     0xbc0004  20:42:00:05:73:a2:b2:40  20:04:00:05:73:a2:b2:41
vfc1       4     0xbc0000  20:00:58:8d:09:0e:e0:d2  10:00:58:8d:09:0e:e0:d2
                           [p12-c210-27-vhba3]
vfc27      4     0xbc0006  50:0a:09:81:89:3b:63:be  50:0a:09:80:89:3b:63:be
                           [NetApp2-e2a-FCOE]
Total number of flogi = 5.

Step 2: Verify Fibre Channel Name Server (FCNS) attributes. The FCNS database shows the same pWWN login along with vendor-specific attributes and features. Check that your initiators and targets have logged in and show FC4-TYPE:FEATURE attributes as highlighted below. If the feature attributes do not show, you may have a part of the configuration on the end host or storage device misconfigured, or a device driver issue.

dc5548ax# show fcns database
VSAN 4:
---------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)    FC4-TYPE:FEATURE
---------------------------------------------------------------------------
0xb90100  N     50:06:01:61:3c:e0:60:e2  (Clariion)  scsi-fcp:target
                [emc-2-a0-fc]
0xbc0000  N     20:00:58:8d:09:0e:e0:d2              scsi-fcp:init fc-gs
                [p12-c210-27-vhba3]
0xbc0002  N     20:41:00:05:73:a2:b2:40  (Cisco)     npv
0xbc0004  N     20:42:00:05:73:a2:b2:40  (Cisco)     npv
0xbc0005  N     20:00:00:25:b5:77:77:9f              scsi-fcp:init fc-gs
0xbc0006  N     50:0a:09:81:89:3b:63:be  (NetApp)    scsi-fcp:target
                [NetApp2-e2a-FCOE]
Total number of entries = 6

Step 3: Verify the active zoneset. Check the fabric configuration for proper zoning by using the show zoneset active command, which displays the active zoneset. Each zone that is a member of the active zoneset is displayed with an asterisk (*) to the left of the member. If there is not an asterisk to the left, the host is either down and not logged into the fabric, or there is a misconfiguration of the port VSAN or zoning. Use the show zone command to display all configured zones on the Cisco Fibre Channel switches.

dc5548ax# show zoneset active
zoneset name SAN_4 vsan 4
  zone name p12-ucs-b-fc0-vhba1_emc-2 vsan 4
  * fcid 0xb90100 [pwwn 50:06:01:61:3c:e0:60:e2] [emc-2-a0-fc]
  * fcid 0xbc0005 [pwwn 20:00:00:25:b5:77:77:9f]
  zone name p12-c210-27-vhba3_netapp-2-e2a vsan 4
  * fcid 0xbc0006 [pwwn 50:0a:09:81:89:3b:63:be] [NetApp2-e2a-FCOE]
  * fcid 0xbc0000 [pwwn 20:00:58:8d:09:0e:e0:d2] [p12-c210-27-vhba3]

Step 4: Test Fibre Channel reachability by using the fcping command, and then trace the routes to the host by using the fctrace command. Cisco created these commands to provide storage networking troubleshooting tools that are familiar to individuals who use ping and traceroute.

Process

Configuring Cisco MDS 9148 Switch SAN Expansion
1. Perform initial setup for Cisco MDS
2. Configure VSANs
3. Configure the trunk for SAN interconnect

If your Fibre Channel SAN environment requires a higher density of Fibre Channel port connectivity, you may choose to use Cisco MDS 9100 series SAN switches. The following procedures describe how to deploy a Cisco MDS 9124 or 9148 SAN switch to connect to the data center core Cisco Nexus 5500UP switches.

[Figure: Cisco MDS 9100 Series storage fabrics (Fabric A/SAN A and Fabric B/SAN B) connect through expansion Fibre Channel ports to the Cisco Nexus 5500UP Series data center core]

Procedure 1 Perform initial setup for Cisco MDS

The following is required to complete this procedure:
• Setting a management IP address
• Configuring console access
• Configuring a secure password

When initially powered on, a new Cisco MDS 9148 switch starts a setup script when accessed from the console.
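As a sketch of how these troubleshooting commands might be used against the example fabric, reachability to the NetApp target could be tested by device alias or FCID; the exact output varies by NX-OS release.

```
fcping device-alias NetApp2-e2a-FCOE vsan 4   ! round-trip reachability test to the target port
fctrace fcid 0xbc0006 vsan 4                  ! traces the fabric path to the same FCID
```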

Step 1: Follow the prompts in the setup script to configure login, out-of-band management, SSH, NTP, switch port modes, and default zone policies.

---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no) [y]: y
Enter the password for "admin":
Confirm the password for "admin":

---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.

*Note: setup is mainly used for configuring the system initially, when no configuration is present. So setup always assumes system defaults and not the current system configuration values.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): y
Create another login account (yes/no) [n]:
Configure read-only SNMP community string (yes/no) [n]:
Configure read-write SNMP community string (yes/no) [n]:
Enter the switch name : mds9148ax
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address : 10.4.63.12
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : 10.4.63.1
Configure advanced IP options? (yes/no) [n]:
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <768-2048> [1024]: 2048
Enable the telnet service? (yes/no) [n]: n
Enable the http-server? (yes/no) [y]:
Configure clock? (yes/no) [n]:
Configure timezone? (yes/no) [n]:
Configure summertime? (yes/no) [n]:
Configure the ntp server? (yes/no) [n]: y
NTP server IPv4 address : 10.4.48.17
Configure default switchport interface state (shut/noshut) [shut]: noshut
Configure default switchport trunk mode (on/off/auto) [on]:
Configure default switchport port mode F (yes/no) [n]: n
Configure default zone policy (permit/deny) [deny]:
Enable full zoneset distribution? (yes/no) [n]: y
Configure default zone mode (basic/enhanced) [basic]:

The following configuration will be applied:
password strength-check
switchname mds9148ax
interface mgmt0
ip address 10.4.63.12 255.255.255.0
no shutdown
ip default-gateway 10.4.63.1
ssh key rsa 2048 force
feature ssh
no feature telnet
feature http-server
ntp server 10.4.48.17
no system default switchport shutdown
system default switchport trunk mode on
no system default zone default-zone permit
system default zone distribute full
no system default zone mode enhanced

Would you like to edit the configuration? (yes/no) [n]: n
Use this configuration and save it? (yes/no) [y]: y
[########################################] 100%

Tech Tip

NTP is critical to troubleshooting and should not be overlooked.

The CLI and GUI tools work the same way for Cisco MDS as they do with Cisco Nexus 5500UP.

Step 2: Run the setup script for the second Cisco MDS 9100 switch (switch-B) using a unique switch name and 10.4.63.13 for the Mgmt0 IPv4 address.

Step 3: If you want to reduce operational tasks per device, configure centralized user authentication by using the TACACS+ protocol to authenticate management logins on the infrastructure devices to the AAA server.

As networks scale in the number of devices to maintain, the operational burden to maintain local user accounts on every device also scales. A centralized AAA service reduces operational tasks per device and provides an audit log of user access for security compliance and root-cause analysis.

TACACS+ is the primary protocol used to authenticate management logins on the infrastructure devices to the AAA server. A local AAA user database is also defined in the setup script on each MDS 9100 switch to provide a fallback authentication source in case the centralized TACACS+ server is unavailable. When AAA is enabled for access control, all management access to the network infrastructure devices (SSH and HTTPS) is controlled by AAA.

  feature tacacs+
  tacacs-server host 10.4.48.15 key SecretKey
  aaa group server tacacs+ tacacs
    server 10.4.48.15
  aaa authentication login default group tacacs

Reader Tip: The AAA server used in this architecture is the Cisco Access Control System (ACS). For details about Cisco ACS configuration, see the Cisco SBA—Borderless Networks Device Management Using ACS Deployment Guide.

Step 4: Set the SNMP strings in order to enable managing Cisco MDS switches with Device Manager. Set both the read-only (network-operator) and read/write (network-admin) SNMP strings:

  snmp-server community cisco group network-operator
  snmp-server community cisco123 group network-admin

Step 5: Configure the clock. In the setup mode, you configured the NTP server address. In this step, configuring the clock enables the switch to use NTP time for reference and makes the switch output match the local time zone.

  clock timezone PST -8 0
  clock summer-time PDT 2 Sunday march 02:00 1 Sunday nov 02:00 60

Procedure 2  Configure VSANs

To configure the Cisco MDS switches to expand the Fibre Channel SAN that you built on the Cisco Nexus 5500UP switches, use the same VSAN numbers for SAN A and SAN B, respectively.

Step 1: In Device Manager, log in to the first Cisco MDS SAN switch, and then click FC > VSANS.
The Create VSAN General window appears.4. use the same VSAN numbers for SAN A and SAN B.4. As networks scale in the number of devices to maintain. When AAA is enabled for access control. configure centralized user authentication by using the TACACS+ protocol to authenticate management logins on the infrastructure devices to the AAA server. Step 3: If you want to reduce operational tasks per device. feature tacacs+ tacacs-server host 10. the operational burden to maintain local user accounts on every device also scales. respectively. Step 1: In Device Manager.

Step 2: In the VSAN id list, choose 4, and in the Name box, enter General-Storage.

Step 3: Click Create.

The preceding steps apply this configuration in CLI:

  vsan database
  vsan 4 name General-Storage

Step 4: Configure Cisco MDS SAN switch-B for VSAN 5 and VSAN name General-Storage using Step 1 through Step 3 in this procedure.

Procedure 3  Configure the trunk for SAN interconnect

Connect the Cisco MDS switch to the existing Cisco Nexus 5500UP core Fibre Channel SAN. Next, you configure the trunk ports on Cisco MDS.

Step 1: In Device Manager, navigate to the Cisco MDS switch.

Step 2: In the Device Manager screen, click Interfaces > Port Channels, and then click Create.

Step 3: Choose the port channel Id number, select Mode E, select trunk, and then select Force.

Step 4: In the Allowed VSANs box, enter 1,4. For the Cisco MDS switch for SAN Fabric B, the VSANs to enter would be 1 and 5.

Step 5: To the right of the Interface Members box, click the ellipsis button (…), and then select the interface members that will belong to this port channel.

Step 6: Click Create. The new port channel is created.

Step 7: Right-click the Fibre Channel ports used for the port channel, and then select enable.

The preceding steps apply this Cisco MDS 9100 configuration to the MDS SAN-A switch:

  interface port-channel 1
    switchport mode E
    switchport trunk allowed vsan 1
    switchport trunk allowed vsan add 4
    switchport rate-mode dedicated
  interface fc1/13
    switchport mode E
    channel-group 1 force
    switchport rate-mode dedicated
    no shutdown
  interface fc1/14
    switchport mode E
    channel-group 1 force
    switchport rate-mode dedicated
    no shutdown

The preceding steps apply this Cisco MDS 9100 configuration to the MDS SAN-B switch:

  interface port-channel 1
    switchport mode E
    switchport trunk allowed vsan 1
    switchport trunk allowed vsan add 5
    switchport rate-mode dedicated
  interface fc1/13
    switchport mode E
    channel-group 1 force
    switchport rate-mode dedicated
    no shutdown
  interface fc1/14
    switchport mode E
    channel-group 1 force
    switchport rate-mode dedicated
    no shutdown
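Before moving to the Cisco Nexus 5500UP side of the link, you can confirm the MDS end of the trunk from the CLI. These are standard MDS NX-OS commands; confirm availability on your release:

  show port-channel summary
  show interface port-channel 1

The first command should list port-channel 1 with fc1/13 and fc1/14 as members; the second should show the interface trunking with the allowed VSANs (1 and 4 on SAN-A, 1 and 5 on SAN-B).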

Step 8: Create the corresponding SAN port channel on the Cisco Nexus 5500UP to connect to the Cisco MDS switch by following the preceding steps in this procedure (Procedure 3).

The resulting Cisco Nexus 5500UP CLI for this SAN port channel is the following for the SAN-A switch:

  interface san-port-channel 31
    switchport trunk allowed vsan 1
    switchport trunk allowed vsan add 4
  interface fc1/31
    switchport description Link to dcmds9148ax port fc1/13
    switchport mode E
    channel-group 31 force
    no shutdown
  interface fc1/32
    switchport description Link to dcmds9148ax port fc1/14
    switchport mode E
    channel-group 31 force
    no shutdown

The resulting Cisco Nexus 5500UP CLI for this SAN port channel is the following for the SAN-B switch:

  interface san-port-channel 31
    switchport trunk allowed vsan 1
    switchport trunk allowed vsan add 5
  interface fc1/31
    switchport description Link to dcmds9148bx port fc1/13
    switchport mode E
    channel-group 31 force
    no shutdown
  interface fc1/32
    switchport description Link to dcmds9148bx port fc1/14
    switchport mode E
    channel-group 31 force
    no shutdown

Step 9: Distribute the zone database created on the Cisco Nexus 5500UP switch to the new Cisco MDS 9100 switch.

Configure the Cisco Nexus 5500UP CLI for SAN-A to distribute the zone database to the new Cisco MDS 9100 switch:

  zoneset distribute full vsan 4

Configure the Cisco Nexus 5500UP CLI for SAN-B to distribute the zone database to the new Cisco MDS 9100 switch:

  zoneset distribute full vsan 5

Process: Configuring FCoE Host Connectivity
1. Configure FCoE QoS
2. Configure host-facing FCoE ports
3. Verify FCoE connectivity

Cisco UCS C-Series rack-mount servers ship with onboard 10/100/1000 Ethernet adapters and Cisco Integrated Management Controller (CIMC), which uses a 10/100 or 10/100/1000 Ethernet port. To get the most out of the rack servers and minimize cabling in the Cisco SBA Unified Computing architecture, the Cisco UCS C-Series rack-mount server is connected to a unified fabric. The Cisco Nexus 5500UP Series switch that connects the Cisco UCS 5100 Series Blade Server Chassis to the network can also be used to extend Fibre Channel traffic over 10-Gigabit Ethernet. The Cisco Nexus 5500UP Series switch consolidates I/O onto one set of 10-Gigabit Ethernet cables, eliminating redundant adapters, cables, and ports. In the Cisco SBA data center, the Cisco UCS C-Series rack-mount server is configured with a dual-port CNA. A single converged network adapter (CNA) card and set of cables connects servers to the Ethernet and Fibre Channel networks by using FCoE. FCoE and CNA also allow the use of a single cabling infrastructure within server racks. Cabling the Cisco UCS C-Series server with a CNA limits the cables to three: one for each port on the CNA and one for the CIMC connection.
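The SAN extension configured in Step 8 and Step 9 above can be spot-checked from the Cisco Nexus 5500UP CLI with standard NX-OS storage commands:

  show interface san-port-channel 31
  show zoneset active vsan 4

The first command shows the E-port trunking state and the VSANs allowed and up on the trunk; the second (using vsan 5 on the SAN-B switch) confirms that the active zone set is consistent after full zone distribution to the new Cisco MDS 9100 switch.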

Tech Tip: A server connecting to a Cisco Nexus 5500UP switch that is running FCoE consumes a Fibre Channel port license. If you are connecting the FCoE attached servers to a Cisco FEX model 2232PP, only the Cisco Nexus 5500UP ports connected to the Cisco FEX require a Fibre Channel port license for each port connecting to the Cisco FEX. This way, you could connect up to 32 FCoE servers to a Cisco FEX 2232PP and only use Fibre Channel port licenses for the Cisco FEX uplinks.

The following figure shows a topology with mixed unified fabric, standard Ethernet and Fibre Channel connections, and optional Cisco MDS 9100 Series for Fibre Channel expansion. A standard server without a CNA could have a few Ethernet connections or multiple Ethernet and Fibre Channel connections.

Figure 14 - Cisco SBA data center design

[Figure 14 diagram: Cisco UCS C-Series servers, third-party rack servers, and Cisco UCS blade servers with chassis and fabric interconnects connect through Nexus 2200 Series Fabric Extenders to the Nexus 5500 Layer 2/3 Ethernet and SAN fabric, with Cisco ACE server load balancing and Cisco ASA firewalls with IPS at the LAN core; SAN A and SAN B fabrics, expanded with Cisco MDS 9100 storage fabric, attach FCoE and iSCSI and Fibre Channel storage arrays. Legend: Ethernet, Fibre Channel, Fibre Channel over Ethernet, UCS I/O and FEX uplinks, UCS fabric interconnect link.]
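Because FCoE-capable ports draw from the switch's Fibre Channel license pool, as noted in the Tech Tip above, it is useful to check license consumption before and after attaching FCoE servers. `show license usage` is a standard NX-OS command:

  show license usage

The output lists the installed license packages and how many ports are consuming each, which makes it easy to confirm that only the FEX uplink ports, not every FCoE server port, are drawing Fibre Channel licenses.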

The Cisco UCS C-Series server is connected to both Cisco Nexus 5500UP Series switches from the CNA with twinax or fiber optic cabling. The Cisco UCS server running FCoE can also attach to a single-homed Cisco FEX model 2232PP. The recommended approach is to connect the CIMC management port(s) to an Ethernet port on the out-of-band management switch. Alternatively, you can connect the CIMC management port(s) to a Cisco Nexus 2248 fabric extender port in the management VLAN (163).

Cisco Nexus 5500UP Configuration for FCoE

In previous processes, you enabled Cisco Nexus 5500UP Series FCoE functionality. In this process, you perform the following tasks to allow a Cisco C-Series server to connect using FCoE:
• Create a virtual Fibre Channel interface
• Assign the VSAN to a virtual Fibre Channel interface
• Configure the Ethernet port and trunking

Procedure 1  Configure FCoE QoS

Configuration is the same across both of the Cisco Nexus 5500UP Series switches with the exception of the VSAN configured for SAN fabric A and for SAN fabric B.

Step 1: Ensure that the Cisco Nexus 5500UP data center core switches have been programmed with a QoS policy to support lossless FCoE transport. The Cisco Nexus 5500UP does not preconfigure QoS for FCoE traffic. The QoS policy for the data center core Nexus 5500UP switches was defined in Procedure 3, "Configure QoS policies," earlier in this guide.

Tech Tip: You must have a QoS policy on the Cisco Nexus 5500UP switches that classifies FCoE for lossless operation.

Procedure 2  Configure host-facing FCoE ports

Tech Tip: At this time, FCoE-connected hosts can only connect over 10-Gigabit Ethernet and must use a fiber optic or twinax connection.

Step 1: Create a VLAN that will carry FCoE traffic to the host. You need to do this in order to be able to map an FCoE interface to Fibre Channel.
• On Cisco Nexus 5500UP switch-A, VLAN 304 is mapped to VSAN 4. VLAN 304 carries all VSAN 4 traffic to the CNA over the trunk.

  vlan 304
  fcoe vsan 4
  exit

• On Cisco Nexus 5500UP switch-B, VLAN 305 is mapped to VSAN 5.

  vlan 305
  fcoe vsan 5
  exit

Step 2: Create a virtual Fibre Channel (vfc) interface for Fibre Channel traffic, and then bind it to the corresponding host Ethernet interface. This command will be the same on both Cisco Nexus 5500UP switches. This example shows binding to a Cisco FEX 2232PP Ethernet interface.

  interface vfc1
  bind interface Ethernet 103/1/3
  no shutdown
  exit

Step 3: Add the vfc interface to the VSAN database.
• On Cisco Nexus 5500UP switch-A, the vfc is mapped to VSAN 4.

  vsan database
  vsan 4 interface vfc 1
  exit

• On Cisco Nexus 5500UP switch-B, the vfc is mapped to VSAN 5.

  vsan database
  vsan 5 interface vfc 1
  exit

Step 4: Configure the Ethernet interface to operate in trunk mode, configure the interface with the FCoE VSAN and any data VLANs required by the host, and configure the spanning-tree port type as trunk edge.
• This example shows the configuration of Cisco Nexus 5500UP switch-A.

  interface Ethernet 103/1/3
  switchport mode trunk
  switchport trunk allowed vlan 148-162,304
  spanning-tree port type edge trunk
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
  no shut

• This example shows the configuration of Cisco Nexus 5500UP switch-B.

  interface Ethernet 103/1/3
  switchport mode trunk
  switchport trunk allowed vlan 148-162,305
  spanning-tree port type edge trunk
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
  no shut

Step 5: Configure VSAN on a Cisco UCS C-Series server.

Tech Tip: The Cisco UCS C-Series server using the Cisco P81E CNA must have the FCoE VSANs configured for virtual host bus adapter (vHBA) operation to connect to the Fibre Channel fabric.

Reader Tip: Host configuration is beyond the scope of this guide. Please see CNA documentation for specific host drivers and configurations. For more information on configuring the C-Series server for FCoE connectivity, please see the Cisco SBA—Data Center Unified Computing System Deployment Guide.

Procedure 3  Verify FCoE connectivity

Step 1: On the Cisco Nexus 5500UP switches, use the show interface command to verify the status of the virtual Fibre Channel interface. The interface should now be up as seen below if the host is properly configured to support the CNA.

dc5548ax# show interface vfc1
vfc1 is trunking (Not all VSANs UP on the trunk)
    Bound interface is Ethernet103/1/3
    Hardware is Virtual Fibre Channel
    Port WWN is 20:00:54:7f:ee:17:cf:3f
    Admin port mode is F, trunk mode is on
    snmp link state traps are enabled
    Port mode is TF
    Port vsan is 4
    Trunk vsans (admin allowed and active) (1,4)

    Trunk vsans (up)                       (4)
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             (1)
    1 minute input rate 1672 bits/sec, 209 bytes/sec, 0 frames/sec
    1 minute output rate 320 bits/sec, 40 bytes/sec, 0 frames/sec
      117038 frames input, 39607100 bytes
        0 discards, 0 errors
      128950 frames output, 33264140 bytes
        0 discards, 0 errors
    last clearing of "show interface" counters never
    Interface last changed at Tue Nov 8 11:11:29 2011

Step 2: On the Cisco Nexus 5500UP switches, display the FCoE addresses.

dc5548ax# show fcoe database
---------------------------------------------------------------------
INTERFACE    FCID        PORT NAME                  MAC ADDRESS
---------------------------------------------------------------------
vfc1         0xbc0000    20:00:58:8d:09:0e:e0:d2    58:8d:09:0e:e0:d2

Step 3: Show the FLOGI database for FCoE login. The vfc1 addresses appear in the current FLOGI database on the Cisco Nexus 5500 switch.

dc5548ax# show flogi database
------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
------------------------------------------------------------------------
vfc1       4     0xbc0000  20:00:58:8d:09:0e:e0:d2  10:00:58:8d:09:0e:e0:d2
           [p12-c210-27-vhba3]

Step 4: Show the FCNS database for FCoE login. The FCNS database shows the FCoE host logged in and the FC-4 TYPE:FEATURE information.

dc5548ax# show fcns database
VSAN 4:
-------------------------------------------------------------------
FCID      TYPE  PWWN (VENDOR)            FC4-TYPE:FEATURE
-------------------------------------------------------------------
0xbc0000  N     20:00:58:8d:09:0e:e0:d2  scsi-fcp:init fc-gs
          [p12-c210-27-vhba3]

Now you can configure zoning and device aliases per the procedures in the "Configuring Fibre Channel SAN on Cisco Nexus 5500UP" process, earlier in this chapter.

Tech Tip: Much of the configuration of the Cisco Nexus 5500UP Series switch can also be done from within Device Manager; however, Device Manager for SAN Essentials cannot be used to configure VLANs or Ethernet trunks on the Cisco Nexus 5500UP Series switches.
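The zoning and device-alias work referred to above follows the standard NX-OS pattern. The following is a minimal sketch only; the alias and zone names are illustrative, and the pWWN is taken from the FLOGI output shown in this procedure (see the full zoning procedure earlier in this guide for the authoritative steps):

  device-alias database
    device-alias name p12-c210-27-vhba3 pwwn 20:00:58:8d:09:0e:e0:d2
  device-alias commit
  zone name p12-c210-27-vhba3-storage vsan 4
    member device-alias p12-c210-27-vhba3

Once the zone is added to the active zone set and activated, the full zone distribution configured earlier pushes the database to the Cisco MDS 9100 switches automatically.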

Compute Connectivity

Business Overview

As an organization grows, the number and type of servers required to handle the information processing tasks of the organization grows as well. Scaling a data center with conventional servers, networking equipment, and storage resources can pose a significant challenge to a growing organization. Multiple hardware platforms and technologies must be integrated to deliver the expected levels of performance and availability to application end users. These components in the data center also need to be managed and maintained, typically with a diverse set of management tools with different interfaces and approaches. This imposes several challenges:

• Increased data center square footage and rack space
• More power and cooling, particularly in light of the fact that every new CPU generation increases wattage dissipation as core speeds increase
• Increased complexity of the data-networking cable plant to provide adequate capacity and capability for increasing server counts
• More hardware capital expense to buy server platforms and spares, and greater operational expense to administer and maintain diverse hardware and OS platforms
• Migration from existing servers and applications to newer platforms and connection methods, which requires a flexible architecture that accommodates both legacy and new servers and applications
• Increased resiliency and migration-path challenges, because appliance-centric or server-centric application platforms tend to be platform-centric and may not lend themselves well to being load-balanced or moved to disparate platforms

Organizations frequently need to optimize their investment in server resources, so that the organization can add new applications while controlling costs as they move from a small server room environment into a data center.

Technology Overview

Server virtualization offers the capability to run multiple application servers on a common hardware platform, allowing an organization to focus on maximizing the application capability of the data center while minimizing costs. The ability to virtualize server platforms to handle multiple operating systems and applications with hypervisor technologies building virtual machines (VMs) allows the organization to lower capital and operating costs by collapsing more applications onto fewer physical servers. Increased capability and reduced costs are realized through multiple aspects:

• Multiple applications can be combined in a single hardware chassis, reducing the number of boxes that must be accommodated in data-center space
• Applications can be deployed on standardized hardware platforms, which reduces platform-management consoles and minimizes hardware spare stock challenges
• Minimized box count reduces power and cooling requirements, because there are fewer lightly loaded boxes idling away expensive wattage
• Simplified cable management, due to fewer required cable runs and greater flexibility in allocating network connectivity to hosts on an as-needed basis
• Improved resiliency and application portability as hypervisors allow workload resiliency and load-sharing across multiple platforms, even in geographically dispersed locations

The hypervisor technology also provides the ability to cluster many virtual machines into a domain where workloads can be orchestrated to move around the data center to provide resiliency and load balancing. The ability to move VMs or application loads from one server to the next, and to allow new applications to be deployed in hours versus days or weeks, requires the network to be flexible and scalable, allowing any VLAN to appear anywhere in the data center. Cisco Virtual Port Channel (vPC) and Fabric Extender (FEX) technologies are used extensively in the Cisco SBA data center to provide flexible Ethernet connectivity to VLANs distributed across the data center in a scalable and resilient manner.

Streamlining the management of server hardware and its interaction with networking and storage equipment is another important component of using this investment in an efficient manner. Cisco offers a simplified reference model for managing a small server room as it grows into a full-fledged data center, which simplifies these complex interactions and allows an organization to deploy the same efficient technologies as larger enterprises do, without a dramatic learning curve.

The primary computing platforms targeted for the Cisco SBA Unified Computing reference architecture are Cisco UCS B-Series Blade Servers and Cisco UCS C-Series Rack-Mount Servers. This model benefits from the ease of use offered by Cisco UCS. Cisco UCS provides a single graphical management tool for the provisioning and management of servers, network interfaces, storage interfaces, and the network components directly attached to them. Cisco UCS treats all of these components as a cohesive system, whether the server is a blade server in a chassis-based system or a standalone rack-mount server. The Cisco UCS Manager graphical interface provides ease of use that is consistent with the goals of Cisco SBA. When deployed in conjunction with the Cisco SBA data center network foundation, the environment provides the flexibility to support the concurrent use of the Cisco UCS B-Series Blade Servers, Cisco UCS C-Series Rack-Mount Servers, and third-party servers connected to 1- and 10-Gigabit Ethernet connections.

Cisco Nexus Virtual Port Channel

Virtual Port Channel (vPC) allows links that are physically connected to two different Cisco Nexus switches to appear to a third downstream device to be coming from a single device and as part of a single Ethernet port channel. The third device can be a server, switch, or any other device or appliance that supports IEEE 802.3ad port channels. For Cisco EtherChannel technology, the term "multichassis EtherChannel" (MCEC) refers to this technology. MCEC links from a device connected to the data center core provide spanning-tree loop-free topologies, allowing VLANs to be extended across the data center while maintaining a resilient architecture.

A vPC domain consists of two vPC peer switches connected by a peer link; the system formed by the switches is referred to as a vPC domain. Of the vPC peers, one is primary and one is secondary. The vPC peer link between the two Cisco Nexus switches is the most important connectivity element in the system. This link is used to create the illusion of a single control plane between the two switches, and carries critical control plane packets as well as data packets when devices are single-homed due to design or EtherChannel link failure. The vPC peer-keepalive link is used to resolve dual-active scenarios in which the peer link connectivity is lost.

A vPC port is a port that is assigned to a vPC channel group. The ports that form the vPC are split between the vPC peers, and are referred to as vPC member ports. A vPC port must be defined identically on both vPC switches. A non-vPC port, also known as an orphaned port, is a port that belongs to a VLAN that is part of a vPC, but is not programmed as a vPC member. For a VLAN to be forwarded on a vPC, that VLAN must exist on the peer link and both vPC peer switches.

The following figure illustrates vPC ports and orphan ports; the host with Active-Standby teaming interfaces would be considered as connected to vPC orphan ports. The important point to remember about vPC orphan ports is that if the vPC peer link connectivity is lost, the secondary vPC peer will shut down all vPC member links and the primary vPC switch will continue forwarding packets; it will not shut down vPC orphan ports unless programmed to do so with the vpc orphan-port suspend command on the switch interface.

Figure 15 - vPC member and non-member ports

[Figure 15 diagram: two Nexus 5500UP switches form an Ethernet vPC switch fabric, joined by a vPC peer link, with a vPC peer keepalive over the Mgmt 0 ports. A host attached by a port channel to both switches uses vPC member ports; a host without channel teaming is single-attached, with a spanning-tree blocking link; a host with Active-Standby teaming has an active and a standby (orphan) interface.]

The following sections describe features that enhance connectivity options in the data center.
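The relationships described above translate into a small amount of NX-OS configuration. The following is a minimal sketch of a vPC domain; the domain number, port-channel numbers, and keepalive addressing are illustrative placeholders, not values from this guide (the complete programming is in Procedure 4, "Configure virtual port channel"):

  feature vpc
  vpc domain 10
    peer-keepalive destination 10.255.0.2 source 10.255.0.1
  ! vPC peer link between the two Nexus switches
  interface port-channel 10
    switchport mode trunk
    vpc peer-link
  ! vPC member ports toward a dual-homed downstream device
  interface port-channel 50
    switchport mode trunk
    vpc 50

The same vpc 50 statement must be applied to the matching port channel on the peer switch, which is what makes the two physical switches appear as a single EtherChannel partner to the downstream device.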

Example:

  interface Ethernet103/1/2
    description to_teamed_adapter
    switchport mode access
    switchport access vlan 50
    vpc orphan-port suspend

  interface Ethernet104/1/2
    description to_teamed_adapter
    switchport mode access
    switchport access vlan 50
    vpc orphan-port suspend

The complete vPC domain programming for the Cisco Nexus 5500UP switches is detailed in Procedure 4, "Configure virtual port channel," earlier in this guide.

Reader Tip: The fundamental concepts of vPC are described in detail in the whitepaper titled "Cisco NX-OS Virtual PortChannel: Fundamental Design Concepts with NXOS 5.0," located here:
http://www.cisco.com

Cisco Nexus Fabric Extender

As described earlier in the "Ethernet Infrastructure" chapter, the Cisco Fabric Extender (FEX) acts as a remote line card to the Cisco Nexus 5500UP switch that it is connected to. This allows for central configuration of all switch ports on the data center core switches, and provides fan out to higher-density Fast Ethernet, 1-Gigabit Ethernet, and 10-Gigabit Ethernet for top-of-rack server connectivity. The Cisco FEX can be single-homed to a data center core switch (also called straight-through mode) or dual-homed using vPC (also called active/active mode).

The dual-homed (active/active) Cisco FEX uses vPC to provide resilient connectivity to both data center core switches for single attached host servers. Each host is considered to be vPC connected through the associated connectivity to a vPC dual-homed Cisco FEX. The Cisco FEX–to-core connectivity ranges from 4 to 8 uplinks, depending on the Cisco FEX type in use, and the Cisco FEX uplinks can be configured as a port channel as well.

The host connected to a pair of single-homed Cisco FEXs can be configured for port channel operation to provide resilient connectivity to both data center core switches through the connection to each Cisco FEX. The Cisco FEX–to-core connectivity ranges from 4 to 8 uplinks, depending on the Cisco FEX type in use, and the Cisco FEX uplinks are typically configured as a port channel as well. Because the Cisco FEX acts as a line card on the Cisco Nexus 5500UP switch, extending VLANs to server ports on different Cisco FEXs does not create spanning-tree loops across the data center.

Tech Tip: Devices such as LAN switches that generate spanning-tree bridge protocol data units (BPDUs) should not be connected to Cisco FEXs. The Cisco FEX is designed for host connectivity and will error disable a port that receives a BPDU packet.

Figure 16 - Cisco Nexus FEX connectivity to data center core

[Figure 16 diagram: a pair of Nexus 5500UP switches in an Ethernet vPC switch fabric with a vPC peer link and a Mgmt 0 vPC peer keepalive; a single-homed fabric extender attaches to one core switch and a dual-homed fabric extender attaches to both, each serving single-attached hosts.]
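Attaching a FEX to a core switch follows a consistent pattern on NX-OS. The following is a minimal sketch of single-homed FEX provisioning; the FEX number and uplink interfaces are illustrative placeholders (the full programming is in the "Configure Fabric Extender Connectivity" chapter):

  feature fex
  ! core-switch uplinks to the FEX, bundled as a port channel
  interface Ethernet1/13-14
    switchport mode fex-fabric
    fex associate 103
    channel-group 103

After this, the FEX host ports appear on the core switch as interfaces named for the FEX number, such as Ethernet103/1/1, and are configured centrally like any other switch port.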

The complete Cisco FEX connectivity programming to the Cisco Nexus 5500UP data center core switches and Ethernet port configuration for server connection is detailed in the "Configure Fabric Extender Connectivity" chapter, earlier in this guide.

Cisco UCS Blade Chassis System Components

The Cisco UCS Blade Chassis system has a unique architecture that integrates compute, data network access, and storage network access into a common set of components under a single-pane-of-glass management interface. The primary components included within this architecture are as follows:

• Cisco UCS 6200 Series Fabric Interconnects—Provide both network connectivity and management capabilities to the other components in the system.
• Cisco UCS 2200 Series Fabric Extenders—Logically extend the fabric from the fabric interconnects into each of the enclosures for Ethernet, FCoE, and management purposes.
• Cisco UCS 5100 Series Blade Server Chassis—Provides an enclosure to house up to eight half-width or four full-width blade servers, their associated fabric extenders, and four power supplies for system resiliency.
• Cisco UCS B-Series Blade Servers—Available in half-width or full-width form factors, with a variety of high-performance processors and memory architectures to allow customers to easily customize their compute resources to the specific needs of their most critical applications.
• Cisco UCS B-Series Network Adapters—A variety of mezzanine adapter cards that allow the switching fabric to provide multiple interfaces to a server.

The following figure shows an example of the physical connections required within a Cisco UCS Blade Chassis system to establish the connection between the fabric interconnects and a single blade chassis. The links between the blade chassis and the fabric interconnects carry all server data traffic, centralized storage traffic, and management traffic generated by Cisco UCS Manager.

Figure 17 - Cisco UCS Blade Chassis System component connections

[Figure 17 diagram: UCS fabric interconnects with integrated ports connected by inter-fabric links to the I/O module ports of a UCS 5100 blade server chassis, carrying Ethernet, FCoE, and management transport.]

Cisco UCS Manager

Cisco UCS Manager is embedded software resident on the fabric interconnects, providing complete configuration and management capabilities for all of the components in Cisco UCS. This configuration information is replicated between the two fabric interconnects, providing a highly available solution for this critical function. The most common way to access Cisco UCS Manager for simple tasks is to use a web browser to open the Java-based GUI. For command-line or programmatic operations against the system, a CLI and an XML API are also included with the system.

Cisco UCS B-Series System Network Connectivity

Both Cisco UCS B-Series Blade Servers and C-Series Rack-Mount Servers integrate cleanly into the Cisco SBA data center design. The Cisco Nexus 5500UP data center core provides 1-Gigabit Ethernet, 10-Gigabit Ethernet, FCoE, and Fibre Channel SAN connectivity in a single platform.

Cisco UCS 6200 Series Fabric Interconnects provide connectivity for Cisco UCS Blade Server systems. The following figure shows a detailed example of the connections between the fabric interconnects and the Cisco Nexus 5500UP Series data center core. The Ethernet traffic from the fabric interconnects shown in Figure 18 uses vPC links to the data center core for resiliency and traffic load sharing. The Fibre Channel links to the core use SAN port channels for load sharing and resiliency as well.

The default and recommended configuration for the fabric interconnects is end-host mode, which means they do not operate as full LAN switches but rather rely on the upstream data center switching fabric. In this way, Cisco UCS appears to the network as a virtualized compute cluster with multiple physical connections. Individual server traffic is pinned to specific interfaces, with failover capability in the event of loss of the primary link.
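On the Cisco Nexus 5500UP side, the vPC links toward each fabric interconnect are ordinary vPC member ports, as described in the vPC section above. A minimal sketch, with illustrative interface and vPC numbers rather than values from this guide:

  ! one member link per core switch toward fabric interconnect A
  interface Ethernet1/9
    channel-group 60 mode active
  interface port-channel 60
    switchport mode trunk
    vpc 60

Because the fabric interconnects run in end-host mode, the core switches see them as hosts with EtherChannel uplinks rather than as switches, so no additional spanning-tree accommodation is required on these ports.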

Figure 18 - Cisco UCS fabric interconnect to core

[Figure 18 diagram: Cisco UCS fabric interconnects uplinked to the Nexus 5500UP data center core with Fibre Channel SAN port links and Layer 2 vPC links to the core, and integrated ports connecting down to the I/O module ports of a Cisco UCS 5100 blade chassis.]

Detailed configuration for Cisco UCS B-Series deployment can be found in the Cisco SBA—Data Center Unified Computing System Deployment Guide.

Cisco UCS C-Series Network Connectivity

Cisco UCS C-Series Rack-Mount Servers balance simplicity, performance, and density for production-level virtualization, web infrastructure, and data center workloads. Cisco UCS C-Series servers extend Unified Computing innovations and benefits to rack-mount servers.

The Cisco Nexus switching fabric provides connectivity for 10-Gigabit or 1-Gigabit Ethernet attachment for Cisco UCS C-Series servers, depending on the throughput requirements of the applications or virtual machines in use and the number of network interface cards installed per server. Ten-Gigabit Ethernet connections capable of supporting Ethernet and FCoE are available either through the Cisco Nexus 2232PP Fabric Extender or by using 10-Gigabit ports directly on the Cisco Nexus 5500UP Series switch pair. Connections for Fast Ethernet or 1-Gigabit Ethernet can also use the Cisco Nexus 2248TP Fabric Extender. The Cisco FEX–to–data center core uplinks use a port channel to load balance server connections over multiple links and provide added resiliency.

The Cisco UCS C-Series server with 10-Gigabit Ethernet and FCoE connectivity uses a converged network adapter (CNA) in the server and must connect to either a Cisco Nexus 2232PP FEX or directly to the Cisco Nexus 5500UP switch. This is because FCoE uplinks must use a fiber optic or twinax connection to maintain bit error rate (BER) thresholds for Fibre Channel transport. Cisco supports FCoE on 10-Gigabit Ethernet only at this time.

Figure 19 shows some examples of dual-homed connections from Cisco UCS C-Series servers to single-homed Cisco FEXs. The Cisco UCS C-Series server connectivity to Cisco FEX options in Figure 19 all make use of vPC connections by using IEEE 802.3ad EtherChannel from the host to single-homed Cisco Nexus 2232PP FEXs. When using vPC for server connections, each server interface must be identically configured on each data center core Cisco Nexus 5500UP switch. If used with vPC, the Ethernet traffic is load balanced across the server links with EtherChannel, and Fibre Channel runs up each link to the core, with SAN-A traffic on one link to the connected Cisco FEX and data center core switch, and SAN-B traffic on the other link to the connected Cisco FEX and data center core switch, as is typical of Fibre Channel SAN traffic.

Figure 19 - Example Cisco UCS C-Series FEX connections

[Figure 19 diagram: Nexus 5500UP data center core switches with FEX uplinks in port channel mode to Cisco 2232PP fabric extenders; Cisco UCS C-Series servers attach with 10-Gigabit Ethernet and FCoE, with 10-Gigabit Ethernet only, and with multiple 1-Gigabit Ethernet Layer 2 vPC links.]

The Cisco UCS C-Series Server with 10-Gigabit Ethernet without FCoE can connect to a Cisco Nexus 2232 FEX or directly to the Cisco Nexus 5500UP switch. These server connections can be fiber optic, copper, or twinax, depending on the Cisco FEX and server combination used. If used with vPC, the Ethernet traffic is load balanced across the server links with EtherChannel.

The Cisco UCS C-Series Server with multiple 1-Gigabit Ethernet uses vPC to load balance traffic over multiple links using EtherChannel. The use of vPC is not a requirement; in a non-vPC server connection where you want independent server interfaces, you may prefer connecting to a dual-homed Cisco FEX for resiliency unless the server operating system provides resilient connectivity.

• This example shows the configuration of the FEX interface on Cisco Nexus 5500UP switch-A.

interface Ethernet 103/1/3
  description Dual-homed server FCoE link to SAN-A VSAN 304
  switchport mode trunk
  switchport trunk allowed vlan 148-163,304
  spanning-tree port type edge trunk
  no shut

• This example shows the configuration of the FEX interface on Cisco Nexus 5500UP switch-B.

interface Ethernet 104/1/3
  description Dual-homed server FCoE link to SAN-B VSAN 305
  switchport mode trunk
  switchport trunk allowed vlan 148-163,305
  spanning-tree port type edge trunk
  no shut

Configuration for the Cisco Nexus FEX to Cisco Nexus 5500UP switch connections is detailed in the "Configure Fabric Extender Connectivity" chapter earlier in this guide. Detailed configuration for Cisco UCS C-Series deployment can be found in the Cisco SBA—Data Center Unified Computing System Deployment Guide.

Single-Homed Server Connectivity

As an organization grows, it may need to provide connectivity in the data center for many legacy servers and appliances with a single Fast Ethernet or Gigabit Ethernet interface. To provide added resiliency for these servers, a dual-homed Cisco FEX using vPC for the Cisco FEX connection to the data center is recommended, as shown in the figure below.

Figure 20 - Single-homed server to dual-homed Cisco FEX
[Figure: A single-homed server connects to a Cisco 2248TP Fabric Extender that is dual-homed over Layer 2 vPC links to the Nexus 5500UP data center core.]

The vPC connection from the Cisco Nexus 2248TP FEX provides both control plane and data plane redundancy for servers connected to the same Cisco FEX. This topology provides resiliency for the attached servers in the event of a fabric uplink or Cisco Nexus 5500UP core switch failure. Although this approach does provide added resiliency, there is no resiliency in the event of a Cisco Nexus 2248TP failure; single-homed servers hosting important applications should be migrated to dual-homed connectivity to provide sufficient resiliency.

All servers connected to the vPC dual-homed Cisco FEX are vPC connections and must be configured on each data center core Cisco Nexus 5500UP switch.
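The dual-homed server interface examples above can be extended to a full vPC EtherChannel for the server. The sketch below is illustrative only: the FEX port, VLAN range, and channel numbers are hypothetical and not taken from this design, but it shows the shape of the identical configuration required on both core switches.

```
! On Cisco Nexus 5500UP switch-A (FEX 103) -- hypothetical port and channel numbers
interface Ethernet103/1/4
  description Dual-homed server vPC link
  switchport mode trunk
  switchport trunk allowed vlan 148-163
  spanning-tree port type edge trunk
  channel-group 604 mode active
  no shut
!
interface port-channel604
  vpc 604
!
! On Cisco Nexus 5500UP switch-B (FEX 104), the matching configuration is applied
! to Ethernet104/1/4 with the same channel-group and vpc numbers.
```

Because the vPC number ties the two switches' port channels together, any mismatch in the trunk or spanning-tree settings between the switches will keep the vPC from coming up.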

Server with Teamed Interface Connectivity

Server NIC teaming comes in many options and features. NIC adapters and operating systems using an active/standby method for connecting to the Cisco FEX are best served by a dual-homed Cisco FEX, as shown in the figure below. In the event of a Cisco FEX failure, the NIC teaming switches to the standby interfaces.

Figure 21 - Server with active/standby NIC–to–Cisco FEX connection
[Figure: A server with active and standby links connects to a pair of Cisco Nexus Fabric Extenders, each dual-homed to the Nexus 5500UP data center core.]

NIC adapters and operating systems capable of using IEEE 802.3ad EtherChannel from servers to a Cisco FEX would use the vPC option covered in the "Cisco UCS C-Series Connectivity" section, earlier in this chapter.

Enhanced Fabric Extender and Server Connectivity

The dual-homed Cisco Nexus fabric extender enhances system reliability by connecting the FEX to both core switches. With a dual-homed FEX, one of the data center core Cisco Nexus 5500UP switches can be taken out of service and traffic will continue to flow over the FEX uplinks to the remaining active data center core switch.

Until recently, the dual-homed FEX connection was unable to support a server that was connected to two dual-homed FEXes with a single EtherChannel. This condition meant that you may have needed a mix of single-homed FEX and dual-homed FEX in the data center to support different server connectivity requirements. As of Cisco NX-OS release 5.1(3)N1(1) for the Cisco Nexus 5500 Series switches, the Cisco Nexus 5500 switch can now support a port channel connected to two dual-homed FEXes, as shown in Figure 22. This new capability is referred to as Enhanced vPC. The Cisco Nexus 5000 switch does not support Enhanced vPC.

Figure 22 - Enhanced vPC
[Figure: A server port channel connects to two Cisco Nexus Fabric Extenders; each FEX uses a port channel and vPC to both Nexus 5500UP data center core switches.]

With Enhanced vPC, the dual-homed FEX uplinks are programmed with a port channel and vPC that connects each FEX to both data center core switches, and the Ethernet interfaces on the FEX connected to the server interfaces are programmed with a different port channel for the server port channel. The Cisco Nexus 5500 switches then automatically create a vPC to enable the server port channel that is connected to the dual-homed FEX pair. The result is a more resilient and simplified FEX deployment in the data center that can support single- and dual-homed servers with or without EtherChannel from the server.

Enhanced vPC also supports a dual-homed server with EtherChannel running FCoE. However, this may not be suitable for a high-bandwidth FCoE environment, because the FCoE traffic can only use a subset of the FEX uplinks to the data center core, as shown in Figure 23. The FCoE traffic can only use the FEX-to–Cisco Nexus 5500 uplinks on the left side or the right side, respectively, because SAN traffic must maintain SAN-A and SAN-B isolation and therefore cannot connect to both data center core switches. Non-FCoE Ethernet traffic (for example, IP connectivity) from the dual-homed FEX can utilize all FEX-to–data center core uplinks, maximizing traffic load balancing and bandwidth.
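A minimal Enhanced vPC sketch follows, assuming hypothetical dual-homed FEX numbers 105 and 106 and an illustrative channel number; this is not configuration from the guide. The server port channel is defined on the FEX host interfaces identically on both core switches, and no explicit vpc command is needed under the server port channel because the switches create that vPC automatically.

```
! On both Nexus 5500UP data center core switches (identical configuration):
interface Ethernet105/1/10
  channel-group 710
interface Ethernet106/1/10
  channel-group 710
!
interface port-channel710
  switchport access vlan 148
  spanning-tree port type edge
```

The FEX fabric uplinks themselves still carry their own port channel and vPC to both core switches, separate from the server-facing channel shown here.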

Figure 23 - Enhanced vPC with FCoE traffic
[Figure: FCoE SAN-A traffic uses only the FEX uplinks to one Nexus 5500UP data center core switch and FCoE SAN-B traffic uses only the uplinks to the other; the UCS C-Series servers connect with 10-Gigabit Ethernet and FCoE, or with multiple 1-Gigabit Ethernet Layer 2 port channel links.]

Third-Party Blade Server System Connectivity

Blade server systems are available from manufacturers other than Cisco. In the event you have a non–Cisco blade server system to connect to your data center, you have multiple options for connecting to your Cisco SBA data center design.

The first option is using a blade server system with a pass-through module that extends server interfaces directly out of the blade server chassis without using an internal switch fabric in the blade server system. When using pass-through modules, the server NIC connections can use the Cisco Nexus FEX for high-density port fan out and resilient connections, as shown in the figure below.

Figure 24 - Third-party blade server system with pass-through module
[Figure: A blade server with pass-through modules connects through Cisco 2232PP Fabric Extenders to the Nexus 5500UP data center core.]

A second option for connecting a non-Cisco blade server system to the Cisco SBA data center involves a blade server system that has an integrated Ethernet switch. In this scenario, the integrated switch in the blade server chassis generates spanning-tree BPDUs and therefore cannot be connected to fabric extenders. Another consideration is that a blade server with an integrated switch generally uses a few high-speed 10-Gigabit Ethernet uplinks, so direct connection to the Cisco Nexus 5500UP switch core is recommended, as shown in Figure 25.

Figure 25 - Third-party blade server system with integrated switch
[Figure: A blade server with an integrated switch connects directly to the Nexus 5500UP data center core.]
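For the integrated-switch option, the blade system uplink terminates on 10-Gigabit core ports rather than on a FEX. The following is a hedged sketch only; the interface number and VLAN range are illustrative assumptions, not assignments from this guide.

```
! On a Nexus 5500UP data center core switch -- hypothetical port assignment
interface Ethernet1/9
  description Third-party blade system integrated switch uplink
  switchport mode trunk
  switchport trunk allowed vlan 148-163
  spanning-tree port type normal
```

The spanning-tree port type is left at normal here because the integrated blade switch generates BPDUs, unlike a host-facing edge port.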

A third option is embedding Cisco Nexus fabric extenders directly into the non–Cisco blade server system to connect to the Cisco SBA data center core, as shown in Figure 26. Although this option has not been tested and documented in the Cisco SBA data center deployment guide, it has proven to be a desirable connectivity option for many organizations.

Figure 26 - Non–Cisco blade server system with embedded Cisco Nexus fabric extenders
[Figure: A blade server with embedded fabric extenders connects to the Nexus 5500UP data center core.]

Summary

The compute connectivity options outlined in this chapter show how the Cisco SBA data center foundation design integrates with Cisco UCS to build flexible and scalable compute connectivity. The data center architecture also provides support for resilient, non–Cisco server and blade system connectivity. For further detail on deploying Cisco UCS Server systems, please refer to the Cisco SBA—Data Center Unified Computing System Deployment Guide.

Network Security

Business Overview

In today's business environment, the data center contains some of the organization's most valuable assets. Customer and personnel records, financial data, email stores, and intellectual property must be maintained in a secure environment to assure confidentiality and availability. Additionally, portions of networks in specific business sectors may be subject to industry or government regulations that mandate specific security controls to protect customer or client information. To protect the valuable electronic assets located in the data center, network security helps ensure the facility is protected from automated or human-operated snooping and tampering, and it helps prevent compromise of hosts by resource-consuming worms, viruses, or botnets.

Statistics have consistently shown that the majority of data loss and network disruptions have occurred as the result of human-initiated activity (intentional or accidental) carried out within the boundaries of the business's network. Although worms, viruses, and botnets pose a substantial threat to centralized data—particularly from the perspective of host performance and availability—servers must also be protected from employee snooping and unauthorized access. To minimize the impact of unwanted network intrusions, firewalls and intrusion prevention systems (IPSs) should be deployed between clients and centralized data resources.

Figure 27 - Deploy firewall inline to protect data resources
[Figure: A Cisco ASA 5585-X firewall with IPS SSPs sits inline between the collapsed LAN core (with Internet and LAN/WAN connections) and the Cisco Nexus 5500 Layer 3 data center core and data center resources.]

Because everything else outside the protected VLANs hosting the data center resources can be a threat, the security policy associated with protecting those resources has to include the following potential threat vectors:

• Internet
• Remote access and teleworker VPN hosts
• Remote office/branch networks
• Business partner connections
• Campus networks
• Unprotected data center networks
• Other protected data center networks

In virtual desktop deployments, where the user's desktop is hosted on a server located in the data center, the data center firewall can provide policy isolation from the production servers located in the same data center domain.

Technology Overview

The data center security design employs a pair of Cisco Adaptive Security Appliance (ASA) 5585-X with SSP-20 firewall modules and matching IPS Security Service Processors (SSPs) installed. This configuration provides up to 10 Gbps of firewall throughput, and the IPS and firewall SSPs deliver 3 Gbps of concurrent throughput. There is a range of Cisco ASA 5585-X with IPS firewalls to meet your processing requirements.

All of the ports on modules installed in the Cisco ASA chassis are available to the firewall SSP, which offers a very flexible configuration. The Cisco ASA firewalls are dual-homed to the data center core Cisco Nexus 5500UP switches using two 10-Gigabit Ethernet links for resiliency. The pair of links on each Cisco ASA is configured as an EtherChannel, which provides load balancing as well as rapid and transparent failure recovery.

The pair of Cisco ASAs is configured for firewall active/standby high availability operation to ensure that access to the data center is minimally impacted by outages caused by software maintenance or hardware failure. When Cisco ASA appliances are configured in active/standby mode, the standby appliance does not handle traffic, so the primary device must be sized to provide enough throughput to address connectivity requirements between the core and the data center. Although the IPS modules do not actively exchange state traffic, they participate in the firewall appliances' active/standby status by way of reporting their status to the firewall's status monitor. A firewall failover will occur if either the Cisco ASA itself has an issue or the IPS module becomes unavailable.

The Cisco ASAs are configured in routing mode; as a result, the secure network must be in a separate subnet from the client subnets. IP subnet allocation would be simplified if Cisco ASA were deployed in transparent mode; however, hosts might inadvertently be connected to the wrong VLAN, where they would still be able to communicate with the network, incurring an unwanted security exposure.

The data center IPSs monitor for and mitigate potential malicious activity that is contained within traffic allowed by the security policy defined on the Cisco ASAs. The IPS sensors can be deployed in promiscuous intrusion detection system (IDS) mode, so that they only monitor and alert for abnormal traffic, or the IPS modules can be deployed inline in IPS mode to fully engage their intrusion prevention capabilities, wherein they will block malicious traffic before it reaches its destination. The ability to run in IDS mode or IPS mode is highly configurable to allow the maximum flexibility in meeting a specific security policy. The choice to have the sensor drop traffic or not is one that is influenced by several factors: risk tolerance for having a security incident, risk aversion for inadvertently dropping valid traffic, and other possibly externally driven reasons like compliance requirements for IPS.

Security Topology Design

The Cisco SBA secure data center design provides two secure VLANs in the data center. The number of secure VLANs is arbitrary; the design is an example of how to create multiple secured networks to host services that require separation. High-value applications, such as Enterprise Resource Planning and Customer Relationship Management, may need to be separated from other applications in their own VLAN. As another example, services that are indirectly exposed to the Internet (via a web server or other application servers in the Internet demilitarized zone) should be separated from other services, if possible, to prevent Internet-borne compromise of some servers from spreading to other services that are not exposed. Traffic between VLANs should be kept to a minimum, unless your security policy dictates service separation; keeping traffic between servers intra-VLAN will improve performance and reduce the load on network devices.

Figure 28 - Example design with secure VLANs
[Figure: The data center firewalls with IPS sit between the data center core (connected to the LAN/WAN and Internet) and the firewalled VLANs and firewall-plus-IPS VLANs; open VLANs connect directly to the data center core.]

The Cisco NX-OS Virtual Port Channel (vPC) feature on the Cisco Nexus 5500UP data center core switches allows the firewall EtherChannel to span the two data center core switches (multichassis EtherChannel) but appear to be connected to a single upstream switch. This EtherChannel link is configured as a VLAN trunk in order to support access to multiple secure VLANs in the data center. One VLAN on the data center core acts as the outside VLAN for the firewall, and any hosts or servers that reside in that VLAN are outside the firewall and therefore receive no protection from Cisco ASA for attacks originating from anywhere else in the organization's network. Other VLANs on the EtherChannel trunk will be designated as being firewalled from all the other data center threat vectors or firewalled with additional IPS services.

For devices that need an access policy, they will be deployed on a VLAN behind the firewalls. Devices that require both an access policy and IPS traffic inspection will be deployed on a different VLAN that exists logically behind the Cisco ASAs. Because the Cisco ASAs are physically attached only to the data center core Nexus switches, these protected VLANs will also exist at Layer 2 on the data center core switches. All protected VLANs are logically connected via Layer 3 to the rest of the network through Cisco ASA and, as a result, are reachable only by traversing the appliance. For this deployment, open VLANs without any security policy applied are configured physically and logically on the data center core switches.

Security Policy Development

An organization should have an IT security policy as a starting point in defining its firewall policy. If there is no organization-wide security policy, it will be very difficult to define an effective policy for the organization while maintaining a secure computing environment. Non-compliance may result in regulatory penalties such as fines or suspension of business activity.

Network security policies can be broken down into two basic categories: whitelist policies and blacklist policies. A whitelist policy offers a higher implicit security posture, blocking all traffic except that which must be allowed (at a sufficiently granular level) to enable applications. Whitelist policies are generally better positioned to meet regulatory requirements because only traffic that must be allowed to conduct business is allowed. Other traffic is blocked and does not need to be monitored to assure that unwanted activity is not occurring. This reduces the volume of data that will be forwarded to an IDS or IPS, and also minimizes the number of log entries that must be reviewed in the event of an intrusion or data loss.

Inversely, a blacklist policy only denies traffic that specifically poses the greatest risk to centralized data resources. A blacklist policy is simpler to maintain and less likely to interfere with network applications. Cisco ASA firewalls implicitly end access lists with a deny-all rule; blacklist policies include an explicit rule, prior to the implicit deny-all rule, to allow any traffic that is not explicitly allowed or denied.

Figure 29 - Blacklist policy example
[Figure: Telnet and SNMP traffic are explicitly denied; other data is allowed.]

Figure 30 - Whitelist policy example
[Figure: Microsoft Data, SQL, and DNS/HTTP/HTTPS traffic are explicitly permitted; Xterm, FTP, SNMP, MSRPC, and other data are denied.]

A whitelist policy is the best-practice option if you have the opportunity to examine the network's requirements and adjust the policy to avoid interfering with desired network activity. A blacklist policy that blocks high-risk traffic offers a lower-impact, but less secure, option (compared to a whitelist policy) in cases where a detailed study of the network's application activity is impractical, or if the network availability requirements prohibit application troubleshooting. If identifying all of the application requirements is not practical, you can apply a blacklist policy with logging enabled to generate a detailed history of the policy.

To effectively deploy security between the various functional segments of a business's network, you should seek the highest level of detail possible regarding the expected network behaviors. If you have greater detail of the expectations, you will be better positioned to define a security policy that enables a business's application traffic and performance requirements while optimizing security. With details about its network's behavior in hand, an organization can more easily develop an effective whitelist policy.

Whether you choose a whitelist or blacklist policy basis, consider IDS or IPS deployment for controlling malicious activity on otherwise trustworthy application traffic. IDS or IPS can aid with forensics to determine the origin of a data breach, and IPS can detect and prevent attacks as they occur and provide detailed information to track the malicious activity to its source. IDS or IPS may also be required by the regulatory oversight to which a network is subject (for example, PCI 2.0).

Reader Tip

A detailed examination of regulatory compliance considerations exceeds the scope of this document. At a minimum, you should include industry regulation in your network security design.
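As an illustration of the two policy styles on Cisco ASA, consider the following access lists. The list names, addresses, and ports here are hypothetical and are not part of this design's rulebase; they only show the structural difference between the approaches.

```
! Blacklist style: explicitly deny the highest-risk traffic, then explicitly
! permit everything else ahead of the implicit deny-all rule
access-list DC-BLACKLIST extended deny tcp any any eq telnet
access-list DC-BLACKLIST extended deny udp any any eq snmp
access-list DC-BLACKLIST extended permit ip any any
!
! Whitelist style: permit only the traffic the application requires;
! the implicit deny-all at the end of the list blocks everything else
access-list DC-WHITELIST extended permit tcp any host 10.4.54.80 eq www
access-list DC-WHITELIST extended permit tcp any host 10.4.54.80 eq https
```

Note that the blacklist needs the explicit permit at the end to override the implicit deny-all, while the whitelist deliberately relies on it.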

Deployment Details

Data center security deployment is addressed in five discrete processes:

• Configuring Cisco ASA Firewall Connectivity—Describes configuring network connections for the Cisco ASA firewalls on the Cisco Nexus 5500UP data center core.
• Configuring the Data Center Firewall—Describes configuring Cisco ASA initial setup and the connections to the data center core.
• Configuring Firewall High Availability—Describes configuring the high availability active/standby state for the firewall pair.
• Evaluating and Deploying Firewall Security Policy—Outlines the process for identifying security policy needs and applying a configuration to meet requirements.
• Deploying Cisco IPS—Integrates connectivity and policy configuration in one process.

Process

Configuring Cisco ASA Firewall Connectivity

1. Configure firewall VLANs on Nexus 5500s
2. Configure port channels on core switches

Complete the following procedures to configure connectivity between the Cisco ASA chassis and the core.

Note that this design describes a configuration wherein the Cisco ASA firewalls are connected to the Nexus 5500UP data center core switches by using a pair of 10-Gigabit Ethernet interfaces in an EtherChannel. The Cisco ASA firewall connects between the data center core–routed interface and the protected VLANs that also reside on the switches.

Connect the interfaces on the primary Cisco ASA firewall, and the secondary Cisco ASA firewall, to both Cisco Nexus 5500 data center core switches as shown in Figure 31.

Cisco ASA network ports are connected as follows:

• Firewall-A Ten Gigabit Ethernet 0/8 connects to the Cisco Nexus 5500UP switch-A Ethernet 1/1
• Firewall-A Ten Gigabit Ethernet 0/9 connects to the Cisco Nexus 5500UP switch-B Ethernet 1/1
• Firewall-B Ten Gigabit Ethernet 0/8 connects to the Cisco Nexus 5500UP switch-A Ethernet 1/2
• Firewall-B Ten Gigabit Ethernet 0/9 connects to the Cisco Nexus 5500UP switch-B Ethernet 1/2
• Gigabit Ethernet 0/1 connects via a crossover or straight-through Ethernet cable between the two firewalls for the failover link

Table 7 - Data Center firewall VLANs

VLAN   IP address        Trust state   Use
153    10.4.53.1 /25     Untrusted     Firewall to data center core routing
154    10.4.54.X /24     Trusted       Firewall protected VLAN
155    10.4.55.X /24     Trusted       Firewall + IPS protected VLAN

Procedure 1

Configure firewall VLANs on Nexus 5500s

Step 1: Configure the outside (untrusted) and inside (trusted) VLANs on Cisco Nexus 5500UP data center core switch-A.
vlan 153
  name FW_Outside
vlan 154
  name FW_Inside_1
vlan 155
  name FW_Inside_2

Step 2: Configure the Layer 3 SVI for VLAN 153 on Cisco Nexus 5500UP data center core switch-A. Set the HSRP address for the default gateway to 10.4.53.1 and the HSRP priority for this switch to 110.
interface Vlan153
  no shutdown
  description FW_Outside
  no ip redirects
  ip address 10.4.53.2/25
  ip router eigrp 100
  ip passive-interface eigrp 100
  ip pim sparse-mode
  hsrp 153
    priority 110
    ip 10.4.53.1

Step 3: Configure static routes pointing to the trusted subnets behind the Cisco ASA firewall on Cisco Nexus 5500UP data center core switch-A.
ip route 10.4.54.0/24 Vlan153 10.4.53.126
ip route 10.4.55.0/24 Vlan153 10.4.53.126

Step 4: Redistribute the trusted subnets into the existing EIGRP routing process on the first Cisco Nexus 5500UP data center core switch. This design uses route maps to control which static routes will be redistributed.
route-map static-to-eigrp permit 10
  match ip address 10.4.54.0/24
route-map static-to-eigrp permit 20
  match ip address 10.4.55.0/24
!
router eigrp 100
  redistribute static route-map static-to-eigrp

Step 5: Configure the outside (untrusted) and inside (trusted) VLANs on Cisco Nexus 5500UP data center core switch-B.
vlan 153
  name FW_Outside
vlan 154
  name FW_Inside_1
vlan 155
  name FW_Inside_2

Step 6: Configure the Layer 3 SVI for VLAN 153 on Cisco Nexus 5500UP data center core switch-B. Set the HSRP address for the default gateway to 10.4.53.1 and leave the HSRP priority for this switch at the default setting.
interface Vlan153
  no shutdown
  description FW_Outside
  no ip redirects
  ip address 10.4.53.3/25
  ip router eigrp 100
  ip passive-interface eigrp 100
  ip pim sparse-mode
  hsrp 153
    ip 10.4.53.1

Step 7: Configure static routes pointing to the trusted subnets behind the Cisco ASA firewall on Cisco Nexus 5500UP data center core switch-B.
ip route 10.4.54.0/24 Vlan153 10.4.53.126
ip route 10.4.55.0/24 Vlan153 10.4.53.126

Step 8: Redistribute the trusted subnets into the existing EIGRP routing process on Cisco Nexus 5500UP data center core switch-B. This design uses route maps to control which static routes will be redistributed.
route-map static-to-eigrp permit 10
  match ip address 10.4.54.0/24
route-map static-to-eigrp permit 20
  match ip address 10.4.55.0/24
!
router eigrp 100
  redistribute static route-map static-to-eigrp
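As an optional, illustrative verification (these are standard NX-OS show commands offered as an aid, not steps from the guide), you can confirm on each core switch that the static routes toward the firewall exist and that the HSRP gateway is active before moving on:

```
show ip route static
show ip eigrp topology
show hsrp brief
```

The static routes for the trusted subnets should list the firewall's outside address as the next hop, and the EIGRP topology on neighboring routers should show them as external routes after redistribution.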

Procedure 2

Configure port channels on core switches

The Cisco ASA firewalls protecting applications and servers in the data
center will be dual-homed to each of the data center core Cisco Nexus
5500UP switches by using EtherChannel links.
Figure 31 - Firewall to data center core switch connections
[Figure: Data Center Firewall-A and Firewall-B are joined by a failover cable; each firewall connects with a port channel (Po Ch-53 with vPC-53, and Po Ch-54 with vPC-54) to the Nexus 5500UP Ethernet vPC switch fabric, which connects to the Cisco SBA LAN core.]

Dual-homed or multichassis EtherChannel connectivity to the Cisco Nexus
5500UP switches uses vPCs, which allow Cisco ASA to connect to both of
the data center core switches with a single logical EtherChannel.
Step 1: Configure the physical interfaces that will make up the port channels on Cisco Nexus 5500UP data center core switch-A.
interface Ethernet1/1
description DC5585a Ten0/8
channel-group 53 mode active
!
interface Ethernet1/2
description DC5585b Ten0/8
channel-group 54 mode active

interface port-channel53
switchport mode trunk
switchport trunk allowed vlan 153-155
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 53
!
interface port-channel54
switchport mode trunk
switchport trunk allowed vlan 153-155
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 54

The port channels are created as vPC port channels, because the fabric
interfaces are dual-homed EtherChannels to both Nexus 5500UP data
center core switches.


Step 2: Configure the logical port-channel interfaces on data center core
switch-A. The physical interfaces tied to the port channel will inherit the settings from the logical port-channel interface. Assign the QoS policy created
in Procedure 3, “Configure QoS policies,” to the port channel interfaces.

Tech Tip
The default interface speed on the Cisco Nexus 5500 Ethernet ports is 10-Gigabit. If you are using a 1-Gigabit Ethernet SFP, you must program the interface for 1-Gigabit operation with the speed 1000 command on either the port-channel interface or the physical interfaces.

When you assign the channel group to a physical interface, it creates the
logical EtherChannel (port-channel) interface that will be configured in the
next step.


Step 3: Apply the following configuration to Cisco Nexus 5500UP data center core switch-B.
interface Ethernet1/1
description DC5585a Ten0/9
channel-group 53 mode active
!
interface Ethernet1/2
description DC5585b Ten0/9
channel-group 54 mode active
!
interface port-channel53
switchport mode trunk
switchport trunk allowed vlan 153-155
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 53
!
interface port-channel54
switchport mode trunk
switchport trunk allowed vlan 153-155
service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
vpc 54
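After both switches are configured, a quick status check can confirm that the firewall-facing port channels and their vPCs formed correctly. These are standard NX-OS show commands offered as an illustrative aid rather than a step from the guide:

```
show port-channel summary
show vpc brief
```

In the output, port-channels 53 and 54 should show their member 10-Gigabit interfaces bundled, and the vPC brief output should list vPC 53 and vPC 54 in the up state on both core switches.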

Process

Configuring the Data Center Firewall
1. Configure initial Cisco ASA settings
2. Configure firewall connectivity
3. Configure firewall static route to the core
4. Configure user authentication
5. Configure time synchronization and logging
6. Configure device management protocols

You apply the configuration for this process by using CLI through the console port on the Cisco ASA firewall that is the primary unit of the high-availability pair. The standby unit synchronizes the configuration from the primary unit when it is programmed in the next process, "Configuring Firewall High Availability."
The factory default password for enable mode is <CR>.
Table 8 - Cisco ASA 5500X firewall and IPS module addressing

ASA firewall failover status   Firewall IP address   IPS module management IP address
Primary                        10.4.53.126 /25       10.4.63.21 /24
Secondary                      10.4.53.125 /25       10.4.63.23 /24

Table 9 - Common network services used in the deployment examples

Service                              Address
Domain name                          cisco.local
Active Directory, DNS, DHCP server   10.4.48.10
Cisco ACS server                     10.4.48.15
NTP server                           10.4.48.17

Procedure 1

Configure initial Cisco ASA settings

Connect to the console of the Cisco ASA firewall and perform the following
global configuration.
Step 1: Select anonymous monitoring preference. When you enter configuration mode for an unconfigured unit, you are prompted for anonymous
reporting. You are given a choice to enable anonymous reporting of error
and health information to Cisco. Select the choice appropriate for your
organization’s security policy.
*************************** NOTICE ***************************
Help to improve the ASA platform by enabling anonymous
reporting, which allows Cisco to securely receive minimal
error and health information from the device. To learn more
about this feature, please visit: http://www.cisco.com/go/
smartcall

Would you like to enable anonymous error reporting to help
improve the product? [Y]es, [N]o, [A]sk later: N

Step 2: Configure the Cisco ASA firewall host name to make it easy to
identify.
hostname DC5585ax

Step 3: Disable the dedicated management port. This design does not use it.
interface Management0/0
shutdown

Step 4: Configure local user authentication.
username [username] password [password]

Tech Tip

All passwords in this document are examples and should not be used
in production configurations. Follow your company's policy, or—if no
policy exists—create a password using a minimum of eight characters
with a combination of uppercase, lowercase, and numbers.

Step 5: Configure the enable password.
enable password [password]

Procedure 2

Configure firewall connectivity

Two 10-Gigabit Ethernet links connect each Cisco ASA chassis to the two
core Cisco Nexus switches. The two interfaces are paired in a port-channel
group. Subinterfaces are created on the port channel for the outside
VLAN 153 and all the protected VLANs inside (154 and 155). Each interface
created will be assigned the correct VLAN, an appropriate name, a security
level, and an IP address and netmask.

[Figure: Cisco ASA 5585-X with IPS—active and standby firewalls attached
to the data center core, with the secure VLANs protected by the firewall
and IPS, and the open (non-secure) VLANs toward the Internet and
LAN/WAN]

All interfaces on Cisco ASA have a security-level setting. The higher the
number, the more trusted the interface, relative to other interfaces. By
default, the inside interface is assigned 100, the highest security level. The
outside interface is assigned 0. By default, traffic can pass from a
high-security interface to a lower-security interface. In other words, traffic
from an inside network is permitted to an outside network, but not
conversely.
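As a rough illustration of the default trust behavior described above, the rule can be modeled as "traffic is implicitly permitted only when it flows from a higher security level to a strictly lower one." This is a conceptual sketch with invented names, not Cisco tooling or an ASA API.

```python
# Illustrative model of Cisco ASA default security-level behavior.
# Interface names and levels mirror this design; the function is a
# conceptual sketch, not part of any Cisco product.

INTERFACE_LEVELS = {
    "outside": 0,        # least trusted
    "DC-InsideFW": 75,   # protected data center VLAN
    "DC-InsideIPS": 75,  # protected VLAN inspected by IPS
}

def implicitly_permitted(src_if: str, dst_if: str) -> bool:
    """Traffic is implicitly allowed only from a higher security level
    to a strictly lower one; anything else needs an access rule."""
    return INTERFACE_LEVELS[src_if] > INTERFACE_LEVELS[dst_if]

# Inside hosts can reach outside by default...
print(implicitly_permitted("DC-InsideFW", "outside"))       # True
# ...but outside traffic needs an explicit access rule to get in.
print(implicitly_permitted("outside", "DC-InsideFW"))       # False
# Equal security levels are also blocked unless configured otherwise.
print(implicitly_permitted("DC-InsideFW", "DC-InsideIPS"))  # False
```

The equal-level case is why the two protected VLANs (both level 75) cannot talk through the firewall by default; on a real ASA that behavior is governed by explicit access rules or the same-security-traffic command.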


Step 1: Configure the port channel group by using the two 10-Gigabit
Ethernet interfaces.
interface Port-channel10
description ECLB Trunk to 5548 Switches
no shutdown
!
interface TenGigabitEthernet0/8
description Trunk to DC5548x eth1/1
channel-group 10 mode passive
no shutdown
!
interface TenGigabitEthernet0/9
description Trunk to DC5548x eth1/2
channel-group 10 mode passive
no shutdown
Step 2: Configure the subinterfaces for the three VLANs: VLAN 153 outside, VLAN 154 inside the firewall, and VLAN 155 inside the firewall with IPS.
interface Port-channel10.153
description DC VLAN Outside the FW
vlan 153
nameif outside
security-level 0
ip address 10.4.53.126 255.255.255.128 standby 10.4.53.125
no shutdown
!
interface Port-channel10.154
description DC VLAN Inside the Firewall
vlan 154
nameif DC-InsideFW
security-level 75
ip address 10.4.54.1 255.255.255.0 standby 10.4.54.2
no shutdown
!
interface Port-channel10.155
description DC VLAN Inside the FW w/ IPS
vlan 155
nameif DC-InsideIPS
security-level 75
ip address 10.4.55.1 255.255.255.0 standby 10.4.55.2
no shutdown


Procedure 3

Configure firewall static route to the core

Because the Cisco ASAs are the gateway to the secure VLANs in the data
center, the Cisco ASA pair is configured to use a static route to the HSRP
address of the Cisco Nexus switches on outside VLAN 153.
Step 1: Configure the static route pointing to the data center core HSRP
address on the Cisco ASA pair.
route outside 0.0.0.0 0.0.0.0 10.4.53.1 1
Procedure 4

Configure user authentication

(Optional)
If you want to reduce operational tasks per device, configure centralized
user authentication by using the TACACS+ protocol to authenticate
management logins on the infrastructure devices to the AAA server.

As networks scale in the number of devices to maintain, the operational
burden of maintaining local user accounts on every device also scales. A
centralized AAA service reduces operational tasks per device and provides
an audit log of user access for security compliance and root-cause analysis.
When AAA is enabled for access control, it controls all management access
to the network infrastructure devices (SSH and HTTPS).

Reader Tip
The AAA server used in this architecture is the Cisco Secure
Access Control Server (ACS). Configuration of Cisco Secure ACS
is discussed in the Cisco SBA—Borderless Networks Device
Management Using ACS Deployment Guide.

TACACS+ is the primary protocol used to authenticate management logins
on the infrastructure devices to the AAA server. A local AAA user database
was defined already to provide a fallback authentication source in case the
centralized TACACS+ server is unavailable.
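The fallback order described above—central TACACS+ first, local accounts only when the server is unreachable—can be sketched in a few lines. All function and variable names here are invented for illustration; this is not ASA code.

```python
# Conceptual sketch of AAA login fallback: consult the TACACS+ server
# first, and fall back to the local user database only when no server
# responds. A server that responds with a rejection is final; fallback
# happens on unavailability, not on a failed login.

LOCAL_USERS = {"admin": "example-password"}  # fallback database

def tacacs_auth(user, password, server_up):
    """Stand-in for a TACACS+ exchange; None means unreachable."""
    if not server_up:
        return None
    return user == "admin" and password == "example-password"

def login(user, password, server_up):
    verdict = tacacs_auth(user, password, server_up)
    if verdict is None:                            # AAA-SERVER unavailable
        return LOCAL_USERS.get(user) == password   # LOCAL fallback
    return verdict                                 # server's answer is final

print(login("admin", "example-password", server_up=True))   # True
print(login("admin", "example-password", server_up=False))  # True (local)
print(login("admin", "wrong-password", server_up=True))     # False
```

Note the design point this mirrors: a rejected password with the server reachable is not retried against the local database, which is why the local account is a safety net rather than a second chance.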


Step 1: Configure the TACACS+ server.
aaa-server AAA-SERVER protocol tacacs+
aaa-server AAA-SERVER (outside) host 10.4.48.15 SecretKey

Step 2: Configure the appliance's management authentication to use the
TACACS+ server first, and then the local user database if the TACACS+
server is unavailable.
aaa authentication enable console AAA-SERVER LOCAL
aaa authentication ssh console AAA-SERVER LOCAL
aaa authentication http console AAA-SERVER LOCAL
aaa authentication serial console AAA-SERVER LOCAL

Step 3: Configure the appliance to use AAA to authorize management users.
aaa authorization exec authentication-server

Tech Tip

User authorization on the Cisco ASA firewall, unlike Cisco IOS devices,
does not automatically present the user with the enable prompt if they
have a privilege level of 15.

Procedure 5

Configure time synchronization and logging

Logging and monitoring are critical aspects of network security devices to
support troubleshooting and policy-compliance auditing.

NTP is designed to synchronize time across a network of devices. An NTP
network usually gets its time from an authoritative time source, such as a
radio clock or an atomic clock attached to a time server. NTP then
distributes this time across the organization's network. Network devices
should be programmed to synchronize to a local NTP server in the network.
The local NTP server typically references a more accurate clock feed from
an outside source.

Step 1: Configure the NTP server IP address.
ntp server 10.4.48.17

Step 2: Configure the time zone.
clock timezone PST -8 0
clock summer-time PDT recurring

Step 3: Configure which logs to store on the appliance.

There is a range of detail that can be logged on the appliance.
Informational-level logging provides the ideal balance between detail and
log-message volume. Lower log levels produce fewer messages, but they
do not produce enough detail to effectively audit network activity. Higher
log levels produce a larger volume of messages but do not add sufficient
value to justify the number of messages logged.
logging enable
logging buffered informational

Procedure 6

Configure device management protocols

Cisco Adaptive Security Device Manager (ASDM) requires that the
appliance's HTTPS server be available. Be sure that the configuration
includes networks where administrative staff has access to the device
through Cisco ASDM; the appliance can offer controlled Cisco ASDM
access for a single address or management subnet (in this case,
10.4.48.0/24).

HTTPS and SSH are more secure replacements for the HTTP and Telnet
protocols. They use SSL and TLS to provide device authentication and data
encryption. Use SSH and HTTPS protocols in order to more securely
manage the device. Both protocols are encrypted for privacy, and the
unsecure protocols—Telnet and HTTP—are turned off.

SNMP is enabled to allow the network infrastructure devices to be managed
by a network management system (NMS). SNMPv2c is configured for a
read-only community string.

Step 1: Allow internal administrators to remotely manage the appliance
over HTTPS and SSH.
domain-name cisco.local
http server enable
http 10.4.48.0 255.255.255.0 outside
ssh 10.4.48.0 255.255.255.0 outside
ssh version 2

Step 2: Specify the list of supported SSL encryption algorithms for Cisco
ASDM.
ssl encryption aes256-sha1 aes128-sha1 3des-sha1

Step 3: Configure the appliance to allow SNMP polling from the NMS.
snmp-server host outside 10.4.48.35 community [cisco]
snmp-server community [cisco]

Process

Configuring Firewall High Availability

1. Configure the primary appliance for HA
2. Configure the secondary Cisco ASA for HA

Cisco ASAs are set up as a highly available active/standby pair.
Active/standby is used, rather than an active/active configuration, because
this allows the same appliance to be used for firewall and VPN services if
required in the future (VPN functionality is disabled on the appliance in
active/active configuration). In an active/standby configuration, only one
device is passing traffic at a time; thus, the Cisco ASAs must be sized so
that the entire traffic load can be handled by either device in the pair.

Both units in the failover pair must be the same model, with identical
feature licenses and IPS (if the software module is installed). For failover to
be enabled, the secondary ASA unit needs to be powered up and cabled to
the same networks as the primary unit.

One interface on each appliance is configured as the state-synchronization
interface, which the appliances use to share configuration updates,
determine which device in the high availability pair is active, and exchange
state information for active connections. The failover interface carries the
state synchronization information. All session state data is replicated from
the primary to the secondary unit through this interface. There can be a
substantial amount of data, and it is recommended that this be a dedicated
interface.

In the event that the active appliance fails or needs to be taken out of
service for maintenance, the secondary appliance assumes all active
firewall and IPS functions.

Procedure 1

Configure the primary appliance for HA

Step 1: Enable failover on the primary appliance, and then assign it as the
primary unit.
failover
failover lan unit primary

Step 2: Configure the failover interface. Enter a key for the failover that you
will later enter on the secondary appliance to match.
failover lan interface failover GigabitEthernet0/1
failover key [SecretKey]
failover replication http
failover link failover GigabitEthernet0/1

Step 3: If you want to speed up failover in the event of a device or link
failure, you can tune the failover timers. By default, the appliance can take
from 2 to 25 seconds to recover from a failure, depending on the failure.
Tuning the failover poll times can reduce that to 0.5 to 5 seconds, which
minimizes the downtime a user experiences during failover. On an
appropriately sized appliance, the poll times can be tuned down without
performance impact to the appliance. It is recommended that you do not
reduce the failover timer intervals below the values in this guide.
failover polltime unit msec 200 holdtime msec 800
failover polltime interface msec 500 holdtime 5

Step 4: Configure the failover interface IP address.
failover interface ip failover 10.4.53.130 255.255.255.252 standby 10.4.53.129
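The relationship between poll/hold timers and worst-case failure detection can be sketched numerically. The model below is a deliberate simplification—it assumes a peer is declared failed when the hold time expires, which can begin up to one poll interval after the actual failure—and is not the real ASA failover state machine.

```python
# Back-of-the-envelope failover detection bound (illustrative only).
# Assumption: detection takes at most holdtime plus one poll interval
# of slack; the actual ASA failover logic is more involved.

def worst_case_detection_ms(polltime_ms: int, holdtime_ms: int) -> int:
    """Simplified upper bound on time to declare the peer failed."""
    return holdtime_ms + polltime_ms

# Tuned unit timers from this guide: poll 200 ms, hold 800 ms.
print(worst_case_detection_ms(200, 800))     # 1000 -> about 1 second

# ASA default unit timers (poll 1 s, hold 15 s).
print(worst_case_detection_ms(1000, 15000))  # 16000 -> about 16 seconds
```

This rough arithmetic is consistent with the guide's observation that tuning brings recovery from the multi-second default range down toward a second or less, and with its advice not to tighten the timers further than the values shown.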

Step 5: Enable the failover interface.
interface GigabitEthernet0/1
no shutdown

Step 6: Configure failover to monitor the inside and outside interfaces so
that the active firewall will defer to the standby firewall if connectivity is lost
on the data center VLANs.
monitor-interface outside
monitor-interface DC-InsideFW
monitor-interface DC-InsideIPS

Procedure 2

Configure the secondary Cisco ASA for HA

Step 1: On the secondary Cisco ASA, enable failover and assign it as the
secondary unit.
failover
failover lan unit secondary

Step 2: Configure the failover interface.
failover lan interface failover GigabitEthernet0/1
failover key [SecretKey]
failover replication http
failover link failover GigabitEthernet0/1

Step 3: Configure the failover interface IP address.
failover interface ip failover 10.4.53.130 255.255.255.252 standby 10.4.53.129

Step 4: Enable the failover interface.
interface GigabitEthernet0/1
no shutdown

The Cisco ASA units synchronize their configuration from the primary unit
to the secondary.

Step 5: Verify high availability standby synchronization between the Cisco
ASA devices. On the CLI of the primary appliance, issue the show failover
state command.
DC5585ax# show failover state
               State          Last Failure Reason   Date/Time
This host  -   Primary
               Active         None
Other host -   Secondary
               Standby Ready  None                  15:18:12 UTC May 25 2012

====Configuration State===
        Sync Done
====Communication State===
        Mac set

Step 6: Save your firewall configuration. On the CLI of the primary
appliance, issue the copy running-config startup-config command. This will
save the configuration on the primary appliance and replicate the
configuration to the secondary appliance.
copy running-config startup-config

Process

Evaluating and Deploying Firewall Security Policy

1. Evaluate security policy requirements
2. Deploy the appropriate security policy

This process describes the steps required to evaluate which type of policy
fits an organization's data center security requirements and provides the
procedures necessary to apply these policies.

Procedure 1

Evaluate security policy requirements

Step 1: Evaluate security policy requirements by answering the following
questions:
• What applications will be served from the secure data center?
• Can the applications' traffic be characterized at the protocol level?
• Is a detailed description of application behavior available to facilitate
troubleshooting if the security policy interferes with the application?
• What is the network's baseline performance expectation between the
controlled and uncontrolled portions of the network?
• What is the peak level of throughput that security controls will be
expected to handle, including bandwidth-intensive activity such as
workstation backups or data transfers to a secondary data replication
site?

Step 2: For each data center VLAN, determine which security policy
enables application requirements. Each firewall VLAN requires either a
permissive (blacklist) or restrictive (whitelist) security policy.

Table 10 - Sample policies for servers

  Source                     Destination           IP address          Protocols allowed
  Any                        IT_Web_Server         10.4.54.80          http, https
  Any                        Finance_Web_Server    10.4.54.81          http, https
  Any                        Hr_Web_Server         10.4.55.80          http, https
  Any                        Research_Web_Server   10.4.55.81          http, https
  IT_Management_Host_Range   Server Room VLANs     10.4.48.224 – 254   ssh, snmp

Procedure 2

Deploy the appropriate security policy

Network security policy configuration can vary greatly among organizations
and is dependent on the policy and management requirements of the
organization. Thus, examples here should be used as a basis for security
policy configuration.

After the system setup and high availability is complete via CLI, you will use
the integrated GUI management tool, Cisco ASDM, to program security
policies:
• Network Objects—such as hosts and IP subnets
• Firewall access rules

If you are deploying a whitelist security policy, complete Option 1 of this
procedure. If you are deploying a blacklist security policy, complete
Option 2 of this procedure.

Option 1. Deploy a whitelist security policy

A basic whitelist data-service policy can be applied to allow common
business services such as HTTP and HTTPS access to your servers.

Step 1: Using a secure HTTP session (Example: https://10.4.53.126),
navigate to the Cisco ASA firewall outside interface programmed in Step 2
of Procedure 2, "Configure firewall connectivity," and then click Run ASDM.
Cisco ASDM starts from a Java Web Start application.

Step 2: Enter the username and password configured for the Cisco ASA
firewall in Step 4 of Procedure 1, "Configure initial Cisco ASA settings."

Step 3: In the Cisco ASDM work pane, navigate to Configuration >
Firewall > Objects > Network Objects/Groups.

Step 4: Click Add > Network Object.
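The whitelist entries in Table 10 amount to data the firewall evaluates per flow: a connection is allowed only when it matches an entry, and everything unmatched is implicitly denied. The sketch below models that idea with invented structures; it is not how ASDM or the ASA represent rules internally.

```python
# Rough model of the Table 10 whitelist: a flow is permitted only if it
# matches a listed (destination, service) pair; anything unmatched is
# denied. Structures and names are illustrative only.

WHITELIST = [
    # (source, destination IP, allowed services)
    ("any", "10.4.54.80", {"http", "https"}),   # IT_Web_Server
    ("any", "10.4.54.81", {"http", "https"}),   # Finance_Web_Server
    ("any", "10.4.55.80", {"http", "https"}),   # Hr_Web_Server
    ("any", "10.4.55.81", {"http", "https"}),   # Research_Web_Server
]

def allowed(dst_ip: str, service: str) -> bool:
    for _src, dst, services in WHITELIST:
        if dst == dst_ip and service in services:
            return True
    return False  # whitelist: implicit deny for anything unmatched

print(allowed("10.4.54.80", "https"))  # True  (permitted web service)
print(allowed("10.4.54.80", "ssh"))    # False (service not whitelisted)
print(allowed("10.4.54.99", "http"))   # False (host not listed)
```

The key property to notice is the final `return False`: in a whitelist policy the default action is deny, so every new application must be explicitly added before it will work.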

Step 5: On the Add Network Object dialog box, enter the following
information, and then click OK.
• Name—IT_Web_Server
• Type—Host
• IP Version—IPv4
• IP Address—10.4.54.80
• Description—IT Web Server

Next you will create an access list to permit HTTP and HTTPS traffic from
the outside to the server.

Step 6: Navigate to Configuration > Firewall > Access Rules.

Step 7: Click Add > Add Access Rule.

Step 8: On the Add Access Rule dialog box, enter the following
information, and then click OK.
• Interface—Any
• Action—Permit
• Source—any
• Destination—Network Object IT_Web_Server
• Service—tcp/http and tcp/https
• Description—HTTP and HTTPS to IT Web Server

Step 9: In the Access Rules pane, click Apply. This saves the configuration.

Step 10: Repeat Step 3 through Step 9 of this procedure for the remaining
servers.

Next, specify which resources certain users (for example, IT management
staff or network users) can use to access management resources. In this
example, management hosts in the IP address range 10.4.48.224–254 are
allowed SSH and SNMP access to server room subnets.

Step 11: Navigate to Configuration > Firewall > Objects > Network
Objects/Groups.

Step 12: Click Add > Network Object.

Step 13: On the Add Network Object dialog box, enter the following
information, and then click OK.
• Name—IT_Management_Host_Range
• Type—Range
• IP Version—IPv4
• Start Address—10.4.48.224
• End Address—10.4.48.254
• Description—IT Management Systems Range

Next you will create a service group containing SSH and SNMP protocols,
and you create an access list to permit the SSH and SNMP traffic service
group from the network management range to the server subnets.

Step 14: Navigate to Configuration > Firewall > Objects > Service
Objects/Groups.

Step 15: Click Add > Service Group.

Step 16: On the Add Service Group dialog box, enter the following
information:
• Group Name—Mgmt-Traffic
• Description—Management Traffic SSH and SNMP

Step 17: In the Existing Service/Service Group list, choose tcp > ssh and
udp > snmp, click Add, and then click OK.

Step 18: Navigate to Configuration > Firewall > Access Rules.

Step 19: Click Add > Add Access Rule.

Step 20: On the Add Access Rule dialog box, enter the following
information, and then click OK.
• Interface—outside
• Action—Permit
• Source—IT_Management_Host_Range
• Destination—DC-InsideFW-network and DC-InsideIPS-network
• Service—Mgmt-Traffic
• Description—Permit Mgmt Traffic from Mgmt Range to DC Secure
VLANs

Step 21: In the Access Rules pane, click Apply. This saves the
configuration.

Option 2. Deploy a blacklist security policy

If an organization does not have the desire or resources to maintain a
granular, restrictive policy to control access between centralized data and
the user community, a simpler, easy-to-deploy policy that limits only the
highest-risk traffic may be more attractive. This policy is typically
configured such that only specific services' access is blocked; all other
traffic is permitted.

In this example you will allow SNMP queries and SSH requests from a
specific address range that will be allocated for IT staff. Network
administrative users may need to issue SNMP queries from desktop
computers to monitor network activity and SSH to connect to devices.

Step 1: Navigate to Configuration > Firewall > Objects > Network
Objects/Groups.

Step 2: Click Add > Network Object.

Step 3: On the Add Network Object dialog box, enter the following
information, and then click OK.
• Name—IT_Management_Host_Range
• Type—Range
• IP Version—IPv4
• Start Address—10.4.48.224
• End Address—10.4.48.254
• Description—IT Management Systems Range

Next you will create a service group containing SSH and SNMP protocols,
and you will also create an access list to permit the SSH and SNMP traffic
service group from the network management range to the server subnets.

Step 4: Navigate to Configuration > Firewall > Objects > Service
Objects/Groups.

Step 5: Click Add > Service Group.

Step 6: On the Add Service Group dialog box, enter the following
information:
• Group Name—Mgmt-Traffic
• Description—Management Traffic SSH and SNMP

Step 7: In the Existing Service/Service Group list, choose and click Add for
tcp > ssh and then udp > snmp, and then click OK.

Step 8: Navigate to Configuration > Firewall > Access Rules.

Step 9: Click Add > Add Access Rule.

Step 10: On the Add Access Rule dialog box, enter the following
information, and then click OK.
• Interface—outside
• Action—Permit
• Source—IT_Management_Host_Range
• Destination—DC-InsideFW-network and DC-InsideIPS-network
• Service—Mgmt-Traffic
• Description—Permit Mgmt Traffic from Mgmt Range to DC Secure
VLANs

Next, you block SSH and SNMP to and from all other hosts.

Step 11: Navigate to Configuration > Firewall > Access Rules.

Step 12: Click Add > Add Access Rule.

Step 13: On the Add Access Rule dialog box, enter the following
information, and then click OK.
• Interface—any
• Action—Deny
• Source—any
• Destination—any
• Service—Mgmt-Traffic
• Description—Deny SSH and SNMP from all other hosts

Finally, you add a rule to allow all other traffic to pass to the data center
VLANs.

Step 14: Navigate to Configuration > Firewall > Access Rules.

Step 15: Click Add > Add Access Rule.

Step 16: On the Add Access Rule dialog box, enter the following
information, and then click OK.
• Interface—any
• Action—Permit
• Source—any
• Destination—DC-InsideFW-network and DC-InsideIPS-network
• Description—Permit all other traffic to DC Secure VLANs

Step 17: In the Access Rules pane, click Apply. This saves the
configuration for the Blacklist security policy.
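Rule order matters in a blacklist policy like the one above: the firewall evaluates access rules first-match-wins, so the management permit must precede the broad SSH/SNMP deny, which in turn must precede the final catch-all permit. A rough sketch of that evaluation, with invented structures rather than ASA internals:

```python
# First-match rule evaluation, mirroring the blacklist ordering:
#   1) permit ssh/snmp from the IT management range,
#   2) deny ssh/snmp from everyone else,
#   3) permit all other traffic.
# Structures and the range check are illustrative only.

RULES = [
    ("permit", "10.4.48.224-254", {"ssh", "snmp"}),  # IT mgmt range
    ("deny",   "any",             {"ssh", "snmp"}),  # everyone else
    ("permit", "any",             "any"),            # all other traffic
]

def src_matches(rule_src, src_ip):
    if rule_src == "any":
        return True
    # Hard-coded check for the single 10.4.48.224-254 example range.
    prefix, last = src_ip.rsplit(".", 1)
    return prefix == "10.4.48" and 224 <= int(last) <= 254

def evaluate(src_ip, service):
    for action, rule_src, services in RULES:
        if src_matches(rule_src, src_ip) and (services == "any" or service in services):
            return action  # first match wins; later rules never seen
    return "deny"

print(evaluate("10.4.48.230", "ssh"))  # permit (management range)
print(evaluate("10.4.50.10", "ssh"))   # deny   (blacklisted service)
print(evaluate("10.4.50.10", "http"))  # permit (all other traffic)
```

Swapping rules 1 and 2 would lock out the management stations, which is exactly the mistake the step ordering in Option 2 is designed to avoid.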

Process

Deploying Firewall Intrusion Prevention Systems (IPS)

1. Configure the LAN switch access port
2. Apply initial configuration
3. Complete basic configuration
4. Modify the inline security policy

From a security standpoint, intrusion detection systems (IDS) and intrusion
prevention systems (IPS) are complementary to firewalls because firewalls
are generally access-control devices that are built to block access to an
application or host. In this way, a firewall can be used to remove access
to a large number of application ports, reducing the threat to the servers.
IDS and IPS sensors look for attacks in network and application traffic that
is permitted to go through the firewall. If it detects an attack, the IDS
sensor generates an alert to inform the organization about the activity. IPS
is similar in that it generates alerts due to malicious activity and,
additionally, can apply an action to block the attack before it reaches the
destination.

Promiscuous versus Inline Deployment Modes

There are two primary deployment modes when using IPS sensors:
promiscuous (IDS) or inline (IPS). There are specific reasons for each
deployment model based on risk tolerance and fault tolerance:

• In promiscuous mode (IDS), the sensor inspects copies of packets,
which prevents it from being able to stop a malicious packet when it
sees one. An IDS sensor must use another inline enforcement device in
order to stop malicious traffic. This means that for activity such as
single-packet attacks (for example, slammer worm over User Datagram
Protocol [UDP]), an IDS sensor could not prevent the attack from
occurring. However, an IDS sensor can offer great value when
identifying and cleaning up infected hosts.

• In an inline (IPS) deployment, the sensor inspects the actual data
packets, because the packet flow is sent through the sensor and
returned to Cisco ASA. The advantage IPS mode offers is that when the
sensor detects malicious behavior, the sensor can simply drop the
malicious packet. This allows the IPS device a much greater capacity to
actually prevent attacks.

Deployment Considerations

Use IDS when you do not want to impact the availability of the network or
create latency issues. Use IPS when you need higher security than IDS can
provide and when you need the ability to drop malicious data packets.

Your organization may choose an IPS or IDS deployment depending on
regulatory and application requirements. It is very easy to initially deploy
an IDS, or promiscuous, design and then move to IPS after you understand
the traffic and performance profile of your network and you are comfortable
that production traffic will not be affected. The secure data center design
using a Cisco ASA 5585-X with IPS implements a policy for IPS, which
sends all traffic to the IPS module inline.

Procedure 1

Configure the LAN switch access port

A LAN switch port on the data center Ethernet Out-of-Band Management
switch provides connectivity for the IPS sensor's management interface.

Step 1: Connect the IPS module's management port on each appliance to
the data center Ethernet Out-of-Band Management switch configured
earlier in this guide in Procedure 4, "Configure switch access ports."

Step 2: Ensure that the ports are configured for the management VLAN 163
so that the sensors can route to or directly reach the management station.
interface GigabitEthernet1/0/32
description SR-5585X-IPSa
!
interface GigabitEthernet1/0/34
description SR-5585X-IPSb
!
interface range GigabitEthernet1/0/32, GigabitEthernet1/0/34
switchport access vlan 163
switchport host
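The operational contrast between the two sensor modes discussed above can be caricatured in a few lines. This is a conceptual sketch with invented names, not sensor code: in IDS mode the sensor sees only a copy, so the original packet is forwarded regardless; in IPS mode the sensor sits in the path and can drop.

```python
# Conceptual contrast of promiscuous (IDS) vs inline (IPS) handling.
# Both modes alert on malicious traffic; only inline mode can stop it.

def handle_packet(packet, malicious, mode):
    alerts = []
    if malicious:
        alerts.append(f"alert: {packet}")
    if mode == "ids":
        forwarded = True           # copy inspected; original already delivered
    else:  # "ips"
        forwarded = not malicious  # inline sensor drops the bad packet
    return forwarded, alerts

print(handle_packet("worm-probe", True, "ids"))  # (True, ['alert: worm-probe'])
print(handle_packet("worm-probe", True, "ips"))  # (False, ['alert: worm-probe'])
print(handle_packet("web-get", False, "ips"))    # (True, [])
```

The first two lines capture why a single-packet attack succeeds against an IDS but not an IPS, and the third shows the trade-off the text warns about: an inline sensor is in the forwarding path even for clean traffic, which is why latency and availability are part of the mode decision.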

login: cisco Password:[password] If this is the first time the sensor has been logged into. either module or appliance. and then input a new password.Basic Setup ----. Cisco ASA is managed in-band.63. Table 11 . Use ctrl-c to abort configuration dialog at any prompt. This controls management access to the IPS module.23 /24 Step 1: Connect to the IPS SSP console through the serial console on the IPS SSP module on the front panel of the Cisco ASA 5585-X primary firewall.Step 2: Log in to the IPS device.4.63.10.63. Press Enter at a blank Permit prompt to go to the next step. Default settings are in square brackets ‘[]’.1 Step 6: Define the access list. Procedure 2 Apply initial configuration sensor# setup The IPS module enters the interactive setup.System Configuration Dialog --At any point you may enter a question mark ‘?’ for help.0/24 February 2013 Series Network Security 95 . --.4. Change the password to a value that complies with the security policy of your organization. Enter IP interface [192. the rest of the configuration is accomplished by using Cisco Adaptive Security Device Manager/IPS Device Manager (ASDM/IDM).4. and access lists that allow remote access.4. Use the sensor’s CLI in order to set up basic networking information. Step 4: Define the IPS module’s host name.192. and then press Enter. After these critical pieces of data are entered.4.53.4. Tech Tip You can also gain access to the console on the IPS SSP by using the session 1 command from the CLI of the Cisco ASA’s SSP. launch the System Configuration Dialogue.63.62/24. Tech Tip In this deployment.48. the IPS will display the new host name for the CLI prompt upon the next login to the sensor.125 /25 10. is always managed from the dedicated management port. Note that unlike Cisco IOS devices where the host name instantly changes the CLI prompt to reflect the new host name.250]: 10. 
Current time: Mon Oct 12 23:31:38 2009 Setup Configuration last modified: Mon Oct 12 23:22:27 2009 Enter host name [sensor]: IPS-SSP20-A Step 5: Define the IP address and gateway address for the IPS module’s external management port.1.21 /24 Secondary 10. Modify current access list?[no]: yes Current access list entries: No entries Permit: 10.126 /25 10. and the IPS.168. you are prompted to change the password.Cisco ASA 5585-X firewall and IPS module addressing ASA firewall failover status Firewall IP address IPS module management IP address Primary 10.53.4.21/24. Enter the current password.1. Step 3: At the IPS module’s CLI. gateway address. The default username and password are both cisco. specifically: the IP address. the embedded GUI console.168.

Step 7: Configure the DNS server address, and then accept the default
answer (no) for the next two questions.
Use DNS server for Global Correlation? [no]: yes
DNS server IP address[]: 10.4.48.10
Use HTTP proxy server for Global Correlation? [no]:
Modify system clock settings?[no]:

Note the following:
• An HTTP proxy server address is not needed for a network that is
configured according to this guide.
• You will configure time details in the IPS module's GUI console.

Step 8: For the option to participate in the SensorBase Network, enter
partial and agree to participate based on your security policy. Participation
in the SensorBase Network allows Cisco to collect aggregated statistics
about traffic sent to your IPS.
SensorBase Network Participation level? [off]: partial
...
Do you agree to participate in the SensorBase Network?[no]: yes
...

Step 9: On the System Configuration dialog box, save your configuration
and exit setup by entering 2. The IPS SSP displays your configuration and
a brief menu with four options.
The following configuration was entered.
[removed for brevity]
exit
[0] Go to the command prompt without saving this configuration.
[1] Return to setup without saving this configuration.
[2] Save this configuration and exit setup.
[3] Continue to Advanced setup.
Enter your selection [3]: 2
Warning: DNS or HTTP proxy is required for global correlation
inspection and reputation filtering, but no DNS or proxy servers
are defined.
--- Configuration Saved ---
Complete the advanced setup using CLI or IDM.

Step 10: Repeat this procedure for the IPS sensor installed in the other
Cisco ASA chassis. In Step 4, be sure to use a different host name
(IPS-SSP20-B) and in Step 5, be sure to use a different IP address
(10.4.63.23) on the other sensor's management interface.

Procedure 3

Complete basic configuration

After the basic setup in the System Configuration dialog box is complete,
you will use the startup wizard in the integrated management tool, Cisco
ASDM/IDM, to complete the remaining tasks in order to configure a basic
IPS configuration:
• Configure time settings
• Configure DNS and NTP servers
• Define a basic IDS configuration
• Configure inspection service rule policy
• Assign interfaces to virtual sensors

Using ASDM to configure the IPS module operation allows you to set up
the communications path from the Cisco ASA firewall to the IPS module, as
well as configure the IPS module settings. To use IDM, point your web
browser at https://<sensor-ip-address>.

Step 1: Using a secure HTTP session (Example: https://10.4.53.126),
navigate to the Cisco ASA firewall outside interface programmed in Step 2
of the "Configure firewall connectivity" procedure, and then click Run
ASDM, which runs Cisco ASDM from a Java Web Start application.

Step 2: Enter the username and password configured for the Cisco ASA
firewall in Step 4 of the "Configure initial Cisco ASA settings" procedure.

Step 3: In the Cisco ASDM work pane, click the Intrusion Prevention tab,
enter the IP address, username, and password that you configured for
IPS-SSP20-A access, and then click Continue. Cisco ASDM downloads the
IPS information from the appliance for IPS-SSP20-A.

Step 4: Click Configuration, click the IPS tab, and then click Launch
Startup Wizard.

Step 5: Follow the instructions in the wizard. Note the following:
• On the Sensor Setup page, verify the settings, and then click Next.

• On the next Sensor Setup page, in the Zone Name list, choose the
appropriate time zone. Enter the NTP Server IP address (Example:
10.4.48.17), ensure the Authenticated NTP is cleared, set the
summertime settings, and then click Next.

Tech Tip

NTP is particularly important for security event correlation if you use a
Security Event Information Manager product to monitor security
activity on your network.

• Skip the Virtual Sensors page by clicking Next.
• Skip the Signatures page by clicking Next.
• On the Traffic Allocation page, click Add.

You must now decide the sensor mode. In IPS mode, the sensor is inline in
the traffic path. In this mode, the sensor inspects—and can drop—traffic
that is malicious. Alternatively, in IDS mode, a copy of the traffic is
passively sent to the sensor and the sensor inspects—and can send alerts
about—traffic that is malicious. IPS mode provides more protection from
Internet threats and has a low risk of blocking important traffic at this point
in the network, particularly when it is coupled with reputation-based
technologies. You can deploy IDS mode as a temporary solution to see
what kind of impact IPS would have on the network and what traffic would
be stopped. After you understand the impact on your network's
performance and after you perform any necessary tuning, you can easily
change the sensor to IPS mode. This procedure assigns IPS mode.

• In the Specify traffic for IPS Scan window, in the Interface list, choose DC-InsideIPS, and next to Traffic Inspection Mode, select Inline. Click OK.
• At the bottom of the Traffic Allocation page, click Next.
• Configure the IPS device to automatically pull updates from Cisco.com. On the Auto Update page, select Enable Signature and Engine Updates. Provide a valid cisco.com username and password that holds entitlement to download IPS software updates. Select Daily, enter a time between 12:00 AM and 4:00 AM for the update Start Time, and then select Every Day. Click Finish.

Step 6: When you are prompted if you want to commit your changes to the sensor, click Yes. ASDM/IDM applies your changes and replies with a message that a reboot is required.

Step 7: Click OK, and delay the reboot until the end of this procedure.

Next, you assign interfaces to your virtual sensor.

Step 8: Navigate to Sensor Setup > Policies > IPS Policies.

Step 9: Highlight the vs0 virtual sensor, and then click Edit.

Step 10: On the Edit Virtual Sensor dialog box, for the PortChannel0/0 interface, select Assigned, and then click OK.

Step 11: At the bottom of the main work pane, click Apply.

Next, you reboot the sensor.

Step 12: Navigate to Sensor Management > Reboot Sensor, click Reboot Sensor, and then click OK to approve.
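If you manage the sensors from the CLI rather than IDM, the virtual sensor's interface assignment lives in the analysis-engine service. The following is a sketch based on typical IPS 7.x CLI conventions; the prompts shown are representative, so verify the exact submodes on your sensor:

sensor# configure terminal
sensor(config)# service analysis-engine
sensor(config-ana)# virtual-sensor vs0
sensor(config-ana-vir)# physical-interface PortChannel0/0
sensor(config-ana-vir)# exit
sensor(config-ana)# exit
Apply Changes?[yes]: yes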


There is no configuration synchronization between the two IPS modules like there is between the Cisco ASA firewalls, so you'll need to configure each IPS module separately.

Step 13: Repeat the steps in this procedure for the IPS module in the second Cisco ASA firewall. Note that in Step 1, navigate to the second firewall's outside IP address, and then launch Cisco ASDM (Example: https://10.4.53.125). You are now logging into the secondary active firewall and IPS module pair, so you will log into the IPS SSP20-B module in Step 3 by using IP address 10.4.63.23.

Caution
Do not attempt to modify the firewall configuration on the standby appliance. Configuration changes are only made on the primary appliance.

Procedure 4  Modify the inline security policy

(Optional)

If you opted to run inline mode on an IPS device, the sensor is configured to drop high-risk traffic. By default, this means that if an alert fires with a risk rating of at least 90, or if the traffic comes from an IP address with a negative reputation that raises the risk rating to 90 or higher, the sensor drops the traffic. If the risk rating is raised to 100 because of the source address reputation score, then the sensor drops all traffic from that IP address.

The chance of the IPS dropping traffic that is not malicious when using a risk threshold of 90 is very low. However, if you want to adopt a more conservative policy, raise the risk threshold value to 100.

Step 1: In Cisco ASDM, navigate to Configuration > IPS > Policies > IPS Policies.

Step 2: In the Virtual Sensor panel, right-click the vs0 entry, and then click Edit.

Step 3: In the Event Action Rule work pane, select Deny Packet Inline (Inline), and then click Delete.

Step 4: In the Event Action Rule work pane, click Add.


Step 5: On the Add Event Action Override dialog box, in the Risk Rating list, enter a new value of 100-100, select Deny Packet Inline, and then click OK.
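For reference, the same override can be set from the sensor CLI in the event-action-rules service. This is a sketch only; the rules instance name (rules0) and the submode prompts are assumptions based on typical IPS 7.x configurations, so confirm them on your sensor:

sensor# configure terminal
sensor(config)# service event-action-rules rules0
sensor(config-eve)# overrides deny-packet-inline
sensor(config-eve-ove)# risk-rating-range 100-100
sensor(config-eve-ove)# exit
sensor(config-eve)# exit
Apply Changes?[yes]: yes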

Step 6: In the Edit Virtual Sensor pane, click OK.
Step 7: Click Apply.
Step 8: For the secondary sensor, repeat Step 1 through Step 7.
There is no configuration synchronization between the two sensors.


Application Resiliency

Technology Overview

The network is playing an increasingly important role in the success of a business. Key applications, such as enterprise resource planning, email, e-commerce, and portals, must be available around-the-clock to provide uninterrupted business services. However, the availability of these applications is often threatened by network overloads, as well as server and application failures. As more users work more hours while using key business applications, it becomes even more important to address application availability and performance issues to ensure achievement of business processes and objectives. Application performance, as well as availability, directly affects employee productivity and the bottom line of a company.

Some of the factors that make applications difficult to deploy and deliver effectively over the network include:

• Inflexible application infrastructure—Application infrastructure design has historically been done on an application-by-application basis. This means that the infrastructure used for a particular application is often unique to that application. This type of design tightly couples the application to the infrastructure and offers little flexibility. Because the application and infrastructure are tightly coupled, it is difficult to partition resources and levels of control to match changing business requirements.

• Server availability and load—The mission-critical nature of applications puts a premium on server availability. Despite the benefits of server virtualization technology, the number of physical servers continues to grow based on new application deployments, which in turn increases power and cooling requirements. As a result, resource utilization is often out of balance, resulting in the low-performance resources being overloaded with requests while the high-performance resources remain idle.

• Application security and compliance—Many of the new threats to network security are the result of application- and document-embedded attacks that compromise application performance and availability. Such attacks can also potentially cause the loss of vital application data, while leaving networks and servers unaffected.

One possible solution to improve application performance and availability is to rewrite the application completely to make it network-optimized. However, this requires application developers to have a deep understanding of how different applications respond to bandwidth constraints, delay, jitter, and other network variances. In addition, developers need to accurately predict each end-user's foreseeable access method. This is simply not feasible for every business application, particularly traditional applications that took years to write and customize.

The idea of improving application performance began in the data center. The Internet boom ushered in the era of the server load balancers (SLBs). SLBs balance the load on groups of servers to improve server response to client requests, and have evolved to take on additional responsibilities, such as application proxies and complete Layer 4 through 7 application switching.

Cisco Application Control Engine (Cisco ACE) is the latest SLB offering from Cisco. Its main role is to provide Layer 4 through 7 switching, but Cisco ACE also provides an array of acceleration and server offload benefits, including TCP-processing offload, SSL-processing offload, compression, and asymmetric application acceleration (from server to client browser). The Cisco ACE appliance sits in the data center in front of the application servers and provides a range of services to maximize server and application availability and security. Cisco ACE gives IT departments more control over application and server infrastructure, which enables them to manage and secure application services more easily and improve performance.

Cisco ACE provides the following benefits:

• Scalability—Cisco ACE scales the performance of a server-based program, such as a web server, by distributing its client requests across multiple servers, known as a server farm. As traffic increases, additional servers can be added to the farm. With the advent of server virtualization, application servers can be staged and added dynamically as capacity requirements change.
• High availability—Cisco ACE provides high availability by automatically detecting the failure of a server and repartitioning client traffic among the remaining servers within seconds, while providing users with continuous service.

• Health monitoring—Cisco ACE uses both active and passive techniques to monitor server health. By periodically probing servers and monitoring the return traffic from the real servers, Cisco ACE rapidly detects server failures and quickly reroutes connections to available servers. A variety of health-checking features are supported, including the ability to verify web servers, SSL servers, application servers, databases, FTP servers, and streaming media servers.

• Application acceleration—Cisco ACE improves application performance and reduces response time by minimizing latency and compressing data transfers for any HTTP-based application, for any internal or external end-user, and reduces bandwidth requirements by up to 90% without increasing the number of servers.

• Server offload—Cisco ACE offloads TCP processing, SSL processing, and compression from the server, which allows the server to handle more requests so more users can be served. Running SSL on the web application servers is a tremendous drain on server resources. By offloading SSL processing, those resources can be applied to traditional web-application functions. In addition, because persistence information used by the content switches is inside the HTTP header, this information is no longer visible when carried inside SSL sessions. By terminating these sessions before applying content switching decisions, persistence options become available for secure sites.

• Effective content allocation—Cisco ACE may be used to push requests for cacheable content, such as image files, to a set of caches that can serve them more cost-effectively than the application servers. Cisco ACE can also be used to partition components of a single web application across several application server clusters. For example, the URLs www.mycompany.com/quotes/getquote.jsp and www.mycompany.com/trades/order.jsp could be located on two different server clusters even though the domain name is the same. This partitioning allows the application developer to easily scale the application to several servers without numerous code modifications. Furthermore, it maximizes the cache coherency of the servers by keeping requests for the same pages on the same servers.

• Flexible licensing model—Cisco ACE is available in a number of performance options, from 500 Mbps to 4 Gbps of throughput, depending on which license is purchased. You can purchase a 1 Gbps license for your Cisco ACE appliance and then, as your performance requirements increase, upgrade the same hardware to 4 Gbps with a new license.

Deployment Details

Cisco ACE 4710 hardware is always deployed in pairs for highest availability, with one primary and one secondary appliance. Cisco ACE operates in an active standby mode. If the primary Cisco ACE appliance fails, the secondary appliance takes control. Depending on how session-state redundancy is configured, this failover may take place without disrupting the client-to-server connection.

There are several ways to integrate Cisco ACE into the data center network, and logically the network topology can take many forms. One-armed mode is the simplest deployment method, in which the Cisco ACE is connected off to the side of the layer 2/layer 3 infrastructure. It is not directly in the path of traffic flow and receives only traffic that is specifically intended for it. Traffic that should be directed to it is controlled by careful design of VLANs, virtual server addresses, server default gateway selection, or policy routes on the layer 2/layer 3 switch.

In this design, the Cisco ACE appliance is deployed in front of the application cluster. Requests to the application cluster are directed to a virtual IP address (VIP) configured on the appliance. Cisco ACE receives connections and HTTP requests, and routes them to the appropriate application server based on configured policies.

Physically, all of the links from each Cisco ACE appliance connect to only a single switch. This prevents the scenario in which Cisco ACE is connected to both switches and a switch failure cuts the available bandwidth in half; it also helps maintain performance in a failure scenario. Each Cisco ACE has a port channel that is connected to the switch to scale performance. With two links in the port channel, the appliance has 2 Gbps of available throughput. By using four ports, the Cisco ACE appliance can scale the solution to 4 Gbps.

Process

Configuring Connectivity to the Data Center Core Switches

1. Configure port channels on core switches

Procedure 1  Configure port channels on core switches

The Cisco ACE server load balancers serving applications and servers in the data center will each connect to one of the data center core Cisco Nexus 5500UP switches by using EtherChannel links. The use of EtherChannel links for connectivity to the core provides a resilient connection, load balances traffic over the links, and makes it easier to add bandwidth in the future. Cisco ACE does support EtherChannel but does not support Link Aggregation Control Protocol (LACP); therefore, the channel-group mode will be forced on.

[Figure: Data Center Cisco ACE-A and Cisco ACE-B, each connected by Port Channel 13 to the Cisco Nexus 5500UP Ethernet vPC switch fabric, which connects to the Cisco SBA LAN core.]

The data center core Cisco Nexus 5500UP switches use Virtual Port Channel (vPC) for many dual-homed EtherChannel devices. Because the Cisco ACEs are single-homed to each data center core switch and do not use a vPC for connectivity, but instead use a VLAN that is part of other vPC connections, they are non-vPC ports, also called vPC orphan ports. If the vPC peer link between the data center core switches fails, one of the switches will go into error recovery and shut down interfaces associated with VLANs that are part of vPC connections to prevent any loops in the infrastructure. Use the vpc orphan-port suspend command to shut down the EtherChannel interfaces to the attached Cisco ACE on each switch in the event that the vPC peer link is broken between the data center core switches and a switch goes into error recovery mode. The active Cisco ACE on the switch that remains in service will continue operating and provides the resiliency in the design.

Step 1: Configure physical interfaces to the port channels on Cisco Nexus 5500UP data center core switch-A. Use the speed 1000 command to set the ports connected to Cisco ACE from the default of 10-Gigabit Ethernet to 1-Gigabit Ethernet. You must enter the vpc orphan-port suspend command on all physical interface members of this port channel to ensure consistent and proper operation.

interface Ethernet1/3
  description ACE 1 Gig 1/1
  speed 1000
  vpc orphan-port suspend
  channel-group 13 mode on
  no shutdown
!
interface Ethernet1/4
  description ACE 1 Gig 1/2
  speed 1000
  vpc orphan-port suspend
  channel-group 13 mode on
  no shutdown

Tech Tip
When configuring the interfaces, you must enter the vpc orphan-port suspend command before the channel-group command. If you enter the channel-group command on the interface first, the switch will not let you enter the vpc orphan-port suspend command on the interface.
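After applying the configuration, you can verify the port channel and orphan-port state from either data center core switch with standard NX-OS show commands; the exact output format varies by NX-OS release:

show port-channel summary
show vpc orphan-ports
show interface port-channel 13 trunk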

When you assign the channel group to a physical interface, it creates the logical EtherChannel (port-channel) interface. In the next step, you configure the logical port-channel interfaces on both data center core switches. The physical interfaces tied to the port channel will inherit the settings.

Step 2: Configure the logical port-channel interface. Assign the QoS policy created in Procedure 3, "Configure QoS policies," to the port channel interface.

interface port-channel13
  switchport mode trunk
  switchport trunk allowed vlan 149,912
  spanning-tree port type edge trunk
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS

Step 3: Configure the VLAN for server load balancing operation if it has not already been configured earlier in Procedure 5, "Configure data center core global settings."

vlan 149
  name Servers_2

Step 4: Configure the Layer 3 SVI for VLAN 149 if it has not been configured earlier in Procedure 2, "Configure IP routing for VLANs."

interface Vlan149
  description Servers_2
  no ip redirects
  ip address 10.4.49.2/24
  ip router eigrp 100
  ip passive-interface eigrp 100
  ip pim sparse-mode
  hsrp 149
    priority 110
    ip 10.4.49.1
  no shutdown

Step 5: Configure an unused VLAN for the Cisco ACE fault-tolerant heartbeat VLAN.

vlan 912
  name ACE-Heartbeat

Step 6: Apply the following configuration to Cisco Nexus 5500UP data center core switch-B.

interface Ethernet1/3
  description ACE 2 Gig 1/1
  speed 1000
  vpc orphan-port suspend
  channel-group 13 mode on
  no shutdown
!
interface Ethernet1/4
  description ACE 2 Gig 1/2
  speed 1000
  vpc orphan-port suspend
  channel-group 13 mode on
  no shutdown
!
interface port-channel13
  switchport mode trunk
  switchport trunk allowed vlan 149,912
  spanning-tree port type edge trunk
  service-policy type qos input DC-FCOE+1P4Q_INTERFACE-DSCP-QOS
!
vlan 149
  name Servers_2
!
interface Vlan149
  description Servers_2
  no ip redirects
  ip address 10.4.49.3/24
  ip router eigrp 100
  ip passive-interface eigrp 100
  ip pim sparse-mode
  hsrp 149
    ip 10.4.49.1
  no shutdown
!
vlan 912
  name ACE-Heartbeat

Process

Configuring the Cisco ACE Network

1. Perform initial Cisco ACE setup
2. Configure high availability

Procedure 1  Perform initial Cisco ACE setup

In this procedure you will configure your first Cisco ACE 4710, ACE4710-A, perform the initial configuration, and then configure the second Cisco ACE 4710, ACE4710-B.

Step 1: Connect to Cisco ACE via the console, and then exit from the initial configuration dialog box at the prompt.

switch login: admin
Password: admin
Admin user is allowed to log in only from console until the default password is changed.
www user is allowed to log in only after the default password is changed.
Enter the new password for user "admin": password
Confirm the new password for user "admin": password
admin user password successfully changed.
Enter the new password for user "www": password
Confirm the new password for user "www": password
www user password successfully changed.
<text wall removed>
Would you like to enter the basic configuration dialog (yes/no) [y]: n
switch/Admin#

Step 2: In configuration mode, set the system host name.

hostname ACE4710-A

Step 3: Set up the basic network security policies. This allows for management access into Cisco ACE.

access-list ALL line 8 extended permit ip any any
class-map type management match-any remote_access
  2 match protocol xml-https any
  3 match protocol icmp any
  4 match protocol telnet any
  5 match protocol ssh any
  6 match protocol http any
  7 match protocol https any
  8 match protocol snmp any
policy-map type management first-match remote_mgmt_allow_policy
  class remote_access
    permit

Step 4: Configure port channel and trunking on the Gigabit Ethernet interfaces. This configuration provisions a 2-Gbps port channel and is sufficient for Cisco ACE 4710 with up to a 2-Gbps license. If a 4-Gbps license is being used, include Gigabit Ethernet ports 1/3 and 1/4 for a total of 4 Gbps of throughput.

interface gigabitEthernet 1/1
  channel-group 1
  no shutdown
interface gigabitEthernet 1/2
  channel-group 1
  no shutdown
interface port-channel 1
  switchport trunk native vlan 1
  switchport trunk allowed vlan 149
  no shutdown

Step 5: Configure the VLAN 149 interface on the Cisco ACE for management access and general network connectivity.

interface vlan 149
  ip address 10.4.49.119 255.255.255.0
  access-group input ALL
  service-policy input remote_mgmt_allow_policy
  no shutdown

Step 6: Configure the default route.

ip route 0.0.0.0 0.0.0.0 10.4.49.1

Step 7: Configure NTP.

ntp server 10.4.48.17

Step 8: Configure SNMP.

snmp-server community cisco ro

The Cisco ACE appliance should now be reachable via the network.

Step 9: Repeat Step 1 to Step 8 on the second Cisco ACE appliance, replacing the IP address in Step 5 with 10.4.49.120.

Procedure 2  Configure high availability

Next, you configure the Cisco ACE appliances as an active/standby failover pair. Start with the Cisco ACE appliance that you want to be primary. In this example, the primary is 10.4.49.119. After you configure high availability, the devices will be synchronized, and further configuration is only necessary on the primary Cisco ACE appliance.

Step 1: Open a browser window and enter https://10.4.49.119 into the address field.

Step 2: In the Username box, type admin, and in the Password box, type the password that you configured in Step 1 of Procedure 1, "Perform initial Cisco ACE setup," and then click Log In. The Cisco ACE GUI opens.

Step 3: Navigate to Config > Virtual Contexts > High Availability (HA) > Setup, and then click Edit.

Step 4: On the ACE HA Management dialog box, enter the following values, and then click Deploy Now.

• VLAN—912
• Interface—Port Channel 1
• IP Address—10.255.255.1
• Netmask—255.255.255.0
• IP Address Peer Appliance—10.255.255.2
• Management IP Address—10.4.49.119 (automatically populated)
• Management IP Address Peer Appliance—10.4.49.120

Step 5: On the ACE HA Groups dialog box, click Add.

Step 6: Leave all of the values at their defaults, and then click Deploy Now.

High availability is now configured on the primary Cisco ACE appliance. To configure high availability on the secondary appliance, you must log in to the secondary Cisco ACE appliance, ACE4710-B.

Step 7: Open a browser window and enter https://10.4.49.120 into the address field. The Cisco ACE GUI opens.

Step 8: In the Username box, type admin, and in the Password box, type the password that you configured in Step 1 of Procedure 1, "Perform initial Cisco ACE setup," and then click Log In.

Step 9: Navigate to Config > Virtual Contexts > High Availability (HA) > Setup, and then click Edit.

Step 10: On the ACE HA Management dialog box, enter the following values, and then click Deploy Now.

• VLAN—912
• Interface—Port Channel 1
• IP Address—10.255.255.2
• Netmask—255.255.255.0
• IP Address Peer Appliance—10.255.255.1
• Management IP Address—10.4.49.120 (automatically populated)
• Management IP Address Peer Appliance—10.4.49.119

Step 11: On the ACE HA Groups dialog box, click Add.
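For reference, the high-availability settings entered in the GUI correspond to fault-tolerant (FT) configuration on each appliance roughly like the sketch below. The FT addresses, peer ID, and group ID shown are illustrative assumptions, not values confirmed by this guide; compare against show running-config ft on your appliance:

ft interface vlan 912
  ip address 10.255.255.1 255.255.255.0
  peer ip address 10.255.255.2 255.255.255.0
  no shutdown
ft peer 1
  ft-interface vlan 912
ft group 1
  peer 1
  associate-context Admin
  inservice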

Step 12: Leave all of the values at their defaults, and then click Deploy Now.

The two Cisco ACE appliances should be communicating, and high availability should be up and active. The device you just finished configuring should show a state of "Standby Hot," and the peer should be "Active," as shown in the ACE HA Groups dialog box below.

Make any additional configurations on the primary Cisco ACE appliance. All changes are automatically replicated to the secondary Cisco ACE appliance.

Process

Setting Up Load Balancing for HTTP Servers

1. Configure health probes
2. Configure real servers
3. Configure a server farm
4. Configure Inband-Health checking
5. Configure a NAT pool
6. Configure a virtual server

Procedure 1  Configure health probes

Health probes poll the servers or applications to make sure that the server or service is available and to allow the system to remove failed devices. For this configuration, you will build an Internet Control Message Protocol (ICMP) and an HTTP probe.

Step 1: Open a browser window and enter https://10.4.49.119 into the address field. The Cisco ACE GUI opens.

Step 2: In the Username box, type admin, and in the Password box, type the password you configured in Procedure 1, "Perform initial Cisco ACE setup," and then click Log In.

Step 3: Navigate to Config > Virtual Contexts > Load Balancing > Health Monitoring, and then click Add.

Step 4: On the New Health Monitoring dialog box, in the Name box, enter icmp-probe, and then, in the Type list, choose ICMP.

Step 5: Click Deploy Now.

Step 6: Navigate to Config > Virtual Contexts > Load Balancing > Health Monitoring, and then click Add.

Step 7: On the New Health Monitoring dialog box, in the Name box, enter http-probe, and then, in the Type list, choose HTTP.

Step 8: Click Deploy Now.

Step 9: Click the Expect Status tab, and then click Add.

Step 10: For both the maximum and minimum status codes, enter 200, and then click Deploy Now.

You have now created the ICMP and HTTP probes, which will be used to monitor the real and virtual servers in the load balancing server farm.

Procedure 2  Configure real servers

In this procedure, you add the real servers across which Cisco ACE load balances client connections.

Step 1: Navigate to Config > Virtual Contexts > Load Balancing > Real Servers, and then click Add.

Step 2: On the New Real Server dialog box, enter the values below, and then click Deploy Now.

• Name—webserver1
• IP Address—10.4.49.111
• Probes—icmp-probe

Step 3: Navigate to Config > Virtual Contexts > Load Balancing > Real Servers, and then click Add.

Step 4: On the New Real Server dialog box, enter the values below, and then click Deploy Now.

• Name—webserver2
• IP Address—10.4.49.112
• Probes—icmp-probe

This example uses the ICMP probe to monitor the real servers, thereby ensuring the server is monitored rather than a specific service. This is the most flexible configuration and allows load balancing for multiple services on a single physical or virtual server.

You have just configured the two web servers. If you have additional servers that you plan on using, you can configure them now by repeating Procedure 2, "Configure real servers."

Tech Tip
If your real server has a firewall running on it, make sure that in the firewall rules you permit ICMP from the Cisco ACE appliance to the server so that the probes will work.

Procedure 3  Configure a server farm

A server farm on Cisco ACE is a pool of real servers that you can use to connect to the virtual IP address that the clients will use to connect to the HTTP service.

Step 1: Navigate to Config > Virtual Contexts > Load Balancing > Server Farms, and then click Add.
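As a CLI reference point, the probes and real servers created in the GUI correspond to ACE configuration along these lines. This is a sketch only; probe intervals are left at their defaults, and the object names match this example:

probe icmp icmp-probe
probe http http-probe
  expect status 200 200
rserver host webserver1
  ip address 10.4.49.111
  probe icmp-probe
  inservice
rserver host webserver2
  ip address 10.4.49.112
  probe icmp-probe
  inservice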

Step 2: On the New Server Farm dialog box, enter the values below, and then click Add.

• Name—webfarm
• Probes—http-probe

The http-probe will monitor all of the servers in the server farm to ensure that the HTTP service is available.

Step 3: Click the Real Server tab, and then click Add.

Step 4: On the New Real Server dialog box, next to Name, select webserver1, and in the Port box, enter 80 for HTTP.

Step 5: Click Deploy Now.

Step 6: Click the Real Server tab, and then click Add.

Step 7: On the New Real Server dialog box, next to Name, select webserver2, and in the Port box, enter 80 for HTTP.

Step 8: Click Deploy Now.

Step 9: On the Edit Server Farm dialog box, click Deploy Now.

You have just created the server farm, webfarm, with the real-server members, webserver1 and webserver2, for HTTP on port 80.

Procedure 4  Configure Inband-Health checking

Inband-health checking on Cisco ACE monitors return traffic and looks for failures from the real servers to the clients. It can identify, faster than active probes, when a server is having issues. When a failure is detected, the following modes are available:

• Count—Logs the failures locally on Cisco ACE, allowing you to view server issues from the CLI.
• Log—Triggers a syslog message to be sent to a Network Management System (NMS), as well as keeping the log locally on Cisco ACE.
• Remove—Triggers a log and takes the server out of service.

Because a small amount of errors of this type are normal on servers, and without more information about the server farm, using Remove mode could mean that the threshold would be too low and would take a system out of service unnecessarily, or too high and not take a failing server out of service. In this procedure, Log mode is used; it allows you to see errors and identify which real server is having problems.

Step 1: Navigate to Config > Virtual Contexts > Load Balancing > Server Farms, select webfarm, and then click View/Edit.
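The webfarm server farm built in Procedure 3 corresponds to ACE CLI configuration roughly like the following sketch (names and ports match this example):

serverfarm host webfarm
  probe http-probe
  rserver webserver1 80
    inservice
  rserver webserver2 80
    inservice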

Step 2: On the Edit Server Farm dialog box, enter the values below, and then click Deploy Now.

• Inband-Health Check—Log
• Connection Failure Threshold Count—5
• Reset Timeout (Milliseconds)—500

If five errors occur within a 500-ms period, a syslog message will be sent to the NMS. Servers in the webfarm are now being monitored for TCP errors.

Step 3: At the bottom of the Server Farm dialog box, click the Retcode Map tab, and then click Add.

Step 4: On the New Retcode Map dialog box, enter the values below, and then click Deploy Now.

• Lowest Retcode—404
• Highest Retcode—404
• Type—Log
• Threshold—5
• Reset—10

If a server in the webfarm responds to a client with the HTTP return code 404 five times in 10 seconds, a syslog message will be sent to the NMS.

Step 5: At the bottom of the Server Farm dialog box, click the Retcode Map tab, and then click Add.

Step 6: On the New Retcode Map dialog box, enter the values below, and then click Deploy Now.

• Lowest Retcode—500
• Highest Retcode—505
• Type—Log
• Threshold—5
• Reset—10

If, within a 10-second period, a server in the webfarm responds to a client five times with an HTTP return code in the range of 500 to 505, a syslog message will be sent to the NMS.

If there is not a syslog server available on the network, the inband-health check can be set to use Count mode, and local statistics will be maintained on Cisco ACE and can be checked from the CLI.
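On the CLI, the inband-health and return-code settings attach to the server farm. The sketch below follows the ACE serverfarm submode syntax as I understand it, with thresholds mirroring this example; verify the exact keyword forms against your ACE software release:

serverfarm host webfarm
  inband-health check log 5 reset 500
  retcode 404 404 check log 5 reset 10
  retcode 500 505 check log 5 reset 10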

Step 7: Navigate to Config > Virtual Contexts > System > Syslog, and then select Enable Syslog and select Enable Timestamp.

Step 8: On the Log Host tab, click Add.

Step 9: On the Syslog dialog box, enter 10.4.48.35, and then click Deploy Now.

Now the syslog messages that are triggered by the inband-health checks are sent to the syslog server at 10.4.48.35.

Procedure 5  Configure a NAT pool

Step 1: Navigate to Config > Virtual Contexts > Network > NAT Pools, and then click Add.

Step 2: On the New NAT Pool dialog box, enter the following values, and then click Deploy Now.

• Start IP Address—10.4.49.99
• End IP Address—10.4.49.99
• Netmask—255.255.255.0
• VLAN—149

Procedure 6  Configure a virtual server

Step 1: Navigate to Config > Virtual Contexts > Load Balancing > Virtual Servers, and then click Add.

Step 2: On the Properties dialog box, enter the following values.

• Virtual Server Name—http-vip
• Virtual IP Address—10.4.49.100

Step 3: On the Default L7 Load-Balancing Action dialog box, in the Server Farm list, choose webfarm, and then select Deflate.

Step 4: On the NAT dialog box, click Add, click OK, and then click Deploy Now.

Clients going to the virtual IP 10.4.49.100 on port 80 will be load balanced across the real servers webserver1 and webserver2 in the server farm webfarm.

Process

Load Balancing and SSL Offloading for HTTPS Servers

1. Configure real servers
2. Configure a server farm
3. Configure SSL proxy service
4. Configure HTTP-cookie sticky service
5. Configure a virtual server
6. Configure an HTTP-to-HTTPS Redirect

You can configure a group of servers for load balancing in which the Cisco ACE appliance performs all of the SSL processing, thereby offloading it from the servers.

Procedure 1  Configure real servers

In this procedure, you add the real servers across which Cisco ACE load balances client SSL connections.

Step 1: Open a browser window and enter https://10.4.49.119 into the address field. The Cisco ACE GUI opens.

Step 2: In the Username box, type admin, and in the Password box, type the password you configured in Step 1 of Procedure 1, "Perform initial Cisco ACE setup," and then click Log In.

Step 3: Navigate to Config > Virtual Contexts > Load Balancing > Real Servers, and then click Add.
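Behind the GUI, a virtual server and its NAT pool are expressed as class maps and policy maps on the ACE. The following is a minimal CLI sketch of the HTTP virtual server in this example; the class and policy names are illustrative assumptions, not names taken from the guide:

class-map match-all HTTP-VIP
  2 match virtual-address 10.4.49.100 tcp eq www
policy-map type loadbalance first-match HTTP-VIP-l7slb
  class class-default
    serverfarm webfarm
policy-map multi-match SERVERS-POLICY
  class HTTP-VIP
    loadbalance vip inservice
    loadbalance policy HTTP-VIP-l7slb
    nat dynamic 1 vlan 149
interface vlan 149
  nat-pool 1 10.4.49.99 10.4.49.99 netmask 255.255.255.0 pat
  service-policy input SERVERS-POLICY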

Step 4: On the New Real Server dialog box, enter the values below, and then click Deploy Now.
• Name—webserver3
• IP Address—10.4.49.113
• Probes—icmp-probe

In this example, the ICMP-probe monitors the real servers, rather than a specific service, thereby ensuring that the server is monitored. This is the most flexible configuration and allows load-balancing for multiple services on a single physical or virtual server.

Step 5: Navigate to Config > Virtual Contexts > Load Balancing > Real Servers, and then click Add.

Step 6: On the New Real Server dialog box, enter the values below, and then click Deploy Now.
• Name—webserver4
• IP Address—10.4.49.114
• Probes—icmp-probe

Step 7: If you have additional servers that you plan on using, you can configure them now by repeating this procedure. You have just configured the two web servers.

Procedure 2  Configure a server farm

A server farm is a pool of real servers that you can use to connect to the VIP-address that the clients will use to connect to the HTTP service.

Step 1: Navigate to Config > Virtual Contexts > Load Balancing > Server Farms, and then click Add.

Step 2: On the New Server Farm dialog box, enter the values below.
• Name—appfarm
• Probes—http-probe

Step 3: On the Real Server tab, click Add.

Step 4: On the New Real Server dialog box, in the Name list, choose webserver3, and then in the Port box, enter 80 for HTTP.

Step 5: Click Deploy Now. This saves your changes.

Step 6: Click Deploy Now for the newly created server farm.
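On the CLI, the real servers and server farm created in these procedures would look roughly like the following once both web servers have been added to the farm. This is a sketch: it assumes probes named icmp-probe and http-probe already exist in the context, just as the GUI procedure does.

```
! Real servers, each health-checked with the ICMP probe
rserver host webserver3
  ip address 10.4.49.113
  probe icmp-probe
  inservice

rserver host webserver4
  ip address 10.4.49.114
  probe icmp-probe
  inservice

! Server farm grouping the real servers for HTTP on port 80,
! with the HTTP probe checking service availability
serverfarm host appfarm
  probe http-probe
  rserver webserver3 80
    inservice
  rserver webserver4 80
    inservice
```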

Step 7: On the Real Server tab, click Add.

Step 8: On the New Real Server dialog box, in the Name list, choose webserver4, and then in the Port box, enter 80.

Step 9: Click Deploy Now.

You have just created the server farm, appfarm, with the real-server members, webserver3 and webserver4, for HTTP on port 80. The http-probe will monitor all of the servers in the server farm to ensure that the HTTP service is available. The Cisco ACE appliance will perform all of the SSL-processing, so even though clients will access the application on these servers via HTTPS, the traffic from Cisco ACE to the servers will happen over port 80.

Procedure 3  Configure SSL proxy service

In order for Cisco ACE to offload the SSL processing, you need to configure an SSL proxy service. In this guide, the Cisco sample key and certificate is used. However, in a production deployment, you would most likely purchase a certificate from a trusted certificate authority (CA).

Step 1: Navigate to Config > Virtual Contexts > SSL > Proxy Service, and then click Add.

Step 2: On the New Proxy Service dialog box, in the Name box, enter app-ssl-proxy.

Step 3: Select both cisco-sample-key and cisco-sample-cert, and then click Deploy Now.

Procedure 4  Configure HTTP-cookie sticky service

The HTTP cookie sticky service keeps traffic from a client "stuck" to a single real server. This is useful for applications where state could be lost if the client connection was balanced across several servers.

Step 1: Navigate to Config > Virtual Contexts > Load Balancing > Stickiness, and then click Add.

Step 2: On the New Sticky Group dialog box, in the Group Name box, enter app-sticky.

Step 3: In the Type list, choose HTTP Cookie, and in the Cookie Name box, enter APPSESSIONID.

Step 4: Select both Enable Insert and Browser Expire.

Step 5: Next to Sticky Server Farm, select appfarm, and then click Deploy Now.

Procedure 5  Configure a virtual server

Step 1: Navigate to Config > Virtual Contexts > Load Balancing > Virtual Servers, and then click Add.
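The SSL proxy service and cookie-based stickiness configured in Procedures 3 and 4 have direct CLI equivalents, sketched below with the same names and values as the GUI steps. One caveat worth noting: on the appliance, sticky groups only take effect when sticky resources have been allocated to the virtual context through a resource class.

```
! SSL proxy service using the built-in sample key and certificate
ssl-proxy service app-ssl-proxy
  key cisco-sample-key
  cert cisco-sample-cert

! HTTP-cookie sticky group: ACE inserts cookie APPSESSIONID, and
! browser-expire makes it a session (non-persistent) cookie
sticky http-cookie APPSESSIONID app-sticky
  cookie insert browser-expire
  serverfarm appfarm
```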

Step 2: On the Properties dialog box, enter the following values:
• Virtual Server Name—https-vip
• Virtual IP Address—10.4.49.101
• Application Protocol—HTTPS
• VLAN—149

Step 3: On the SSL Termination dialog box, in the Proxy Service Name list, choose app-ssl-proxy.

Step 4: On the Default L7 Load-Balancing Action dialog box, in the Primary Action list, choose Sticky.

Step 5: In the Sticky Group list, choose app-sticky (HTTP Cookie).

Step 6: On the NAT dialog box, click Add, and then select Deflate. Click OK, and then click Deploy Now.

Clients going to the virtual IP, 10.4.49.101, on port 443 will be load-balanced across the real-servers, webserver3 and webserver4, in the server farm, appfarm. Cisco ACE will terminate the SSL session and load-balance the connections to the real-servers over standard HTTP on TCP port 80.

Procedure 6  Configure an HTTP-to-HTTPS Redirect (Optional)

It is often preferable to have HTTP traffic redirected to HTTPS to ensure that connections to that service are encrypted. By following this procedure, you can create a service that redirects any HTTP traffic directed to 10.4.49.101 to the HTTPS service configured above.

Step 1: Navigate to Config > Virtual Contexts > Load Balancing > Real Servers, and then click Add.

Step 2: On the New Real Server dialog box, enter the values below, and then click Deploy Now.
• Name—redirect1
• Type—Redirect
• Web Host Redirection—https://%h%p
• Redirection Code—302
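In CLI terms, the https-vip virtual server combines the pieces built so far: a class-map for the VIP on port 443, a load-balancing policy that uses the sticky group, and the SSL proxy applied in a multi-match policy on the client VLAN. The sketch below uses invented map names and assumes the service policy is applied on VLAN interface 149, matching the GUI values; it is not the guide's literal configuration.

```
! Classify client traffic destined to the VIP on TCP port 443
class-map match-all HTTPS-VIP-CLASS
  2 match virtual-address 10.4.49.101 tcp eq 443

! Load-balance matching connections through the sticky group,
! which in turn references server farm appfarm
policy-map type loadbalance first-match HTTPS-VIP-POLICY
  class class-default
    sticky-serverfarm app-sticky

! Tie the VIP, load-balancing policy, and SSL termination together
policy-map multi-match CLIENT-VIPS
  class HTTPS-VIP-CLASS
    loadbalance vip inservice
    loadbalance policy HTTPS-VIP-POLICY
    ssl-proxy server app-ssl-proxy

interface vlan 149
  service-policy input CLIENT-VIPS
```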

Step 3: Navigate to Config > Virtual Contexts > Load Balancing > Server Farms, and then click Add.

Step 4: On the New Server Farm dialog box, enter the values below.
• Name—http-redirect
• Type—Redirect

Step 5: Click the Real Server tab, and then click Add.

Step 6: On the New Real Server dialog box, select redirect1, and then click Deploy Now.

Step 7: On the Edit Server Farm dialog box, click Deploy Now.

Step 8: Navigate to Config > Virtual Contexts > Load Balancing > Virtual Servers, and then click Add.

Step 9: On the Properties dialog box, enter the following values:
• Virtual Server Name—http-vip-redirect
• Virtual IP Address—10.4.49.101
• VLAN—149

Step 10: On the Default L7 Load-Balancing Action dialog box, in the Server Farm list, choose http-redirect, and then click Deploy Now.
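The redirect service built in this procedure maps to a redirect-type rserver and server farm on the CLI. The sketch below uses invented class-map and policy-map names; %h and %p expand to the requested host and path, and 302 is the HTTP redirection code entered in the GUI.

```
! Redirect rserver: answers HTTP requests with a 302 pointing at HTTPS
rserver redirect redirect1
  webhost-redirection https://%h%p 302
  inservice

! Redirect-type server farm containing the redirect rserver
serverfarm redirect http-redirect
  rserver redirect1
    inservice

! Classify plain HTTP (port 80) traffic to the same VIP and
! hand it to the redirect farm
class-map match-all HTTP-REDIRECT-CLASS
  2 match virtual-address 10.4.49.101 tcp eq 80

policy-map type loadbalance first-match HTTP-REDIRECT-POLICY
  class class-default
    serverfarm http-redirect
```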

Appendix A: Product List

Data Center Core

Core Switch (Software: NX-OS 5.2(1)N1(1b) and Layer 3 License)
• Cisco Nexus 5596 up to 96-port 10GbE, FCoE, and Fibre Channel SFP+ (N5K-C5596UP-FA)
• Cisco Nexus 5596 Layer 3 Switching Module (N55-M160L30V2)
• Cisco Nexus 5548 up to 48-port 10GbE, FCoE, and Fibre Channel SFP+ (N5K-C5548UP-FA)
• Cisco Nexus 5548 Layer 3 Switching Module (N55-D160L3)

Ethernet Extension (Software: —)
• Cisco Nexus 2000 Series 48 Ethernet 100/1000BASE-T (enhanced) Fabric Extender (N2K-C2248TP-E)
• Cisco Nexus 2000 Series 48 Ethernet 100/1000BASE-T Fabric Extender (N2K-C2248TP-1GE)
• Cisco Nexus 2000 Series 32 1/10 GbE SFP+, FCoE capable Fabric Extender (N2K-C2232PP-10GE)

Data Center Services

Application Resiliency (Software: A5(1.2))
• Cisco ACE 4710 Application Control Engine 2Gbps (ACE-4710-02-K9)
• Cisco ACE 4710 Application Control Engine 1Gbps (ACE-4710-01-K9)
• Cisco ACE 4710 Application Control Engine 1Gbps 2-Pack (ACE-4710-2PAK)
• Cisco ACE 4710 Application Control Engine 500 Mbps (ACE-4710-0.5-K9)

Firewall (Software: ASA 9.0(1), IPS 7.1(6) E4)
• Cisco ASA 5585-X Security Plus IPS Edition SSP-40 and IPS SSP-40 bundle (ASA5585-S40P40-K9)
• Cisco ASA 5585-X Security Plus IPS Edition SSP-20 and IPS SSP-20 bundle (ASA5585-S20P20X-K9)
• Cisco ASA 5585-X Security Plus IPS Edition SSP-10 and IPS SSP-10 bundle (ASA5585-S10P10XK9)

Storage Network Extension

Fibre-channel Switch (Software: NX-OS 5.0(8))
• Cisco MDS 9148 Multilayer Fibre Channel Switch (DS-C9148D-8G16P-K9)
• Cisco MDS 9124 Multilayer Fibre Channel Switch (DS-C9124-K9)

Computing Resources

UCS Fabric Interconnect (Software: 2.1(1a) Cisco UCS Release)
• Cisco UCS up to 48-port Fabric Interconnect (UCS-FI-6248UP)
• Cisco UCS up to 96-port Fabric Interconnect (UCS-FI-6296UP)

UCS B-Series Blade Servers (Software: 2.1(1a) Cisco UCS Release)
• Cisco UCS Blade Server Chassis (N20-C6508)
• Cisco UCS 8-port 10GbE Fabric Extender (UCS-IOM2208XP)
• Cisco UCS 4-port 10GbE Fabric Extender (UCS-IOM2204XP)
• Cisco UCS B200 M3 Blade Server (UCSB-B200-M3)
• Cisco UCS B200 M2 Blade Server (N20-B6625-1)
• Cisco UCS B250 M2 Blade Server (N20-B6625-2)
• Cisco UCS 1280 Virtual Interface Card (UCS-VIC-M82-8P)
• Cisco UCS M81KR Virtual Interface Card (N20-AC0002)

UCS C-Series Rack-mount Servers (Software: 1.4.6 Cisco UCS CIMC Release)
• Cisco UCS C220 M3 Rack Mount Server (UCSC-C220-M3S)
• Cisco UCS C240 M3 Rack Mount Server (UCSC-C240-M3S)
• Cisco UCS C200 M2 Rack Mount Server (R200-1120402W)
• Cisco UCS C210 M2 Rack Mount Server (R210-2121605W)
• Cisco UCS C250 M2 Rack Mount Server (R250-2480805W)
• Cisco UCS 1225 Virtual Interface Card Dual Port 10Gb SFP+ (UCSC-PCIE-CSC-02)
• Cisco UCS P81E Virtual Interface Card Dual Port 10Gb SFP+ (N2XX-ACPCI01)

Appendix B: Changes

This appendix summarizes the changes to this guide since the previous Cisco SBA series.
• We updated the Cisco NX-OS software version for the Cisco Nexus 5500 data center core switches.
• We added vPC object-tracking as an option to add resilience to the virtual port channel domain by tracking critical interfaces.
• We added vPC peer-switch to the data-center core Ethernet as a spanning-tree option to reduce topology change impact when Layer-2 switches are connected to the data-center core.
• We updated the software for the Cisco ASA firewalls and IPS devices.
• We updated the firewall security policy procedures to use Cisco ASDM in order to create access control lists versus CLI for ease of use.
• We made minor changes and updates to improve the readability of this guide.

Feedback

Please use the feedback form to send comments and suggestions about this guide.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

Any Internet Protocol (IP) addresses used in this document are not intended to be actual addresses. Any examples, command display output, and figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses in illustrative content is unintentional and coincidental.

© 2013 Cisco Systems, Inc. All rights reserved. Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

SMART BUSINESS ARCHITECTURE

Americas Headquarters: Cisco Systems, Inc., San Jose, CA
Asia Pacific Headquarters: Cisco Systems (USA) Pte. Ltd., Singapore
Europe Headquarters: Cisco Systems International BV, Amsterdam, The Netherlands

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices.

B-0000515-1 1/13