
Putting SDN to Work with an Application Centric Infrastructure

In a recent blog post we outlined some of the benefits and challenges of adopting Software Defined Networking (SDN), including the need to abstract network components to enable a new kind of network infrastructure. To give you a better understanding of what it takes to implement SDN, we wanted to delve a little deeper into how the hardware and software work together to enable software-defined integration, so we sat down with Brian Dooley, OneNeck Data Center Architect, CCIE and an expert in ACI and numerous other networking technologies.

As an expert with years of experience in networking technologies, how do you view the SDN landscape and current adoption?

The IT world seems to want to embrace SDN just to keep pace with the industry, or at least become more educated about it, and Software Defined Networking is being presented in a number of formats, by just as many vendors. Some solutions suggest SDN is merely moving switching and routing off dedicated hardware and into servers. Whether you move your infrastructure into servers to support a virtualized solution or deploy a new hardware solution to move towards SDN, there are points of commonality across each environment.

The real goal of SDN is to get traditional network engineers to begin thinking like software developers. If you have never studied software development, this may seem a bit daunting to begin with. There is no expectation for these engineers to become developers themselves. While you are welcome to review the SWEBOK, your time would be better spent reviewing how systems communicate with each other in your existing data center. The primary idea is to streamline the development-to-operations transition so networking is not seen as a bottleneck to deployment.

So, how does SDN compare to traditional networking environments?

Traditional environments are segmented by VLANs. A single subnet of IP addresses is mapped to a single VLAN. VLAN "A" may or may not communicate with VLAN "B", yet there is not necessarily consistency among all entities in how they are treated with regard to this policy. While this segmentation has worked for us for years, the expectation today is to speed up implementation and to adjust the segmentation to match the solutions being developed. Historically, applications have been deployed by fitting them into the infrastructure the network team dictates. Existing constructs have become overloaded, making them less agile and unwieldy for the needs of developers.

In an effort to get away from these constructs, the new approach is to treat each application as a unique entity. To accomplish this, we must set aside how we perform networking today and consider instead how an application communicates with its various components. A typical three-tier application may be made up of web, application, and database tiers. Each of these can be seen as a separate component that requires unique treatment. If we break out the web tier and identify what makes it unique or different, we can then identify other applications that have components with the same requirements. By grouping resources that require the same policy treatment, we begin to look at an application through a developer's eyes. So if there are separate containers for each of the three components (Web/App/DB), we can then connect them to each other in a controlled, deterministic manner.

So, let’s talk specifically about Cisco’s Application Centric Infrastructure. What does it entail?

Cisco is providing us with a hardware-based solution for network infrastructure that changes the way we do networking. ACI operates as an overlay on a Clos fabric (see Charles Clos) and administers the network through policy. By pushing policy through an API to the Fabric to deploy infrastructure, you can optimize and automate the deployment process.

ACI uses Nexus 9000 series switching hardware, which is broken up into Spine switches and Leaf switches. As part of an ACI Fabric, an engineer does not directly configure these switches. The Fabric is configured, administered, and controlled by the Application Policy Infrastructure Controller (APIC), which runs on "specialized" Cisco C-series servers. The APIC is actually a cluster of three servers that share Fabric state information for high availability. The APIC is needed for changes to the Fabric, but it does not participate in forwarding traffic. The Fabric can continue to operate in the absence of the controller (i.e., an APIC failure), but no changes are possible without it.

An ACI Fabric is cabled up with each Leaf connecting to every Spine. The APIC(s) are then connected to a Leaf (or multiple Leafs). After base provisioning of the APIC through console connectivity, a browser is used to connect to the APIC API, and the APIC is used to automatically discover the Fabric hardware through LLDP. Once Fabric discovery is complete, we move on to developing policies to support the applications and their respective connectivity requirements.
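To verify what discovery found, you can query fabric membership through the same API. A small sketch, reusing the authenticated session from the earlier example (fabricNode is the APIC class that tracks discovered switches and controllers):

    # List every node that LLDP discovery registered in the Fabric
    resp = session.get(f"{APIC}/api/node/class/fabricNode.json", verify=False)
    for obj in resp.json()["imdata"]:
        attrs = obj["fabricNode"]["attributes"]
        print(attrs["id"], attrs["name"], attrs["role"])  # role: spine, leaf or controller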

There are other aspects of ACI that I won't go into now. Service Graphs, VMM integration, and Layer 2 and Layer 3 external integration all require separate conversations due to the depth needed for each subject.

Well, let’s talk policy. Policies are a huge part of what makes ACI work. Can you explain how?

The ACI environment is built entirely on layers of policies. Creating policies allows them to be reused when building solutions for application owners. There are many types of policies used in building an ACI application solution. The primary goal is to produce what is known as an Application Network Profile (ANP).

  • Application Network Profiles (ANPs) are groups of EPGs and the policies that define the communication between them (a payload sketch follows this list).
  • End Point Groups (EPGs) are containers for objects (End Points) requiring the same policy treatment, e.g. application tiers or services.
  • End Points are entities that connect (attach) to the Fabric directly or indirectly.
  • Tenants are logical separators for a customer, business unit, group, etc., separating traffic, administration, visibility, and so on.
  • Private Networks (L3) are equivalent to VRFs; they separate routing instances and can be used as an administrative separation.
  • Bridge Domains (BDs) are NOT VLANs; a BD is simply a container for subnets and CAN be used to define a Layer 2 boundary.
  • Contracts represent policies between EPGs. Contracts are "provided" by one EPG and "consumed" by another.
  • Fabric Policies configure the interfaces that connect spine and leaf switches.
  • Access Policies configure external-facing interfaces that do not connect to a spine switch. Access Policy components include the Switch Profile, Interface Profile, Interface Policy Group, and Interface Policies.
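To show how these pieces nest, here is a hedged sketch of an ANP payload: two EPGs, each bound to a Bridge Domain, wired together by a contract that one provides and the other consumes. All names are illustrative, and the tenant, Bridge Domains, and contract are assumed to already exist (see the later examples); the authenticated session comes from the first sketch.

    # An Application Network Profile with two EPGs joined by a contract
    anp = {"fvAp": {
        "attributes": {"name": "DemoANP"},
        "children": [
            {"fvAEPg": {"attributes": {"name": "Web"}, "children": [
                {"fvRsBd": {"attributes": {"tnFvBDName": "WebBD"}}},
                # Web consumes the web-to-app contract...
                {"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}},
            ]}},
            {"fvAEPg": {"attributes": {"name": "App"}, "children": [
                {"fvRsBd": {"attributes": {"tnFvBDName": "AppBD"}}},
                # ...and App provides it
                {"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}},
            ]}},
        ],
    }}
    session.post(f"{APIC}/api/mo/uni/tn-DemoTenant.json", json=anp, verify=False)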

Combine all of the above pieces with an EPG and a VLAN Pool, and associate them with a Domain.

VLAN Pools are lists of VLANs. In ACI the VLAN tag is used purely for classification purposes.

A Domain is a VLAN namespace (a valid VLAN range) and can be physical, bridged external, routed external, or virtual (VMM).

An Attachable Entity Profile (AEP) represents a group of external entities with similar infrastructure policy requirements. AEPs are required to enable VLANs on the leaf; without VLAN pool deployment through an AEP, a VLAN is not enabled on the leaf port even if an EPG is provisioned.

Once an EPG has been associated with switches and their respective interfaces using the appropriate policies, the devices connected to those interfaces may operate within the EPG.
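As a rough sketch of that access-policy chain, the fragment below creates a static VLAN Pool, a physical Domain that references it, and an AEP that references the Domain. The Interface Policy Group, Interface Profile, and Switch Profile that complete the chain are omitted for brevity, and all names are placeholders:

    # VLAN pool: a static range of encap VLANs (used purely for classification)
    vlan_pool = {"fvnsVlanInstP": {
        "attributes": {"name": "DemoPool", "allocMode": "static"},
        "children": [{"fvnsEncapBlk": {
            "attributes": {"from": "vlan-100", "to": "vlan-199"}}}]}}
    session.post(f"{APIC}/api/mo/uni/infra.json", json=vlan_pool, verify=False)

    # Physical domain tied to the pool
    domain = {"physDomP": {
        "attributes": {"name": "DemoPhysDom"},
        "children": [{"infraRsVlanNs": {
            "attributes": {"tDn": "uni/infra/vlanns-[DemoPool]-static"}}}]}}
    session.post(f"{APIC}/api/mo/uni.json", json=domain, verify=False)

    # AEP tied to the domain; attaching the AEP to an Interface Policy Group
    # is what ultimately enables the pooled VLANs on leaf ports
    aep = {"infraAttEntityP": {
        "attributes": {"name": "DemoAEP"},
        "children": [{"infraRsDomP": {
            "attributes": {"tDn": "uni/phys-DemoPhysDom"}}}]}}
    session.post(f"{APIC}/api/mo/uni/infra.json", json=aep, verify=False)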

So, that’s a lot of information, but once all of that is defined, what do you do with it to put it to work?

Once the Fabric is up and running, the first step is to build a Tenant. Within a Tenant we define a Private Network, which effectively creates a separate Layer 3 network environment in the Fabric.

Within the Private Network we define Bridge Domains to later be used in defining EPGs. Using the earlier example of a multi-tier application, we create an EPG, each with a unique Bridge Domain, for Users (External), Web, App, and DB. In order for these EPGs to communicate with one another, we define Contracts and Provider/Consumer relationships between them. The EPG that has the resources is the Provider; the EPG that requires the resources is the Consumer.
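Expressed against the API, that build-out might look like the following nested payload: a Tenant containing a Private Network (fvCtx) and one of the Bridge Domains, bound to the VRF and given a gateway subnet. This is a sketch only; names and addressing are illustrative.

    # Tenant -> Private Network (VRF) -> Bridge Domain, as one nested payload
    tenant = {"fvTenant": {
        "attributes": {"name": "DemoTenant"},
        "children": [
            {"fvCtx": {"attributes": {"name": "DemoVRF"}}},  # the Private Network
            {"fvBD": {"attributes": {"name": "WebBD"}, "children": [
                # Bind the BD to the VRF and give it a gateway subnet
                {"fvRsCtx": {"attributes": {"tnFvCtxName": "DemoVRF"}}},
                {"fvSubnet": {"attributes": {"ip": "10.1.1.1/24"}}},
            ]}},
        ],
    }}
    session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)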

Within a Contract, specific information known as Filters is defined, which can restrict access between EPGs to specific TCP port (Layer 4) information. If further Layer 4-7 services are needed, they can be integrated into Contracts through Service Graphs. Service Graphs provide connectivity to external resources such as firewalls and server load balancers.
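Here is a hedged sketch of a Contract with a single Filter entry, permitting only TCP/443 from Consumer to Provider (the names and port are illustrative, and the session is the one from the first sketch):

    # Filter: match TCP traffic to destination port 443
    filt = {"vzFilter": {
        "attributes": {"name": "allow-https"},
        "children": [{"vzEntry": {"attributes": {
            "name": "https", "etherT": "ip", "prot": "tcp",
            "dFromPort": "443", "dToPort": "443"}}}]}}

    # Contract: a subject that references the filter above
    contract = {"vzBrCP": {
        "attributes": {"name": "web-to-app"},
        "children": [{"vzSubj": {
            "attributes": {"name": "https"},
            "children": [{"vzRsSubjFiltAtt": {
                "attributes": {"tnVzFilterName": "allow-https"}}}]}}]}}

    base = f"{APIC}/api/mo/uni/tn-DemoTenant.json"
    session.post(base, json=filt, verify=False)
    session.post(base, json=contract, verify=False)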

So, how does one go about connecting the Fabric into the virtual network?

In order to administer virtual environments through policy, there must be a way to tie a virtual network into the Fabric. From a VMware virtual network perspective, we are familiar with associating VMs with individual portgroups to establish connectivity to the physical network. When integrating with ACI, this does not change from the virtual perspective; we still place VMs in specific portgroups. The difference is where the portgroups come from. Once an ACI Fabric ties into vCenter, a vDS is created dynamically within vCenter. EPGs are then pushed to the vDS, where they become portgroups. This allows for control of virtual networking through policy.
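Assuming a VMM domain for vCenter has already been created, associating an EPG with it, which is what pushes the EPG into vCenter as a portgroup, might look like this sketch (domain, tenant, and EPG names are placeholders):

    # Attach the Web EPG to a pre-existing VMware VMM domain; the APIC then
    # pushes the EPG into vCenter as a portgroup on the dynamically created vDS
    dom_attach = {"fvRsDomAtt": {
        "attributes": {"tDn": "uni/vmmp-VMware/dom-DemoVMM"}}}
    session.post(f"{APIC}/api/mo/uni/tn-DemoTenant/ap-DemoANP/epg-Web.json",
                 json=dom_attach, verify=False)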

Wow – sounds like a powerful way of taking networking to the next level. Thank you for the insight into how ACI works.

Of course. If there’s anything to take away from all of this technical speak, we can see firsthand with our customers how ACI delivers a new way of

networking

that’s faster, more simple and brings a new level of security to the data center. It’s really powerful and here to stay.
