CCDE Study Guide: Enterprise Campus Architecture Design

Date: Nov 23, 2015. This sample chapter is provided courtesy of Cisco Press.
In this chapter from CCDE Study Guide, Marwan Al-shawi discusses issues related to enterprise campus architecture design, including hierarchical design models, modularity, access-distribution design model, layer 3 routing design considerations, EIGRP versus link state as a campus IGP, and enterprise campus network virtualization.

A campus network is generally the portion of the enterprise network infrastructure that provides access to network communication services and resources to end users and devices spread over a single geographic location. It may be a single building or a group of buildings spread over an extended geographic area. Because the enterprise that owns the campus network usually also owns the physical wires deployed in the campus, network designers typically optimize the campus portion of the enterprise network for the fastest functional architecture running on a high-speed physical infrastructure (1/10/40/100 Gbps). Moreover, enterprises can have more than one campus block within the same geographic location, depending on the number of users within the location, business goals, and business nature. When possible, the design of modern converged enterprise campus networks should leverage the following common set of engineering and architectural principles:

  • Hierarchy
  • Modularity
  • Resiliency

Enterprise Campus: Hierarchical Design Models

The hierarchical network design model breaks the complex flat network into multiple smaller and more manageable networks. Each level or tier in the hierarchy is focused on a specific set of roles. This design approach offers network designers a high degree of flexibility to optimize and select the right network hardware, software, and features to perform specific roles for the different network layers.

A typical hierarchical enterprise campus network design includes the following three layers:

  • Core layer: Provides optimal transport between sites and high-performance routing. Due to the criticality of the core layer, its design should provide an appropriate level of resilience, offering the ability to recover quickly and smoothly after any network failure event within the core block.
  • Distribution layer: Provides policy-based connectivity and boundary control between the access and core layers.
  • Access layer: Provides workgroup/user access to the network.

The two primary and common hierarchical design architectures of enterprise campus networks are the three-tier and two-tier models.

Three-Tier Model

This design model, illustrated in Figure 3-1, is typically used in large enterprise campus networks, which are constructed of multiple functional distribution layer blocks.

Figure 3-1 Three-Tier Network Design Model

Two-Tier Model

This design model, illustrated in Figure 3-2, is more suitable for small to medium-size campus networks (ideally with no more than three functional distribution blocks to be interconnected), where the core and distribution functions can be combined into one layer, also known as the collapsed core-distribution architecture.

Figure 3-2 Two-Tier Network Design Model

Enterprise Campus: Modularity

By applying the hierarchical design model across the multiple functional blocks of the enterprise campus network, a more scalable and modular campus architecture (commonly referred to as building blocks) can be achieved. This modular enterprise campus architecture offers a high level of design flexibility that makes it more responsive to evolving business needs. As highlighted earlier in this book, modular design makes the network more scalable and manageable by promoting fault domain isolation and more deterministic traffic patterns. As a result, network changes and upgrades can be performed in a controlled and staged manner, allowing greater stability and flexibility in the maintenance and operation of the campus network. Figure 3-3 depicts a typical campus network along with the different functional modules as part of the modular enterprise architecture design.

Figure 3-3 Typical Modular Enterprise Campus Architecture

When Is the Core Block Required?

A separate core provides the capability to scale the size of the enterprise campus network in a structured fashion that minimizes overall complexity when the size of the network grows (multiple campus distribution blocks) and the number of interconnections tying the multiple enterprise campus functional blocks increases significantly (which typically leads to physical and control plane complexities), as exemplified in Figure 3-4. In other words, not every design requires a separate core.

Figure 3-4 Network Connectivity Without Core Versus With Core

Besides the previously mentioned technical considerations, as a network designer you should always aim to provide a business-driven network design with a future vision, based on the principle "build today with tomorrow in mind." With this principle in mind, one of the primary factors influencing the choice between a two-tier and a three-tier architecture is the type of site or network (remote branch, regional HQ, secondary or main campus), which helps you, to a certain extent, identify the nature of the site and its potential future scale from a network design point of view. For instance, a typical small to medium-size remote site rarely requires a three-tier architecture, even when future growth is considered. In contrast, a regional HQ site or a secondary campus network of an enterprise can have a high potential to grow significantly in size (number of users and number of distribution blocks); therefore, a core layer or three-tier architecture can be a feasible option there.

This is the hypothetical design view; the actual answer must always align with the business goals and plans (for example, whether the enterprise is planning to merge with or acquire any new business), and it can also derive from the projected percentage of yearly organic business growth. Again, as a network designer, you can decide based on the current size and the projected growth, taking into account the type of the targeted site, the business nature and priorities, and design constraints such as cost. For example, if the business priority is to expand without spending extra on additional network hardware platforms (reduced capital expenditure [capex]), cost savings become both a design constraint and a business priority, and the network designer in this type of scenario must find an alternative design solution, such as the collapsed two-tier architecture, even though it might not be the technically optimal solution.

That being said, sometimes (when possible) you need to gain support from the business first to drive the design in the right direction. You can do so by highlighting and explaining to the organization's IT leaders the extra cost and challenges of operating a network that either was not designed optimally with regard to their projected business expansion plans or was designed for yesterday's requirements and cannot adequately handle today's. This may help influence the business decision, because the additional cost of a three-tier architecture can then be justified to the business (long-term operating expenditure [opex] versus short-term capex). In other words, businesses sometimes focus only on a solution's capex without considering that opex will probably cost them more in the long run if the solution is not architected and designed properly to meet their current and future requirements.

Access-Distribution Design Model

Chapter 2, “Enterprise Layer 2 and Layer 3 Design,” discussed different Layer 2 design models that are applicable to the campus LAN design, in particular to the access-distribution layer. Technically, each design model has different design attributes. Therefore, network designers must understand the characteristics of each design model to be able to choose and apply the most feasible model based on the design requirements.

The list that follows describes the three primary and common design models for the access layer to distribution layer connectivity. The main difference between these design models is where the Layer 2 and Layer 3 boundary is placed and how and where Layer 3 gateway services are handled:

  • Classical multitier STP based: This model is the classical or traditional way of connecting access to the distribution layer in the campus network. In this model, the access layer switches usually operate in Layer 2 mode only, and the distribution layer switches operate in Layer 2 and Layer 3 modes. As discussed earlier in this book, the primary limitation of this design model is the reliance on Spanning Tree Protocol (STP) and First Hop Redundancy Protocol (FHRP). For more information, see Chapter 2.
  • Routed access: In this design model, access layer switches act as Layer 3 routing nodes, providing both Layer 2 and Layer 3 forwarding. In other words, the demarcation point between Layer 2 and Layer 3 is moved from the distribution layer to the access layer. Based on that, the Layer 2 trunk links from access to distribution are replaced with Layer 3 point-to-point routed links, as illustrated in Figure 3-5.

    Figure 3-5 Routed Access Layer

    The routed access design model has several advantages compared to the multitier classical STP-based access-distribution design model, including the following:

    • Simpler and easier to troubleshoot: you can use standard routing troubleshooting techniques, and there are fewer protocols to manage and troubleshoot across the network
    • Eliminates the reliance on STP and FHRP, relying instead on the equal-cost multipath (ECMP) capability of the routing protocol to utilize all available uplinks, which can increase overall network performance
    • Minimizes convergence time during a link or node failure
  • Switch clustering: As discussed in Chapter 2, this design model provides the simplest and most flexible design compared to the other models discussed already. As illustrated in Figure 3-6, by introducing the switch clustering concept across the different functional modules of the enterprise campus architecture, network designers can simplify and enhance the design to a large degree. This offers a higher level of node and path resiliency, along with significantly optimized network convergence time.

    Figure 3-6 Switch Clustering Concept

The left side of Figure 3-6 represents the physical connectivity, and the right side shows the logical view of this architecture, which is based on the switch clustering design model across the entire modular campus network.
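To make the routed access model more concrete, the sketch below shows a Cisco IOS-style access switch uplink where the 802.1Q trunk is replaced with a Layer 3 point-to-point routed link and the uplinks are utilized through the IGP's ECMP. This is an illustrative sketch only; interface numbers, addresses, and the choice of OSPF are hypothetical, not a prescribed design.

```
! Access switch uplink to distribution: routed point-to-point link
! (interface and addressing values are illustrative)
interface TenGigabitEthernet1/1
 description Uplink to Distribution-1
 no switchport
 ip address 10.10.1.1 255.255.255.252
!
! The IGP (OSPF shown as one example) provides ECMP across both uplinks
router ospf 1
 router-id 10.0.0.11
 network 10.10.1.0 0.0.0.3 area 10
```

A second, equivalently configured uplink to Distribution-2 would give the access switch two equal-cost routed paths, with no STP blocking and no FHRP at the distribution.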

Table 3-1 compares the different access-distribution connectivity design models from different design angles.

Table 3-1 Comparing Access-Distribution Connectivity Models

| Design Consideration | Multitier STP Based | Routed Access | Switch Clustering |
|---|---|---|---|
| Design flexibility | Limited (topology dependent) | Limited (for example, spanning Layer 2 over different access switches requires an overlay technology) | Flexible |
| Scalability | Supports scale up and limited scale out (topology dependent) | Supports both scale up and scale out | Scale up and limited scale out (typically limited to two distribution switches per cluster) |
| Layer 3 gateway services | Distribution layer (FHRP based) | Access layer (Layer 3 routing based) | Distribution layer (may or may not require FHRP*) |
| Multichassis link aggregation (mLAG) | Not supported | Not supported (relies instead on Layer 3 ECMP) | Supported |
| Access-to-distribution convergence time | Dependent on STP and FHRP timers (relatively slow) | Interior Gateway Protocol (IGP) dependent, commonly fast | Fast |
| Operational complexity | Complex (multiple control protocols to deal with, for example, STP and FHRP) | Moderate (advanced routing design expertise may be required) | Simple |

* Some switch clustering technologies, such as Cisco Nexus vPC, use an FHRP (Hot Standby Router Protocol [HSRP]). However, from a forwarding plane point of view, both upstream switches (vPC peers) forward traffic, unlike the classical FHRP behavior, which is based on active-standby.

Enterprise Campus: Layer 3 Routing Design Considerations

The hierarchical enterprise campus architecture facilitates a more structured, hierarchical Layer 3 routing design, which is the key to achieving routing scalability in large networks. It reduces, to a large extent, the number of Layer 3 nodes and adjacencies in any given routing domain within each tier of the hierarchical enterprise campus network.

In a typical hierarchical enterprise campus network, the distribution block (layer) is considered the demarcation point between the Layer 2 and Layer 3 domains. This is where Layer 3 uplinks participate in the campus core routing, using either an interior gateway protocol (IGP) or Border Gateway Protocol (BGP), which can help interconnect multiple campus distribution blocks for end-to-end IP connectivity.

By contrast, with the routed access design model, Layer 3 routing is extended to the access layer switches. Consequently, the selection of the routing protocol is important for redundant and reliable IP/routing reachability within the campus, considering scalability and the ability of the network to grow with minimal changes and impact to the network and routing design. All the Layer 3 routing design considerations discussed in previous chapters must be taken into account when applying any routing protocol to a campus LAN. Figure 3-7 illustrates a typical ideal routing design that aligns the IGP design (Open Shortest Path First [OSPF]) with the hierarchical enterprise campus architecture and its different functional modules.

Figure 3-7 Campus Network: Layer 3 Routing

Figure 3-8 Campus Network: Layer 3 Design with WAN Core
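One common way to align OSPF with the campus hierarchy described above is to place the core in the backbone area and each distribution block in its own area, with summarization at the distribution switches acting as ABRs. The sketch below is illustrative only; the area numbers and prefixes are hypothetical:

```
! Distribution switch acting as OSPF ABR for its block (values illustrative)
router ospf 1
 router-id 10.0.0.1
 ! Core-facing uplinks live in the backbone area
 network 10.0.1.0 0.0.0.255 area 0
 ! Access-facing links live in the block's own area
 network 10.20.0.0 0.0.255.255 area 20
 ! Advertise a single summary for the whole block toward the core
 area 20 range 10.20.0.0 255.255.0.0
```

Summarizing at the block boundary keeps topology changes inside a distribution block from rippling across the core, which is the fault-isolation behavior the hierarchical design aims for.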

EIGRP Versus Link State as a Campus IGP

As discussed in Chapter 2, each protocol has its own characteristics, especially when applied to different network topologies. For example, Enhanced Interior Gateway Routing Protocol (EIGRP) offers a more flexible, scalable, and easier-to-control design over a hub-and-spoke topology compared to link state. In addition, although EIGRP is considered more flexible on multitiered network topologies such as the three-tier campus architecture, link-state routing protocols have still proven to be powerful, scalable, and reliable in this type of network, especially OSPF, which is one of the most commonly implemented protocols in campus networks. Furthermore, in large-scale campus networks, if EIGRP is not designed properly with regard to information hiding and EIGRP query scope containment (discussed in Chapter 2), any topology change may lead to a large flood of EIGRP queries. The network will also be more prone to EIGRP stuck-in-active (SIA) impacts, such as a longer time to converge following a failure event, because the SIA timer puts an upper boundary on convergence time.
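The query containment techniques mentioned above can be sketched in Cisco IOS-style configuration as follows: the access switches are configured as EIGRP stubs (so queries stop there), and the distribution switches summarize the block's prefixes toward the core (so a failure inside the block does not trigger queries beyond it). The AS number, interfaces, and prefixes are hypothetical:

```
! Access switch: stub routing limits the EIGRP query scope
router eigrp 100
 network 10.20.0.0 0.0.255.255
 eigrp stub connected summary
!
! Distribution switch: summarize the block toward the core to hide detail
interface TenGigabitEthernet1/1
 description Uplink to Core
 ip summary-address eigrp 100 10.20.0.0 255.255.0.0
```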

Consequently, each design has its own requirements, priorities, and constraints; and network designers must evaluate the design scenario and balance between the technical (protocol characteristics) and nontechnical (business priorities, future plans, staff knowledge, and so on) aspects when making design decisions.

Table 3-2 provides a summarized comparison between the two common and primary IGPs (algorithms) used in large-scale hierarchical enterprise campus networks.

Table 3-2 Link State Versus EIGRP in the Campus

| Design Consideration | EIGRP (DUAL) | Link State (Dijkstra) |
|---|---|---|
| Architecture flexibility | High (natively supports multitier architectures with route summarization) | High, with limitations (the more tiers the network has, the less flexible the design can be) |
| Scalability | High | High |
| Convergence time (protocol level)* | Fast (ideally with route summarization) | Fast (ideally with topology hiding, route summarization, and timer tuning) |
| MPLS-TE support | No | Yes |

Enterprise Campus Network Virtualization

Virtualization in IT generally refers to the concept of having two or more instances of a system component or function, such as an operating system, network service, control plane, or application. Typically, these instances are represented in a logical, virtualized manner instead of being physical.

Virtualization can generally be classified into two primary models:

  • Many to one: In this model, multiple physical resources appear as a single logical unit. The classical example of many-to-one virtualization is the switch clustering concept discussed earlier. Firewall clustering, and FHRP with a single virtual IP (VIP) that front-ends a pair of physical upstream network nodes (switches or routers), can be considered other examples of the many-to-one virtualization model.
  • One to many: In this model, a single physical resource can appear as many logical units, such as virtualizing an x86 server, where the software (hypervisor) hosts multiple virtual machines (VMs) to run on the same physical server. The concept of network function virtualization (NFV) can also be considered as a one-to-many system virtualization model.

Drivers to Consider Network Virtualization

To meet the current expectations of business and IT leaders, a more responsive IT infrastructure is required. Therefore, network infrastructures need to move from the classical architecture (based on providing basic interconnectivity between different siloed departments within the enterprise network) to a more flexible, resilient, and adaptive architecture that can support and accelerate business initiatives and remove inefficiencies. The IT and network infrastructure then becomes like a service delivery business unit that can quickly adopt and deliver services; in other words, it becomes a "business enabler." This is why network virtualization is considered one of the primary principles that enables IT infrastructures to become more dynamic and responsive to the new and rapidly changing requirements of today's enterprises.

The following are the primary drivers of modern enterprise networks, which can motivate enterprise businesses to adopt the concept of network virtualization:

  • Cost efficiency and design flexibility: Network virtualization provides a level of abstraction from the physical network infrastructure that can offer cost-effective network designs along with a higher degree of design flexibility, where multiple logical networks can be provisioned over one common physical infrastructure. This ultimately leads to lower capex because of the reduction in device complexity and the number of devices. Similarly, opex will be lower because the operations team will have fewer devices to manage.
  • Support for simplified and flexible integrated security: Network virtualization also promotes flexible security designs by allowing separate security policies per logical or virtualized entity, where user groups and services can be logically separated.
  • Design and operational simplicity: Network virtualization simplifies the design and provision of path and traffic isolation per application, group, service, and various other logical instances that require end-to-end path isolation.

This section covers the primary network virtualization technologies and techniques that you can use to serve different requirements, highlighting the pros and cons of each technology and design approach. This can help network designers (CCDE candidates) to select the most suitable design after identifying and evaluating the different design requirements (business and functional requirements). This section primarily focuses on network virtualization over the enterprise campus network. Chapter 4, "Enterprise Edge Architecture Design," expands on this topic to cover network virtualization design options and considerations over the WAN.

Network Virtualization Design Elements

As illustrated in Figure 3-9, the main elements in an end-to-end network virtualization design are as follows:

  • Edge control: This element represents the network access point. Typically, it is a host or end-user access (wired, wireless, or virtual private network [VPN]) to the network where the identification (authentication) for physical to logical network mapping can occur. For example, a contracting employee might be assigned to VLAN X, whereas internal staff is assigned to VLAN Y.
  • Transport virtualization: This element represents the transport path that will carry different virtualized networks over one common physical infrastructure, such as an overlay technology like a generic routing encapsulation (GRE) tunnel. The terms path isolation and path separation are commonly used to refer to transport virtualization. Therefore, these terms are used interchangeably throughout this book.
  • Services virtualization: This element represents the extension of the network virtualization concept to the services edge, which can be shared services among different logically isolated groups, such as an Internet link or a file server located in the data center that must be accessed by only one logical group (business unit).

Figure 3-9 Network Virtualization Elements
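As one illustration of the edge control element, IEEE 802.1X is a common way to perform the physical-to-logical mapping: the switch authenticates the user against a RADIUS server, which can return the VLAN (and therefore the virtual network) for that user. The IOS-style sketch below is a hypothetical example, not the only way to implement edge control; the interface and AAA details are illustrative:

```
! Edge port: 802.1X authentication; the RADIUS server returns the VLAN
! that maps the user to the correct virtual network (values illustrative)
aaa new-model
aaa authentication dot1x default group radius
dot1x system-auth-control
!
interface GigabitEthernet1/0/10
 switchport mode access
 authentication port-control auto
 dot1x pae authenticator
```

With this approach, a contractor and an internal employee plugging into the same port can land in different VLANs (and VRFs) based on identity rather than physical wiring.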

Enterprise Network Virtualization Deployment Models

Now that you know the different elements that, individually or collectively, can be considered the foundational elements for creating network virtualization within the enterprise network architecture, this section covers how you can use these elements with different design techniques and approaches to deploy network virtualization across the enterprise campus. This section also compares these different design techniques and approaches.

Network virtualization can be categorized into the following three primary models, each of which has different techniques that can serve different requirements:

  • Device virtualization
  • Path isolation
  • Services virtualization

Moreover, you can use the techniques of the different models individually to serve certain requirements, or combine them to achieve one cohesive end-to-end network virtualization solution. Therefore, network designers must have a good understanding of the different techniques and approaches, along with their attributes, to select the most suitable virtualization technologies and design approach for delivering value to the business.

Device Virtualization

Also known as device partitioning, device virtualization represents the ability to virtualize the data plane, control plane, or both in a given network node, such as a switch or a router. Using device-level virtualization by itself helps to achieve separation at Layer 2, Layer 3, or both on a local device level. The following are the primary techniques used to achieve device-level network virtualization:

  • Virtual LAN (VLAN): VLAN is the most common Layer 2 network virtualization technique, used in almost every network, where a single switch can be divided into multiple logical Layer 2 broadcast domains that are virtually separated from other VLANs. You can use VLANs at the network edge to place an endpoint into a certain virtual network. Each VLAN has its own MAC forwarding table and spanning-tree instance (Per-VLAN Spanning Tree [PVST]).
  • Virtual routing and forwarding (VRF): VRFs are conceptually similar to VLANs, but operate at the control plane and forwarding level of a Layer 3 device. VRFs can be combined with VLANs to provide a virtualized Layer 3 gateway service per VLAN. As illustrated in Figure 3-10, each VLAN over an 802.1Q trunk can be mapped to a different subinterface that is assigned to a unique VRF, where each VRF maintains its own forwarding and routing instance and potentially leverages different VRF-aware routing protocols (for example, an OSPF or EIGRP instance per VRF).

    Figure 3-10 Virtual Routing and Forwarding
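The VLAN-to-VRF mapping in Figure 3-10 can be sketched in Cisco IOS-style configuration as follows. The VRF names, VLAN IDs, and addresses are hypothetical; the point is that each 802.1Q subinterface binds one VLAN to one VRF, giving each tenant its own Layer 3 gateway and routing table:

```
! Two VRFs on one Layer 3 device, one per logical group (names illustrative)
ip vrf TENANT-A
 rd 65000:1
ip vrf TENANT-B
 rd 65000:2
!
! Map each VLAN on the 802.1Q trunk to its VRF via a subinterface
interface GigabitEthernet0/1.10
 encapsulation dot1Q 10
 ip vrf forwarding TENANT-A
 ip address 10.1.10.1 255.255.255.0
!
interface GigabitEthernet0/1.20
 encapsulation dot1Q 20
 ip vrf forwarding TENANT-B
 ip address 10.1.20.1 255.255.255.0
```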

Path Isolation

Path isolation refers to the concept of maintaining end-to-end logical path transport separation across the network. The end-to-end path separation can be achieved using the following main design approaches:

  • Hop by hop: This design approach, as illustrated in Figure 3-11, is based on deploying VLANs, 802.1Q trunk links, and VRFs end to end, per device in the traffic path. It offers a simple and reliable path separation solution. However, for large-scale dynamic networks (with a large number of virtualized networks), it becomes a complicated solution to manage, and this complexity comes with design scalability limitations.

    Figure 3-11 Hop-by-Hop Path Virtualization

  • Multihop: This approach is based on using tunneling and other overlay technologies to provide end-to-end path isolation and carry the virtualized traffic across the network. The most common proven methods include the following:

    • Tunneling: Tunneling, such as GRE or multipoint GRE (mGRE) (dynamic multipoint VPN [DMVPN]), eliminates the reliance on deploying end-to-end VRFs and 802.1Q trunks across the enterprise network, because the virtualized traffic is carried over the tunnel. This method offers a higher level of scalability compared to the previous option, with somewhat simpler operation. This design is ideally suited for scenarios where only part of the network needs path isolation.

      However, for large-scale networks with multiple logical groups or business units to be separated across the enterprise, the tunneling approach can add complexity to the design and operations. For example, if the design requires path isolation for a group of users across two distribution blocks, point-to-point GRE tunneling combined with VRFs can be a good fit, whereas mGRE can provide the same transport and path isolation goal for larger networks with lower design and operational complexity. (See the section "WAN Virtualization" in Chapter 4 for a detailed comparison between the different path separation approaches over different types of tunneling mechanisms.)

    • MPLS VPN: This approach converts the enterprise into a service provider-style network, where the core is Multiprotocol Label Switching (MPLS) enabled and the distribution layer switches act as provider edge (PE) devices. As in service provider networks, each PE (distribution block) exchanges VPN routes over MP-BGP sessions, as shown in Figure 3-12. (The route reflector [RR] concept can be introduced as well, to reduce the complexity of full-mesh MP-BGP peering sessions.)

      Figure 3-12 MPLS VPN-Based Path Virtualization

      Furthermore, L2VPN capabilities, such as Ethernet over MPLS (EoMPLS), can be introduced in this architecture to provide extended Layer 2 communications across different distribution blocks if required. With this design approach, end-to-end virtualization and traffic separation can be simplified to a very large extent, with a high degree of scalability. (All the MPLS design considerations and concepts covered in the Service Provider part of this book, Chapter 5, "Service Provider Network Architecture Design," and Chapter 6, "Service Provider MPLS VPN Services Design," are applicable if this design model is adopted by the enterprise.)
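A minimal sketch of a distribution switch in the PE role might look like the following. The AS number, route targets, neighbor address, and interfaces are hypothetical; in a real design the MP-BGP peer would be the other PE or a route reflector:

```
! Distribution switch acting as PE (all values illustrative)
ip vrf TENANT-A
 rd 65000:1
 route-target export 65000:1
 route-target import 65000:1
!
! Label switching on the core-facing link
interface TenGigabitEthernet1/1
 description Core-facing link
 ip address 10.0.1.1 255.255.255.252
 mpls ip
!
! MP-BGP session carrying VPNv4 routes to the other PE (or an RR)
router bgp 65000
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 update-source Loopback0
 address-family vpnv4
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 send-community extended
```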

Figure 3-13 summarizes the different enterprise campus network virtualization design techniques.

Figure 3-13 Enterprise Campus Network Virtualization Techniques

As mentioned earlier in this section, it is important for network designers to understand the differences between the various network virtualization techniques. Table 3-3 compares these different techniques in a summarized way from different design angles.

Table 3-3 Network Virtualization Techniques Comparison

| Design Consideration | End to End (VLAN + 802.1Q + VRF) | VLANs + VRFs + GRE Tunnels | VLANs + VRFs + mGRE Tunnels | MPLS Core with MP-BGP |
|---|---|---|---|---|
| Scalability | Low | Low | Moderate | High |
| Operational complexity | High | Moderate | Moderate | Moderate to high |
| Design flexibility | Low | Moderate | Moderate | High |
| Architecture | Per-hop end-to-end virtualization | P2P (multihop end-to-end virtualization) | P2MP (multihop end-to-end virtualization) | MPLS-L3VPN-based virtualization |
| Operations staff routing expertise | Basic | Medium | Medium | Advanced |
| Ideal for | Limited NV scope in terms of size and complexity | Interconnecting specific blocks with NV or as an interim solution | Medium to large overlaid NV design | Large to very large (global scale) end-to-end NV design |

Service Virtualization

One of the main goals of virtualization is to separate service access into different logical groups, such as user groups or departments. However, in some scenarios there may be a mix of these services in terms of service access, in which some services must be accessed by only a certain group while others are shared among different groups, such as a file server in the data center or Internet access, as shown in Figure 3-14.

Figure 3-14 End-to-end Path and Services Virtualization

Therefore, in scenarios like this, where service access has to be separated per virtual network or group, the concept of network virtualization must be extended to the services access edge, such as a server with multiple VMs or an Internet edge router with one or multiple Internet links. A common example is a physical firewall partitioned into multiple virtual firewall instances (security contexts), as illustrated in Figure 3-15.

Figure 3-15 Firewall Virtual Instances

Furthermore, in multitenant network environments, multiple security contexts offer a flexible and cost-effective solution for enterprises (and for service providers). This approach enables network operators to partition a single pair of redundant firewalls or a single firewall cluster into multiple virtual firewall instances per business unit or tenant. Each tenant can then deploy and manage its own security policies and service access, which are virtually separated. This approach also allows controlled intertenant communication. For example, in a typical multitenant enterprise campus network environment with MPLS VPN (L3VPN) enabled at the core, traffic between different tenants (VPNs) is normally routed via a firewalling service for security and control (who can access what), as illustrated in Figure 3-16.

Figure 3-16 Intertenant Services Access Traffic Flow

Figure 3-17 zooms in on the firewall services contexts to show a more detailed view (logical/virtualized view) of the traffic flow between the different tenants/VPNs (A and B), where each tenant has its own virtual firewall service instance located at the services block (or at the data center) of the enterprise campus network.

Figure 3-17 Intertenant Services Access Traffic Flow with Virtual Firewall Instances

In addition, the following are the common techniques that facilitate accessing shared applications and network services in multitenant environments:

  • VRF-Aware Network Address Translation (NAT): One of the common requirements in today's multitenant environments with network and service virtualization enabled is to give each virtual (tenant) network the ability to access certain shared services, hosted either on premises (such as in the enterprise data center or services block) or externally (in a public cloud). Providing Internet access to the different tenant (virtual) networks is another common example of today's multitenant network requirements. Because private IP address overlap is a common attribute in this type of environment, NAT is considered one of the common and cost-effective solutions: it provides translation per tenant without compromising the path separation requirements between the different tenant (virtual) networks. When NAT is combined with different virtual network instances (VRFs), it is commonly referred to as VRF-aware NAT, as shown in Figure 3-18.

    Figure 3-18 VRF-Aware NAT

    Figure 3-19 VRF-aware Services Infrastructure
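As a rough illustration of VRF-aware NAT on Cisco IOS, the following sketch translates one tenant's overlapping private range to a shared outside interface. The interface names, VRF name, and addressing are assumptions for illustration only:

```
! Tenant-facing interface placed in the tenant's VRF
interface GigabitEthernet0/1
 vrf forwarding TENANT-A
 ip address 10.1.1.1 255.255.255.0
 ip nat inside

! Shared outside interface toward the Internet/shared services
interface GigabitEthernet0/0
 ip address 203.0.113.1 255.255.255.0
 ip nat outside

! Match the tenant's inside subnet
access-list 10 permit 10.1.1.0 0.0.0.255

! VRF-aware translation: the "vrf TENANT-A" keyword ties the
! NAT rule to that tenant's routing instance, so another tenant
! using the same 10.1.1.0/24 range gets its own separate rule
ip nat inside source list 10 interface GigabitEthernet0/0 vrf TENANT-A overload
```

A parallel rule per tenant VRF then allows every tenant to reach the shared service even when their private address spaces overlap, because translation state is kept per VRF.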

  • Network function virtualization (NFV): The concept of NFV is based on virtualizing network functions that typically require a dedicated physical node, appliances, or interfaces. In other words, NFV can potentially take any network function typically residing in purpose-built hardware and abstract it from that hardware. As depicted in Figure 3-20, this concept offers businesses several benefits, including the following:

    • Reduce the total cost of ownership (TCO) by reducing the required number and diversity of specialized appliances
    • Reduce operational cost (for example, less power and space)
    • Offer a cost-effective capital investment
    • Reduce the level of complexity of integration and network operations
    • Reduce time to market for the business by enabling specialized network services quickly (especially in multitenant environments, where a separate network function/service per tenant can be provisioned faster)

    Figure 3-20 NFV Benefits

This concept helps businesses adopt and deploy new services quickly (faster time to market) and is consequently considered a business innovation enabler. This is simply because purpose-built hardware functionality has now been virtualized: introducing a new service becomes a matter of service enablement rather than deploying new hardware (along with its infrastructure integration complexities).

Summary

The enterprise campus is one of the vital parts of the modular enterprise network. It is the medium that connects the end users and the different types of endpoints, such as printers, video endpoints, and wireless access points, to the enterprise network. Therefore, having the right structure and design layout that meets current and future requirements is critical, including the physical infrastructure layout and the Layer 2 and Layer 3 designs. To achieve a scalable and flexible campus design, you should ideally base it on hierarchical and modular design principles that optimize the overall design architecture in terms of fault isolation, simplicity, and network convergence time. It should also offer a desirable level of flexibility to integrate other networks and new services and to grow in size.

In addition, the concept of network virtualization helps enterprises utilize the same underlying physical infrastructure while maintaining access, path, and services isolation to meet certain business goals or functional security requirements. As a result, enterprises can lower capex and opex and reduce the time and effort required to provision a new service or a new logical network. However, the network designer must consider the different network virtualization design options, along with the strengths and weaknesses of each, to deploy the network virtualization technique that best meets current and future needs. These needs must take into account the different variables and constraints, such as staff knowledge and the features and capabilities supported by the hardware platform.

Further Reading

