This paper provides deployment guidelines and best practices for customers who want to realize the operational benefits of a hyperconverged solution from Dell EMC coupled with best-in-class data center networking options from Arista. Because applications are distributed and interact in many ways, whether through cross-cluster communication, application-to-database queries, or similar exchanges, east-west traffic patterns now dominate the data center.
It is no longer sufficient to rely only on a firewall for north-south traffic to protect the data center. Arista Macro-Segmentation Service addresses a growing gap in security deployment models, in which embedded security in the virtualization hypervisor handles inter-VM communication while physical firewalls provide defense-in-depth protection for north-south traffic leaving the data center.
It is recommended that the reader have a sound comprehension of these two technologies prior to planning and deployment. Detail around configuration, deployment recommendations, and validation is provided in the sections that follow. The following figure depicts the entire application topology and the desired outcome: a zero-trust security model for an application. This chapter details how NSX-T creates virtual Layer 2 networks, called segments, to provide connectivity between its services and the different virtual machines in the environment.
A transport node is, by definition, a device implementing the NSX-T data plane. The software component running this data plane is a virtual switch, responsible for forwarding traffic between logical and physical ports on the device. NSX-T 3.0 introduced the ability to run NSX directly on the vSphere VDS. Operational details on how to run NSX on VDS are out of scope of this document, but the simplification in terms of VMkernel interface management that this new model brings will be called out in the design section. On the other hand, two VMs on different hosts and attached to the same overlay-backed segment will have their Layer 2 traffic carried by a tunnel between their hosts.
This IP tunnel is instantiated and maintained by NSX without the need for any segment specific configuration in the physical infrastructure, thus decoupling NSX virtual networking from this physical infrastructure.
Both possible representations are shown in the screenshot below. Segments are created as part of an NSX object called a transport zone. There are VLAN transport zones and overlay transport zones. A segment created in a VLAN transport zone will be a VLAN-backed segment, while a segment created in an overlay transport zone will be an overlay-backed segment. NSX transport nodes attach to one or more transport zones, and as a result, they gain access to the segments created in those transport zones.
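As an illustration, the following minimal sketch uses the NSX-T Policy REST API, through the Python requests library, to create an overlay-backed segment in an existing overlay transport zone. The manager address, credentials, transport zone path, and segment name are placeholders, and the exact payload fields should be validated against the API guide for the release in use.

```python
# Minimal sketch: creating an overlay-backed segment through the NSX-T Policy API.
# Manager address, credentials, transport zone path, and segment name are
# illustrative placeholders; check the NSX-T API guide for the exact fields.
import json
import requests

NSX_MANAGER = "https://nsx-mgr.example.local"   # hypothetical manager FQDN
AUTH = ("admin", "changeme")                     # use proper credential handling in practice

segment = {
    "display_name": "web-segment",
    # Path of an existing *overlay* transport zone; a VLAN-backed segment would
    # reference a VLAN transport zone and carry a "vlan_ids" list instead.
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/overlay-tz",
    "subnets": [{"gateway_address": "172.16.10.1/24"}],
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-segment",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(segment),
    verify=False,   # lab only; validate certificates in production
)
resp.raise_for_status()
print("Segment created/updated:", resp.status_code)
```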
However, this segment 1 does not extend to transport node 1. In other words, two NSX virtual switches on the same transport node cannot be attached to the same transport zone. Regarding hypervisor pNICs: in this example, a single virtual switch with two uplinks is defined on the hypervisor transport node.
One of the uplinks is a LAG, bundling physical port p1 and p2, while the other uplink is only backed by a single physical port p3. Both uplinks look the same from the perspective of the virtual switch; there is no functional difference between the two.
The teaming policy defines how the NSX virtual switch uses its uplinks for redundancy and traffic load balancing. There are two main options for teaming policy configuration: failover order and load balance source.
With the failover order policy, should the active uplink fail, the next available uplink in the standby list immediately takes its place. With the load balance source policy, each virtual interface is pinned to one of the active uplinks: traffic sent by this virtual interface leaves the host through this uplink only, and traffic destined to this virtual interface necessarily enters the host via this uplink. The teaming policy only defines how the NSX virtual switch balances traffic across its uplinks.
Note that a LAG uplink has its own hashing options; however, those hashing options only define how traffic is distributed across the physical members of the LAG uplink, whereas the teaming policy defines how traffic is distributed between NSX virtual switch uplinks.
When defining a transport node, the user must specify a default teaming policy that applies by default to the segments available to this transport node. ESXi hypervisor transport nodes additionally allow more specific teaming policies, identified by a name, to be defined on top of the default teaming policy. Overlay-backed segments always follow the default teaming policy; named teaming policies are typically used to precisely steer infrastructure traffic from the host to specific uplinks. By default, all the segments are thus going to send and receive traffic on u1.
Sometimes, it might be desirable to only send overlay traffic on a limited set of uplinks. Here, the default teaming policy only includes uplinks u1 and u2. As a result, overlay traffic is constrained to those uplinks.
KVM hypervisor transport nodes can only have a single LAG and only support the failover order default teaming policy; the load balance source and named teaming policies are not available on KVM. It is common for multiple transport nodes to share the exact same NSX virtual switch configuration, and it is difficult from an operational standpoint to configure and maintain many parameters consistently across many devices.
For this purpose, NSX defines a separate object called an uplink profile that acts as a template for the configuration of a virtual switch. In this way, the administrator can create multiple transport nodes with similar virtual switches by simply pointing to a common uplink profile.
Even better, when the administrator modifies a parameter in the uplink profile, the change is automatically propagated to all the transport nodes following this uplink profile. The uplink profile also specifies the MTU used for overlay traffic: NSX will assume that it can send overlay traffic with this MTU on the physical uplinks of the transport node without any fragmentation by the physical infrastructure.
LAGs are of course optional, but if you want to define some, you can give them a name and specify the number of links and the hash algorithm they will use. The virtual switch uplinks defined in the uplink profile must be mapped to real, physical uplinks on the device becoming a transport node. The uplinks U1 and U2 listed in the teaming policy of the uplink profile UP1 are just variable names. When transport node TN1 is created, physical uplinks available on the host are mapped to those variables.
If the uplink profile defined LAGs, physical ports on the host being prepared as a transport node would have to be mapped to the member ports of the LAGs defined in the uplink profile. The benefit of this model is that we can create an arbitrary number of transport nodes following the configuration of the same uplink profile. There might be local differences in the way virtual switch uplinks are mapped to physical ports. For example, one could create a transport node TN2 still using the same UP1 uplink profile, but mapping U1 to vmnic3 and U2 to vmnic0.
On TN1, this would lead to vmnic0 being active and vmnic1 standby, while TN2 would use vmnic3 as active and vmnic0 as standby. While uplink profiles allow configuring the virtual switches of multiple transport nodes in a centralized fashion, they also allow for very granular configuration when needed.
UP1 defined above cannot be applied to KVM hosts because those only support the failover order policy. If NSX had a single centralized configuration for all the hosts, we would have been forced to fall back to the lowest common denominator failover order teaming policy for all the hosts.
The uplink profile model also allows for different transport VLANs on different hosts. This can be useful when the same VLAN ID is not available everywhere in the network, for example during migrations or when VLANs are reallocated based on topology or geo-location changes.
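The indirection between the abstract uplink names in the profile and the physical vmnics on each host can be illustrated with a short sketch. This is not an actual NSX API payload; it is a simplified model, with hypothetical values, showing how one profile resolves differently on two transport nodes.

```python
# Illustrative model (not an NSX API payload) of how a single uplink profile can
# drive many transport nodes: the profile references abstract uplink names, and
# each transport node supplies its own mapping from those names to physical vmnics.
uplink_profile_up1 = {
    "name": "UP1",
    "teaming": {"policy": "FAILOVER_ORDER", "active": ["U1"], "standby": ["U2"]},
    "transport_vlan": 100,   # hypothetical overlay transport VLAN
    "mtu": 9000,             # must be honored end-to-end by the physical fabric
}

transport_node_mappings = {
    "TN1": {"U1": "vmnic0", "U2": "vmnic1"},
    "TN2": {"U1": "vmnic3", "U2": "vmnic0"},  # same profile, different local wiring
}

def effective_teaming(profile, mapping):
    """Resolve the profile's abstract uplink names to this node's physical NICs."""
    teaming = profile["teaming"]
    return {
        "active": [mapping[u] for u in teaming["active"]],
        "standby": [mapping[u] for u in teaming["standby"]],
    }

for node, mapping in transport_node_mappings.items():
    print(node, effective_teaming(uplink_profile_up1, mapping))
# TN1 -> active vmnic0, standby vmnic1; TN2 -> active vmnic3, standby vmnic0,
# matching the behavior described above.
```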
NSX-T also provides the Transport Node Profile (TNP), a template for creating a transport node that can be applied to a group of hosts in a single shot. A TNP can be applied to a cluster, turning all of its hosts into transport nodes in a single configuration step.
Further, configuration changes are kept in sync across all the hosts, leading to easier cluster management. Network I/O Control (NIOC) allows managing traffic contention on the uplinks of an ESXi hypervisor through shares, limits, and bandwidth reservations for the different kinds of ESXi infrastructure traffic. In addition to system traffic parameters, NIOC provides an additional level of granularity for the VM traffic category: shares, reservations, and limits can also be applied at the virtual machine vNIC level.
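The following simplified sketch illustrates how a shares-based model of this kind apportions a congested uplink; the actual ESXi NIOC scheduler is more sophisticated, and the traffic types, share values, and limit used here are purely illustrative.

```python
# Simplified, illustrative model of how NIOC-style shares apportion a congested
# uplink; the real ESXi scheduler is more sophisticated and the numbers below
# are made up for the example.
def allocate_bandwidth(link_capacity_gbps, traffic_types):
    """Split capacity proportionally to shares, then clamp to any per-type limit."""
    total_shares = sum(t["shares"] for t in traffic_types.values())
    allocation = {}
    for name, t in traffic_types.items():
        share_bw = link_capacity_gbps * t["shares"] / total_shares
        limit = t.get("limit_gbps")
        allocation[name] = min(share_bw, limit) if limit else share_bw
    return allocation

traffic = {
    "vmotion":    {"shares": 50},
    "vsan":       {"shares": 100},
    "vm_traffic": {"shares": 100, "limit_gbps": 15},
}

print(allocate_bandwidth(25, traffic))
# With a fully congested 25 Gbps uplink: vMotion ~5 Gbps, vSAN ~10 Gbps,
# VM traffic ~10 Gbps (below its 15 Gbps limit).
```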
Network Resource Pools are used to allocate bandwidth to groups of VMs. For more details, see the vSphere documentation. The Enhanced Data Path virtual switch is optimized for Network Function Virtualization, where the workloads typically perform networking functions with very demanding requirements in terms of latency and packet rate.
In order to accommodate this use case, the Enhanced Data Path virtual switch has an optimized data path, with a different resource allocation model on the host. The specifics of this virtual switch are outside the scope of this document. The important point to remember is that the two kinds of virtual switches can coexist on the same hypervisor.
For further details on the Enhanced Data Path N-VDS, refer to the VMware NSX-T documentation. This section on logical switching focuses on overlay-backed segments due to their ability to create isolated logical L2 networks with the same flexibility and agility that exists with virtual machines.
This decoupling of logical switching from the physical network infrastructure is one of the main benefits of adopting NSX-T. In the upper part of the diagram, the logical view consists of five virtual machines that are attached to the same segment, forming a virtual broadcast domain. The physical representation, at the bottom, shows that the five virtual machines are running on hypervisors spread across three racks in a data center.
Whether the TEPs are L2-adjacent in the same subnet or spread across different subnets does not matter. The benefit of this NSX-T overlay model is that it allows direct connectivity between transport nodes irrespective of the specific underlay inter-rack (or even inter-datacenter) connectivity. Segments can also be created dynamically without any configuration of the physical network infrastructure. The NSX-T segment behaves like a LAN, providing the capability of flooding traffic to all the devices attached to this segment; this is a cornerstone capability of Layer 2.
NSX-T does not differentiate between the different kinds of frames replicated to multiple destinations. Broadcast, unknown unicast, or multicast traffic will be flooded in a similar fashion across a segment. In the overlay model, the replication of a frame to be flooded on a segment is orchestrated by the different NSX-T components.
NSX-T provides two different methods for flooding traffic, described in the following sections. They can be selected on a per-segment basis. In the head-end replication mode, the transport node at the origin of the frame to be flooded sends a copy to every other transport node that is connected to this segment. Each green arrow represents the path of a point-to-point tunnel through which the frame is forwarded.
This is because the NSX-T Controller has determined that there is no recipient for this frame on that hypervisor. In this mode, the burden of the replication rests entirely on the source hypervisor and its uplink; this should be considered when provisioning uplink bandwidth. In the two-tier hierarchical mode, transport nodes are grouped according to the subnet of their TEP IP address.
Transport nodes in the same rack typically share the same subnet for their TEP IPs, though this is not mandatory. In this example, the IP subnets have been chosen to be easily readable; they are not public IPs.
The source hypervisor transport node knows about the groups based on the information it has received from the NSX-T Controller. It does not matter which transport node is selected to perform replication in the remote groups so long as the remote transport node is up and available.
If this were not the case (e.g., the selected transport node were down), a different transport node in the remote group would be chosen. In this mode, as with the head-end replication example, seven copies of the flooded frame have been made in software, though the cost of the replication has been spread across several transport nodes. It is also interesting to understand the traffic pattern on the physical infrastructure.
The benefit of the two-tier hierarchical mode is that only two tunnel packets were sent between racks (compared with five in the head-end mode), one for each remote group.
This is a significant improvement in the utilization of the inter-rack or inter-datacenter network fabric, where available bandwidth is typically lower than within a rack. In the case where the TEPs are in another data center, the savings could be significant.
Note also that this traffic optimization benefit of the two-tier hierarchical mode only applies to environments where TEPs have their IP addresses in different subnets. In a flat Layer 2 network, where all the TEPs have their IP addresses in the same subnet, the two-tier hierarchical replication mode would lead to the same traffic pattern as the head-end (source) replication mode.
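The difference between the two modes can be made concrete with a small worked example using a hypothetical layout of eight transport nodes spread across three racks, matching the numbers discussed above: seven software copies in both modes, but five inter-rack tunnel packets with head-end replication versus two with two-tier replication.

```python
# Worked example of the flooding cost described above, using a hypothetical
# layout of 8 transport nodes whose TEPs sit in three different racks/subnets.
racks = {
    "rack1": ["HV1", "HV2", "HV3"],
    "rack2": ["HV4", "HV5", "HV6"],
    "rack3": ["HV7", "HV8"],
}

def head_end(racks, source):
    """Source sends one tunnel copy to every other transport node on the segment."""
    src_rack = next(r for r, hosts in racks.items() if source in hosts)
    copies = sum(len(hosts) for hosts in racks.values()) - 1
    inter_rack = sum(len(hosts) for r, hosts in racks.items() if r != src_rack)
    return copies, inter_rack

def two_tier(racks, source):
    """Source floods locally, plus one copy to a proxy TEP per remote group."""
    src_rack = next(r for r, hosts in racks.items() if source in hosts)
    local_peers = len(racks[src_rack]) - 1
    remote_groups = [hosts for r, hosts in racks.items() if r != src_rack]
    copies = local_peers + sum(len(hosts) for hosts in remote_groups)  # still 7 software copies
    inter_rack = len(remote_groups)                                    # but only one tunnel per remote group
    return copies, inter_rack

print("head-end:", head_end(racks, "HV1"))   # (7, 5)
print("two-tier:", two_tier(racks, "HV1"))   # (7, 2)
```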
The default two-tier hierarchical flooding mode is recommended as a best practice as it typically performs better in terms of physical uplink bandwidth utilization. When a frame is destined to an unknown MAC address, it is flooded in the network. When a frame is destined to a unicast MAC address known in the MAC address table, it is only forwarded by the switch to the corresponding port. In this example, the NSX virtual switch on both the source and destination hypervisor transport nodes are fully populated.
This mechanism is relatively straightforward because at layer 2 in the overlay network, all the known MAC addresses are either local or directly reachable through a point-to-point tunnel. The benefit of data plane learning, further described in the next section, is that it is immediate and does not depend on the availability of the control plane.
In a traditional layer 2 switch, MAC address tables are populated by associating the source MAC addresses of frames received with the ports where they were received. In the overlay model, instead of a port, MAC addresses reachable through a tunnel are associated with the TEP for the remote end of this tunnel. Ideally data plane learning would occur through the NSX virtual switch associating the source MAC address of received encapsulated frames with the source IP of the tunnel packet.
But this common method used in overlay networking would not work for NSX with the two-tier replication model. Indeed, as shown in the section on two-tier hierarchical replication, flooded traffic may be relayed by an intermediate transport node; in that case, the source IP address of the received tunneled traffic represents the intermediate transport node instead of the transport node that originated the traffic.
When intermediate transport node HV5 relays the flooded traffic from HV1 to HV4, it actually decapsulates the original tunnel traffic and re-encapsulates it, using its own TEP IP address as a source. To keep data plane learning accurate, NSX-T therefore carries the identity of the original source TEP as metadata in the tunnel header. Metadata is a piece of information that is carried along with the payload of the tunnel.
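The following sketch illustrates the resulting learning behavior: the MAC table entry is keyed on the source TEP carried as metadata rather than on the outer source IP of the tunnel packet, so learning remains correct even when the frame has been re-encapsulated by an intermediate node. Names and addresses are illustrative.

```python
# Minimal sketch of the data plane learning behavior described above: the MAC
# table associates a MAC with the source TEP carried as tunnel *metadata*, not
# with the outer source IP, so learning stays correct even when an intermediate
# node re-encapsulates the flooded frame. All names and addresses are made up.
mac_table = {}  # MAC address -> TEP IP of the transport node hosting that MAC

def learn(frame):
    # frame carries: inner source MAC, outer (tunnel) source IP, and metadata
    # identifying the TEP that originated the traffic.
    mac_table[frame["src_mac"]] = frame["metadata_source_tep"]

# Flooded frame from HV1 (TEP 10.0.1.1), relayed by intermediate node HV5 (TEP 10.0.2.5):
relayed = {
    "src_mac": "00:50:56:aa:bb:01",
    "outer_src_ip": "10.0.2.5",         # intermediate node's TEP after re-encapsulation
    "metadata_source_tep": "10.0.1.1",  # original source TEP preserved as metadata
}
learn(relayed)
print(mac_table)  # {'00:50:56:aa:bb:01': '10.0.1.1'}: return traffic is tunneled to HV1
```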
The NSX-T Controller also maintains central tables for the segments; these tables include a global MAC address table. The global MAC address table can proactively populate the local MAC address table of the different transport nodes before they receive any traffic. Also, in the rare case when a transport node receives a frame from a VM destined to an unknown MAC address, it will send a request to look up this MAC address in the global table of the NSX-T Controller while simultaneously flooding the frame.
This behavior was implemented in order to protect the NSX-T Controller from the injection of an arbitrarily large number of MAC addresses into the network. This capability can be tuned to match the needs of typical workloads and of the overlay fabric. Note also that NSX-T tunnels are only set up between NSX-T transport nodes.
Network virtualization is all about developing a model of deployment that is applicable to a variety of physical networks and diversity of compute domains.
New networking features are developed in software and implemented without worrying about support on the physical infrastructure. For example, the data plane learning section described how NSX-T relies on metadata inserted in the tunnel header to identify the source TEP of a forwarded frame. Another use of this metadata is a bit marking flooded traffic relayed through an intermediate transport node: when a transport node receives a tunneled frame with this bit set, it knows that it must perform local replication to its peers.
Similarly, other vendors or partners can insert their own TLVs. Because overlay tunnels are only set up between NSX-T transport nodes, there is no need for any third-party hardware or software to decapsulate or look into NSX-T Geneve overlay packets. Networking feature adoption can thus happen in the overlay, isolated from underlay hardware refresh cycles. Even in highly virtualized environments, customers often have workloads that cannot be virtualized, because of licensing or application-specific reasons.
Even for virtualized workloads, some applications have embedded IP addresses that cannot be changed, or are legacy applications that require Layer 2 connectivity. There are therefore scenarios where Layer 2 connectivity is required between VMs and physical devices.
Whether for migration purposes or for the integration of non-virtualized appliances, if L2 adjacency is not strictly required, leveraging Layer 3 connectivity through a gateway on the Edges is typically more efficient, as routing allows for Equal Cost Multi-Pathing, which results in higher bandwidth and a better redundancy model.
A common misconception regarding the Edge Bridge is that a modern SDN-based design must not use bridging. That is not the case: the Edge Bridge can be used as a permanent solution for extending overlay-backed segments into VLANs. The use case of permanent bridging for a set of workloads exists for a variety of reasons, such as older applications that cannot change IP addresses, end-of-life gear that does not allow any change, regulation, or third-party connectivity where the span of control over those topologies or devices is limited.
However, an architect who wishes to enable such a use case must plan for some level of dedicated resources, such as bandwidth, operational control, and protection of the bridged topologies. As of NSX-T 2.x, a given segment can only be bridged to a VLAN at a single location; L2 traffic can therefore enter and leave the NSX overlay in a single place, which prevents the possibility of a loop between a VLAN and the overlay. It is however possible to bridge several different segments to the same VLAN ID, if those different bridging instances are leveraging separate Edge uplinks.
Later NSX-T 2.x releases extended this capability, allowing certain bare-metal topologies to be connected to an overlay segment and bridged to VLANs that can exist in a separate rack, without depending on a physical overlay.
NSX-T 3.0 brings further bridging enhancements; for more information about this feature, see the NSX-T bridging white paper. The Edge Bridge active in the data path is backed by a unique, pre-determined standby Bridge on a different Edge. Within an Edge Cluster, the user can create a Bridge Profile, which essentially designates two Edges as the potential hosts for a pair of redundant Bridges.
The Bridge Profile specifies which Edge is primary (i.e., the Edge intended to host the active Bridge) and which is backup. At the time of the creation of the Bridge Profile, no Bridge is instantiated yet. The Bridge Profile is just a template for the creation of one or several Bridge pairs.
Once a Bridge Profile is created, the user can attach a segment to it. By doing so, an active Bridge instance is created on the primary Edge, while a standby Bridge is provisioned on the backup Edge. The attachment of the segment to the Bridge Endpoint is represented by a dedicated logical port, as shown in the diagram below. At the time of the creation of the Bridge Profile, the user can also select the failover mode. In the preemptive mode, the Bridge on the primary Edge always becomes the active Bridge, forwarding traffic between overlay and VLAN as soon as it is available, taking over from an active Bridge on the backup Edge.
In the non-preemptive mode, the Bridge on the primary Edge will remain standby should it become available when the Bridge on the backup Edge is already active. The traffic leaving and entering a segment via a Bridge is subject to the Bridge Firewall.
Rules are defined on a per-segment basis and apply to the Bridge as a whole (i.e., the same rules are enforced irrespective of which Edge currently hosts the active Bridge). The firewall rules can leverage existing NSX-T grouping constructs, and there is currently a single firewall section available for those rules.
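Returning to the failover modes described above, the following small sketch captures the preemptive versus non-preemptive decision; it is purely illustrative of the behavior, not of the actual Edge implementation.

```python
# Small sketch of the preemptive vs. non-preemptive Bridge failover behavior
# described above; purely illustrative of the decision, not of the Edge code.
def active_bridge(primary_up, backup_up, currently_active, mode):
    """Return which Edge ('primary' or 'backup') should host the active Bridge."""
    if mode == "preemptive":
        # The primary always reclaims the active role as soon as it is available.
        return "primary" if primary_up else ("backup" if backup_up else None)
    # Non-preemptive: a recovered primary stays standby if the backup is already active.
    if currently_active == "backup" and backup_up:
        return "backup"
    return "primary" if primary_up else ("backup" if backup_up else None)

print(active_bridge(True, True, "backup", "preemptive"))      # primary
print(active_bridge(True, True, "backup", "non-preemptive"))  # backup
```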
This part requires an understanding of Tier-0 and Tier-1 gateways; refer to the Logical Routing chapter for further detail. Routing and bridging integrate seamlessly. The following diagram is a logical representation of a possible configuration leveraging Tier-0 and Tier-1 gateways along with Edge Bridges. Notably, through the Edge Bridges, Tier-1 or Tier-0 gateways can act as default gateways for physical devices.
ARP requests from physical workload for the IP address of an NSX router acting as a default gateway will be answered by the local distributed router on the Edge where the Bridge is active. The logical routing capability in the NSX-T platform provides the ability to interconnect both virtual and physical workloads deployed in different logical L2 networks.
NSX-T enables the creation of network elements like segments (Layer 2 broadcast domains) and gateways (routers) in software, as logical constructs embedded in the hypervisor layer and abstracted from the underlying physical hardware.
Since these network elements are logical entities, multiple gateways can be created in an automated and agile fashion. The previous chapter showed how to create segments; this chapter focuses on how gateways provide connectivity between different logical L2 networks.
When virtual or physical workloads in a data center communicate with devices external to the data center (e.g., WAN or Internet), the traffic is referred to as North-South traffic. The traffic between workloads confined within the data center is referred to as East-West traffic. For a multi-tiered application, where the web tier needs to talk to the app tier and the app tier needs to talk to the database tier, these different tiers sit in different subnets.
Traditionally, a centralized router would provide routing for these different tiers: every time a routing decision is needed, the packet is sent to a physical router.
With VMs hosted on the same ESXi or KVM hypervisor, traffic would leave the hypervisor multiple times to reach the centralized router for a routing decision, then return to the same hypervisor; this is not optimal. NSX-T is uniquely positioned to solve these challenges, as it can bring networking closest to the workload.
For VMs hosted on the same hypervisor, for example, routed traffic no longer needs to leave the hypervisor at all. A single-tier routing topology implies that a gateway is connected to segments southbound, providing E-W routing, and is also connected to the physical infrastructure to provide N-S connectivity.
This gateway is referred to as a Tier-0 gateway. A Tier-0 gateway consists of two components: a distributed routing component (DR) and a centralized services routing component (SR).
The DR runs as a kernel module and is distributed in the hypervisors across all transport nodes, including Edge nodes. The traditional data plane functionality of routing and ARP lookup is performed on the logical interfaces connecting to the different segments. The DR component of a Tier-0 gateway is instantiated as a kernel module and acts as the local gateway, or first-hop router, for the workloads connected to its segments.
Routing is performed on the hypervisor attached to the source VM. For the return traffic, the routing lookup happens on the HV2 DR. This represents the normal behavior of the DR, which is to always perform routing on the DR instance running in the kernel of the hypervisor hosting the workload that initiates the communication. East-West routing is completely distributed in the hypervisor, with each hypervisor in the transport zone running a DR in its kernel.
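The following sketch models this behavior: every hypervisor holds the same DR routing table, the lookup is always performed on the hypervisor of the VM initiating the traffic, and routed traffic either stays local or crosses the overlay exactly once. Subnets, hosts, and VM placement are made up for the example.

```python
# Illustrative model of distributed East-West routing: every hypervisor runs the
# same DR routing table, and the lookup is always done on the hypervisor hosting
# the VM that initiates the traffic. Subnets, hosts, and VM placement are made up.
import ipaddress

dr_routes = {                        # identical DR table instantiated on every transport node
    "web-segment": ipaddress.ip_network("172.16.10.0/24"),
    "app-segment": ipaddress.ip_network("172.16.20.0/24"),
}
vm_location = {"172.16.10.11": "HV1", "172.16.20.11": "HV1", "172.16.20.12": "HV2"}

def route(src_ip, dst_ip):
    src_host = vm_location[src_ip]
    dst_segment = next(s for s, net in dr_routes.items()
                       if ipaddress.ip_address(dst_ip) in net)
    dst_host = vm_location[dst_ip]
    # Routing happens in the DR on the source hypervisor; traffic is then either
    # delivered locally or tunneled once to the destination hypervisor.
    hop = "local delivery" if src_host == dst_host else f"one overlay tunnel {src_host}->{dst_host}"
    return f"routed on {src_host} DR into {dst_segment}, {hop}"

print(route("172.16.10.11", "172.16.20.11"))  # routed on HV1, stays on HV1
print(route("172.16.10.11", "172.16.20.12"))  # routed on HV1, single tunnel to HV2
```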
However, some services of NSX-T are not distributed, due to their locality or stateful nature, such as physical infrastructure connectivity, NAT, DHCP, VPN, load balancing, and the gateway firewall. A services router (SR) is instantiated on an Edge cluster when such a non-distributable service is enabled on a gateway. A centralized pool of capacity is required to run these services in a highly available and scaled-out fashion.
The appliances where the centralized services or SR instances are hosted are called Edge nodes. An Edge node is the appliance that provides connectivity to the physical infrastructure.
Note that the compute hosts (i.e., the hypervisor transport nodes hosting the workloads) run only the DR component. Notice that all the overlay segments are attached to the SR as well. The external interface connects the SR to the physical infrastructure; static routing and BGP are supported on this interface, which was referred to as the uplink interface in previous releases.
This interface can also be used to extend a VRF (Virtual Routing and Forwarding) instance from the physical networking fabric into the NSX domain. The service interface can also be connected to overlay segments for Tier-1 standalone load balancer use cases, explained in the load balancer chapter (Chapter 6).
The service interface was referred to as the centralized service port (CSP) in previous releases; note that a gateway must have an SR component to realize a service interface. The interface connecting a gateway to a segment was referred to as the downlink interface in previous releases, and static routing is supported over that interface.
This address range is configurable only when creating the Tier-0 gateway. As mentioned previously, connectivity between the DR on the compute host and the SR on the Edge node is auto-plumbed by the system. From a physical topology perspective, workloads are hosted on hypervisors and N-S connectivity is provided by Edge nodes. If a device external to the data center needs to communicate with a virtual workload hosted on one of the hypervisors, the traffic has to reach the Edge nodes first.
This traffic will then be sent on an overlay network to the hypervisor hosting the workload. As discussed in the E-W routing section, routing always happens closest to the source. In this example, eBGP peering has been established between the physical router and the uplink interface of the Tier-0 gateway on the Edge node. On the Edge node, the packet is sent directly to the SR after the tunnel encapsulation has been removed. No such lookup was required on the DR hosted on the HV1 hypervisor, and the packet was sent directly to the VM after removing the tunnel encapsulation header.
If this Edge node goes down, N-S connectivity, along with the other centralized services running on the Edge node, goes down as well. To provide redundancy for centralized services and N-S connectivity, it is recommended to deploy a minimum of two Edge nodes. High availability modes are discussed in a later section. In addition to providing optimized distributed and centralized routing functions, NSX-T supports a multi-tiered routing model with logical separation between different gateways within the NSX-T infrastructure.
The top-tier gateway is referred to as a Tier-0 gateway, while the bottom-tier gateway is a Tier-1 gateway. This structure gives complete control and flexibility over services and policies: various stateful services can be hosted on the Tier-1 while the Tier-0 can operate in an active-active manner. Configuring two-tier routing is not mandatory; it can be single-tiered as shown in the previous section. Southbound, the Tier-0 gateway connects to one or more Tier-1 gateways, or directly to one or more segments as shown in the North-South routing section.
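As an illustration of the two-tier model, the hedged sketch below creates a Tier-1 gateway through the NSX-T Policy API and links it to an existing Tier-0 gateway. The manager address, object names, and paths are placeholders, and the field names, while modeled on the Policy API, should be verified against the API documentation for the release in use.

```python
# Hedged sketch of a two-tier topology through the NSX-T Policy API: a Tier-1
# gateway is created and linked to an existing Tier-0. Manager address, IDs, and
# paths are placeholders; validate field names against the API guide in use.
import json
import requests

NSX_MANAGER = "https://nsx-mgr.example.local"
AUTH = ("admin", "changeme")

tier1 = {
    "display_name": "tenant1-t1",
    "tier0_path": "/infra/tier-0s/corp-t0",            # existing Tier-0 gateway
    "route_advertisement_types": ["TIER1_CONNECTED"],   # advertise connected segments northbound
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-1s/tenant1-t1",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(tier1),
    verify=False,   # lab only
)
resp.raise_for_status()

# A segment can then be attached to this Tier-1 by setting its "connectivity_path"
# to "/infra/tier-1s/tenant1-t1", making the Tier-1 its default gateway.
```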
When creating an address plan as part of a network design, carefully consider other address or network elements to define an address plan that matches and supports these elements. As role-based security is deployed, there is a need for different groupings of VPN clients. These might correspond to administrators, employees, different groups of contractors or consultants, external support organizations, guests, and so on.
Role-based access can be controlled via the group password mechanism for the Cisco VPN client. Each group can be assigned VPN endpoint addresses from a different pool. The different subnets or blocks of VPN endpoint addresses can then be used in ACLs to control access across the network to resources, as discussed earlier for NAC roles.
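One simple way to organize such pools is to carve them out of a single summary block, so that access control can match on a per-role pool while return routing only needs the summary route. The sketch below is illustrative; the block size and role names are examples only.

```python
# Illustrative way to carve per-role VPN address pools out of one summary block,
# so that role-based ACLs can match on a pool while return routing only needs
# the summary. The block and role names are examples only.
import ipaddress

summary_block = ipaddress.ip_network("10.200.0.0/21")
roles = ["admins", "employees", "contractors", "guests"]

# One /24 pool per role, all summarizable back to the /21.
pools = dict(zip(roles, summary_block.subnets(new_prefix=24)))
for role, pool in pools.items():
    print(f"{role:12} -> {pool}")

# Routing back to all clients needs only the single 10.200.0.0/21 route,
# while ACLs can reference 10.200.0.0/24, 10.200.1.0/24, etc. per role.
```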
If the pools are subnets of a summary address block, routing traffic back to clients can be done in a simple way. NAT is a powerful tool for working with IP addresses. It has the potential for being very useful in the enterprise to allow private internal addressing to map to publicly assigned addresses at the Internet connection point. However, if it is overused, it can be harmful. A common approach to supporting content load-balancing devices is to perform destination NAT.
A recommended approach to supporting content load-balancing devices is to perform source NAT. As long as NAT is done in a controlled, disciplined fashion, it can be useful.
Internal NAT can make network troubleshooting confusing and difficult. For example, it would be difficult to determine which network 10 in an organization a user is currently connected to. Many organizations are now using more than one instance of network 10 internally, stitched together with NAT. This is a severely suboptimal situation and can make troubleshooting and documentation very difficult.
Re-addressing should be planned as soon as possible. It is also a recommended practice to isolate any servers reached through content devices using source NAT or destination NAT. These servers are typically isolated because the packets with NAT addresses are not useful elsewhere in the network. NAT also proves useful when a company or organization has more than a couple of external business partners. Some companies exchange dynamic routing information with external business partners.
Exchanges require trust. The drawback to this approach is that a static route from a partner to your network might somehow get advertised back to you.
This advertisement, if accepted, can result in part of your network becoming unreachable. One way to control this situation is to implement two-way filtering of routes to partners: Advertise only subnets that the partner needs to reach, and only accept routes to subnets or prefixes that your staff or servers need to reach at the partner.
Some organizations prefer to use static routing to reach partners in a tightly controlled way. When the partner is huge, such as a large bank, static routing is too labor intensive. Importing thousands of external routes into the internal routing protocol for each of several large partners causes the routing table to become bloated.
Another approach is to terminate all routing from a partner at an edge router, preferably receiving only summary routes from the partner. NAT can then be used to change all partner addresses on traffic into a range of locally assigned addresses. Different NAT blocks are used for different partners.
This approach converts a wide range of partner addresses into a tightly controlled set of addresses and simplifies troubleshooting. It can also avoid potential issues when multiple organizations are using the same private address space. If the NAT blocks are chosen out of a larger block that can be summarized, a redistributed static route for the larger block easily makes all partners reachable on the enterprise network.
Internal routing then has one route that in effect says "this way to partner networks." A partner-block approach to NAT supports faster internal routing convergence by keeping partner subnets out of the enterprise routing table. A disadvantage to this approach is that it is more difficult to trace the source of IP packets. However, if required, you can backtrack and obtain the source information through the NAT table.
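The partner NAT block idea can be sketched the same way: each partner is assigned a NAT block carved from one summarizable aggregate, so a single redistributed static route covers all partners. The addresses below are illustrative.

```python
# Sketch of the partner NAT block approach above: each partner's address space is
# mapped into its own NAT block, and all blocks are carved from one summarizable
# aggregate so a single redistributed static route covers every partner.
# Addresses are illustrative.
import ipaddress

partner_aggregate = ipaddress.ip_network("10.96.0.0/14")
partners = ["bank-a", "supplier-b", "logistics-c"]

nat_blocks = dict(zip(partners, partner_aggregate.subnets(new_prefix=20)))
for partner, block in nat_blocks.items():
    print(f"{partner:12} NAT block {block}")

# Every block is a subnet of the aggregate, so one internal route for
# 10.96.0.0/14 is enough to say "this way to partner networks".
assert all(block.subnet_of(partner_aggregate) for block in nat_blocks.values())
```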