
NETWORK SCALING: OSPF TO BGP


Introduction

This case study describes a project in which we addressed scalability issues in a customer's network caused by uncontrolled growth and reliance on OSPF as the sole routing protocol. We documented the design requirements and developed a migration strategy from OSPF to BGP to overcome the identified challenges and achieve network scalability. The case study concludes with the final configurations implemented for the customer.

 

OSPF to BGP Migration Scenario

The existing network relies on OSPF as its routing protocol, which has led to scalability issues and limited administrative control. Planned network expansion and the increasing demand for efficient routing necessitate a migration from OSPF to BGP. Figure 1-10 illustrates the initial network topology.

 

Figure 1-10 Initial Network Topology

Design Requirements

The primary objective of this project was to improve scalability, redesign the administrative control structure, and accommodate network expansion plans. The migration had a limited budget and had to be completed within a specified timeframe. The key requirements are as follows:

  1. Enhance scalability and stability

  2. Enable different departments to have administrative control over their respective network resources

  3. Accommodate future network expansion

  4. Optimize routing for diverse locations

  5. Complete the migration within the defined timeframe


Solution Description

The scalability issues within the existing OSPF-based network require a transition to BGP to achieve the desired objectives. BGP provides better administrative control, scalability, and flexibility for future policy requirements. By migrating to BGP, the network can accommodate future expansions and improve routing efficiency.

The requirements were analyzed against the available BGP architectures to determine the most suitable design. The selected solution places iBGP in the core layer, with each department operating as its own BGP AS (and iBGP domain) and peering with the core via eBGP. This design provides a clean separation of administrative control, supports aggressive network expansion, and allows for easy policy management.


Major Components

Core Design

The network core comprises routers R4, R5, R6, and R10, which establish iBGP peering sessions to form the core. The loopback interfaces serve as the source for these sessions, and a core BGP AS (e.g., AS 65100) is assigned. Each iBGP session uses next-hop-self to eliminate the need to carry prefixes for DMZ Ethernet segments. The core routers issue a default route to major centers and remote site aggregation routers.
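
A minimal configuration sketch for one core router is shown below, assuming Cisco IOS-style syntax; the loopback addresses, DMZ neighbor addresses, and neighboring AS number are illustrative rather than taken from the actual deployment.

! Core router R4 -- core AS 65100 (illustrative addressing)
interface Loopback0
 ip address 10.0.0.4 255.255.255.255
!
router bgp 65100
 ! iBGP peering to the other core routers (R5, R6, R10), sourced from loopbacks
 neighbor 10.0.0.5 remote-as 65100
 neighbor 10.0.0.5 update-source Loopback0
 neighbor 10.0.0.5 next-hop-self
 neighbor 10.0.0.6 remote-as 65100
 neighbor 10.0.0.6 update-source Loopback0
 neighbor 10.0.0.6 next-hop-self
 neighbor 10.0.0.10 remote-as 65100
 neighbor 10.0.0.10 update-source Loopback0
 neighbor 10.0.0.10 next-hop-self
 ! eBGP to an attached major-center border router, which receives a default route
 neighbor 192.168.41.1 remote-as 65101
 neighbor 192.168.41.1 default-originate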


Major Center Attachment

The network includes multiple major centers, with each center operating within its own BGP AS (e.g., AS 65101, AS 65102, AS 65103). The major centers connect to the BGP core via eBGP sessions. Each major center runs its own OSPF process for local routing, while connectivity beyond the center is managed by the network core. Border routers establish BGP peering sessions with the core using the physical link addresses of the Ethernet DMZ. In locations with multiple border routers, iBGP sessions are established between them using loopback interfaces.
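
A border-router sketch for one major center follows, again assuming Cisco IOS-style syntax; the AS number, OSPF process ID, and all addresses are illustrative.

! Major-center border router -- AS 65101 (illustrative addressing)
interface Loopback0
 ip address 10.1.0.1 255.255.255.255
!
router ospf 10
 ! local routing within the major center; the loopback is included so the
 ! iBGP session to the second border router can be sourced from it
 network 172.16.0.0 0.0.255.255 area 0
 network 10.1.0.1 0.0.0.0 area 0
!
router bgp 65101
 ! eBGP to the core over the physical address of the Ethernet DMZ
 neighbor 192.168.41.2 remote-as 65100
 ! iBGP to the second border router in this center, sourced from loopbacks
 neighbor 10.1.0.2 remote-as 65101
 neighbor 10.1.0.2 update-source Loopback0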


Remote Site Aggregation

Approximately 400 remote site routers are aggregated via Frame Relay PVCs to hub routers located in different major centers for redundancy. The hub routers, physically located in Location A and Location C, establish eBGP sessions with the colocated core routers (e.g., R3 with R4, R11 with R10). Remote sites use dual PVCs for primary and secondary connectivity. The hub routers redistribute routes from OSPF into BGP and set the Multi-Exit Discriminator (MED) outbound to indicate the preferred path. Only the default route is advertised via OSPF to the remote site routers, keeping the remote OSPF processes small; the default route is injected into BGP, filtered so that only the default is passed, and redistributed from BGP into OSPF.
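
A hub-router sketch of this behavior follows, assuming Cisco IOS-style syntax; the hub AS number, OSPF process ID, addresses, and MED value are illustrative, and OSPF's default-information originate is shown as one way of passing only the BGP-learned default route down to the remote sites.

! Remote-site aggregation hub R3 -- AS 65104 (illustrative addressing)
router bgp 65104
 ! eBGP to the colocated core router R4; the MED is set outbound so the
 ! core prefers this hub for the remote-site prefixes it advertises
 neighbor 192.168.34.4 remote-as 65100
 neighbor 192.168.34.4 route-map SET-MED out
 ! remote-site routes learned via OSPF are redistributed into BGP
 redistribute ospf 20
!
route-map SET-MED permit 10
 ! the backup hub would set a higher value
 set metric 50
!
router ospf 20
 ! only a default route is advertised toward the remote site routers;
 ! it is generated from the BGP-learned default present in the routing table
 default-information originate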


Internet Connectivity

Internet connectivity is consolidated in Location A and Location C, where routers R13 and R14 connect to firewalls leading to the external DMZs. The core routers announce full internal routing information via eBGP to R13 and R14, which in turn originate default routes into the network core. The firewalls provide reachability to the public Internet DMZ, and R13 and R14 carry default routes pointing to the firewalls for Internet access.
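
An Internet-router sketch (for R13) under the same assumptions; the AS number, firewall address, and core neighbor address are illustrative.

! Internet router R13 -- AS 65200 (illustrative addressing)
! static default toward the firewall for Internet access
ip route 0.0.0.0 0.0.0.0 192.168.200.1
!
router bgp 65200
 ! eBGP to the colocated core router: full internal routes are received,
 ! and a default route is originated into the network core
 neighbor 192.168.113.4 remote-as 65100
 neighbor 192.168.113.4 default-originate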


Figure 1-20 Final Network Topology


Migration Plan

The migration plan aims to ensure a smooth transition to BGP while minimizing network disruption. It involves establishing the supporting infrastructure, overlaying BGP configurations, activating the BGP core, and performing final cleanup.

  1. Supporting Infrastructure: Loopback interfaces are created and included in OSPF routing. Loopback addressing follows a predefined scheme for easy identification. Internet routers are installed for future BGP origination, and proxy-based Internet connectivity is maintained during this stage.

  2. Overlay BGP and Inject Prefixes: BGP configurations are deployed, and prefixes are injected from OSPF into BGP. BGP administrative distances are set higher than OSPF's so that BGP-learned prefix propagation can be validated without affecting forwarding (as sketched after this list). The BGP infrastructure is then verified for full reachability.

  3. BGP Core Activation: OSPF adjacencies between border routers and the core are broken, allowing BGP-learned prefixes to take effect. Full connectivity is verified, and the routing tables are compared before and after the core activation.

  4. Final Cleanup: BGP administrative distances are returned to default values, and the core OSPF process is renumbered to prevent misconfiguration and accidental reformation of OSPF adjacencies. Cleanup also includes removal of the old OSPF process.
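
The administrative-distance handling in steps 2 and 4 can be sketched as follows, assuming Cisco IOS-style syntax; the AS number is illustrative. OSPF's default administrative distance is 110, while BGP defaults to 20 for external routes and 200 for internal and local routes.

! Step 2: raise the eBGP distance above OSPF's 110 so that OSPF remains
! preferred while the BGP-learned prefixes are validated
router bgp 65100
 distance bgp 115 200 200
!
! Step 4 (final cleanup): return BGP administrative distances to their defaults
router bgp 65100
 no distance bgp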


By following this migration plan, the network successfully transitions from OSPF to BGP, achieving improved scalability, administrative control, and routing efficiency.

Final Configurations (R2 and R3)