Author: Jeevan Sharma
Seamless Connectivity with Infoblox IPAM for Hybrid Multi-cloud Environments
When an organization's cloud and on-premises networks operate independently in their own silos, IP address allocation is localized in each environment, and there is no need for awareness of IP allocations elsewhere. Overlapping IP addresses can coexist when the cloud and on-premises networks are isolated. Such configurations demand minimal coordination between cloud and on-premises IT teams, and many organizations operated this way during their early moves into the cloud.
This is not true anymore; the convergence of on-premises and cloud has ushered in a new era of connectivity, redefining the digital fabric where interconnected systems are the new norm. In this hybrid, multi-cloud world of interconnectivity, where data and applications transcend network boundaries, the role of IP Address Management (IPAM) is indispensable for ensuring seamless communication and maintaining network security across these diverse environments.
Given the critical role IPAM plays in this new hybrid, multi-cloud world, you need an IPAM system where all decisions about IP address assignment for on-premises and cloud networks are made in one place. Collaboration between NetOps and CloudOps teams on IP address allocations, rather than conflict, is crucial in an interconnected hybrid multi-cloud environment. Without it, the same range of IP addresses may be handed out to two different networks, producing overlapping IP addresses and the connectivity problems that follow. To minimize the risk of overlap or exhaustion, audit IP address assignments regularly to confirm that allocated addresses are properly utilized and that no duplicate addresses have been assigned accidentally.
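The kind of regular audit described above is easy to automate. Below is a minimal sketch using only Python's standard library that flags overlapping allocations across environments; the network names and CIDR blocks are illustrative, not from any real deployment.

```python
import ipaddress

# Illustrative allocations; in practice these would come from your IPAM system.
allocations = {
    "on-prem-hq": "10.0.0.0/16",
    "gcp-vpc-a": "10.1.0.0/16",
    "aws-vpc-1": "10.1.0.0/17",  # mistakenly overlaps the GCP block
}

def find_overlaps(allocs):
    """Return pairs of network names whose CIDR blocks overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in allocs.items()}
    names = sorted(nets)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if nets[a].overlaps(nets[b])
    ]

print(find_overlaps(allocations))  # [('aws-vpc-1', 'gcp-vpc-a')]
```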
Lack of visibility into the utilization of IP addresses within a Virtual Private Cloud (VPC) often leads to overallocation. This issue is particularly pronounced with Google Kubernetes Engine (GKE) clusters, as default allocations tend to be excessively large, resulting in the allocation of far more IP addresses than necessary. Without proper oversight, this practice rapidly depletes the available private IP address space, exacerbating the problem further.
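To see why GKE defaults consume so much space, consider the arithmetic below. It assumes GKE's documented defaults at the time of writing (a maximum of 110 Pods per node and a /14 secondary range for Pods); verify these against current GKE documentation.

```python
import math

MAX_PODS_PER_NODE = 110  # assumed GKE default
POD_RANGE_PREFIX = 14    # assumed default Pod secondary range (/14)

# GKE reserves roughly 2x the Pod maximum per node, rounded up to a power
# of two, which works out to a /24 (256 addresses) per node.
per_node_prefix = 32 - math.ceil(math.log2(2 * MAX_PODS_PER_NODE))

# How many nodes a /14 Pod range can accommodate at a /24 per node.
nodes_supported = 2 ** (per_node_prefix - POD_RANGE_PREFIX)

print(per_node_prefix)   # 24: each node consumes a /24
print(nodes_supported)   # 1024: that many nodes exhaust the entire /14
```

Under these assumptions a single cluster's default Pod range claims 262,144 private addresses, which is why oversight matters.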
This is why Infoblox is collaborating with Google Cloud Platform (GCP) to help customers manage their IP addressing from one central system and seamlessly connect their on-premises networks with Google Cloud infrastructure. In this blog, we will discuss how BloxOne DDI and GCP internal range help you efficiently manage IP addresses for your on-premises and GCP networks at scale through automation.
GCP internal range Integration Overview
What is an internal range resource?
GCP internal range lets you reserve and manage IP address resources for Google Virtual Private Cloud (VPC). As your network grows more complex, you can use internal range to help you manage VPC network topology with features like VPC Network Peering and Shared VPC. With internal ranges, you can reserve and pre-allocate one or more blocks of private IP addresses and specify how those addresses can be used by the VPC resources. To allocate an address block to a VPC, you can create an internal range resource reservation for the VPC. You can also create additional internal range resource reservations to reserve address blocks that are already in use somewhere else (e.g., on-premises) or for those that you want to protect.
You can reserve and protect IP address blocks with internal range for your GCP VPC network, provided you know what those IP address blocks are. You can get those address blocks from the networking team and set up internal range resources in your VPC. However, maintaining these internal range resources in a dynamic network environment can get overwhelming when IP addresses change frequently in other parts of the network (on-premises, other cloud providers, etc.). This is where BloxOne DDI can manage these internal range resources and keep them current with Federated IPAM. In the next section, we introduce Infoblox IPAM and explain how it helps manage GCP internal ranges.
Infoblox IPAM Overview
What is IPAM?
IP Address Management (IPAM), a feature of BloxOne DDI, allows you to manage multiple IPAM systems (such as on-premises and public cloud) from a central control point, ensuring greater efficiency, coordination, and policy compliance. Leveraging a “federation” architecture to synchronize IP data from various systems, it promotes collaboration among NetOps and CloudOps teams utilizing various IPAM systems and tools, facilitating streamlined operations with unique IP address assignments across on-premises and cloud networks.
With Infoblox IPAM, you can do the following:
- Create routable, non-overlapping IP address blocks for a connected domain
- Reserve the address blocks for use in a specific public/private cloud or on-premises
- Delegate a subset of the address block to an IP Space
Three key concepts are essential to understanding Infoblox IPAM and its Federation features:
Federated Realm: A unique namespace of address blocks representing a connected routing domain. A federated realm signifies that resources within its address blocks are unique and reachable from resources in other address blocks.
Federated Block: A Federated Block reserves an address block and identifies the IPAM system responsible for managing it. It defines the available network space that IPAM systems can allocate within a federated realm.
Delegation: A subset of IP address space in a Federated Realm that has been allocated to a participating IP space.
Use Cases
Now that you know the basics of IPAM, let’s walk through some practical use cases to illustrate how it works. In these use cases, we will show how you can manage Google Cloud networking with BloxOne CSP.
Use Case 1: Plan your networks and reserve IP address ranges
Suppose you are planning to use the 10.0.0.0/8 private IP address space for your enterprise network. The network you build needs to connect on-premises sites at HQ, branch offices, and data centers, as well as GCP and AWS cloud networks. You want to assign non-overlapping private IP address blocks to each site and cloud network so they can be connected to one another. In this case, three address blocks are carved out of the private IP address space for the on-premises and cloud provider networks.
**Enterprise Network IP Addressing**

| Network | Federated Block | Address range |
| --- | --- | --- |
| Private IP CIDR | | 10.0.0.0/8 |
| On-premises networks | Federated Block 1 | 10.0.0.0/16 |
| GCP | Federated Block 2 | 10.1.0.0/16 |
| Cloud provider 2 | Federated Block 3 | 10.2.0.0/16 |
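The carve-out in the table can be reproduced with Python's standard library, which is a quick way to sanity-check that each Federated Block really sits inside the 10.0.0.0/8 space; the dictionary keys are just labels for this example.

```python
import ipaddress

enterprise = ipaddress.ip_network("10.0.0.0/8")

# The first three /16 blocks of the /8 match the table above.
blocks = list(enterprise.subnets(new_prefix=16))[:3]
federated_blocks = {
    "on-premises": blocks[0],       # Federated Block 1
    "gcp": blocks[1],               # Federated Block 2
    "cloud-provider-2": blocks[2],  # Federated Block 3
}

# Every block must fall inside the enterprise /8.
assert all(b.subnet_of(enterprise) for b in federated_blocks.values())
print({name: str(b) for name, b in federated_blocks.items()})
# {'on-premises': '10.0.0.0/16', 'gcp': '10.1.0.0/16', 'cloud-provider-2': '10.2.0.0/16'}
```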
In this use case, we will plan IP address assignments for the different networks from BloxOne DDI. We will start by logging into the BloxOne Cloud Services Platform (CSP) and carrying out the following steps.
Step 1: In Infoblox IPAM, create a unique Federated Realm (namespace) for your connected routing domain.
Step 2: Create a Federated Block to reserve free address blocks of the required size from the 10.0.0.0/8 address space. Say we want to create an address block with a /16 CIDR.
After you have finished creating Federated Blocks for the various network types, you will be able to view all of these private IP ranges in Infoblox IPAM.
Step 3: Delegate the 10.1.0.0/24 address block from the GCP Federated Block to vpc-a in GCP. This step creates an internal range for vpc-a in GCP with usage type FOR_VPC and peering type FOR_SELF.
Step 4: Create a subnet that uses part of the internal range 10.1.0.0/24 created in step 3. In this step we create a /27 subnet called subnet-a in vpc-a.
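Steps 3 and 4 can be sketched in miniature with the standard library: delegate the first /24 from the GCP Federated Block, then carve a /27 for subnet-a out of it. The names mirror the walkthrough; the real delegation happens in BloxOne CSP, not in local code.

```python
import ipaddress

gcp_block = ipaddress.ip_network("10.1.0.0/16")

# Step 3: the delegated /24 becomes the internal range for vpc-a.
delegation = next(gcp_block.subnets(new_prefix=24))

# Step 4: subnet-a takes the first /27 of that internal range.
subnet_a = next(delegation.subnets(new_prefix=27))

assert subnet_a.subnet_of(delegation)
print(delegation, subnet_a, subnet_a.num_addresses)  # 10.1.0.0/24 10.1.0.0/27 32
```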
After executing these steps in BloxOne CSP, you should see in your Google Cloud Console an internal range for vpc-a and a subnet named subnet-a, both created by BloxOne in your GCP project.
Use Case 2: Reserve and protect network ranges allocated to on-premises and other cloud provider networks in GCP
Consider a scenario where your on-premises network is connected to a VPC within the Google Cloud environment. Your goal is to safeguard your on-premises address ranges from being utilized within the Google Cloud environment and to avoid any IP conflicts. You can create internal range reservations for your on-premises address ranges in your Google Cloud VPC with the “EXTERNAL_TO_VPC” and “NOT_SHARED” attributes to block the creation of subnets or routes using those ranges. A similar approach can be applied to IP addresses assigned to other cloud providers linked to the VPC within the Google Cloud environment.
In this example we have a VPC in a Google Cloud Project that is connected to an on-premises network and another cloud provider network.
Here, we will start by logging into BloxOne CSP and adding Federated Block reservations for the Google Cloud, on-premises, and other cloud provider networks to the same Federated Realm. After this, we will create address block delegations for those networks in BloxOne CSP. The table below shows the Federated Blocks and address block delegations that we are going to create. Once these steps are executed in BloxOne CSP, the BloxOne system will create the corresponding GCP internal range reservations in Google Cloud.
**Federated Realm – MyEnterpriseNetwork**

| Network | Federated Block | Delegation | GCP internal range |
| --- | --- | --- | --- |
| Google Cloud | 10.1.0.0/16 | 10.1.0.0/24 | Usage: FOR_VPC; Peering: FOR_SELF |
| On-premises | 10.0.0.0/16 | 10.0.0.0/24, 10.0.1.0/24 | Usage: EXTERNAL_TO_VPC; Peering: NOT_SHARED |
| Other cloud provider | 10.2.0.0/16 | 10.2.0.0/24, 10.2.1.0/24 | Usage: EXTERNAL_TO_VPC; Peering: NOT_SHARED |
Step 1: Create a unique Federated Realm namespace and reserve Federated Blocks for the Google Cloud, on-premises, and other cloud provider networks in this Federated Realm.
Step 2: Assign the IP Spaces mapped to Google Cloud VPCs to the Federated Realm created in the previous step, if they have not been assigned already.
Step 3: Delegate 10.1.0.0/24, 10.1.1.0/24, 10.1.2.0/24, and 10.1.3.0/24 address blocks from the Federated Block 10.1.0.0/16 to the IP Spaces mapped to Google Cloud VPCs.
Step 4: Delegate 10.0.0.0/24, 10.0.1.0/24 to on-premises by creating these address blocks in the IP Space for on-premises networks.
Step 5: Delegate 10.2.0.0/24, 10.2.1.0/24 to another cloud provider network by creating these address blocks in the IP Space for the other cloud provider networks.
Once these steps are executed in BloxOne CSP, the BloxOne platform creates internal range resources with the attributes “EXTERNAL_TO_VPC” and “NOT_SHARED” for the on-premises and other cloud provider address ranges in the Google Cloud VPC. Google Cloud protects the reserved address blocks and denies any attempt to create a subnet or route using these internal range reservations inside the VPC.
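The protection behavior can be approximated client-side with a simple check: reject any proposed subnet that overlaps a range reserved with EXTERNAL_TO_VPC/NOT_SHARED. The reserved ranges below come from the table above; the function is an illustration, not the actual Google Cloud validation logic.

```python
import ipaddress

# Internal ranges reserved with EXTERNAL_TO_VPC / NOT_SHARED (from the table).
protected = [
    ipaddress.ip_network(c)
    for c in ("10.0.0.0/24", "10.0.1.0/24", "10.2.0.0/24", "10.2.1.0/24")
]

def subnet_allowed(cidr: str) -> bool:
    """True if a proposed subnet avoids every protected internal range."""
    net = ipaddress.ip_network(cidr)
    return not any(net.overlaps(p) for p in protected)

print(subnet_allowed("10.1.5.0/27"))  # True: inside the Google Cloud block
print(subnet_allowed("10.0.1.0/28"))  # False: collides with an on-premises range
```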
Use Case 3: Discover VPC Peering and map existing VPC and subnets in GCP to internal ranges
With BloxOne DDI, you can configure a discovery job for GCP that syncs pre-existing VPCs and their subnets inside your GCP projects to IPAM. The GCP discovery job will also discover peering relationships between VPCs and routes to other external networks. Follow the step below in BloxOne CSP to create a discovery job for GCP.
Step 1: Create a discovery job in BloxOne CSP to discover compute and network resources in GCP. Enable “Manage Internal Ranges” to allow BloxOne DDI to discover existing internal range resources.
BloxOne DDI discovers existing VPCs and subnets inside those VPCs and does the following:
- Creates an IP Space for each VPC, using the naming convention <provider-name_vpc-id>
- Maps the subnets in each VPC to subnets in the IP Space created for that VPC
- Discovers existing internal range resource reservations and adds them to the asset inventory table
- Discovers VPC Network Peering and creates internal range reservations with the attributes FOR_VPC and FOR_PEER
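The naming convention in the first bullet can be expressed as a one-line helper; this is a hypothetical illustration of the pattern, not BloxOne's actual implementation.

```python
def ip_space_name(provider: str, vpc_id: str) -> str:
    # Hypothetical helper mirroring the <provider-name_vpc-id> convention.
    return f"{provider}_{vpc_id}"

print(ip_space_name("gcp", "vpc-a"))  # gcp_vpc-a
```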
With Infoblox IPAM and GCP internal ranges, BloxOne DDI offers:
- Centralized IPAM with the ability to reserve and allocate address spaces for hybrid, multi-cloud environments
- Enhanced visibility and control over Google Cloud infrastructure, including management of GCP networks and cross-cloud connectivity to other cloud networks and on-premises sites
- Automated creation of GCP internal range resources with usage and peering types to avoid IP address overlaps
Early Access Program
This offering is currently available to customers and partners through the Infoblox Early Access Program (EAP). In this program, participants receive “first looks” of new offerings and significantly influence the development and direction of solutions that will enhance the effectiveness of their business operations. Interested customers or partners should reach out to their Infoblox account team or complete this form: Infoblox Early Access Program.
Next Steps
To learn more about using Infoblox BloxOne DDI, see the BloxOne DDI documentation.
For more information on GCP internal range, see the GCP internal range blog.