GKE can mitigate IPv4 address depletion by using the Class E IPv4 address space. As more services and applications are hosted on Google Kubernetes Engine (GKE), demand for private IPv4 addresses (RFC 1918) keeps growing. Many large enterprises find it increasingly difficult to obtain RFC 1918 address space, making IP address depletion a real constraint on the scalability of their applications.
IPv6, with its vast address space, solves this depletion problem outright, but not all industries or applications are ready for IPv6 just yet. By expanding into the Class E IPv4 address space (240.0.0.0/4), you can keep growing your business in the meantime. Although Class E addresses (240.0.0.0/4) are reserved for future use per RFC 5735 and RFC 1112, as noted in Google Cloud's list of approved VPC IPv4 ranges, you can still use them in some circumstances today. This post also provides guidance on configuring and using GKE clusters with Class E addresses.
Identifying Class E IPv4 addresses
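Class E spans 240.0.0.0/4, which means any IPv4 address whose first octet is between 240 and 255. A minimal shell sketch of that check (the function name is ours, for illustration):

```shell
# Return success (0) if the given dotted-quad IPv4 address is in
# the Class E range 240.0.0.0/4, i.e. its first octet is 240-255.
is_class_e() {
  local first_octet=${1%%.*}   # strip everything after the first dot
  [ "$first_octet" -ge 240 ] && [ "$first_octet" -le 255 ]
}

is_class_e 240.10.0.1 && echo "Class E" || echo "not Class E"   # Class E
is_class_e 10.0.0.1   && echo "Class E" || echo "not Class E"   # not Class E
```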
The following are some common objections or misconceptions about using Class E addresses:
- Class E addresses are not compatible with other Google services. This is not accurate. Google Cloud VPC's approved IPv4 address ranges include Class E addresses, and many Google-managed services can be reached through private connectivity methods that work with Class E addresses.
- Using Class E addresses restricts communication with services outside Google, such as the internet, on-premises systems, and other clouds. This is also untrue. Although Class E addresses are not routable and are not advertised over the internet or outside Google Cloud, you can use NAT or IP masquerading to translate them to public or private IPv4 addresses in order to reach destinations outside Google Cloud. Additionally, several on-premises vendors (Cisco, Juniper, Arista) support routing these addresses for use in private data centers.
- Class E addresses are limited in scalability and performance. This is not accurate. They perform no differently from the other address ranges Google Cloud uses, and even with NAT or IP masquerading, the agents scale to support large numbers of connections without compromising performance.
Therefore, even though Class E addresses are reserved for future use and are neither routable nor advertised over the public internet, you can use them privately within Google Cloud VPCs, for both Compute Engine instances and Kubernetes pods and services in GKE.
Benefits of Class E IP addresses
Despite the caveats above, Class E addresses offer significant advantages:
Large address space: Class E offers a far larger pool of IP addresses than the regular RFC 1918 private ranges: roughly 268.4 million addresses in 240.0.0.0/4 versus roughly 17.9 million in RFC 1918. This abundance helps organizations that are running out of IP addresses grow their services and applications without worrying about exhausting their address space.
Growth and scalability: The large address space makes it simple to scale services and applications on Google Cloud and GKE. Even during periods of high growth, IP address limits do not stop you from building and expanding your infrastructure, which fosters innovation and development.
Efficient use of resources: Incorporating Class E addresses into your IP address allocation processes reduces the likelihood of address conflicts and makes better use of IP resources, leading to lower costs and more effective operations.
Future-proofing: Although some operating systems do not yet support Class E, its use is expected to increase as demand for IP addresses grows. By adopting Class E early, you can future-proof your infrastructure's scalability and support business growth for many years to come.
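The address counts cited above follow directly from prefix lengths: a /n prefix contains 2^(32−n) addresses. A quick shell arithmetic check:

```shell
# 240.0.0.0/4 contains 2^(32-4) = 2^28 addresses
echo $(( 1 << 28 ))                              # 268435456 (~268.4 million)

# RFC 1918 total: 10.0.0.0/8 + 172.16.0.0/12 + 192.168.0.0/16
echo $(( (1 << 24) + (1 << 20) + (1 << 16) ))    # 17891328 (~17.9 million)
```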
Class E IP addresses: things to consider
Although Class E IP addresses have several benefits, it's important to keep the following in mind:
Operating system compatibility: Not all operating systems currently support Class E addressing. Before adopting Class E, confirm that your chosen operating systems and tools are compatible.
Networking hardware and software: Verify that your routers and firewalls, as well as any third-party virtual appliance solutions running on Google Compute Engine, can handle Class E addresses. Ensure that all software and applications that consume IP addresses are updated to support them as well.
Migration and transition: Moving from RFC 1918 private addresses to Class E requires careful planning and execution to avoid disruptions.
How Snap implemented Class E
The growing use of microservices and containerization platforms like GKE, especially by large customers like Snap, makes network IP management increasingly challenging. With hundreds of thousands of pods deployed, Snap's limited supply of RFC 1918 private IPv4 addresses quickly ran out, blocking cluster growth and requiring significant manual effort to free addresses.
After initially considering an IPv6 migration, Snap decided to deploy dual-stack GKE nodes and pods (IPv6 + Class E IPv4) because of concerns about application compatibility and readiness. This approach not only avoided IP exhaustion but also gave Snap the number of IP addresses it needed to support growth for years to come while reducing overhead. Moreover, it aligned with Snap's long-term strategy of transitioning to IPv6.
New clusters
Requirements
Create VPC-native clusters.
Actions
- Optionally, create a subnetwork with secondary ranges for pods and services. The secondary ranges can use CIDRs from the Class E range (240.0.0.0/4).
- Create the cluster using the previously created secondary ranges for the pod and service CIDR ranges. This is an example of the user-managed secondary range assignment method.
- Configure IP masquerading so that pod traffic leaving the cluster is source network address translated (SNAT) to the IP address of the underlying node.
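The steps above can be sketched with gcloud. This is a minimal illustration, not a complete runbook; the network, subnet, cluster, region, and CIDR values are placeholders you would replace with your own:

```shell
# Create a subnet whose secondary ranges draw from Class E (240.0.0.0/4).
# The node (primary) range stays in RFC 1918 space.
gcloud compute networks subnets create SUBNET_NAME \
    --network=NETWORK_NAME \
    --region=REGION \
    --range=10.0.0.0/24 \
    --secondary-range=pods=240.10.0.0/16,services=240.11.0.0/24

# Create a VPC-native cluster that uses those user-managed secondary ranges.
gcloud container clusters create CLUSTER_NAME \
    --region=REGION \
    --network=NETWORK_NAME \
    --subnetwork=SUBNET_NAME \
    --enable-ip-alias \
    --cluster-secondary-range-name=pods \
    --services-secondary-range-name=services
```

For the SNAT step, the ip-masq-agent config lists destinations that should NOT be masqueraded; traffic to every other destination is SNATed to the node IP, so Class E pod addresses never leave Google Cloud.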
Migrating existing clusters
Requirements
Clusters must be VPC-native.
Actions
- The cluster's default pod IPv4 range cannot be changed. However, you can add additional pod ranges that use Class E CIDRs and apply them to new node pools.
- You can then migrate workloads from the older node pools to the new ones.
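A sketch of that migration path, assuming the multi-pod-CIDR flags shown (`--add-secondary-ranges`, `--pod-ipv4-range`) are available in your gcloud version; all names and CIDRs are placeholders:

```shell
# 1. Add a Class E secondary range to the cluster's existing subnet.
gcloud compute networks subnets update SUBNET_NAME \
    --region=REGION \
    --add-secondary-ranges=pods-class-e=240.10.0.0/16

# 2. Create a new node pool whose pods draw from the Class E range.
gcloud container node-pools create pool-class-e \
    --cluster=CLUSTER_NAME \
    --region=REGION \
    --pod-ipv4-range=pods-class-e

# 3. Cordon and drain the old node pool so workloads reschedule
#    onto the new pool.
kubectl cordon -l cloud.google.com/gke-nodepool=OLD_POOL_NAME
kubectl drain -l cloud.google.com/gke-nodepool=OLD_POOL_NAME \
    --ignore-daemonsets --delete-emptydir-data
```

Draining respects PodDisruptionBudgets, so a carefully staged drain keeps the migration disruption-free, as the planning note above recommends.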