One of the many challenges in building a cloud is your addressing strategy.

Now, addressing is an issue for any public-facing Internet application, but for clouds the issues multiply because of the dynamic nature of deploying applications that require publicly addressable endpoints at huge scale.
Let’s start by thinking about a traditional web application, whether large or small. At the early design phase, you can identify the front-end roles that will require public IP space. You can then start capacity planning: identify your peak load forecasts, work out how many front-end nodes you will need to support max load, then make the appropriate request and allocation. As the process for allocating IP space from the public range can take many months, you want to conclude it early in the design phase.
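The capacity planning step above is simple arithmetic. Here's a minimal sketch with entirely hypothetical numbers (the request rates and headroom factor are illustrative, not from any real system):

```python
import math

# Hypothetical capacity-planning inputs
peak_rps = 50_000      # forecast peak requests per second
per_node_rps = 2_000   # sustainable load per front-end node
headroom = 1.25        # 25% buffer for failover and traffic spikes

# Each front-end node needs one public IP, so this is also
# the size of your public IP request.
nodes = math.ceil(peak_rps * headroom / per_node_rps)
print(nodes)  # -> 32
```

The point is that for a traditional application this number is knowable up front, which is exactly what breaks down in the cloud case below.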
Second, there is the scarcity of resources. If you rely on IPv4, you're going to have to be very sparing in your use of public space, as IPv4 is not only scarce but expensive. IPv6, on the other hand, is abundant, simplifies your network strategy (you don't necessarily have to NAT addresses, for example), and is easy to acquire. On the flip side, you need to do more planning and up-front design to ensure your system, both hardware and software, can accommodate IPv6 addressing.
And now, finally, onto the cloud part. In a cloud environment, the tenant, that is, the application being deployed, requires a public IP address. They may have many back-end nodes, but for the most part these will sit behind a load-balanced front-end node, and therefore a public IP address. You could try to virtualize that whole layer and instead have a primary “head” IP address through which all traffic is routed, then split based on host headers, but that isn't going to scale very well. So instead, each customer should have a public IP address for each publicly exposed endpoint. But then you don't know how many customer-exposed endpoints you're going to have at design time, or even at deployment time, for that matter. You could have 100; you could have 1,000,000.
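The per-endpoint model can be sketched as a simple pool: each tenant endpoint gets its own address, and the pool empties at a rate you can't predict at design time. This is a hypothetical illustration (the class, names, and the tiny documentation range are all assumptions for the example):

```python
import ipaddress

class EndpointAllocator:
    """Hands out one public IP per tenant endpoint from a fixed pool.
    Purely illustrative -- a real allocator would persist state and
    handle release/reuse."""

    def __init__(self, cidr):
        # hosts() skips the network and broadcast addresses
        self.free = list(ipaddress.ip_network(cidr).hosts())
        self.assigned = {}

    def allocate(self, endpoint_id):
        if not self.free:
            # This is the moment the cloud operator fears:
            # a tenant deploy blocked on address space.
            raise RuntimeError("pool exhausted - request more space")
        ip = self.free.pop(0)
        self.assigned[endpoint_id] = ip
        return ip

alloc = EndpointAllocator("198.51.100.0/28")  # 14 usable addresses
print(alloc.allocate("tenant-1/web"))         # 198.51.100.1
```

With only 14 usable addresses in this toy pool, the fifteenth endpoint fails, which is exactly why the allocation strategy below matters.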
Using a strategy like just-in-time allocation, based on historical usage and prediction, is a good approach, but it requires a level of sophistication. Alternatively, you can acquire lots of space up front, but this can be costly and wasteful. Ideally you want a mixture of both: establish a small unit of allocation, something like a /20 block, then constantly measure the watermark and determine when you need to add more, based on prior usage patterns. This way you know how early you need to make the request, and you don't over-allocate.
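The watermark idea above can be sketched in a few lines. This is a minimal model under stated assumptions: /20 allocation units, a linear growth projection, an 80% high watermark, and a 90-day procurement lead time are all invented for illustration:

```python
def units_needed(in_use, growth_per_day, lead_time_days,
                 unit_prefix=20, high_watermark=0.8):
    """How many /20 units must be on hand so that projected demand
    at the end of the procurement lead time stays below the high
    watermark? All parameters are illustrative assumptions."""
    unit_size = 2 ** (32 - unit_prefix)  # 4096 addresses per /20
    # Naive linear projection of demand over the lead time;
    # a real system would fit to actual usage history.
    projected = in_use + growth_per_day * lead_time_days
    units = 0
    while units * unit_size * high_watermark < projected:
        units += 1
    return units

# 6,000 IPs in use, growing ~50/day, 90-day lead time to get more space
print(units_needed(in_use=6_000, growth_per_day=50, lead_time_days=90))  # -> 4
```

If you currently hold fewer units than this returns, the lead time says you should file the request now, which is the whole trick: the watermark plus the growth rate tells you *when* to ask, not just how much.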
Food for thought! :)