Internet Scale Computing (ISC)

More from a draft...

A Definition for Internet Scale Computing

ISC is a distributed, globally accessible fabric of resources for highly scalable and efficiently managed solutions.

  • Distributed, Globally Accessible – Location transparency is a critical capability for highly scalable, distributed solutions: resources can be discovered and used without regard to their physical location. A service could run within an enterprise, in a local datacenter or in a datacenter in another country. Location transparency does not make location irrelevant, however; a resource's physical location can still affect characteristics like latency, throughput, availability and user experience. Distributing resources avoids single points of failure and allows them to be moved or copied between datacenters for better scalability and user experience. In an ISC solution the Internet is the network, linking participants, datacenters and resources into a globally accessible network of networks. This global reach creates a network effect that benefits both owners and users of a given set of resources, a concept sometimes referred to as an economic externality: parties external to a given transaction receive benefits from it.
  • Fabric of Resources – An ISC solution isn't confined to a single application server or server farm; it can use resources hosted by datacenters all over the world. The result is a standards-based, distributed approach to building, hosting and using resources. Note that we have deliberately avoided the term services: a resource may be a service in the classical sense (a web service, a telecom service) or a more physical manifestation such as storage, processors or bandwidth.
  • Highly Scalable – ISC infrastructure scales elastically because it can evolve to meet user demand. An ISC solution incorporates knowledge of its context and its service level expectations (SLEs). If an identified set of events occurs (such as bandwidth dwindling to a critical level), the resources that compose the solution can be dynamically reconfigured, moved or replicated to meet the needs of its users. For example, in the last few shopping days of the Christmas season an e-commerce solution may require resources to grow dynamically to meet user demand. These resources may include additional instances of the web site, credit card services, additional bandwidth, storage or other resources, instantiated within the datacenter or deployed remotely to other datacenters. As the holiday season wanes, servers, storage and bandwidth are gradually released so that other solutions can take advantage of them. A minimal sketch of this kind of reactive scaling follows this list.
  • Efficiently Managed – One of the biggest challenges facing datacenters today is resource utilization, which typically runs at 15-20%, leaving expensive resources idle and unavailable to other solutions. For an ISC solution to scale dynamically, resource management is critical: ISC management recognizes when a given set of resources needs to be provisioned or decommissioned and dispatches the appropriate action. The resources being managed may reside at a macro level (servers, web services, bandwidth) or a micro level (memory, processors, disk space). At either level, managing resources requires visibility into their availability, performance and efficiency, from individual service and task performance up to compute clusters and utilization across multiple datacenters. Managing resources also means managing costs: virtual hosts and services are dynamically provisioned and decommissioned onto inexpensive commodity machines. Commodity machines in turn require that reliability be implemented higher up the stack, in software. Applications can no longer assume 100% hardware uptime and must handle failures far more gracefully than on the large-scale, shared-memory machines traditionally used for highly scalable infrastructures. Extending the responsibility for reliability into the software tier yields a more reliable and lower-cost infrastructure; the second sketch after this list illustrates this failover-in-software approach.
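
To make the scaling behavior above concrete, here is a minimal Python sketch of SLE-driven reactive scaling. Everything in it (the ScalingPolicy and ResourcePool names, the thresholds, the reconcile step) is an illustrative assumption of ours, not part of any real ISC API:

    from dataclasses import dataclass

    @dataclass
    class ScalingPolicy:
        """Service level expectations expressed as scaling thresholds (hypothetical)."""
        scale_up_at: float    # utilization above which we add capacity
        scale_down_at: float  # utilization below which we release capacity
        min_instances: int
        max_instances: int

    class ResourcePool:
        """A pool of interchangeable resource instances (servers, bandwidth units, ...)."""
        def __init__(self, name: str, instances: int, policy: ScalingPolicy):
            self.name = name
            self.instances = instances
            self.policy = policy

        def reconcile(self, utilization: float) -> None:
            """Reactively provision or release instances to meet the SLE."""
            p = self.policy
            if utilization > p.scale_up_at and self.instances < p.max_instances:
                self.instances += 1   # provision: locally or in a remote datacenter
            elif utilization < p.scale_down_at and self.instances > p.min_instances:
                self.instances -= 1   # release capacity for other solutions

    # An e-commerce web tier during the holiday rush (simulated load samples).
    web_tier = ResourcePool("web", instances=4,
                            policy=ScalingPolicy(0.8, 0.3, min_instances=2, max_instances=32))
    for load in (0.85, 0.90, 0.60, 0.20):
        web_tier.reconcile(load)
        print(web_tier.name, "instances:", web_tier.instances)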

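And here is the second sketch: reliability implemented in the software tier. Rather than assuming a host stays up, the caller retries a request across replica hosts and reports even total failure gracefully. The replica list and the fetch() stand-in are hypothetical:

    import random

    class HostFailure(Exception):
        """Raised when a commodity host fails to serve a request."""

    def fetch(host: str, key: str) -> str:
        """Stand-in for a call to one replica; commodity hosts fail now and then."""
        if random.random() < 0.2:  # simulate a transient hardware fault
            raise HostFailure(host)
        return f"{key}@{host}"

    def reliable_fetch(replicas: list[str], key: str) -> str:
        """Try each replica in turn instead of assuming 100% hardware uptime."""
        last_error = None
        for host in replicas:
            try:
                return fetch(host, key)
            except HostFailure as err:
                last_error = err  # tolerate the fault and move to the next replica
        # Even total failure is reported gracefully rather than crashing the caller.
        raise RuntimeError(f"all replicas failed for {key}") from last_error

    print(reliable_fetch(["host-a", "host-b", "host-c"], "order:1234"))
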
The ISC Event Cycle

From the datacenter perspective, Utility Computing promises higher utilization, improved application management and greater flexibility to satisfy demand. It achieves this by abstracting away the physical layout of the datacenter and creating a logical partitioning of resources that can be dynamically allocated where needed. Figure 2 illustrates the flow of events as they are generated from within the datacenter (bottom center): resources are continually monitored, the resulting events are analyzed, and resources are allocated or de-allocated in real time as needed.
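
The loop below sketches that monitor/analyze/act cycle in Python. The Event shape, the analyze() rules and the utilization thresholds are illustrative assumptions rather than a real datacenter API:

    from dataclasses import dataclass
    from typing import Iterable, Optional

    @dataclass
    class Event:
        resource: str  # e.g. "web-7" or "san-2"
        metric: str    # e.g. "cpu" or "bandwidth"
        value: float   # normalized utilization, 0..1

    def analyze(event: Event) -> Optional[str]:
        """Turn a raw monitoring event into an allocation decision."""
        if event.value > 0.85:
            return "allocate"     # add capacity behind this resource
        if event.value < 0.20:
            return "de-allocate"  # release idle capacity for other solutions
        return None               # within tolerance: no action needed

    def event_cycle(events: Iterable[Event]) -> None:
        for event in events:      # in a real datacenter this loop never ends
            action = analyze(event)
            if action:
                print(f"{action}: {event.resource} ({event.metric}={event.value:.2f})")

    event_cycle([Event("web-7", "cpu", 0.91), Event("san-2", "iops", 0.12)])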

[Figure 2: the ISC event cycle]

Datacenter resources are monitored at multiple levels. The resources themselves may be physical (hardware), virtual (servers) or any combination of the two (virtual network topologies). Monitoring spans a similar range: individual resources are tracked largely for health and availability, while complete solutions are tracked largely against their service level agreements (SLAs). These concepts and others are discussed in greater detail elsewhere in this paper.
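
A small sketch of these two monitoring levels, with per-resource health rolling up into a solution-level SLA view (all names and numbers here are illustrative):

    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        healthy: bool      # individual level: health and availability
        latency_ms: float

    @dataclass
    class Solution:
        name: str
        resources: list[Resource]
        sla_latency_ms: float  # solution level: the agreed service level

        def sla_met(self) -> bool:
            """Roll per-resource health up into a solution-level SLA view."""
            if not all(r.healthy for r in self.resources):
                return False  # an unavailable resource breaches the SLA outright
            worst = max((r.latency_ms for r in self.resources), default=0.0)
            return worst <= self.sla_latency_ms

    storefront = Solution("storefront",
                          [Resource("web-1", True, 45.0), Resource("db-1", True, 120.0)],
                          sla_latency_ms=150.0)
    print(storefront.name, "SLA met:", storefront.sla_met())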

While the event cycle illustrated above is largely reactive, storing datacenter events enables the discovery of longer-term usage patterns (complex events) and thus opportunistic, proactive forecasting. Opportunistic forecasting lets ISC solutions provision resources in anticipation of expected demand, based on identified usage patterns. For example, an online retailer may define a time-based usage pattern that allocates additional resources in advance of popular shopping holidays like Christmas. Usage patterns can likewise drive the gradual de-allocation of resources as demand declines. Because resources are managed proactively from both real-time and historical usage data, opportunistic forecasting yields higher datacenter utilization rates.
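
As a rough sketch, a hand-defined, time-based usage pattern like the one in the retailer example might look like this in Python (the calendar rule and instance counts are assumptions for illustration):

    from datetime import date

    def holiday_pattern(day: date) -> int:
        """Extra instances to hold ready, keyed off historical shopping peaks."""
        christmas = date(day.year, 12, 25)
        days_out = (christmas - day).days
        if 0 <= days_out <= 10:
            return 16  # peak window: provision ahead of expected demand
        if 10 < days_out <= 30:
            return 4   # ramp up gradually as the season approaches
        return 0       # baseline the rest of the year (gradual de-allocation)

    baseline = 4
    for day in (date(2024, 11, 1), date(2024, 12, 5), date(2024, 12, 20)):
        print(day, "target instances:", baseline + holiday_pattern(day))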
