Tuesday, July 29, 2014
IBM, ACS And AT&T Claim Breakthrough In Elastic Cloud-to-Cloud Networking
Scientists from AT&T, IBM and Applied Communication Sciences (ACS) announced a proof-of-concept technology that reduces setup times for cloud-to-cloud connectivity from days to seconds.

This advance is a step toward sub-second provisioning times with IP and next-generation optical networking equipment. It enables elastic bandwidth between clouds at high connection-request rates using intelligent cloud data center orchestrators, instead of requiring static provisioning for peak demand.

The prototype was built by AT&T, IBM and ACS under the auspices of the U.S. Government’s DARPA CORONET program, which focuses on rapid reconfiguration of terabit networks.

"The program was visionary in anticipating the convergence of cloud computing and networking, and in setting aggressive requirements for network performance in support of cloud services" said Ann Von Lehmen, the ACS program lead.

AT&T was responsible for developing the overall networking architecture for this concept, drawing on its expertise in bandwidth-on-demand (BoD) technologies and routing concepts. IBM provided the cloud platform and intelligent cloud data center orchestration technologies to support dynamic provisioning of cloud-to-cloud communications. ACS contributed its expertise in network management and optical-layer routing and signaling as part of the overall cloud networking architecture.

Today’s cloud computing model is built on the premise of automation and lower operational costs, which requires dynamic provisioning of resources. The traditional cloud-to-cloud network, however, is static, and creating it is labor-intensive, expensive and time-consuming.

In response to the rapid advent of cloud-based services and the explosion in data center size and scope, Cloud Service Providers (CSPs) have installed automatic and intelligent resource management systems within their data centers. For example, these systems can load balance both processor and storage resources, as well as perform massive transfers of data among multiple data centers.

This prototype was implemented on OpenStack, an open-source cloud-computing platform for public and private clouds, elastically provisioning WAN connectivity and placing virtual machines between two clouds for the purpose of load balancing virtual network functions. The use of flexible, on-demand bandwidth for cloud applications, such as load balancing, remote data center backup operation, and elastic scaling of workload, provides the potential for major cost savings and operational efficiency for both CSPs and carriers.
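To see where those savings come from, compare statically provisioning a cloud-to-cloud link for peak demand with requesting bandwidth only while a workload actually needs it. The short sketch below uses purely illustrative numbers (a 100 Gb/s link needed two hours a day) that are not taken from the announcement; it only makes the static-versus-elastic comparison concrete.

```python
# Illustrative comparison of static peak provisioning vs. elastic
# bandwidth-on-demand for a cloud-to-cloud link.
# The figures below are assumptions for illustration only; they do not
# come from the AT&T/IBM/ACS announcement.

PEAK_RATE_GBPS = 100        # rate required while a bulk transfer runs (assumed)
BUSY_HOURS_PER_DAY = 2      # hours per day the transfer actually runs (assumed)

static_capacity_hours = PEAK_RATE_GBPS * 24                    # held around the clock
elastic_capacity_hours = PEAK_RATE_GBPS * BUSY_HOURS_PER_DAY   # held only while used

utilization = elastic_capacity_hours / static_capacity_hours
print(f"static capacity-hours per day:  {static_capacity_hours}")
print(f"elastic capacity-hours per day: {elastic_capacity_hours}")
print(f"share of static capacity actually used: {utilization:.0%}")  # ~8%
```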

In the demonstration, the IBM cloud platform and orchestration technology manages the life cycle of Virtual Machine (VM) network applications on OpenStack, automatically monitoring server load and requesting cloud-to-cloud network bandwidth from an SDN WAN Orchestrator developed by AT&T, along with compute resources, as needed for VM migration.
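A minimal sketch of the kind of control loop described above, in Python. The endpoint, class of interface (REST), thresholds and helper functions are assumptions for illustration; the actual IBM orchestrator and AT&T SDN WAN Orchestrator interfaces are not published in this article.

```python
# Sketch of the monitor -> request bandwidth -> migrate -> release cycle.
# All names, endpoints and thresholds are hypothetical stand-ins.
import random
import time
import requests  # assumes a REST-style orchestrator interface (an assumption)

WAN_ORCHESTRATOR = "http://sdn-wan-orchestrator.example/connections"  # hypothetical
LOAD_THRESHOLD = 0.85   # migrate when local CPU load exceeds this (assumed value)
MIGRATION_GBPS = 10     # bandwidth requested for the VM transfer (assumed value)


def local_cloud_load() -> float:
    """Stand-in for the orchestrator's server-load monitoring."""
    return random.random()  # replace with real telemetry


def migrate_vm(vm_id: str, target_cloud: str) -> None:
    """Stand-in for OpenStack-driven placement of the VM on the remote cloud."""
    print(f"migrating {vm_id} to {target_cloud}")


def balance(vm_id: str, target_cloud: str) -> None:
    while True:
        if local_cloud_load() > LOAD_THRESHOLD:
            # 1. Ask the WAN orchestrator for an elastic cloud-to-cloud circuit.
            circuit = requests.post(WAN_ORCHESTRATOR, json={
                "destination": target_cloud, "gbps": MIGRATION_GBPS}).json()
            try:
                # 2. Move the overloaded workload across the provisioned link.
                migrate_vm(vm_id, target_cloud)
            finally:
                # 3. Tear the circuit down so bandwidth is held only while needed.
                requests.delete(f"{WAN_ORCHESTRATOR}/{circuit['id']}")
        time.sleep(30)  # polling interval (assumed)
```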

The AT&T SDN WAN Orchestrator automatically routes data server connection requests across the appropriate network layer: IP/MPLS, subwavelength or Dense Wavelength Division Multiplexing (DWDM). Provisioning protocols developed by ACS are integrated with commercial transport DWDM network elements to set up and tear down connections as needed.
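The article does not spell out how the orchestrator chooses a layer, but the decision it describes (IP/MPLS versus subwavelength versus a full DWDM wavelength) can be pictured as a simple policy keyed to the requested rate. The cutoffs below are assumptions for illustration, not AT&T's actual policy.

```python
# Hypothetical layer-selection policy for a cloud-to-cloud connection request.
# The bandwidth cutoffs are illustrative assumptions only.

def select_layer(requested_gbps: float) -> str:
    """Map a requested bandwidth to a transport layer (sketch)."""
    if requested_gbps < 1:
        return "IP/MPLS"          # small flows stay at the packet layer
    if requested_gbps < 40:
        return "subwavelength"    # groomed share of an optical channel
    return "DWDM wavelength"      # dedicate a full wavelength


for rate in (0.2, 10, 100):
    print(rate, "Gb/s ->", select_layer(rate))
```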

In the demo, setup times as short as 40 seconds were achieved, with sub-second provisioning times possible with next-generation DWDM equipment (reconfigurable optical add-drop multiplexers, or ROADMs). This approach also takes BoD into a truly dynamic regime by enabling the high connection-request rates that will be required in future cloud service environments.

In related news, in support of the updated Climate Data Initiative announced by the White House today, IBM will provide eligible scientists studying climate change-related issues with free access to dedicated virtual supercomputing and a platform to engage the public in their research.

Each approved project will have access to up to 100,000 years of computing time at a value of $60 million. The work will be performed on IBM’s philanthropic World Community Grid platform.

Created and managed by IBM, World Community Grid provides computing power to scientists by harnessing the unused cycle time of volunteers' computers and mobile devices. Participants get involved by downloading software that runs when they take breaks or work on lightweight computer tasks, such as browsing the internet. The software receives, completes, and returns small computational assignments to scientists. The combined power contributed by hundreds of thousands of volunteers has created one of the fastest virtual supercomputers on the planet, advancing scientific work by hundreds of years.
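For readers unfamiliar with volunteer computing, the sketch below illustrates the receive, complete, return cycle described above. It is a simplified illustration only; the endpoint, message format and idle detection are placeholders, not the grid's actual client software or protocol.

```python
# Simplified volunteer-computing cycle: fetch a small work unit, compute it
# while the machine is otherwise idle, return the result.
# Endpoint and payloads are placeholders, not World Community Grid's protocol.
import time
import requests

GRID_SERVER = "https://grid.example/workunits"  # placeholder URL


def machine_is_idle() -> bool:
    """Stand-in for the client's idle / light-use detection."""
    return True


def compute(work_unit: dict) -> dict:
    """Stand-in for the science application bundled with the work unit."""
    return {"id": work_unit["id"], "result": "..."}


def run_client() -> None:
    while True:
        if machine_is_idle():
            unit = requests.get(GRID_SERVER).json()   # receive a small assignment
            result = compute(unit)                    # complete it locally
            requests.post(GRID_SERVER, json=result)   # return it to the scientists
        time.sleep(60)  # then wait before asking for more work
```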

Through the contributions of hundreds of thousands of volunteers, World Community Grid has already provided sustainability researchers with many millions of dollars of computing power to date, enabling important advances in scientific inquiry and understanding.

 