Cloud Computing Resources Directory

Azure Scalability: Use “Queues” as Your Bridges
by Anshulee Asthana
Windows Azure Queue allows decoupling of the different parts of a cloud application, enabling cloud applications to be built with different technologies and to scale with traffic needs. This description contains some very important keywords: decoupling, different technologies, scale with traffic needs. All are essential to a scalable and extensible application. Let's see how queues help in each of these aspects.
read the full story >>
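The decoupling the summary describes can be sketched in a few lines. In Azure, the front end and the worker would be separate roles (often on separate machines) communicating through an Azure Storage Queue; in this runnable sketch a standard-library `queue.Queue` stands in for the storage queue, and the role names and order IDs are illustrative only.

```python
import queue
import threading

tasks = queue.Queue()   # the "bridge" between the front end and the back end
results = []

def web_role(order_ids):
    """Front end: accept requests quickly by just enqueuing work."""
    for order_id in order_ids:
        tasks.put(order_id)

def worker_role():
    """Back end: drain the queue at its own pace; scale by adding workers."""
    while True:
        order_id = tasks.get()
        if order_id is None:        # sentinel: no more work
            break
        results.append(f"processed order {order_id}")
        tasks.task_done()

worker = threading.Thread(target=worker_role)
worker.start()
web_role([101, 102, 103])
tasks.put(None)                     # signal shutdown
worker.join()
print(results)
```

Because neither side calls the other directly, the front end stays responsive under bursts, and the back end can be rewritten in a different technology or scaled out simply by adding more consumers of the same queue.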
Tracking System Solutions' Decision to Deploy on Virsto vDisks
by Eric Burgener
Because storage makes up such a large part of the costs of any cloud-based infrastructure, it is an obvious place to look for cost reductions. System Solutions, Inc. (SSI), a King of Prussia, Pa.-based IT solutions provider, took that advice to heart in building MySecureCloud, a hosted infrastructure solution targeted at small and medium businesses (SMBs).
read the full story >>
Is the Cloud Broken?
by Vince Vasquez
The underlying architecture chosen by first-generation cloud providers borrows heavily from the server consolidation and virtualization era. The major difference is that the pools of compute cores are now being deployed as public-facing services, versus running the previous generation of siloed applications. Unfortunately, the resulting inefficiencies prevent full realization of the cloud promise of truly low-cost, high-performing, elastic computing. In response, Joyent has emerged as a second generation provider, rethinking fundamental cloud architecture, and ushering in the era of what they call “Smart Computing.”
read the full story >>
A Flexible and Interoperable Cloud Operating System
by Ignacio M Llorente
Future enterprise data centers will look like private clouds, supporting flexible and agile execution of virtualized services and combining local with public cloud-based infrastructure to enable highly scalable hosting environments. The key component in these cloud architectures will be the cloud management system, also called the cloud operating system (OS), responsible for the secure, efficient, and scalable management of cloud resources. Cloud OSes are displacing "traditional" OSes, which will become part of the application stack.
read the full story >>
Enable CloudFront for Your Application’s Non-Dynamic Content
Ofir Nachmani - Chief Evangelist at Newvem Insights Ltd.

CloudFront is Amazon’s Content Delivery Network, a service that aims to speed up delivery of content to users in different geographies. It gives developers access to a worldwide infrastructure that minimizes latency by serving content from the edge location closest to the end user. This article describes two basic use cases utilizing a CDN for non-dynamic content in a typical web application, and provides CloudFront-specific configuration examples.
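The first and simplest use case is serving static assets from the CDN domain instead of the origin web server. The sketch below shows the URL-rewriting side of that setup; the distribution domain is a made-up placeholder (yours is assigned when you create the distribution, or mapped via a CNAME such as `cdn.example.com`), and the path prefixes are assumptions about a typical web app layout.

```python
CDN_HOST = "d1234abcd.cloudfront.net"   # hypothetical CloudFront distribution domain
STATIC_PREFIXES = ("/static/", "/images/", "/css/", "/js/")

def cdn_url(path: str) -> str:
    """Rewrite a static-asset path to its CDN URL; leave dynamic paths alone."""
    if path.startswith(STATIC_PREFIXES):
        return f"https://{CDN_HOST}{path}"
    return path

print(cdn_url("/images/logo.png"))   # served from the nearest edge location
print(cdn_url("/checkout"))          # dynamic, stays on the origin server
```

Keeping the rewrite in one helper means switching distributions, or turning the CDN off in development, is a one-line change.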


Infographic: Demystifying Amazon Web Services
Ofir Nachmani - Founder and Author at I Am OnDemand blog

Amazon Web Services (AWS) is the biggest public cloud around, yet what goes on behind the scenes remains a mystery. For heavy users, such as enterprise-level CIOs, AWS's "Reserved Instances" are a cost-effective model for scaling their cloud activity and benefiting from the full service offering that Amazon provides.


Prepare for the next cloud outage: Analyze and Improve
Ofir Nachmani - Founder and Author at I Am OnDemand blog

It happened again… this was the second AWS outage in the same month. Did you fail to protect your online service? Don't forget – you can't pass your liability on to your IaaS vendor. You can find a wealth of knowledge resources on AWS high-availability architectures in Newvem's resources center, from Best Practice for High Availability Deployment all the way to maintaining availability for your specific environment, such as how to maintain failover for an MSSQL DB server, or a case study on how to replicate a PostgreSQL DB between AWS regions.
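The point about liability can be made concrete: it is the application, not the IaaS vendor, that decides what happens when a region goes down. The sketch below shows the shape of a cross-region failover; the endpoint URLs are hypothetical stand-ins, and `fetch` simulates an HTTP call that fails in the primary region.

```python
ENDPOINTS = [
    "https://service.us-east-1.example.com",   # primary region
    "https://service.eu-west-1.example.com",   # failover region
]

def fetch(endpoint: str) -> str:
    """Stand-in for a real HTTP request; raises when the region is down."""
    if "us-east-1" in endpoint:
        raise ConnectionError("region outage")
    return f"200 OK from {endpoint}"

def fetch_with_failover(endpoints):
    """Try each region in order; re-raise only if every region fails."""
    last_error = None
    for endpoint in endpoints:
        try:
            return fetch(endpoint)
        except ConnectionError as exc:
            last_error = exc       # remember the failure, try the next region
    raise last_error

print(fetch_with_failover(ENDPOINTS))
```

In a real deployment the same idea applies at a larger scale: replicated data stores, health checks, and DNS or load-balancer failover between regions, all owned by you rather than the vendor.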


Upgrading IBM Service Delivery Manager (ISDM) 7.2.1 to 7.2.2: An overview
Aditya Thatte - Software Engineer at IBM Research

An overview of the IBM Service Delivery Manager Cloud platform upgrade process.


Virtualization and its Advancement
Mitesh Soni - Research Engineer at iGATE Patni

Virtualization has been investigated since the early days of computing. In the 1960s, time-sharing systems were pursued as an alternative to batch processing systems, and hardware architectures for providing virtualized memory and privileged software execution were developed. Virtualization emerged as a means to more fully utilize hardware resources and facilitate time-sharing systems.


Exploring Cloud Deployment Models in IBM Workload Deployer
Dustin Amrhein - Technical Evangelist, WebSphere Emerging Technologies at IBM

One of the fundamental tenets of IBM Workload Deployer is a choice of cloud computing deployment models. Starting in v3.0, users will be able to deploy to the cloud using virtual appliances (OVA files), virtual system patterns, or virtual application patterns. The ability to provision plain virtual appliances is a way to rapidly bring your own images, as they currently exist, into the provisioning realm of the appliance. As such, I think the use cases and basis for deciding to use this deployment model are fairly evident. However, when comparing the two patterns-based approaches, virtual system patterns and virtual application patterns, the decision requires a bit more scrutiny.


Autonomic Cloud Management Approaches
Dustin Amrhein - Technical Evangelist, WebSphere Emerging Technologies at IBM

The platform services segment of cloud is multi-faceted… to say the least. Lately, likely spurred on by announcements like IBM Workload Deployer and VMware Cloud Foundry, I have been thinking quite a bit about one of those facets: environment management. To be clear, I’m not talking about management tools for end-users, though that topic is worthy of many discussions. Rather, I’m talking about the autonomic management capabilities for deployed environments.


Cloud Computing - Design Considerations PDF
Tinniam V Ganesh - Founder & Owner at INWARDi Technologies

This article discusses key considerations while designing for the cloud.


A view from the clouds: Cloud computing for the WebSphere developer
Dustin Amrhein - Technical Evangelist, WebSphere Emerging Technologies at IBM

I hate sitting on secrets. I always have. I understand that sometimes it's in the best interest of everyone (and your job) to keep tight lips, but that does not make it any more fun. Inevitably, the run-up to our annual Impact conference means everyone in the lab is doing their fair share of secret keeping -- just waiting for announce time. For a lot of us, that day ended Tuesday with the announcement of the IBM Workload Deployer v3.0.


Private Cloud: Elastic, or a hair ball of rubber?
Brad Vaughan - Director of Cloud Migration Services at The Armada Group

Is your private cloud elastic, or just a tightly wrapped ball of rubber? In the war between private and public cloud, people question whether you can actually unleash the elasticity from that ball of rubber bands. This post describes how to evaluate elasticity in private cloud.


A Reference Architecture For Cloud Computing
Dustin Amrhein - Technical Evangelist, WebSphere Emerging Technologies at IBM

Admittedly, when I was heads-down in code earlier in my career, I did not pay much attention to reference architectures. We had our own internal architectures that served as 'the way and the truth', and reference architectures for our product or solution domain were simply out of scope. Besides, reference architectures are, by design, not detailed enough to steer someone implementing any one of the hundreds of components that fall under such an architecture. So, for the most part I ignored them, even though I could hear rumblings coming from rooms full of folks arguing over revision 25 of the reference architecture for some problem domain or another.


IaaS Builder's Guide - Network Edition PDF
Randy Bias - Founder at Cloudscaling

This new technical whitepaper is a follow-on to Cloudscaling's IaaS Builder's Guide and talks at an architectural level about building scalable networking for infrastructure clouds. Infrastructure clouds are complex and challenging engineering problems; covering the topic in detail would take years and several books, while best practices and the state of the art advance apace. This 30-page technical piece goes into some of the details while remaining broad enough to share information and foment discussion.


The IBM Image Construction and Composition Tool
Dustin Amrhein - Technical Evangelist, WebSphere Emerging Technologies at IBM

In a recent post, I wrote about the importance of well-designed, well-constructed virtual images. To be clear, I am not promoting elegant virtual image design for the sake of art. Rather, if we can improve the state of the art in virtual image design and construction, there is a chance to significantly reduce image sprawl typical to many organizations today. Reducing virtual image sprawl will go a long way in reducing the amount of time and resources organizations dedicate to managing their image inventory.


Pervasive DataRush and the Cloud (Part 2 of 2)
Robin Bloor - Founder at Bloor Research

A continuation of the conversation between David Inbar, Pervasive's Director of Strategic Marketing Development, and Robin Bloor on the implications of the new DataRush 5 parallel processing development platform. The energy consumption and green IT conversation continues, along with DataRush's support for more programming languages and what that means in terms of cloud computing requirements.


The Advent of Pervasive DataRush 5 (Part 1 of 2)
Robin Bloor - Founder at Bloor Research

The following is the transcript of a conversation between David Inbar, Dir. Strategic Market Dev. at Pervasive and Robin Bloor focusing on the new version 5 release of DataRush. David and I discussed what was truly special about this release, and what the implications of it are for the industry, cluster and grid computing, energy consumption headaches, the cloud, and big data processing.


Cloud Storage and Higher Education
Joey Widener - Sr. Product Evangelist at AT&T Hosting & Cloud Services

See one of my use cases for cloud services in Higher Education.


2011: The Convergence of Clouds and Virtual Machines
Shai Fultheim - Co-Founder & CTO at ScaleMP

The convergence of clouds and virtual machines (VMs) will become an increasingly prevalent conversation in 2011 and we will see the concept of server aggregation marry cloud computing quite nicely.


Cloud-Based Disaster Recovery PDF
Richard Dolewski - CTO & VP of Business Continuity Services at WTS

Many corporate IT departments have used tape technology for data backup and recovery for years. It may be time for a change. Terms such as backup service, cloud backup and data vaulting describe the process of electronically sending data off-site, where it can be protected from hardware failure, unplanned downtime, loss, and other systems risks. In today’s business environment, continuous access to corporate data is a vital step to supporting your external customers and internal users. Gaining access to and providing availability of this data is crucial to support your business continuity plan in the event of a disaster.


Grid, Cloud, HPC … What's the Difference?
Randy Bias - Founder at Cloudscaling

Some view grid as a precursor to cloud, while others view it as a different beast only tangentially related. This really comes down to a particular TLA used to describe grid: High Performance Computing, or HPC. HPC and grid are commonly used interchangeably. Cloud is not HPC, although it can now certainly support some HPC workloads, as Amazon's EC2 HPC offering shows. No, cloud is something a bit different: High Scalability Computing, or HSC.


GridGuide
Company Profile: European Grid Initiative (EGI)

The GridGuide from GridTalk shows the human face of grid computing. The GridGuide website encourages visitors to explore an interactive map of the world, visiting a sample of the thousands of scientific institutes involved in grid computing projects. Sites from more than 20 countries appear on the GridGuide, offering insider snippets on everything from research goals and grid projects to the best place to eat lunch and the pros and cons of working in grid computing.


Contributions Results for Developers: Cloud Infrastructure


How to build a Server in the Time it Takes to Get a Latte
Edward Wustenhoff - CTSO at Burstorm
Company Profile: Burstorm

Burstorm CTSO Edward Wustenhoff demonstrates how easily and quickly one can start a blog server on GoGrid.


Smart Computing Overview
Company Profile: Joyent, Inc

The former VP of Marketing, Adrian Ludwig, provides an overview of what Joyent calls Smart Computing. He begins by discussing traditional and virtual architectures, then moves on to discuss SmartMachines, SmartDataCenters and SmartPlatforms.


Smart Computing Chalk Talk
Company Profile: Joyent, Inc

The former VP of Marketing, Adrian Ludwig, discusses Smart Computing in more detail by use of a chalk talk. He begins by discussing traditional and virtual architectures, then dives in more deeply into what Smart Computing is all about.


How To Launch an Amazon EC2 Instance
Edward Wustenhoff - CTSO at Burstorm
Company Profile: Burstorm

Burstorm CTSO demonstrates how you can start an instance on Amazon EC2 in about the time it takes to order a latte.


Cloudscape 2: Advances in eInfrastructure
Ignacio M Llorente - Full Professor & Head of Research Group at Complutense University of Madrid

The key players at the Cloudscape 2 event in Brussels, Belgium, talk about the Advances in eInfrastructure. They discuss the Digital Agenda for Europe, The Future of Cloud Computing, Barriers for Government, Visibility and Control of Where Data Goes, Providing a Legal Framework, Interoperability, Innovation, Revolutionizing Science, Complementary Grids and Clouds, and More.


Infinispan and the Future of Open Source Data Grids
Manik Surtani - Founder & Project Lead at JBoss Cache, Infinispan Data Grid

In this talk, Infinispan founder and JBoss Cache project lead Manik Surtani introduces the role of data grids in today's cloud computing environment. The extreme scalability offered by data grids is powering the greatest and most high-profile of today's applications, and data grids take on a far more prominent role in cloud deployments as traditional databases hit scalability and resilience issues. Surtani introduces Infinispan, the new open source data grid platform, and discusses its motivations and evolution as a project. Distributed data structures and the use of data grids as a viable, cloud-ready data storage mechanism are discussed in depth.


An Introduction to the Open Cloud Principles (OCP)
Sam Johnston - Founder & CTO at Australian Online Solutions

An Introduction to the Open Cloud Principles (OCP)


Productivity & Agility in the Cloud
Michael Crandell - CEO & Founder at Rightscale, Inc

Michael Crandell, CEO and Founder of Rightscale, explains how to achieve greater productivity and agility in the cloud through the RightScale Cloud Computing Platform.


GoGrid's CEO Interviewed at VMworld
John Keagy - CEO at GoGrid Cloud Hosting

The CEO and Co-Founder of GoGrid, John Keagy, offers some insights and commentary on Cloud Computing, his company, VMWare, and other partners and products.


Create a VM in a minute for $1-a-day hosted in the cloud on VMware
Simon West - Chief Marketing Officer at Terremark Worldwide, Inc

Terremark showed off their new vCloud Express package today at the VMworld keynote: a web-based service to create and host virtual machines on VMware vSphere at a very cost-effective price.


Contributions Results for Developers: Cloud Infrastructure


Joyent Smart Architecture for Cloud Computing
Company Profile: Joyent, Inc

This paper examines the broad architectural differences in cloud computing products, the drawbacks to more generic approaches in cloud delivery, and the Joyent philosophy of constructing cloud computing infrastructures. The paper then describes the Joyent Smart Technologies cloud architecture from server and operating system through data center and software development platform.


Joyent Smart Technology Datasheet
Company Profile: Joyent, Inc

A two-page overview of the Joyent Smart Technology stack providing information on SmartMachines, SmartDataCenter, and SmartPlatform.


Joyent Cloud Hosting Services Datasheet
Company Profile: Joyent, Inc

A two-page overview of the Joyent hosting services with pricing information, background on our physical data centers, and a comparison with Amazon EC2.


The Future of Cloud Computing: Opportunities for European Cloud Computing Beyond 2010 PDF
Ignacio M Llorente - Full Professor & Head of Research Group at Complutense University of Madrid
Company Profile: Seventh Framework Programme

This document provides a detailed analysis of Europe's position with respect to cloud provisioning, and how this affects in particular future research and development in this area. The report is based on a series of workshops involving experts from different areas related to cloud technologies. In more detail, the identified opportunities are: (1) provisioning and further development of cloud infrastructures, where in particular telecommunication companies are expected to provide offerings; (2) provisioning and advancing cloud platforms, which the telecommunication industry might see as a business opportunity, as might large IT companies with business in Europe and even large non-IT businesses with hardware not fully utilized; (3) enhanced service provisioning and development of meta-services: Europe could and should develop a 'free market for IT services' to match those for movement of goods, services, capital, and skills, and again the telecommunication industry could supplement its services as ISPs with extended cloud capabilities; (4) provision of consultancy to assist businesses to migrate to, and utilize effectively, clouds, which also implies provision of a toolset to assist in analysis and migration.


Infrastructure-as-a-Service Builder's Guide v1.0
Randy Bias - Founder at Cloudscaling

This paper is targeted at anyone building public or private clouds who want to understand clouds, cloud computing, and Infrastructure-as-a-Service. It highlights some of the important areas to think about when planning and designing your infrastructure cloud.


CoSL: A Coordinated Statistical Learning Approach to Measuring the Capacity of Multi-tier Websites PDF
Company Profile: Wayne State University

By Jia Rao and Cheng-Zhong Xu. Abstract: Website capacity determination is crucial to measurement-based access control, because it determines when to turn away excess client requests so as to guarantee consistent service quality under overloaded conditions. Conventional capacity measurement approaches based on high-level performance metrics like response time and throughput may result in either resource over-provisioning or lack of responsiveness, because a website may have different capacities, in terms of maximum concurrency level, as the characteristics of the workload change. Moreover, the bottleneck in a multi-tier website may shift among tiers as client access patterns change. In this paper, we present an online, robust measurement approach based on statistical machine learning techniques. It uses a Bayesian network to correlate low-level instrumentation data, such as system and user CPU time, available memory size, and I/O status collected at run time, with high-level system states in each tier. A decision tree is induced over a group of coordinated Bayesian models in different tiers to identify the bottleneck dynamically when the system is overloaded. Experimental results demonstrate its accuracy and robustness under different traffic loads.


Virtual Infrastructure Management in Private and Hybrid Clouds PDF
Ignacio M Llorente - Full Professor & Head of Research Group at Complutense University of Madrid
Borja Sotomayor - Haizea Developer, Research Assistant & Lecturer at University of Chicago

By Borja Sotomayor, Ruben S Montero, Ignacio M Llorente, and Ian Foster. One of the many definitions of "cloud" is that of an infrastructure-as-a-service (IaaS) system, in which IT infrastructure is deployed in a provider's data center as virtual machines. With IaaS clouds' growing popularity, tools and technologies are emerging that can transform an organization's existing infrastructure into a private or hybrid cloud. OpenNebula is an open source, virtual infrastructure manager that deploys virtualized services on both a local pool of resources and external IaaS clouds. Haizea, a resource lease manager, can act as a scheduling back end for OpenNebula, providing features not found in other cloud software or virtualization-based data center management software.


Prioritized Concerns for Building Cloud Solutions PDF
Jason Carolan - Director of Cloud Solution Development at VMware, Inc
John Stanford - Cloud Solutions Architect at VMware, Inc.

The following paper details some of the areas necessary for the successful building and adoption of cloud infrastructure today. In many cases, they are imperatives that have yet to be solved. Central and obvious to cloud environments are the APIs that control the environments, the tools built to support them, and the virtualization, billing, and utility infrastructure. There are many clouds. Clouds should and can talk to other clouds, either via orchestration above the cloud or (eventually) by native capability within the cloud. But there are several aspects that are not so obvious: security, resource management, protocols, and integration between traditional IT environments and a new "cloud-like" model must all come together.


European Grid Initiative Blueprint PDF
Company Profile: European Grid Initiative (EGI)

The resources currently coordinated by EGEE will be managed through the European Grid Initiative (EGI) as of 2010. In EGI, each country's grid infrastructure will be run by National Grid Initiatives. The adoption of this model will enable the next leap forward in research infrastructures to support collaborative scientific discoveries. EGI will ensure abundant, high-quality computing support for the European and global research community for many years to come.


Cloud Computing Interoperability and Standardization for the Telecommunications Industry PDF
Company Profile: European Telecommunications Standards Institute (ETSI)

Grid and Cloud Computing Technology: Interoperability and Standardization for the Telecommunications Industry


OpenPEX: An Open Provisioning and Execution System for Virtual Machines PDF
Company Profile: University of Melbourne

Written by Srikumar Venugopal of the School of Computer Science and Engineering, University of New South Wales, Australia, and James Broberg and Rajkumar Buyya of the Department of Computer Science and Software Engineering, The University of Melbourne, Australia.


Ubuntu Enterprise Cloud Architecture PDF
Simon Wardley - Researcher at CSC Leading Edge Forum

By Simon Wardley, Etienne Goyer & Nick Barcet. Ubuntu Enterprise Cloud (UEC) brings Amazon EC2-like infrastructure capabilities inside the firewall. The UEC is powered by Eucalyptus, an open source implementation for the emerging standard of the EC2 API. This solution is designed to simplify the process of building and managing an internal cloud for businesses of any size, thereby enabling companies to create their own self-service infrastructure. This white paper tries to provide an understanding of the UEC internal architecture and possibilities offered by it in terms of security, networking and scalability.


ElasTraS: An Elastic Transactional Data Store in the Cloud PDF
Company Profile: University of California Santa Barbara

By Sudipto Das, Divyakant Agrawal, and Amr El Abbadi. Abstract: Over the last couple of years, "Cloud Computing" or "Elastic Computing" has emerged as a compelling and successful paradigm for internet-scale computing. One of the major contributing factors to this success is the elasticity of resources. In spite of the elasticity provided by the infrastructure and the scalable design of the applications, the elephant (or the underlying database) which drives most of these web-based applications is not very elastic and scalable, and hence limits scalability. In this paper, we propose ElasTraS, which addresses this issue of scalability and elasticity of the data store in a cloud computing environment, leveraging the elastic nature of the underlying infrastructure while providing scalable transactional data access. This paper aims at providing the design of a system in progress, highlighting the major design choices, analyzing the different guarantees provided by the system, and identifying several important challenges for the research community striving for computing in the cloud.


Reflective Control for an Elastic Cloud Application: An Automated Experiment Workbench PDF
Company Profile: Duke University

By Azbayar Demberel, Jeff Chase, and Shivnath Babu. Abstract: This paper addresses "reflective" control for applications that use server resources from a shared cloud infrastructure opportunistically. In this approach, an external reflective controller launches application functions based on knowledge of what resources are available from the cloud, their cost, and their value to the application through time. As a driving example, we consider reflective control for an important use of elastic computing: a virtual workbench for digital experiments, focusing on automated benchmarking. We report progress on a Workbench Automation/Intelligence Framework (Waif), and show how it can adapt to available cloud resources by planning and launching experiments in parallel. Waif is part of the ongoing Automat project – an open testbed for programmable hosting centers, built on the ORCA resource leasing platform. We designed a prototype Waif, directed at constructing server performance models by mapping server behavior within a multi-dimensional parameter space. The planner estimates the value and cost of candidate experiments based on the results of completed experiments. In this setting, we show the potential of reflective control to accelerate progress toward a benchmarking objective in a way that balances speed, accuracy, and cost.


Private Virtual Infrastructure for Cloud Computing PDF
Company Profile: University of Maryland at Baltimore

By F. John Krautheim. Abstract: Cloud computing places an organization’s sensitive data in the control of a third party, introducing a significant level of risk on the privacy and security of the data. We propose a new management and security model for cloud computing called the Private Virtual Infrastructure (PVI) that shares the responsibility of security in cloud computing between the service provider and client, decreasing the risk exposure to both. The PVI datacenter is under control of the information owner while the cloud fabric is under control of the service provider. A cloud Locator Bot pre-measures the cloud for security properties, securely provisions the datacenter in the cloud, and provides situational awareness through continuous monitoring of the cloud security. PVI and Locator Bot provide the tools that organizations require to maintain control of their information in the cloud and realize the benefits of cloud computing.


Using Proxies to Accelerate Cloud Applications PDF
Company Profile: University of Minnesota, Twin Cities

By Jon Weissman and Siddharth Ramakrishnan. Abstract: A rich cloud ecosystem is unfolding with clouds emerging to provide platforms and services of many shapes and sizes. We speculate that future network applications may wish to utilize and synthesize capabilities from multiple clouds. The problem is this may entail significant data communication that derives from the client server paradigm imposed by most clouds. To address this bottleneck, we propose a cloud proxy network that allows optimized data-centric operations to be performed at strategic network locations. We show the potential of this architecture for accelerating cloud applications.


Nebulas: Using Distributed Voluntary Resources to Build Clouds PDF
Company Profile: University of Minnesota, Twin Cities

By Abhishek Chandra and Jon Weissman. Abstract: Current cloud services are deployed on well provisioned and centrally controlled infrastructures. However, there are several classes of services for which the current cloud model may not fit well: some do not need strong performance guarantees, the pricing may be too expensive for some, and some may be constrained by the data movement costs to the cloud. To satisfy the requirements of such services, we propose the idea of using distributed voluntary resources—those donated by end-user hosts—to form nebulas: more dispersed, less managed clouds. We first discuss the requirements of cloud services and the challenges in meeting these requirements in such voluntary clouds. We then present some possible solutions to these challenges and also discuss opportunities for further improvements to make nebulas a viable cloud paradigm.


Virtual Putty: Reshaping the Physical Footprint of Virtual Machines PDF
Company Profile: University of Minnesota, Twin Cities

By Jason Sonnek and Abhishek Chandra. Abstract: Virtualization is a key technology underlying cloud computing platforms, where applications encapsulated within virtual machines are dynamically mapped onto a pool of physical servers. In this paper, we argue that cloud providers can significantly lower operational costs, and improve hosted application performance, by accounting for affinities and conflicts between co-placed virtual machines. We show how these affinities can be inferred using location-independent VM characterizations called virtual footprints, and then show how these virtual footprints can be used to reshape the physical footprint of a VM—its physical resource consumption—to achieve higher VM consolidation and application performance in a cloud environment. We also identify three general principles for minimizing a virtual machine’s physical footprint, and discuss challenges in applying these principles in practice.


Contributions Results for Developers: Cloud Infrastructure


A Unified Reinforcement Learning Approach for Autonomic Cloud Management
Company Profile: Wayne State University

Cloud computing, unlocked by virtualization, is emerging as an increasingly important service-oriented computing paradigm. The goal of this project is to develop a unified reinforcement learning approach, namely URL, to automate the configuration of virtualized machines and of the applications running on them, and to adapt system configurations to the dynamics of the cloud.
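The project summary above gives no details of the learning method, but the core idea of learning a configuration policy can be pictured with a tiny tabular reinforcement-learning loop. Everything below (the workload states, configuration actions, and reward) is invented for illustration and is not taken from the URL project itself.

```python
import random

WORKLOADS = ["low", "high"]          # observed cloud workload (state)
CONFIGS = ["small_vm", "large_vm"]   # VM configuration choice (action)

def reward(workload, config):
    # Invented reward: +1 when capacity matches demand, -1 otherwise.
    good = (workload == "low" and config == "small_vm") or \
           (workload == "high" and config == "large_vm")
    return 1.0 if good else -1.0

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in WORKLOADS for a in CONFIGS}
    for _ in range(episodes):
        s = rng.choice(WORKLOADS)
        # Epsilon-greedy exploration over configuration actions.
        if rng.random() < epsilon:
            a = rng.choice(CONFIGS)
        else:
            a = max(CONFIGS, key=lambda c: q[(s, c)])
        # One-step (contextual-bandit-style) value update.
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

def policy(q, workload):
    """Pick the learned best configuration for an observed workload."""
    return max(CONFIGS, key=lambda c: q[(workload, c)])

q = train()
```

After training, the learned policy matches small VMs to low workloads and large VMs to high workloads; a real system would replace the toy reward with measured application performance.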
Cloud-Computing Infrastructure and Technology for Education (CITE)
Company Profile: Massachusetts Institute of Technology (MIT)

This project will support the development of middleware that will enable numerical models to be run on commercial compute farms via cloud computing and exploited in ongoing and future classroom educational activities. The intellectual merit of this work derives from two linked parts: (1) development of technology that would be suitable for many educational scenarios, including providing access to parallel computing resources in classrooms (K-12 on to university). Students and teachers would be able to run and interact with numerical models developed by leading researchers without the overhead of supporting software distributed to desktops in a school or the logistical headache of maintaining a cluster resource. Commercial compute farms would be exploited in which the technical `nitty-g ....
CloudSim
Company Profile: University of Melbourne

Cloud computing focuses on delivery of reliable, secure, fault-tolerant, sustainable, and scalable infrastructures for hosting Internet-based application services. These applications have different composition, configuration, and deployment requirements. Quantifying the performance of scheduling and allocation policies on a Cloud infrastructure (hardware, software, services) for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is an extremely challenging problem to tackle. The use of real test beds such as Amazon EC2 limits experiments to the scale of the testbed and makes the reproduction of results an extremely difficult undertaking, as the conditions prevailing in Internet-based environments are beyond the control of the tester.
CloudStor: Performance Evaluation of On-Demand Provisioning of Data Intensive Applications
Company Profile: University of California San Diego

The National Science Foundation has awarded a grant to researchers at SDSC to explore new ways to manage extremely large data sets hosted on massive clusters, which have become known as computing “clouds”. This research will use the LiDAR topography data hosted by OpenTopography as a test case and will focus on how cloud computing can aid the management and processing of massive spatial data sets. The project will study dynamic strategies for provisioning such applications by doing a performance evaluation of alternative approaches for serving very large data sets. The cloud platforms that will be used in the project will be the Google-IBM CluE cluster and the HP-Intel-Yahoo cluster, Open Cirrus Cloud Computing Testbed, at the University of Illinois, both of which have been assemble ....
Enabling Grids for E-sciencE (EGEE)
Company Profile: European Grid Initiative (EGI)

Enabling Grids for E-sciencE (EGEE) is Europe's leading grid computing project, providing a computing support infrastructure for over 10,000 researchers world-wide, from fields as diverse as high energy physics, earth and life sciences. In 2009 EGEE is focused on transitioning to a sustainable operational model, while maintaining reliable services for its users. The EGEE project brings together experts from more than 50 countries with the common aim of building on recent advances in Grid technology and developing a service Grid infrastructure which is available to scientists 24 hours-a-day. The project provides researchers in academia and business with access to a production level Grid infrastructure, independent of their geographic location. The EGEE project also focuses on attrac ....
ETSI Technical Committee GRID (TC GRID)
Company Profile: European Telecommunications Standards Institute (ETSI)

The goal of ETSI TC GRID is to address issues associated with the convergence between IT and Telecommunications. The focus is on scenarios where connectivity goes beyond the local network. This includes not only Grid computing but also the emerging commercial trend towards Cloud computing which places particular emphasis on ubiquitous network access to scalable computing and storage resources.
FutureGrid
Company Profile: Indiana University

This project provides a capability that makes it possible for researchers to tackle complex research challenges in computer science related to the use and security of grids and clouds. The project team will provide a significant new experimental computing grid and cloud test-bed, named FutureGrid, to the research community, together with user support for third-party researchers conducting experiments on FutureGrid.
Hierarchically-Redundant, Decoupled Storage Project (HaRD)
Company Profile: University of Wisconsin, Madison

The Wisconsin Hierarchically-Redundant, Decoupled storage project (HaRD) investigates the next generation of storage software for hybrid Flash/disk storage clusters. The main objective of the project is to improve the performance of storage in a variety of diverse scenarios, including new application environments such as photo storage as found in Facebook and Flickr, high-end scientific processing as found in government labs, and large-scale data processing such as that found in Google and Microsoft.
Hybrid Opportunistic Computing for Green Clouds
Company Profile: North Carolina State University

Abstract: On-demand, service-oriented cloud computing infrastructures continue to increase in popularity with organizations. Three observations motivate us to investigate running high-throughput, data-intensive tasks as background workloads on these cloud infrastructures. First, the rapid growth in hardware parallelism leaves more residue resources to be exploited. Second, the "incremental power usage" of piggybacking a secondary background workload onto the foreground workload to utilize those residue resources is relatively low. Third, the advances in GPGPU (General-Purpose GPU) processing enable a novel coupling of concurrent workloads. This project will explore a new computing model of offering cloud services on active nodes that are serving on-demand utility computing users. We pla ....
Integrated Cluster Computing Architecture (INCA)
Company Profile: Carnegie Mellon University

This research project funded by the NSF CluE program is focused on developing the Integrated Cluster Computing Architecture (INCA) for machine translation (using computers to translate from one language to another). Open-source toolkits make it easier for new research groups to tackle the problem at lower costs, broadening participation. Unfortunately, existing toolkits have not kept up with the computing infrastructure required for modern big data approaches to machine translation. INCA will fill this void.
Large-Scale Distributed Scientific Experiments on Shared Substrate
Company Profile: Indiana University

The proposed research may serve as a basis of an Internet architecture that will allow natural sharing of resources among multiple organizations by dynamically configuring and creating a requirement specific network context for a particular application.
Massive Graphs in Clusters (MAGIC)
Company Profile: University of California Santa Barbara

Many of today's data-intensive application domains, including searches on social networks like Facebook and protein matching in bioinformatics, require us to answer complex queries on highly-connected data. The UCSB Massive Graphs in Clusters (MAGIC) project is focused on developing software infrastructure that can efficiently answer queries on extremely large graph datasets. The MAGIC software will provide an easy-to-use interface for searching and analyzing data, and will manage the processing of queries to efficiently take advantage of computing resources like large datacenters.
MetaCDN
Company Profile: University of Melbourne

Content Delivery Networks (CDNs) such as Akamai and Mirror Image place web server clusters in numerous geographical locations to improve the responsiveness and locality of the content they host for end-users. However, their services are priced out of reach for all but the largest enterprise customers. An alternative approach to content delivery can be achieved by leveraging existing infrastructure provided by ’storage cloud’ providers, at a fraction of the cost. MetaCDN is a system that leverages several existing ’storage clouds’, creating an integrated overlay network that provides a low cost, high performance content delivery network for content creators. MetaCDN intelligently places content onto one or many storage providers based on the quality of service, coverage and budget preferences of participants.
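As a rough illustration of the kind of placement decision MetaCDN automates, the following sketch greedily selects storage-cloud providers to cover a set of regions within a budget. The provider list, prices, and greedy strategy are hypothetical inventions; MetaCDN's actual placement logic is not described in this summary.

```python
# Hypothetical provider catalog: regions served and a flat monthly cost.
PROVIDERS = [
    {"name": "cloudA", "regions": {"us", "eu"}, "cost": 5.0},
    {"name": "cloudB", "regions": {"asia"},     "cost": 3.0},
    {"name": "cloudC", "regions": {"us"},       "cost": 1.0},
]

def place_content(wanted_regions, budget):
    """Greedily cover the wanted regions, cheapest usable provider first."""
    chosen, covered, spent = [], set(), 0.0
    for p in sorted(PROVIDERS, key=lambda p: p["cost"]):
        # Regions this provider would newly cover.
        new = (p["regions"] & wanted_regions) - covered
        if new and spent + p["cost"] <= budget:
            chosen.append(p["name"])
            covered |= new
            spent += p["cost"]
    return chosen, covered, spent

chosen, covered, spent = place_content({"us", "eu", "asia"}, budget=10.0)
```

A production system would weigh latency and quality-of-service measurements alongside cost, as the summary notes, but the budget-constrained coverage problem has the same shape.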
NEXOF
Company Profile: NESSI

The overall ambition of NESSI is to deliver NEXOF, a coherent and consistent open service framework leveraging research in the area of service-based systems to consolidate and trigger innovation in service-oriented economies. The three core elements of NEXOF are: the NESSI Open Reference Model, an open specification that includes the conceptual model of the core elements that enable service-based ecosystems and their relationships, as well as the underlying rules, principles and policies that lead to interoperable implementations (core elements include business dynamics, the development environment and the operational environment); the NESSI Open Reference Architecture, addressing the definition and selection of innovative architectural styles and patterns based on the reference model; ....
One Thousand Points of Light
Company Profile: University of Minnesota, Twin Cities

A large class of distributed data-rich applications, including distributed data mining, distributed workflows, and Web 2.0 Mashups, are increasingly relying on cloud services to meet their data storage and computing demands. This project proposes a cloud proxy network that allows optimized and reliable data-centric operations to be performed at strategic network locations.
Open Cloud Testbed (OCT)
Company Profile: Open Cloud Consortium (OCC)

This working group manages and operates the Open Cloud Testbed. The Open Cloud Testbed uses the Cisco C-Wave and UIC Teraflow Network for its network connections. Both use wavelengths provided by the National Lambda Rail. Currently membership in this working group is limited to OCC members who can contribute computing, networking, or other resources to the Open Cloud Testbed.
OpenPEX
Company Profile: University of Melbourne

Virtual Machines (VMs) have become capable enough to emulate full-featured physical machines in all aspects. Therefore, they have become the foundation not only for flexible data center infrastructure but also for commercial Infrastructure-as-a-Service (IaaS) solutions. However, current providers of virtual infrastructure offer simple mechanisms through which users can ask for immediate allocation of VMs. More sophisticated economic and allocation mechanisms are required so that users can plan ahead and IaaS providers can improve their revenue. This paper introduces OpenPEX, a system that allows users to provision resources ahead of time through advance reservations. OpenPEX also incorporates a bilateral negotiation protocol that allows users and providers to come to an agreement by exchanging offers and counter-offers. These functions are made available to users through a web portal and a REST-based Web service interface.
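The bilateral negotiation OpenPEX describes can be pictured as a simple offer/counter-offer loop between a user and an IaaS provider. The message fields, pricing rule, and acceptance criteria below are invented placeholders for illustration, not OpenPEX's actual protocol.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Offer:
    vms: int
    start_hour: int        # requested reservation start (assumed field)
    price_per_hour: float  # offered price (assumed field)

def provider_counter(offer, capacity_free_from=10, min_price=1.0):
    """Provider accepts a feasible offer, or counters with adjusted terms."""
    if offer.start_hour >= capacity_free_from and offer.price_per_hour >= min_price:
        return ("accept", offer)
    counter = replace(offer,
                      start_hour=max(offer.start_hour, capacity_free_from),
                      price_per_hour=max(offer.price_per_hour, min_price))
    return ("counter", counter)

def negotiate(initial, max_price=1.5, rounds=5):
    """User-side loop: take the provider's counter if it fits the budget."""
    offer = initial
    for _ in range(rounds):
        verdict, proposal = provider_counter(offer)
        if verdict == "accept":
            return proposal
        if proposal.price_per_hour <= max_price:
            offer = proposal       # adopt the counter-offer and retry
        else:
            return None            # walk away: over budget
    return None

deal = negotiate(Offer(vms=4, start_hour=8, price_per_hour=0.8))
```

Here the user's opening offer is infeasible, the provider counters with a later start and higher price, and the user accepts because the counter fits the budget; in OpenPEX these exchanges would travel over the web portal or REST interface rather than in-process calls.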
OPTIMIS
Company Profile: Umeå Universitet

OPTIMIS (Optimized Infrastructure Services) is an EU FP7 IP project scientifically led by the Umeå group. The OPTIMIS project takes a holistic approach to management of compute clouds. With the challenges of service and infrastructure providers as the point of departure, OPTIMIS focuses on open, scalable and dependable service platforms and architectures that allow flexible and dynamic provision of advanced services.
RESERVOIR Project
Company Profile: RESERVOIR, Resources and Services Virtualization without Barriers

The RESERVOIR project is intended to increase the competitiveness of the EU economy by introducing a powerful ICT infrastructure for the reliable and effective delivery of services as utilities. This infrastructure will support the set-up and deployment of services on demand, at competitive costs, across disparate administrative domains, while assuring quality of service.
The Open Grid Forum Open Cloud Computing Interface (OCCI)
Company Profile: Open Grid Forum OGF

The Open Grid Forum Open Cloud Computing Interface (OCCI) working group will deliver an API specification for remote management of cloud computing infrastructure, allowing for the development of interoperable tools for common tasks including deployment, autonomic scaling and monitoring. The specification will cover all high-level functionality required for the life-cycle management of virtual machines (or workloads) running on virtualization technologies (or containers), supporting service elasticity.
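To make the notion of a remote-management API concrete, here is a sketch of how VM lifecycle operations might map onto REST-style requests. The paths, query parameters, and action names are illustrative inventions, not taken from the OCCI specification (which was still in progress at the time of this listing).

```python
def vm_request(op, vm_id=None, attrs=None, base="/compute"):
    """Build a request description for a hypothetical VM lifecycle API."""
    if op == "deploy":
        # Create a new VM from a set of attributes (cores, memory, image...).
        return {"method": "POST", "path": f"{base}/", "body": attrs or {}}
    if op == "monitor":
        # Read back the current state of a running VM.
        return {"method": "GET", "path": f"{base}/{vm_id}"}
    if op in ("start", "stop", "scale"):
        # Trigger an action on an existing VM.
        return {"method": "POST", "path": f"{base}/{vm_id}?action={op}",
                "body": attrs or {}}
    if op == "delete":
        return {"method": "DELETE", "path": f"{base}/{vm_id}"}
    raise ValueError(f"unknown lifecycle operation: {op}")

req = vm_request("scale", vm_id="42", attrs={"cores": 4})
```

The point of standardizing such an interface, as the working group describes, is that deployment, scaling and monitoring tools written against it would work across providers.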
Contributions Results for Developers: Cloud Infrastructure


Tilera's Chip for the Cloud Computing Age
By: Ian King and Ari Levy
It's rare for a semiconductor company to promise a chip that will deliver 10 times the capacity of Intel's (INTC) best. That's the claim Silicon Valley startup Tilera is making about a chip it intends to unveil later this year. Tilera says its design, which will pack 100 microprocessors, or cores, onto a single thumbnail-size piece of silicon, will result in faster, energy-efficient computers capable of performing more tasks simultaneously. Intel's new Xeon chip has 10 cores.
read the full article >>
The Cloud Storage Conundrum
By: Mike Vizard
When it comes to invoking cloud computing services, IT organizations are using the cloud to primarily address backup and archiving. IT organizations don’t really want to have to buy the IT infrastructure needed to house the data that is rarely accessed. But the problem is that moving data back and forth between on-premise systems and the cloud can be expensive. One of the ways that cloud computing providers make up for providing inexpensive storage resources is by marking up the cost of network bandwidth to access it.
read the full article >>
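The trade-off the article describes, cheap storage against marked-up data movement, is easy to see with a back-of-envelope cost model. All prices below are made-up placeholders, not any provider's actual rates.

```python
def monthly_cost(stored_gb, transferred_gb,
                 storage_price=0.03,    # $/GB-month (assumed rate)
                 transfer_price=0.12):  # $/GB moved in or out (assumed rate)
    """Total monthly bill for storing data plus moving it over the network."""
    return stored_gb * storage_price + transferred_gb * transfer_price

# Archiving 1 TB that is rarely touched is dominated by the storage charge...
cold = monthly_cost(stored_gb=1000, transferred_gb=10)
# ...but pulling half of it back each month flips the bill toward bandwidth.
busy = monthly_cost(stored_gb=1000, transferred_gb=500)
```

With these placeholder rates the cold archive costs about $31 a month while the frequently-accessed copy costs $90, with two-thirds of that going to transfer, which is exactly why backup and archiving are the cloud storage workloads the article says IT organizations favor.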
AMD Aims Enterprise Chips at Cloud, Virtualization
By: Staff
Enterprise computing increasingly will hinge on cloud computing and virtualization, and Advanced Micro Devices' server chip road map dovetails nicely with that trend, according to a company executive.
read the full article >>
Platform Computing Extends HPC Reach Into MapReduce
By: Derrick Harris
High-performance computing leader Platform Computing hopes to capitalize on the big data movement by spreading its wings beyond its flagship business of managing clusters and grids and into managing MapReduce environments, as well. As was the case when Platform made its foray into cloud computing in 2009, the news is significant, because Platform has a solid foundation among leading businesses, especially in the financial services industry. If large financial organizations were leery about taking their analytics efforts to the next level, Platform might help spur them along, and it might help drive even further choice for customers by driving other HPC vendors into the MapReduce and Hadoop space.
read the full article >>
Samsung Pushes 'Green' Memory
By: David Price
Samsung's components division is pushing the eco-friendly angle at CeBIT this year (just as it did last year). The difference this time around is that the company believes its reduced-power-consumption 30nm DDR memory could hold the key to the future of cloud computing. Following on neatly from the Cloud Computing Summit, at which a number of the panelists spoke of the infrastructure obstacles facing the concept, Samsung was today showing off its 30nm Green DDR3 2 gigabit (Gb) and 4Gb memory. The firm says this RAM will allow datacentres to handle the excessive workloads of fully fledged cloud computing.
read the full article >>
Atlantic.Net Adds Cloud Computing API
By: Press Release
Atlantic.Net, a privately held leading hosting solutions provider, announced today that it has released a free application programming interface (API) for its world-class cloud computing platform, allowing customers to more efficiently utilize its robust functionality. Tasks such as provisioning new servers, deleting existing servers, and turning power on and off can now be handled with a few lines of code, rather than having to access a web interface.
read the full article >>
IBM speeds up virtual machine set-up to boost cloud computing
By: Madeline Bennett
LAS VEGAS: IBM took the wraps off four new software products on Tuesday, all designed to push companies further down the cloud computing route. At its Pulse 2011 service management event in Las Vegas, IBM unveiled a beta programme for the high-speed provisioning of virtual machines (VMs). "We're able to get VMs provisioned in a number of seconds now," said Dennis Quan, director at IBM's Tivoli China Development Labs. "Even when you up the number of VMs you're provisioning, you still get the time benefits."
read the full article >>
How Former Sun Exec Aims to Elevate Cisco's Cloud-Building Image
By: Chris Preimesberger
CTO Lew Tucker envisions a massive new cloudlike network on the horizon for Cisco to build that may one day number a trillion connected devices. Lew Tucker, Cisco Systems' new vice-president and chief technical officer for cloud computing systems, has a pretty significant mission for his company, which has been rapidly reinventing itself over the last two or three years.
read the full article >>
Lenovo: Convergence through cloud as future of IT
By: Liau Yun Qing
The future of innovation for the IT industry will be in data convergence, said Lenovo's chief operating officer, adding that cloud computing and mobile Internet will play a big role for the company to achieve convergence. In an interview with ZDNet Asia during his recent trip to Singapore, Rory Read, president and chief operating officer of Lenovo, said the convergence technology, which allows seamless movement of data from one device to another, will be the next wave of innovation for the next three to five years. Users will not want to move their data from their smartphone to other devices manually, said Read, noting that this will need to be done seamlessly. "Think about it. You will take a picture with your smartphone or smart camera. It's going to automatically, [via] 3G, link up a ....
read the full article >>
3 elements of good clouds
By: David Linthicum
"We in IT finally seem to be getting to work on this whole cloud computing thing, rather than standing around arguing the benefits of private versus public clouds or trying to define elasticity. Good for us, but considering that most organizations have no experience building clouds, I put together a few items that should be a part of the process. (Note: I'm focusing on clouds built by enterprises.)"
read the full article >>
Mainframes Essential to Cloud Computing: Survey
By: Fahmida Rashid
According to the results of a recent survey, 79 percent of IT organizations consider the mainframe to be an essential component of their cloud computing strategy, said CA Technologies on Oct. 7.
read the full article >>