Experiments funded in 1st Open Call

29 proposals were received in response to the open call, of which 4 were selected for funding; a second open call will be launched later in the project. Over 50% of the proposals came from the targeted community (Internet of Services), with the remainder drawn predominantly from across the rest of the FI. Of the 32 organisations included in the proposals, 9 were SMEs. No large commercial organisations submitted a proposal; the remaining 23 organisations were universities or research centres.

TurboCloud

Partners: RedZinc and Cloudium Systems

An experiment which combines two complementary technology platforms from two SMEs. One technology platform (the Cloudium chipset) enables server-based desktop virtualisation. The other technology platform (the VPS controller) enables dynamic virtual path slices to deliver a right of way across the internet without interference from unwanted traffic. Both technologies are at the beta development stage. For server-based desktop virtualisation to work satisfactorily and support cloud applications across the public internet, a certain level of bandwidth guarantee is required, especially for multimedia applications.

The hypothesis for this experiment is that by combining virtual path slice technology with server-based desktop virtualisation, a satisfactory user experience can be achieved, especially where multimedia applications are used.

This experiment will be conducted using the Virtual Wall facility of the BonFIRE project. The robustness of a virtual path slice will be tested by establishing a dynamic slice with the bandwidth the application requires, and by varying the background traffic load and network conditions in the context of multiple applications.
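A minimal sketch of the kind of test matrix this implies, assuming a simple shared-link model in which a slice reservation protects application traffic from background load (the link capacity, loads and slice behaviour are all illustrative stand-ins, not the VPS controller's actual interface):

```python
# Toy model: for each background load and application bandwidth demand,
# check whether a reserved virtual path slice keeps the application's
# throughput intact on a shared link. All figures are invented.

LINK_CAPACITY_MBPS = 100.0  # assumed capacity of the shared link

def achieved_throughput(demand, background, slice_reserved):
    """Throughput the application sees on the shared link.

    With a slice, the reserved bandwidth is protected from background
    traffic; without one, the application only gets what the background
    traffic leaves free.
    """
    if slice_reserved:
        return min(demand, slice_reserved)  # right of way up to the reservation
    free = max(LINK_CAPACITY_MBPS - background, 0.0)
    return min(demand, free)

if __name__ == "__main__":
    for background in (20, 60, 95):   # varied background load (Mbps)
        for demand in (5, 25):        # e.g. plain desktop vs. multimedia stream
            with_slice = achieved_throughput(demand, background, slice_reserved=demand)
            without = achieved_throughput(demand, background, slice_reserved=None)
            print(f"bg={background:>3} Mbps demand={demand:>2} Mbps "
                  f"with-slice={with_slice:>5.1f} no-slice={without:>5.1f}")
```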

VCOC: Virtual Clusters on Federated Cloud Sites

Partner:  CESGA

The experiment will investigate the feasibility of using several cloud environments for the provision of services that need the allocation of a large pool of CPUs or virtual machines to a single user (such as High Throughput Computing or High Performance Computing). Three main issues will be researched: the factors that affect the deployment time of a distributed virtual cluster as a single entity in a multi-site cloud infrastructure, and its elasticity management based on application performance data; the penalties and bottlenecks associated with the use of such a virtual distributed cluster; and the feasibility of benefiting from the distributed nature of the multi-site cloud to tackle the failure of a single provider to guarantee the quality of service to the final user. The results of the experiments will help companies and institutions provide better services that demand high-capacity computing.
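As a toy illustration of the first issue, the sketch below provisions a simulated virtual cluster across three hypothetical sites in parallel and reports when the cluster becomes usable as a single entity; the site names and delays are invented stand-ins, not BonFIRE measurements:

```python
# Deployment-time sketch: the distributed cluster is only usable once
# every site has finished provisioning, so total deployment time is
# dominated by the slowest site.

import random
import time
from concurrent.futures import ThreadPoolExecutor

SITES = {"site-a": 8, "site-b": 16, "site-c": 8}  # hypothetical VMs per site

def provision(site, n_vms):
    """Stand-in for asking one site's cloud manager for n_vms machines."""
    delay = random.uniform(0.5, 2.0) + 0.1 * n_vms  # simulated provisioning cost
    time.sleep(delay)
    return site, delay

if __name__ == "__main__":
    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda kv: provision(*kv), SITES.items()))
    cluster_ready = time.perf_counter() - start
    for site, delay in results:
        print(f"{site}: up after {delay:.1f}s")
    print(f"cluster usable as a single entity after {cluster_ready:.1f}s")
```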

ExSec: Experimenting Scalability of Continuous Security Monitoring in BonFIRE

Partner:  CETIC

The ExSec experiment aims to determine an empirically validated elasticity function for security monitoring. Besides verifying the scalability of the security monitor under different application loads for a range of numbers of virtual machines, another important aspect of the experiment will be to verify scalability behaviour on different cloud technologies, such as different types of hypervisors and different types of cloud environment managers. The ExSec experiment will leverage the results of two previous European-funded projects, namely the FP6 project GRIDTRUST and the FP7 project RESERVOIR. A framework to perform continuous security monitoring on Grid technologies was developed under the auspices of GRIDTRUST; a portion of this framework, covering policy-based access control, was adapted to cloud technologies in RESERVOIR. However, only small-scale security tests with a handful of virtual machines (and grid nodes) were performed during GRIDTRUST and RESERVOIR. The ExSec experiment will perform a much more rigorous scalability test for applications requiring continuous security monitoring in the cloud.
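A minimal sketch of the measurement such an elasticity function would be fitted from, using a made-up linear cost model for the monitor rather than the actual GRIDTRUST/RESERVOIR framework:

```python
# Sweep the number of monitored VMs at different application loads, record
# the (here: simulated) per-cycle monitoring cost, and estimate the
# per-VM overhead, i.e. the slope of the elasticity function.

import random

def monitor_cost(vms, load):
    """Simulated seconds per monitoring cycle; the coefficients are made up."""
    return 0.02 * vms + 0.5 * load + random.gauss(0, 0.005)

if __name__ == "__main__":
    random.seed(1)
    for load in (0.2, 0.8):  # application load as a fraction of capacity
        costs = {vms: round(monitor_cost(vms, load), 3) for vms in (10, 50, 100)}
        # Marginal cost per extra VM approximates the elasticity function's slope.
        slope = (costs[100] - costs[10]) / 90
        print(f"load={load}: cycle costs={costs}, per-VM overhead ~{slope*1000:.1f} ms")
```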

TEOS: Testing Optimization in Service Ecosystems

Partner:  University of Manchester

This study aims to determine the conditions for achieving resilient and optimal service compositions on a distributed cloud infrastructure for the Future Internet. It will deploy and test two service optimization models, characterized as global optimization and local optimization. The first, developed in the EC-funded SOA4All project, computes the optimization of a service composition by analysing end-to-end interactions between services. Local optimization is given by the Dynamic Agent-based Ecosystem Model, which computes local optimizations of service compositions by letting one-to-one interactions between any service provider and any consumer create emergent service chains, providing composite services that are resilient to change.
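The toy sketch below contrasts the two models on invented data: the global model scores whole compositions end-to-end, while the local model lets each step greedily pick its best one-to-one partner. Neither is the actual SOA4All or Dynamic Agent-based Ecosystem implementation:

```python
# Compare global vs. local optimization of a three-step service composition
# on toy data: candidate providers per step, each with an invented cost.

from itertools import product

STEPS = [
    [("auth-1", 3), ("auth-2", 2)],
    [("pay-1", 4), ("pay-2", 5)],
    [("ship-1", 1), ("ship-2", 2)],
]

def global_optimum(steps):
    """Evaluate every end-to-end composition and keep the cheapest."""
    best = min(product(*steps), key=lambda chain: sum(c for _, c in chain))
    return [name for name, _ in best], sum(c for _, c in best)

def local_optimum(steps):
    """Each step independently picks its cheapest provider (emergent chain)."""
    chain = [min(cands, key=lambda sc: sc[1]) for cands in steps]
    return [name for name, _ in chain], sum(c for _, c in chain)

if __name__ == "__main__":
    print("global:", global_optimum(STEPS))
    print("local :", local_optimum(STEPS))
    # With independent per-step costs the two agree; the interesting cases
    # are where interactions between steps make them diverge, and how each
    # model copes when a provider disappears mid-composition.
```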

Driving Experiments

The BonFIRE project will develop three exemplar driving experiments that act as the driving force for the facility's requirements, development and operations, especially in the early stages of the project. The experiments pose state-of-the-art research challenges in cloud computing and will help to ensure that the BonFIRE facility remains state-of-the-art and applicable to the challenges facing researchers. The experiments will also be used to promote best-practice usage of the facility and to provide early success stories that offer a blueprint for experiments funded in the open call.

Experiment 1: Dynamic Service Landscape Orchestration for the Internet of Services

Partners: SAP and HP Labs

With the growth of Web services and Enterprise SOA investments, new trends and delivery models are emerging for extending and trading services outside traditional ownership and provisioning boundaries. Following the successes of Internet e-commerce marketplaces, the on-demand model for applications has emerged to cut costs and increase flexibility in provisioning, through which an Internet-scale infrastructure is envisaged. Despite this complexity, navigating such services will need to be as seamless for consumers as linking to web pages. At the same time, business processes, not just individuals, are expected to be consumers of widely procured services. The objective of the experiment is to investigate the requirements for, and the effect of, composing the various services at the different layers of the Future Internet ecosystem (cloud composition and service composition). The end goal is to extract from these experiments performance, deployment, management and lifecycle models for a federated Enterprise SOA landscape within a cloud environment.

Experiment 2: QoS-Oriented Service Engineering for Federated Clouds

Partner: IT Innovation Centre at the University of Southampton

Cloud computing offers the potential to dramatically reduce the cost of software services through the commoditization of information technology assets and on-demand usage patterns. However, the complexity of determining quality of service (QoS) requirements for applications in such environments introduces significant market inefficiencies and has driven the emergence of a new class of service engineering tools within the Platform-as-a-Service (PaaS) layer for modelling, analysing and planning the QoS of service-based applications deployed within the cloud. Today, Infrastructure-as-a-Service (IaaS) QoS offerings are expressed in low-level terms (e.g. machine level, CPU speed, disk space, etc.). Their customers, typically application users, are more interested in application-level parameters, because it is the application that gives the customer value (e.g. a CFD simulation or video rendering). There is therefore a large gap between the terms the infrastructure provider offers and what users really want, which results in a complex relationship between application performance and resource parameters. The complexity of this relationship is increased for applications deployed across federated clouds, where even low-level resource descriptions may differ due to a lack of standardisation.

Service engineering techniques aim to provide IaaS customers with a set of generic tools that can manage the complexity of this relationship. However, the parameter space used to determine an optimal set of resources requested by a customer in Service Level Agreements (SLAs) is too large to estimate results with acceptable levels of accuracy and precision unless very specific models are developed for each application, a course that is not economically viable for most applications. We hypothesise that if IaaS providers raise the level of abstraction for resource QoS terms used in SLAs, basing them on benchmark scores for specific classes of applications, significant overall efficiencies will be achieved for all cloud stakeholders: increased accuracy of requirements, achieved by simplifying service planning and adaptation models, and increased market adoption and flexibility, due to the simplified federation between platform and infrastructure stakeholders.

The objective of this experiment is to investigate this hypothesis through the deployment of a service-oriented application on a cloud testbed that incorporates novel PaaS engineering tools from a leading Internet of Services (IoS) project (EU IST IRMOS) and IaaS provided by BonFIRE, configured with service offers at the identified level of abstraction. The research questions the experiment will address include: does the expression of IaaS parameters in terms of application-class benchmarks simplify the creation of application-level QoS that can be easily understood by users? How does specification of QoS in application-level terms provide efficiencies for users and providers in a service marketplace? The experiment will not only address the specific research challenge detailed above, but will also provide a concrete exemplar scenario in which research results from an IoS project can exploit FIRE, go beyond what is possible within the current project, and provide driving requirements for the FIRE facility.
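A hedged sketch of what such benchmark-based SLA terms might look like in practice: the user's application-level target (frames rendered by a deadline) is translated into a required benchmark score for an application class, against which hypothetical IaaS offers are matched. All names, scores and the calibration constant are assumptions, not IRMOS or BonFIRE interfaces:

```python
# Hypothetical IaaS offers expressed as a benchmark score for one
# application class ("video rendering") plus a price, instead of raw
# CPU/disk terms.

OFFERS = [
    {"name": "small",  "render_score": 40,  "price": 1.0},
    {"name": "medium", "render_score": 90,  "price": 2.2},
    {"name": "large",  "render_score": 180, "price": 4.5},
]

FRAMES = 5000                # application-level workload
DEADLINE_HOURS = 10.0        # application-level QoS target
FRAMES_PER_SCORE_HOUR = 6.0  # assumed calibration: frames/hour per score point

def required_score(frames, deadline_h):
    """Translate the user's application-level QoS into a benchmark score."""
    return frames / (deadline_h * FRAMES_PER_SCORE_HOUR)

if __name__ == "__main__":
    need = required_score(FRAMES, DEADLINE_HOURS)
    feasible = [o for o in OFFERS if o["render_score"] >= need]
    choice = min(feasible, key=lambda o: o["price"])
    print(f"need score >= {need:.0f}; cheapest matching offer: {choice['name']}")
```

The point of the abstraction is visible in the sketch: the user never reasons about CPU speed or disk space, only about frames and deadlines, and the mapping to infrastructure is hidden behind one calibrated score per application class.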

Experiment 3: Elasticity requirements for cloud-based applications

Partner: ATOS

The current cloud approach has proven very appealing to users, mainly for its pay-per-use and scalability aspects. It is not so clearly attractive for providers, however, especially in the case of Infrastructure as a Service (IaaS): they need a very efficient way to use their resources in order to stay in business under rapidly changing conditions. The target of the experiment is to experimentally determine the elasticity requirements for cloud-based web applications that will help providers comfortably keep within SLA levels without excessive over-provisioning of resources. To achieve this goal, in a first phase we will stress web applications with different load patterns and different provisioned infrastructures. The results of this first phase will be consolidated into a set of scalability policies, whose behaviour under changing loads will then be verified in a second phase.
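As a toy illustration of the two phases, the sketch below replays two invented load patterns against a simulated infrastructure governed by a simple threshold scalability policy, counting SLA violations and over-provisioned intervals. All capacities, loads and thresholds are assumptions:

```python
# Replay a load pattern against a simulated elastic infrastructure and
# score a candidate scalability policy: too few servers breaches the SLA,
# too many wastes money.

CAPACITY_PER_SERVER = 100        # requests/s one server handles within SLA
SCALE_UP, SCALE_DOWN = 0.8, 0.3  # utilisation thresholds of the toy policy

def run_policy(load_pattern, servers=1):
    violations = over = 0
    for load in load_pattern:
        capacity = servers * CAPACITY_PER_SERVER
        if load > capacity:
            violations += 1   # SLA breached this interval
        elif load < SCALE_DOWN * capacity:
            over += 1         # paying for mostly idle capacity
        util = load / capacity
        if util > SCALE_UP:
            servers += 1      # policy reacts for the next interval
        elif util < SCALE_DOWN and servers > 1:
            servers -= 1
    return violations, over

if __name__ == "__main__":
    spike = [50, 60, 250, 260, 80, 40]   # sudden spike pattern (req/s)
    ramp = [20, 60, 100, 140, 180, 220]  # steady ramp pattern
    for name, pattern in (("spike", spike), ("ramp", ramp)):
        v, o = run_policy(pattern)
        print(f"{name}: SLA violations={v}, over-provisioned intervals={o}")
```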