BonFIRE operates a Cloud facility based on an Infrastructure-as-a-Service (IaaS) delivery model, with guidelines, policies and best practices for experimentation. It adopts a federated multi-platform approach, providing interconnection and interoperation between novel service and networking testbeds.

Geographically distributed testbeds

BonFIRE currently comprises seven geographically distributed testbeds across Europe, which offer heterogeneous Cloud resources: compute, storage and networking. Each testbed can be accessed seamlessly with a single experiment descriptor, using the BonFIRE API, which is based on OCCI. See the image below for the resource offerings at the different testbeds, including on-demand resources. For more information about the testbeds, see [here].
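To illustrate the single-descriptor idea, the sketch below builds a minimal OCCI-style XML fragment for one compute resource. The element and attribute names (`compute`, `instance_type`, the `link` to a location) are illustrative assumptions, not the exact BonFIRE schema.

```python
import xml.etree.ElementTree as ET

def compute_descriptor(name, instance_type, location):
    """Build a minimal OCCI-style XML descriptor for a compute resource.

    Element names are illustrative; the real BonFIRE schema may differ.
    """
    root = ET.Element("compute")
    ET.SubElement(root, "name").text = name
    ET.SubElement(root, "instance_type").text = instance_type
    # Tie the compute resource to a testbed via a location link.
    ET.SubElement(root, "link", rel="location", href=f"/locations/{location}")
    return ET.tostring(root, encoding="unicode")

# Example: a small instance at a hypothetical testbed location id.
xml = compute_descriptor("client-node", "small", "uk-epcc")
```

An experiment descriptor would aggregate several such fragments, one per resource, and submit them to the API in a single request.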

[Figure: BonFIRE Testbeds Scenarios]

Resource control

BonFIRE gives experimenters control of compute, storage and networking resources, and supports dynamically creating, reading, updating and deleting resources throughout the lifetime of an experiment.
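In an OCCI-style REST API, that create/read/update/delete lifecycle maps onto HTTP verbs against resource URLs. The sketch below only shapes the requests as (method, path, body) tuples; the experiment URL and paths are hypothetical, not the documented BonFIRE endpoints.

```python
# Hypothetical experiment URL; a real one would come from the API.
API_ROOT = "/experiments/1234"

def create_compute(body):
    """POST a new compute resource descriptor into the experiment."""
    return ("POST", f"{API_ROOT}/computes", body)

def read_compute(cid):
    """GET the current state of one compute resource."""
    return ("GET", f"{API_ROOT}/computes/{cid}", None)

def update_compute(cid, body):
    """PUT an updated descriptor over an existing resource."""
    return ("PUT", f"{API_ROOT}/computes/{cid}", body)

def delete_compute(cid):
    """DELETE a resource when the experiment no longer needs it."""
    return ("DELETE", f"{API_ROOT}/computes/{cid}", None)
```

A real client would hand these tuples to an HTTP library along with authentication headers; the point here is only the verb-to-operation mapping.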

- Compute: each testbed offers several compute instance types with different CPU speeds and RAM sizes, all accessed via single sign-on with root access.
- Storage: BonFIRE offers several base VM images of varying storage size; these can be extended with block storage, made persistent if desired.
- Networking: advanced network emulation is possible via the Virtual Wall testbed, giving experimenters fine-grained control of network structure and performance properties.

Compute resources can be configured with application-specific contextualisation information, which is made available to the virtual machine and can be read by software applications after the machine starts.
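A sketch of both halves of that flow, under assumptions: the descriptor carries the key/value pairs in a `context` element (an illustrative element name, not the exact schema), and the booted VM receives them as KEY=value lines in a file, a common contextualisation convention.

```python
import xml.etree.ElementTree as ET

def add_context(compute_xml, pairs):
    """Embed application-specific key/value pairs into a compute
    descriptor. The <context> element name is an assumption."""
    root = ET.fromstring(compute_xml)
    ctx = ET.SubElement(root, "context")
    for key, value in pairs.items():
        ET.SubElement(ctx, key).text = value
    return ET.tostring(root, encoding="unicode")

def parse_context_file(text):
    """Inside the VM: parse KEY=value lines into a dict, as an
    application reading the delivered context might."""
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)
```

For example, a server's IP address could be written into a client node's context at creation time, so the client application knows where to connect as soon as it boots.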

BonFIRE also supports elasticity within an experiment, i.e., resources can be dynamically created, updated and destroyed from a running node of the experiment, including cross-testbed elasticity.
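Cross-testbed elasticity can be pictured as a scaling loop that adds or removes compute nodes against a load threshold, spreading new nodes round-robin across testbeds. The client below is an in-memory stand-in for real API calls, and the location names are hypothetical.

```python
import math

class StubClient:
    """In-memory stand-in for an experiment API client."""
    def __init__(self):
        self.computes = []  # locations of currently running nodes

    def create_compute(self, location):
        self.computes.append(location)

    def delete_compute(self):
        if self.computes:
            self.computes.pop()

def rebalance(client, load, per_node_capacity, locations):
    """Scale the node count to cover the observed load: grow
    round-robin across testbeds, then trim any surplus."""
    target = max(1, math.ceil(load / per_node_capacity))
    while len(client.computes) < target:
        client.create_compute(locations[len(client.computes) % len(locations)])
    while len(client.computes) > target:
        client.delete_compute()
```

A real elasticity controller would run such a loop on a node inside the experiment, issuing create/delete requests to the API as load changes.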

Inria currently offers on-request compute resources in BonFIRE, allowing experimenters to reserve large quantities of physical hardware (162 nodes / 1800 cores available). This gives experimenters the flexibility to perform large-scale experimentation, as well as greater control of experiment variables, since exclusive access to the physical hosts is possible.

Further control of network performance between testbeds will also be possible through planned interconnection with FEDERICA and GÉANT AutoBAHN.