
OpenNebula 5.0 Deployment guide
Release 5.0.2

OpenNebula Systems

Jul 21, 2016

This document is being provided by OpenNebula Systems under the Creative Commons Attribution-NonCommercial-
Share Alike License.

THE DOCUMENT IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IM-
PLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH
THE DOCUMENT.


CONTENTS

1 Cloud Design 1
1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Open Cloud Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 VMware Cloud Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 OpenNebula Provisioning Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2 OpenNebula Installation 19
2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Front-end Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 MySQL Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3 Node Installation 26
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 KVM Node Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 vCenter Node Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.4 Verify your Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4 Authentication Setup 45
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.2 SSH Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3 x509 Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.4 LDAP Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

5 Sunstone Setup 56
5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.2 Sunstone Installation & Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.3 Sunstone Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.4 User Security and Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.5 Cloud Servers Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.6 Configuring Sunstone for Large Deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

6 VMware Infrastructure Setup 88
6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.2 vCenter Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.3 vCenter Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.4 vCenter Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

7 Open Cloud Host Setup 98
7.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
7.2 KVM Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
7.3 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107


7.4 PCI Passthrough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

8 Open Cloud Storage Setup 117
8.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
8.2 Filesystem Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
8.3 Ceph Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
8.4 LVM Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
8.5 Raw Device Mapping (RDM) Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
8.6 iSCSI - Libvirt Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.7 The Kernels & Files Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

9 Open Cloud Networking Setup 135
9.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
9.2 Node Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
9.3 Bridged Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
9.4 802.1Q VLAN Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
9.5 VXLAN Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
9.6 Open vSwitch Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

10 References 143
10.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
10.2 ONED Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
10.3 Logging & Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
10.4 Onedb Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
10.5 Large Deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

CHAPTER ONE

CLOUD DESIGN

1.1 Overview

The first step of building a reliable, useful and successful cloud is to decide on a clear design. This design needs to be aligned with the expected use of the cloud, and it needs to describe which data center components are going to be part of the cloud. This comprises i) all the infrastructure components such as networking, storage, authorization and virtualization back-ends, ii) the planned dimension of the cloud (characteristics of the workload, numbers of users and so on) and iii) the provisioning workflow, i.e. how end users are going to be isolated and how they are going to use the cloud.

In order to get the most out of an OpenNebula Cloud, we recommend that you create a plan with the features, performance, scalability and high availability characteristics you want in your deployment. This Chapter provides information to plan an OpenNebula cloud based on KVM or vCenter. With this information, you will be able to easily architect and dimension your deployment, as well as understand the technologies involved in the management of virtualized resources and their relationship.

1.1.1 How Should I Read This Chapter

This is the first Chapter to read, as it introduces the needed concepts to correctly define a cloud architecture.

Within this Chapter, as a first step, a design of the cloud and its dimension should be drafted. For KVM clouds proceed to Open Cloud Architecture and for vCenter clouds read VMware Cloud Architecture.

Then you could read the OpenNebula Provisioning Model to identify the model you want to use to provision resources to end users. In a small installation with a few hosts, you can skip this provisioning model guide and use OpenNebula without giving much thought to infrastructure partitioning and provisioning. But for medium and large deployments you will probably want to provide some level of isolation and structure.

Once the cloud architecture has been designed, the next step is to learn how to install the OpenNebula front-end.

1.1.2 Hypervisor Compatibility

Section                          Compatibility
Open Cloud Architecture          This Section applies to KVM.
VMware Cloud Architecture        This Section applies to vCenter.
OpenNebula Provisioning Model    This Section applies to both KVM and vCenter.

1.2 Open Cloud Architecture

Enterprise cloud computing is the next step in the evolution of data center (DC) virtualization. OpenNebula is a simple but feature-rich and flexible solution to build and manage enterprise clouds and virtualized DCs, that combines existing virtualization technologies with advanced features for multi-tenancy, automatic provision and elasticity.

OpenNebula follows a bottom-up approach driven by the real needs of sysadmins, devops and users.

1.2.1 Architectural Overview

OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture with a front-end, and a set of hosts where Virtual Machines (VM) will be executed. There is at least one physical network joining all the hosts with the front-end.

The basic components of an OpenNebula system are:

• Front-end that executes the OpenNebula services.
• Hypervisor-enabled hosts that provide the resources needed by the VMs.
• Datastores that hold the base images of the VMs.
• Physical networks used to support basic services such as interconnection of the storage servers and OpenNebula control operations, and VLANs for the VMs.

OpenNebula presents a highly modular architecture that offers broad support for commodity and enterprise-grade hypervisor, monitoring, storage, networking and user management services. This Section briefly describes the different choices that you can make for the management of the different subsystems. If your specific services are not supported, we recommend checking the drivers available in the Add-on Catalog. We also provide information and support about how to develop new drivers.

1.2.2 Dimensioning the Cloud

The dimension of a cloud infrastructure can be directly inferred from the expected workload in terms of VMs that the cloud infrastructure must sustain. This workload is also tricky to estimate, but this is a crucial exercise to build an efficient cloud.

A cloud architecture is defined by three components: storage, networking and virtualization. Therefore, the main aspects to take into account when dimensioning the OpenNebula cloud are the following.

OpenNebula front-end

The minimum recommended specs for the OpenNebula front-end are:

Resources      Minimum Recommended configuration
Memory         2 GB
CPU            1 CPU (2 cores)
Disk Size      100 GB
Network        2 NICS

The maximum number of servers (virtualization hosts) that can be managed by a single OpenNebula instance strongly depends on the performance and scalability of the underlying platform infrastructure, mainly the storage subsystem. The general recommendation is no more than 500 servers managed by a single instance, but there are users with 1,000 servers in each zone. Related to this, read the section about how to tune OpenNebula for large deployments.

KVM nodes

Regarding the dimensions of the KVM virtualization nodes:

• CPU: without overcommitment, each CPU core assigned to a VM must exist as a physical CPU core. By example, for a workload of 40 VMs with 2 CPUs, the cloud will need 80 physical CPUs. These 80 physical CPUs can be spread among different hosts: 10 servers with 8 cores each, or 5 servers of 16 cores each. With overcommitment, however, CPU dimension can be planned ahead, using the CPU and VCPU attributes: CPU states physical CPUs assigned to the VM, while VCPU states virtual CPUs to be presented to the guest OS.

• MEMORY: Planning for memory is straightforward, as by default there is no overcommitment of memory in OpenNebula. It is always a good practice to count 10% of overhead by the hypervisor (this is not an absolute upper limit, it depends on the hypervisor). So, in order to sustain a VM workload of 45 VMs with 2GB of RAM each, 90GB of physical memory is needed. The number of hosts is important, as each one will incur a 10% overhead due to the hypervisors. For instance, 10 hypervisors with 10GB of RAM each will contribute 9GB each (10% of 10GB = 1GB), so they will be able to sustain the estimated workload. The rule of thumb is having at least 1GB per core, but this also depends on the expected workload.

Storage

It is important to understand how OpenNebula uses storage, mainly the difference between the system and image datastores:

• The image datastore is where OpenNebula stores all the registered images that can be used to create VMs, so the rule of thumb is to devote enough space for all the images that OpenNebula will have registered.

• The system datastore is where the VMs that are currently running store their disks. It is trickier to estimate correctly, since volatile disks come into play with no counterpart in the image datastore (volatile disks are created on the fly in the hypervisor).

One valid approach is to limit the storage available to users by defining quotas in the number of maximum VMs and also the Max Volatile Storage a user can demand, and ensuring enough system and image datastore space to comply with the limit set in the quotas. In any case, OpenNebula allows cloud administrators to add more system and image datastores if needed.

Dimensioning storage is a critical aspect, as it is usually the cloud bottleneck. It very much depends on the underlying technology. As an example, in Ceph for a medium size cloud at least three servers are needed for storage, with 5 disks of 1TB each, 16GB of RAM, 2 CPUs of 4 cores each and at least 2 NICs.

Network

Networking needs to be carefully designed to ensure reliability in the cloud infrastructure. The recommendation is having 2 NICs in the front-end (public and service), or 3 NICs depending on the storage backend, as access to the storage network may be needed, and 4 NICs present in each virtualization node: private, public, service and storage networks. Fewer NICs can be needed depending on the storage and networking configuration.
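As a rough aid for the CPU and memory arithmetic above, the following shell sketch estimates how many hosts a given workload needs, including the ~10% hypervisor memory overhead. The workload and host figures are invented for illustration and are not recommendations; adjust them to your own plan.

#!/bin/bash
# Illustrative sizing helper (all figures below are assumptions, not defaults)
VMS=45              # expected number of VMs
VM_MEM_GB=2         # memory per VM
VM_CPU=2            # physical cores per VM (CPU attribute, no overcommitment)
HOST_MEM_GB=64      # memory per physical host
HOST_CORES=16       # cores per physical host

# Roughly 10% of each host's memory is set aside for the hypervisor
USABLE_MEM=$(( HOST_MEM_GB * 90 / 100 ))

# Ceiling division: hosts needed to fit the workload by memory and by CPU
HOSTS_BY_MEM=$(( (VMS * VM_MEM_GB + USABLE_MEM - 1) / USABLE_MEM ))
HOSTS_BY_CPU=$(( (VMS * VM_CPU + HOST_CORES - 1) / HOST_CORES ))

echo "Hosts needed by memory: $HOSTS_BY_MEM"
echo "Hosts needed by CPU   : $HOSTS_BY_CPU"
echo "Plan for the larger of the two figures."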

1.2.3 Front-End

The machine that holds the OpenNebula installation is called the front-end. This machine needs network connectivity to all the hosts, and possibly access to the storage Datastores (either by direct mount or network). The base installation of OpenNebula takes less than 150MB.

OpenNebula services include:

• Management daemon (oned) and scheduler (mm_sched)
• Web interface server (sunstone-server)
• Advanced components: OneFlow, OneGate, econe, ...

Note: Note that these components communicate through XML-RPC and may be installed in different machines for security or performance reasons.

There are several certified platforms to act as front-end for each version of OpenNebula. Refer to the platform notes and choose the one that better fits your needs.

OpenNebula's default database uses SQLite. If you are planning a production or medium to large scale deployment, you should consider using MySQL.

If you are interested in setting up a highly available cluster for OpenNebula, check the High Availability OpenNebula Section.

If you need to federate several datacenters, with a different OpenNebula instance managing the resources of each but needing a common authentication schema, check the Federation Section.

1.2.4 Monitoring

The monitoring subsystem gathers information relative to the hosts and the virtual machines, such as the host status, basic performance indicators, as well as VM status and capacity consumption. This information is collected by executing a set of static probes provided by OpenNebula. The information is sent according to the following process: each host periodically sends monitoring data to the front-end, which collects it and processes it in a dedicated module. This model is highly scalable and its limit (in terms of number of VMs monitored per second) is bounded by the performance of the server running oned and the database server.

Please check the Monitoring Section for more details.

1.2.5 Virtualization Hosts

The hosts are the physical machines that will run the VMs. There are several certified platforms to act as nodes for each version of OpenNebula. Refer to the platform notes and choose the one that better fits your needs. The Virtualization Subsystem is the component in charge of talking with the hypervisor installed in the hosts and taking the actions needed for each step in the VM life-cycle.

OpenNebula natively supports one open source hypervisor, the KVM hypervisor, and OpenNebula is configured by default to interact with hosts running KVM.

Ideally, the configuration of the nodes will be homogeneous in terms of the software components installed, the oneadmin administration user, accessible storage and network connectivity. This may not always be the case, and homogeneous hosts can be grouped in OpenNebula clusters.

If you are interested in fail-over protection against hardware and operating system outages within your virtualized IT environment, check the Virtual Machines High Availability Section.

1.2.6 Storage

OpenNebula uses Datastores to store VMs' disk images. A datastore is any storage medium, typically backed by SAN/NAS servers. In general, each datastore has to be accessible through the front-end using any suitable technology: NAS, SAN or direct attached storage.

When a VM is deployed, its images are transferred from the datastore to the hosts. Depending on the actual storage technology used, it can mean a real transfer, a symbolic link or setting up an LVM volume.

OpenNebula is shipped with 3 different datastore classes:

• System Datastores: to hold images for running VMs. Depending on the storage technology used, these temporal images can be complete copies of the original image, qcow deltas or simple filesystem links.

• Image Datastores: to store the disk images repository. Disk images are moved, or cloned to/from the System Datastore when the VMs are deployed or shut down, or when disks are attached or snapshotted.

• File Datastore: a special datastore used to store plain files, not disk images. These files can be used as kernels, ramdisks or context files.

Image datastores can be of different types, depending on the underlying storage technology:

• Filesystem: to store disk images in file form. There are three types: ssh, shared and qcow.

• LVM: to use LVM volumes instead of plain files to hold the Virtual Images. This reduces the overhead of having a file-system in place and thus increases performance.

• Ceph: to store disk images using Ceph block devices.

Warning: Default: The default system and images datastores are configured to use a filesystem with the ssh transfer drivers.

Please check the Storage Chapter for more details.
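As an illustration of how the datastore concepts above map to actual objects, a filesystem image datastore using the ssh transfer driver could be registered roughly as follows. The datastore name is made up for the example, and the full set of attributes is documented in the Storage Chapter.

# ds.conf -- example image datastore definition (illustrative name and values)
$ cat > ds.conf <<'EOT'
NAME   = "local_images"
TYPE   = "IMAGE_DS"
DS_MAD = "fs"       # filesystem datastore driver
TM_MAD = "ssh"      # copy images to the hosts over ssh
EOT

$ onedatastore create ds.conf
$ onedatastore list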

1.2.7 Networking

OpenNebula provides an easily adaptable and customizable network subsystem in order to integrate the specific network requirements of existing datacenters. At least two different physical networks are needed:

• Service Network: used by the OpenNebula front-end daemons to access the hosts in order to manage and monitor the hypervisors, and move image files. It is highly recommended to install a dedicated network for this purpose.

• Instance Network: offers network connectivity to the VMs across the different hosts. To make an effective use of your VM deployments, you will probably need to make one or more physical networks accessible to them.

The OpenNebula administrator may associate one of the following drivers to each Host:

• dummy (default): doesn't perform any network operation, and firewalling rules are also ignored.
• fw: firewalling rules are applied, but networking isolation is ignored.
• 802.1Q: restrict network access through VLAN tagging, which requires support by the hardware switches.
• ebtables: restrict network access through Ebtables rules. No special hardware configuration required.
• ovswitch: restrict network access with Open vSwitch Virtual Switch.
• vxlan: segment a VLAN in isolated networks using the VXLAN encapsulation protocol.

Please check the Networking Chapter to find out more information about the networking technologies supported by OpenNebula.
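To make the driver choice above more concrete, here is a rough sketch of how a VLAN-isolated Virtual Network could be defined and registered from the CLI. The network name, physical device, VLAN ID and address range are invented for the example; the Networking Chapter documents the exact attributes expected by each driver.

# vnet.conf -- example 802.1Q virtual network (illustrative values)
$ cat > vnet.conf <<'EOT'
NAME    = "private-vlan50"
VN_MAD  = "802.1Q"      # driver associated to the hosts
PHYDEV  = "eth1"        # physical NIC carrying the tagged traffic
VLAN_ID = "50"
BRIDGE  = "br50"
AR      = [ TYPE = "IP4", IP = "192.168.50.10", SIZE = "100" ]
EOT

$ onevnet create vnet.conf
$ onevnet list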

1.2.8 Authentication

The following authentication methods are supported to access OpenNebula:

• Built-in User/Password
• SSH Authentication
• X509 Authentication
• LDAP Authentication (and Active Directory)

Warning: Default: OpenNebula comes by default with an internal built-in user/password authentication.

Please check the Authentication Chapter to find out more information about the authentication technologies supported by OpenNebula.

1.2.9 Advanced Components

Once you have an OpenNebula cloud up and running, you can install the following advanced components:

• Multi-VM Applications and Auto-scaling: OneFlow allows users and administrators to define, execute and manage multi-tiered applications, or services composed of interconnected Virtual Machines with deployment dependencies between them. Each group of Virtual Machines is deployed and managed as a single entity, and is completely integrated with the advanced OpenNebula user and group management.

• Cloud Bursting: Cloud bursting is a model in which the local resources of a Private Cloud are combined with resources from remote Cloud providers. Such support for cloud bursting enables highly scalable hosting environments.

• Public Cloud: Cloud interfaces can be added to your Private Cloud if you want to provide partners or external users with access to your infrastructure, or to sell your overcapacity. The following interface provides simple and remote management of cloud (virtual) resources at a high abstraction level: Amazon EC2 and EBS APIs.

• Application Insight: OneGate allows Virtual Machine guests to push monitoring information to OpenNebula. Users and administrators can use it to gather metrics, detect problems in their applications, and trigger OneFlow auto-scaling rules.

1.3 VMware Cloud Architecture

OpenNebula is intended for companies willing to create a self-service cloud environment on top of their VMware infrastructure without having to abandon their investment in VMware and retool the entire stack. In these environments, OpenNebula seamlessly integrates with existing vCenter infrastructures to leverage advanced features -such as vMotion, HA or DRS scheduling- provided by the VMware vSphere product family. OpenNebula exposes a multi-tenant, cloud-like provisioning layer on top of vCenter, including features like virtual data centers, data center federation or hybrid cloud computing to connect in-house vCenter infrastructures with public clouds.

OpenNebula over vCenter is intended for companies that want to keep VMware management tools, procedures and workflows. For these companies, throwing away VMware and retooling the entire stack is not the answer. However, as they consider moving beyond virtualization toward a private cloud, they can choose to either invest more in VMware, or proceed on a tactically challenging but strategically rewarding path of open.

1.3.1 Architectural Overview

OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture with a front-end, and a set of vCenter instances grouping ESX hosts where Virtual Machines (VM) will be executed. There is at least one physical network joining all the vCenters and ESX hosts with the front-end. Connection from the front-end to vCenter is for management purposes, whereas the connection from the front-end to the ESX hosts is to support the VNC connections.

The VMware vCenter drivers enable OpenNebula to access one or more vCenter servers that manage one or more ESX Clusters. Each ESX Cluster is presented in OpenNebula as an aggregated hypervisor. Note that OpenNebula scheduling decisions are therefore made at ESX Cluster level; vCenter then uses the DRS component to select the actual ESX host and Datastore to deploy the Virtual Machine.

A cloud architecture is defined by three components: storage, networking and virtualization. Therefore, the basic components of an OpenNebula cloud are:

• Front-end that executes the OpenNebula services.
• Hypervisor-enabled hosts that provide the resources needed by the VMs.
• Datastores that hold the base images of the VMs.
• Physical networks used to support basic services such as the interconnection of the VMs.

OpenNebula presents a highly modular architecture that offers broad support for commodity and enterprise-grade hypervisor, monitoring, storage, networking and user management services. This Section briefly describes the different choices that you can make for the management of the different subsystems. If your specific services are not supported, we recommend checking the drivers available in the Add-on Catalog. We also provide information and support about how to develop new drivers.

1.3.2 Dimensioning the Cloud

The dimension of a cloud infrastructure can be directly inferred from the expected workload in terms of VMs that the cloud infrastructure must sustain. This workload is also tricky to estimate, but this is a crucial exercise to build an efficient cloud.

OpenNebula front-end

The minimum recommended specs for the OpenNebula front-end are:

Resource       Minimum Recommended configuration
Memory         2 GB
CPU            1 CPU (2 cores)
Disk Size      100 GB
Network        2 NICS

When running on a front-end with the minimums described in the above table, OpenNebula is able to manage a vCenter infrastructure of the following characteristics:

• Up to 4 vCenters
• Up to 40 ESXs managed by each vCenter
• Up to 1,000 VMs in total, each vCenter managing up to 250 VMs

ESX nodes

Regarding the dimensions of the ESX virtualization nodes:

• CPU: without overcommitment, each CPU core assigned to a VM must exist as a physical CPU core. By example, for a workload of 40 VMs with 2 CPUs, the cloud will need 80 physical CPUs. These 80 physical CPUs can be spread among different hosts: 10 servers with 8 cores each, or 5 servers of 16 cores each. With overcommitment, however, CPU dimension can be planned ahead, using the CPU and VCPU attributes: CPU states physical CPUs assigned to the VM, while VCPU states virtual CPUs to be presented to the guest OS.

• MEMORY: Planning for memory is straightforward, as by default there is no overcommitment of memory in OpenNebula. It is always a good practice to count 10% of overhead by the hypervisor (this is not an absolute upper limit, it depends on the hypervisor). So, in order to sustain a VM workload of 45 VMs with 2GB of RAM each, 90GB of physical memory is needed. The number of hosts is important, as each one will incur a 10% overhead due to the hypervisors. For instance, 10 hypervisors with 10GB of RAM each will contribute 9GB each (10% of 10GB = 1GB), so they will be able to sustain the estimated workload. The rule of thumb is having at least 1GB per core, but this also depends on the expected workload.

Storage

Dimensioning storage is a critical aspect, as it is usually the cloud bottleneck. OpenNebula can manage any datastore that is mounted in the ESX and visible in vCenter. The datastore used by a VM can be fixed by the cloud admin or delegated to the cloud user. It is important to ensure that enough space is available for new VMs, otherwise their creation process will fail. One valid approach is to limit the storage available to users by defining quotas in the number of maximum VMs, and ensuring enough datastore space to comply with the limit set in the quotas. In any case, OpenNebula allows cloud administrators to add more datastores if needed.

Network

Networking needs to be carefully designed to ensure reliability in the cloud infrastructure. The recommendation is having 2 NICs in the front-end (service and public network) and 4 NICs present in each ESX node: private, public, service and storage networks. Fewer NICs can be needed depending on the storage and networking configuration.

1.3.3 Front-End

The machine that holds the OpenNebula installation is called the front-end. This machine needs network connectivity to all the vCenter and ESX hosts. The base installation of OpenNebula takes less than 150MB.

OpenNebula services include:

• Management daemon (oned) and scheduler (mm_sched)
• Web interface server (sunstone-server)
• Advanced components: OneFlow, OneGate, econe, ...

Note: Note that these components communicate through XML-RPC and may be installed in different machines for security or performance reasons.

There are several certified platforms to act as front-end for each version of OpenNebula. Refer to the platform notes and choose the one that better fits your needs.

OpenNebula's default database uses SQLite. If you are planning a production or medium to large scale deployment, you should consider using MySQL.

If you are interested in setting up a highly available cluster for OpenNebula, check the High Availability OpenNebula Section.

1.3.4 Monitoring

The monitoring subsystem gathers information relative to the hosts and the virtual machines, such as the host status, basic performance indicators, as well as VM status and capacity consumption. This information is collected by executing a set of probes in the front-end provided by OpenNebula.

Please check the Monitoring Section for more details.

1.3.5 Virtualization Hosts

The VMware vCenter drivers enable OpenNebula to access one or more vCenter servers that manage one or more ESX Clusters. Each ESX Cluster is presented in OpenNebula as an aggregated hypervisor. The Virtualization Subsystem is the component in charge of talking with vCenter and taking the actions needed for each step in the VM life-cycle. All the management operations are issued by the front-end to vCenter, except the VNC connection that is performed directly from the front-end to the ESX where a particular VM is running.

vCenter drivers need to be configured in the OpenNebula front-end.

If you are interested in fail-over protection against hardware and operating system outages within your virtualized IT environment, check the Virtual Machines High Availability Section.

1.3.6 Storage

OpenNebula interacts as a consumer of vCenter storage, and as such, supports all the storage devices supported by ESX. When a VM is instantiated from a VM Template, the datastore associated with the VM template is chosen. If DRS is enabled, then vCenter will pick the optimal Datastore to deploy the VM. Alternatively, the datastore used by a VM can be fixed by the cloud admin or delegated to the cloud user. vCenter/ESX Datastores can be represented in OpenNebula to create, clone and/or upload VMDKs.

The vCenter/ESX datastore representation in OpenNebula is described in the vCenter Datastore Section.
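As an illustration, vCenter resources such as ESX clusters (Hosts) and datastores are typically imported into OpenNebula with the onevcenter tool. A rough sketch of an import session is shown below; the vCenter address and credentials are placeholders, and the exact interactive dialog is described in the vCenter Node Installation section.

# Import ESX clusters and datastores from a vCenter instance (placeholders, run as oneadmin)
$ onevcenter hosts      --vcenter <vcenter_host> --vuser '<user>' --vpass '<password>'
$ onevcenter datastores --vcenter <vcenter_host> --vuser '<user>' --vpass '<password>'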

1.3.7 Networking

Networking in OpenNebula is handled by creating or importing Virtual Network representations of vCenter Networks and Distributed vSwitches. In this way, new VMs with defined network interfaces will be bound by OpenNebula to these Networks and/or Distributed vSwitches. OpenNebula can create a new logical layer on top of these vCenter Networks and Distributed vSwitches; in particular, three types of Address Ranges can be defined per Virtual Network representing the vCenter network resources: plain Ethernet, IPv4 and IPv6. This networking information can be passed to the VMs through the contextualization process.

Please check the Networking Chapter to find out more information about the networking support in vCenter infrastructures by OpenNebula.

1.3.8 Authentication

The following authentication methods are supported to access OpenNebula:

• Built-in User/Password
• SSH Authentication
• X509 Authentication
• LDAP Authentication (and Active Directory)

Warning: Default: OpenNebula comes by default with an internal built-in user/password authentication.

Please check the Authentication Chapter to find out more information about the authentication technologies supported by OpenNebula.

1.3.9 Multi-Datacenter Deployments

OpenNebula interacts with the vCenter instances by interfacing with its SOAP API exclusively. This characteristic enables architectures where the OpenNebula instance and the vCenter environment are located in different datacenters. A single OpenNebula instance can orchestrate several vCenter instances remotely located in different data centers. Connectivity between data centers needs to have low latency in order to have a reliable management of vCenter from OpenNebula.

When administration domains need to be isolated or the interconnection between datacenters does not allow a single controlling entity, OpenNebula can be configured in a federation. Each OpenNebula instance of the federation is called a Zone, one of them configured as master and the others as slaves. An OpenNebula federation is a tightly coupled integration: all the instances will share the same user accounts, groups, and permissions configuration. Federation allows end users to consume resources allocated by the federation administrators regardless of their geographic location. The integration is seamless, meaning that a user logged into the Sunstone web interface of a Zone will not have to log out and enter the address of another Zone. Sunstone allows changing the active Zone at any time, and it will automatically redirect the requests to the right OpenNebula at the target Zone. For more information, check the Federation Section.

1.3.10 Advanced Components

Once you have an OpenNebula cloud up and running, you can install the following advanced components:

• Multi-VM Applications and Auto-scaling: OneFlow allows users and administrators to define, execute and manage services composed of interconnected Virtual Machines with deployment dependencies between them. Each group of Virtual Machines is deployed and managed as a single entity, and is completely integrated with the advanced OpenNebula user and group management.

• Cloud Bursting: Cloud bursting is a model in which the local resources of a Private Cloud are combined with resources from remote Cloud providers. Such support for cloud bursting enables highly scalable hosting environments.

• Public Cloud: Cloud interfaces can be added to your Private Cloud if you want to provide partners or external users with access to your infrastructure, or to sell your overcapacity. The following interface provides simple and remote management of cloud (virtual) resources at a high abstraction level: Amazon EC2 and EBS APIs.

• Application Insight: OneGate allows Virtual Machine guests to push monitoring information to OpenNebula. Users and administrators can use it to gather metrics, detect problems in their applications, and trigger OneFlow auto-scaling rules.

1.4 OpenNebula Provisioning Model

In a small installation with a few hosts, you can use OpenNebula without giving much thought to infrastructure partitioning and provisioning. But for medium and large deployments you will probably want to provide some level of isolation and structure.

This Section is meant for cloud architects, builders and administrators, to help them understand the OpenNebula model for managing and provisioning virtual resources. This model is a result of our collaboration with our user community throughout the life of the project.

1.4.1 The Infrastructure Perspective

Common large IT shops have multiple Data Centers (DCs), each one running its own OpenNebula instance and consisting of several physical Clusters of infrastructure resources (Hosts, Networks and Datastores). These Clusters could present different architectures and software/hardware execution environments to fulfill the needs of different workload profiles. Moreover, many organizations have access to external public clouds to build hybrid cloud scenarios where the private capacity of the Data Centers is supplemented with resources from external clouds, like Amazon AWS, to address peaks of demand.

For example, you could have two Data Centers in different geographic locations, Europe and USA West Coast, and an agreement for cloudbursting with a public cloud provider, such as Amazon and/or Azure. Each Data Center runs its own Zone or full OpenNebula deployment. Multiple OpenNebula Zones can be configured as a federation, and in this case they will share the same user accounts, groups, and permissions across Data Centers.

1.4.2 The Organizational Perspective

Users are organized into Groups (similar to what other environments call Projects, Domains, Tenants, etc.). A Group is an authorization boundary that can be seen as a business unit if you are considering it as a private cloud or as a complete new company if it is a public cloud. While Clusters are used to group Physical Resources according to common characteristics such as networking topology or physical location, Virtual Data Centers (VDCs) allow to create "logical" pools of Physical Resources (which could belong to different Clusters and Zones) and allocate them to user Groups.

A VDC is a fully-isolated virtual infrastructure environment where a Group of users (or optionally several Groups of users), under the control of a Group admin, can create and manage compute and storage capacity. The users in the Group, including the Group admin, would only see the virtual resources and not the underlying physical infrastructure. The Physical Resources allocated to the Group are managed by the cloud administrator through a VDC. These resources grouped in the VDC can be dedicated exclusively to the Group, providing isolation at the physical level too.

The privileges of the Group users and the admin regarding the operations over the virtual resources created by other users can be configured. For example, in the Advanced Cloud Provisioning Case described below, the users can instantiate virtual machine templates to create their machines, while the admins of the Group have full control over other users' resources and can also create new users in the Group.

Users can access their resources through any of the existing OpenNebula interfaces, such as the CLI, Sunstone Cloud View, OCA, or the AWS APIs. Group admins can manage their Groups through the CLI or the Group Admin View in Sunstone. Cloud administrators can manage the Groups through the CLI or Sunstone.


The Cloud provisioning model based on VDCs enables an integrated, comprehensive framework to dynamically
provision the infrastructure resources in large multi-datacenter environments to different customers, business units or
groups. This brings several benefits:
• Partitioning of cloud Physical Resources between Groups of users
• Complete isolation of Users, organizations or workloads
• Allocation of Clusters with different levels of security, performance or high availability
• Containers for the execution of software-defined data centers
• Way of hiding Physical Resources from Group members
• Simple federation, scalability and cloudbursting of private cloud infrastructures beyond a single cloud instance
and data center

1.4.3 Examples of Provisioning Use Cases

The following are common enterprise use cases in large cloud computing deployments:
• On-premise Private Clouds Serving Multiple Projects, Departments, Units or Organizations. On-premise
private clouds in large organizations require powerful and flexible mechanisms to manage the access privileges
to the virtual and physical infrastructure and to dynamically allocate the available resources. In these scenarios,
the Cloud Administrator would define a VDC for each Department, dynamically allocating resources according
to their needs, and delegating the internal administration of the Group to the Department IT Administrator.
• Cloud Providers Offering Virtual Private Cloud Computing. Cloud providers providing customers with a fully-
configurable and isolated environment where they have full control and capacity to administer its users and
resources. This combines a public cloud with the control usually seen in a personal private cloud system.
For example, you can think of Web Development, Human Resources, and Big Data Analysis as business units represented
by Groups in a private OpenNebula cloud, and allocate them resources from your DCs and public clouds in order to
create three different VDCs (a CLI sketch of this allocation follows the list below).
• VDC BLUE: VDC that allocates (ClusterA-DC_West_Coast + Cloudbursting) to Web Development
• VDC RED: VDC that allocates (ClusterB-DC_West_Coast + ClusterA-DC_Europe + Cloudbursting) to Human
Resources
• VDC GREEN: VDC that allocates (ClusterC-DC_West_Coast + ClusterB-DC_Europe) to Big Data Analysis
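A minimal CLI sketch of how the VDC BLUE case above could be wired up, assuming the Group and the Clusters already exist. All names and IDs below are made up for the example, and the quota step is optional:

# Create the Group for the business unit and a VDC for it (illustrative names)
$ onegroup create web-development
$ onevdc create blue

# Attach the Group to the VDC and allocate physical resources to it
$ onevdc addgroup   blue web-development
$ onevdc addcluster blue <zone_id> <cluster_id>    # e.g. ClusterA in DC_West_Coast

# Optionally cap what the Group can consume (opens an editor with the quota template)
$ onegroup quota web-development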



1.4.4 Cloud Provisioning Scenarios

OpenNebula has three predefined User roles to implement three typical enterprise cloud scenarios:
• Data center infrastructure management
• Simple cloud provisioning model
• Advanced cloud provisioning model
In these three scenarios, Cloud Administrators manage the physical infrastructure, create Users and VDCs, prepare
base templates and images for Users, etc.
Cloud Administrators typically access the cloud using the CLI or the Admin View of Sunstone.



Role Capabilities
Cloud Admin.
• Operates the Cloud infrastructure (i.e. computing
nodes, networking fabric, storage servers)
• Creates and manages OpenNebula infrastructure
resources: Hosts, Virtual Networks, Datastores
• Creates and manages Multi-VM Applications
(Services)
• Creates new Groups and VDCs
• Assigns Groups and physical resources to a VDC
and sets quota limits
• Defines base instance types to be used by the
users. These types define the capacity of the VMs
(memory, CPU and additional storage) and connectivity.
• Prepares VM images to be used by the users
• Monitors the status and health of the cloud
• Generates activity reports

Data Center Infrastructure Management

This model is used to manage data center virtualization and to integrate and federate existing IT assets that can be in
different data centers. In this usage model, Users are familiar with virtualization concepts. Except for the infrastructure
resources, the web interface offers the same operations available to the Cloud Admin. These are “Advanced Users”
that could be considered also as “Limited Cloud Administrators”.
Users can use the templates and images pre-defined by the cloud administrator, but usually are also allowed to create
their own templates and images. They are also able to manage the life-cycle of their resources, including advanced
features that may harm the VM guests, like hot-plugging of new disks, resize of Virtual Machines, modify boot
parameters, etc.
Groups are used by the Cloud Administrator to isolate users, which are combined with VDCs to have allocated
resources, but are not offered on-demand.
These “Advanced Users” typically access the cloud by using the CLI or the User View of Sunstone. This is not the
default model configured for the group Users.
Role Capabilities
Advanced User
• Instantiates VMs using their own templates
• Creates new templates and images
• Manages their VMs, including advanced life-
cycle features
• Creates and manages Multi-VM Applications (Services)
• Check their usage and quotas
• Upload SSH keys to access the VMs

Simple Cloud Provisioning

In the simple infrastructure provisioning model, the Cloud offers infrastructure as a service to individual Users. Users
are considered as “Cloud Users” or “Cloud Consumers”, being much more limited in their operations. These Users
access a very intuitive simplified web interface that allows them to launch Virtual Machines from predefined Templates.


They can access their VMs, and perform basic operations like shutdown. The changes made to a VM disk can be saved back, but new Images cannot be created from scratch.

Groups are used by the Cloud Administrator to isolate users, which are combined with VDCs to have allocated resources, but are not offered on-demand.

These "Cloud Users" typically access the cloud by using the Cloud View of Sunstone. This is the default model configured for the group Users.

Role Capabilities
Cloud User
• Instantiates VMs using the templates defined by the Cloud Admins and the images defined by the Cloud Admins or Group Admins.
• Instantiates VMs using their own Images saved from a previous running VM
• Manages their VMs, including
  – reboot
  – power off/on (short-term switching-off)
  – delete
  – save a VM into a new Template
  – obtain basic monitor information and status (including IP addresses)
• Delete any previous VM template and disk snapshot
• Check user account usage and quotas
• Upload SSH keys to access the VMs

Advanced Cloud Provisioning

The advanced provisioning model is an extension of the previous one where the cloud provider offers VDCs on demand to Groups of Users (projects, companies, departments or business units). Each Group can define one or more users as Group Admins. These admins can create new users inside the Group, and also manage the resources of the rest of the users. A Group Admin may, for example, shut down a VM from another user to free group quota usage. These Group Admins typically access the cloud by using the Group Admin View of Sunstone.

The Group Users have the capabilities described in the previous scenario and typically access the cloud by using the Cloud View of Sunstone.

Role Capabilities
Group Admin.
• Creates new users in the Group
• Operates on the Group's virtual machines and disk images
• Share Saved Templates with the members of the Group
• Checks Group usage and quotas

CHAPTER TWO

OPENNEBULA INSTALLATION

2.1 Overview

The Front-end is the central part of an OpenNebula installation. This is the machine where the server software is installed and where you connect to manage your cloud. It can be a physical node or a Virtual Machine.

2.1.1 How Should I Read This Chapter

Before reading this chapter make sure you have read and understood the Cloud Design chapter.

The aim of this chapter is to give you a quick-start guide to deploy OpenNebula. This is the simplest possible installation, but it is also the foundation for a more complex setup, with Advanced Components (like Host and VM High Availability, Cloud Bursting, etc.).

First you should read the Front-end Installation section. Note that by default it uses a SQLite database that is not recommended for production, so, if this is not a small proof of concept, you should enable MySQL while following the Installation section.

After reading this chapter, read the Node Installation chapter next in order to add hypervisors to your cloud.

2.1.2 Hypervisor Compatibility

Section                        Compatibility
Front-end Installation         This Section applies to both KVM and vCenter.
MySQL Setup                    This Section applies to both KVM and vCenter.
Scheduler                      This Section applies to both KVM and vCenter.

2.2 Front-end Installation

This page shows you how to install OpenNebula from the binary packages.

Using the packages provided in our site is the recommended method, to ensure the installation of the latest version and to avoid possible package divergences between different distributions. There are two alternatives here: you can add our package repositories to your system, or visit the software menu to download the latest package for your Linux distribution.

If there are no packages for your distribution, head to the Building from Source Code guide.

Add OpenNebula Repositories CentOS/RHEL 7 To add OpenNebula repository execute the following as root: # cat << EOT > /etc/yum.0.org/repo/5.2.d/opennebula.opennebula.org/repo/5.04 # echo "deb http://downloads. Release 5.opennebula.04 stable opennebula" ˓→> /etc/apt/sources.list Ubuntu 16.2.list Ubuntu 14.repos.1 Step 1.2 2.org/repo/5. You can disable it changing in the file /etc/selinux/config this line: SELINUX=disabled After this file is changed reboot the machine.04 stable opennebula" ˓→> /etc/apt/sources.list.list 2.d/opennebula.opennebula.opennebula.list. Disable SElinux in CentOS/RHEL 7 SElinux can cause some problems.0/Ubuntu/14.2 Step 2.org/repo/Debian/repo.d/opennebula.2.key | apt-key add - Debian 8 # echo "deb http://downloads.2.3 Step 3.0/Ubuntu/16. 2.http://downloads. OpenNebula 5.repo [opennebula] name=opennebula baseurl=http://downloads.0/Debian/8 stable opennebula" > / ˓→etc/apt/sources.org/repo/5.0/CentOS/7/x86_64 enabled=1 gpgcheck=0 EOT Debian/Ubuntu To add OpenNebula repository on Debian/Ubuntu execute as root: # wget -q -O.d/opennebula. Front-end Installation 20 . like not trusting oneadmin user’s SSH credentials.04 # echo "deb http://downloads.opennebula. In CentOS this can be done with the following command: 2. Installing the Software Installing on CentOS/RHEL 7 Before installing: • Activate the EPEL repo.list.0 Deployment guide.

2. OpenNebula 5. Front-end Installation 21 . • opennebula-sunstone: Sunstone (the GUI) and the EC2 API. • ruby-opennebula: Ruby API. distributed in the various components that conform OpenNebula.0.0 Deployment guide. execute the following as root. To install a CentOS/RHEL OpenNebula Front-end with packages from our repository. • opennebula-java: Java Bindings. Installing on Debian/Ubuntu To install OpenNebula on a Debian/Ubuntu Front-end using packages from our repositories execute as root: # apt-get update # apt-get install opennebula opennebula-sunstone opennebula-gate opennebula-flow Debian/Ubuntu Package Description These are the packages available for these distributions: • opennebula-common: Provides the user and common files. scheduler. • opennebula-gate: OneGate server that enables communication between VMs and OpenNebula. libvirt and kvm. • opennebula-flow: OneFlow manages services and elasticity. Note: The files located in /etc/one and /var/lib/one/remotes are marked as configuration files. etc. • opennebula-ruby: Ruby Bindings. • opennebula-node-kvm: Meta-package that installs the oneadmin user. Release 5. # yum install opennebula-server opennebula-sunstone opennebula-ruby opennebula-gate ˓→opennebula-flow CentOS/RHEL Package Description These are the packages available for this distribution: • opennebula: Command Line Interface.2 # yum install epel-release There are packages for the Front-end. and packages for the virtualization host.2. • opennebula-server: Main OpenNebula daemon. • opennebula-common: Common files for OpenNebula packages.

0 Deployment guide. • opennebula-node: Prepares a node as an opennebula-node. Release 5.5 Step 5. Front-end Installation 22 . • opennebula-gate: OneGate server that enables communication between VMs and OpenNebula. we suggest that if in doubt. As root execute: # /usr/share/one/install_gems The previous script is prepared to detect common Linux distributions and install the required libraries.2 • libopennebula-java: Java API. Note: Besides /etc/one. 2.2.2. make sure you read the MySQL Setup section.4 Step 4. OpenNebula provides a script that installs the required gems as well as some development libraries packages needed. • opennebula-tools: Command Line interface. If it fails to find the packages needed in your system. but since it’s more cumbersome to migrate databases. However if you are deploying this for production or in a more serious environment. • libopennebula-java-doc: Java API Documentation. 2.2. • opennebula: OpenNebula Daemon. Note that it is possible to switch from SQLite to MySQL. use MySQL from the start.0. the following files are marked as configuration files: • /var/lib/one/remotes/datastore/ceph/ceph. Enabling MySQL/MariaDB (Optional) You can skip this step if you just want to deploy OpenNebula as quickly as possible.conf 2. • opennebula-sunstone: Sunstone (the GUI). • opennebula-flow: OneFlow manages services and elasticity.conf • /var/lib/one/remotes/vnm/OpenNebulaNetwork. Ruby Runtime Installation Some OpenNebula components need Ruby libraries. manually install these packages: • sqlite3 development library • mysql client development library • curl development library • libxml2 and libxslt development libraries • ruby development library • gcc and g++ • make If you want to install only a set of gems for an specific component read Building from Source Code where it is explained in more depth. OpenNebula 5.

0 Deployment guide. run the following command as oneadmin: $ oneuser show USER 0 INFORMATION ID : 0 NAME : oneadmin GROUP : oneadmin PASSWORD : 3bc15c8aae3e4124dd409035f32ea2fd6835efc9 AUTH_DRIVER : core ENABLED : Yes USER TEMPLATE TOKEN_PASSWORD="ec21d27e2fe4f9ed08a396cbd47b08b8e0a4ca3c" RESOURCE USAGE & QUOTAS If you get an error message.0. you should check that the commands can connect to the OpenNebula daemon.connect(2) for ˓→"localhost" port 2633) The OpenNebula logs are located in /var/log/one. Verifying the Installation After OpenNebula is started for the first time.log and sched. the core and scheduler logs. Feel free to change the password before starting OpenNebula.2. 2. It should contain the following: oneadmin:<password>. then the OpenNebula daemon could not be started properly: $ oneuser show Failed to open TCP connection to localhost:2633 (Connection refused . Linux CLI In the Front-end.7 Step 7.log.2.2 2. you should have at least the files oned.log for any error messages.one/one_auth fill will have been created with a randomly-generated password. You are ready to start the OpenNebula daemons: # service opennebula start # service opennebula-sunstone start 2. You can do this in the Linux CLI or in the graphical user interface: Sunstone. you must use the oneuser passwd command to change oneadmin’s password. Check oned.2.6 Step 6. For example: $ echo "oneadmin:mypassword" > ~/.one/one_auth Warning: This will set the oneadmin password on the first boot. Starting OpenNebula Log in as the oneadmin user follow these steps: The /var/lib/one/. Release 5. From that point. marked with [E]. OpenNebula 5. Front-end Installation 23 .

etc. The two back-ends cannot coexist (SQLite and MySQL).one/one_auth oneadmin credentials /var/lib/one/remotes/ Probes and scripts that will be synced to the Hosts /var/lib/one/remotes/hooks/ Hook scripts /var/lib/one/remotes/vmm/ Virtual Machine Manager Driver scripts /var/lib/one/remotes/auth/ Authentication Driver scripts /var/lib/one/remotes/im/ Information Manager (monitoring) Driver scripts /var/lib/one/remotes/market/ MarketPlace Driver scripts Datastore Driver scripts /var/lib/one/remotes/datastore/ /var/lib/one/remotes/vnm/ Networking Driver scripts /var/lib/one/remotes/tm/ Transfer Manager Driver scripts 2.log /var/lib/one/ oneadmin home directory Storage for the datastores /var/lib/one/datastores/<dsid>/ /var/lib/one/vms/<vmid>/ Action files for VMs (deployment file.. Directory Structure The following table lists some notable paths that are available in your Front-end after the installation: Path Description /etc/one/ Configuration Files /var/log/one/ Log files. 2. MySQL Setup 24 .one/one_auth in your Front-end. please follow this guide prior to start OpenNebula the first time to avoid problems with oneadmin and serveradmin credentials.2 Sunstone Now you can try to log in into Sunstone web interface.3. Also. sched. Note: If you are planning to install OpenNebula with MySQL back-end.0.8 Step 8.log.2. Release 5. 2. In this guide and in the rest of Open- Nebula’s documentation and configuration files we will refer to this database as the MySQL. Next steps Now that you have successfully started your OpenNebula service. notably: oned.3 MySQL Setup The MySQL/MariaDB back-end is an alternative to the default SQLite back-end. The user is oneadmin and the password is the one in the file /var/lib/one/. make sure you check /var/log/one/sunstone.log and /var/log/one/sunstone. transfer manager scripts.. OpenNebula 5.log.0 Deployment guide.error.log and <vmid>. head over to the Node Installation chapter in order to add hypervisors to your cloud. and you will have to decide which one is going to be used while planning your OpenNebula installation. however OpenNebula you can use either MySQL or MariaDB. To do this point your browser to http://<fontend_address>:9869.) /var/lib/one/. If everything is OK you will be greeted with a login page. make sure TCP port 9869 is allowed through the firewall. sunstone. If the page does not load.

2.3.1 Installation

First of all, you need a working MySQL server. You can either deploy one for the OpenNebula installation or reuse any existing MySQL already deployed and accessible by the Front-end.

Configuring MySQL

You need to add a new user and grant it privileges on the opennebula database. This new database doesn't need to exist; OpenNebula will create it the first time you run it. Assuming you are going to use the default values, log in to your MySQL server and issue the following commands:

$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor. [...]

mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY '<thepassword>';
Query OK, 0 rows affected (0.00 sec)

Visit the MySQL documentation to learn how to manage accounts. Now configure the transaction isolation level:

mysql> SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;

Configuring OpenNebula

Before you run OpenNebula for the first time, you need to set in oned.conf the connection details and the database you have granted privileges on.

# Sample configuration for MySQL
DB = [ backend = "mysql",
       server  = "localhost",
       port    = 0,
       user    = "oneadmin",
       passwd  = "<thepassword>",
       db_name = "opennebula" ]

Fields:
• server: URL of the machine running the MySQL server
• port: port for the connection to the server. If set to 0, the default port is used.
• user: MySQL user-name
• passwd: MySQL password
• db_name: Name of the MySQL database OpenNebula will use

2.3.2 Using OpenNebula with MySQL

After this installation and configuration process you can use OpenNebula as usual.
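As a quick sanity check after OpenNebula has been started against the new back-end, you can list the tables it created. This is only an illustration; the user and database name below assume the defaults used in this section:

$ mysql -u oneadmin -p opennebula -e "SHOW TABLES;"

An empty result means OpenNebula has not connected to (or initialized) the database yet, so re-check the DB section of oned.conf and the logs.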

CHAPTER THREE

NODE INSTALLATION

3.1 Overview

After the OpenNebula Front-end is correctly set up, the next step is preparing the hosts where the VMs are going to run.

3.1.1 How Should I Read This Chapter

Make sure you have properly installed the Front-end before reading this chapter.

This chapter focuses on the minimal node installation you need to follow in order to finish deploying OpenNebula. Concepts like storage and network have been simplified, so feel free to follow the other specific chapters in order to configure other subsystems like networking and storage. Note that you can follow this chapter without reading any other guides and you will be ready, by the end of it, to deploy a Virtual Machine. If in the future you want to switch to other storage or networking technologies you will be able to do so.

After installing the nodes and verifying your installation, you can either start using your cloud or configure more components:
• Authentication. (Optional) For integrating OpenNebula with LDAP/AD, or securing it further with other authentication technologies.
• Sunstone. The OpenNebula GUI should be working and accessible at this stage, but by reading this guide you will learn about specific enhanced configurations for Sunstone.

If your cloud is KVM based you should also follow:
• Open Cloud Host Setup.
• Open Cloud Storage Setup.
• Open Cloud Networking Setup.

Otherwise, if it is VMware based:
• Head over to the VMware Infrastructure Setup chapter.

3.1.2 Hypervisor Compatibility

Section                      Compatibility
KVM Node Installation        This Section applies to KVM.
vCenter Node Installation    This Section applies to vCenter.
Verify your Installation     This Section applies to both vCenter and KVM.

3.2 KVM Node Installation

This page shows you how to install OpenNebula from the binary packages. Using the packages provided on our site is the recommended method, to ensure the installation of the latest version and to avoid possible package divergences between distributions. There are two alternatives here: you can add our package repositories to your system, or visit the software menu to download the latest package for your Linux distribution.

3.2.1 Step 1. Add OpenNebula Repositories

CentOS/RHEL 7

To add the OpenNebula repository execute the following as root:

# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/5.0/CentOS/7/x86_64
enabled=1
gpgcheck=0
EOT

Debian/Ubuntu

To add the OpenNebula repository on Debian/Ubuntu execute as root:

# wget -q -O- http://downloads.opennebula.org/repo/Debian/repo.key | apt-key add -

Debian 8

# echo "deb http://downloads.opennebula.org/repo/5.0/Debian/8 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Ubuntu 14.04

# echo "deb http://downloads.opennebula.org/repo/5.0/Ubuntu/14.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Ubuntu 16.04

# echo "deb http://downloads.opennebula.org/repo/5.0/Ubuntu/16.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

3.2.2 Step 2. Installing the Software

Installing on CentOS/RHEL

Execute the following commands to install the node package and restart libvirt to use the OpenNebula provided configuration file:

$ sudo yum install opennebula-node-kvm
$ sudo service libvirtd restart

For further configuration, check the specific guide: KVM.

Installing on Debian/Ubuntu

Execute the following commands to install the node package and restart libvirt to use the OpenNebula provided configuration file:

$ sudo apt-get install opennebula-node
$ sudo service libvirtd restart     # debian
$ sudo service libvirt-bin restart  # ubuntu

For further configuration, check the specific guide: KVM.

3.2.3 Step 3. Disable SElinux in CentOS/RHEL 7

SElinux can cause some problems, like not trusting the oneadmin user's SSH credentials. You can disable it by changing the following line in the file /etc/selinux/config:

SELINUX=disabled

After this file is changed, reboot the machine.

3.2.4 Step 4. Configure Passwordless SSH

The OpenNebula Front-end connects to the hypervisor Hosts using SSH. You must distribute the public key of the oneadmin user from all machines to the file /var/lib/one/.ssh/authorized_keys in all the machines. There are many methods to achieve the distribution of the SSH keys; ultimately the administrator should choose one (the recommendation is to use a configuration management system). In this guide we are going to manually scp the SSH keys.

When the package was installed in the Front-end, an SSH key was generated and the authorized_keys file populated. We will sync the id_rsa, id_rsa.pub and authorized_keys from the Front-end to the nodes. Additionally we need to create a known_hosts file and sync it as well to the nodes. To create the known_hosts file, we have to execute this command as user oneadmin in the Front-end with all the node names as parameters:

$ ssh-keyscan <node1> <node2> <node3> ... >> /var/lib/one/.ssh/known_hosts

Now we need to copy the directory /var/lib/one/.ssh to all the nodes. The easiest way is to set a temporary password for oneadmin in all the hosts and copy the directory from the Front-end:

$ scp -rp /var/lib/one/.ssh <node1>:/var/lib/one/
$ scp -rp /var/lib/one/.ssh <node2>:/var/lib/one/
$ scp -rp /var/lib/one/.ssh <node3>:/var/lib/one/
$ ...
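If you manage more than a handful of nodes, the scan-and-copy steps above can be scripted. The following is a minimal sketch, assuming the node names are placeholders for your own hypervisor hostnames and that oneadmin still has a temporary password on each of them:

#!/bin/bash
# Run as oneadmin on the Front-end.
NODES="node01 node02 node03"   # replace with your hypervisor hostnames

# Collect the host keys of every node once.
ssh-keyscan $NODES >> /var/lib/one/.ssh/known_hosts

# Push the whole .ssh directory (keys, authorized_keys and known_hosts) to each node.
for node in $NODES; do
    scp -rp /var/lib/one/.ssh "$node":/var/lib/one/
done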

You should verify that connecting from the Front-end, as user oneadmin, to the nodes and to the Front-end itself does not ask for a password:

$ ssh <node1>
$ ssh <frontend>
$ exit
$ exit

$ ssh <node2>
$ ssh <frontend>
$ exit
$ exit

$ ssh <node3>
$ ssh <frontend>
$ exit
$ exit

3.2.5 Step 5. Networking Configuration

A network connection is needed by the OpenNebula Front-end daemons to access the hosts, to manage and monitor them, and to transfer the Image files. It is highly recommended to use a dedicated network for this purpose.

There are various network models (please check the Networking chapter to find out the networking technologies supported by OpenNebula). You may want to use the simplest network model, the one corresponding to the bridged driver. For this driver, you will need to set up a linux bridge and include a physical device in the bridge. Later on, when defining the network in OpenNebula, you will specify the name of this bridge and OpenNebula will know that it should connect the VM to this bridge, thus giving it connectivity with the physical network device connected to the bridge. For example, a typical host with two physical networks, one for public IP addresses (attached to an eth0 NIC for example) and the other for private virtual LANs (NIC eth1 for example), should have two bridges:

$ brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.001e682f02ac       no              eth0
br1             8000.001e682f02ad       no              eth1

Note: Remember that this is only required in the Hosts, not in the Front-end. Also remember that the exact names of the resources are not important (br0, br1, etc.); what is important is that the bridges and NICs have the same name in all the Hosts.
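How the bridge is made persistent depends on the distribution. As an illustration only, on a CentOS 7 node a br0 bridge enslaving eth0 could be defined with the two files below; the device names and the DHCP choice are assumptions that you should adapt to your own network:

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0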

Finally.2 The fill-in the fqdn of the node in the Hostname field. It should take somewhere between 20s to 1m. Try clicking on the refresh button to check the status more frequently. Release 5. KVM Node Installation 31 .0 Deployment guide. OpenNebula 5. 3.0.2. return to the Hosts list. and check that the Host switch to ON status.

If the host turns to err state instead of on, check /var/log/one/oned.log. Chances are it's a problem with the SSH!

Adding a Host through the CLI

To add a node to the cloud, run this command as oneadmin in the Front-end:

$ onehost create <node01> -i kvm -v kvm

$ onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   0 node01          default     0                  -                  - init

# After some time (20s - 1m)

$ onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   0 node01          default     0       0 / 400 (0%)     0K / 7.7G (0%) on

Again, if the host turns to err state instead of on, check /var/log/one/oned.log. Chances are it's a problem with the SSH!

3.2.8 Step 8. Import Currently Running VMs (Optional)

You can skip this step, as importing VMs can be done at any moment; however, if you wish to see your previously deployed VMs in OpenNebula you can use the import VM functionality.

3.2.9 Step 9. Next steps

You can now jump to the optional Verify your Installation section in order to launch a test VM.

Otherwise, you are ready to start using your cloud or you could configure more components:
• Authentication. (Optional) For integrating OpenNebula with LDAP/AD, or securing it further with other authentication technologies.
• Sunstone. The OpenNebula GUI should be working and accessible at this stage, but by reading this guide you will learn about specific enhanced configurations for Sunstone.

If your cloud is KVM based you should also follow:
• Open Cloud Host Setup.
• Open Cloud Networking Setup.
• Open Cloud Storage Setup.

If it's VMware based:
• Head over to the VMware Infrastructure Setup chapter.

3.3 vCenter Node Installation

This Section lays out the requirements and configuration needed in the vCenter and ESX instances in order to be managed by OpenNebula.

The VMware vCenter drivers enable OpenNebula to access one or more vCenter servers that manage one or more ESX Clusters. Each ESX Cluster is presented in OpenNebula as an aggregated hypervisor, i.e. as an OpenNebula Host. This means that the representation is one OpenNebula Host per ESX Cluster.

Note that OpenNebula scheduling decisions are therefore made at ESX Cluster level; vCenter then uses the DRS component to select the actual ESX host and Datastore to deploy the Virtual Machine, although the datastore can be explicitly selected from OpenNebula.

3.3.1 Requirements

The following must be met for a functional vCenter environment:
• vCenter 5.5 and/or 6.0, with at least one cluster aggregating at least one ESX 5.5 and/or 6.0 host.
• VMware tools are needed in the guestOS to enable several features (contextualization and networking feedback). Please install VMware Tools (for Windows) or Open Virtual Machine Tools (for *nix) in the guestOS.
• Define a vCenter user for OpenNebula. This vCenter user (let's call her oneadmin) needs to have access to the ESX clusters that OpenNebula will manage. In order to avoid problems, the hassle-free approach is to declare this oneadmin user as Administrator.
• Alternatively, in some enterprise environments declaring the user as Administrator is not allowed; in that case, you will need to grant these permissions (note that the permissions below are those required by the operations OpenNebula performs):

• CloneVM_Task (creates a clone of a particular VM) and ReconfigVM_Task (reconfigures a particular virtual machine). These operations require the following privileges: VirtualMachine.Provisioning.DeployTemplate, VirtualMachine.Provisioning.ReadCustSpecs, VirtualMachine.Interact.DeviceConnection, VirtualMachine.Interact.SetCDMedia, VirtualMachine.Interact.SetFloppyMedia, VirtualMachine.Config.Rename, VirtualMachine.Config.Annotation, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.RemoveDisk, VirtualMachine.Config.CPUCount, VirtualMachine.Config.Memory, VirtualMachine.Config.RawDevice, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.Settings, VirtualMachine.Config.AdvancedConfig, VirtualMachine.Config.SwapPlacement, VirtualMachine.Config.HostUSBDevice, VirtualMachine.Config.DiskExtend, VirtualMachine.Config.ChangeTracking, VirtualMachine.Inventory.CreateNew, VirtualMachine.Inventory.CreateFromExisting, VirtualMachine.Inventory.Register, VirtualMachine.Inventory.Unregister, VirtualMachine.Inventory.Remove, VirtualMachine.Inventory.Move, DVSwitch.CanUse, DVPortgroup.CanUse, Datastore.AllocateSpace, Datastore.BrowseDatastore, Datastore.LowLevelFileOperations, Datastore.RemoveFile, Network.Assign, Resource.AssignVirtualMachineToResourcePool.
• PowerOnVM_Task: VirtualMachine.Interact.PowerOn. Powers on a virtual machine.
• PowerOffVM_Task: VirtualMachine.Interact.PowerOff. Powers off a virtual machine.
• DestroyVM_Task: VirtualMachine.Inventory.Delete. Deletes a VM (including disks).
• SuspendVM_Task: VirtualMachine.Interact.Suspend. Suspends a VM.
• RebootGuest: VirtualMachine.Interact.Reset. Reboots the VM's guest Operating System.
• ResetVM_Task: VirtualMachine.Interact.Reset. Resets power on a virtual machine.
• ShutdownGuest: VirtualMachine.Interact.PowerOff. Shuts down the guest Operating System.
• CreateSnapshot_Task: VirtualMachine.State.CreateSnapshot. Creates a new snapshot of a virtual machine.
• RemoveSnapshot_Task: VirtualMachine.State.RemoveSnapshot. Removes a snapshot from a virtual machine.
• RevertToSnapshot_Task: VirtualMachine.State.RevertToSnapshot. Reverts a virtual machine to a particular snapshot.
• CreateVirtualDisk_Task, CopyVirtualDisk_Task and DeleteVirtualDisk_Task: Datastore.FileManagement, on all VMFS datastores represented by OpenNebula.

Note: For security reasons, you may define different users to access different ESX Clusters. A different user can be defined in OpenNebula per ESX cluster, which is encapsulated in OpenNebula as an OpenNebula host.

• All ESX hosts belonging to the same ESX cluster to be exposed to OpenNebula must share at least one datastore among them.
• The ESX cluster should have DRS enabled. DRS is not required but it is recommended. OpenNebula does not schedule to the granularity of ESX hosts, so DRS is needed to select the actual ESX host within the cluster; otherwise the VM will be launched in the ESX host where the VM template was created.
• Save as VM Templates those VMs that will be instantiated through the OpenNebula provisioning portal.
• To enable VNC functionality, repeat the following procedure for each ESX:
  - In the vSphere client proceed to Home -> Inventory -> Hosts and Clusters
  - Select the ESX host, go to the Configuration tab and select Security Profile in the Software category
  - In the Firewall section, select Edit. Enable GDB Server, then click OK
  - Make sure that the ESX hosts are reachable from the OpenNebula Front-end

Important: OpenNebula will NOT modify any vCenter configuration.

3.3.2 Configuration

There are a few simple steps needed to configure OpenNebula so it can interact with vCenter:

Step 1: Check connectivity

The OpenNebula Front-end needs network connectivity to all the vCenters that it is supposed to manage. Additionally, to enable VNC access to the spawned Virtual Machines, the Front-end also needs network connectivity to all the ESX hosts.

Step 2: Enable the drivers in the Front-end (oned.conf)

In order to configure OpenNebula to work with the vCenter drivers, the following sections need to be uncommented or added in the /etc/one/oned.conf file:

#-------------------------------------------------------------------------------
# vCenter Information Driver Manager Configuration
#   -r number of retries when monitoring a host
#   -t number of threads, i.e. number of hosts monitored at the same time
#-------------------------------------------------------------------------------
IM_MAD = [
      NAME          = "vcenter",
      SUNSTONE_NAME = "VMWare vCenter",
      EXECUTABLE    = "one_im_sh",
      ARGUMENTS     = "-c -t 15 -r 0 vcenter" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# vCenter Virtualization Driver Manager Configuration
#   -r number of retries when monitoring a host
#   -t number of threads, i.e. number of hosts monitored at the same time
#   -p more than one action per host in parallel, needs support from hypervisor
#   -s <shell> to execute commands, bash by default
#   -d default snapshot strategy. It can be either 'detach' or 'suspend'. It
#      defaults to 'suspend'.
#-------------------------------------------------------------------------------
VM_MAD = [
    NAME                 = "vcenter",
    SUNSTONE_NAME        = "VMWare vCenter",
    EXECUTABLE           = "one_vmm_sh",
    ARGUMENTS            = "-p -t 15 -r 0 vcenter -s sh",
    default              = "vmm_exec/vmm_exec_vcenter.conf",
    TYPE                 = "xml",
    IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
        resume, delete, reboot, reboot-hard, resched, unresched, poweroff,
        poweroff-hard, disk-attach, disk-detach, nic-attach, nic-detach,
        snap-create, snap-delete" ]
#-------------------------------------------------------------------------------

As a Virtualization driver, the vCenter driver accepts a series of parameters that control its execution. The parameters allowed are:

parameter   description
-r <num>    number of retries when executing an action
-t <num>    number of threads, i.e. number of actions done at the same time

See the Virtual Machine drivers reference for more information about these parameters, and how to customize and extend the drivers.

OpenNebula needs to be restarted afterwards; this can be done with the following command:

$ sudo service opennebula restart

Step 3: Importing vCenter Clusters

OpenNebula ships with a powerful CLI tool to import vCenter clusters, VM Templates, Networks and running VMs. The tool is self-explanatory: just set the credentials and FQDN/IP to access the vCenter host and follow the on-screen instructions. A sample session follows:

$ onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT

$ onevcenter hosts --vcenter <vcenter-host> --vuser <vcenter-username> --vpass <vcenter-password>
Connecting to vCenter: <vcenter-host>...done!
Exploring vCenter resources...done!
Do you want to process datacenter Development [y/n]? y
  * Import cluster clusterA [y/n]? y
    OpenNebula host clusterA with id 0 successfully created.
  * Import cluster clusterB [y/n]? y
    OpenNebula host clusterB with id 1 successfully created.

$ onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   0 clusterA        -           0                  -                  - init
   1 clusterB        -           0                  -                  - init

$ onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   0 clusterA        -           0       0 / 800 (0%)      0K / 16G (0%) on
   1 clusterB        -           0      0 / 1600 (0%)      0K / 16G (0%) on

The following variables are added to the OpenNebula hosts representing ESX clusters:

Variable            Note
VCENTER_HOST        hostname or IP of the vCenter host
VCENTER_USER        Name of the vCenter user
VCENTER_PASSWORD    Password of the vCenter user

Note: OpenNebula will create a special key at boot time and save it in /var/lib/one/.one/one_key. This key will be used as a private key to encrypt and decrypt all the passwords for all the vCenters that OpenNebula can access. Thus, the password shown in the OpenNebula host representing the vCenter is the original password encrypted with this special key.

Once the vCenter cluster is monitored, OpenNebula will display any existing VM as Wild. These VMs can be imported and managed through OpenNebula.

Note: OpenNebula will add by default a one-<vid>- prefix to the name of the vCenter VMs it spawns, where <vid> is the id of the VM in OpenNebula. This value can be changed using a special attribute set in the vCenter cluster representation in OpenNebula, i.e. the OpenNebula host. This attribute is called VM_PREFIX (which can be set in the OpenNebula host template), and will evaluate one variable, $i, to the id of the VM. A value of one-$i- in that parameter would have the same behavior as the default.
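For example, to change the naming prefix for a given imported cluster you can append the attribute to the corresponding OpenNebula host template; the cluster name and prefix value below are only illustrations:

$ onehost update clusterA
# add this line in the editor that opens:
VM_PREFIX = "vc-$i-"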

Step 4: Next Steps

Jump to the Verify your Installation section in order to launch a test VM. After this guide, you may want to verify your installation or learn how to set up the VMware-based cloud infrastructure.

3.4 Verify your Installation

This chapter ends with this optional section, where you will be able to test your cloud by launching a virtual machine and checking that everything is working correctly. You should only follow the specific subsection for your hypervisor.

3.4.1 KVM based Cloud Verification

The goal of this subsection is to launch a small Virtual Machine, in this case a TTYLinux (which is a very small Virtual Machine just for testing purposes).

Requirements

To complete this section, you must have network connectivity to the Internet from the Front-end, in order to import an appliance from http://marketplace.opennebula.systems.

Step 1. Check that the Hosts are on

The Hosts that you have registered should be in ON status.

Step 2. Import an appliance from the MarketPlace

We need to add an Image to our Cloud. We can do so by navigating to Storage -> Apps in the left side menu (1). You will see a list of Appliances; you can filter by tty kvm (2) in order to find the one we are going to use in this guide. After that, select it (3) and click the button with an arrow pointing to OpenNebula (4) in order to import it.

A dialog with the information of the appliance you have selected will pop up. You will need to select the datastore under "Select the Datastore to store the resource" (1) and then click on the "Download" button (2).

Navigate now to the Storage -> Images tab and refresh until the Status of the Image switches to READY.

Step 3. Instantiate a VM

Navigate to Templates -> VMs, then select the ttylinux - kvm template that has been created and click on "Instantiate". In this dialog simply click on the "Instantiate" button.

Step 4. Test VNC access

Navigate to Instances -> VMs. You will see that after a while the VM switches to Status RUNNING (you might need to click on the refresh button). Once it does, you can click on the VNC icon (at the right side of the image below).

If the VM fails, click on the VM and then on the Log tab to see why it failed. Alternatively, you can look at the log file /var/log/one/<vmid>.log.

Step 5. Adding Network connectivity to your VM

As you might have noticed, this VM does not have networking yet; this is because it depends a lot on your network technology. You would need to follow these steps in order to add Networking to the template.

1. Read the Networking chapter to choose a network technology, unless you have chosen to use the dummy driver explained in the node installation section and you have configured the bridges properly.
2. Create a new Network in the Network -> Virtual Networks dialog, and fill in the details of your network: the Bridge or the Physical Device, the IP ranges, etc. (a CLI sketch for this step is shown after this list).
3. Select the ttylinux - kvm template and update it. In the Network section of the template, select the network that you created in the previous step.
4. Instantiate and connect to the VM.
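If you prefer the CLI over the Sunstone dialog for step 2, the sketch below creates an equivalent network; the bridge name br0 and the address range are assumptions that must match the bridges configured on your nodes, and the dummy driver is the one mentioned in step 1:

$ cat > private-net.tmpl << EOF
NAME   = "private"
VN_MAD = "dummy"
BRIDGE = "br0"
AR = [ TYPE = "IP4", IP = "192.168.100.10", SIZE = "20" ]
EOF
$ onevnet create private-net.tmpl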

3.4.2 vCenter based Cloud Verification

In order to verify the correct installation of your OpenNebula cloud, follow the next steps. The following assumes that Sunstone is up and running, as explained in the Front-end installation Section. To access Sunstone, point your browser to http://<frontend_address>:9869.

Step 1. Import a VM Template

You will need a VM Template defined in vCenter, and then import it using the steps described in the vCenter Node Section.

Step 2. Instantiate the VM Template

You can easily instantiate the template to create a new VM from it using Sunstone. Proceed to the Templates tab of the left side menu, VMs Section, select the imported template and click on the Instantiate button.

Step 3. Check the VM is Running

The scheduler should place the VM in the vCenter cluster imported as part of the vCenter Node Installation Section. You can check the process in Sunstone, in the Instances tab of the left side menu, VMs Section. After a few minutes (depending on the size of the disks defined by the VM Template), the state of the VM should be RUNNING. Once the VM is running, click on the VNC blue icon, and if you can see a console to the VM, congratulations! You have a fully functional OpenNebula cloud.

The next step would be to further configure the OpenNebula cloud to suit your needs. You can learn more in the VMware Infrastructure Setup guide.

3.4.3 Next steps

After this chapter, you are ready to start using your cloud or you could configure more components:
• Authentication. (Optional) For integrating OpenNebula with LDAP/AD, or securing it further with other authentication technologies.

• Sunstone. The OpenNebula GUI should be working and accessible at this stage, but by reading this guide you will learn about specific enhanced configurations for Sunstone.

If your cloud is KVM based you should also follow:
• Open Cloud Host Setup.
• Open Cloud Storage Setup.
• Open Cloud Networking Setup.

Otherwise, if it's VMware based:
• Head over to the VMware Infrastructure Setup chapter.

CHAPTER

FOUR

AUTHENTICATION SETUP

4.1 Overview

OpenNebula comes by default with an internal user/password authentication system, see the Users & Groups Subsys-
tem guide for more information. You can enable an external Authentication driver.

4.1.1 Authentication

In the figure to the right of this text you can see three authentication configurations you can customize in OpenNebula.

a) CLI/API Authentication

You can choose from the following authentication drivers to access OpenNebula from the command line:
• Built-in User/Password and token authentication
• SSH Authentication
• X509 Authentication
• LDAP Authentication

b) Sunstone Authentication

By default any authentication driver configured to work with OpenNebula can be used out-of-the-box with Sunstone.
Additionally you can add SSL security to Sunstone as described in the Sunstone documentation


c) Servers Authentication

This method is designed to delegate the authentication process to high level tools interacting with OpenNebula. You’ll
be interested in this method if you are developing your own servers.
By default, OpenNebula ships with two servers: Sunstone and EC2. When a user interacts with one of them, the server
authenticates the request and then forwards the requested operation to the OpenNebula daemon.
The forwarded requests are encrypted by default using a Symmetric Key mechanism. The following guide shows how
to strengthen the security of these requests using x509 certificates. This is specially relevant if you are running your
server in a machine other than the front-end.
• Cloud Servers Authentication

4.1.2 How Should I Read This Chapter

When designing the architecture of your cloud you will have to choose where to store users' credentials. Different
authentication methods can be configured at the same time and selected on a per-user basis. One big distinction between
the different authentication methods is the ability to be used by API, CLI and/or only Sunstone (web interface).
Can be used with API, CLI and Sunstone:
• Built-in User/Password
• LDAP
Can be used only with API and CLI:
• SSH
Can be used only with Sunstone:
• X509
The following sections are self-contained so you can directly go to the guide that describes the configuration for the
chosen auth method.

4.1.3 Hypervisor Compatibility

Section Compatibility
Built-in User/Password and token authentication This Section applies to both KVM and vCenter.
SSH Authentication This Section applies to both KVM and vCenter.
X509 Authentication This Section applies to both KVM and vCenter.
LDAP Authentication This Section applies to both KVM and vCenter.
Sunstone documentation This Section applies to both KVM and vCenter.

4.2 SSH Authentication

This guide will show you how to enable and use the SSH authentication for the OpenNebula CLI. Using this authenti-
cation method, users login to OpenNebula with a token encrypted with their private ssh keys.

4.2.1 Requirements

You don’t need to install any additional software.


4.2.2 Considerations & Limitations

With the current release, this authentication method is only valid to interact with OpenNebula using the CLI.

4.2.3 Configuration

OpenNebula Configuration

The Auth MAD and SSH authentication are enabled by default. In case it does not work, make sure that the authentication
method is in the list of enabled methods.

AUTH_MAD = [
executable = "one_auth_mad",
authn = "ssh,x509,ldap,server_cipher,server_x509"
]

There is an external plain user/password authentication driver, and existing accounts will keep working as usual.

4.2.4 Usage

Create New Users

This authentication method uses standard ssh RSA key pairs for authentication. Users can create these files if they
don’t exist using this command:

$ ssh-keygen -t rsa

OpenNebula commands look for the files generated at the standard location ($HOME/.ssh/id_rsa) so it is a good
idea not to change the default path. It is also a good idea to protect the private key with a password.
The users requesting a new account have to generate a public key and send it to the administrator. The way to extract
it is the following:

$ oneuser key
Enter PEM pass phrase:
MIIBCAKCAQEApUO+JISjSf02rFVtDr1yar/34EoUoVETx0n+RqWNav+5wi+gHiPp3e03AfEkXzjDYi8F
voS4a4456f1OUQlQddfyPECn59OeX8Zu4DH3gp1VUuDeeE8WJWyAzdK5hg6F+RdyP1pT26mnyunZB8Xd
bll8seoIAQiOS6tlVfA8FrtwLGmdEETfttS9ukyGxw5vdTplse/fcam+r9AXBR06zjc77x+DbRFbXcgI
1XIdpVrjCFL0fdN53L0aU7kTE9VNEXRxK8sPv1Nfx+FQWpX/HtH8ICs5WREsZGmXPAO/IkrSpMVg5taS
jie9JAQOMesjFIwgTWBUh6cNXuYsQ/5wIwIBIw==

The string written to the console must be sent to the administrator, so they can create the new user in a similar way as
the default user/password authentication users.
The following command will create a new user with username ‘newuser’, assuming that the previous public key is
saved in the text file /tmp/pub_key:

$ oneuser create newuser --ssh --read-file /tmp/pub_key

Instead of using the --read-file option, the public key could be specified as the second parameter.
If the administrator has access to the user’s private ssh key, he can create new users with the following command:

$ oneuser create newuser --ssh --key /home/newuser/.ssh/id_rsa


Update Existing Users to SSH

You can change the authentication method of an existing user to SSH with the following commands:

$ oneuser chauth <id|name> ssh
$ oneuser passwd <id|name> --ssh --read-file /tmp/pub_key

As with the create command, you can specify the public key as the second parameter, or use the user's private key with the --key option.

User Login

Users must execute the 'oneuser login' command to generate a login token. The token will be stored in the $ONE_AUTH environment variable. The command requires the OpenNebula username, and the authentication method (--ssh in this case).

$ oneuser login newuser --ssh

The default ssh key is assumed to be in ~/.ssh/id_rsa; otherwise the path can be specified with the --key option. The generated token has a default expiration time of 10 hours. You can change that with the --time option.

4.3 x509 Authentication

This guide will show you how to enable and use the x509 certificates authentication with OpenNebula. The x509 certificates can be used in two different ways in OpenNebula.

The first option, explained in this guide, enables us to use certificates with the CLI. In this case the user will generate a login token with his private key; OpenNebula will validate the certificate and decrypt the token to authenticate the user.

The second option enables us to use certificates with Sunstone and the Public Cloud servers included in OpenNebula. In this case the authentication is leveraged to Apache or any other SSL capable HTTP proxy that has to be configured by the administrator. If this certificate is validated the server will encrypt those credentials using a server certificate and will send the token to OpenNebula.

4.3.1 Requirements

If you want to use the x509 certificates with Sunstone or one of the Public Clouds, you must deploy a SSL capable HTTP proxy on top of them in order to handle the certificate validation.

4.3.2 Considerations & Limitations

The X509 driver uses the certificate DN as user passwords. The x509 driver will remove any space in the certificate DN. This may cause problems in the unlikely situation that you are using a CA signing certificate subjects that only differ in spaces.

4.3.3 Configuration

The following table summarizes the available options for the x509 driver (/etc/one/auth/x509_auth.conf):

VARIABLE    VALUE
:ca_dir     Path to the trusted CA directory. It should contain the trusted CAs for the server; each CA certificate should be named CA_hash.0
:check_crl  By default, if you place CRL files in the CA directory in the form CA_hash.r0, OpenNebula will check them. You can enforce CRL checking by defining :check_crl, i.e. authentication will fail if no CRL file is found. You can always disable this feature by moving or renaming the .r0 files

Follow these steps to change oneadmin's authentication method to x509:

Warning: You should have another account in the oneadmin group, so you can revert these steps if the process fails.

• Change the oneadmin password to the oneadmin certificate DN:

$ oneuser chauth 0 x509 --x509 --cert /tmp/newcert.pem

• Add trusted CA certificates to the certificates directory:

$ openssl x509 -noout -hash -in cacert.pem
78d0bbd8

$ sudo cp cacert.pem /etc/one/auth/certificates/78d0bbd8.0

• Create a login for oneadmin using the --x509 option. This token has a default expiration time set to 1 hour; you can change this value using the option --time.

$ oneuser login oneadmin --x509 --cert newcert.pem --key newkey.pem
Enter PEM pass phrase:
export ONE_AUTH=/home/oneadmin/.one/one_x509

• Set ONE_AUTH to the x509 login file:

$ export ONE_AUTH=/home/oneadmin/.one/one_x509

4.3.4 Usage

Add and Remove Trusted CA Certificates

You need to copy all trusted CA certificates to the certificates directory, renaming each of them as <CA_hash>.0. The hash can be obtained with the openssl command:

$ openssl x509 -noout -hash -in cacert.pem
78d0bbd8

$ sudo cp cacert.pem /etc/one/auth/certificates/78d0bbd8.0

To stop trusting a CA, simply remove its certificate from the certificates directory. This process can be done without restarting OpenNebula; the driver will look for the certificates each time an authentication request is made.

Create New Users

The users requesting a new account have to send their certificate, signed by a trusted CA, to the administrator. The following command will create a new user with username 'newuser', assuming that the user's certificate is saved in the file /tmp/newcert.pem:

$ oneuser create newuser --x509 --cert /tmp/newcert.pem

This command will create a new user whose password contains the subject DN of his certificate. Therefore, if the subject DN is known by the administrator, the user can be created as follows:

$ oneuser create newuser --x509 "user_subject_DN"

Update Existing Users to x509 & Multiple DN

You can change the authentication method of an existing user to x509 with the following command:

• Using the user certificate:

$ oneuser chauth <id|name> x509 --x509 --cert /tmp/newcert.pem

• Using the user certificate subject DN:

$ oneuser chauth <id|name> x509 --x509 "user_subject_DN"

You can also map multiple certificates to the same OpenNebula account. Just add each certificate DN separated with '|' to the password field:

$ oneuser passwd <id|name> --x509 "/DC=es/O=one/CN=user|/DC=us/O=two/CN=user"

User Login

Users must execute the 'oneuser login' command to generate a login token. The token will be stored in the $ONE_AUTH environment variable. The command requires the OpenNebula username, and the authentication method (--x509 in this case).

newuser@frontend $ oneuser login newuser --x509 --cert newcert.pem --key newkey.pem
Enter PEM pass phrase:

The generated token has a default expiration time of 10 hours. You can change that with the --time option.

4.3.5 Tuning & Extending

The x509 authentication method is just one of the drivers enabled in AUTH_MAD. All drivers are located in /var/lib/one/remotes/auth.

OpenNebula is configured to use x509 authentication by default. You can customize the enabled drivers in the AUTH_MAD attribute of oned.conf. More than one authentication method can be defined:

AUTH_MAD = [
    executable = "one_auth_mad",
    authn = "ssh,x509,ldap,server_cipher,server_x509"
]

4.3.6 Enabling x509 auth in Sunstone

Update the /etc/one/sunstone-server.conf :auth parameter to use the x509 auth:

:auth: x509

4.4 LDAP Authentication

The LDAP Authentication add-on permits users to have the same credentials as in LDAP, so effectively centralizing authentication. Enabling it will let any correctly authenticated LDAP user use OpenNebula.

This Add-on will not install any Ldap server or configure it in any way. It will not create, delete or modify any entry in the Ldap server it connects to. The only requirement is the ability to connect to an already running Ldap server, being able to perform a successful ldapbind operation, and having a user able to perform searches of users; therefore no special attributes or values are required in the LDIF entry of the user authenticating.

4.4.1 Prerequisites

Warning: This Add-on requires the 'net/ldap' ruby library provided by the 'net-ldap' gem.

4.4.2 Configuration

The configuration file for the auth module is located at /etc/one/auth/ldap_auth.conf. This is the default configuration:

server 1:
    # Ldap user able to query, if not set connects as anonymous. For
    # Active Directory append the domain name. Example:
    # Administrator@my.domain.com
    #:user: 'admin'
    #:password: 'password'

    # Ldap authentication method
    :auth_method: :simple

    # Ldap server
    :host: localhost
    :port: 389

    # Uncomment this line for tls connections
    #:encryption: :simple_tls

    # base hierarchy where to search for users and groups
    :base: 'dc=domain'

    # group the users need to belong to. If not set any user will do
    #:group: 'cn=cloud,ou=groups,dc=domain'

    # field that holds the user name, if not set 'cn' will be used
    :user_field: 'cn'

    # for Active Directory use this user_field instead
    #:user_field: 'sAMAccountName'

    # field name for group membership, by default it is 'member'
    #:group_field: 'member'

    # user field that is in the group group_field, if not set 'dn' will be used
    #:user_group_field: 'dn'

    # Generate mapping file from group template info
    :mapping_generate: true

    # Seconds a mapping file remains untouched until the next regeneration
    :mapping_timeout: 300

    # Name of the mapping file in OpenNebula var directory
    :mapping_filename: server1.yaml

    # Key from the OpenNebula template to map to an AD group
    :mapping_key: GROUP_DN

    # Default group ID used for users in an AD group not mapped
    :mapping_default: 1

# this example server wont be called as it is not in the :order list
server 2:
    :auth_method: :simple
    :host: localhost
    :port: 389
    :base: 'dc=domain'
    #:group: 'cn=cloud,ou=groups,dc=domain'
    :user_field: 'cn'

# List the order the servers are queried
:order:
    - server 1
    #- server 2

The structure is a hash where any key different from :order contains the configuration of one ldap server we want to query. The special key :order holds an array with the order in which we want to query the configured servers. Any server not listed in :order won't be queried.

VARIABLE            DESCRIPTION
:user               Name of the user that can query ldap. Do not set it if you can perform queries anonymously
:password           Password for the user defined in :user. Do not set if anonymous access is enabled
:auth_method        Can be set to :simple_tls if SSL connection is needed
:encryption         Can be set to :simple_tls if SSL connection is needed
:host               Host name of the ldap server
:port               Port of the ldap server
:base               Base leaf where to perform user searches
:group              If set the users need to belong to this group
:user_field         Field in ldap that holds the user name
:mapping_generate   Generate automatically a mapping file. It can be disabled in case it needs to be done manually
:mapping_timeout    Number of seconds between automatic mapping file generations
:mapping_filename   Name of the mapping file. Should be different for each server
:mapping_key        Key in the group template used to generate the mapping file. It should hold the DN of the mapped group
:mapping_default    Default group used when no mapped group is found. Set to false in case you don't want the user to be authorized if it does not belong to a mapped group

To enable ldap authentication the described parameters should be configured. OpenNebula must also be configured to enable external authentication. Add this line in /etc/one/oned.conf:

DEFAULT_AUTH = "ldap"

4.4.3 User Management

Using the LDAP authentication module the administrator doesn't need to create users with the oneuser command, as this will be done automatically.

Users can store their credentials in the $ONE_AUTH file (usually $HOME/.one/one_auth) in this fashion:

<user_dn>:ldap_password

where
• <user_dn> is the DN of the user in the LDAP service
• ldap_password is the password of the user in the LDAP service

Alternatively a user can generate an authentication token using the oneuser login command, so there is no need to keep the ldap password in a plain file. Simply input the ldap_password when requested. More information on the management of login tokens and the $ONE_AUTH file can be found in the Managing Users Guide.

DN's With Special Characters

When the user dn or password contains blank spaces the LDAP driver will escape them so they can be used to create OpenNebula users. Therefore, users need to set up their $ONE_AUTH file accordingly.

Users can easily create escaped $ONE_AUTH tokens with the command oneuser encode <user> [<password>], as an example:

$ oneuser encode 'cn=First Name,dc=institution,dc=country' 'pass word'
cn=First%20Name,dc=institution,dc=country:pass%20word

The output of this command should be put in the $ONE_AUTH file.

DC=com" And in the ldap configuration file we set the :mapping_key to GROUP_DN. The mapping file is in YAML format and contains a hash where the key is the LDAP’s group DN and the value is the ID of the OpenNebula group. For example: CN=technicians. This is done so the authentication is not continually querying OpenNebula. For example we can add in the group template this line: GROUP_DN="CN=technicians.4 Active Directory LDAP Auth drivers are able to connect to Active Directory.4. The mapping file can be generated automatically using data in the group template that tells which LDAP group maps to that specific group.DN=opennebula.org you will get the base DN: DN=win.opennebula. for win. This tells the driver to look for the group DN in that template parameter.org you specify it as Administrator@win.DC=com: '101' When several servers are configured you should have different :mapping_key and :mapping_file values for each one so they don’t collide. leave it commented. To do this a mapping is generated from the LDAP group to an existing OpenNebula group.4.conf): • :user: the Active Directory user with read permissions in the user’s tree plus the do- main.opennebula.CN=Users.yaml :mapping_key: INTERNAL_GROUP_DN external: :mapping_file: external.4. You can also disable the automatic generation of this file and do the mapping manually. This system uses a mapping file specified by :mapping_file parameter and resides in OpenNebula var directory.5 Group Mapping You can make new users belong to an specific group upon creation.CN=Groups.DC=example.DC=example. You need to decompose the full domain name and use each part as DN component. 4. OpenNebula 5. You will need: • Active Directory server with support for simple user/password authentication. For example: internal: :mapping_file: internal. Release 5. You will need to change the following values in the configuration file (/etc/one/auth/ldap_auth. This mapping expires the number of seconds specified by :mapping_timeout.0 Deployment guide. Example.org • :password: password of this user • :host: hostname or IP of the Domain Controller • :base: base DN to search for users.DN=org • :user_field: set it to sAMAccountName :group parameter is still not supported for Active Directory.2 4. LDAP Authentication 54 .yaml :mapping_key: EXTERNAL_GROUP_DN And in the OpenNebula group template you can define two mappings.DC=example. For example for user Administrator at domain win.CN=Groups. one for each server: 4.opennebula.0.DC=com: '100' CN=Domain Admins. • User with read permissions in the Active Directory user’s tree.

DC=com" EXTERNAL_GROUP_DN="CN=staff. using the specified driver for that user. Release 5.4.0 Deployment guide.DC=other-company.4. To automatically encode credentials as explained in DN’s with special characters section also add this parameter to sunstone configuration: :encode_user_password: true 4.e: LDAP).DC=com" 4.0. Therefore any OpenNebula auth driver can be used through this method to authenticate the user (i.6 Enabling LDAP auth in Sunstone Update the /etc/one/sunstone-server.conf :auth parameter to use the opennebula: :auth: opennebula Using this method the credentials provided in the login screen will be sent to the OpenNebula core and the authen- tication will be delegated to the OpenNebula auth system. LDAP Authentication 55 .DC=internal. OpenNebula 5.2 INTERNAL_GROUP_DN="CN=technicians.CN=Groups.

CHAPTER FIVE

SUNSTONE SETUP

5.1 Overview

OpenNebula Sunstone is a Graphical User Interface (GUI) intended for both end users and administrators that simplifies the typical management operations in private and hybrid cloud infrastructures. OpenNebula Sunstone allows to easily manage all OpenNebula resources and perform typical operations on them.


5.1.1 How Should I Read this Chapter

The Sunstone Installation & Configuration section describes the configuration and customization options for Sunstone
After Sunstone is running, you can define different sunstone behaviors for each user role in the Sunstone Views section.
For more information on how to customize and extend you Sunstone deployment use the following links:
• Security & Authentication Methods, improve security with x509 authentication and SSL
• Cloud Servers Authentication, advanced reference about the security between Sunstone and OpenNebula.
• Advanced Deployments, improving scalability and isolating the server

5.1.2 Hypervisor Compatibility

Sunstone is available for all the hypervisors. When using vCenter, the cloud admin should enable the
admin_vcenter, groupadmin_vcenter and cloud_vcenter Sunstone views.

5.2 Sunstone Installation & Configuration

5.2.1 Requirements

You must have an OpenNebula site properly configured and running to use OpenNebula Sunstone; be sure to check
the OpenNebula Installation and Configuration Guides to set up your private cloud first. This section also assumes
that you are familiar with the configuration and use of OpenNebula.
OpenNebula Sunstone was installed during the OpenNebula installation. If you followed the installation guide then
you already have all ruby gem requirements. Otherwise, run the install_gems script as root:

# /usr/share/one/install_gems sunstone

The Sunstone Operation Center offers the possibility of starting a VNC/SPICE session to a Virtual Machine. This is
done by using a VNC/SPICE websocket-based client (noVNC) on the client side and a VNC proxy translating and
redirecting the connections on the server-side.

Warning: The SPICE Web client is a prototype and is limited in function. More information of this component
can be found in the following link

Warning: Make sure that there is free space in sunstone’s log directory or it will die silently. By default the log
directory is /var/log/one.

Requirements:
• Websockets-enabled browser (optional): Firefox and Chrome support websockets. In some versions of Firefox
manual activation is required. If websockets are not enabled, flash emulation will be used.
• Installing the python-numpy package is recommended for a better vnc performance.

5.2. Sunstone Installation & Configuration 57

OpenNebula 5.0 Deployment guide, Release 5.0.2

5.2.2 Considerations & Limitations

OpenNebula Sunstone supports Chrome, Firefox and Internet Explorer 11. Other browsers are not supported and may
not work well.

Note: Internet Explorer is not supported with the Compatibility Mode enabled, since it emulates IE7 which is not
supported.

5.2.3 Configuration

sunstone-server.conf

The Sunstone configuration file can be found at /etc/one/sunstone-server.conf. It uses YAML syntax to
define some options:
Available options are:

5.2. Sunstone Installation & Configuration 58

OpenNebula 5.0 Deployment guide, Release 5.0.2

Option                    Description
:tmpdir                   Uploaded images will be temporally stored in this folder before being copied to OpenNebula
:one_xmlrpc               OpenNebula daemon host and port
:host                     IP address on which the server will listen on. 0.0.0.0 by default.
:port                     Port on which the server will listen. 9869 by default.
:sessions                 Method of keeping user sessions. It can be memory or memcache. For servers that spawn more than one process (like Passenger or Unicorn) memcache should be used
:memcache_host            Host where the memcached server resides
:memcache_port            Port of the memcached server
:memcache_namespace       memcache namespace where to store sessions. Useful when the memcached server is used by more services
:debug_level              Log debug level: 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG
:env                      Execution environment for Sunstone. dev: instead of pulling the minified js, all the files will be pulled (app/main.js); check the Building from Source guide in the docs for details on how to run Sunstone in development. prod: the minified js will be used (dist/main.js)
:auth                     Authentication driver for incoming requests. Possible values are sunstone, opennebula, remote and x509. Check authentication methods for more info
:core_auth                Authentication driver to communicate with OpenNebula core. Possible values are x509 or cipher. Check cloud_auth for more information
:encode_user_password     For external authentication drivers, such as LDAP. Performs a URL encoding on the credentials sent to OpenNebula, e.g. secret%20password. This only works with "opennebula" auth.
:lang                     Default language for the Sunstone interface. This is the default language that will be used if the user has not defined a variable LANG with a different valid value in its user template
:vnc_proxy_port           Base port for the VNC proxy. The proxy will run on this port as long as the Sunstone server does. 29876 by default.
:vnc_proxy_support_wss    yes, no, only. If enabled, the proxy will be set up with a certificate and a key to use secure websockets. If set to only, the proxy will only accept encrypted connections, otherwise it will accept both encrypted and unencrypted ones.
:vnc_proxy_cert           Full path to the certificate file for wss connections.
:vnc_proxy_key            Full path to the key file. Not necessary if the key is included in the certificate.
:vnc_proxy_ipv6           Enable ipv6 for novnc (true or false)
:vnc_request_password     Request VNC password for external windows; by default it will not be requested (true or false)
:table_order              Default table order; resources get ordered by ID in asc or desc order.
:marketplace_username     Username credential to connect to the Marketplace.
:marketplace_password     Password to connect to the Marketplace.
:marketplace_url          Endpoint to connect to the Marketplace. If commented, a 503 service unavailable error will be returned to clients.
:oneflow_server           Endpoint to connect to the OneFlow server.
:routes                   List of files containing custom routes to be loaded. Check server plugins for more info.
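Most deployments only need to touch a handful of these options. The fragment below is an illustrative sketch, not a complete file; the memcached host and the wss setting are assumptions that only make sense if you actually run memcached and have a certificate in place:

:host: 0.0.0.0
:port: 9869
:sessions: memcache
:memcache_host: localhost
:memcache_port: 11211
:vnc_proxy_port: 29876
:vnc_proxy_support_wss: no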

Starting Sunstone

To start Sunstone just issue the following command as oneadmin:

# service opennebula-sunstone start

You can find the Sunstone server log file in /var/log/one/sunstone.log. Errors are logged in
/var/log/one/sunstone.error.

5.2. Sunstone Installation & Configuration 59

5.2.4 Commercial Support Integration

We are aware that in production environments, access to professional, efficient support is a must, and this is why we have introduced an integrated tab in Sunstone to access OpenNebula Systems (the company behind OpenNebula, formerly C12G) professional support. In this way, support ticket management can be performed through Sunstone, avoiding disruption of work and enhancing productivity.

This tab can be disabled in each one of the view yaml files:

enabled_tabs:
    [...]
    #- support-tab

5.2.5 Troubleshooting

Cannot connect to OneFlow server

The Service instances and templates tabs may show the following message:

Cannot connect to OneFlow server

You need to start the OneFlow component following this section, or disable the Service and Service Templates menu entries in the Sunstone views yaml files.
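If the OneFlow package is installed, starting the component is usually just a matter of starting its service. This is only a sketch; the opennebula-flow service name is an assumption based on the standard packaging and may differ on your distribution:

# service opennebula-flow start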

VNC Troubleshooting

When clicking the VNC icon, the process of starting a session begins:
• A request is made and, if a VNC session is possible, the Sunstone server will add the VM Host to the list of allowed vnc session targets and create a random token associated to it.
• The server responds with the session token, then a noVNC dialog pops up.
• The VNC console embedded in this dialog will try to connect to the proxy either using websockets (default) or emulating them using Flash. Only connections providing the right token will be successful. The token expires and cannot be reused.
• The proxy will redirect the websocket data from the VNC proxy port to the VNC port stated in the template of the VM.

There can be multiple reasons that may prevent noVNC from correctly connecting to the machines. Here's a checklist of common problems:
• noVNC requires Python >= 2.5 for the websockets proxy to work. You may also need additional modules such as python2<version>-numpy.
• You must have a GRAPHICS section in the VM template enabling VNC, as stated in the documentation (a minimal example is shown after this checklist). Make sure the attribute IP is set correctly (0.0.0.0 to allow connections from everywhere), otherwise no connections will be allowed from the outside.
• Your browser must support websockets, and have them enabled. This is the default in current Chrome and Firefox, but former versions of Firefox (i.e. 3.5) required manual activation. Otherwise Flash emulation will be used.
• Make sure there are no firewalls blocking the connections. The value of the proxy port is defined in sunstone-server.conf.
• Make sure that you can connect directly from the Sunstone frontend to the VM using a normal VNC client tool such as vncviewer.
• You can retrieve useful information from /var/log/one/novnc.log.
• When using secure websockets, make sure that your certificate and key (if not included in the certificate) are correctly set in the Sunstone configuration files. Note that your certificate must be valid and trusted for the wss connection to work. If you are working with a certificate that is not accepted by the browser, you can manually add it to the browser trust-list visiting https://sunstone.server.address:vnc_proxy_port. The browser will warn that the certificate is not secure and prompt you to manually trust it.
• If your connection is very, very, very slow, there might be a token expiration issue. Please try the manual proxy launch as described below to check it.
• Doesn't work yet? Try launching Sunstone, killing the websockify proxy and relaunching the proxy manually in a console window with the command that is logged at the beginning of /var/log/one/novnc.log. You must generate a lock file containing the PID of the python process in /var/lock/one/.novnc.lock. Leave it running and click on the VNC icon in Sunstone for the same VM again. You should see some output from the proxy in the console and hopefully the cause of why the connection does not work.
• Please contact the support forum only when you have gone through the suggestions above, and provide full sunstone logs, shown errors and any relevant information about your infrastructure (if there are firewalls, etc.).
• The message "SecurityError: The operation is insecure." is usually related to a Same-Origin-Policy problem. If you have Sunstone TLS secured and try to connect to an insecure websocket for VNC, Firefox blocks that. For Firefox, you need to have both connections secured to not get this error. And don't use a self-signed certificate for the server; this would raise the error again (you can set up your own little CA, that works, but don't use a self-signed server certificate). The other option would be to go into the Firefox config (about:config) and set "network.websocket.allowInsecureFromHTTPS" to "true".
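As a reference for the checklist above, a minimal GRAPHICS section enabling VNC in a VM template can look like the following; listening on 0.0.0.0 is an assumption that matches the proxy setup described here and should be restricted if your policy requires it:

GRAPHICS = [
  TYPE   = "VNC",
  LISTEN = "0.0.0.0"
]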

5.2.6 Tuning & Extending

Internationalization and Languages

Sunstone supports multiple languages. If you want to contribute a new language, make corrections or complete a translation, you can visit our:

• Transifex project page

Translating through Transifex is easy and quick. All translations should be submitted via Transifex. Users can update or contribute translations anytime.

Prior to every release, normally after the beta release, a call for translations will be made in the forum. Then the source strings will be updated in Transifex so all the translations can be updated to the latest OpenNebula version. Translations with an acceptable level of completeness will be added to the final OpenNebula release.

Customize the VM Logos

The VM Templates have an image logo to identify the guest OS. To modify the list of available logos, or to add new ones, edit /etc/one/sunstone-logos.yaml:

- { 'name': "Arch Linux",      'path': "images/logos/arch.png"}
- { 'name': "CentOS",          'path': "images/logos/centos.png"}
- { 'name': "Debian",          'path': "images/logos/debian.png"}
- { 'name': "Fedora",          'path': "images/logos/fedora.png"}
- { 'name': "Linux",           'path': "images/logos/linux.png"}
- { 'name': "Redhat",          'path': "images/logos/redhat.png"}
- { 'name': "Ubuntu",          'path': "images/logos/ubuntu.png"}
- { 'name': "Windows XP/2003", 'path': "images/logos/windowsxp.png"}
- { 'name': "Windows 8",       'path': "images/logos/windows8.png"}

Branding the Sunstone Portal

You can easily add your logos to the login and main screens by updating the logo: attribute as follows:

• The login screen is defined in /etc/one/sunstone-views.yaml.
• The logo of the main UI screen is defined for each view in the view yaml file.

sunstone-views.yaml

OpenNebula Sunstone can be adapted to different user roles. For example, it will only show the resources the users have access to. Its behavior can be customized and extended via views.

The preferred method to select which views are available to each group is to update the group configuration from Sunstone, as described in the Sunstone Views section. There is also the /etc/one/sunstone-views.yaml file that defines an alternative method to set the view for each user or group.

Sunstone will calculate the views available to each user using:

• From all the groups the user belongs to, the views defined inside each group are combined and presented to the user.
• If no views are available from the user's groups, the default views will be used.

When views are not set through the group configuration, the defaults would be fetched from /etc/one/sunstone-views.yaml. Here, views can be defined for:

  – Each user (users: section), list each user and the set of views available for her.
  – Each group (groups: section), list the set of views for the group.
  – The default view, if a user is not listed in the users: section, nor its group in the groups: section, the default views will be used.
  – The default views for group admins, if a group admin user is not listed in the users: section, nor its group in the groups: section, the default_groupadmin views will be used.

By default users in the oneadmin group have access to all views, and users in the users group can use the cloud view.

The following /etc/one/sunstone-views.yaml example enables the user (user.yaml) and the cloud (cloud.yaml) views for helen, and the cloud (cloud.yaml) view for the group cloud-users. If more than one view is available for a given user the first one is the default:

---
logo: images/opennebula-sunstone-v4.0.png
users:
    helen:
        - cloud
        - user
groups:
    cloud-users:
        - cloud
default:
    - user
default_groupadmin:
    - groupadmin
    - cloud

A Different Endpoint for Each View

OpenNebula Sunstone views can be adapted to deploy a different endpoint for each kind of user, for example if you want an endpoint for the admins and a different one for the cloud users. You will just have to deploy a new sunstone server and set a default view for each sunstone instance:

# Admin sunstone
cat /etc/one/sunstone-server.conf
  ...
  :host: admin.sunstone.com
  ...

cat /etc/one/sunstone-views.yaml
  ...
  users:
  groups:
  default:
    - admin

# Users sunstone
cat /etc/one/sunstone-server.conf
  ...
  :host: user.sunstone.com
  ...

cat /etc/one/sunstone-views.yaml
  ...
  users:
  groups:
  default:
    - user

5.3 Sunstone Views

Using the OpenNebula Sunstone Views you will be able to provide a simplified UI aimed at end-users of an OpenNebula cloud. The OpenNebula Sunstone Views are fully customizable, so you can easily enable or disable specific information tabs or action buttons. You can define multiple views for different user groups. Each view defines a set of UI components so each user just accesses and views the relevant parts of the cloud for her role.

The OpenNebula Sunstone Views can be grouped into two different layouts. On one hand, the classic Sunstone layout exposes a complete view of the cloud, allowing administrators and advanced users to have full control of any physical or virtual resource of the cloud. On the other hand, the cloud layout exposes a simplified version of the cloud where end-users will be able to manage any virtual resource of the cloud, without taking care of the physical resources management.

5.3.1 Default Views

Admin View

This view provides full control of the cloud. Details can be configured in the /etc/one/sunstone-views/admin.yaml file.

Admin vCenter View

Based on the Admin View, it is designed to present the valid operations against a vCenter infrastructure to a cloud administrator. Details can be configured in the /etc/one/sunstone-views/admin_vcenter.yaml file.

Group Admin View

Based on the Admin View, it provides control of all the resources belonging to a group, but with no access to resources outside that group, that is, restricted to the physical and virtual resources of the group. This view features the ability to create new users within the group as well as set and keep track of user quotas. For more information on how to configure this scenario see this section.

Cloud View

This is a simplified view mainly intended for end-users that just require a portal where they can provision new virtual machines easily from pre-defined Templates. For more information about this view, please check the /etc/one/sunstone-views/cloud.yaml file.

In this scenario the cloud administrator must prepare a set of templates and images and make them available to the cloud users. These Templates must be ready to be instantiated, that is, the user doesn't have to know any details of the infrastructure such as networking or storage. Users can optionally customize the VM capacity, add new network interfaces and provide values required by the template. For more information on how to configure this scenario see this section.

vCenter Cloud View

Based on the Cloud View, this view is designed to present the valid operations against a vCenter infrastructure to a cloud end-user.

User View

Based on the Admin View, it is an advanced user view. It is intended for users that need access to more actions than the limited set available in the cloud view. Users will not be able to manage nor retrieve the hosts and clusters of the cloud. They will be able to see Datastores and Virtual Networks in order to use them when creating a new Image or Virtual Machine, but they will not be able to create new ones. Details can be configured in the /etc/one/sunstone-views/user.yaml file.

5.3.2 Configuring Access to the Views

By default, the admin view is only available to the oneadmin group. New users will be included in the users group and will use the default cloud view. The views assigned to a given group can be defined in the group creation form or by updating an existing group to implement different OpenNebula models. For more information on the different OpenNebula models please check the Understanding OpenNebula documentation.
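As a sketch of how a group's views can be set outside the Sunstone group form, the group template holds a SUNSTONE section (attribute names assumed from the default group configuration, and 100 is a hypothetical group ID; verify both on your installation):

$ cat group_views.txt
SUNSTONE = [
  DEFAULT_VIEW             = "cloud",
  VIEWS                    = "cloud, user",
  GROUP_ADMIN_DEFAULT_VIEW = "groupadmin",
  GROUP_ADMIN_VIEWS        = "groupadmin"
]

$ onegroup update 100 group_views.txt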

5.3.3 Usage

Sunstone users can change their current view from the top-right drop-down menu.

They can also configure several options from the settings tab:

• Views: change between the different available views

• Language: select the language that they want to use for the UI.
• Use secure websockets for VNC: try to connect using secure websockets when starting VNC sessions.
• Display Name: if the user wishes to customize the username that is shown in Sunstone, it is possible to do so by adding a special parameter named SUNSTONE_DISPLAY_NAME with the desired value. It is worth noting that Cloud Administrators may want to automate this with a hook on user create in order to fetch the user name from outside OpenNebula.

These options are saved in the user template, as well as other hidden settings, like for instance the attribute that lets Sunstone remember the number of items displayed in the datatables per user. If not defined, defaults from /etc/one/sunstone-server.conf are taken.

5.3.4 Defining a New OpenNebula Sunstone View or Customizing an Existing one

View definitions are placed in the /etc/one/sunstone-views directory. Each view is defined by a configuration file, in the form:

<view_name>.yaml

The name of the view will be the filename without the yaml extension.

/etc/one/
...
|-- sunstone-views/
|   |-- admin.yaml    <--- the admin view
|   `-- cloud.yaml    <--- the cloud view
`-- sunstone-views.yaml
...

Note: The easiest way to create a custom view is to copy the admin.yaml or cloud.yaml file and then harden it as needed.
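For example, a minimal way to create a custom view based on the admin one (the view name "myview" is just an illustration) is:

$ cp /etc/one/sunstone-views/admin.yaml /etc/one/sunstone-views/myview.yaml
$ vi /etc/one/sunstone-views/myview.yaml    # disable the tabs/actions you do not want
# service opennebula-sunstone restart       # restart Sunstone so it picks up the new file

The new view can then be assigned to users or groups as described above.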

Admin View Customization

The contents of a view file specify the enabled features, visible tabs, and enabled actions.

For the dashboard, the following widgets can be configured:

# The following widgets can be used inside any of the '_per_row' settings
# below. As the name suggests, the widgets will be scaled to fit one,
# two, or three per row. The footer uses the widgets at full size, i.e.
# one per row.
#
# - storage
# - users
# - network
# - hosts
# - vms
# - groupquotas
# - quotas
panel_tabs:
  actions:
    Dashboard.refresh: false
    Sunstone.toggle_top: false
widgets_one_per_row:
  - vms
  - hosts
  - users
widgets_three_per_row:
widgets_two_per_row:
  - storage
  - network
widgets_one_footer:

Inside features there are two settings:

• showback: When this is false, all Showback features are hidden: the monthly report tables, and the cost for new VMs in the create VM wizard.
• secgroups: If true, the create VM wizard will allow to add security groups to each network interface.

features:
  # True to show showback monthly reports, and VM cost
  showback: true
  # Allows to change the security groups for each network interface
  # on the VM creation dialog
  secgroups: true

This file also defines the tabs available in the view (note: a tab is one of the main sections of the UI, those in the left-side menu). Each tab can be enabled or disabled by updating the enabled_tabs: attribute. For example, to disable the Clusters tab, comment the clusters-tab entry:

enabled_tabs:
  - dashboard-tab
  - instances-top-tab
  - vms-tab

  - oneflow-services-tab
  - vrouters-tab
  - templates-top-tab
  - templates-tab
  - oneflow-templates-tab
  - storage-top-tab
  - datastores-tab
  - images-tab
  - files-tab
  - marketplaces-tab
  - marketplaceapps-tab
  - network-top-tab
  - vnets-tab
  - vnets-topology-tab
  - secgroups-tab
  - infrastructure-top-tab
  #- clusters-tab
  - hosts-tab
  - zones-tab
  - system-top-tab
  - users-tab
  - groups-tab
  - vdcs-tab
  - acls-tab
  - settings-tab
  - support-tab

Each tab can be tuned by selecting:

• The individual resource tabs available (panel_tabs: attribute) in the tab; these are the tabs activated when an object is selected (e.g. the information, or capacity tabs in the Virtual Machines tab).
• The columns shown in the main information table (table_columns: attribute).
• The action buttons available to the view (actions: attribute).

The attributes in each of the above sections should be self-explanatory. As an example, the following section defines a simplified datastore tab, without the info panel_tab and no action buttons:

datastores-tab:
  panel_tabs:
    datastore_info_tab: false
    datastore_image_tab: true
    datastore_clusters_tab: false
  table_columns:
    - 0         # Checkbox
    - 1         # ID
    - 2         # Owner
    - 3         # Group
    - 4         # Name
    - 5         # Capacity
    #- 6        # Cluster
    #- 7        # Basepath
    #- 8        # TM
    #- 9        # DS
    - 10        # Type
    - 11        # Status
    #- 12       # Labels
  actions:

    Datastore.refresh: true
    Datastore.create_dialog: false
    Datastore.import_dialog: false
    Datastore.addtocluster: false
    Datastore.rename: false
    Datastore.chown: false
    Datastore.chgrp: false
    Datastore.chmod: false
    Datastore.delete: false
    Datastore.enable: false
    Datastore.disable: false

Cloud View Customization

The cloud layout can also be customized by changing the corresponding /etc/one/sunstone-views/ yaml files. In this file you can customize the options available when instantiating a new template, the dashboard setup or the resources available for cloud users.

Features

• showback: When this is false, all Showback features are hidden: the monthly report tables, and the cost for new VMs in the create VM wizard.
• secgroups: If true, the create VM wizard will allow to add security groups to each network interface.

features:
  # True to show showback monthly reports, and VM cost
  showback: true
  # Allows to change the security groups for each network interface
  # on the VM creation dialog
  secgroups: true

Resources

The list of VMs is always visible. The list of VM Templates and OneFlow Services can be hidden with the provision_tabs setting:

tabs:
  provision-tab:
    provision_tabs:
      flows: true
      templates: true

Dashboard

The dashboard can be configured to show the user's quotas, group quotas, an overview of the user's VMs, and an overview of the group's VMs:

tabs:
  dashboard:
    # Connected user's quotas

    quotas: true
    # Overview of connected user's VMs
    vms: true
    # Group's quotas
    vdcquotas: false
    # Overview of group's VMs
    vdcvms: false

Create VM Wizard

The create VM wizard can be configured with the following options:

tabs:
  create_vm:
    # True to allow capacity (CPU, MEMORY, VCPU) customization
    capacity_select: true
    # True to allow NIC customization
    network_select: true
    # True to allow DISK size customization
    disk_resize: true

Actions

The actions available for a given VM can be customized and extended by modifying the yaml file. You can even insert VM panels from the admin view into this view, for example to use the disk snapshots or scheduled actions.

• Hiding the delete button:

tabs:
  provision-tab:
    ...
    actions: &provisionactions
      ...
      VM.delete: false

• Using undeploy instead of power off:

tabs:
  provision-tab:
    ...
    actions: &provisionactions
      ...
      VM.poweroff: false
      VM.poweroff_hard: false
      VM.shutdown_hard: false
      VM.undeploy: true
      VM.undeploy_hard: true

• Adding panels from the admin view, for example the disk snapshots tab:

tabs:
  provision-tab:
    panel_tabs:
      ...

      vm_snapshot_tab: true
      ...
    actions: &provisionactions
      ...
      VM.disk_snapshot_create: true
      VM.disk_snapshot_revert: true
      VM.disk_snapshot_delete: true

5.4 User Security and Authentication

By default Sunstone works with the core authentication method (user and password), although you can configure any authentication mechanism supported by OpenNebula. In this section you will learn how to enable other authentication methods and how to secure the Sunstone connections through SSL.

5.4.1 Authentication Methods

Authentication is two-folded:

• Web client and Sunstone server. Authentication is based on the credentials stored in the OpenNebula database for the user. Depending on the type of these credentials the authentication method can be: sunstone, x509 and opennebula (supporting LDAP or other custom methods).
• Sunstone server and OpenNebula core. The requests of a user are forwarded to the core daemon, including the original user name. Each request is signed with the credentials of a special server user. This authentication mechanism is based either on symmetric key cryptography (default) or x509 certificates. Details on how to configure these methods can be found in the Cloud Authentication section.

The following sections detail the client-to-Sunstone server authentication methods.

Basic Auth

In the basic mode, username and password are matched to those in OpenNebula's database in order to authorize the user at the time of login. Rack cookie-based sessions are then used to authenticate and authorize the requests.

To enable this login method, set the :auth: option of /etc/one/sunstone-server.conf to sunstone:

:auth: sunstone

OpenNebula Auth

Using this method the credentials included in the header will be sent to the OpenNebula core and the authentication will be delegated to the OpenNebula auth system, using the specified driver for that user. Therefore any OpenNebula auth driver can be used through this method to authenticate the user (i.e: LDAP). The sunstone configuration is:

:auth: opennebula

x509 Auth

This method performs the login to OpenNebula based on a x509 certificate DN (Distinguished Name). The DN is extracted from the certificate and matched to the password value in the user database.

The user password has to be changed running one of the following commands:

$ oneuser chauth new_user x509 "/C=ES/O=ONE/OU=DEV/CN=clouduser"

or the same command using a certificate file:

$ oneuser chauth new_user --x509 --cert /tmp/my_cert.pem

New users with this authentication method should be created as follows:

$ oneuser create new_user "/C=ES/O=ONE/OU=DEV/CN=clouduser" --driver x509

or using a certificate file:

$ oneuser create new_user --x509 --cert /tmp/my_cert.pem

To enable this login method, set the :auth: option of /etc/one/sunstone-server.conf to x509:

:auth: x509

The login screen will not display the username and password fields anymore, as all information is fetched from the user certificate.

Note that OpenNebula will not verify that the user is holding a valid certificate at the time of login: this is expected to be done by the external container of the Sunstone server (normally Apache), whose job is to tell the user's browser that the site requires a user certificate and to check that the certificate is consistently signed by the chosen Certificate Authority (CA).

Warning: The Sunstone x509 auth method only handles the authentication of the user at the time of login. Authentication of the user certificate is a complementary setup, which can rely on Apache.

Remote Auth

This method is similar to x509 auth. It performs the login to OpenNebula based on a Kerberos REMOTE_USER. The USER@DOMAIN is extracted from the REMOTE_USER variable and matched to the password value in the user database. To use Kerberos authentication, users need to be configured with the public driver. Note that this will prevent users from authenticating through the XML-RPC interface; only Sunstone access will be granted to these users.

To update existing users to use the Kerberos authentication, change the driver to public and update the password as follows:

$ oneuser chauth new_user public "new_user@DOMAIN"

New users with this authentication method should be created as follows:

$ oneuser create new_user "new_user@DOMAIN" --driver public

To enable this login method, set the :auth: option of /etc/one/sunstone-server.conf to remote:

:auth: remote

The login screen will not display the username and password fields anymore, as all information is fetched from the Kerberos server or a remote authentication service.

Note that OpenNebula will not verify that the user is holding a valid Kerberos ticket at the time of login: this is expected to be done by the external container of the Sunstone server (normally Apache), whose job is to tell the user's browser that the site requires a valid ticket to login.

Warning: The Sunstone remote auth method only handles the authentication of the user at the time of login. Authentication of the remote ticket is a complementary setup, which can rely on Apache.

5.4.2 Configuring a SSL Proxy

OpenNebula Sunstone runs natively just on normal HTTP connections. If the extra security provided by SSL is needed, a proxy can be set up to handle the SSL connection that forwards the petition to the Sunstone server and takes back the answer to the client.

This set up needs:

• A server certificate for the SSL connections
• An HTTP proxy that understands SSL
• OpenNebula Sunstone configuration to accept petitions from the proxy

If you want to try out the SSL setup easily, you can find in the following lines an example to set a self-signed certificate to be used by a web server configured to act as an HTTP proxy to a correctly configured OpenNebula Sunstone.

Let's assume the server where the proxy is going to be started is called cloudserver.org. Therefore, the steps are:

Step 1: Server Certificate (Snakeoil)

We are going to generate a snakeoil certificate. If using an Ubuntu system follow the next steps (otherwise your mileage may vary, but not a lot):

• Install the ssl-cert package

# apt-get install ssl-cert

• Generate the certificate

# /usr/sbin/make-ssl-cert generate-default-snakeoil

• As we are using lighttpd, we need to append the private key to the certificate to obtain a server certificate valid for lighttpd

# cat /etc/ssl/private/ssl-cert-snakeoil.key /etc/ssl/certs/ssl-cert-snakeoil.pem > /etc/lighttpd/server.pem
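Before moving on, you can optionally inspect the certificate that was just generated (a standard openssl invocation; the path matches the snakeoil certificate created above):

$ openssl x509 -noout -subject -dates -in /etc/ssl/certs/ssl-cert-snakeoil.pem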

Step 2: SSL HTTP Proxy

lighttpd

You will need to edit the /etc/lighttpd/lighttpd.conf configuration file and:

• Add the following modules (if not present already)
  – mod_access
  – mod_alias
  – mod_proxy
  – mod_accesslog
  – mod_compress
• Change the server port to 443 if you are going to run lighttpd as root, or any number above 1024 otherwise:

server.port = 8443

• Add the proxy module section:

#### proxy module
## read proxy.txt for more info
proxy.server = ( "" =>
                 ("" =>
                  (
                    "host" => "127.0.0.1",
                    "port" => 9869
                  )
                 )
               )

#### SSL engine
ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/server.pem"

The host must be the server hostname of the computer running the Sunstone server, and the port the one that the Sunstone Server is running on.

nginx

You will need to configure a new virtual host in nginx. Depending on the operating system and the method of installation, nginx loads virtual host configurations from either /etc/nginx/conf.d or /etc/nginx/sites-enabled.

• A sample cloudserver.org virtual host is presented next:

#### OpenNebula Sunstone upstream
upstream sunstone {
    server 127.0.0.1:9869;
}

#### cloudserver.org HTTP virtual host
server {
    listen 80;

    server_name cloudserver.org;

    ### Permanent redirect to HTTPS (optional)
    return 301 https://$server_name:8443;
}

#### cloudserver.org HTTPS virtual host
server {
    listen 8443;
    server_name cloudserver.org;

    ### SSL Parameters
    ssl on;
    ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
    ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;

    ### Proxy requests to upstream
    location / {
        proxy_pass http://sunstone;
    }
}

The IP address and port number used in upstream must be the one of the server Sunstone is running on. On typical installations the nginx master process is run as user root so you don't need to modify the HTTPS port.

Step 3: Sunstone Configuration

Edit /etc/one/sunstone-server.conf to listen at localhost:9869:

:host: 127.0.0.1
:port: 9869

Once the proxy server is started, OpenNebula Sunstone requests using HTTPS URIs can be directed to https://cloudserver.org:8443, that will then be unencrypted, passed to localhost, port 9869, satisfied (hopefully), encrypted again and then passed back to the client.

5.5 Cloud Servers Authentication

Todo: Mention oneflow

OpenNebula ships with two servers: Sunstone and EC2. When a user interacts with one of them, the server authenticates the request and then forwards the requested operation to the OpenNebula daemon. The forwarded requests between the servers and the core daemon include the original user name, and are signed with the credentials of a special server user.

In this guide this request forwarding mechanism is explained, and how it is secured with a symmetric-key algorithm or x509 certificates.

5.5.1 Server Users

The Sunstone and EC2 services communicate with the core using a server user. OpenNebula creates the serveradmin account at bootstrap, with the authentication driver server_cipher (symmetric key).

This server user uses a special authentication mechanism that allows the servers to perform an operation on behalf of another user.

You can strengthen the security of the requests from the servers to the core daemon by changing the server user's driver to server_x509. This is especially relevant if you are running your server in a machine other than the frontend.

Please note that you can have as many users with a server_* driver as you need. For example, you may want to have Sunstone configured with a user with the server_x509 driver, and EC2 with server_cipher.

Warning: When Sunstone is running in a different machine than oned you should use an SSL connection. This can be achieved with an SSL proxy like stunnel or apache/nginx acting as proxy. After securing the OpenNebula XML-RPC connection, configure Sunstone to use https with the proxy port:

:one_xmlrpc: https://frontend:2634/RPC2

5.5.2 Symmetric Key

Enable

This mechanism is enabled by default. To use it, you need a user with the driver server_cipher; by default you will have a user named serveradmin with driver server_cipher. Enable it in the relevant configuration file in /etc/one:

• Sunstone: /etc/one/sunstone-server.conf
• EC2: /etc/one/econe.conf

:core_auth: cipher

Configure

You must update the configuration files in /var/lib/one/.one if you change the serveradmin's password, or create a different user with the server_cipher driver.

$ ls -1 /var/lib/one/.one
ec2_auth
sunstone_auth

$ cat /var/lib/one/.one/sunstone_auth
serveradmin:1612b78a4843647a4b541346f678f9e1b43bbcf9

Warning: The serveradmin password is hashed in the database. You can use the --sha1 flag when issuing the oneuser passwd command for this user.
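As a sketch of the usual sequence implied by the warning above (verify it against the Cloud Servers Authentication guide before applying; the password value is only an example):

$ oneuser passwd serveradmin --sha1 the_new_password
$ echo "serveradmin:the_new_password" > /var/lib/one/.one/sunstone_auth
$ echo "serveradmin:the_new_password" > /var/lib/one/.one/ec2_auth
# service opennebula-sunstone restart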

5.5.3 x509 Encryption

Enable

To enable it, change the authentication driver of the serveradmin user, or create a new user with the driver server_x509:

$ oneuser chauth serveradmin server_x509
$ oneuser passwd serveradmin --x509 --cert usercert.pem

The serveradmin account should look like:

$ oneuser list
  ID GROUP    NAME         AUTH      PASSWORD
   0 oneadmin oneadmin     core      c24783ba96a35464632a624d9f829136edc0175e
   1 oneadmin serveradmin  server_x  /C=ES/O=ONE/OU=DEV/CN=server

Then edit the relevant configuration file in /etc/one:

• Sunstone: /etc/one/sunstone-server.conf
• EC2: /etc/one/econe.conf

:core_auth: x509

Configure

To trust the serveradmin certificate, the CA's certificate must be added to the ca_dir defined in /etc/one/auth/x509_auth.conf. See the x509 Authentication guide for more information.

$ openssl x509 -noout -hash -in cacert.pem
78d0bbd8

$ sudo cp cacert.pem /etc/one/auth/certificates/78d0bbd8.0

You need to edit /etc/one/auth/server_x509_auth.conf and uncomment all the fields. The defaults should work:

# User to be used for x509 server authentication
:srv_user: serveradmin

# Path to the certificate used by the OpenNebula Services
# Certificates must be in PEM format
:one_cert: "/etc/one/auth/cert.pem"
:one_key: "/etc/one/auth/pk.pem"

Copy the certificate and the private key to the paths set in :one_cert: and :one_key:, or simply update the paths.

5.5.4 Tuning & Extending

Files

You can find the drivers in these paths:

• /var/lib/one/remotes/auth/server_cipher/authenticate
• /var/lib/one/remotes/auth/server_server/authenticate

Authentication Session String

OpenNebula users with the driver server_cipher or server_x509 use a special authentication session string (the first parameter of the XML-RPC calls). A regular authentication token is in the form:

username:secret

Whereas a user with a server_* driver must use this token format:

username:target_username:secret

The core daemon understands a request with this authentication session token as "perform this operation on behalf of target_user". The secret part of the token is signed with one of the two mechanisms explained before.

5.6 Configuring Sunstone for Large Deployments

Low to medium enterprise clouds will typically deploy Sunstone in a single machine along with the OpenNebula daemons. However this simple deployment can be improved by:

• Isolating the access from Web clients to the Sunstone server. This can be achieved by deploying the Sunstone server in a separated machine.
• Improving the scalability of the server for large user pools, usually by deploying Sunstone in a separate application container in one or more hosts.

Check also the api scalability guide, as those tips also have an impact on Sunstone performance.

5.6.1 Deploying Sunstone in a Different Machine

By default the Sunstone server is configured to run in the frontend, but you are able to install the Sunstone server in a machine different from the frontend.

• You will need to install only the sunstone server packages in the machine that will be running the server. If you are installing from source use the -s option for the install.sh script.
• Make sure the :one_xmlrpc: variable in sunstone-server.conf points to the right place where the OpenNebula frontend is running. You can also leave it undefined and export the ONE_XMLRPC environment variable.
• If you want to upload files to OpenNebula, you will have to share the upload directory (/var/tmp by default) between sunstone and oned. Some servers do not take into account the TMPDIR env var and this directory must be defined in the configuration file, for example in Passenger (client_body_temp_path).
• Provide the serveradmin credentials in the following file: /var/lib/one/.one/sunstone_auth. If you changed the serveradmin password please check the Cloud Servers Authentication guide.

$ cat /var/lib/one/.one/sunstone_auth
serveradmin:1612b78a4843647a4b541346f678f9e1b43bbcf9

Using this setup the VirtualMachine logs will not be available. If you need to retrieve this information you must deploy the server in the frontend.

5.6.2 Running Sunstone Inside Another Webserver

Self contained deployment of Sunstone (using the sunstone-server script) is ok for small to medium installations. This is no longer true when the service has lots of concurrent users and the number of objects in the system is high (for example, more than 2000 simultaneous virtual machines).

The Sunstone server was modified to be able to run as a rack server. This makes it suitable to run in any web server that supports this protocol; in the ruby world this is the standard supported by most web servers. We now can select web servers that support spawning multiple processes, like unicorn, or embed the service inside apache or nginx web servers using the Passenger module. Another benefit will be the ability to run Sunstone in several servers and balance the load between them.

Warning: Deploying Sunstone behind a proxy in a federated environment requires some specific configuration to properly handle the Sunstone headers required by the Federation.

• nginx: enable underscores_in_headers on; and proxy_pass_request_headers on;

Configuring memcached

When using one of these web servers the use of a memcached server is necessary. Sunstone needs to store user sessions so it does not ask for user/password for every action. By default Sunstone is configured to use memory sessions, that is, the sessions are stored in the process memory. Thin and webrick web servers do not spawn new processes but new threads, and all of them have access to that session pool. When using more than one process to serve Sunstone there must be a service that stores this information and can be accessed by all the processes. In this case we will need to install memcached. It comes with most distributions and its default configuration should be ok.

We will also need to install ruby libraries to be able to access it. The rubygem library needed is memcache-client. If there is no package for your distribution with this ruby library you can install it using rubygems:

$ sudo gem install memcache-client

Then you will have to change in the sunstone configuration (/etc/one/sunstone-server.conf) the value of :sessions to memcache.
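A minimal sketch of the relevant session settings in /etc/one/sunstone-server.conf could look as follows (the memcached host/port/namespace option names are assumed from the shipped configuration file; check yours for the exact defaults):

:sessions: 'memcache'
:memcache_host: 'localhost'
:memcache_port: 11211
:memcache_namespace: 'opennebula.sunstone'

Restart Sunstone (or the web server embedding it) after changing these values.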

If you want to use noVNC you need to have it running. You can start this service with the command:

$ novnc-server start

Another thing you have to take into account is the user the server will run as. The installation sets the permissions for the oneadmin user and group, and files like the Sunstone configuration and credentials can not be read by other users. Apache usually runs as the www-data user and group, so to let the server run as this user the group of these files must be changed, for example:

$ chgrp www-data /etc/one/sunstone-server.conf
$ chgrp www-data /etc/one/sunstone-plugins.yaml
$ chgrp www-data /var/lib/one/.one/sunstone_auth
$ chmod a+x /var/lib/one
$ chmod a+x /var/lib/one/.one
$ chgrp www-data /var/log/one/sunstone*
$ chmod g+w /var/log/one/sunstone*

We advise to use Passenger in your installation, but we will show you how to run Sunstone inside the unicorn web server as an example. We will provide the instructions for the Apache web server, but the steps will be similar for nginx following the Passenger documentation. For more information on web servers that support rack you can check the rack documentation page. You can alternatively check a list of ruby web servers.

Running Sunstone with Unicorn

To get more information about this web server you can go to its web page. It is a multi process web server that spawns new processes to deal with requests.

The installation is done using rubygems (or with your package manager if it is available):

$ sudo gem install unicorn

In the directory where the Sunstone files reside (/usr/lib/one/sunstone or /usr/share/opennebula/sunstone) there is a file called config.ru. This file is specific for rack applications and tells how to run the application. To start a new server using unicorn you can run this command from that directory:

$ unicorn -p 9869

Default unicorn configuration should be ok for most installations, but a configuration file can be created to tune it. For example, to tell unicorn to spawn 4 processes and write stderr to /tmp/unicorn.log, we can create a file called unicorn.conf that contains:

worker_processes 4
logger debug
stderr_path '/tmp/unicorn.log'

and start the server and daemonize it using:

$ unicorn -d -p 9869 -c unicorn.conf

You can find more information about the configuration options in the unicorn documentation.

Running Sunstone with Passenger in Apache

Phusion Passenger is a module for Apache and Nginx web servers that runs ruby rack applications. This can be used to run the Sunstone server and will manage all its life cycle. If you are already using one of these servers or just feel comfortable with one of them, we encourage you to use this method. This kind of deployment adds better concurrency and lets us add an https endpoint.

First thing you have to do is install Phusion Passenger. For this you can use pre-made packages for your distribution or follow the installation instructions from their web page. The installation is self explanatory and will guide you in all the process; follow it and you will be ready to run Sunstone.
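If no distribution package is available, one possible route is the rubygems installation (commands from the Passenger project; the Apache module build step will prompt for any missing dependency and print the LoadModule/PassengerRoot lines to add to your Apache configuration):

$ sudo gem install passenger
$ sudo passenger-install-apache2-module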

Next thing we have to do is configure the virtual host that will run our Sunstone server. We have to point to the public directory from the Sunstone installation, here is an example:

<VirtualHost *:80>
  ServerName sunstone-server
  PassengerUser oneadmin
  # !!! Be sure to point DocumentRoot to 'public'!
  DocumentRoot /usr/lib/one/sunstone/public
  <Directory /usr/lib/one/sunstone/public>
     # This relaxes Apache security settings.
     AllowOverride all
     # MultiViews must be turned off.
     Options -MultiViews
     # Uncomment this if you're on Apache >= 2.4:
     #Require all granted
  </Directory>
</VirtualHost>

Note: When you're experiencing login problems you might want to set PassengerMaxInstancesPerApp 1 in your passenger configuration or try memcached, since Sunstone does not support sessions across multiple server instances.

Now the configuration should be ready. Restart -or reload the apache configuration- to start the application, and point to the virtual host to check if everything is running.

Running Sunstone behind nginx SSL Proxy

How to set things up with an nginx ssl proxy for sunstone and encrypted vnc:

# No squealing.
server_tokens off;

# OpenNebula Sunstone upstream
upstream sunstone {
    server 127.0.0.1:9869;
}

# HTTP virtual host, redirect to HTTPS
server {
    listen 80 default_server;
    return 301 https://$server_name:443;
}

# HTTPS virtual host, proxy to Sunstone
server {
    listen 443 ssl default_server;
    ssl_certificate /etc/ssl/certs/opennebula-certchain.pem;
    ssl_certificate_key /etc/ssl/private/opennebula-key.pem;
    ssl_stapling on;
}

And these are the changes that have to be made to sunstone-server.conf:

UI Settings

:vnc_proxy_port: 29876
:vnc_proxy_support_wss: only
:vnc_proxy_cert: /etc/one/ssl/opennebula-certchain.pem

:vnc_proxy_key: /etc/one/ssl/opennebula-key.pem
:vnc_proxy_ipv6: false

If using a self-signed cert, the connection to the VNC window in Sunstone will fail. Either get a real cert, or manually accept the self-signed cert in your browser before trying it with Sunstone. You will need to have your browser trust that certificate, on both the 443 and 29876 ports of the OpenNebula IP or FQDN. Afterwards, VNC sessions should show "encrypted" in the title.

Running Sunstone with Passenger using FreeIPA/Kerberos auth in Apache

It is also possible to use Sunstone remote authentication with Apache and Passenger. The configuration in this case is quite similar to the Passenger configuration, but we must include the Apache auth module line. As an example, to include Kerberos authentication we can use two different modules: mod_auth_gssapi or mod_authnz_pam, and generate the keytab for the http service. How to configure the freeIPA server and Kerberos is outside of the scope of this document; you can get more info in the FreeIPA Apache setup example.

Here is an example with Passenger:

LoadModule auth_gssapi_module modules/mod_auth_gssapi.so

<VirtualHost *:80>
  ServerName sunstone-server
  PassengerUser oneadmin
  # !!! Be sure to point DocumentRoot to 'public'!
  DocumentRoot /usr/lib/one/sunstone/public
  <Directory /usr/lib/one/sunstone/public>
     # Only possible to access this dir using a valid ticket
     AuthType GSSAPI
     AuthName "EXAMPLE.COM login"
     GssapiCredStore keytab:/etc/http.keytab
     Require valid-user
     ErrorDocument 401 '<html><meta http-equiv="refresh" content="0; URL=https://yourdomain"><body>Kerberos authentication did not pass.</body></html>'
     AllowOverride all
     # MultiViews must be turned off.
     Options -MultiViews
  </Directory>
</VirtualHost>

Note: The user must generate a valid ticket by running kinit to get access to the Sunstone service. You can also set a custom 401 document to warn users about any authentication failure.

Now our configuration is ready to use Passenger and Kerberos. Restart -or reload the apache configuration- and point to the virtual host using a valid ticket to check if everything is running.

Running Sunstone in Multiple Servers

You can run Sunstone in several servers and use a load balancer that connects to them. Make sure you are using memcache for sessions and that both Sunstone servers connect to the same memcached server. To do this, change the parameter :memcache_host in the configuration file. Also make sure that both Sunstone instances connect to the same OpenNebula server.

MarketPlace

If you plan on using the MarketPlaceApp download functionality, the Sunstone server(s) will need access to the MarketPlace backends.

If you are using Phusion Passenger, take the following recommendations into account:

• Set PassengerResponseBufferHighWatermark to 0.
• Increase PassengerMaxPoolSize. Each MarketPlaceApp download will take one of these application processes.
• If Passenger Enterprise is available, set PassengerConcurrencyModel to thread.

If you are using another backend than Passenger, please port these recommendations to your backend.
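As a sketch, these recommendations translate into Apache/Passenger directives similar to the following (the pool size is an arbitrary example value; the last directive requires Passenger Enterprise and is therefore commented out):

PassengerResponseBufferHighWatermark 0
PassengerMaxPoolSize 24
#PassengerConcurrencyModel thread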

CHAPTER SIX

VMWARE INFRASTRUCTURE SETUP

6.1 Overview

After configuring the OpenNebula front-end and the vCenter nodes, the next step is to learn what capabilities can be leveraged from the vCenter infrastructure and fine tune the OpenNebula cloud to make use of them. This Chapter gives a detailed view of the vCenter drivers, the resources they manage and how to set up OpenNebula to leverage different vCenter features.

6.1.1 How Should I Read This Chapter

You should be reading this chapter after performing the vCenter node install.

This Chapter is organized in the vCenter Node Section, which introduces the vCenter integration approach under the point of view of OpenNebula, with a description of how to import, create and use VM Templates, resource pools, limitations and so on; the Networking Setup Section, which glosses over the network consumption and usage; and then the Datastore Setup Section, which introduces the concepts of OpenNebula datastores as related to vCenter datastores, and the VMDK image management made by OpenNebula.

After reading this Chapter, you can delve into advanced topics like OpenNebula upgrade, logging, and scalability in the Reference Chapter. The next step should be proceeding to the Operations guide to learn how the Cloud users can consume the cloud resources that have been set up.

6.1.2 Hypervisor Compatibility

All this Chapter applies exclusively to the vCenter hypervisor.

6.2 vCenter Node

The vCenter driver for OpenNebula enables the interaction with vCenter to control the life-cycle of vCenter resources such as Virtual Machines, Templates, Networks and VMDKs.

6.2.1 OpenNebula approach to vCenter interaction

OpenNebula consumes resources from vCenter, and with exceptions (VMs and VMDKs) does not create these resources, but rather offers mechanisms to import them into OpenNebula to be controlled. The Virtualization Subsystem is the component in charge of talking with the hypervisor and taking the actions needed for each step in the VM life-cycle.

Virtual Machines are deployed from VMware VM Templates that must exist previously in vCenter. There is a one-to-one relationship between each VMware VM Template and the equivalent OpenNebula Template. Users will then instantiate the OpenNebula Templates, where you can easily build any provisioning strategy (e.g. access control, quota...). Therefore there is no need to convert your current Virtual Machines or import/export them through any process; once ready, just save them as VM Templates in vCenter, following this procedure.

Note: After a VM Template is cloned and booted into a vCenter Cluster it can access VMware advanced features and it can be managed through the OpenNebula provisioning portal -to control the life-cycle, add/remove NICs, make snapshots- or through vCenter (e.g. to move the VM to another datastore or migrate it to another ESX). OpenNebula will poll vCenter to detect these changes and update its internal representation accordingly.

Networking is handled by creating Virtual Network representations of the vCenter networks. OpenNebula additionally can handle, on top of these networks, three types of Address Ranges: Ethernet, IPv4 and IPv6. This networking information can be passed to the VMs through the contextualization process.

The behavior of vCenter resources when deleted in OpenNebula differs per resource. The following resources are NOT deleted in vCenter when deleted in OpenNebula:

• VM Templates
• Networks
• Datastores

The following resources are deleted in vCenter when deleted in OpenNebula:

• Images
• Virtual Machines

6.2.2 Considerations & Limitations

• Unsupported Operations: The following operations are NOT supported on vCenter VMs managed by OpenNebula, although they can be performed through vCenter:

  Operation       Note
  migrate         VMs cannot be migrated between ESX clusters
  disk snapshots  Only system snapshots are available for vCenter VMs

• No Security Groups: Firewall rules as defined in Security Groups cannot be enforced in vCenter VMs.
• No files in context: Passing entire files to VMs is not supported, but all the other CONTEXT sections will be honored.
• OpenNebula treats snapshots a tad different than VMware. OpenNebula assumes that they are independent, whereas VMware builds them incrementally. This means that OpenNebula will still present snapshots that are no longer valid if one of their parent snapshots is deleted, and thus revert operations applied upon them will fail.
• vCenter credential password cannot have more than 22 characters.
• Cluster names cannot contain spaces.
• Image names cannot contain spaces.
• Datastore names cannot contain spaces.

• If you are running Sunstone using nginx/apache you will have to forward the following headers to be able to interact with vCenter: HTTP_X_VCENTER_USER, HTTP_X_VCENTER_PASSWORD and HTTP_X_VCENTER_HOST (or, alternatively, X_VCENTER_USER, X_VCENTER_PASSWORD and X_VCENTER_HOST). For example, in nginx you have to add the following attrs to the server section of your nginx file: underscores_in_headers on; proxy_pass_request_headers on;
• Attaching a new CDROM ISO will add a new (or change the existing) ISO to an already existing CDROM drive that needs to be present in the VM.

6.2.3 Configuring

The vCenter virtualization driver configuration file is located in /etc/one/vmm_exec/vmm_exec_vcenter.conf. This file is home for default values for OpenNebula VM templates. It is generally a good idea to place defaults for the vCenter-specific attributes, that is, attributes mandatory in the vCenter driver that are not mandatory for other hypervisors. Non mandatory attributes for vCenter but specific to it are also recommended to have a default.

6.2.4 Importing vCenter VM Templates and running VMs

The onevcenter tool can be used to import existing VM templates from the ESX clusters:

$ onevcenter templates --vcenter <vcenter-host> --vuser <vcenter-username> --vpass <vcenter-password>

Connecting to vCenter: <vcenter-host>...done!

Looking for VM Templates...done!

Do you want to process datacenter Development [y/n]? y

* VM Template found:
    - Name    : ttyTemplate
    - UUID    : 421649f3-92d4-49b0-8b3e-358abd18b7dc
    - Cluster : clusterA
  Import this VM template [y/n]? y
  OpenNebula template 4 created!

* VM Template found:
    - Name    : Template test
    - UUID    : 4216d5af-7c51-914c-33af-1747667c1019
    - Cluster : clusterB
  Import this VM template [y/n]? y
  OpenNebula template 5 created!

$ onetemplate list
  ID USER     GROUP    NAME           REGTIME
   4 oneadmin oneadmin ttyTemplate    09/22 11:54:33
   5 oneadmin oneadmin Template test  09/22 11:54:35

$ onetemplate show 5
TEMPLATE 5 INFORMATION
ID             : 5
NAME           : Template test
USER           : oneadmin
GROUP          : oneadmin

REGISTER TIME  : 09/22 11:54:35

PERMISSIONS
OWNER          : um-
GROUP          : ---
OTHER          : ---

TEMPLATE CONTENTS
CPU="1"
MEMORY="512"
PUBLIC_CLOUD=[
  TYPE="vcenter",
  VM_TEMPLATE="4216d5af-7c51-914c-33af-1747667c1019" ]
SCHED_REQUIREMENTS="NAME=\"devel\""
VCPU="1"

After a vCenter VM Template is imported as an OpenNebula VM Template, it can be modified to change the capacity in terms of CPU and MEMORY, the name, permissions, etc. It can also be enriched to add:

• New disks
• New network interfaces
• Context information

Before using your OpenNebula cloud you may want to read about the vCenter specifics.

To import existing VMs, the 'onehost importvm' command can be used. VMs in running state can be imported, and also VMs defined in vCenter that are not in power.on state (this will import the VMs in OpenNebula as in the poweroff state).

$ onehost show 0
HOST 0 INFORMATION
ID                    : 0
NAME                  : MyvCenterHost
CLUSTER               : -
[...]
WILD VIRTUAL MACHINES
      NAME                             IMPORT_ID  CPU  MEMORY
 RunningVM  4223cbb1-34a3-6a58-5ec7-a55db235ac64    1    1024
[...]

$ onehost importvm 0 RunningVM
$ onevm list
  ID USER     GROUP    NAME       STAT UCPU  UMEM HOST           TIME
   3 oneadmin oneadmin RunningVM  runn    0  590M MyvCenterHost  0d 01h02

After a Virtual Machine is imported, its life-cycle (including creation of snapshots) can be controlled through OpenNebula. The following operations cannot be performed on an imported VM:

• Recover --recreate
• Undeploy (and Undeploy --hard)
• Migrate (and Migrate --live)
• Stop

Running VMs with open VNC ports are imported with the ability to establish a VNC connection to them via OpenNebula. To activate the VNC ports, you need to right click on the VM in vCenter while it is shut down, click on "Edit Settings", and set the following remotedisplay.* settings:

• remotedisplay.vnc.enabled must be set to TRUE.
• remotedisplay.vnc.ip must be set to 0.0.0.0 (or alternatively, the IP of the OpenNebula front-end).
• remotedisplay.vnc.port must be set to an available VNC port number.
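For reference, the resulting advanced configuration parameters of the VM would look similar to this (the port value is only an example; pick any VNC port that is free on the ESX host):

remotedisplay.vnc.enabled = "TRUE"
remotedisplay.vnc.ip      = "0.0.0.0"
remotedisplay.vnc.port    = "5905"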

Note: running VMs can only be imported after the vCenter host has been successfully acquired.

Running and Powered Off VMs can be imported through the WILDS tab in the Host info tab. Also, network management operations are available, like the ability to attach/detach network interfaces, as well as capacity (CPU and MEMORY) resizing operations and VNC connections if the ports are opened beforehand.

The same import mechanism is available graphically through Sunstone for hosts, networks, templates and running VMs: vCenter hosts can be imported using the vCenter host create dialog, and Networks and VM Templates through the Import button in the Virtual Networks and Templates tabs respectively.

6.2.5 Resource Pool

OpenNebula can place VMs in different Resource Pools. There are two approaches to achieve this: a fixed per Cluster basis or a flexible per VM Template basis.

In the fixed per Cluster basis approach, the vCenter credentials that OpenNebula uses can be confined into a Resource Pool, to allow only a fraction of the vCenter infrastructure to be used by OpenNebula users. The steps to confine OpenNebula users into a Resource Pool are:

• Create a new vCenter user
• Create a Resource Pool in vCenter and assign the subset of Datacenter hardware resources wanted to be exposed through OpenNebula
• Give the vCenter user Resource Pool Administration rights over the Resource Pool
• Give the vCenter user Resource Pool Administration (or equivalent) over the Datastores the VMs are going to be running on

Afterwards, these credentials can be used to add to OpenNebula the host representing the vCenter cluster. Add a new tag called VCENTER_RESOURCE_POOL to the host template representing the vCenter cluster (for instance, in the info tab of the host, or in the CLI), with the name of the Resource Pool.

The second approach is more flexible in the sense that all Resource Pools defined in vCenter can be used, and the mechanism to select which one the VM is going to reside in can be defined using the attribute RESOURCE_POOL in the OpenNebula VM Template.

Nested Resource Pools can be represented using '/'. For instance, a Resource Pool "RPChild" nested under "RPAncestor" can be represented both in the VCENTER_RESOURCE_POOL and RESOURCE_POOL attributes as "RPAncestor/RPChild":

RESOURCE_POOL="RPAncestor/RPChild"
PUBLIC_CLOUD=[
  HOST="Cluster",
  TYPE="vcenter",
  VM_TEMPLATE="4223067b-ed9b-8f73-82ba-b1a98c3ff96e" ]

6.3 vCenter Datastore

The vCenter datastore allows the representation in OpenNebula of VMDK images available in vCenter datastores. It is a persistent only datastore, meaning that VMDK images are not cloned automatically by OpenNebula when a VM is instantiated; this is delegated to vCenter instead. vCenter handles the VMDK image copies, so no system datastore is needed in OpenNebula, and only vCenter image datastores are allowed.

The vCenter datastore in OpenNebula is tied to a vCenter OpenNebula host, in the sense that all operations to be performed in the datastore are going to be performed through the vCenter instance associated to the OpenNebula host, which happens to hold the needed credentials to access the vCenter instance.

The OpenNebula vCenter datastore is a purely persistent images datastore to allow for VMDK cloning and enable disk attach/detach on running VMs. No system datastore is needed since the vCenter support in OpenNebula does not rely on transfer managers to copy VMDK images; when a VM Template is instantiated, vCenter performs the VMDK copies, and deletes them after the VM ends its life-cycle. Creation of empty datablocks and VMDK image cloning are supported, as well as image deletion.

vCenter datastores can be represented in OpenNebula to achieve the following VM operations:

• Choose a different datastore for VM deployment
• Clone VMDK images
• Create empty datablocks
• Delete VMDK images

6.3.1 Limitations

• No support for snapshots in the vCenter datastore.
• Only one disk is allowed per directory in the vCenter datastores.
• Datastore names cannot contain spaces.
• Image names and paths cannot contain spaces.
• Datastores that form DRS Clusters are not supported.

6.3.2 Requirements

In order to use the vCenter datastore, the following requirements need to be met:

• All the ESX servers controlled by vCenter need to mount the same VMFS datastore with the same name.
• The ESX servers need to be part of the Cluster controlled by OpenNebula.

6.3.3 Configuration

In order to create an OpenNebula vCenter datastore that represents a vCenter VMFS datastore, a new OpenNebula datastore needs to be created with the following attributes:

• The OpenNebula vCenter datastore name needs to be exactly the same as the vCenter VMFS datastore available in the ESX hosts.
• The specific attributes for this datastore driver are listed in the following table:

Attribute        Description
DS_MAD           Must be set to vcenter.
TM_MAD           Must be set to vcenter.
VCENTER_CLUSTER  Name of the OpenNebula host that represents the vCenter cluster that groups the ESX hosts that mount the represented VMFS datastore.
ADAPTER_TYPE     Default adapter type used by virtual disks to plug into VMs for the images in the datastore. It is inherited by images and can be overwritten if specified explicitly in the image. Possible values (careful with the case): lsiLogic, ide, busLogic. Known as "Bus adapter controller" in Sunstone. More information in the VMware documentation.
DISK_TYPE        Type of disk to be created when a DATABLOCK is requested. This value is inherited from the datastore to the image but can be explicitly overwritten. The type of disk has implications on performance and occupied space. Values (careful with the case): delta, eagerZeroedThick, flatMonolithic, preallocated, raw, rdm, rdmp, seSparse, sparse2Gb, sparseMonolithic, thick, thick2G, thin. Known as "Disk Provisioning Type" in Sunstone. More information in the VMware documentation.

All OpenNebula datastores are actively monitored, and the scheduler will refuse to deploy a VM onto a vCenter datastore with insufficient free space.
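Putting the above together, a datastore definition could be created from the CLI like this (a sketch: "datastore2" and "Cluster" are the example names used elsewhere in this Chapter, and ADAPTER_TYPE/DISK_TYPE are optional defaults):

$ cat vcenter_datastore.one
NAME            = "datastore2"
DS_MAD          = vcenter
TM_MAD          = vcenter
VCENTER_CLUSTER = "Cluster"

$ onedatastore create vcenter_datastore.one
ID: 100

Remember that the NAME must match the vCenter VMFS datastore name exactly.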

Total MB : 132352 . etc) is known by OpenNebula.Name : datastore2 . The onevcenter tool can be used to import vCenter datastores: $ onevcenter datastores --vuser <VCENTER_USER> --vpass <VCENTER_PASS> --vcenter ˓→<VCENTER_FQDN> Connecting to vCenter: vcenter. IPv4 and IPv6.0 Deployment guide. will be handled by OpenNebula. These NICs will be invisible for OpenNebula.done! Do you want to process datacenter Datacenter [y/n]? y * Datastore found: . vCenter Networking 95 ... but rather only OpenNebula representations of such Virtual Networks.4 vCenter Networking Virtual Networks from vCenter can be represented using OpenNebula networks. vCenter VM Templates can define their own NICs.0. or through the attach_nic operation.done! Looking for Datastores. and therefore cannot be detached from 6. OpenNebula additionally can handle on top of these networks three types of Address Ranges: Ethernet. any NIC added in the OpenNebula VM Template.3. However. and as such it is subject to be detached and its information (IP. OpenNebula will use these networks with the defined characteristics.. and OpenNebula will not manage them. taking into account that the BRIDGE of the Virtual Network needs to match the name of the Network defined in vCenter.4. and as such can consume any vCenter defined network resource (even those created by other networking components like for instance NSX). Virtual Networks in vCenter can be created using the vCenter web client.4 Tuning and Extending Drivers can be easily customized please refer to the specific guide for each datastore driver or to the Storage subsystem developer’s guide. Release 5. OpenNebula 5. OpenNebula supports both “Port Groups” and “Distributed Port Groups”. vCenter VM Templates with already defined NICs that reference Networks in vCenter will be imported without this information in OpenNebula.Free MB : 130605 . but it cannot create new Virtual Networks in vCenter..Cluster : Cluster Import this Datastore [y/n]? y OpenNebula datastore 100 created! 6. However you may find the files you need to modify here: • /var/lib/one/remotes/datastore/vcenter • /var/lib/one/remotes/tm/vcenter 6. with any specific configuration like for instance VLANs.vcenter3. MAC.2 • Clone VMDKs images • Create empty datablocks • Delete VMDK images All OpenNebula datastores are actively monitoring. and the scheduler will refuse to deploy a VM onto a vCenter datastore with insufficient free space.
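Besides importing, an equivalent datastore can be created by hand with the attributes listed above. The following is a rough sketch only: the datastore and cluster names are illustrative, the NAME must match the VMFS datastore name mounted by the ESX hosts, and VCENTER_CLUSTER must be the OpenNebula host that represents the vCenter cluster:

$ cat vcenter_ds.conf
NAME            = "datastore2"
DS_MAD          = vcenter
TM_MAD          = vcenter
VCENTER_CLUSTER = "Cluster"

$ onedatastore create vcenter_ds.conf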

..done! Do you want to process datacenter vOneDatacenter [y/n]? y * Network found: ..2 the VMs.4. Release 5.done! Looking for vCenter networks. OpenNebula 5.[E]thernet) ? E Please input the first MAC in the range [Enter for default]: OpenNebula virtual network 29 created with size 45! $ onevnet list ID USER GROUP NAME CLUSTER BRIDGE LEASES 29 oneadmin oneadmin MyvCenterNetwork . vCenter Networking 96 .1 Importing vCenter Networks The onevcenter tool can be used to import existing Networks and distributed vSwitches from the ESX clusters: $ onevcenter networks --vcenter <vcenter-host> --vuser <vcenter-username> --vpass ˓→<vcenter-password> Connecting to vCenter: <vcenter-host>. MyFakeNe 0 $ onevnet show 29 VIRTUAL NETWORK 29 INFORMATION ID : 29 NAME : MyvCenterNetwork USER : oneadmin GROUP : oneadmin CLUSTER : - BRIDGE : MyvCenterNetwork VLAN : No USED LEASES : 0 PERMISSIONS OWNER : um- GROUP : --- OTHER : --- VIRTUAL NETWORK TEMPLATE BRIDGE="MyvCenterNetwork" PHYDEV="" VCENTER_TYPE="Port Group" VLAN="NO" VLAN_ID="" ADDRESS RANGE POOL AR TYPE SIZE LEASES MAC IP GLOBAL_PREFIX 0 ETHER 45 0 02:00:97:7f:f0:87 . The imported VM Templates in OpenNebula can be updated to add NICs from Virtual Networks imported from vCenter (being Networks or Distributed vSwitches).0 Deployment guide.0. to add them later on in the OpenNebula VM Templates 6.IPv[6]. - 6. We recommend therefore to use VM Templates in vCenter without defined NICs.4..Type : Port Group Import this Network [y/n]? y How many VMs are you planning to fit into this network [255]? 45 What type of Virtual Network do you want to create (IPv[4].Name : MyvCenterNetwork .

LEASES
AR OWNER MAC              IP IP6_GLOBAL

The same import mechanism is available graphically through Sunstone.
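If you prefer to define the OpenNebula representation of a vCenter network by hand rather than importing it, a minimal Virtual Network template would look roughly like the sketch below. The key requirement from this section is that BRIDGE matches the name of the Port Group (or Distributed Port Group) defined in vCenter; the network name and address range used here are illustrative:

$ cat mynet.txt
NAME   = "MyvCenterNetwork"
BRIDGE = "MyvCenterNetwork"
AR     = [ TYPE = "ETHER", SIZE = "45" ]

$ onevnet create mynet.txt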

CHAPTER SEVEN

OPEN CLOUD HOST SETUP

7.1 Overview

The Hosts are servers with a hypervisor installed (KVM) which execute the running Virtual Machines. These Hosts are managed by the KVM Driver, which will perform the actions needed to manage the VMs and their life-cycle. This chapter analyses the KVM driver in detail, and will give you, amongst other things, the tools to configure and add KVM hosts into the OpenNebula Cloud.

7.1.1 How Should I Read This Chapter

Before reading this chapter, you should have already installed your Frontend and the KVM Hosts, and have an OpenNebula cloud up and running with at least one virtualization node.

This chapter will focus on the configuration options for the Hosts.

• Read the KVM driver section in order to understand the procedure of configuring and managing KVM Hosts.
• In the Monitoring section, you can find information about how OpenNebula is monitoring its Hosts and Virtual Machines, and the changes you can make in the configuration of that subsystem.
• You can read the PCI Passthrough section if you are interested in performing PCI Passthrough.

After reading this chapter, you should read the Open Cloud Storage chapter.

7.1.2 Hypervisor Compatibility

This chapter applies only to KVM. Follow the vCenter Node section for a similar guide for vCenter.

7.2 KVM Driver

KVM (Kernel-based Virtual Machine) is the hypervisor for OpenNebula's Open Cloud Architecture. KVM is a complete virtualization system for Linux. It offers full virtualization, where each Virtual Machine interacts with its own virtualized hardware. This guide describes the use of KVM with OpenNebula.
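As a reference for the rest of the chapter, once a node has been prepared following the KVM Node Installation section it is registered in OpenNebula with the KVM information and virtualization drivers. A minimal sketch of the command, with an illustrative host name, is:

$ onehost create node01 -i kvm -v kvm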

EXECUTABLE = "one_vmm_exec". suspend. ARGUMENTS = "-t 15 -r 0 kvm". If you change them you will need to restart OpenNebula. and it will not work while the VM is running (live disk-attach).2 Considerations & Limitations Try to use virtio whenever possible. terminate-hard.2.g. unresched. resume. in order to support virtualization.2. cancel. DEFAULT = "vmm_exec/vmm_exec_kvm. meaning that you have a limit when attaching disks. delete. reboot. snap-create. TYPE = "kvm". hold. resched. if your # CPU does not have virtualization extensions or use nested Qemu-KVM hosts #------------------------------------------------------------------------------- VM_MAD = [ NAME = "kvm".1 Requirements The Hosts will need a CPU with Intel VT or AMD’s AMD-V features. Release 5. will have an impact in performance and will not expose all the available functionality. e. -l. 7. KEEP_SNAPSHOTS= "no". needs support from hypervisor # -s <shell> to execute remote commands. nic-attach. release. number of hosts monitored at the same time # -l <actions[=command_name]> actions executed locally. For instance. reboot-hard. both for networks and disks. shutdown. 7. KVM’s Preparing to use KVM guide will clarify any doubts you may have regarding if your hardware supports KVM.2.e. restore. nic-detach. -t. Drivers The KVM driver is enabled by default in OpenNebula: #------------------------------------------------------------------------------- # KVM Virtualization Driver Manager Configuration # -r number of retries when monitoring a host # -t number of threads. Using emulated hardware. IMPORTED_VMS_ACTIONS = "terminate.2 7.save" # -p more than one action per host in parallel. SUNSTONE_NAME = "KVM". both for networks and disks.3 Configuration KVM Configuration The OpenNebula packages will configure KVM automatically. bash by default # # Note: You can use type = "qemu" to use qemu emulated guests. save. disk-attach. poll # An example: "-l migrate=migrate_local. OpenNebula 5. KVM will be installed and configured after following the KVM Host Installation section.0.0 Deployment guide. therefore you don’t need to take any extra steps. if you don’t use virtio for the disk drivers. you will not be able to exceed a small number of devices connected to the controller. snap-delete" ] The configuration parameters: -r. i. -p and -s are already preconfigured with sane defaults. 7. disk-detach. KVM Driver 99 . migrate.2. command can be # overridden for each action.conf". # Valid actions: deploy.

and execute onehost sync --force afterwards: MIGRATE_OPTIONS=--unsafe 7. and how to customize and extend the drivers. • SPICE: to add default devices for SPICE. Driver Defaults There are some attributes required for KVM to boot a VM. KERNEL_CMD. • OS: attributes KERNEL.0.2.redhat. ROOT. The following can be set for KVM: • EMULATOR: path to the kvm executable. You can enable the migration adding the --unsafe parameter to the virsh command. BOOT. ACPI = "yes". • HYPERV: to enable hyperv extensions. The file to change is /var/lib/one/remotes/vmm/kvm/kvmrc. These attributes are set in /etc/one/vmm_exec/vmm_exec_kvm. APIC = "no". CACHE = "none"] HYPERV_OPTIONS="<relaxed state='on'/><vapic state='on'/><spinlocks state='on' retries= ˓→'4096'/>" SPICE_OPTIONS=" <video> <model type='qxl' heads='1'/> </video> <sound model='ich6' /> <channel type='spicevmc'> <target type='virtio' name='com. HYPERV = "no". Release 5.conf. All disks will use that driver and caching algorithm. MACHINE and ARCH. For example: OS = [ ARCH = "x86_64" ] FEATURES = [ PAE = "no". You can set a suitable defaults for them so. • DISK: attributes DRIVER and CACHE.spice. KVM Driver 100 .0 Deployment guide. Uncomment the following line. • NIC: attribute FILTER.0'/> </channel> <redirdev bus='usb' type='spicevmc'/> <redirdev bus='usb' type='spicevmc'/> <redirdev bus='usb' type='spicevmc'/>" Live-Migration for Other Cache settings In case you are using disks with a cache setting different to none you may have problems with live migration de- pending on the libvirt version. PAE. all the VMs get needed values. • RAW: to add libvirt attributes to the domain XML file. • VCPU • FEATURES: attributes ACPI. GUEST_AGENT = "no" ˓→] DISK = [ DRIVER = "raw" .2 Read the Virtual Machine Drivers Reference for more information about these parameters. OpenNebula 5. INITRD.

g. • (Optional) You may want to limit the total memory devoted to VMs. This feature is useful when a VM gets stuck in Shutdown (or simply does not notice the shutdown command). Cgroups is a kernel feature that allows you to control the amount of resources allocated to a given process (among other things). Cgroups can be also used to limit the overall amount of physical RAM that the VMs can use. thanks to cgroups a VM with CPU=0. So.conf group virt { memory { memory. this should be performed in the hosts. cpuacct = /mnt/cgroups/cpuacct. as defined in its template. cpuset = /mnt/cgroups/cpuset.0. Release 5. Be sure to assign libvirt processes to this group.2. } } mount { cpu = /mnt/cgroups/cpu. wih CGROUP_DAEMON or in cgrules. By default. Create a group for the libvirt processes (VMs) and the total memory you want to assign to them.2 Configure the Timeouts (Optional) Optionally.conf. memory = /mnt/cgroups/memory.0. devices = /mnt/cgroups/devices. Please refer to the cgroups documentation of your Linux distribution for specific details. you can set a timeout for the VM Shutdown operation can be set up. This is configured in /var/lib/one/remotes/vmm/kvm/kvmrc: # Seconds to wait after shutdown until timeout export SHUTDOWN_TIMEOUT=300 # Uncomment this line to force VM cancellation after shutdown timeout #export FORCE_DESTROY=yes Working with cgroups (Optional) Warning: This section outlines the configuration and use of cgroups with OpenNebula and libvirt/KVM.5 will get half of the physical CPU cycles than a VM with CPU=1. so you can leave always a fraction to the host OS. Example: # /etc/cgconfig. e. not in the front- end: • Define where to mount the cgroup controller virtual file systems. KVM Driver 101 . blkio = /mnt/cgroups/blkio. after the timeout time the VM will return to Running state but is can also be configured so the VM is destroyed after the grace time. The following outlines the steps need to configure cgroups.conf *:libvirtd memory virt/ 7. This feature can be used to enforce the amount of CPU assigned to a VM. OpenNebula 5. at least memory and cpu are needed.limit_in_bytes = 5120M. } # /etc/cgrules.0 Deployment guide.

7.. "memory".2 • Enable cgroups support in libvirt by adding this configuration to /etc/libvirt/qemu.."/dev/hpet". "cpuset".2.vcpu0 | |-.one-74 | |-. "cpuacct" ] cgroup_device_acl = [ "/dev/null". If everything is properly configured you should see: /mnt/cgroups/cpu/sysdefault/libvirt/qemu/ |-.tasks and the cpu shares for each VM: > cat /mnt/cgroups/cpu/sysdefault/libvirt/qemu/one-73/cpu. KVM Driver 102 .0.shares 1024 VCPUs are not pinned so most probably the virtual process will be changing the core it is using.event_control | |-. "devices".. "/dev/ptmx".shares | . | `-. "blkio".. you may want to set RESERVED_MEM parameter in host or cluster templates.cgroup. "/dev/random".cgroup.. "/dev/vfio/vfio" ] • After configuring the hosts start/restart the cgroups service then restart the libvirtd service.cpu.clone_children | |-.cgroup.shares |-. `-. In an ideal case where the VM is alone in the physical host the total amount of CPU consumed will be equal to VCPU plus any overhead of virtualization (for example networking).clone_children | |-.clone_children | .procs | |-.cgroup.one-73 | |-.stat |-.conf cgroup_controllers = [ "cpu".cgroup.procs | |-.cpu. In this case cgroups will do a fair share of CPU time between VMs (a VM with CPU=2 will get double the time as a VM with CPU=1).. "/dev/zero".shares | .cgroup.5 and CPU=1) respectively. OpenNebula automatically generates a number of CPU shares proportional to the CPU attribute in the VM template. • (Optional) If you have limited the amount of memory for VMs.0 Deployment guide. That’s it. |-... consider a host running 2 VMs (73 and 74. "/dev/urandom". "/dev/rtc". For example.shares 512 > cat /mnt/cgroups/cpu/sysdefault/libvirt/qemu/one-74/cpu.clone_children | ..cgroup. "/dev/kqemu".cgroup. In case there are more VMs in that physical node and is heavily used then the VMs will compete for physical CPU time. "/dev/kvm".cpu.cpu. with CPU=0.vcpu0 | |-. Release 5. "/dev/full". OpenNebula 5.event_control | |-.. | `-.cgroup.conf: # /etc/libvirt/qemu. |-.event_control .notify_on_release |-.

• CACHE: specifies the optional cache mechanism. none. cdrom or floppy. It corresponds to the ifname option of the ‘-net’ argument of the kvm command. It corresponds to the script option of the ‘-net’ argument of the kvm command. variable SPICE_OPTIONS. possible values are: disk (default). KVM Driver 103 . libvirt and KVM can work with SPICE (check this for more information).2. The configuration can be changed in the driver configuration file. Check the Libvirt documentation for more information. NIC • TARGET: name for the tun device created for the VM. • IO: set IO policy possible values are threads and native. you can also list the rules in your system with: $ virsh -c qemu:///system nwfilter-list Graphics If properly configured. OpenNebula 5. please refer to the template reference documentation for a complete list of the attributes supported to define a VM. • MODEL: ethernet hardware to emulate.. You can get the list of available models with this command: $ kvm -net nic. DISK • TYPE: This attribute defines the type of the media to be exposed to the VM. • SCRIPT: name of a shell script to be executed after creating the tun device for the VM.2 In case you are not overcommiting (CPU=VCPU) all the virtual CPUs will have one physical CPU (even if it’s not pinned) so they could consume the number of VCPU assigned minus the virtualization overhead and any process running in the host OS.0.2.. Libvirt includes some predefined rules (e. possible values are default. To select it. clean- traffic) that can be used. writethrough and writeback. Release 5. possible values are raw. 7.4 Usage KVM Specific Attributes The following are template attributes specific to KVM. This attribute corresponds to the media option of the -driver argument of the kvm command.g. • DRIVER: specifies the format of the disk image. This attribute corresponds to the format option of the -driver argument of the kvm command. qcow2. just add to the GRAPHICS attribute: • TYPE = SPICE Enabling spice will also make the driver inject specific configuration for these machines. 7.model=? -nographic /dev/null • FILTER to define a network filtering rule for the interface.0 Deployment guide.

If you want to use the virtio drivers add the following attributes to your devices: • DISK. Basically. The agent package needed in the Guest OS is available in most distributions. Release 5. Is called qemu-guest-agent in most of them. You will need a linux kernel with the virtio drivers for the guest. If TARGET is passed instead of DEV_PREFIX the same rules apply (what happens behind the scenes is that Open- Nebula generates a TARGET based on the DEV_PREFIX if no TARGET is provided).2. This way the snapshot won’t contain half written data.0 Deployment guide. It # will be used in case the attached disk does not have an specific cache # method set (can be set using templates when attaching a disk). everything placed here will be written literally into the KVM deployment file (use libvirt xml format and semantics). if the guest OS is a Linux flavor. For disks. • vd: virtio (recommended). • sd: SCSI (default).0. The configuration for the default cache type on newly attached disks is configured in /var/lib/one/remotes/vmm/kvm/kvmrc: # This parameter will set the default cache type for new attached disks. OpenNebula 5.2 Virtio Virtio is the framework for IO virtualization in KVM. KVM Driver 104 . If you need more information you can follow these links: 7. add the attribute MODE="virtio" Additional Attributes The raw attribute offers the end user the possibility of passing by attributes not known by OpenNebula to KVM. One of the interesting actions is that it allows to freeze the filesystem before doing an snapshot. Filesystem freeze will only be used with CEPH and qcow2 storage drivers. This can be done issuing the following command as root: # echo 1 > /sys/bus/pci/rescan Enabling QEMU Guest Agent QEMU Guest Agent allows the communication of some actions with the guest OS. add the attribute DEV_PREFIX="vd" • NIC. check the KVM documentation for more info. DEFAULT_ATTACH_CACHE=none For Disks and NICs. the bus the disk will be attached to is inferred from the DEV_PREFIX attribute of the disk template. RAW = [ type = "kvm". data = "<devices><serial type=\"pty\"><source path=\"/dev/pts/5\"/><target ˓→port=\"0\"/></serial><console type=\"pty\" tty=\"/dev/pts/5\"><source path=\"/dev/ ˓→pts/5\"/><target port=\"0\"/></console></devices>" ] Disk/Nic Hotplugging KVM supports hotplugging to the virtio and the SCSI buses. This agent uses a virtio serial connection to send and receive commands. the guest needs to be explicitly tell to rescan the PCI bus.

the “Poweroff” operation is not available for these imported VMs in KVM.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_ Deployment_and_Administration_Guide/chap-QEMU_Guest_Agent.5 Tuning & Extending Multiple Actions per Host Warning: This feature is experimental. besides the limitations explained in the host guide.org/page/Qemu_guest_agent • http://wiki.conf: RAW = "<devices><channel type='unix'><source mode='bind'/><target type='virtio' name= ˓→'org.2 • https://access. there will be only one deployment done per host at a given time.qemu.0 Deployment guide. This limitation can be solved configuring libvirt to accept TCP connections and OpenNebula to use this communication method. To make sure this happens the drivers are configured to send only one action per host at a time.qemu.0'/></channel></devices>" Importing VMs VMs running on KVM hypervisors that were not launched through OpenNebula can be imported in OpenNebula.0. It is important to highlight that. 7. Release 5. By default the drivers use a unix socket to communicate with the libvirt daemon.html • http://wiki. This method can only be safely used by one process at a time. Change the file /etc/libvirt/libvirtd. OpenNebula 5.guest_agent.org/Features/QAPI/GuestAgent To enable the communication channel with the guest agent this line must be present in /etc/one/vmm_exec/vmm_exec_kvm. Libvirt configuration Here is described how to configure libvirtd to accept unencrypted and unauthenticated TCP connections in a CentOS 7 machine. For other setup check your distribution and libvirt documentation.2. For example.conf in each of the hypervisors and make sure that these parameters are set and have the following values: listen_tls = 0 listen_tcp = 1 tcp_port = "16509" auth_tcp = "none" You will also need to modify /etc/sysconfig/libvirtd and uncomment this line: LIBVIRTD_ARGS="--listen" After modifying these files the libvirt daemon must be restarted: 7.libvirt. KVM Driver 105 .redhat.2. Some modifications to the code must be done before this is a recom- mended setup.

OpenNebula 5. • /var/lib/one/remotes/vmm/kvm : commands executed to perform actions. This is done in /etc/one/oned. default = "vmm_exec/vmm_exec_kvm.0. Non mandatory attributes for KVM but specific to them are also recommended to have a default. attributes mandatory in the KVM driver that are not mandatory for other hypervisors. And the following driver configuration files: • /etc/one/vmm_exec/vmm_exec_kvm.0 Deployment guide. The syntax used for the former is plain shell script that will be evaluated before the driver execution. the syntax is the familiar: ENVIRONMENT_VARIABLE=VALUE 7. OpenNebula templates). that is.2. KVM Driver 106 . arguments = "-t 15 -r 0 kvm -p".conf and the value to change is MAX_HOST For example. to let the scheduler submit 10 VMs per host use this line: MAX_HOST = 10 After this update the remote files in the nodes and restart opennebula: $ onehost sync --force $ sudo systemctl restart opennebula Files and Parameters The driver consists of the following files: • /usr/lib/one/mads/one_vmm_exec : generic VMM driver.conf in the VM_MAD configuration section: VM_MAD = [ name = "kvm". executable = "one_vmm_exec". The file is located at /etc/one/sched.conf". For the latter.conf : This file is home for default values for domain definitions (in other words. • /var/lib/one/remotes/vmm/kvm/kvmrc : This file holds instructions to be executed before the actual driver load to perform specific tasks or to pass environmental variables to the driver.2 $ sudo systemctl restart libvirtd OpenNebula configuration The VMM driver must be configured so it allows more than one action to be executed per host. This can be done adding the parameter -p to the driver executable. type = "kvm" ] Change the file /var/lib/one/remotes/vmm/kvm/kvmrc so set a TCP endpoint for libvirt communication: export LIBVIRT_URI=qemu+tcp://localhost/system The scheduler configuration should also be changed to let it deploy more than one VM per host. Release 5. It is generally a good idea to place defaults for the KVM-specific attributes.

This information is collected by executing a set of static probes provided by OpenNebula.0.3.2. Monitoring 107 . OpenNebula starts a collectd daemon running in the Front-end that listens for UDP connections on port 4124. basic performance indicators. It will be used in case DEFAULT_ATTACH_CACHE the attached disk does not have an specific cache method set (can be set using templates when attaching a disk). using a lightweight communication protocol. Release 5.3 Monitoring This section provides an overview of the OpenNebula monitoring subsystem. This distributed monitoring system resembles the architecture of dedicated monitoring systems. 7. The monitoring subsystem gathers information relative to the Hosts and the Virtual Machines. OpenNebula 5. and a push model.6 Troubleshooting image magic is incorrect When trying to restore the VM from a suspended state this error is returned: libvirtd1021: operation failed: image magic is incorrect It can be fixed by applying: options kvm_intel nested=0 options kvm_intel emulate_invalid_guest_state=0 options kvm ignore_msrs=1 7. The output of these probes is sent to OpenNebula using a push mechanism. Set options for the virsh migrate command MIGRATE_OPTIONS See the Virtual Machine drivers reference for more information.2 The parameters that can be changed here are as follows: Parameter Description LIBVIRT_URI Connection string to libvirtd QEMU_PROTOCOL Protocol used for live migrations Seconds to wait after shutdown until timeout SHUTDOWN_TIMEOUT FORCE_DESTROY Force VM cancellation after shutdown timeout CANCEL_NO_ACPIForce VM’s without ACPI enabled to be destroyed on shutdown This parameter will set the default cache type for new attached disks. as well as Virtual Machine status and capacity consumption. In the first monitoring cycle the OpenNebula connects to the host using ssh and starts a daemon that will execute the probe scripts and sends the collected data to the collectd daemon in the Frontend every specific amount of seconds (configurable with the -i option of the collectd IM_MAD). such as the Host status. 7.3. 7.0 Deployment guide. This way the monitoring subsystem doesn’t need to make new ssh connections to receive data.1 Overview Each host periodically sends monitoring data via UDP to the Frontend which collects it and processes it in a dedicated module.

3.0.3. #------------------------------------------------------------------------------- IM_MAD = [ NAME = "collectd". #------------------------------------------------------------------------------- # This driver CANNOT BE ASSIGNED TO A HOST. 7. # -a Address to bind the collectd sockect (defults 0. OpenNebula 5. Monitoring 108 .0) # -p UDP port to listen for monitor information (default 4124) # -f Interval in seconds to flush collected information (default 5) # -t Number of threads for the server (defult 50) # -i Time in seconds of the monitorization push cycle. EXECUTABLE = "collectd".3.2 If the agent stops in a specific Host. This parameter must # be smaller than MONITORING_INTERVAL. otherwise push monitorization will # not be effective.0 Deployment guide.conf must be configured with the following snippets: collectd must be enabled both for KVM: #------------------------------------------------------------------------------- # Information Collector for KVM IM's.0. Release 5.2 Requirements • The firewall of the Frontend (if enabled) must allow UDP packages incoming from the hosts on port 4124. OpenNebula will detect that no monitorization data is received from that hosts and will restart the probe with SSH.0.3 OpenNebula Configuration Enabling the Drivers To enable this monitoring system /etc/one/oned. 7. 7. and needs to be used with KVM # -h prints this help.

3. Time in seconds between host and VM monitorization. Tue May 24 16:22:07 2016 [Z0][VMM][D]: VM 0 successfully monitored: STATE=a CPU=0. EXECUTABLE = "one_im_ssh".0 ˓→MEMORY=113404 NETRX=648 NETTX=398 Tue May 24 16:22:07 2016 [Z0][InM][D]: Host thost087 (0) successfully monitored. KVM: #------------------------------------------------------------------------------- # KVM UDP-push Information Driver Manager Configuration # -r number of retries when monitoring a host # -t number of threads.e. ARGUMENTS = "-r 3 -t 15 kvm" ] #------------------------------------------------------------------------------- The arguments passed to this driver are: • -r: number of retries when monitoring a host • -t: number of threads. number of hosts monitored at the same time #------------------------------------------------------------------------------- IM_MAD = [ NAME = "kvm". number of hosts monitored at the same time Monitoring Configuration Parameters OpenNebula allows to customize the general behavior of the whole monitoring subsystem: Parameter Description MONITOR. i.4 Troubleshooting Healthy Monitoring System Every (approximately) monitoring_push_cycle of seconds OpenNebula is receiving the monitoring data of every Virtual Machine and of a host like such: Tue May 24 16:21:47 2016 [Z0][InM][D]: Host thost087 (0) successfully monitored. Tue May 24 16:21:47 2016 [Z0][VMM][D]: VM 0 successfully monitored: STATE=a CPU=0.0.0 ˓→MEMORY=113516 NETRX=648 NETTX=468 7.0 Deployment guide. SUNSTONE_NAME = "KVM". OpenNebula 5. It must have a value greater ING_INTERVAL than the manager timer HOST_PER_INTERVAL Number of hosts monitored in each interval.0. 7. otherwise push monitorization will not be effective.3.0.e. This parameter must be smaller than MONITOR- ING_INTERVAL (see below). Release 5. Monitoring 109 .2 ARGUMENTS = "-p 4124 -f 5 -t 50 -i 20" ] #------------------------------------------------------------------------------- Valid arguments for this driver are: • -a: Address to bind the collectd socket (defaults 0.0) • -p: port number • -f: Interval in seconds to flush collected information to OpenNebula (default 5) • -t: Number of threads for the collectd server (defult 50) • -i: Time in seconds of the monitorization push cycle. i.

7. an amount of ~4KB per VM. 7. You can easily write your own probes or modify existing ones.4. Probes are defined for each hypervisor. Release 5. 7. Configuration for your distro may be different. Monitoring Probes For the troubleshooting of errors produced during the execution of the monitoring probes. please see the Information Manager Drivers guide.5 Tuning & Extending Adjust Monitoring Interval Times In order to tune your OpenNebula installation with appropriate values of the monitoring parameters you need to adjust the -i option of the collectd IM_MAD (the monitoring push cycle). Tue May 24 16:22:27 2016 [Z0][VMM][D]: VM 0 successfully monitored: STATE=a CPU=0. please refer to the trou- bleshooting section. OpenNebula 5. Warning: The overall setup state was extracted from a preconfigured Fedora 22 machine.SIZE=1] Tue May 24 16:22:27 2016 [Z0][InM][D]: Host thost087 (0) successfully monitored. You can safely ignore all the VGA related sections. OpenNebula will not be able to write that amount of data to the database. See the Tuning section to fix this. If the system is not working healthily it will be due to the database throughput since OpenNebula will write the monitoring information to a database.SIZE=27] DISK_SIZE=[ID=1.log a host is being monitored actively periodically (every MONITORING_INTERVAL sec- onds) then the monitorization is not working correctly: Tue May 24 16:24:23 2016 [Z0][InM][D]: Monitoring host thost087 (0) Tue May 24 16:25:23 2016 [Z0][InM][D]: Monitoring host thost087 (0) Tue May 24 16:26:23 2016 [Z0][InM][D]: Monitoring host thost087 (0) If this is the case it’s probably because OpenNebula is receiving probes faster than it can process. If the number of virtual machines is too large and the monitoring push cycle too low.4 PCI Passthrough It is possible to discover PCI devices in the Hosts and assign them to Virtual Machines for the KVM hypervisor.3. Remember to synchronize the monitor probes in the hosts using onehost sync as described in the Managing Hosts guide.0.d for KVM.0 ˓→MEMORY=113544 NETRX=648 NETTX=468 However. or if you don’t want to output video signal from them. for PCI devices that are not graphic cards. The setup and environment information is taken from here. Driver Files The probes are specialized programs that obtain the monitor metrics.0 Deployment guide. if in oned. and are located at /var/lib/one/remotes/im/kvm-probes. PCI Passthrough 110 .2 Tue May 24 16:22:11 2016 [Z0][VMM][D]: VM 0 successfully monitored: DISK_ ˓→SIZE=[ID=0.

12 7.4.conf for nvidia GPUs: blacklist nouveau blacklist lbm-nouveau options nouveau modeset=0 alias nouveau off alias lbm-nouveau off Alongside this configuration vfio driver should be loaded passing the id of the PCI cards we want to attach to VMs.blacklist=nouveau Loading vfio Driver in initrd The modules for vfio must be added to initrd. The instructions are made for Intel branded processors but the process should be very similar for AMD.4.d/local. PCI Passthrough 111 . for nvidia Grid K2 GPU we pass the id 10de:11bf. • kernel >= 3. File /etc/modprobe. For Intel processors this is called VT-d and for AMD processors is called AMD-Vi.0.conf: options vfio-pci ids=10de:11bf 7. OpenNebula 5.conf with this line: add_drivers+="vfio vfio_iommu_type1 vfio_pci vfio_virqfd" and regenerate initrd: # dracut --force Driver Blacklisting The same blacklisting done in the kernel parameters must be done in the system configuration.4.d/local.0 Deployment guide.1 Requirements • The host that is going to be used for virtualization needs to support I/O MMU. Release 5. The list of modules are vfio vfio_iommu_type1 vfio_pci vfio_virqfd. The parameter to enable I/O MMU is: intel_iommu=on We also need to tell the kernel to load the vfio-pci driver and blacklist the drivers for the selected cards.driver. For example. for nvidia GPUs we can use these parameters: rd.2 Machine Configuration (Hypervisor) Kernel Configuration The kernel must be configured to support I/O MMU and to blacklist any driver that could be accessing the PCI’s that we want to use in our VMs. For example. if your system uses dracut add the file /etc/dracut. /etc/modprobe.pre=vfio-pci rd.2 7.d/blacklist.driver.conf. For example.

Release 5. "/dev/kvm". PCI Passthrough 112 . The cards are specified with PCI addresses.0 0000:85:00. "/dev/vfio/46". "/dev/urandom".0 0000:05:00. Ad- dresses can be retrieved with lspci command. This script binds a card to vfio.4. "/dev/ptmx". "/dev/zero". 58 and 59 so we add this configuration to /etc/libvirt/qemu. It goes into /usr/local/bin/vfio-bind: #!/bin/sh modprobe vfio-pci for dev in "$@". OpenNebula 5.0 Deployment guide. "/dev/vfio/vfio". "/dev/rtc".service and enabled: [Unit] Description=Binds devices to vfio-pci After=syslog.target qemu Configuration Now we need to give qemu access to the vfio devices for the groups assigned to the PCI cards. "/dev/random". It can be written to /etc/systemd/system/vfio-bind. We can get a list of PCI cards and its I/O MMU group using this command: # find /sys/kernel/iommu_groups/ -type l In our example our cards have the groups 45.target [Service] EnvironmentFile=-/etc/sysconfig/vfio-bind Type=oneshot RemainAfterExit=yes ExecStart=-/usr/local/bin/vfio-bind $DEVICES [Install] WantedBy=multi-user. then echo $dev > /sys/bus/pci/devices/$dev/driver/unbind fi echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id done The configuration goes into /etc/sysconfig/vfio-bind. To add the cards to vfio and assign a group to them we can use the scripts shared in the aforementioned web page.0 0000:84:00. 46.0" Here is a systemd script that executes the script. "/dev/full". "/dev/vfio/45". For example: DEVICES="0000:04:00. "/dev/vfio/58".0. Make sure to prepend the domain that is usually 0000. "/dev/kqemu". "/dev/vfio/59" ] 7.2 vfio Device Binding I/O MMU separates PCI cards into groups to isolate memory operation between devices and VMs.conf: cgroup_device_acl = [ "/dev/null"."/dev/hpet". do vendor=$(cat /sys/bus/pci/devices/$dev/vendor) device=$(cat /sys/bus/pci/devices/$dev/device) if [ -e /sys/bus/pci/devices/\$dev/driver ].

2 7. to find out the available PCI devices.4 Usage The basic workflow is to inspect the host information.0 8086:9c31:0c03 8 Series USB xHCI HC 00:16. 7.0 8086:9c10:0604 8 Series PCI Express Root Port 1 00:1c. or simply CLASS.3 Driver Configuration The only configuration that is needed is the filter for the monitoring probe that gets the list of PCI cards.0. • ADDR: PCI Address. If no hosts match.0 8086:9c43:0601 8 Series LPC Controller 00:1f. Release 5. # # From lspci help: # -d [<vendor>]:[<device>][:<class>] # # For example # # FILTER = '::0300' # all VGA cards # FILTER = '10de::0300' # all NVIDIA VGA cards # FILTER = '10de:11bf:0300' # only GK104GL [GRID K2] # FILTER = '8086::0300.0 8086:9c20:0403 8 Series HD Audio Controller 00:1c.0 8086:08b1:0280 Wireless 7260 • VM: The VM ID using that specific device. • TYPE: Values describing the device.4. These values are used when selecting a PCI device do to passthrough. CLI A new table in onehost show command gives us the list of PCI devices per host. PCI Passthrough 113 . PCI devices can be added by specifying VENDOR. either in the CLI or in Sunstone. The format # is the same as lspci and several filters can be added separated by commas.rb and set a list with the same format as lspci: # This variable contains the filters for PCI card monitoring.0 Deployment guide. For example: PCI DEVICES VM ADDR TYPE NAME 00:00.d/pci. Empty if no VMs are using that device. and to add the desired device to the template. • NAME: Name of the PCI device. an error message will appear in the Scheduler log.0 8086:0a0c:0403 Haswell-ULT HD Audio Controller 00:14.4.0 8086:9c26:0c03 8 Series USB EHCI #1 00:1f.0 8086:0a04:0600 Haswell-ULT DRAM Controller 00:02. To narrow the list a filter configuration can be changed in /var/lib/one/remotes/im/kvm-probes.0 8086:0a16:0300 Haswell-ULT Integrated Graphics Controller 123 00:03. OpenNebula 5.3 8086:9c22:0c05 8 Series SMBus Controller 02:00. By de- fault the probe lists all the cards available in a host. # A nil filter will retrieve all PCI cards.2 8086:9c14:0604 8 Series PCI Express Root Port 3 00:1d.0 8086:9c3a:0780 8 Series HECI #0 00:1b. Note that OpenNebula will only deploy the VM in a host with the available PCI device.2 8086:9c03:0106 8 Series SATA Controller 1 [AHCI mode] 00:1f. These are VENDOR:DEVICE:CLASS.4. DEVICE and CLASS.::0106' # all Intel VGA cards and any SATA controller 7.

Sunstone In Sunstone the information is displayed in the PCI tab: To add a PCI device to a template.2 To make use of one of the PCI devices in a VM a new option can be added selecting which device to use. to get any PCI Express Root Ports this can be added to a VM tmplate: PCI = [ CLASS = "0604" ] More than one PCI options can be added to attach more than one PCI device to the VM. DEVICE = "0a0c". CLASS = "0403" ] The device can be also specified without all the type values. PCI Passthrough 114 .0. Release 5. For example this will ask for a Haswell-ULT HD Audio Controller: PCI = [ VENDOR = "8086". OpenNebula 5. select the Other tab: 7.4.0 Deployment guide. For example.

PCI Passthrough 115 . • VLAN_ID: If present. When defining a Network that will be used for PCI passthrough nics. In any case. OpenNebula 5. The will be mapped to nics in the order they appear.2 7. NETWORK_UNAME="oneadmin".5 Usage as Network Interfaces It is possible use a PCI device as a NIC interface directly in OpenNebula. it will be treated as a NIC and OpenNebula will assign a MAC address. assuming a /24 netmask. regardless if they’re NICs of PCIs. CLASS="0200". to the PCI device. VENDOR="8086" ] Note that the order of appearence of the PCI elements and NIC elements in the template is relevant. it will create a tagged interface and assign the IPs to the tagged interface. and it will be ignored. In order to do so you will need to follow the configuration steps mentioned in this guide. • IP: It will assign an IPv4 address to the interface. DEVICE="10d3".0 Deployment guide. TYPE="NIC".4. a VLAN_ID. assuming a /128 netmask.1Q you can also leave PHYDEV blank. For 802. CLI When a PCI in a template contains the attribute TYPE="NIC". The context packages support the configuration of the following attributes: • MAC: It will change the mac address of the corresponding network interface to the MAC assigned by OpenNeb- ula. This is an example of the PCI section of an interface that will be treated as a NIC: PCI=[ NETWORK="passthrough". namely changing the device driver. Release 5.0. type any random value into the BRIDGE field. 7.4. an IP. etc. please use either the dummy network driver or the 802.1Q if you are using VLAN. • IPV6: It will assign an IPv6 address to the interface.

PCI Passthrough 116 . OpenNebula 5. Use the rest of the dialog as usual by selecting a network from the table.0 Deployment guide. under advanced options check the PCI Passthrough option and fill in the PCI address.2 Sunstone In the Network tab.0. Release 5.4. 7.

or when disks are attached or snapshotted.1. or cloned to/from the Images datastore when the VMs are deployed or terminated. stores the images repository. A Datastore is any storage medium to store disk images. ram-disks or context files.1 Overview 8. • The Files & Kernels Datastore to store plain files and not disk images. CHAPTER EIGHT OPEN CLOUD STORAGE SETUP 8. The plain files can be used as kernels. Disk are moved.1 Datastore Types OpenNebula storage is structured around the Datastore concept. See details here. OpenNebula features three different datastore types: • The Images Datastore. 117 . • The System Datastore holds disk for running virtual machines.

LVM • fs_lvm. to direct attach to the virtual machine existing block devices in the nodes. to access iSCSI devices through the buil-in qemu support. • iSCSI . After that. images are iSCSI targets 8. like shared but specialized for the qcow2 format Ceph • ceph. • Ceph. images are copied using the ssh protocol • qcow2. • Raw Device Mapping.1. Overview 118 . to store images in a file form.2 How Should I Read This Chapter Before reading this chapter make sure you have read the Open Cloud Host chapter. Follow the vCenter Storage section for a similar guide for vCenter. images are existing block devices in the nodes iSCSI libvirt • iscsi. proceed to the specific section for the Datastores you may be interested in.0. The following table summarizes the available transfer modes for each datastore: Datastore Image to System Datastore disk transfers methods Filesystem • shared. to store images in LVM logical volumes. • LVM.0 Deployment guide. After reading this chapter you should read the Open Cloud Networking chapter. images are exported in a shared filesys- tem • ssh. These drivers are specialized pieces of software that perform low-level storage operations. to store images using Ceph block devices. all images are exported in Ceph pools • shared. Disk images are transferred between the Image and System datastores by the transfer manager (TM) drivers.1. images exported in a shared FS but dumped to a LV Raw Devices • dev. Release 5. 8.2 Image Datastores There are different Image Datastores depending on how the images are stored on the underlying storage technology: • Filesystem.1.3 Hypervisor Compatibility This chapter applies only to KVM.Libvirt Datastore. 8. OpenNebula 5. volatile & context disks exported in a shared FS.

1 Datastore Layout Images are saved into the corresponding datastore directory (/var/lib/one/datastores/<DATASTORE ID>). Usually it is a good idea to have multiple filesystem datastores to: • Balancing I/O operations between storage servers • Use different datastores for different cluster hosts • Apply different transfer modes to different images • Different SLA policies (e. Typically this is achieved through a distributed FS like NFS.05a38ae85311b9dbb4eb15a2010f11ce |-.d0e0df1fb8cfa88311ea54dfbcfc4b0c Note: The canonical path for /var/lib/one/datastores can be changed in oned.0 Deployment guide. These directories contain the VM disks and additional files. 8.2 Filesystem Datastore The Filesystem Datastore lets you store VM images in a file form.0 | | `-. backup) can be applied to different VM types or users • Easily add new storage to the cloud The Filesystem datastore can be used with three different transfer modes.disk.0 `-.disk.0/ | |-. for each running virtual machine there is a directory (named after the VM ID) in the corresponding System Datastore. OpenNebula 5.g. Release 5. images are copied using the ssh protocol • qcow2.2. Also.conf with the DATASTORE_LOCATION configuration attribute Shared & Qcow2 Transfer Modes The shared transfer driver assumes that the datastore is mounted in all the hosts of the cluster. like shared but specialized for the qcow2 format 8.0. described below: • shared.1 | |-. For example.disk. and VM 7 stopped) running from System Datastore 0 would present the following layout: /var/lib/one/datastores |-. easily backup images.7/ | |-.2bbec245b382fd833be35b0b0683ed09 `-. images are exported in a shared filesystem • ssh. a system with an Image Datastore (1) with three images and 3 Virtual Machines (VM 0 and 2 running.1 |-.disk.0 | `-. The use of file-based disk images presents several benefits over device backed disks (e.g. checkpoint or snapshots.2 8. GlusterFS or Lustre.checkpoint | `-.2/ | | `-. e.0/ | | |-.2.g. Filesystem Datastore 119 . or use of shared FS) although it may less performing in some cases.

These file operations are always performed remotely on the target host.0. Release 5. so the actual I/O bandwidth is balanced • Using an ssh System Datastore instead.i files) are copied or linked in the corresponding directory of the system datastore. its disks (the disk. The ssh transfer driver uses the hosts’ local storage to place the images of running Virtual Machines. Usually this limitation may be overcome by: • Using different file-system servers for the images datastores. 8. which in turn can be a very resource demanding operation. All the operations are then performed locally but images have to be copied always to the hosts. Also this driver prevents the use of live-migrations between hosts. the images are copied locally to each host • Tuning or improving the file-system servers SSH Transfer Mode In this case the System Datastore is distributed among the hosts. Filesystem Datastore 120 .0 Deployment guide.2. but it can also become a bottleneck in your infrastructure and degrade your Virtual Machines performance if the virtualized services perform disk-intensive workloads. OpenNebula 5. This transfer mode usually reduces VM deployment times and enables live-migration.2 When a VM is created.

Shared & Qcow2 Transfer Modes Simply mount the Image Datastore directory in the front-end in /var/lib/one/datastores/<datastore_id>. Note that if all the datastores are of the same type you can mount the whole /var/lib/one/datastores direc- tory.2. In case the files must be read by root the option no_root_squash must be added. will hold temporary disks and files for VMs stopped and undeployed. Note that /var/lib/one/datastores can be mounted from any NAS/SAN server in your network. to store the images. intr. 8.2 Frontend Setup The Frontend needs to prepare the storage area for: • The Image Datastores. OpenNebula 5. Release 5.2.0 Deployment guide. Filesystem Datastore 121 . SSH Transfer Mode Simply make sure that there is enough space under /var/lib/one/datastores to store Images and the disks of the stopped and undeployed virtual machines. wsize=32768. With the documented configuration of libvirt/kvm the image files are accessed as oneadmin user.0. Note: NFS volumes mount tips. The following options are recomended to mount a NFS shares:soft.2 8. rsize=32768. • The System Datastores. Warning: The frontend only needs to mount the Image Datastores and not the System Datastores.

txt NAME = nfs_system TM_MAD = shared TYPE = SYSTEM_DS $ onedatastore create systemds.2 8.0.txt ID: 101 Create an Image Datastore In the same way. for example to create a System Datastore using the shared mode simply: $ cat systemds.2. OpenNebula 5. to create an Image Datastore you need to set: Attribute Description NAME The name of the datastore DS_MAD fs TM_MAD shared for shared transfer mode qcow2 for qcow2 transfer mode ssh for ssh transfer mode For example.2. the following illustrates the creation of a filesystem datastore using the shared transfer drivers. Release 5. 8. SSH Transfer Mode Just make sure that there is enough space under /var/lib/one/datastores to store the disks of running VMs on that host. Filesystem Datastore 122 .4 OpenNebula Configuration Once the Filesystem storage is setup. simply mount in each node the datastore directories in /var/lib/one/datastores/<datastore_id>.2.3 Node Setup Shared & Qcow2 Transfer Modes The configuration is the same as for the Frontend above. the OpenNebula configuration comprises two steps: • Create a System Datastore • Create an Image Datastore Create a System Datastore To create a new System Datastore you need to specify its type as system datastore and transfer mode: Attribute Description NAME The name of the datastore TYPE SYSTEM_DS TM_MAD shared for shared transfer mode qcow2 for qcow2 transfer mode ssh for ssh transfer mode This can be done either in Sunstone or through the CLI. 8.0 Deployment guide.

Custom options can be sent to qemu-img clone action through the variable QCOW2_OPTIONS in /var/lib/one/remotes/tm/tmrc.conf ID: 100 Also note that there are additional attributes that can be set. 8. Addtional Configuration The qcow2 drivers are a specialization of the shared drivers to work with the qcow2 format for disk images.2 $ cat ds.0. 8.3 Ceph Datastore The Ceph datastore driver provides OpenNebula users with the possibility of using Ceph block devices as their Virtual Images. Warning: This driver requires that the OpenNebula nodes using the Ceph driver must be Ceph clients of a running Ceph cluster. otherwise new snapshots are created in the form one-<IMAGE ID>-<VM ID>-<DISK ID>. Images are created and through the qemu-img command using the original image as backing file. Warning: Be sure to use the same TM_MAD for both the System and Image datastore. Virtual machines will use these rbd volumes for its disks if the Images are persistent. OpenNebula 5.1 Datastore Layout Images and virtual machine disks are stored in the same Ceph pool.conf NAME = nfs_images DS_MAD = fs TM_MAD = shared $ onedatastore create ds. Each Image is named one-<IMAGE ID> in the pool. Release 5.3. The pool with one Image (ID 0) and two Virtual Machines 14 and 15 using this Image as virtual disk 0 would be similar to: $ rbd ls -l -p one --id libvirt NAME SIZE PARENT FMT PROT LOCK one-0 10240M 2 one-0@snap 10240M 2 yes one-0-14-0 10240M one/one-0@snap 2 one-0-15-0 10240M one/one-0@snap 2 Note: In this case context disk and auxiliar files (deployment description and chekpoints) are stored locally in the nodes. Ceph Datastore 123 . For example. 8.0 Deployment guide. consider a system using an Image and System Datastore backed by a Ceph pool named one. check the datastore template attributes.3. More information in Ceph documentation.

0. get a copy of the key of this user to distribute it later to the OpenNebula nodes. Ceph Datastore 124 . so hostname and port doesn’t need to be specified explicitly in any Ceph command. $ scp ceph. Also.3. 8.client. Additionally you need to: • Create a pool for the OpenNebula datastores. osds) with Open- Nebula nodes or front-end 8.libvirt.client.keyring root@node:/etc/ceph $ scp client.key $ ceph auth get client.1 metadata.2 Ceph Cluster Setup This guide assumes that you already have a functional Ceph cluster in place.4 Node Setup In order to use the Ceph cluster the nodes needs to be configured as follows: • The ceph client tools must be available in the node • The mon daemon must be defined in the ceph. Check that ceph.libvirt.conf includes: [global] rbd_default_format = 2 • Pick a set of client nodes of the cluster to act as storage bridges.keyring) to the nodes under /etc/ceph.libvirt.libvirt -o ceph. it will access the Ceph cluster through the storage bridges. Write down the name of the pool to include it in the datastore definitions.libvirt | tee client.6 one.2 8.key oneadmin@node: 8.libvirt. Note: For production environments it is recommended to not co-allocate ceph services (monitor. These nodes must have qemu-img command installed.libvirt mon 'allow r' osd \ 'allow class-read object_prefix rbd_children.key) to the oneadmin home. • Define a Ceph user to access the datastore pool. allow rwx pool=one' $ ceph auth get-key client.3.2 rbd.libvirt.3.3 Frontend Setup The Frontend does not need any specific Ceph setup.3. For example.libvirt. and the user key (client. this user will be also used by libvirt to access the disk images. create a user libvirt: $ ceph auth get-or-create client.client. OpenNebula 5.0 Deployment guide.keyring • Altough RDB format 1 is supported it is strongly recommended to use Format 2. Release 5. • Copy the Ceph user keyring (ceph. $ ceph osd pool create one 128 $ ceph osd lspools 0 data. These nodes will be used to import images into the Ceph Cluster from OpenNebula.conf for all the nodes.

xml <<EOF <secret ephemeral='no' private='no'> <uuid>$UUID</uuid> <usage type='ceph'> <name>client. Write down the UUID for later use. Note: You may add addtional Image and System Datastores pointing to other pools with diffirent allocation/replication policies in Ceph. ˓→libvirt. OpenNebula 5. 8.0. deployment and checkpoint files are created at the nodes under /var/lib/one/datastores/. YES TM_MAD ceph YES CEPH_CONF Non default ceph configuration file if needed. This requires access to the ceph user keyring.3. Ceph Datastore 125 .key • The oneadmin account needs to access the Ceph Cluster using the libvirt Ceph user defined above. Test that Ceph client is properly configured in the node. Both datastores will share the same configuration parameters and Ceph pool.2 • Generate a secret for the Ceph user and copy it to the nodes under oneadmin home. Each Image/System Datastore pair needs to define the same following attributes: Attribute Description Mandatory POOL_NAME The Ceph pool name YES CEPH_USER The Ceph user name.5 OpenNebula Configuration To use your Ceph cluster with OpenNebula you need to define a System and Image datastores.libvirt.0 Deployment guide. used by libvirt and rbd commands. $ UUID=`uuidgen`. • Ancillary virtual machine files like context disks.xml $ virsh -c qemu:///system secret-set-value --secret $UUID --base64 $(cat client.xml oneadmin@node: • Define the a libvirt secret and remove key files in the nodes: $ virsh -c qemu:///system secret-define secret. NO 8.key) $ rm client. $ ssh oneadmin@node $ rbd ls -p one --id libvirt You can read more information about this in the Ceph guide Using libvirt with Ceph. echo $UUID c7bdeabf-5f2a-4094-9413-58c6a9590980 $ cat > secret.3. NO RBD_FORMAT By default RBD Format 2 will be used. Release 5. make sure that enough storage for these files is provisioned in the nodes.libvirt secret</name> </usage> </secret> EOF $ scp secret.

OpenNebula 5.2 Create a System Datastore Create a System Datastore in Sunstone or through the CLI. Ceph Datastore 126 .conf NAME = "cephds" DS_MAD = ceph TM_MAD = ceph DISK_TYPE = RBD POOL_NAME = one CEPH_HOST = host1 host2:port2 CEPH_USER = libvirt CEPH_SECRET = "6f88b54b-5dae-41fe-a43e-b2763f601cfc" BRIDGE_LIST = cephfrontend > onedatastore create ds.txt ID: 101 Note: Ceph can also work with a System Datastore of type Filesystem in a shared transfer mode. CEPH_SECRETThe UUID of the libvirt secret. Create an Image Datastore Apart from the previous attributes. the following can be set for an Image Datastore: Attribute Description Manda- tory DS_MAD ceph YES DISK_TYPE RBD YES BRIDGE_LISTList of storage bridges to access the Ceph cluster YES CEPH_HOST Space-separated list of Ceph monitors.conf ID: 101 8.3.txt NAME = ceph_system TM_MAD = ceph TYPE = SYSTEM_DS POOL_NAME = one CEPH_USER = libvirt $ onedatastore create systemds. Example: host1 host2:port2 host3 YES host4:port4. YES STAGING_DIRDefault path for image operations in the bridges NO An example of datastore: > cat ds. that need to be the same as the associated System Datastore.0 Deployment guide. Release 5. Note that apart from the Ceph Cluster you need to setup a shared FS. for example: $ cat systemds. as described in the Filesystem Datastore section. In that case volatile and swap disks are created as plain files in the System Datastore.0.

The same LUN can be exported to all the hosts. Virtual Machines will be able to run directly from the SAN.4.2.20g lv-one-9-0 vg-one-0 -wi------. The nodes have configured a shared LUN and created a volume group named vg-one-0. LVM Datastore 127 . OpenNebula 5.20g 8. Release 5. with ID 0. This is the recommended driver to be used when a high-end SAN is available.2. Note: The LVM datastore does not need CLVM configured in your cluster. consider a system with two Virtual Machines (9 and 10) using a disk. but they will be dumped into a Logical Volumes (LV) upon virtual machine creation. the layout of the datastore would be: # lvs LV VG Attr LSize Pool Origin Data% Meta% Move lv-one-10-0 vg-one-0 -wi------.0 Deployment guide. The virtual machines will run from Logical Volumes in the node.2 Addtional Configuration Default values for the Ceph drivers can be set in /var/lib/one/remotes/datastore/ceph/ceph. For example.4. The drivers refresh LVM meta-data each time an image is needed in another host.conf: • POOL_NAME: Default volume group • STAGING_DIR: Default path for image operations in the storage bridges • RBD_FORMAT: Default format for RBD volumes. 8.0. running in a LVM Datastore. This reduces the overhead of having a file-system in place and thus it may increase I/O performance.1 Datastore Layout Images are stored as regular files (under the usual path: /var/lib/one/datastores/<id>) in the Image Data- store. 8.4 LVM Datastore The LVM datastore driver provides OpenNebula with the possibility of using LVM volumes instead of plain files to hold the Virtual Images.

8.4.2 Frontend Setup

No additional configuration is needed.

8.4.3 Node Setup

Nodes need to meet the following requirements:

• LVM2 must be available in the hosts.
• lvmetad must be disabled. Set this parameter in /etc/lvm/lvm.conf: use_lvmetad = 0, and disable the lvm2-lvmetad.service if running.
• oneadmin needs to belong to the disk group.
• All the nodes need to have access to the same LUNs.
• An LVM VG needs to be created in the shared LUNs for each datastore, following the name vg-one-<system_ds_id>. This only needs to be done in one node.
• Virtual Machine disks are symbolic links to the block devices. However, additional VM files like checkpoints or deployment files are stored under /var/lib/one/datastores/<id>. Be sure that enough local space is present.

8.4.4 OpenNebula Configuration

Once the storage is setup, the OpenNebula configuration comprises two steps:

• Create a System Datastore
• Create an Image Datastore

Create a System Datastore

LVM System Datastores need to be created with the following values:

Attribute  Description
NAME       The name of the Datastore
TM_MAD     fs_lvm
TYPE       SYSTEM_DS

For example:

> cat ds.conf
NAME   = lvm_system
TM_MAD = fs_lvm
TYPE   = SYSTEM_DS

> onedatastore create ds.conf
ID: 100
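As a sketch of the node-side preparation, the volume group for the System Datastore created above (ID 100) could be built on the shared LUN like this. The device name /dev/sdb is an assumption; use whatever device the SAN LUN appears as on your nodes, and run it on one node only.

# Run on a single node that sees the shared LUN (device name is hypothetical)
$ sudo pvcreate /dev/sdb
$ sudo vgcreate vg-one-100 /dev/sdb      # vg-one-<system_ds_id>
$ sudo vgs vg-one-100                    # verify the VG is visible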

Create an Image Datastore

To create an Image Datastore you just need to define the name, and set the following:

Attribute  Description
NAME       The name of the datastore
TYPE       IMAGE_DS
DS_MAD     fs
TM_MAD     fs_lvm
DISK_TYPE  BLOCK

For example, the following illustrates the creation of an LVM datastore using a configuration file. In this case we will use the host host01 as one of our OpenNebula LVM-enabled hosts.

> cat ds.conf
NAME      = production
DS_MAD    = fs
TM_MAD    = fs_lvm
DISK_TYPE = "BLOCK"
TYPE      = IMAGE_DS
SAFE_DIRS = "/var/tmp /tmp"

> onedatastore create ds.conf
ID: 101

8.5 Raw Device Mapping (RDM) Datastore

The RDM Datastore is an Image Datastore that enables raw access to node block devices.

Warning: The datastore should only be usable by the administrators. Letting users create images in this datastore will cause security problems; for example, registering an image for /dev/sda and reading the host filesystem.

8.5.1 Datastore Layout

The RDM Datastore is used to register already existent block devices in the nodes. The devices should be already setup and available, and VMs using these devices must be fixed to run in the nodes ready for them. Additional virtual machine files, like deployment files or volatile disks, are created as regular files.

8.5.2 Frontend Setup

No additional setup is required.

8.5.3 Node Setup

The devices you want to attach to a VM should be accessible by the hypervisor. As KVM usually runs as oneadmin, make sure this user is in a group with access to the disk (like disk) and has read and write permissions for the group.
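A minimal check of those permissions on a node could look like the following; the device /dev/sdb is again just an example.

# Add oneadmin to the group that owns the block device (commonly "disk")
$ sudo usermod -a -G disk oneadmin

# The device should be group readable and writable
$ ls -l /dev/sdb
brw-rw---- 1 root disk 8, 16 Jul 21 10:00 /dev/sdb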

8.5.4 OpenNebula Configuration

Once the storage is setup, the OpenNebula configuration comprises two steps:

• Create a System Datastore
• Create an Image Datastore

Create a System Datastore

The RDM Datastore can work with the following System Datastores:

• Filesystem, shared transfer mode
• Filesystem, ssh transfer mode

Please refer to the Filesystem Datastore section for more details. Note that the System Datastore is only used for volatile disks and context devices.

Create an Image Datastore

To create an Image Datastore you just need to define the name, and set the following:

Attribute  Description
NAME       The name of the datastore
TYPE       IMAGE_DS
DS_MAD     dev
TM_MAD     dev
DISK_TYPE  BLOCK

An example of datastore:

> cat rdm.conf
NAME      = rdm_datastore
TYPE      = "IMAGE_DS"
DS_MAD    = "dev"
TM_MAD    = "dev"
DISK_TYPE = "BLOCK"

> onedatastore create rdm.conf
ID: 101

8.5.5 Datastore Usage

New images can be added as any other image, specifying the path. If you are using the CLI do not use the shorthand parameters, as the CLI checks if the file exists and the device most probably won't exist in the frontend. As an example, here is an image template to add a node disk /dev/sdb:

NAME       = scsi_device
PATH       = /dev/sdb
PERSISTENT = YES

Note: As this datastore is just a container for existing devices, images do not take any size from it. All devices registered will render a size of 0 and the overall devices datastore will show up with 1MB of available space.
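Assuming the template above is saved in a file, it could be registered in the RDM Image Datastore created before (the file and datastore names simply follow the example):

$ cat scsi_device.tpl
NAME       = scsi_device
PATH       = /dev/sdb
PERSISTENT = YES

$ oneimage create scsi_device.tpl --datastore rdm_datastore
ID: 2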

8.6 iSCSI - Libvirt Datastore

This datastore is used to register already existing iSCSI volumes available to the hypervisor nodes.

Warning: The datastore should only be usable by the administrators. Letting users create images in this datastore can cause security problems.

8.6.1 Frontend Setup

No additional configuration is needed.

8.6.2 Node Setup

The nodes need to meet the following requirements:

• The devices you want to attach to a VM should be accessible by the hypervisor.
• Qemu needs to be compiled with Libiscsi support.

iSCSI CHAP Authentication

In order to use CHAP authentication, you will need to create a libvirt secret in all the hypervisors. Follow the Libvirt Secret XML format guide to register the secret. Take this into consideration:

• The incominguser field on the iSCSI authentication file should match the Datastore's ISCSI_USER parameter.
• The <target> field in the secret XML document will contain the ISCSI_USAGE parameter.
• Do this in all the hypervisors.

8.6.3 OpenNebula Configuration

Once the storage is setup, the OpenNebula configuration comprises two steps:

• Create a System Datastore
• Create an Image Datastore

Create a System Datastore

The iSCSI - Libvirt Datastore can work with the following System Datastores:

• Filesystem, shared transfer mode
• Filesystem, ssh transfer mode

Please refer to the Filesystem Datastore section for more details. Note that the System Datastore is only used for volatile disks and context devices.

Create an Image Datastore

To create an Image Datastore you just need to define the name, and set the following:

Attribute   Description
NAME        The name of the datastore
TYPE        IMAGE_DS
DS_MAD      iscsi
TM_MAD      iscsi
DISK_TYPE   ISCSI
ISCSI_HOST  iSCSI Host. Example: host or host:port

If you need to use CHAP authentication (optional) add the following attributes to the datastore:

Attribute    Description
ISCSI_USAGE  Usage of the secret with the CHAP Auth string
ISCSI_USER   User for the iSCSI CHAP authentication

An example of datastore:

> cat iscsi.ds
NAME        = iscsi
DISK_TYPE   = "ISCSI"
DS_MAD      = "iscsi"
TM_MAD      = "iscsi"
ISCSI_HOST  = "the_iscsi_host"
ISCSI_USER  = "the_iscsi_user"
ISCSI_USAGE = "the_iscsi_usage"

> onedatastore create iscsi.ds
ID: 101

Warning: Images created in this datastore should be persistent. Making the images non persistent allows more than one VM to use the device, which will probably cause problems and data corruption.

8.6.4 Datastore Usage

New images can be added as any other image, specifying the path. If you are using the CLI do not use the shorthand parameters, as the CLI checks if the file exists and the device most probably won't exist in the frontend.

As an example, here is an image template to add a node disk iqn.1992-01.com.example:storage:diskarrays-sn-a8675309:

NAME       = iscsi_device
PATH       = iqn.1992-01.com.example:storage:diskarrays-sn-a8675309
PERSISTENT = YES

Warning: As this datastore is just a container for existing devices, images do not take any size from it. All devices registered will render a size of 0 and the overall devices datastore will show up with 1MB of available space.
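To tie the pieces together: the ISCSI_USAGE value of the datastore example above must match the <target> of a libvirt secret registered on every hypervisor, as described in the Node Setup. The following is only a sketch; the CHAP password is made up and the UUID is whatever virsh returns when the secret is defined.

$ cat iscsi-secret.xml
<secret ephemeral='no' private='yes'>
  <usage type='iscsi'>
    <target>the_iscsi_usage</target>
  </usage>
</secret>

$ virsh -c qemu:///system secret-define iscsi-secret.xml
$ virsh -c qemu:///system secret-set-value --secret <uuid-returned-above> \
    --base64 $(echo -n "the_chap_password" | base64)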

Note: You may override any of the ISCSI_HOST, ISCSI_USER, ISCSI_USAGE and ISCSI_IQN parameters in the image template. These overridden parameters will come into effect for new Virtual Machines.

Here is an example of an iSCSI LUN template that uses the iSCSI transfer manager:

oneadmin@onedv:~/exampletemplates$ more iscsiimage.tpl
NAME=iscsi_device_with_lun
PATH=iqn.2014.01.192.168.50.61:test:7cd2cc1e/0
ISCSI_HOST=192.168.50.61
PERSISTENT=YES

Note the explicit "/0" at the end of the IQN target path. This is the iSCSI LUN ID.

8.7 The Kernels & Files Datastore

The Files Datastore lets you store plain files to be used as VM kernels, ramdisks or context files. The Files Datastore does not expose any special storage mechanism but a simple and secure way to use files within VM templates. There is a Files Datastore (datastore ID: 2) ready to be used in OpenNebula.

8.7.1 Requirements

There are no special requirements or software dependencies to use the Files Datastore. The recommended drivers make use of standard filesystem utils (cp, ln, mv, tar, mkfs...) that should be installed in your system.

8.7.2 Configuration

Most of the configuration considerations used for disk images datastores do apply to the Files Datastore (e.g. driver setup, cluster assignment, datastore management...).

The specific attributes for this datastore driver are listed in the following table; you will also need to complete them with the common datastore attributes:

Attribute  Description
TYPE       Use FILE_DS to setup a Files datastore
DS_MAD     The DS type, use fs to use the file-based drivers
TM_MAD     Transfer drivers for the datastore, use ssh to transfer the files

For example, the following illustrates the creation of a Files Datastore:

> cat kernels_ds.conf
NAME      = kernels
DS_MAD    = fs
TM_MAD    = ssh
TYPE      = FILE_DS
SAFE_DIRS = /var/tmp/files

> onedatastore create kernels_ds.conf
ID: 100

> onedatastore list
  ID NAME            CLUSTER  IMAGES TYPE DS      TM

   0 system          -             0 sys  -       dummy
   1 default         -             0 img  dummy   dummy
   2 files           -             0 fil  fs      ssh
 100 kernels         -             0 fil  fs      ssh

The DS and TM MAD can be changed later using the onedatastore update command. You can check more details of the datastore by issuing the onedatastore show command.

8.7.3 Host Configuration

The recommended ssh driver for the File Datastore does not need any special configuration for the hosts. Just make sure that there is enough space under $DATASTORE_LOCATION to hold the VM files in the front-end and hosts.

For more details refer to the Filesystem Datastore guide, as the same configuration guidelines apply.
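To round off the Files Datastore example above, a kernel file can be registered with oneimage by giving it a file type; the kernel path and name below are illustrative:

$ oneimage create --name vmlinuz-4.4 --path /boot/vmlinuz-4.4.0 \
    --type KERNEL --datastore kernels
ID: 8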

CHAPTER NINE

OPEN CLOUD NETWORKING SETUP

9.1 Overview

When a new Virtual Machine is launched, OpenNebula will connect its network interfaces (defined by the NIC attribute) to hypervisor physical devices as defined in the Virtual Network. This will allow the VM to have access to different networks, public or private.

OpenNebula supports four different networking modes:

• Bridged. The Virtual Machine is directly attached to an existing bridge in the hypervisor. This mode can be configured to use security groups and network isolation.
• VLAN. Virtual Networks are implemented through 802.1Q VLAN tagging.
• VXLAN. Virtual Networks implement VLANs using the VXLAN protocol, which relies on a UDP encapsulation and IP multicast.
• Open vSwitch. Similar to the VLAN mode but using an Open vSwitch instead of a Linux bridge.

Note: Security Groups are not supported by the Open vSwitch mode.

When you create a new network you will need to add the attribute VN_MAD to the template, specifying which of the above networking modes you want to use.

9.1.1 How Should I Read This Chapter

Before reading this chapter make sure you have read the Open Cloud Storage chapter.

Start by reading the common Node Setup section to learn how to configure your hosts, and then proceed to the specific section for the networking mode that you are interested in.

After reading this chapter you can complete your OpenNebula installation by optionally enabling an External Authentication or configuring Sunstone. Otherwise you are ready to Operate your Cloud.

9.1.2 Hypervisor Compatibility

This chapter applies only to KVM.

9.2 Node Setup

This guide includes specific node setup steps to enable each network mode. You only need to apply the section corresponding to the selected mode.

9.2.1 Bridged Networking Mode

Requirements

• The OpenNebula node packages have been installed, see the KVM node installation section for more details.
• By default, network isolation is provided through ebtables; this package needs to be installed in the nodes.

Configuration

• Create a Linux bridge for each network that will be exposed to Virtual Machines. Use the same name in all the nodes.
• Add the physical network interface to the bridge.

For example, a node with two networks, one for public IP addresses (attached to eth0) and another one for private traffic (NIC eth1), should have two bridges:

$ brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.001e682f02ac       no              eth0
br1             8000.001e682f02ad       no              eth1

Note: It is recommended that this configuration is made persistent. Please refer to the network configuration guide of your system to do so.

9.2.2 VLAN Networking Mode

Requirements

• The OpenNebula node packages have been installed, see the KVM node installation section for more details.
• The 8021q module must be loaded in the kernel.
• A network switch capable of forwarding VLAN tagged traffic. The physical switch ports should be VLAN trunks.

Configuration

No additional configuration is needed.
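For reference, the bridges in the example above could be created manually with brctl (or the equivalent ip link commands); the interface names simply follow the example and are not mandatory:

# Create the bridges and enslave the physical interfaces (example names)
$ sudo brctl addbr br0
$ sudo brctl addif br0 eth0
$ sudo brctl addbr br1
$ sudo brctl addif br1 eth1
$ sudo ip link set br0 up
$ sudo ip link set br1 up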

9.2.3 VXLAN Networking Mode

Requirements

• The OpenNebula node packages have been installed, see the KVM node installation section for more details.
• The node must run a Linux kernel (>3.7.0) that natively supports the VXLAN protocol, and the associated iproute2 package.
• When all the nodes are connected to the same broadcasting domain be sure that the multicast traffic is not filtered by any iptables rule in the nodes. Note that if the multicast traffic needs to traverse routers a multicast protocol like IGMP needs to be configured in your network.

Configuration

No additional configuration is needed.

9.2.4 Open vSwitch Networking Mode

Requirements

• The OpenNebula node packages have been installed, see the KVM node installation section for more details.
• You need to install Open vSwitch on each node. Please refer to the Open vSwitch documentation to do so.

Configuration

• Create an Open vSwitch for each network that will be exposed to Virtual Machines. Use the same name in all the nodes.
• Add the physical network interface to the Open vSwitch.

For example, a node that forwards Virtual Networks traffic through the enp0s8 network interface should create an openvswitch like:

# ovs-vsctl show
c61ba96f-fc11-4db9-9636-408e763f529e
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "enp0s8"
            Interface "enp0s8"

Note: It is recommended that this configuration is made persistent. Please refer to the network configuration guide of your system to do so.
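The Open vSwitch in the example can be created with ovs-vsctl; the bridge and interface names simply follow the example above:

# Create the switch and attach the physical interface (example names)
$ sudo ovs-vsctl add-br ovsbr0
$ sudo ovs-vsctl add-port ovsbr0 enp0s8
$ sudo ovs-vsctl show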

9.3 Bridged Networking

This guide describes how to deploy Bridged networks. In this mode virtual machine traffic is directly bridged through an existing Linux bridge in the nodes. Bridged networks can operate on three different modes depending on the additional traffic filtering made by OpenNebula:

• Dummy: no filtering is made.
• Security Group: iptables rules are installed to implement security group rules.
• ebtables VLAN: same as above plus additional ebtables rules to isolate (L2) each Virtual Network.

9.3.1 Considerations & Limitations

The following needs to be considered regarding traffic isolation:

• In the Dummy and Security Group modes you can add tagged network interfaces to achieve network isolation. This is the recommended deployment strategy in production environments in this mode.
• The ebtables VLAN mode is targeted to small environments without proper hardware support to implement VLANs. Note that it is limited to /24 networks, and that IP addresses cannot overlap between Virtual Networks. This mode is only recommended for testing purposes.

9.3.2 OpenNebula Configuration

No specific configuration is required for bridged networking.

9.3.3 Defining a Bridged Network

To create a Bridged network include the following information:

Attribute  Value                                              Mandatory
VN_MAD     • dummy for the Dummy Bridged mode                 YES
           • fw for Bridged with Security Groups
           • ebtables for Bridged with ebtables isolation
BRIDGE     Name of the Linux bridge in the nodes              YES

The following example defines a Bridged network using the Security Groups mode:

NAME    = "bridged_net"
VN_MAD  = "fw"
BRIDGE  = vbr1
...

9.3.4 ebtables VLAN Mode: default rules

This section lists the ebtables rules that are created, in case you need to debug your setup:

# Drop packets that don't match the network's MAC Address
-s ! <mac_address>/ff:ff:ff:ff:ff:0 -o <tap_device> -j DROP
# Prevent MAC spoofing
-s ! <mac_address> -i <tap_device> -j DROP
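As a hedged illustration, the template above could be completed with an address range and registered with onevnet; the IP range is made up:

$ cat bridged_net.tpl
NAME    = "bridged_net"
VN_MAD  = "fw"
BRIDGE  = vbr1
AR      = [ TYPE = "IP4", IP = "10.0.0.10", SIZE = "100" ]

$ onevnet create bridged_net.tpl
ID: 0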

9.4 802.1Q VLAN Networks

This guide describes how to enable network isolation provided through host-managed VLANs. This driver will create a bridge for each OpenNebula Virtual Network and attach a VLAN tagged network interface to the bridge. This mechanism is compliant with IEEE 802.1Q.

The VLAN ID will be the same for every interface in a given network, calculated automatically by OpenNebula. It may also be forced by specifying a VLAN_ID parameter in the Virtual Network template.

9.4.1 OpenNebula Configuration

The VLAN_ID is calculated according to this configuration option of oned.conf:

# VLAN_IDS: VLAN ID pool for the automatic VLAN_ID assignment. This pool
# is for 802.1Q networks (Open vSwitch and 802.1Q drivers). The driver
# will try first to allocate VLAN_IDS[START] + VNET_ID
#   start: First VLAN_ID to use
#   reserved: Comma separated list of VLAN_IDs
VLAN_IDS = [
    START    = "2",
    RESERVED = "0, 1, 4095"
]

By modifying that parameter you can reserve some VLANs so they aren't assigned to a Virtual Network. You can also define the first VLAN_ID. When a new isolated network is created, OpenNebula will find a free VLAN_ID from the VLAN pool. This pool is global, and it's also shared with the Open vSwitch network mode.

9.4.2 Defining a 802.1Q Network

To create a 802.1Q network include the following information:

Attribute  Value                                                                    Mandatory
VN_MAD     802.1Q                                                                   YES
PHYDEV     Name of the physical network device that will be attached to the bridge  YES
BRIDGE     Name of the Linux bridge, defaults to onebr<net_id> or onebr.<vlan_id>   NO
VLAN_ID    The VLAN ID, will be generated if not defined                            NO
MTU        The MTU for the tagged interface and bridge                              NO

The following example defines a 802.1Q network:

NAME    = "hmnet"
VN_MAD  = "802.1Q"
PHYDEV  = "eth0"
VLAN_ID = 50        # optional
BRIDGE  = "brhm"    # optional

In this scenario, the driver will check for the existence of the brhm bridge. If it doesn't exist it will be created. eth0 will be tagged (eth0.50) and attached to brhm (unless it's already attached).
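After a VM is deployed in such a network, the result can be checked on the node; these commands only assume the example names above:

# The tagged interface and the bridge created by the driver (example names)
$ ip link show eth0.50
$ brctl show brhm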

9.5 VXLAN Networks

This guide describes how to enable network isolation provided through the VXLAN encapsulation protocol. This driver will create a bridge for each OpenNebula Virtual Network and attach a VXLAN tagged network interface to the bridge.

The VLAN ID will be the same for every interface in a given network, calculated automatically by OpenNebula. It may also be forced by specifying a VLAN_ID parameter in the Virtual Network template.

Additionally each VLAN has an associated multicast address to encapsulate L2 broadcast and multicast traffic. This address is assigned by default to the 239.0.0.0/8 range as defined by RFC 2365 (Administratively Scoped IP Multicast). In particular the multicast address is obtained by adding the VLAN_ID to the 239.0.0.0/8 base address.

9.5.1 Considerations & Limitations

This driver works with the default UDP server port 8472.

VXLAN traffic is forwarded to a physical device; this device can be set to be a VLAN tagged interface, but in that case you must make sure that the tagged interface is manually created first in all the hosts.

9.5.2 OpenNebula Configuration

It is possible to specify the start VLAN ID by configuring /etc/one/oned.conf:

# VXLAN_IDS: Automatic VXLAN Network ID (VNI) assignment. This is used
# for vxlan networks.
#   start: First VNI to use
VXLAN_IDS = [ START = "2" ]

The following configuration attributes can be adjusted in /var/lib/one/remotes/vnm/OpenNebulaNetwork.conf:

Parameter  Description
vxlan_mc   Base multicast address for each VLAN. The multicast address is vxlan_mc + vlan_id
vxlan_ttl  Time To Live (TTL); should be > 1 in routed multicast networks (IGMP)

9.5.3 Defining a VXLAN Network

To create a VXLAN network include the following information:

Attribute  Value                                                                    Mandatory
VN_MAD     vxlan                                                                    YES
PHYDEV     Name of the physical network device that will be attached to the bridge  YES
BRIDGE     Name of the Linux bridge, defaults to onebr<net_id> or onebr.<vlan_id>   NO
VLAN_ID    The VLAN ID, will be generated if not defined                            NO
MTU        The MTU for the tagged interface and bridge                              NO

The following example defines a VXLAN network:

NAME    = "vxlan_net"
VN_MAD  = "vxlan"
PHYDEV  = "eth0"

VLAN_ID = 50          # optional
BRIDGE  = "vxlan50"   # optional
...

In this scenario, the driver will check for the existence of the vxlan50 bridge. If it doesn't exist it will be created. eth0 will be tagged (eth0.50) and attached to vxlan50 (unless it's already attached). Note that eth0 can be a 802.1Q tagged interface if you want to isolate the OpenNebula VXLAN traffic.

9.6 Open vSwitch Networks

This guide describes how to use the Open vSwitch network drivers. They provide network isolation using VLANs by tagging ports and basic network filtering using OpenFlow. Other traffic attributes that may be configured through Open vSwitch are not modified.

The VLAN ID will be the same for every interface in a given network, calculated automatically by OpenNebula. It may also be forced by specifying a VLAN_ID parameter in the Virtual Network template.

Warning: This driver is not compatible with Security Groups.

9.6.1 OpenNebula Configuration

The VLAN_ID is calculated according to this configuration option of oned.conf:

# VLAN_IDS: VLAN ID pool for the automatic VLAN_ID assignment. This pool
# is for 802.1Q networks (Open vSwitch and 802.1Q drivers). The driver
# will try first to allocate VLAN_IDS[START] + VNET_ID
#   start: First VLAN_ID to use
#   reserved: Comma separated list of VLAN_IDs
VLAN_IDS = [
    START    = "2",
    RESERVED = "0, 1, 4095"
]

By modifying that parameter you can reserve some VLANs so they aren't assigned to a Virtual Network. You can also define the first VLAN_ID. When a new isolated network is created, OpenNebula will find a free VLAN_ID from the VLAN pool. This pool is global, and it's also shared with the 802.1Q VLAN network mode.

The following configuration attributes can be adjusted in /var/lib/one/remotes/vnm/OpenNebulaNetwork.conf:

Parameter             Description
arp_cache_poisoning   Enable ARP Cache Poisoning Prevention Rules

Note: Remember to run onehost sync to deploy the file to all the nodes.

9.6.2 Defining an Open vSwitch Network

To create an Open vSwitch network include the following information:

Attribute  Value                                          Mandatory
VN_MAD     ovswitch                                       YES
BRIDGE     Name of the Open vSwitch switch to use         YES
VLAN_ID    The VLAN ID, will be generated if not defined  NO

The following example defines an Open vSwitch network:

NAME    = "ovswitch_net"
VN_MAD  = "ovswitch"
BRIDGE  = vbr1
VLAN_ID = 50   # optional
...

Multiple VLANs (VLAN trunking)

VLAN trunking is also supported by adding the following tag to the NIC element in the VM template or to the virtual network template:

• VLAN_TAGGED_ID: Specify a range of VLANs to tag, for example: 1,10,30,32.

9.6.3 OpenFlow Rules

This section lists the default OpenFlow rules installed in the Open vSwitch.

Mac-spoofing

These rules prevent any traffic from coming out of the port if the MAC address has changed.

in_port=<PORT>,dl_src=<MAC>,priority=40000,actions=normal
in_port=<PORT>,priority=39000,actions=drop

IP hijacking

These rules prevent any traffic from coming out of the port for IPv4 addresses not configured for the VM.

in_port=<PORT>,arp,dl_src=<MAC>,priority=45000,actions=drop
in_port=<PORT>,arp,dl_src=<MAC>,nw_src=<IP>,priority=46000,actions=normal

Black ports (one rule per port)

tcp,dl_dst=<MAC>,tp_dst=<PORT>,actions=drop

ICMP Drop

icmp,dl_dst=<MAC>,actions=drop
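To inspect the rules actually installed on a node, the standard Open vSwitch tooling can be used; the bridge name follows the earlier examples:

# List the OpenFlow rules installed on the switch (example bridge name)
$ sudo ovs-ofctl dump-flows ovsbr0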

CHAPTER TEN

REFERENCES

10.1 Overview

This chapter covers references that apply to the configuration of OpenNebula to interact smoothly and efficiently with other datacenter components, as well as information on where the log files are and how to adjust the verbosity and log subsystem.

10.1.1 How Should I Read This Chapter

The oned.conf file is the main OpenNebula configuration file, and it is essential to tweak the performance and behavior of your OpenNebula installation. In this reference document we describe all the format and options that can be specified in oned.conf.

For database maintenance operations, use the command line tool onedb. It can be used to get information from an OpenNebula database, upgrade it, or fix inconsistency problems.

Read Logging and Debugging for a complete reference on how to use log files.

The section Large Deployments contains more helpful pointers to tune the OpenNebula performance.

10.1.2 Hypervisor Compatibility

This chapter applies to all the hypervisors.

10.2 ONED Configuration

The OpenNebula daemon oned manages the cluster nodes, virtual networks, virtual machines, users, groups and storage datastores. The configuration file for the daemon is called oned.conf and it is placed inside the /etc/one directory.

10.2.1 Daemon Configuration Attributes

• MANAGER_TIMER: Time in seconds the core uses to evaluate periodical functions. MONITORING_INTERVAL cannot have a smaller value than MANAGER_TIMER.
• MONITORING_INTERVAL: Time in seconds between each monitoring cycle.
• MONITORING_THREADS: Max. number of threads used to process monitor messages.
• HOST_PER_INTERVAL: Number of hosts monitored in each interval.

• HOST_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring information. Use 0 to disable HOST monitoring recording.
• VM_INDIVIDUAL_MONITORING: VM monitoring information is obtained along with the host information. For some custom monitor drivers you may need to activate the individual VM monitoring process. Values: YES or NO.
• VM_PER_INTERVAL: Number of VMs monitored in each interval.
• VM_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire monitoring information. Use 0 to disable VM monitoring recording.
• SCRIPTS_REMOTE_DIR: Remote path to store the monitoring and VM management scripts.
• PORT: Port where oned will listen for xml-rpc calls.
• LISTEN_ADDRESS: Host IP to listen on for xml-rpc calls (default: all IPs).
• DB: Vector of configuration attributes for the database back-end.
  – backend: Set to sqlite or mysql. Please visit the MySQL configuration guide for more information.
  – server (MySQL only): Host name or an IP address for the MySQL server.
  – user (MySQL only): MySQL user's login ID.
  – passwd (MySQL only): MySQL user's password.
  – db_name (MySQL only): MySQL database name.
• VNC_PORTS: VNC port pool for automatic VNC port assignment; if possible the port will be set to START + VMID. Refer to the VM template reference for further information:
  – start: first port to assign
  – reserved: comma separated list of reserved ports
• VM_SUBMIT_ON_HOLD: Forces VMs to be created on hold state instead of pending. Values: YES or NO.
• LOG: Configure the logging system
  – SYSTEM: Can be either file (default), syslog or std
  – DEBUG_LEVEL: Sets the level of verbosity of the log messages. Possible values are:

    DEBUG_LEVEL  Meaning
    0            ERROR
    1            WARNING
    2            INFO
    3            DEBUG

Example of this section:

#*******************************************************************************
# Daemon configuration attributes
#*******************************************************************************
LOG = [
  SYSTEM      = "file",
  DEBUG_LEVEL = 3
]

#MANAGER_TIMER = 15

MONITORING_INTERVAL = 60
MONITORING_THREADS  = 50

#HOST_PER_INTERVAL               = 15
#HOST_MONITORING_EXPIRATION_TIME = 43200

#VM_INDIVIDUAL_MONITORING      = "no"
#VM_PER_INTERVAL               = 5
#VM_MONITORING_EXPIRATION_TIME = 14400

SCRIPTS_REMOTE_DIR=/var/tmp/one

PORT = 2633
LISTEN_ADDRESS = "0.0.0.0"

DB = [ BACKEND = "sqlite" ]

# Sample configuration for MySQL
# DB = [ BACKEND = "mysql",
#        SERVER  = "localhost",
#        PORT    = 0,
#        USER    = "oneadmin",
#        PASSWD  = "oneadmin",
#        DB_NAME = "opennebula" ]

VNC_PORTS = [
    START    = 5900
#    RESERVED = "6800, 6801, 9869"
]

#VM_SUBMIT_ON_HOLD = "NO"

10.2.2 Federation Configuration Attributes

Control the federation capabilities of oned. Operation in a federated setup requires a special DB configuration.

• FEDERATION: Federation attributes.
  – MODE: Operation mode of this oned.
    * STANDALONE: not federated. This is the default operational mode.
    * MASTER: this oned is the master zone of the federation.
    * SLAVE: this oned is a slave zone.
• ZONE_ID: The zone ID as returned by the onezone command.
• MASTER_ONED: The xml-rpc endpoint of the master oned, e.g. http://master.one.org:2633/RPC2

#*******************************************************************************
# Federation configuration attributes
#*******************************************************************************
FEDERATION = [
    MODE        = "STANDALONE",
    ZONE_ID     = 0,
    MASTER_ONED = ""
]
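For a slave zone in a federation, those same attributes would be filled in along the following lines; the zone ID and master endpoint are placeholders for your own setup:

FEDERATION = [
    MODE        = "SLAVE",
    ZONE_ID     = 100,
    MASTER_ONED = "http://master.one.org:2633/RPC2"
]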

10.2.3 Default Showback Cost

The following attributes define the default cost for Virtual Machines that don't have a CPU, MEMORY or DISK cost. This is used by the oneshowback calculate method.

#*******************************************************************************
# Default showback cost
#*******************************************************************************
DEFAULT_COST = [
    CPU_COST    = 0,
    MEMORY_COST = 0,
    DISK_COST   = 0
]

10.2.4 XML-RPC Server Configuration

• MAX_CONN: Maximum number of simultaneous TCP connections the server will maintain.
• MAX_CONN_BACKLOG: Maximum number of TCP connections the operating system will accept on the server's behalf without the server accepting them from the operating system.
• KEEPALIVE_TIMEOUT: Maximum time in seconds that the server allows a connection to be open between RPCs.
• KEEPALIVE_MAX_CONN: Maximum number of RPCs that the server will execute on a single connection.
• TIMEOUT: Maximum time in seconds the server will wait for the client to do anything while processing an RPC. This timeout will also be used when proxying calls to the master in a federation.
• RPC_LOG: Create a separate log file for xml-rpc requests, in /var/log/one/one_xmlrpc.log.
• MESSAGE_SIZE: Buffer size in bytes for XML-RPC responses.
• LOG_CALL_FORMAT: Format string to log XML-RPC calls. Interpreted strings:
  – %i – request id
  – %m – method name
  – %u – user id
  – %U – user name
  – %l – param list
  – %p – user password
  – %g – group id
  – %G – group name
  – %a – auth token
  – %% – %

#*******************************************************************************
# XML-RPC server configuration
#*******************************************************************************
#MAX_CONN = 15
#MAX_CONN_BACKLOG = 15

#KEEPALIVE_TIMEOUT = 15
#KEEPALIVE_MAX_CONN = 30
#TIMEOUT = 15
#RPC_LOG = NO
#MESSAGE_SIZE = 1073741824
#LOG_CALL_FORMAT = "Req:%i UID:%u %m invoked %l"

Warning: This functionality is only available when compiled with xmlrpc-c libraries >= 1.32. Currently only the packages distributed by OpenNebula are linked with this library.

10.2.5 Virtual Networks

• NETWORK_SIZE: Here you can define the default size for the virtual networks.
• MAC_PREFIX: Default MAC prefix to be used to create the auto-generated MAC addresses (this can be overwritten by the Virtual Network template).
• VLAN_IDS: VLAN ID pool for the automatic VLAN_ID assignment. This pool is for 802.1Q networks (Open vSwitch and 802.1Q drivers). The driver will try first to allocate VLAN_IDS[START] + VNET_ID.
  – start: First VLAN_ID to use
  – reserved: Comma separated list of VLAN_IDs
• VXLAN_IDS: Automatic VXLAN Network ID (VNI) assignment. This is used for vxlan networks.
  – start: First VNI to use
  – Note: reserved is not supported by this pool

Sample configuration:

#*******************************************************************************
# Physical Networks configuration
#*******************************************************************************
NETWORK_SIZE = 254
MAC_PREFIX   = "02:00"

VLAN_IDS = [
    START    = "2",
    RESERVED = "0, 1, 4095"
]

VXLAN_IDS = [
    START = "2"
]

10.2.6 Datastores

The Storage Subsystem allows users to set up images, which can be operative systems or data, to be used in Virtual Machines easily. These images can be used by several Virtual Machines simultaneously, and also shared with other

users.

Each datastore has its own directory (called BASE_PATH) in the form: $DATASTORE_LOCATION/<datastore_id>. You can symlink this directory to any other path if needed. BASE_PATH is generated from this attribute each time oned is started.

More information on the image repository can be found in the Managing Virtual Machine Images guide. Here you can configure the default values for the Datastores and Image templates. You have more information about the templates syntax here.

• DATASTORE_LOCATION: Path for Datastores. It IS the same for all the hosts and the front-end. It defaults to /var/lib/one/datastores (in self-contained mode it defaults to $ONE_LOCATION/var/datastores).
• DATASTORE_CAPACITY_CHECK: Checks that there is enough capacity before creating a new image. Defaults to Yes.
• DEFAULT_IMAGE_TYPE: Default value for the TYPE field when it is omitted in a template. Values accepted are:
  – OS: Image file holding an operating system
  – CDROM: Image file holding a CDROM
  – DATABLOCK: Image file holding a datablock, created as an empty block
• DEFAULT_DEVICE_PREFIX: Default value for the DEV_PREFIX field when it is omitted in a template. The missing DEV_PREFIX attribute is filled when Images are created, so changing this prefix won't affect existing Images. It can be set to:

  Prefix  Device type
  hd      IDE
  sd      SCSI
  vd      KVM virtual disk

• DEFAULT_CDROM_DEVICE_PREFIX: Same as above but for CDROM devices.

Sample configuration:

#*******************************************************************************
# Image Repository Configuration
#*******************************************************************************
#DATASTORE_LOCATION = /var/lib/one/datastores

DATASTORE_CAPACITY_CHECK = "yes"

DEFAULT_IMAGE_TYPE          = "OS"
DEFAULT_DEVICE_PREFIX       = "hd"
DEFAULT_CDROM_DEVICE_PREFIX = "hd"

10.2.7 Information Collector

This driver CANNOT BE ASSIGNED TO A HOST, and needs to be used with KVM drivers. Options that can be set:

• -a: Address to bind the collectd socket (default 0.0.0.0)
• -p: UDP port to listen for monitor information (default 4124)
• -f: Interval in seconds to flush collected information (default 5)
• -t: Number of threads for the server (default 50)

• -i: Time in seconds of the monitoring push cycle. This parameter must be smaller than MONITORING_INTERVAL, otherwise push monitoring will not be effective.

Sample configuration:

IM_MAD = [
      name       = "collectd",
      executable = "collectd",
      arguments  = "-p 4124 -f 5 -t 50 -i 20" ]

10.2.8 Information Drivers

The information drivers are used to gather information from the cluster nodes, and they depend on the virtualizer you are using. You can define more than one information manager but make sure it has different names. To define it, the following needs to be set:

• name: name for this information driver.
• executable: path of the information driver executable, can be an absolute path or relative to /usr/lib/one/mads/
• arguments: for the driver executable, usually a probe configuration file, can be an absolute path or relative to /etc/one/

For more information on configuring the information and monitoring system and hints to extend it please check the information driver configuration guide.

Sample configuration:

#-------------------------------------------------------------------------------
# KVM UDP-push Information Driver Manager Configuration
#   -r number of retries when monitoring a host
#   -t number of threads, i.e. number of hosts monitored at the same time
#-------------------------------------------------------------------------------
IM_MAD = [
      NAME          = "kvm",
      SUNSTONE_NAME = "KVM",
      EXECUTABLE    = "one_im_ssh",
      ARGUMENTS     = "-r 3 -t 15 kvm" ]
#-------------------------------------------------------------------------------

10.2.9 Virtualization Drivers

The virtualization drivers are used to create, control and monitor VMs on the hosts. You can define more than one virtualization driver (e.g. you have different virtualizers in several hosts) but make sure they have different names. To define it, the following needs to be set:

• name: name of the virtualization driver.
• executable: path of the virtualization driver executable, can be an absolute path or relative to /usr/lib/one/mads/
• arguments: for the driver executable
• type: driver type; supported drivers: xen, kvm or xml
• default: default values and configuration parameters for the driver, can be an absolute path or relative to /etc/one/

• keep_snapshots: do not remove snapshots on power on/off cycles and live migrations if the hypervisor supports that.
• imported_vms_actions: comma-separated list of actions supported for imported VMs. The available actions are:
  – migrate
  – live-migrate
  – terminate
  – terminate-hard
  – undeploy
  – undeploy-hard
  – hold
  – release
  – stop
  – suspend
  – resume
  – delete
  – delete-recreate
  – reboot
  – reboot-hard
  – resched
  – unresched
  – poweroff
  – poweroff-hard
  – disk-attach
  – disk-detach
  – nic-attach
  – nic-detach
  – snap-create
  – snap-delete

For more information on configuring and setting up the Virtual Machine Manager Driver please check the section that suits you:

• KVM Driver
• vCenter Driver

Sample configuration:

#-------------------------------------------------------------------------------
# Virtualization Driver Configuration
#-------------------------------------------------------------------------------

VM_MAD = [
    NAME                 = "kvm",
    SUNSTONE_NAME        = "KVM",
    EXECUTABLE           = "one_vmm_exec",
    ARGUMENTS            = "-t 15 -r 0 kvm",
    DEFAULT              = "vmm_exec/vmm_exec_kvm.conf",
    TYPE                 = "kvm",
    KEEP_SNAPSHOTS       = "no",
    IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold, release, suspend,
        resume, delete, reboot, reboot-hard, resched, unresched, disk-attach,
        disk-detach, nic-attach, nic-detach, snap-create, snap-delete"
]

10.2.10 Transfer Driver

The transfer drivers are used to transfer, clone, remove and create VM images. The default TM_MAD driver includes plugins for all supported storage modes. You may need to modify the TM_MAD to add custom plugins.

• executable: path of the transfer driver executable, can be an absolute path or relative to /usr/lib/one/mads/
• arguments: for the driver executable:
  – -t: number of threads, i.e. number of transfers made at the same time
  – -d: list of transfer drivers separated by commas; if not defined all the available drivers will be enabled

For more information on configuring different storage alternatives please check the storage configuration guide.

Sample configuration:

#-------------------------------------------------------------------------------
# Transfer Manager Driver Configuration
#-------------------------------------------------------------------------------
TM_MAD = [
    EXECUTABLE = "one_tm",
    ARGUMENTS  = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt"
]

The configuration for each driver is defined in the TM_MAD_CONF section. These values are used when creating a new datastore and should not be modified, since they define the datastore behavior.

• name: name of the transfer driver, listed in the -d option of the TM_MAD section
• ln_target: determines how the persistent images will be cloned when a new VM is instantiated.
  – NONE: The image will be linked and no more storage capacity will be used
  – SELF: The image will be cloned in the Images datastore
  – SYSTEM: The image will be cloned in the System datastore
• clone_target: determines how the non persistent images will be cloned when a new VM is instantiated.
  – NONE: The image will be linked and no more storage capacity will be used
  – SELF: The image will be cloned in the Images datastore
  – SYSTEM: The image will be cloned in the System datastore

• shared: determines if the storage holding the system datastore is shared among the different hosts or not. Valid values: yes or no.
• ds_migrate: set if system datastore migrations are allowed for this TM. Only useful for system datastore TMs.

Sample configuration:

TM_MAD_CONF = [
    name         = "lvm",
    ln_target    = "NONE",
    clone_target = "SELF",
    shared       = "yes"
]

TM_MAD_CONF = [
    name         = "shared",
    ln_target    = "NONE",
    clone_target = "SYSTEM",
    shared       = "yes",
    ds_migrate   = "yes"
]

10.2.11 Datastore Driver

The Datastore Driver defines a set of scripts to manage the storage backend.

• executable: path of the transfer driver executable, can be an absolute path or relative to /usr/lib/one/mads/
• arguments: for the driver executable
  – -t number of threads, i.e. number of repo operations at the same time
  – -d datastore mads separated by commas
  – -s system datastore tm drivers, used to monitor shared system ds

Sample configuration:

DATASTORE_MAD = [
    EXECUTABLE = "one_datastore",
    ARGUMENTS  = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter -s shared,ssh,ceph,fs_lvm"
]

For more information on this driver and how to customize it, please visit its reference guide.

10.2.12 Marketplace Driver Configuration

Drivers to manage different marketplaces, specialized for the storage back-end.

• executable: path of the transfer driver executable, can be an absolute path or relative to /usr/lib/one/mads/
• arguments: for the driver executable
  – -t number of threads, i.e. number of repo operations at the same time
  – -m marketplace mads separated by commas

Sample configuration:

MARKET_MAD = [
    EXECUTABLE = "one_market",
    ARGUMENTS  = "-t 15 -m http,s3,one"
]

10.2.13 Hook System

Hooks in OpenNebula are programs (usually scripts) whose execution is triggered by a change in state in Virtual Machines or Hosts. The hooks can be executed either locally or remotely in the node where the VM or Host is running. To configure the Hook System the following needs to be set in the OpenNebula configuration file:

• executable: path of the hook driver executable, can be an absolute path or relative to /usr/lib/one/mads/
• arguments: for the driver executable, can be an absolute path or relative to /etc/one/

Sample configuration:

HM_MAD = [
    executable = "one_hm" ]

Virtual Machine Hooks (VM_HOOK) are defined by:

• name: for the hook, useful to track the hook (OPTIONAL).
• on: when the hook should be executed,
  – CREATE, when the VM is created (onevm create)
  – PROLOG, when the VM is in the prolog state
  – RUNNING, after the VM is successfully booted
  – UNKNOWN, when the VM is in the unknown state
  – SHUTDOWN, after the VM is shutdown
  – STOP, after the VM is stopped (including VM image transfers)
  – DONE, after the VM is deleted or shutdown
  – CUSTOM, user defined specific STATE and LCM_STATE combination of states to trigger the hook
• command: path can be absolute or relative to /usr/share/one/hooks
• arguments: for the hook. You can access VM information with $
  – $ID, the ID of the virtual machine
  – $TEMPLATE, the VM template in xml and base64 encoded
  – $PREV_STATE, the previous STATE of the Virtual Machine
  – $PREV_LCM_STATE, the previous LCM STATE of the Virtual Machine
• remote: values,
  – YES, the hook is executed in the host where the VM was allocated
  – NO, the hook is executed in the OpenNebula server (default)
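As a sketch of these attributes, a hook that records every VM that reaches the RUNNING state could look like the following; the script name notify.rb is hypothetical and would live under /usr/share/one/hooks:

VM_HOOK = [
    name      = "notify_running",
    on        = "RUNNING",
    command   = "notify.rb",
    arguments = "$ID $TEMPLATE",
    remote    = "NO" ]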

Host Hooks (HOST_HOOK) are defined by:

• name: for the hook, useful to track the hook (OPTIONAL)
• on: when the hook should be executed,
  – CREATE, when the Host is created (onehost create)
  – ERROR, when the Host enters the error state
  – DISABLE, when the Host is disabled
• command: path can be absolute or relative to /usr/share/one/hooks
• arguments: for the hook. You can use the following Host information:
  – $ID, the ID of the host
  – $TEMPLATE, the Host template in xml and base64 encoded
• remote: values,
  – YES, the hook is executed in the host
  – NO, the hook is executed in the OpenNebula server (default)

Sample configuration:

VM_HOOK = [
    name      = "advanced_hook",
    on        = "CUSTOM",
    state     = "ACTIVE",
    lcm_state = "BOOT_UNKNOWN",
    command   = "log.rb",
    arguments = "$ID $PREV_STATE $PREV_LCM_STATE" ]

10.2.14 Auth Manager Configuration

• AUTH_MAD: The driver that will be used to authenticate and authorize OpenNebula requests. If not defined OpenNebula will use the built-in auth policies.
  – executable: path of the auth driver executable, can be an absolute path or relative to /usr/lib/one/mads/
  – authn: list of authentication modules separated by commas; if not defined all the available modules will be enabled
  – authz: list of authorization modules separated by commas
• SESSION_EXPIRATION_TIME: Time in seconds to keep an authenticated token as valid. During this time, the driver is not used. Use 0 to disable session caching.
• ENABLE_OTHER_PERMISSIONS: Whether or not to enable the permissions for 'other'. Users in the oneadmin group will still be able to change these permissions. Values: YES or NO.
• DEFAULT_UMASK: Similar to Unix umask, sets the default resources permissions. Its format must be 3 octal digits. For example a umask of 137 will set the new object's permissions to 640 (rw- r-- ---).

Sample configuration:

AUTH_MAD = [
    executable = "one_auth_mad",
    authn = "ssh,x509,ldap,server_cipher,server_x509"
]

SESSION_EXPIRATION_TIME = 900

#ENABLE_OTHER_PERMISSIONS = "YES"

DEFAULT_UMASK = 177

The DEFAULT_AUTH can be used to point to the desired default authentication driver, for example ldap:

DEFAULT_AUTH = "ldap"

10.2.15 Restricted Attributes Configuration

Users outside the oneadmin group won't be able to instantiate templates created by users outside the oneadmin group that include the attributes restricted by:

• VM_RESTRICTED_ATTR: Virtual Machine attribute to be restricted for users outside the oneadmin group
• IMAGE_RESTRICTED_ATTR: Image attribute to be restricted for users outside the oneadmin group
• VNET_RESTRICTED_ATTR: Virtual Network attribute to be restricted for users outside the oneadmin group when updating a reservation. These attributes are not considered for regular VNET creation.

If the VM template has been created by admins in the oneadmin group, then users outside the oneadmin group can instantiate these templates.

Sample configuration:

VM_RESTRICTED_ATTR = "CONTEXT/FILES"
VM_RESTRICTED_ATTR = "NIC/MAC"
VM_RESTRICTED_ATTR = "NIC/VLAN_ID"
VM_RESTRICTED_ATTR = "NIC/BRIDGE"
VM_RESTRICTED_ATTR = "NIC_DEFAULT/MAC"
VM_RESTRICTED_ATTR = "NIC_DEFAULT/VLAN_ID"
VM_RESTRICTED_ATTR = "NIC_DEFAULT/BRIDGE"
VM_RESTRICTED_ATTR = "DISK/TOTAL_BYTES_SEC"
VM_RESTRICTED_ATTR = "DISK/READ_BYTES_SEC"
VM_RESTRICTED_ATTR = "DISK/WRITE_BYTES_SEC"
VM_RESTRICTED_ATTR = "DISK/TOTAL_IOPS_SEC"
VM_RESTRICTED_ATTR = "DISK/READ_IOPS_SEC"
VM_RESTRICTED_ATTR = "DISK/WRITE_IOPS_SEC"
#VM_RESTRICTED_ATTR = "DISK/SIZE"
VM_RESTRICTED_ATTR = "DISK/ORIGINAL_SIZE"
VM_RESTRICTED_ATTR = "CPU_COST"
VM_RESTRICTED_ATTR = "MEMORY_COST"
VM_RESTRICTED_ATTR = "DISK_COST"
VM_RESTRICTED_ATTR = "PCI"
VM_RESTRICTED_ATTR = "USER_INPUTS"
#VM_RESTRICTED_ATTR = "RANK"
#VM_RESTRICTED_ATTR = "SCHED_RANK"
#VM_RESTRICTED_ATTR = "REQUIREMENTS"
#VM_RESTRICTED_ATTR = "SCHED_REQUIREMENTS"

IMAGE_RESTRICTED_ATTR = "SOURCE"

VNET_RESTRICTED_ATTR = "VN_MAD"
VNET_RESTRICTED_ATTR = "PHYDEV"

VNET_RESTRICTED_ATTR = "VLAN_ID"
VNET_RESTRICTED_ATTR = "BRIDGE"

VNET_RESTRICTED_ATTR = "AR/VN_MAD"
VNET_RESTRICTED_ATTR = "AR/PHYDEV"
VNET_RESTRICTED_ATTR = "AR/VLAN_ID"
VNET_RESTRICTED_ATTR = "AR/BRIDGE"

OpenNebula evaluates these attributes:

• on VM template instantiate (onetemplate instantiate)
• on VM create (onevm create)
• on VM attach nic (onevm nic-attach) (for example to forbid users to use NIC/MAC)

10.2.16 Inherited Attributes Configuration

The following attributes will be copied from the resource template to the instantiated VMs. More than one attribute can be defined.

• INHERIT_IMAGE_ATTR: Attribute to be copied from the Image template to each VM/DISK.
• INHERIT_DATASTORE_ATTR: Attribute to be copied from the Datastore template to each VM/DISK.
• INHERIT_VNET_ATTR: Attribute to be copied from the Network template to each VM/NIC.

Sample configuration:

#INHERIT_IMAGE_ATTR     = "EXAMPLE"
#INHERIT_IMAGE_ATTR     = "SECOND_EXAMPLE"
#INHERIT_DATASTORE_ATTR = "COLOR"
#INHERIT_VNET_ATTR      = "BANDWIDTH_THROTTLING"

INHERIT_DATASTORE_ATTR = "CEPH_HOST"
INHERIT_DATASTORE_ATTR = "CEPH_SECRET"
INHERIT_DATASTORE_ATTR = "CEPH_USER"
INHERIT_DATASTORE_ATTR = "CEPH_CONF"
INHERIT_DATASTORE_ATTR = "POOL_NAME"

INHERIT_DATASTORE_ATTR = "ISCSI_USER"
INHERIT_DATASTORE_ATTR = "ISCSI_USAGE"
INHERIT_DATASTORE_ATTR = "ISCSI_HOST"

INHERIT_IMAGE_ATTR = "ISCSI_USER"
INHERIT_IMAGE_ATTR = "ISCSI_USAGE"
INHERIT_IMAGE_ATTR = "ISCSI_HOST"
INHERIT_IMAGE_ATTR = "ISCSI_IQN"

INHERIT_DATASTORE_ATTR = "GLUSTER_HOST"
INHERIT_DATASTORE_ATTR = "GLUSTER_VOLUME"

INHERIT_DATASTORE_ATTR = "DISK_TYPE"
INHERIT_DATASTORE_ATTR = "ADAPTER_TYPE"

INHERIT_IMAGE_ATTR = "DISK_TYPE"
INHERIT_IMAGE_ATTR = "ADAPTER_TYPE"

INHERIT_VNET_ATTR = "VLAN_TAGGED_ID"

INHERIT_VNET_ATTR = "FILTER_IP_SPOOFING"
INHERIT_VNET_ATTR = "FILTER_MAC_SPOOFING"
INHERIT_VNET_ATTR = "MTU"

10.2.17 OneGate Configuration

• ONEGATE_ENDPOINT: Endpoint where OneGate will be listening. Optional.
Sample configuration:

ONEGATE_ENDPOINT = "http://192.168.0.5:5030"

10.3 Logging & Debugging

OpenNebula provides logs for many resources. It supports three logging systems: file based logging systems, syslog
logging and logging to standard error stream.
In the case of file based logging, OpenNebula keeps separate log files for each active component, all of them stored in
/var/log/one. To help users and administrators find and solve problems, they can also access some of the error
messages from the CLI or the Sunstone GUI.
With syslog or standard error the logging strategy is almost identical, except that the log messages slightly change
their format to follow syslog conventions and include the resource information.

10.3.1 Configure the Logging System

The Logging system can be changed in /etc/one/oned.conf , specifically under the LOG section. Two parameters can
be changed: SYSTEM, which can be 'syslog', 'file' (default) or 'std', and DEBUG_LEVEL, which sets the logging verbosity.
For the scheduler the logging system can be changed in the exact same way. In this case the configuration is in
/etc/one/sched.conf.
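For instance, to send the oned messages to syslog with full debug verbosity, the LOG section could be set as follows (restarting oned afterwards):

LOG = [
  SYSTEM      = "syslog",
  DEBUG_LEVEL = 3
]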

10.3.2 Log Resources

There are different log resources corresponding to different OpenNebula components:
• ONE Daemon: The core component of OpenNebula dumps all its logging information onto
/var/log/one/oned.log. Its verbosity is regulated by DEBUG_LEVEL in /etc/one/oned.conf.
By default the one start up scripts will backup the last oned.log file using the current time, e.g.
oned.log.20121011151807. Alternatively, this resource can be logged to the syslog.
• Scheduler: All the scheduler information is collected into the /var/log/one/sched.log file. This re-
source can also be logged to the syslog.
• Virtual Machines: The information specific to a VM will be dumped to the log file
/var/log/one/<vmid>.log, or to the syslog/stderr if enabled. All VMs controlled by OpenNebula have their
own folder, /var/lib/one/vms/<VID>. You can find the following information in it:
– Deployment description files : Stored in deployment.<EXECUTION>, where <EXECUTION> is the
sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the
second and so on).


– Transfer description files : Stored in transfer.<EXECUTION>.<OPERATION>, where
<EXECUTION> is the sequence number in the execution history of the VM, <OPERATION> is the stage
where the script was used, e.g. transfer.0.prolog, transfer.0.epilog, or transfer.1.cleanup.
• Drivers: Each driver can have its ONE_MAD_DEBUG variable activated in its RC file. If so, error informa-
tion will be dumped to /var/log/one/name-of-the-driver-executable.log; log information of
the drivers is in oned.log.

10.3.3 Logging Format

The structure of an OpenNebula message for a file based logging system is the following:
date [Z<zone_id>][module][log_level]: message body

In the case of syslog it follows the standard:
date hostname process[pid]: [Z<zone_id>][module][log_level]: message

Where the zone_id is the ID of the zone in the federation (0 for single-zone set-ups), module is any of the internal
OpenNebula components (VMM, ReM, TM, etc.), and the log_level is a single character indicating the log level: I for
info, D for debug, etc.
For the syslog, OpenNebula will also log the Virtual Machine events like this:
date hostname process[pid]: [VM id][Z<zone_id>][module][log_level]: message

And similarly for the stderr logging, for oned and VM events the format are:
date [Z<zone_id>][module][log_level]: message
date [VM id][Z<zone_id>][module][log_level]: message

10.3.4 Virtual Machine Errors

Virtual Machine errors can be checked by the owner or an administrator using the onevm show output:
$ onevm show 0
VIRTUAL MACHINE 0 INFORMATION
ID : 0
NAME : one-0
USER : oneadmin
GROUP : oneadmin
STATE : ACTIVE
LCM_STATE : PROLOG_FAILED
START TIME : 07/19 17:44:20
END TIME : 07/19 17:44:31
DEPLOY ID : -

VIRTUAL MACHINE MONITORING
NET_TX : 0
NET_RX : 0
USED MEMORY : 0
USED CPU : 0

VIRTUAL MACHINE TEMPLATE
CONTEXT=[
FILES=/tmp/some_file,


TARGET=hdb ]
CPU=0.1
ERROR=[
MESSAGE="Error excuting image transfer script: Error copying /tmp/some_file to /var/lib/one/0/images/isofiles",
TIMESTAMP="Tue Jul 19 17:44:31 2011" ]
MEMORY=64
NAME=one-0
VMID=0

VIRTUAL MACHINE HISTORY
SEQ HOSTNAME ACTION START TIME PTIME
0 host01 none 07/19 17:44:31 00 00:00:00 00 00:00:00

Here the error indicates that a file could not be copied, most probably because it does not exist.
Alternatively you can also check the log files for the VM at /var/log/one/<vmid>.log.

Note: Check the Virtual Machines High Availability Guide, to learn how to recover a VM in fail state.
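For a quick illustration only (the exact recovery options are covered in that guide and may depend on your version), a VM stuck in a failure state can typically be retried or recreated from the CLI:

# Retry the failed operation for VM 0, or delete and recreate it
$ onevm recover 0 --retry
$ onevm recover 0 --recreate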

10.3.5 Host Errors

Host errors can be checked executing the onehost show command:

$ onehost show 1
HOST 1 INFORMATION
ID : 1
NAME : host01
STATE : ERROR
IM_MAD : im_kvm
VM_MAD : vmm_kvm
TM_MAD : tm_shared

HOST SHARES
MAX MEM : 0
USED MEM (REAL) : 0
USED MEM (ALLOCATED) : 0
MAX CPU : 0
USED CPU (REAL) : 0
USED CPU (ALLOCATED) : 0
RUNNING VMS : 0

MONITORING INFORMATION
ERROR=[
MESSAGE="Error monitoring host 1 : MONITOR FAILURE 1 Could not update remotes",
TIMESTAMP="Tue Jul 19 17:17:22 2011" ]

The error message appears in the ERROR value of the monitoring information. To get more information you can check
/var/log/one/oned.log. For example, for this error the log file shows:

Tue Jul 19 17:17:22 2011 [InM][I]: Monitoring host host01 (1)
Tue Jul 19 17:17:22 2011 [InM][I]: Command execution fail: scp -r /var/lib/one/remotes/. host01:/var/tmp/one
Tue Jul 19 17:17:22 2011 [InM][I]: ssh: Could not resolve hostname host01: nodename nor servname provided, or not known
Tue Jul 19 17:17:22 2011 [InM][I]: lost connection
Tue Jul 19 17:17:22 2011 [InM][I]: ExitCode: 1
Tue Jul 19 17:17:22 2011 [InM][E]: Error monitoring host 1 : MONITOR FAILURE 1 Could not update remotes

From the execution output we notice that the host name is not known, probably due to a mistake naming the host.

Tue Jul 19 17:17:22 2011 [InM][I]: lost connection
Tue Jul 19 17:17:22 2011 [InM][I]: ExitCode: 1
Tue Jul 19 17:17:22 2011 [InM][E]: Error monitoring host 1 : MONITOR FAILURE 1 Could not update remotes

From the execution output we notice that the host name is not known, probably a mistake when naming the host (a possible fix is sketched below).
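A possible fix, sketched under the assumption that host01 was simply missing from DNS or /etc/hosts on the Front-end (the IP address below is hypothetical, and the onehost sync and enable operations are assumed to be available in this release):

# echo "192.168.0.10 host01" >> /etc/hosts    (run as root on the Front-end)
$ onehost sync                                # copy the remote probes to the hosts again
$ onehost enable host01                       # let the host be monitored again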

10.4 Onedb Tool

This section describes the onedb CLI tool. It can be used to get information from an OpenNebula database, upgrade it, or fix inconsistency problems.

10.4.1 Connection Parameters

The command onedb can connect to any SQLite or MySQL database. Visit the onedb man page for a complete reference. These are two examples for the default databases:

$ onedb <command> -v --sqlite /var/lib/one/one.db
$ onedb <command> -v -S localhost -u oneadmin -p oneadmin -d opennebula
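If you are unsure which connection parameters to use, they mirror the DB directive in /etc/one/oned.conf. A typical MySQL configuration looks like this (the values below are illustrative placeholders, not defaults from your installation):

DB = [ backend = "mysql",
       server  = "localhost",
       port    = 0,
       user    = "oneadmin",
       passwd  = "oneadmin",
       db_name = "opennebula" ]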

10.4.2 onedb fsck

Checks the consistency of the DB, and fixes the problems found. For example, if the machine where OpenNebula is running crashes, or loses connectivity with the database, you may have a wrong number of VMs running in a Host, or incorrect usage quotas for some users.

$ onedb fsck --sqlite /var/lib/one/one.db
Sqlite database backup stored in /var/lib/one/one.db.bck
Use 'onedb restore' or copy the file back to restore the DB.

Host 0 RUNNING_VMS has 12 is 11
Host 0 CPU_USAGE has 1200 is 1100
Host 0 MEM_USAGE has 1572864 is 1441792
Image 0 RUNNING_VMS has 6 is 5
User 2 quotas: CPU_USED has 12 is 11.0
User 2 quotas: MEMORY_USED has 1536 is 1408
User 2 quotas: VMS_USED has 12 is 11
User 2 quotas: Image 0 RVMS has 6 is 5
Group 1 quotas: CPU_USED has 12 is 11.0
Group 1 quotas: MEMORY_USED has 1536 is 1408
Group 1 quotas: VMS_USED has 12 is 11
Group 1 quotas: Image 0 RVMS has 6 is 5

Total errors found: 12

If onedb fsck shows the following error message:

[UNREPAIRED] History record for VM <<vid>> seq # <<seq>> is not closed (etime = 0)

This is due to bug #4000. It means that, when using accounting or showback, the etime (end time) of that history record is not set, and the VM is considered as still running when it should not. To fix this problem, locate the time when the VM was shut down in the logs and then execute this patch to edit the times manually:

$ onedb patch -v --sqlite /var/lib/one/one.db /usr/lib/one/ruby/onedb/patches/history_times.rb
Version read:
Shared tables 4.11.80 : OpenNebula 4.14.1 daemon bootstrap
Local tables  4.13.85 : OpenNebula 4.14.1 daemon bootstrap

Sqlite database backup stored in /var/lib/one/one.db_2015-10-13_12:40:2.bck
Use 'onedb restore' or copy the file back to restore the DB.

> Running patch /usr/lib/one/ruby/onedb/patches/history_times.rb

This tool will allow you to edit the timestamps of VM history records, used to calculate accounting and showback.

VM ID: 1
History sequence number: 0

STIME  Start time          : 2015-10-08 15:24:06 UTC
PSTIME Prolog start time   : 2015-10-08 15:24:06 UTC
PETIME Prolog end time     : 2015-10-08 15:24:29 UTC
RSTIME Running start time  : 2015-10-08 15:24:29 UTC
RETIME Running end time    : 2015-10-08 15:42:35 UTC
ESTIME Epilog start time   : 2015-10-08 15:42:35 UTC
EETIME Epilog end time     : 2015-10-08 15:42:36 UTC
ETIME  End time            : 2015-10-08 15:42:36 UTC

To set new values: empty to use the current value, <YYYY-MM-DD HH:MM:SS> in UTC, or 0 to leave unset (open history record).

STIME  Start time          : 2015-10-08 15:24:06 UTC
New value                  :

ETIME  End time            : 2015-10-08 15:42:36 UTC
New value                  :

The history record # 0 for VM 1 will be updated with these new values:
STIME  Start time          : 2015-10-08 15:24:06 UTC
PSTIME Prolog start time   : 2015-10-08 15:24:06 UTC
PETIME Prolog end time     : 2015-10-08 15:24:29 UTC
RSTIME Running start time  : 2015-10-08 15:24:29 UTC
RETIME Running end time    : 2015-10-08 15:42:35 UTC
ESTIME Epilog start time   : 2015-10-08 15:42:35 UTC
EETIME Epilog end time     : 2015-10-08 15:42:36 UTC
ETIME  End time            : 2015-10-08 15:42:36 UTC
Confirm to write to the database [Y/n]: y
> Done
> Total time: 27.79s
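Note that onedb modifies the database directly, so both fsck and the patch above are best run with OpenNebula stopped. A minimal sketch, assuming a systemd unit named opennebula and the MySQL connection values shown in the Connection Parameters section:

$ sudo systemctl stop opennebula
$ onedb fsck -S localhost -u oneadmin -p oneadmin -d opennebula
$ sudo systemctl start opennebula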

10.4.3 onedb version

Prints the current DB version.

$ onedb version --sqlite /var/lib/one/one.db
3.8.0

Use the -v flag to see the complete version and comment.

$ onedb version -v --sqlite /var/lib/one/one.db
Version:   3.8.0
Timestamp: 10/19 16:04:17
Comment:   Database migrated from 3.7.80 to 3.8.0 (OpenNebula 3.8.0) by onedb command.

If the MySQL database password contains special characters, such as @ or #, the onedb command will fail to connect to it. The workaround is to temporarily change the oneadmin password to an ASCII string. The SET PASSWORD statement can be used for this:

$ mysql -u oneadmin -p
mysql> SET PASSWORD = PASSWORD('newpass');

10.4.4 onedb history

Each time the DB is upgraded, the process is logged. You can use the history command to retrieve the upgrade history.

$ onedb history -S localhost -u oneadmin -p oneadmin -d opennebula
Version:   3.0.0
Timestamp: 10/07 12:40:49
Comment:   OpenNebula 3.0.0 daemon bootstrap

...

Version:   3.7.80
Timestamp: 10/08 17:36:15
Comment:   Database migrated from 3.7.0 to 3.7.80 (OpenNebula 3.7.80) by onedb command.

Version:   3.8.0
Timestamp: 10/19 16:04:17
Comment:   Database migrated from 3.7.80 to 3.8.0 (OpenNebula 3.8.0) by onedb command.

10.4.5 onedb upgrade

The upgrade process is fully documented in the Upgrading from Previous Versions guide.

10.4.6 onedb backup

Dumps the OpenNebula DB to a file.

$ onedb backup --sqlite /var/lib/one/one.db /tmp/my_backup.db
Sqlite database backup stored in /tmp/my_backup.db
Use 'onedb restore' or copy the file back to restore the DB.

10.4.7 onedb restore

Restores the DB from a backup file. Please note that this tool will only restore backups generated from the same backend, i.e. you cannot back up a SQLite database and then try to populate a MySQL one.

10.4.8 onedb sqlite2mysql

This command migrates from a sqlite database to a mysql database. The procedure to follow is (a consolidated command sketch is shown after the list):
• Stop OpenNebula
• Change the DB directive in /etc/one/oned.conf to use MySQL instead of SQLite
• Bootstrap the MySQL Database: oned -i
• Migrate the Database: onedb sqlite2mysql -s <SQLITE_PATH> -u <MYSQL_USER> -p <MYSQL_PASS> -d <MYSQL_DB>
• Start OpenNebula
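Put together, the migration might look like this (a minimal sketch: the systemd unit name, the example SQLite path and the MySQL credentials are assumptions, so replace them with the values from your oned.conf):

$ sudo systemctl stop opennebula
$ vi /etc/one/oned.conf          # switch the DB directive from the sqlite to the mysql backend
$ oned -i                        # bootstrap the empty MySQL database (run as oneadmin)
$ onedb sqlite2mysql -s /var/lib/one/one.db -u oneadmin -p oneadmin -d opennebula
$ sudo systemctl start opennebula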

10.5 Large Deployments

10.5.1 Monitoring

In KVM environments, OpenNebula supports two native monitoring systems: ssh-pull and udp-push. The former, ssh-pull, is the default monitoring system for OpenNebula <= 4.2; from OpenNebula 4.4 onwards the default monitoring system is udp-push. This model is highly scalable and its limit (in terms of number of VMs monitored per second) is bounded by the performance of the server running oned and the database server. Read more in the Monitoring guide.

For vCenter environments, OpenNebula uses the VI API offered by vCenter to monitor the state of the hypervisor and all the Virtual Machines running in all the imported vCenter clusters. The driver is optimized to cache common VM information.

In both environments, our scalability testing achieves the monitoring of tens of thousands of VMs in a few minutes.

10.5.2 Core Tuning

OpenNebula keeps the monitoring history for a defined time in a database table. These values are then used to draw the plots in Sunstone.

These monitoring entries can take quite a bit of storage in your database. The amount of storage used will depend on the size of your cloud and the following configuration attributes in oned.conf:
• MONITORING_INTERVAL: Time in seconds between each monitoring cycle. Default: 60.
• collectd IM_MAD -i argument (KVM only): Time in seconds of the monitoring push cycle. Default: 20.
• HOST_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire Host monitoring information. Default: 12h.
• VM_MONITORING_EXPIRATION_TIME: Time, in seconds, to expire VM monitoring information. Default: 4h.

If you don't use Sunstone, you may want to disable the monitoring history by setting both expiration times to 0.

Each monitoring entry will be around 2 KB for each Host, and 4 KB for each VM. To give you an idea of how much database storage you will need to prepare, here are some examples:

Monitoring interval   Host expiration   # Hosts   Storage
20s                   12h               200       850 MB
20s                   24h               1000      8.2 GB

Monitoring interval   VM expiration     # VMs     Storage
20s                   4h                2000      1.8 GB
20s                   24h               10000     7 GB

10.5.3 API Tuning

For large deployments with lots of XML-RPC calls the default values for the XML-RPC server are too conservative. The values you can modify and their meaning are explained in oned.conf and the xmlrpc-c library documentation. From our experience these values improve the server behavior with a high amount of client calls (a combined oned.conf excerpt with these and the Core Tuning attributes is shown at the end of this section):

MAX_CONN = 240
MAX_CONN_BACKLOG = 480

The core is able to paginate some pool answers. This makes the memory consumption decrease and in some cases the parsing faster. By default the pagination value is 2000 objects, but it can be changed using the environment variable ONE_POOL_PAGE_SIZE. It should be bigger than 2. For example, to list VMs with a page size of 5000 we can use:

$ ONE_POOL_PAGE_SIZE=5000 onevm list

To disable pagination we can use a non-numeric value:

$ ONE_POOL_PAGE_SIZE=disabled onevm list

This environment variable can also be used for Sunstone.

10.5.4 Driver Tuning

OpenNebula drivers have 15 threads by default. This is the maximum number of actions a driver can perform at the same time; any further actions will be queued. You can change this value in oned.conf; the driver parameter is -t.

10.5.5 Database Tuning

For non-test installations use the MySQL database; SQLite is too slow for more than a couple of hosts and a few VMs.

10.5.6 Sunstone Tuning

Please refer to the guide about Configuring Sunstone for Large Deployments.
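To make the tuning above concrete, here is a hedged oned.conf excerpt combining the defaults discussed in Core Tuning with the values suggested in API Tuning (the expiration times are expressed in seconds, so 12h = 43200 and 4h = 14400; verify the attribute names against the oned.conf shipped with your release before applying):

MONITORING_INTERVAL             = 60      # seconds between monitoring cycles
HOST_MONITORING_EXPIRATION_TIME = 43200   # 12h of Host monitoring history
VM_MONITORING_EXPIRATION_TIME   = 14400   # 4h of VM monitoring history; set both to 0 to disable history

MAX_CONN         = 240                    # XML-RPC concurrent client connections
MAX_CONN_BACKLOG = 480                    # XML-RPC pending connections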