Quick admin guide
Introduction
After following this guide, users will have a working OpenNebula installation with a graphical interface (Sunstone), at least one hypervisor (host) and a running virtual machine. It is useful for setting up pilot clouds, quickly testing new features and as a base deployment on which to build a larger infrastructure.
There are two separate roles in this installation: the Frontend and the Nodes. The Frontend server runs the OpenNebula services, and the Nodes execute the virtual machines. It is also possible to follow this guide with a single host that combines the Frontend and Node roles on one server. Either way, it is recommended to run virtual machines on hosts with virtualization extensions. To test whether your host supports them, run:
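The usual check greps the CPU flags in /proc/cpuinfo; a minimal sketch, assuming a Linux /proc filesystem:

```shell
# vmx = Intel VT-x, svm = AMD-V; "|| true" only keeps the exit status clean,
# the output (or lack of it) is what matters.
grep -E 'svm|vmx' /proc/cpuinfo || true
```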
If you do not get any output, you probably do not have virtualization extensions supported/enabled on your server.
Package Layout
opennebula-server: OpenNebula Daemons
opennebula: OpenNebula CLI commands
opennebula-sunstone: OpenNebula’s web GUI
opennebula-java: OpenNebula Java API
opennebula-node-kvm: Install dependencies required by OpenNebula in the nodes
opennebula-gate: Send information from Virtual Machines to OpenNebula
opennebula-flow: Manage OpenNebula Services
opennebula-context: Package for OpenNebula Guests
Additionally opennebula-common and opennebula-ruby exist, but they are intended to be used as dependencies.
Warning
To avoid problems, you should disable SELinux on all machines, both the Frontend and the Nodes. Set the following in /etc/selinux/config:
…
SELINUX=disabled
…
Then disable it for the running system:
# setenforce 0
# getenforce
Permissive
Warning
Some commands may fail depending on your iptables/firewall configuration. When testing, disable the firewall entirely to rule it out.
Frontend Installation
Note
Commands prefixed by # are meant to be run as root. Commands prefixed by $ must be run as oneadmin.
Install the repository
Enable the EPEL repository:
# yum install epel-release
Add the OpenNebula repository:
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/4.12/CentOS/7/x86_64/
enabled=1
gpgcheck=0
EOT
Install the required packages
A complete installation of OpenNebula will have at least the opennebula-server and opennebula-sunstone packages:
# yum install opennebula-server opennebula-sunstone
You should run install_gems to install all the gem dependencies. Choose CentOS/RedHat if prompted:
# /usr/share/one/install_gems
lsb_release command not found. If you are using a RedHat based distribution install redhat-lsb
Select your distribution or press enter to continue without
installing dependencies.
1.Ubuntu/Debian
2.CentOS/RedHat
Configure and start the services
There are two main processes that must be started: the main OpenNebula daemon (oned) and the graphical user interface (Sunstone).
Sunstone listens only to the loopback interface by default for security reasons. To change it, edit /etc/one/sunstone-server.conf and change :host: 127.0.0.1 to :host: 0.0.0.0.
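The same change can be scripted; a sketch of the substitution, shown here against a sample line rather than the live file:

```shell
# To apply in place on the real file:
#   sed -i 's/:host: 127\.0\.0\.1/:host: 0.0.0.0/' /etc/one/sunstone-server.conf
printf ':host: 127.0.0.1\n' | sed 's/:host: 127\.0\.0\.1/:host: 0.0.0.0/'
# prints ':host: 0.0.0.0'
```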
Now we can start the services:
# systemctl enable opennebula
# systemctl start opennebula
# systemctl enable opennebula-sunstone
# systemctl start opennebula-sunstone
Configure NFS
Note
Skip this section if you are using a single server for both the frontend and worker node roles.
Export /var/lib/one/ from the frontend to the worker nodes. To do so, add the following to the /etc/exports file in the frontend:
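The export line itself is not shown above; a typical entry looks like this (the options are a common choice, and root_squash assumes all access happens as oneadmin; restrict the * to your nodes' subnet in production):

```text
/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)
```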
Refresh the NFS exports by doing:
# systemctl restart nfs.service
Configure the SSH Public Key
OpenNebula needs passwordless SSH access from any node (including the Frontend) to any other node.
Add the following snippet to ~/.ssh/config as oneadmin so SSH does not prompt to add keys to the known_hosts file:
# su - oneadmin
$ cat << EOT > ~/.ssh/config
Host *
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
EOT
$ chmod 600 ~/.ssh/config
Nodes Installation
Install the repository
Add the OpenNebula repository:
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/4.12/CentOS/7/x86_64/
enabled=1
gpgcheck=0
EOT
Install the required packages
# yum install opennebula-node-kvm
Start the required services:
# systemctl enable messagebus.service
# systemctl start messagebus.service
# systemctl enable libvirtd.service
# systemctl start libvirtd.service
# systemctl enable nfs.service
# systemctl start nfs.service
Your node's main network interface must be connected to a bridge. The following example uses ens3, but the name of the interface may vary. OpenNebula requires that the bridge have the same name on all nodes.
To do so, substitute /etc/sysconfig/network-scripts/ifcfg-ens3 with:
DEVICE=ens3
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Ethernet
BRIDGE=br0
And add a new /etc/sysconfig/network-scripts/ifcfg-br0 file.
If you are using DHCP for your ens3 interface, use this template:
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
NM_CONTROLLED=no
If you are using a static IP address, use this template instead:
DEVICE=br0
TYPE=Bridge
IPADDR=<YOUR_IPADDRESS>
NETMASK=<YOUR_NETMASK>
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
After these changes, restart the network:
# systemctl restart network
Configure NFS
Note
Skip this section if you are using a single server for both the frontend and worker node roles.
Mount the datastore export. Add the following to your /etc/fstab:
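The fstab line itself is elided above; a typical entry, assuming the frontend exports /var/lib/one/ over NFS (the mount options are a reasonable default, and noauto means the share is mounted manually in the next step):

```text
192.168.1.1:/var/lib/one/  /var/lib/one/  nfs  soft,intr,rsize=8192,wsize=8192,noauto
```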
Note
Replace 192.168.1.1 with the IP address of the frontend.
Mount the NFS share:
# mount /var/lib/one/
If the above command fails or hangs, it can be a firewall issue.
Basic Usage
Note
All the operations in this section can be performed using Sunstone instead of the command line. Point your browser to: http://frontend:9869.
The default password for the oneadmin user, which is randomly generated on every installation, can be found in ~/.one/one_auth.
Adding a Host
To start running VMs, you should first register a worker node in OpenNebula.
Run this command for each node, replacing localhost with the node's hostname:
$ onehost create localhost -i kvm -v kvm -n dummy
Run the onehost list command until the host switches to the on state. If it ends up in the err state, you probably have problems with your SSH configuration; look at /var/log/one/oned.log.
Adding virtual resources
Once it works, you need to create a network, an image and a virtual machine template. Save the following network definition to a file, for example mynetwork.one:
$ cat << EOT > mynetwork.one
NAME = "private"
BRIDGE = br0
AR = [
    TYPE = IP4,
    IP = 192.168.0.100,
    SIZE = 3
]
EOT
Note
Replace the address range with free IPs in your host network. You can add more than one address range.
Now we can move ahead and create the resources in OpenNebula:
$ onevnet create mynetwork.one
$ oneimage create --name "CentOS-7-one-4.8" \
    --path http://marketplace.c12g.com/appliance/53e7bf928fb81d6a69000002/download \
    --driver qcow2 \
    -d default
$ onetemplate create --name "CentOS-7" \
    --cpu 1 --vcpu 1 --memory 512 --arch x86_64 \
    --disk "CentOS-7-one-4.8" \
    --nic "private" \
    --vnc --ssh --net_context
Note
If the oneimage create command complains that there is not enough space available in the datastore, you can disable the datastore capacity check in OpenNebula by setting DATASTORE_CAPACITY_CHECK = "no" in /etc/one/oned.conf. You need to restart OpenNebula after changing it.
You will need to wait until the image is ready to be used. Monitor its state by running oneimage list.
In order to dynamically add SSH keys to Virtual Machines, add your public key to the oneadmin user template:
$ EDITOR=vi oneuser update oneadmin
Add a new line like the following to the template:
SSH_PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2E..."
Substitute the value above with the output of:
$ cat ~/.ssh/id_rsa.pub
Running a Virtual Machine
To run a Virtual Machine, you will need to instantiate a template:
$ onetemplate instantiate "CentOS-7"
Run onevm list and watch the virtual machine going from PENDING to PROLOG to RUNNING. If the VM fails, check the reason in the log: /var/log/one/<VM_ID>/vm.log.
Note
If it stays too long in the pend state, you can check why by doing: onevm show <vmid> | grep ^SCHED_MESSAGE. If it reports that no datastores have enough capacity for the VM, you can force a manual deployment by running: onevm deploy <vmid> <hostid>.