How to Create a Personal Cloud at Home Using OpenStack

This document shows how to spin up a proof-of-concept cloud on three nodes using the Packstack installation utility.

The hardware configuration used in this guide:

  • 1x D-Link DR-600 router to provide access to and from the Internet (NAT, firewall, etc.)
  • 1x D-Link switch to provide communication between the machines
  • 3x physical machines with hardware virtualization extensions and at least one network adapter each:
    • compute01: 8cpu | 32G RAM | 1.8T data | 192.168.1.101 | CentOS 7 x86_64
    • compute02: 8cpu | 32G RAM | 500G data | 192.168.1.102 | CentOS 7 x86_64
    • compute03: 4cpu | 16G RAM | 500G data | 192.168.1.103 | CentOS 7 x86_64

Run the following commands on the first node "compute01" as root user


Start the installation on the first node, which is going to act as a controller and a compute node at the same time.

If you are using a non-English locale, make sure your /etc/environment is populated:

echo "LANG=en_US.utf-8
LC_ALL=en_US.utf-8" >> /etc/environment

Add the other nodes' addresses to /etc/hosts:

echo "192.168.1.101 compute01
192.168.1.102 compute02
192.168.1.103 compute03" >> /etc/hosts

If your system meets all the prerequisites mentioned below, proceed with running the following commands.

  • Red Hat Enterprise Linux (RHEL) 7 is the minimum recommended version, or the equivalent version of one of the RHEL-based Linux distributions such as CentOS, Scientific Linux, and so on. x86_64 is currently the only supported architecture.
  • Machine with at least 16GB RAM, processors with hardware virtualization extensions, and at least one network adapter.

Update your current packages:

yum update -y

If you plan on having external network access to the server and instances, this is a good moment to configure your network settings properly. Assigning a static IP address to your network card and disabling NetworkManager are good ideas.
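For example, a minimal static configuration for the uplink NIC could look like the sketch below; the interface name p3p1 and the addresses are assumptions taken from this guide's layout, so adjust them to your hardware:

# /etc/sysconfig/network-scripts/ifcfg-p3p1
DEVICE=p3p1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.101
PREFIX=24
GATEWAY=192.168.1.254
DNS1=192.168.1.254

Then switch from NetworkManager to the legacy network service and stop the firewall: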

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum update -y

On CentOS 7, the Extras repository, which is enabled by default, provides the RPM that enables the OpenStack repository, so you can simply install it to set up the OpenStack repository:

yum install -y centos-release-openstack-rocky
yum update -y

Install Packstack Installer:

yum install -y openstack-packstack

In case you have additional hard drives, you can use them to extend Cinder storage capacity with the LVM backend. To add two disks available as /dev/sdb and /dev/sdc, run the following commands.
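Before touching the disks, confirm their device names; /dev/sdb and /dev/sdc are assumptions based on this guide's hardware:

lsblk
fdisk -l /dev/sdb /dev/sdc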

For the first disk, /dev/sdb:
Create an XFS file system with an internal log on the /dev/sdb disk:

mkfs.xfs -f -L "HomeLabStore" /dev/sdb

Next, we will use the pvcreate command to create a physical volume for later use by LVM. In this case, the physical volume will be our new /dev/sdb:

pvcreate -y /dev/sdb

Now, using the vgextend command, extend the centos volume group by adding the /dev/sdb physical volume created in the previous step:

vgextend centos /dev/sdb

The logical volume is then extended with the lvextend command, growing the original /dev/centos/root logical volume over the new /dev/sdb space:

lvextend -l +100%FREE /dev/centos/root

Grow the file system:

xfs_growfs /

Check the changes:

lvmdiskscan -l
df -h

For the second disk, /dev/sdc:
Create an XFS file system with an internal log on the /dev/sdc disk:

mkfs.xfs -f -L "HomeLabStore" /dev/sdc

Next, we will use the pvcreate command to create a physical volume for later use by LVM. In this case, the physical volume will be our new /dev/sdc:

pvcreate -y /dev/sdc

Now, using the vgextend command, extend the centos volume group by adding the /dev/sdc physical volume created in the previous step:

vgextend centos /dev/sdc

The logical volume is then extended with the lvextend command, growing the original /dev/centos/root logical volume over the new /dev/sdc space:

lvextend -l +100%FREE /dev/centos/root

Grow the file system:

xfs_growfs /

Check the changes:

lvmdiskscan -l
df -h

Reboot the node:

reboot

Packstack takes the work out of manually setting up OpenStack. For a single-node OpenStack deployment (we will add the other nodes in a second deployment), run the following command:

packstack --allinone --provision-demo=n \
  --os-neutron-l2-agent=openvswitch \
  --os-neutron-ml2-mechanism-drivers=openvswitch \
  --os-neutron-ml2-tenant-network-types=vxlan \
  --os-neutron-ml2-type-drivers=flat,vlan,gre,vxlan \
  --os-neutron-ovs-bridge-mappings=extnet:br-ex \
  --os-neutron-ovs-bridge-interfaces=br-ex:p3p1

This means we will bring up the p3p1 interface and plug it into the br-ex OVS bridge as a port, providing the uplink connectivity, and we define "extnet" as the logical name of our external physical L2 segment. Later, we will refer to our provider network by that name when creating external networks.

Some useful commands to check network configuration:

ovs-vsctl list-br
cat /etc/sysconfig/network-scripts/ifcfg-p3p1
cat /etc/sysconfig/network-scripts/ifcfg-br-ex
ovs-vsctl list-ports br-tun
ovs-vsctl list-ports br-int
ovs-vsctl list-ports br-ex
grep type_drivers /etc/neutron/plugins/ml2/ml2_conf.ini
grep tenant_network_types /etc/neutron/plugins/ml2/ml2_conf.ini
grep flat_networks /etc/neutron/plugins/ml2/ml2_conf.ini
grep vni_ranges /etc/neutron/plugins/ml2/ml2_conf.ini
grep -B12 tunnel_types /etc/neutron/plugins/ml2/openvswitch_agent.ini
grep l2_population /etc/neutron/plugins/ml2/openvswitch_agent.ini
grep prevent_arp_spoofing /etc/neutron/plugins/ml2/openvswitch_agent.ini
grep bridge_mappings /etc/neutron/plugins/ml2/openvswitch_agent.ini

Reboot the node:

reboot

Now, create the external network with Neutron.

source keystonerc_admin
neutron net-create external_network --provider:network_type flat --provider:physical_network extnet  --router:external

Please note: "extnet" is the L2 segment we defined with --os-neutron-ovs-bridge-mappings above.

You need to create a public subnet with an allocation range outside of your external DHCP range and set the gateway to the default gateway of the external network.

neutron subnet-create --name public_subnet --enable_dhcp=False \
  --allocation-pool=start=192.168.1.110,end=192.168.1.210 \
  --gateway=192.168.1.254 external_network 192.168.1.0/24

Get a CirrOS image, since none is provisioned when demo provisioning is disabled:

curl -L http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img | \
  glance image-create --name='cirros image' --visibility=public \
  --container-format=bare --disk-format=qcow2
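Verify that the image was uploaded and is active:

openstack image list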

Since you haven't created a project and a user yet, create them now:

openstack project create --enable homelab
openstack user create --project homelab --password SuperPassw0Rd --email [email protected] --enable ymaachi
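Depending on the release, the user may also need an explicit role on the project before it can authenticate. If logging in as ymaachi fails, grant one; the role is named _member_ on older releases and member on newer ones:

openstack role add --project homelab --user ymaachi _member_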

Now, let's create a source file to easily switch to the newly created user. Note that the file below does not set OS_AUTH_URL or the Keystone v3 domain variables; copy those lines from keystonerc_admin if authentication fails:

echo "    export OS_USERNAME=ymaachi
    export OS_PASSWORD='SuperPassw0Rd'
    export OS_TENANT_NAME=homelab
    export PS1='[\u@\h \W(keystone_ymaachi)]\$ '" > keystonerc_ymaachi
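Source the file to switch identities, then confirm the credentials work:

source keystonerc_ymaachi
openstack token issue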

Then create a router and set its gateway using the external network created by the admin in one of the previous steps:

neutron router-create homelab_router
neutron router-gateway-set homelab_router external_network

Now create a private network and a subnet in it, since demo provisioning has been disabled:

neutron net-create homelab_network
neutron subnet-create --name homelab_subnet --gateway 192.168.100.1 \
  --dns-nameserver 192.168.1.254 --dns-nameserver 8.8.8.8 --dns-nameserver 4.4.4.4 \
  homelab_network 192.168.100.0/24

Finally, connect your new private network to the public network through the router, which will provide floating IP addresses.

neutron router-interface-add homelab_router homelab_subnet

Import an SSH key to access the instances (you can generate your own key pair with ssh-keygen):

openstack keypair create --public-key ~/.ssh/id_rsa.pub rootATcompute01

Create a security group to authorize ICMP and SSH access to the instances:

openstack security group create secgroup01
openstack security group rule create --protocol icmp --ingress secgroup01
openstack security group rule create --protocol tcp --dst-port 22:22 secgroup01
openstack security group rule list
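At this point you can launch a test instance and reach it over a floating IP. The sketch below assumes no flavors exist yet (none are created when demo provisioning is disabled), and the floating IP shown is hypothetical; use the address printed by the floating ip create command:

openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny
openstack server create --image 'cirros image' --flavor m1.tiny \
  --network homelab_network --key-name rootATcompute01 \
  --security-group secgroup01 test01
openstack floating ip create external_network
openstack server add floating ip test01 192.168.1.111
ssh [email protected]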
 

Run the following commands on the second node "compute02" as root user


If you are using a non-English locale, make sure your /etc/environment is populated:

echo "LANG=en_US.utf-8
LC_ALL=en_US.utf-8" >> /etc/environment

Add the other nodes' addresses to /etc/hosts:

echo "192.168.1.101 compute01
192.168.1.102 compute02
192.168.1.103 compute03" >> /etc/hosts

If your system meets all the prerequisites mentioned below, proceed with running the following commands.

  • Red Hat Enterprise Linux (RHEL) 7 is the minimum recommended version, or the equivalent version of one of the RHEL-based Linux distributions such as CentOS, Scientific Linux, and so on. x86_64 is currently the only supported architecture.

Update your current packages:

yum update -y

If you plan on having external network access to the server and instances, this is a good moment to configure your network settings properly, as on compute01. Assigning a static IP address to your network card and disabling NetworkManager are good ideas.

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum update -y

On CentOS 7, the Extras repository, which is enabled by default, provides the RPM that enables the OpenStack repository, so you can simply install it to set up the OpenStack repository:

yum install -y centos-release-openstack-rocky
yum update -y

Reboot the node:

reboot

Run the following commands on the third node "compute03" as root user

If you are using a non-English locale, make sure your /etc/environment is populated:

echo "LANG=en_US.utf-8
LC_ALL=en_US.utf-8" >> /etc/environment

Add the other nodes' addresses to /etc/hosts:

echo "192.168.1.101 compute01
192.168.1.102 compute02
192.168.1.103 compute03" >> /etc/hosts

If your system meets all the prerequisites mentioned below, proceed with running the following commands.

  • Red Hat Enterprise Linux (RHEL) 7 is the minimum recommended version, or the equivalent version of one of the RHEL-based Linux distributions such as CentOS, Scientific Linux, and so on. x86_64 is currently the only supported architecture.

Update your current packages:

yum update -y

If you plan on having external network access to the server and instances, this is a good moment to configure your network settings properly, as on compute01. Assigning a static IP address to your network card and disabling NetworkManager are good ideas.

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum update -y

On CentOS 7, the Extras repository, which is enabled by default, provides the RPM that enables the OpenStack repository, so you can simply install it to set up the OpenStack repository:

yum install -y centos-release-openstack-rocky
yum update -y

Reboot the node:

reboot

Run the following commands on the controller node (the first node, "compute01") as root user


First, edit the "answer file" generated during the initial Packstack setup. You'll find the file in the directory from which you ran Packstack.

Note: by default, the answer file is called packstack-answers-$date-$time.txt.

vi packstack-answers-20210604-163902.txt

If you want your new node to be the only compute node, change the value of CONFIG_COMPUTE_HOSTS from your first host's IP address to your second host's IP address. You can also have all three systems as compute nodes if you add them as a comma-separated list:

CONFIG_COMPUTE_HOSTS=192.168.1.101,192.168.1.102,192.168.1.103

You can change the following values to install these additional services in your home cloud:

CONFIG_MANILA_INSTALL=y
CONFIG_PANKO_INSTALL=y
CONFIG_SAHARA_INSTALL=y
CONFIG_HEAT_INSTALL=y
CONFIG_MAGNUM_INSTALL=y
CONFIG_TROVE_INSTALL=y
CONFIG_NEUTRON_FWAAS=y
CONFIG_NEUTRON_VPNAAS=y
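If you prefer to script these edits instead of changing them by hand, sed does the job (a sketch; substitute your own answer-file name and repeat for each setting):

sed -i 's/^CONFIG_HEAT_INSTALL=n/CONFIG_HEAT_INSTALL=y/' packstack-answers-20210604-163902.txt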

Run Packstack again, specifying your modified "answer file" (saved here as a copy for the second deployment):

packstack --answer-file=packstack-answers-20210604-163902-second-deployment.txt

The installer will ask you for the root password of each host node in the deployment so that it can configure the nodes remotely using Puppet.
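Once the second run completes, you can verify from the controller that all three nodes are registered as hypervisors and that the services and agents are alive:

source keystonerc_admin
openstack hypervisor list
openstack compute service list
openstack network agent list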

Install the Manila UI in Horizon:

yum install -y openstack-manila-ui

Customize the Horizon logo:

mv /usr/share/openstack-dashboard/static/dashboard/img/logo-splash.svg /usr/share/openstack-dashboard/static/dashboard/img/logo-splash.svg.old
mv /usr/share/openstack-dashboard/static/dashboard/img/logo.svg /usr/share/openstack-dashboard/static/dashboard/img/logo.svg.old
mv /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo.svg /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo.svg.old
mv /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo-splash.svg /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo-splash.svg.old

From your computer, copy the new logo files to Horizon to replace the default OpenStack logos:

scp img/Color-logo-no-background.svg [email protected]:/usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo.svg
scp img/Color-logo-no-background-splash.svg [email protected]:/usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo-splash.svg
scp img/Color-logo-no-background.svg [email protected]:/usr/share/openstack-dashboard/static/dashboard/img/logo.svg
scp img/Color-logo-no-background-splash.svg [email protected]:/usr/share/openstack-dashboard/static/dashboard/img/logo-splash.svg

Now, from the controller node (compute01), add a DNS alias for Horizon and restart the httpd service:

vi /etc/httpd/conf.d/15-horizon_vhost.conf
  # add inside the <VirtualHost> block:
  ServerAlias cloud.yassinemaachi.com
systemctl restart httpd
systemctl restart memcached

Once the process is complete, you can log in to the OpenStack web interface, Horizon, by going to http://$YOURIP/dashboard. The user name is admin. The password can be found in the keystonerc_admin file in the /root directory of the control node.
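For example, to read the admin password:

grep OS_PASSWORD /root/keystonerc_admin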

 

To check the state of all the compute nodes in Horizon, go to the Hypervisors section under Compute in the admin project.

 
