Archives For Fedora

Fedora 22 was released a few months ago and, among many other new features, it came with a replacement for yum as the package manager: dnf, or DaNdiFied YUM. Yes, yum is still around, but it is now considered legacy software. DNF will also become the default package manager for RHEL and CentOS in the near future, so it is best to get familiar with it sooner rather than later.

DNF Commands

The first thing you need to understand about dnf is that many commands are basically still the same, although there are differences. Package management commands can be executed with almost the same syntax previously used with yum.

Search for a package.

[jrey@fed22-srv ~]$ sudo dnf search htop
Last metadata expiration check performed 1:25:54 ago on Mon Oct 5 23:47:45 2015.
=================================== N/S Matched: htop ====================================
htop.x86_64 : Interactive process viewer
php-lightopenid.noarch : PHP OpenID library
[jrey@fed22-srv ~]$

Install a package.

[jrey@fed22-srv ~]$ sudo dnf install htop

Remove a package.

[jrey@fed22-srv ~]$ sudo dnf remove htop

Get information about a package.

[jrey@fed22-srv ~]$ sudo dnf info htop
Last metadata expiration check performed 1:47:13 ago on Mon Oct 5 23:47:45 2015.
Available Packages
Name : htop
Arch : x86_64
Epoch : 0
Version : 1.0.3
Release : 4.fc22
Size : 91 k
Repo : fedora
Summary : Interactive process viewer
URL : http://hisham.hm/htop/
License : GPL+
Description : htop is an interactive text-mode process viewer for Linux, similar to
 : top(1).

[jrey@fed22-srv ~]$

Group and repository management commands are still the same as well.

[jrey@fed22-srv ~]$ sudo dnf repolist
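
Group commands follow the same pattern; as a quick sketch (the group name below is just an example, any group shown by the first command will do):

sudo dnf group list
sudo dnf group install "Development Tools"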

Query the available repositories to find out which package provides a specific command or capability.

[jrey@fed22-srv ~]$ sudo dnf repoquery --whatprovides htop
Last metadata expiration check performed 1:54:52 ago on Mon Oct 5 23:47:45 2015.
htop-0:1.0.3-4.fc22.x86_64
[jrey@fed22-srv ~]$

dnf also comes with some powerful capabilities, like querying the transaction history.

[jrey@fed22-srv ~]$ sudo dnf history list
Last metadata expiration check performed 11 days, 19:14:54 ago on Wed Oct 7 02:56:21 2015.
ID | Command line             | Date and time    | Action  | Altered
-------------------------------------------------------------------------------
 9 | history undo 8           | 2015-10-06 01:53 | Install | 1 
 8 | erase htop               | 2015-10-06 01:28 | Erase   | 1 
 7 | install htop -y          | 2015-10-06 01:28 | Install | 1 
 6 | remove htop              | 2015-10-06 01:14 | Erase   | 1 
 5 | install htop             | 2015-10-06 01:14 | Install | 1 
 4 | install make gcc kernel- | 2015-09-30 16:21 | Install | 9 
 3 | update                   | 2015-09-30 15:43 | I, U    | 112 
 2 | update                   | 2015-09-16 11:45 | I, O, U | 297 
 1 |                          | 2015-09-16 10:59 | Install | 658 EE
[jrey@fed22-srv ~]$

This can be especially helpful if you need to roll back a change, for example to clean up dependencies after uninstalling a package or to reinstall a package.

[jrey@fed22-srv ~]$ sudo dnf history undo 8
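
To inspect what a given transaction did before undoing it, dnf history info comes in handy:

sudo dnf history info 8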

You can also look for duplicated packages among the installed ones.

[jrey@fed22-srv ~]$ sudo dnf repoquery --duplicated
Last metadata expiration check performed 0:30:42 ago on Tue Oct 6 02:48:41 2015.
kernel-core-0:4.0.4-301.fc22.x86_64
kernel-core-0:4.1.6-201.fc22.x86_64
kernel-core-0:4.1.7-200.fc22.x86_64
kernel-modules-0:4.0.4-301.fc22.x86_64
kernel-modules-0:4.1.6-201.fc22.x86_64
kernel-modules-0:4.1.7-200.fc22.x86_64
[jrey@fed22-srv ~]$

Retrieve all available packages providing a specific software or capability.

[jrey@fed22-srv ~]$ sudo dnf repoquery --whatprovides curl
Last metadata expiration check performed 0:38:00 ago on Tue Oct 6 02:48:41 2015.
curl-0:7.40.0-3.fc22.x86_64
curl-0:7.40.0-7.fc22.x86_64
[jrey@fed22-srv ~]$

This is a very basic introduction to dnf capabilities, but hopefully it has given you an idea of how it works. My advice is to review the DNF documentation for all the details.

The Photon Connection

VMware Photon comes with tdnf (Tiny DNF), a development by VMware that provides compatible repository and package management capabilities. Not every dnf command is available, but the basic ones are there.

Package installation and updates.

[Screenshot: package installation and update with tdnf]
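
As a minimal sketch of the equivalent commands, tdnf accepts the same basic verbs as dnf (the package name is just an example):

tdnf install htop
tdnf update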

Repository management.

[Screenshot: repository management with tdnf]
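
Repositories can be listed the same way as with dnf, a quick sketch:

tdnf repolist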

In the future if I find the time I’ll write a new post with some advanced examples of dnf commands. Comments are welcome.

Juanma.

FirewallD, or Dynamic Firewall Manager, is the replacement for the IPTables firewall in Red Hat Enterprise Linux. The main improvement over IPTables is the ability to make changes without restarting the whole firewall service.

FirewallD was first introduced in Fedora 18 and has been the default firewall mechanism for Fedora since then. Finally this year Red Hat decided to include it in RHEL 7, and of course it also made its way to the different RHEL clones like CentOS 7 and Scientific Linux 7.

Checking FirewallD service status

To get the basic status of the service simply use firewall-cmd --state.

[root@centos7 ~]# firewall-cmd --state
running
[root@centos7 ~]#

If you need a more detailed status of the service you can always use the systemctl command.

[root@centos7 ~]# systemctl status firewalld.service
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
   Active: active (running) since Wed 2014-11-19 06:47:42 EST; 32min ago
 Main PID: 873 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─873 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Nov 19 06:47:41 centos7.vlab.local systemd[1]: Starting firewalld - dynamic firewall daemon...
Nov 19 06:47:42 centos7.vlab.local systemd[1]: Started firewalld - dynamic firewall daemon.
[root@centos7 ~]#

To enable or disable FirewallD, again use the systemctl commands.

systemctl enable firewalld.service
systemctl disable firewalld.service

Managing firewall zones

FirewallD introduces the concept of zones. A zone is no more than a way to define the level of trust for a set of connections. A connection definition can only be part of one zone at a time, but zones can be grouped. There is a set of predefined zones:

  • Public – For use in public areas. Only selected incoming connections are accepted.
  • Drop – Any incoming network packets are dropped, there is no reply. Only outgoing network connections are possible.
  • Block – Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6. Only network connections initiated within this system are possible.
  • External – For use on external networks with masquerading enabled especially for routers. Only selected incoming connections are accepted.
  • DMZ – For computers in your DMZ network, with limited access to the internal network. Only selected incoming connections are accepted.
  • Work – For use in work areas. Only selected incoming connections are accepted.
  • Home – For use in home areas. Only selected incoming connections are accepted.
  • Trusted – All network connections are accepted.
  • Internal – For use on internal networks. Only selected incoming connections are accepted.

By default all interfaces are assigned to the public zone. Each zone is defined in its own XML file stored in /usr/lib/firewalld/zones. For example the public zone XML file looks like this.

[root@centos7 zones]# cat public.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
</zone>
[root@centos7 zones]#

Retrieve a simple list of the existing zones.

[root@centos7 ~]# firewall-cmd --get-zones
block dmz drop external home internal public trusted work
[root@centos7 ~]#

Get a detailed list of the same zones.

firewall-cmd --list-all-zones

Get the default zone.

[root@centos7 ~]# firewall-cmd --get-default-zone
public
[root@centos7 ~]#

Get the active zones.

[root@centos7 ~]# firewall-cmd --get-active-zones
public
  interfaces: eno16777736 virbr0
[root@centos7 ~]#

Get the details of a specific zone.

[root@centos7 zones]# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eno16777736 virbr0
  sources:
  services: dhcpv6-client ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:

[root@centos7 zones]#

Change the default zone.

firewall-cmd --set-default-zone=home

Interfaces and sources

Zones can be bound to a network interface and to a specific network addressing or source.

Assign an interface to a different zone; the first command assigns it temporarily and the second makes it permanent.

firewall-cmd --zone=home --change-interface=eth0
firewall-cmd --permanent --zone=home --change-interface=eth0

Retrieve the zone an interface is assigned to.

[root@centos7 zones]# firewall-cmd --get-zone-of-interface=eno16777736
public
[root@centos7 zones]#

Bind the work zone to a source network.

firewall-cmd --permanent --zone=work --add-source=192.168.100.0/27

List the sources assigned to a zone, in this case work.

[root@centos7 ~]# firewall-cmd --permanent --zone=work --list-sources
172.16.10.0/24 192.168.100.0/27
[root@centos7 ~]#

Services

FirewallD can assign services permanently to a zone, for example to assign the http service to the dmz zone. A service can also be assigned to multiple zones.

[root@centos7 ~]# firewall-cmd --permanent --zone=dmz --add-service=http
success
[root@centos7 ~]# firewall-cmd --reload
success
[root@centos7 ~]#

List the services assigned to a given zone.

[root@centos7 ~]# firewall-cmd --list-services --zone=dmz
http ssh
[root@centos7 ~]#
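
To remove the service from the zone later, the reverse operation works the same way (a sketch; remember to reload afterwards):

firewall-cmd --permanent --zone=dmz --remove-service=http
firewall-cmd --reload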

Other operations

Besides zone, interface and service management, FirewallD, like other firewalls, can perform several network-related operations such as masquerading, setting direct rules and managing ports.

Masquerading and port forwarding

Add masquerading to a zone.

firewall-cmd --zone=external --add-masquerade

Query if masquerading is enabled in a zone.

[root@centos7 ~]# firewall-cmd --zone=external --query-masquerade
yes
[root@centos7 ~]#

You can also set port redirection. For example to forward traffic originally intended for port 80/tcp to port 8080/tcp.

firewall-cmd --zone=external --add-forward-port=port=80:proto=tcp:toport=8080

A destination address can also be added to the above command.

firewall-cmd --zone=external --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=172.16.10.21
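
To review the port forwards configured in a zone, a quick check:

firewall-cmd --zone=external --list-forward-ports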

Set direct rules

Create a firewall rule for 8080/tcp port.

firewall-cmd --direct --add-rule ipv4 filter INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT
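
The direct rules currently in place can be listed as well:

firewall-cmd --direct --get-all-rules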

Port management

Temporarily allow a port in a zone.

firewall-cmd --zone=dmz --add-port=8080/tcp
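
To make the change permanent and verify it, a couple of additional commands (a sketch):

firewall-cmd --permanent --zone=dmz --add-port=8080/tcp
firewall-cmd --reload
firewall-cmd --zone=dmz --list-ports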

Hopefully you found the post useful to start working with FirewallD. Comments are welcome.

Juanma.

Cockpit is a new web-based server manager to administer Linux servers. It provides system administrators with a user-friendly interface to manage their Linux servers, it includes multi-server management capabilities and, more importantly, it creates no interference or disconnection between the tasks done from the web and from the command line. This last feature is especially useful.

By default Cockpit, in its stable version, comes installed and enabled in Fedora 21 Server. It can also be found in CentOS/RHEL 7 Atomic, Fedora 21 Atomic and Fedora 21 Cloud, and there are plans to support Arch Linux in the near future.
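
If it is not present on your system, a minimal sketch of getting it running (package and unit names assumed from the Fedora packaging; the web interface listens on port 9090):

sudo yum install cockpit
sudo systemctl enable cockpit.socket
sudo systemctl start cockpit.socket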

Let's now review some of the features of Cockpit. As said before, multiple servers can be managed from the same Cockpit instance.

[Screenshot: multiple servers managed from one Cockpit instance]

Once you access one of the managed nodes it will present a general overview of the server with real-time charts of CPU, memory, disk I/O and network traffic.

[Screenshot: server overview with real-time charts]

On the left pane there are a series of actionable items that will give you access to the different subsystems of the node like Networking, Storage, User Accounts and even the status of the Docker containers running on the server, if the Docker service has been enabled.

System services view.

[Screenshot: system services view]

When a process is selected Cockpit will display its details.

[Screenshot: process details]

The Networking area displays traffic for the selected interface and the journal of the networking subsystem, and even allows you to create a new bond interface or a new bridge, or add a new VLAN tag to the interface.

[Screenshot: networking area]

The Storage view displays similar information for the disks, shows the details of each one, and lets you review the LVM configuration of the server and perform different storage-related operations.

[Screenshot: storage view]

The Journal view lets you review the systemd journal. You can go back seven days into the log and filter by the type of message.

[Screenshot: journal view]

After using Cockpit for some time in my lab I can say that I genuinely love it. The interface is pretty fast, it uses systemd for everything and it does not interfere with my console-based admin habits; on the contrary, it is a great complement to them.

Juanma.

One of the pillars of my personal OpenStack ecosystem is DevStack. For those of you new to the OpenStack world, DevStack is a tool, basically a shell script, that deploys a full OpenStack environment, installing every required dependency. It is widely used amongst beginners and the development community.

My DevStack instance is a Fedora 20 virtual machine with 2 vCPUs and 2GB of memory; I use it mostly for testing and development. I have set up a small vSphere environment in Fusion with a vCSA virtual machine and a nested ESXi, both version 5.5. The DNS and NFS services are provided by my management VM, which is another Fedora 20 VM with just 512MB of RAM.

My local.conf file is no rocket science, but maybe it can be of help to anyone wanting to quickly set up a DevStack+vSphere development environment.
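
For reference, a minimal sketch of what such a local.conf could look like when pointing DevStack at a vCenter. All addresses, credentials and the cluster name below are placeholders, not my actual values:

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Use the VMware vSphere driver for Nova (values below are placeholders)
VIRT_DRIVER=vsphere
VMWAREAPI_IP=192.168.1.10
VMWAREAPI_USER=administrator@vsphere.local
VMWAREAPI_PASSWORD=secret
VMWAREAPI_CLUSTER=Cluster01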

Juanma.

 

Welcome to the third post of my series about OpenStack. In the first and second posts we saw in detail how to prepare the basic network infrastructure of our future OpenStack cloud using VMware NSX. In this third one we are going to install and configure the KVM compute hosts and the shared storage of the lab.

KVM setup

Create and install two CentOS 6.4 virtual machines with 2 vCPU, 2 GB of RAM, 2 network interfaces (E1000) and one 16GB disk. For the partitioning schema I have used the following one:

  • sda1 – 512MB – /boot
  • sda2 – Rest of the disk – LVM PV
    • lv_root – 13.5GB – /
    • lv_swap – 2GB – swap

Mark the Base and Standard groups to be installed and leave the rest unchecked. Set the hostname during the installation and leave the networking configuration with the default values. Please bear in mind that you will need a DHCP server on your network; in my case I'm using the one that comes with VMware Fusion. If you don't have one you will have to set a temporary IP address here in order to be able to install the KVM software. Once the installation is done, reboot your virtual machine and open a root SSH session to proceed with the rest of the configuration tasks.

Disable SELinux with the setenforce command, and also modify the SELinux config to disable it during OS boot. I do not recommend disabling SELinux in a production environment, but for a lab it will simplify things.

setenforce 0
cp /etc/selinux/config /etc/selinux/config.orig
sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Check that hardware virtualization support is activated.

egrep -i 'vmx|svm' /proc/cpuinfo

Install KVM packages.

yum install kvm libvirt python-virtinst qemu-kvm

After installing a ton of dependencies, and if nothing failed, enable and start the libvirtd service.

[root@kvm1 ~]# chkconfig libvirtd on
[root@kvm1 ~]# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]
[root@kvm1 ~]#

Verify that KVM has been correctly installed and it’s loaded and running on the system.

[root@kvm1 ~]# lsmod | grep kvm
kvm_intel              53484  0
kvm                   316506  1 kvm_intel
[root@kvm1 ~]#
[root@kvm1 ~]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

[root@kvm1 ~]#

Hypervisor networking setup

With KVM software installed and ready we can now move on to configure the networking for both hosts and integrate them into our NSX deployment.

Disable Network Manager for both interfaces. Edit the /etc/sysconfig/network-scripts/ifcfg-ethX files and change the NM_CONTROLLED value to no.
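
One way to do it from the shell, assuming the NM_CONTROLLED line is already present in both files (adjust the interface names to your own):

sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth1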

By default libvirt creates the virbr0 network bridge, which is used by the virtual machines to access the external network through a NAT connection. We need to disable it to ensure that the bridge components of Open vSwitch can load without any errors.

virsh net-destroy default
virsh net-autostart --disable default

Install Open vSwitch

Copy the NSX OVS package to the KVM host and extract it.

[root@kvm1 nsx-ovs]# tar vxfz nsx-ovs-2.1.0-build33849-rhel64_x86_64.tar.gz
./
./nicira-flow-stats-exporter/
./nicira-flow-stats-exporter/nicira-flow-stats-exporter-4.1.0.32691-1.x86_64.rpm
./tcpdump-ovs-4.4.0.ovs2.1.0.33849-1.x86_64.rpm
./kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
./openvswitch-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-debuginfo-2.1.0.33849-1.x86_64.rpm
[root@kvm1 nsx-ovs]#

Install Open vSwitch packages.

rpm -Uvh kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
rpm -Uvh openvswitch-2.1.0.33849-1.x86_64.rpm

Verify that Open vSwitch service is enabled and start it.

[root@kvm1 ~]# chkconfig --list openvswitch
openvswitch     0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@kvm1 ~]#
[root@kvm1 ~]#
[root@kvm1 ~]# service openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db           [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
[root@kvm1 ~]#

Install the nicira-ovs-hypervisor-node package; this utility provides the infrastructure for distributed routing on the hypervisor. During the installation the integration bridge br-int and the OVS SSL credentials will be created.

[root@kvm1 ~]# rpm -Uvh nicira-ovs-hypervisor-node*.rpm
Preparing...                ########################################### [100%]
   1:nicira-ovs-hypervisor-n########################################### [ 50%]
   2:nicira-ovs-hypervisor-n########################################### [100%]
Running '/usr/sbin/ovs-integrate init'
successfully generated self-signed certificates..
successfully created the integration bridge..
[root@kvm1 ~]#

There are other packages, like nicira-flow-stats-exporter and tcpdump-ovs, but they are not needed for OVS to function. We can now proceed with the OVS configuration.

Configure Open vSwitch

The first step is to create OVS bridges for each network interface card of the hypervisor.

ovs-vsctl add-br br0
ovs-vsctl br-set-external-id br0 bridge-id br0
ovs-vsctl set Bridge br0 fail-mode=standalone
ovs-vsctl add-port br0 eth0

If you were logged in through an SSH session you have probably noticed that your connection is lost; this is because the br0 interface has taken over the networking of the host and it doesn't have an IP address configured. To solve this, access the host console, edit the ifcfg-eth0 file and modify it to look like this.

DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
NAME=eth0
HOTPLUG=no
HWADDR=00:0C:29:CA:34:FE
NM_CONTROLLED=no

Next create and edit ifcfg-br0 file.

DEVICE=br0
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.82.42
NETMASK=255.255.255.0
GATEWAY=192.168.82.2
IPV6INIT=no
HOTPLUG=no

Restart the network service and test the connection.

service network restart

Repeat all the above steps for the second network interface.
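
For reference, the equivalent bridge creation commands for the second interface would look like this (interface and bridge names assumed from the ovs-vsctl show output below):

ovs-vsctl add-br br1
ovs-vsctl br-set-external-id br1 bridge-id br1
ovs-vsctl set Bridge br1 fail-mode=standalone
ovs-vsctl add-port br1 eth1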

Finally configure NSX Controller Cluster as manager in Open vSwitch.

ovs-vsctl set-manager ssl:192.168.82.44

Execute ovs-vsctl show command to review OVS current configuration.

[root@kvm1 ~]# ovs-vsctl show
383c3f17-5c53-4992-be8e-6e9b195e51d8
    Manager "ssl:192.168.82.44"
    Bridge "br1"
        fail_mode: standalone
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge "br0"
        fail_mode: standalone
        Port "eth0"
            Interface "eth0"
        Port "br0"
            Interface "br0"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.0.33849"
[root@kvm1 ~]#

Register OVS in NSX Controller

With our OVS instance installed and running we can now inform the NSX Controller of its existence, either via the NVP API or NSX Manager; in our case we will use the latter.

Log into NSX Manager as the admin user and go to the Dashboard. From the Summary of Transport Components table click Add in the Hypervisors row. Verify that Hypervisor is selected as the transport node type and move to the Basics screen. Enter a name for the hypervisor, usually the hostname of the server.

[Screenshot: NSX Manager – hypervisor transport node, Basics screen]

In Properties enter:

  • Integration bridge ID, for us is br-int.
  • Admin Status Enabled –  Enabled by default.

[Screenshot: NSX Manager – hypervisor Properties screen]

For the Credential screen we are going to need the SSL certificate that was created along with the integration bridge during the NSX OVS installation. The PEM certificate file is ovsclient-cert.pem and it is located in the /etc/openvswitch directory.

[root@kvm1 ~]# cat /etc/openvswitch/ovsclient-cert.pem
-----BEGIN CERTIFICATE-----
MIIDwjCCAqoCCQDZUob5H9tzvjANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAkNBMRIwEAYDVQQHEwlQYWxvIEFsdG8xFTATBgNVBAoTDE9w
ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG
A1UEAxMxb3ZzY2xpZW50IGlkOjA4NWQwMTFiLTJiMzYtNGQ5My1iMWIyLWJjODIz
MDczYzE0YzAeFw0xNDA1MDQyMjE3NTVaFw0yNDA1MDEyMjE3NTVaMIGiMQswCQYD
VQQGEwJVUzELMAkGA1UECBMCQ0ExEjAQBgNVBAcTCVBhbG8gQWx0bzEVMBMGA1UE
ChMMT3BlbiB2U3dpdGNoMR8wHQYDVQQLExZPcGVuIHZTd2l0Y2ggY2VydGlmaWVy
MTowOAYDVQQDEzFvdnNjbGllbnQgaWQ6MDg1ZDAxMWItMmIzNi00ZDkzLWIxYjIt
YmM4MjMwNzNjMTRjMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwgqT
hvG72vat0hXvTuukZOs6fM4CAphmN34l4415q/vReSM3upN+vOLoyGJ/8VJGdNXH
3Bsu6V58f6o8EPbfnhgqf2rCP0r5kiiN5SivsAWI5//ltV1GDFO4+8VpYAwn4Cbd
sNOuFEM1mKOR//IL3Riy9Nkh16wfLy44KEE9745uhZ9gW96AkSkBx1ajjUiApnjL
M6L2w/E4sxNeMDLf/VYlc/SuEg775D9iaPpA1haJt8FFw1g769FsR9Q0Fl+CoT7f
ggBZTKwwcoU+5Ew1mNlPV0Hm8vpFcXbtMBeuT9Fe7k4bC+UuQPaSnbPpbZMpx/wd
fHOdJpemcog/0EjOJQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQDBPNM/uI25ofIl
AgCpG42UD3M/RZRPX0/6Be4jCTaAuET6J8wAKA4k1btA6UPt0M98N6o4y60Du2D+
ZwFOa2LSTXZB43X70XnDKxapDVqmhKtrmX2hL1NRD9RjTTx3TOXMOlUiUizRB1+L
d8MNhX3qrvOLeFOUnxm6C5RnI/HdqvS9TyxybX+Qfqit9Q66hbjAt9RribXSw21G
Ix8d9S4NyDO91mDstIcXeNRUk8K64gEQSKxQO9QKmVAQBIlYAJVVXzfkXyHEiKTe
0zIsW/oknwWeQMD9xSrKomY/5+LCuDM1jT5LcL8vxmrEVIrUjNqt4nQsT4mjooG+
XYf2HdXj
-----END CERTIFICATE-----
[root@kvm1 ~]#

Copy the contents of the file and paste them in the Security Certificate text box.

[Screenshot: NSX Manager – hypervisor Credential screen with the security certificate]

Finally add the Transport Connector with the values:

  • Transport Type: STT
  • Transport Zone UUID: The transport zone, in my case the UUID corresponding to vlab-transport-zone.
  • IP Address: The address of the br0 interface of the host.

[Screenshot: NSX Manager – hypervisor Transport Connector screen]

Click Save & View and check that Management and OpenFlow connections are up.

[Screenshot: NSX Manager – hypervisor with Management and OpenFlow connections up]

GlusterFS setup

I chose GlusterFS for my OpenStack lab for two reasons: I have used it in the past, so this has been a good opportunity for me to refresh and enhance my rusty Gluster skills, and it is supported as a storage backend for Glance in OpenStack. Instead of going with CentOS again, this time I chose Fedora 20 for my Gluster VM. A real-world GlusterFS cluster will have at least two nodes, but for our lab one will be enough.

Create a Fedora x64 virtual machine with 1 vCPU, 1GB of RAM and one network interface. For the storage part use the following:

  • System disk: 16GB
  • Data disk: 72GB

Use the same partitioning schema as the KVM hosts for the system disk. Choose a Minimal installation and add the Standard group. Configure the hostname and the IP address of the node, set the root password and create a user as administrator; I'm using my personal user jrey here.

Disable SELinux.

sudo setenforce 0
sudo cp /etc/selinux/config /etc/selinux/config.orig
sudo sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Stop and disable firewalld.

sudo systemctl disable firewalld.service
sudo systemctl stop firewalld.service

Install GlusterFS packages. There is no need to add any additional yum repository since Gluster is included in the standard Fedora repos.

sudo yum install glusterfs-server

Enable Gluster services.

sudo systemctl enable glusterd.service
sudo systemctl enable glusterfsd.service

Start Gluster services.

[jrey@gluster ~]$ sudo systemctl start glusterd.service
[jrey@gluster ~]$ sudo systemctl start glusterfsd.service
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterd.service
glusterd.service - GlusterFS an clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2014-04-28 17:17:35 CEST; 20s ago
  Process: 1496 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1497 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─1497 /usr/sbin/glusterd -p /run/glusterd.pid

Apr 28 17:17:35 gluster.vlab.local systemd[1]: Started GlusterFS an clustered file-system server.
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterfsd.service
glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled)
   Active: active (exited) since Mon 2014-04-28 17:17:45 CEST; 15s ago
  Process: 1515 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1515 (code=exited, status=0/SUCCESS)

Apr 28 17:17:45 gluster.vlab.local systemd[1]: Starting GlusterFS brick processes (stopping only)...
Apr 28 17:17:45 gluster.vlab.local systemd[1]: Started GlusterFS brick processes (stopping only).
[jrey@gluster ~]$

Since we are running a one-node cluster there is no need to add any node to the trusted pool. In case you decide to run a multi-node environment, you can set up the pool by running the following command on each node of the cluster.

gluster peer probe <IP_ADDRESS_OF_OTHER_NODE>

Edit the data disk with fdisk and create a single partition. Format the partition as XFS.
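
If you prefer a non-interactive alternative to fdisk, something along these lines would create the partition (assuming the data disk is /dev/sdb; the mkfs.xfs command below then creates the filesystem):

sudo parted -s /dev/sdb mklabel msdos mkpart primary xfs 1MiB 100%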

[jrey@gluster ~]$ sudo mkfs.xfs -i size=512 /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=4718528 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=18874112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=9215, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[jrey@gluster ~]$

Create the mount point for the new filesystem, mount the partition and edit /etc/fstab accordingly to make it persistent to reboots.

sudo mkdir -p /data/glance/
sudo mount /dev/sdb1 /data/glance
sudo mkdir -p /data/glance/brick1
echo "/dev/sdb1 /data/glance xfs defaults 0 0" | sudo tee -a /etc/fstab

Create the Gluster volume and start it.

[jrey@gluster ~]$ sudo gluster volume create gv0 gluster.vlab.local:/data/glance/brick1
volume create: gv0: success: please start the volume to access data
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume start gv0
volume start: gv0: success
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: d1ad2d00-6210-4856-a5eb-26ddcba77a70
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster.vlab.local:/data/glance/brick1
[jrey@gluster ~]$

The configuration of the Gluster node is finished. In the next article we will install and configure OpenStack using the different components detailed during current and previous parts of the series.

Please feel free to add any comment or correction.

Juanma.

I became aware of this issue last week after installing a Fedora 18 virtual machine on Fusion 5. The installation of the Tools went as expected, but when the install process launched the vmware-config-tools.pl script I got the typical error of not being able to find the Linux kernel headers.

Searching for a valid kernel header path...
The path "" is not a valid path to the 3.7.2-204.fc18.x86_64 kernel headers.
Would you like to change it? [yes]

I installed the kernel headers and devel packages with yum.

[root@fed18 ~]# yum install kernel-headers kernel-devel

Fired up the configuration script again and got the same error. The problem is that since kernel 3.7 all the kernel header files have been relocated to a new path, and because of that the script is not able to find them. To solve it, just create a symlink of the version.h file from the new location to the old one.

[root@fed18 src]# ln -s /usr/src/kernels/3.7.2-204.fc18.x86_64/include/generated/uapi/linux/version.h /lib/modules/3.7.2-204.fc18.x86_64/build/include/linux/

With the problem fixed I launched the config script again and the tools finally got configured without problems.

[root@fed18 ~]# vmware-config-tools.pl 
Initializing...

Making sure services for VMware Tools are stopped.
Stopping Thinprint services in the virtual machine:
 Stopping Virtual Printing daemon: done
Stopping vmware-tools (via systemctl): [ OK ]

The VMware FileSystem Sync Driver (vmsync) allows external third-party backup 
software that is integrated with vSphere to create backups of the virtual 
machine. Do you wish to enable this feature? [no]

Before you can compile modules, you need to have the following installed...
make
gcc
kernel headers of the running kernel

Searching for GCC...
Detected GCC binary at "/bin/gcc".
The path "/bin/gcc" appears to be a valid path to the gcc binary.
Would you like to change it? [no]

Searching for a valid kernel header path...
Detected the kernel headers at 
"/lib/modules/3.7.2-204.fc18.x86_64/build/include".
The path "/lib/modules/3.7.2-204.fc18.x86_64/build/include" appears to be a 
valid path to the 3.7.2-204.fc18.x86_64 kernel headers.
Would you like to change it? [no]

Juanma.