Archives For Linux

Fedora 22 was released a few months ago and among many new features it came with a replacement for yum as package manager called dnf, or DaNdiFied YUM. Yes, yum is still around, but it is now considered legacy software. DNF will also become the default package manager for RHEL and CentOS in the near future, so it is for the best that you get familiar with it sooner rather than later.

DNF Commands

The first thing you need to understand about dnf is that many commands are basically still the same, but there are differences. Package management commands can be executed with almost the same syntax previously used with yum.

Search for a package.

[jrey@fed22-srv ~]$ sudo dnf search htop
Last metadata expiration check performed 1:25:54 ago on Mon Oct 5 23:47:45 2015.
=================================== N/S Matched: htop ====================================
htop.x86_64 : Interactive process viewer
php-lightopenid.noarch : PHP OpenID library
[jrey@fed22-srv ~]$

Install a package.

[jrey@fed22-srv ~]$ sudo dnf install htop

Remove a package.

[jrey@fed22-srv ~]$ sudo dnf remove htop

Get information about a package.

[jrey@fed22-srv ~]$ sudo dnf info htop
Last metadata expiration check performed 1:47:13 ago on Mon Oct 5 23:47:45 2015.
Available Packages
Name : htop
Arch : x86_64
Epoch : 0
Version : 1.0.3
Release : 4.fc22
Size : 91 k
Repo : fedora
Summary : Interactive process viewer
License : GPL+
Description : htop is an interactive text-mode process viewer for Linux, similar to
 : top(1).

[jrey@fed22-srv ~]$

Group and repository management commands are still the same as well.

[jrey@fed22-srv ~]$ sudo dnf repolist

Query the available repositories for the packages that provide a specific capability.

[jrey@fed22-srv ~]$ sudo dnf repoquery --whatprovides htop
Last metadata expiration check performed 1:54:52 ago on Mon Oct 5 23:47:45 2015.
[jrey@fed22-srv ~]$

dnf comes with some powerful capabilities, like history queries.

[jrey@fed22-srv ~]$ sudo dnf history list
Last metadata expiration check performed 1:43:14 ago on Mon Oct 5 23:47:45 2015.
ID | Command line | Date and time | Action(s) | Altered
 8 | erase htop | 2015-10-06 01:28 | Erase | 1
 7 | install htop -y | 2015-10-06 01:28 | Install | 1
 6 | remove htop | 2015-10-06 01:14 | Erase | 1
 5 | install htop | 2015-10-06 01:14 | Install | 1
 4 | install make gcc kernel- | 2015-09-30 16:21 | Install | 9
 3 | update | 2015-09-30 15:43 | I, U | 112
 2 | update | 2015-09-16 11:45 | I, O, U | 297
 1 | | 2015-09-16 10:59 | Install | 658 EE
[jrey@fed22-srv ~]$

This can be especially helpful if you need to roll back a change, like cleaning up dependencies after uninstalling a package or reinstalling a package.

[jrey@fed22-srv ~]$ sudo dnf history undo 8

You can also look for duplicated packages among the installed ones.

[jrey@fed22-srv ~]$ sudo dnf repoquery --duplicated
Last metadata expiration check performed 0:30:42 ago on Tue Oct 6 02:48:41 2015.
[jrey@fed22-srv ~]$

Retrieve all available packages providing a specific piece of software or capability.

[jrey@fed22-srv ~]$ sudo dnf repoquery --whatprovides curl
Last metadata expiration check performed 0:38:00 ago on Tue Oct 6 02:48:41 2015.
[jrey@fed22-srv ~]$

This is a very basic introduction to dnf capabilities, but hopefully it has given you an idea of how it works. My advice is to review the DNF documentation for all the details.

The Photon Connection

VMware Photon comes with tdnf (Tiny DNF); this is a development by VMware that provides compatible repository and package management capabilities. Not every dnf command is available but the basic ones are there.

Package installation and updates.

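The commands mirror their dnf counterparts; a quick sketch of what this looks like, assuming the htop package is available in the configured Photon repositories:

tdnf install htop
tdnf update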

Repository management.

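Again the syntax follows dnf; for example, listing the configured repositories:

tdnf repolist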

In the future if I find the time I’ll write a new post with some advanced examples of dnf commands. Comments are welcome.


FirewallD, or Dynamic Firewall Manager, is the replacement for the IPTables firewall in Red Hat Enterprise Linux. The main improvement over IPTables is the ability to make changes without the need to restart the whole firewall service.

FirewallD was first introduced in Fedora 18 and has been the default firewall mechanism for Fedora since then. Finally this year Red Hat decided to include it in RHEL 7, and of course it also made its way to the different RHEL clones like CentOS 7 and Scientific Linux 7.

Checking FirewallD service status

To get the basic status of the service simply use firewall-cmd --state.

[root@centos7 ~]# firewall-cmd --state
[root@centos7 ~]#

If you need a more detailed state of the service you can always use the systemctl command.

[root@centos7 ~]# systemctl status firewalld.service
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
   Active: active (running) since Wed 2014-11-19 06:47:42 EST; 32min ago
 Main PID: 873 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─873 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Nov 19 06:47:41 centos7.vlab.local systemd[1]: Starting firewalld - dynamic firewall daemon...
Nov 19 06:47:42 centos7.vlab.local systemd[1]: Started firewalld - dynamic firewall daemon.
[root@centos7 ~]#

To enable or disable FirewallD again use systemctl commands.

systemctl enable firewalld.service
systemctl disable firewalld.service

Managing firewall zones

FirewallD introduces the concept of zones; a zone is no more than a way to define the level of trust for a set of connections. A connection definition can only be part of one zone at a time, but zones can be grouped. There is a set of predefined zones:

  • Public – For use in public areas. Only selected incoming connections are accepted.
  • Drop – Any incoming network packets are dropped, there is no reply. Only outgoing network connections are possible.
  • Block – Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6. Only network connections initiated within this system are possible.
  • External – For use on external networks with masquerading enabled especially for routers. Only selected incoming connections are accepted.
  • DMZ – For computers in your DMZ network, with limited access to the internal network. Only selected incoming connections are accepted.
  • Work – For use in work areas. Only selected incoming connections are accepted.
  • Home – For use in home areas. Only selected incoming connections are accepted.
  • Trusted – All network connections are accepted.
  • Internal – For use on internal networks. Only selected incoming connections are accepted.

By default all interfaces are assigned to the public zone. Each zone is defined in its own XML file stored in /usr/lib/firewalld/zones. For example the public zone XML file looks like this.

[root@centos7 zones]# cat public.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
</zone>
[root@centos7 zones]#

Retrieve a simple list of the existing zones.

[root@centos7 ~]# firewall-cmd --get-zones
block dmz drop external home internal public trusted work
[root@centos7 ~]#

Get a detailed list of the same zones.

firewall-cmd --list-all-zones

Get the default zone.

[root@centos7 ~]# firewall-cmd --get-default-zone
[root@centos7 ~]#

Get the active zones.

[root@centos7 ~]# firewall-cmd --get-active-zones
  interfaces: eno16777736 virbr0
[root@centos7 ~]#

Get the details of a specific zone.

[root@centos7 zones]# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eno16777736 virbr0
  services: dhcpv6-client ssh
  masquerade: no
  rich rules:

[root@centos7 zones]#

Change the default zone.

firewall-cmd --set-default-zone=home

Interfaces and sources

Zones can be bound to a network interface and to a specific network address range, or source.

Assign an interface to a different zone; the first command assigns it temporarily and the second makes it permanent.

firewall-cmd --zone=home --change-interface=eth0
firewall-cmd --permanent --zone=home --change-interface=eth0

Retrieve the zone an interface is assigned to.

[root@centos7 zones]# firewall-cmd --get-zone-of-interface=eno16777736
[root@centos7 zones]#

Bind the work zone to a source.

firewall-cmd --permanent --zone=work --add-source=

List the sources assigned to a zone, in this case work.

[root@centos7 ~]# firewall-cmd --permanent --zone=work --list-sources
[root@centos7 ~]#


Services

FirewallD can assign services permanently to a zone, for example assigning the http service to the dmz zone. A service can also be assigned to multiple zones.

[root@centos7 ~]# firewall-cmd --permanent --zone=dmz --add-service=http
[root@centos7 ~]# firewall-cmd --reload
[root@centos7 ~]#

List the services assigned to a given zone.

[root@centos7 ~]# firewall-cmd --list-services --zone=dmz
http ssh
[root@centos7 ~]#

Other operations

Besides zone, interface and service management, FirewallD, like other firewalls, can perform several network-related operations such as masquerading, setting direct rules and managing ports.

Masquerading and port forwarding

Add masquerading to a zone.

firewall-cmd --zone=external --add-masquerade

Query if masquerading is enabled in a zone.

[root@centos7 ~]# firewall-cmd --zone=external --query-masquerade
[root@centos7 ~]#

You can also set port redirection. For example to forward traffic originally intended for port 80/tcp to port 8080/tcp.

firewall-cmd --zone=external --add-forward-port=port=80:proto=tcp:toport=8080

A destination address can also be added to the above command.

firewall-cmd --zone=external --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=

Set direct rules

Create a firewall rule for port 8080/tcp.

firewall-cmd --direct --add-rule ipv4 filter INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT

Port management

Allow a port temporarily in a zone.

firewall-cmd --zone=dmz --add-port=8080/tcp
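The command above lasts until the next reload or restart; following the same pattern used earlier for services, the --permanent flag plus a reload should make it persistent:

firewall-cmd --permanent --zone=dmz --add-port=8080/tcp
firewall-cmd --reload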

Hopefully you found the post useful to start working with FirewallD. Comments are welcome.


Being used to having Cockpit in my Fedora 21 Server VMs I decided that having it also on my CentOS machines would be awesome; unfortunately I quickly found that Cockpit was not available in the CentOS repositories. Of course I knew that Cockpit comes installed and enabled by default in the CentOS 7 Atomic host image, so I figured that those packages had to be hidden in some Atomic-related repo.

After looking around a bit I finally found on GitHub the sig-atomic-buildscripts repository that belongs to the CentOS Project. This repository contains several scripts and files intended to build your own CentOS Atomic host, including virt7-testing.repo, the yum repository file needed for Cockpit.

Clone the GitHub repository.

git clone

Copy the virt7-testing.repo file to /etc/yum.repos.d and install Cockpit.

yum install cockpit

Enable Cockpit service.

[root@webtest ~]# systemctl enable cockpit.socket
ln -s '/usr/lib/systemd/system/cockpit.socket' '/etc/systemd/system/'
[root@webtest ~]#

Add Cockpit to the list of trusted services in FirewallD.

[root@webtest ~]# firewall-cmd --permanent --zone=public --add-service=cockpit
[root@webtest ~]#
[root@webtest ~]# firewall-cmd --reload
[root@webtest ~]#
[root@webtest ~]# firewall-cmd --list-services
cockpit dhcpv6-client ssh
[root@webtest ~]#

Start Cockpit socket.

systemctl start cockpit.socket

Do not try to access Cockpit yet; there is an issue with running Cockpit on stock CentOS/RHEL 7. To be able to start it we first need to modify the service file to disable SSL. Edit the file /usr/lib/systemd/system/cockpit.service and modify the ExecStart line to look like this.

ExecStart=/usr/libexec/cockpit-ws --no-tls
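One way to apply that change, as a sketch (back up the unit file first):

sed -i 's|^ExecStart=.*|ExecStart=/usr/libexec/cockpit-ws --no-tls|' /usr/lib/systemd/system/cockpit.service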

I know this procedure will invalidate Cockpit for a production environment in RHEL 7, at least for now, but this is for my lab environment and I can live with it.

Reload systemd.

systemctl daemon-reload

Restart Cockpit.

systemctl restart cockpit

Access Cockpit web interface, login as root and have fun :-)




Cockpit is a new web-based server manager to administer Linux servers. It provides system administrators with a user-friendly interface to manage their Linux servers, includes multi-server management capabilities and, more importantly, creates no interference or disconnection between the tasks done from the web and from the command line. This last feature is especially useful for those of us who still do most of our work in the shell.

By default Cockpit, in its stable version, comes installed and enabled in Fedora 21 Server. It can also be found in CentOS/RHEL 7 Atomic, Fedora 21 Atomic and Fedora 21 Cloud, and there are plans in the near future to support Arch Linux.

Let's review now some of the features of Cockpit. As said before, multiple servers can be managed from the same Cockpit instance.


Once you access one of the managed nodes it will present a general overview of the server with real-time charts of CPU, Memory, Disk I/O and Network Traffic.


On the left pane there is a series of actionable items that will give you access to the different subsystems of the node like Networking, Storage, User Accounts and even the status of the Docker containers running on the server, if the Docker service has been enabled.

System services view.


When a process is selected Cockpit will display its details.


The Networking area displays traffic for the selected interface and the journal of the networking system, and even allows you to create a new bond interface or a new bridge, or add a new VLAN tag to the interface.


The Storage view displays similar info for the disks, shows detailed information for each of them, lets you review the LVM configuration of the server and performs different storage-related operations.


The Journal view lets you review the systemd journal. You can go back seven days into the log and filter on the type of messages.


After using Cockpit for some time in my lab I can say that I genuinely love it; the interface is pretty fast, it uses systemd for everything and it does not interfere with my console-based admin habits, on the contrary it is a great complement to them.


Welcome to Part 4 of this series about OpenStack and VMware NSX. To do a quick review: in the first three parts we described the different VMware NSX components and concepts, how to install and configure them, and how to install and configure the KVM and GlusterFS nodes. In this fourth part of the series we will see how to deploy OpenStack in a three-node architecture and integrate it with our existing NSX installation.

If you remember the first post where I described the components of the lab, there were three OpenStack dedicated nodes:

  • Cloud controller node
  • Neutron networking node
  • Nova compute node

Instead of installing from scratch I decided to go with one of the OpenStack distributions: RDO. What is RDO and why did I choose it? RDO is a community distribution of OpenStack sponsored by Red Hat. Yes, I just said Red Hat, so please stop the eye rolling.

RDO is the upstream version of RHEL OpenStack Platform, the commercial version of OpenStack by Red Hat. During the last months I tried several flavors of OpenStack, and while I still think that installing from scratch is the best way to learn, in fact that is what I did for my first labs, RDO gives me the possibility to quickly create my testing labs. Also RHEL OpenStack Platform version 4, based on RDO, is supported with VMware NSX and I really couldn't resist trying it.

Installation prerequisites

Before proceeding with the installation there are some preparations we need to perform on the OpenStack nodes.

SSH key generation

Generate a new SSH key to be distributed later on the OpenStack nodes during the installation. Use ssh-keygen to generate the new key.
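For example, to generate a password-less RSA key pair in the default location:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa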

Neutron server preparation

On the Neutron node install the NSX Open vSwitch version as described in Part 3 for the KVM nodes; the network interface configuration is quite similar.

With the network interface configuration files properly set up, exit your SSH session and log into the VM console to create the OVS bridges like in the example below.

ovs-vsctl add-br br-ex
ovs-vsctl br-set-external-id br-ex bridge-id br-ex
ovs-vsctl set Bridge br-ex fail-mode=standalone
ovs-vsctl add-port br-ex eth0

OpenStack installation

RDO relies on packstack for the installation of its different components. Packstack is a tool that will install all the required software on the nodes based on an answer file. Enable the RDO and EPEL repos and install the openstack-packstack package.

yum install -y
yum install -y
yum install -y openstack-packstack

Once it is installed, generate a new answer file; we will use this file as a template for our installation.

packstack --gen-answer-file rdo_answers.txt

Edit the packstack answer file and modify the following entries, leaving the rest with their default values. It is important not to eliminate any entry or the packstack execution will fail.

Deactivate the services we do not want to deploy.
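As an illustration only, a Havana-era answer file toggles services with CONFIG_*_INSTALL keys similar to these; the exact key names may vary between releases:

CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_NAGIOS_INSTALL=n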


Nova settings.


And finally the Neutron settings. Don't set any L3 values since that part will be managed by NSX.


Launch OpenStack installation process.

packstack --answer-file rdo_answers.txt

The installation will take a while, so you'd better grab a cup of coffee and have a look at the output while the software installs on each of the three nodes. If everything goes as expected we should see a message similar to this at the end of the installation process.

 **** Installation completed successfully ******

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to
Please, find your login credentials stored in the keystonerc_admin in your home directory. 
 * Because of the kernel update the host requires reboot. 
 * Because of the kernel update the host requires reboot.
 * Because of the kernel update the host requires reboot.
 * The installation log file is available at: /var/tmp/packstack/20140617-001835-On5TCi/openstack-setup.log 
 * The generated manifests are available at: /var/tmp/packstack/20140617-001835-On5TCi/manifests 
[root@cloud-controller ~]#

Reboot the three nodes as instructed and proceed to the next step.

Configure Glance to use GlusterFS

RDO packstack cannot configure Glance to use GlusterFS as its storage backend during the installation, so it has to be configured afterwards. Fortunately the necessary steps are documented on the RDO site.

Stop Glance services.

service openstack-glance-registry stop
service openstack-glance-api stop

Install the required gluster packages on the controller node.

yum install glusterfs-fuse glusterfs

Mount the GlusterFS share and set the ownership and permissions for the glance user.

mount -t glusterfs gluster.vlab.local:gv0 /var/lib/glance/images
chown -R glance:glance /var/lib/glance/images

Start Glance services.

service openstack-glance-registry start
service openstack-glance-api start

With the installation finished, the OpenStack Horizon dashboard should be available at http://cloud_controller_fqdn/dashboard. Log in with the user admin; the password for this user can be found in the file /root/keystonerc_admin on the cloud controller node.

[root@cloud-controller ~]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=cd0ed5b5f251450f
export OS_AUTH_URL=
export PS1='[\u@\h \W(keystone_admin)]\$ '
[root@cloud-controller ~]#

If the login fails with an unexpected error, check that the firewall is deactivated on all three nodes and that all services are up and running; in some of my deployments the Neutron server did not start after a reboot and I had to start it manually.

Once logged into Horizon, navigate to Admin -> Hypervisor and check that the KVM hypervisor is properly registered.


Configure the NSX integration

At this point we have a working OpenStack installation with Neutron using the Open vSwitch plugin; now we will proceed to integrate our shiny OpenStack cloud with NSX.

Install NSX Neutron plugin

VMware provides a set of RPM packages containing the NSX plugin and a VMware-sanctioned version of Neutron; however I found that these packages were older than my Havana installation and I didn't want to break any dependencies and spend hours trying to fix my installation.

A tar file containing all the source for both the plugin and Neutron itself is also available, and instructions on how to compile and install it are provided in the NSX documentation. During my first trials I took this path, but this time I decided to use the upstream plugin instead since it was available in the RDO repositories.

yum install openstack-neutron-nicira

Configure NSX plugin

Register the Neutron server as a transport node on the NSX Controller Cluster.

ovs-vsctl set-manager ssl:

Stop neutron services.

service neutron-server stop

Edit the /etc/neutron/neutron.conf file and set the core_plugin value to neutron.plugins.nicira.NeutronPlugin.NvpPluginV2.
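The resulting line in neutron.conf should simply read:

core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2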

Configure the nvp.ini file accordingly; this file can be found in /etc/neutron/plugins/nicira.

Set NSX admin user and password.

nvp_user = admin
nvp_password = admin

Configure NSX controllers IP addresses.

nvp_controllers =

Set the default Transport Zone UUID and the L3 and L2 gateway services UUIDs; these values can be retrieved from the NSX Manager web UI.

default_tz_uuid = b948fd35-5737-4a30-8741-43134771d40c
default_l3_gw_service_uuid = adee048c-3776-4bd2-ade1-42ab5c90bf9e

Configure metadata for Nova instances: set metadata_dhcp_host_route to False in the [DEFAULT] section. In the [nvp] section set the metadata mode to access_network.

enable_metadata_access_network = True
metadata_mode = access_network

Create a [database] section and configure the connection to the Neutron MySQL database; the data can be found in the neutron.conf file.

connection = mysql://neutron:ac2191a8661b4b66@

Finally, before starting the Neutron services, check nvp.ini with the neutron-check-nvp-config command. You should get something like this.

[root@neutron ~]# neutron-check-nvp-config /etc/neutron/plugins/nicira/nvp.ini
----------------------- Database Options -----------------------
        connection: mysql://neutron:ac2191a8661b4b66@
        retry_interval: 10
        max_retries: 10
-----------------------    NVP Options   -----------------------
        NVP Generation Timeout -1
        Number of concurrent connections to each controller 10
        max_lp_per_bridged_ls: 5000
        max_lp_per_overlay_ls: 256
-----------------------  Cluster Options -----------------------
        requested_timeout: 30
        retries: 2
        redirects: 2
        http_timeout: 10
Number of controllers found: 1
        Controller endpoint:
                Gateway(L3GatewayServiceConfig) uuid: adee048c-3776-4bd2-ade1-42ab5c90bf9e
        Transport zones: [u'b948fd35-5737-4a30-8741-43134771d40c']
[root@neutron ~]#

Start the Neutron services.

service neutron-server start

Create a network with the neutron command line to test that everything is working as expected.

[root@cloud-controller ~(keystone_admin)]# neutron net-create nsx-test-net
Created a new network:
| Field                 | Value                                |
| admin_state_up        | True                                 |
| id                    | 24f3b23f-a938-40e7-b026-14c8fb77ff34 |
| name                  | nsx-test-net                         |
| port_security_enabled | True                                 |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tenant_id             | 4d9fbabd4c9d4fa4a2185ff7559ae4e8     |
[root@cloud-controller ~(keystone_admin)]#
[root@cloud-controller ~(keystone_admin)]# neutron net-list
| id                                   | name         | subnets |
| 24f3b23f-a938-40e7-b026-14c8fb77ff34 | nsx-test-net |         |
[root@cloud-controller ~(keystone_admin)]#

Access NSX Manager web interface, navigate to Logical Switches and confirm that a new logical switch with the same name and UUID as the new OpenStack network has been created.


Congratulations! We have successfully deployed a distributed installation of OpenStack with KVM as the underlying hypervisor, integrated with VMware NSX state-of-the-art network virtualization software. In future posts outside of this four-article series we will discuss some tips and other parts of OpenStack and NSX. Courteous comments are welcome.


Welcome to the third post of my series about OpenStack. In the first and second posts we saw in detail how to prepare the basic network infrastructure of our future OpenStack cloud using VMware NSX. In this third one we are going to install and configure the KVM compute host and the shared storage of the lab.

KVM setup

Create and install two CentOS 6.4 virtual machines with 2 vCPUs, 2 GB of RAM, 2 network interfaces (E1000) and one 16GB disk. For the partitioning schema I have used the following:

  • sda1 – 512MB – /boot
  • sda2 – Rest of the disk – LVM PV
    • lv_root – 13.5GB – /
    • lv_swap – 2GB – swap

Mark the Base and Standard groups to be installed and leave the rest unchecked. Set the hostname during the installation and leave the networking configuration with the default values. Please bear in mind that you will need to have a DHCP server on your network; in my case I'm using the one that comes with VMware Fusion. If you don't have one then you will have to set a temporary IP address here in order to be able to install the KVM software. Once the installation is done reboot your virtual machine and open a root SSH session to proceed with the rest of the configuration tasks.

Disable SELinux with the setenforce command, and also modify the SELinux config to disable it during OS boot. I do not recommend disabling SELinux in a production environment, but for a lab it will simplify things.

setenforce 0
cp /etc/selinux/config /etc/selinux/config.orig
sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Check that hardware virtualization support is activated.

egrep -i 'vmx|svm' /proc/cpuinfo

Install KVM packages.

yum install kvm libvirt python-virtinst qemu-kvm

After installing a ton of dependencies, and if nothing failed, enable and start the libvirtd service.

[root@kvm1 ~]# chkconfig libvirtd on
[root@kvm1 ~]# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]
[root@kvm1 ~]#

Verify that KVM has been correctly installed and is loaded and running on the system.

[root@kvm1 ~]# lsmod | grep kvm
kvm_intel              53484  0
kvm                   316506  1 kvm_intel
[root@kvm1 ~]#
[root@kvm1 ~]# virsh -c qemu:///system list
 Id    Name                           State

[root@kvm1 ~]#

Hypervisor networking setup

With KVM software installed and ready we can now move on to configure the networking for both hosts and integrate them into our NSX deployment.

Disable NetworkManager for both interfaces. Edit the /etc/sysconfig/network-scripts/ifcfg-ethX files and change the NM_CONTROLLED value to no.
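For example, with sed, assuming the NM_CONTROLLED line already exists in the file:

sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth0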

By default libvirt creates the virbr0 network bridge to be used by the virtual machines to access the external network through a NAT connection. We need to disable it to ensure that the bridge components of Open vSwitch can load without any errors.

virsh net-destroy default
virsh net-autostart --disable default

Install Open vSwitch

Copy the NSX OVS package to the KVM host and extract it.

[root@kvm1 nsx-ovs]# tar vxfz nsx-ovs-2.1.0-build33849-rhel64_x86_64.tar.gz
[root@kvm1 nsx-ovs]#

Install Open vSwitch packages.

rpm -Uvh kmod-openvswitch-
rpm -Uvh openvswitch-

Verify that Open vSwitch service is enabled and start it.

[root@kvm1 ~]# chkconfig --list openvswitch
openvswitch     0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@kvm1 ~]#
[root@kvm1 ~]#
[root@kvm1 ~]# service openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db           [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
[root@kvm1 ~]#

Install the nicira-ovs-hypervisor-node package; this utility provides the infrastructure for distributed routing on the hypervisor. During the installation the integration bridge br-int and the OVS SSL credentials will be created.

[root@kvm1 ~]# rpm -Uvh nicira-ovs-hypervisor-node*.rpm
Preparing...                ########################################### [100%]
   1:nicira-ovs-hypervisor-n########################################### [ 50%]
   2:nicira-ovs-hypervisor-n########################################### [100%]
Running '/usr/sbin/ovs-integrate init'
successfully generated self-signed certificates..
successfully created the integration bridge..
[root@kvm1 ~]#

There are other packages like nicira-flow-stats-exporter and tcpdump-ovs but they are not needed for OVS to function. We can proceed now with the OVS configuration.

Configure Open vSwitch

The first step is to create an OVS bridge for each network interface card of the hypervisor.

ovs-vsctl add-br br0
ovs-vsctl br-set-external-id br0 bridge-id br0
ovs-vsctl set Bridge br0 fail-mode=standalone
ovs-vsctl add-port br0 eth0

If you were logged in via an SSH session you have probably noticed that your connection is lost; this is because the br0 interface has taken control of the networking of the host and it doesn't have an IP address configured. To solve this, access the host console and edit the ifcfg-eth0 file to look like this.
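As a minimal sketch, an ifcfg-eth0 for this setup could look like the following, with the IP configuration removed from the physical NIC:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no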


Next create and edit the ifcfg-br0 file.
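Again as a sketch, with placeholder addresses to adapt to your network:

DEVICE=br0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
NM_CONTROLLED=no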


Restart the network service and test the connection.

service network restart

Repeat all the above steps for the second network interface.

Finally configure the NSX Controller Cluster as the manager in Open vSwitch.

ovs-vsctl set-manager ssl:

Execute the ovs-vsctl show command to review the current OVS configuration.

[root@kvm1 ~]# ovs-vsctl show
    Manager "ssl:"
    Bridge "br1"
        fail_mode: standalone
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge "br0"
        fail_mode: standalone
        Port "eth0"
            Interface "eth0"
        Port "br0"
            Interface "br0"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: ""
[root@kvm1 ~]#

Register OVS in NSX Controller

With our OVS instance installed and running we can now inform the NSX Controller of its existence, either via the NVP API or NSX Manager; in our case we will use the latter.

Log into NSX Manager as the admin user and go to the Dashboard; in the Summary of Transport Components table click Add in the Hypervisors row. Verify that Hypervisor is selected as the transport node type and move to the Basics screen. Enter a name for the hypervisor, usually the hostname of the server.


In Properties enter:

  • Integration bridge ID, for us it is br-int.
  • Admin Status Enabled – Enabled by default.


For the Credential screen we are going to need the SSL certificate that was created along with the integration bridge during the NSX OVS installation. The PEM certificate file is ovsclient-cert.pem and it is in the /etc/openvswitch directory.

[root@kvm1 ~]# cat /etc/openvswitch/ovsclient-cert.pem
[root@kvm1 ~]#

Copy the contents of the file and paste them in the Security Certificate text box.


Finally add the Transport Connector with the values:

  • Transport Type: STT
  • Transport Zone UUID: The transport zone, in my case the UUID corresponding to vlab-transport-zone.
  • IP Address – The address of the br0 interface of the host.


Click Save & View and check that Management and OpenFlow connections are up.


GlusterFS setup

I chose GlusterFS for my OpenStack lab for two reasons. I have used it in the past, so this has been a good opportunity to refresh and enhance my rusty gluster skills, and it's supported as a storage backend for Glance in OpenStack. Instead of going with CentOS again, this time I chose Fedora 20 for my gluster VM; a real-world GlusterFS cluster will have at least two nodes, but for our lab one will be enough.

Create a Fedora x64 virtual machine with 1 vCPU, 1GB of RAM and one network interface. For the storage part use the following:

  • System disk: 16GB
  • Data disk: 72GB

Use the same partitioning schema as the KVM hosts for the system disk. Choose a Minimal installation and add the Standard group. Configure the hostname and the IP address of the node, set the root password and create a user as administrator; I'm using here my personal user jrey.

Disable SELinux.

sudo setenforce 0
sudo cp /etc/selinux/config /etc/selinux/config.orig
sudo sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Stop and disable firewalld.

sudo systemctl disable firewalld.service
sudo systemctl stop firewalld.service

Install the GlusterFS packages. There is no need to add any additional yum repository since Gluster is included in the standard Fedora repos.

sudo yum install glusterfs-server

Enable Gluster services.

sudo systemctl enable glusterd.service
sudo systemctl enable glusterfsd.service

Start Gluster services.

[jrey@gluster ~]$ sudo systemctl start glusterd.service
[jrey@gluster ~]$ sudo systemctl start glusterfsd.service
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterd.service
glusterd.service - GlusterFS an clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2014-04-28 17:17:35 CEST; 20s ago
  Process: 1496 ExecStart=/usr/sbin/glusterd -p /run/ (code=exited, status=0/SUCCESS)
 Main PID: 1497 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─1497 /usr/sbin/glusterd -p /run/

Apr 28 17:17:35 gluster.vlab.local systemd[1]: Started GlusterFS an clustered file-system server.
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterfsd.service
glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled)
   Active: active (exited) since Mon 2014-04-28 17:17:45 CEST; 15s ago
  Process: 1515 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1515 (code=exited, status=0/SUCCESS)

Apr 28 17:17:45 gluster.vlab.local systemd[1]: Starting GlusterFS brick processes (stopping only)...
Apr 28 17:17:45 gluster.vlab.local systemd[1]: Started GlusterFS brick processes (stopping only).
[jrey@gluster ~]$

Since we are running a one-node cluster there is no need to add any node to the trusted pool. In case you decide to run a multi-node environment, you can set up the pool by running the following command on each node of the cluster.

gluster peer probe <IP_ADDRESS_OF_OTHER_NODE>

Edit the data disk with fdisk and create a single partition, then format the partition as XFS.
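If you prefer to script that step, parted can create the partition non-interactively; a sketch, assuming /dev/sdb is the data disk:

sudo parted -s /dev/sdb mklabel msdos
sudo parted -s /dev/sdb mkpart primary xfs 1MiB 100%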

[jrey@gluster ~]$ sudo mkfs.xfs -i size=512 /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=4718528 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=18874112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=9215, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[jrey@gluster ~]$

Create the mount point for the new filesystem, mount the partition and edit /etc/fstab accordingly to make it persistent across reboots.

sudo mkdir -p /data/glance/
sudo mount /dev/sdb1 /data/glance
sudo mkdir -p /data/glance/brick1
echo "/dev/sdb1 /data/glance xfs defaults 0 0" | sudo tee -a /etc/fstab

Create the Gluster volume and start it.

[jrey@gluster ~]$ sudo gluster volume create gv0 gluster.vlab.local:/data/glance/brick1
volume create: gv0: success: please start the volume to access data
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume start gv0
volume start: gv0: success
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: d1ad2d00-6210-4856-a5eb-26ddcba77a70
Status: Started
Number of Bricks: 1
Transport-type: tcp
Brick1: gluster.vlab.local:/data/glance/brick1
[jrey@gluster ~]$

The configuration of the Gluster node is finished. In the next article we will install and configure OpenStack using the different components detailed in the current and previous parts of the series.

Please feel free to add any comment or correction.


If you follow me on Twitter or Google+ you have probably seen an increased number of tweets and posts about OpenStack, DevStack, KVM and other Linux-related topics. It's no secret that I am a *nix guy; however it wasn't until last year that I really discovered OpenStack. Oh yes, I knew about it, had read a ton of articles and watched some videos on YouTube, but I never had the opportunity to actually play with it until I sat in a Hands-on Lab about OpenStack and vSphere during VMworld in Barcelona last October. After VMworld I started a personal project to learn as much as possible about OpenStack, using some labs with KVM and vSphere to try to achieve a decent level of proficiency. Finally this year I was able to ramp up with NSX and decided to build a new lab with OpenStack, KVM and NSX and document my progress here in my blog. So without further ado, here it is, my first series of posts about OpenStack and NSX.

During this series we will see how to deploy OpenStack with KVM as the underlying hypervisor and VMware NSX for the networking part. I intend to create a fairly comprehensive guide here for my personal reference and as a learning exercise. All posts of the series are based on my personal experience in a lab environment.

Lab components

To illustrate the posts I have created a lab with virtual machines running on VMware Fusion on my MacBook Pro, but you can use any virtualization software you want as long as it allows you to expose the virtualization extensions to the virtual machine, which is needed for the KVM compute node. We will need the following virtual machines:

  • Cloud controller node
  • Nova compute node with KVM
  • Neutron networking node
  • GlusterFS storage node
  • NSX Controller
  • NSX Manager
  • NSX Service Node
  • NSX Gateway

I’ll provide the exact hardware config of each virtual machine in its own part. We will deploy OpenStack Havana using as a reference one of the architectures described in the OpenStack Havana installation guide.

You are probably asking yourself why I’m using Havana when Icehouse was released just a few weeks ago. There are two reasons for this: first, when I started to create my lab and decided to document my progress here, Icehouse wasn’t out yet; and after it was released I decided to stick with Havana because the NSX plugin for Neutron, OpenStack’s network module, has not been updated for Icehouse yet.

The software versions to be used are:

  • OpenStack Havana
  • CentOS 6.4 – For OpenStack nodes
  • Fedora 20 – For GlusterFS storage node
  • NSX for multi-hypervisor

I have another Fedora 20 virtual machine providing DNS and NTP services for the lab; I’m planning to add DHCP and OpenLDAP capabilities in the future.

NSX deployment overview

The Network Views

The first concept you need to understand in NSX is the network views. NSX defines two network views:

  • Logical Network View
  • Transport Network View

The Logical Network View is a representation of the network services and connectivity that a virtual machine “sees” in the cloud; basically, for the operating system running inside the VM the logical network view is “the network” it is connected to. The Logical Network View is completely independent of the underlying physical network. It is made of the logical ports, switches and routers that interconnect the different virtual machines within a tenant and connect them to the outside physical network. In a cloud each tenant will have its own logical network view, isolated from other tenants’ views.

The Transport Network View represents the physical devices that underlie the logical networks. These devices, or transport nodes as they are referred to, can be the hypervisors and the network appliances interconnecting those hypervisors to the external physical network. Every one of these transport nodes must run an instance of Open vSwitch.

NSX Deployment Components

An NSX deployment is made of a Control Plane and a Data Plane. Additionally there is a Management Plane, comprising the NSX Manager; this last one is not mandatory for an OpenStack deployment but it can be useful.

NSX Control Plane

The Control Plane is made of the NSX Controller Cluster. This is an OpenFlow controller that manages all the Open vSwitch devices running on the transport nodes, and a logical network manager that allows building and maintaining all the logical networks carried by the transport nodes. It provides consistency between the logical network view and the transport network view. Internally it has several roles to manage the different tasks it is responsible for.

  • Transport node management: Maintains connections with the different OVS instances.
  • Logical network management: Monitors when endhosts get connected and disconnected from OVS. Also implements logical connectivity and policies by configuring OVS forwarding states.
  • Data persistence and replication: Stores data from OVS devices and the NVP API to provide persistence across all nodes of the cluster in case of failure.
  • API server: Handles HTTP requests from external elements.

The NSX Controller is a scale-out cluster running on x86 hardware; it supports a minimum of three nodes and a maximum of five. Single-node clusters are not supported, although for the lab I deployed a single-node one.

NSX Data Plane

The Data Plane is implemented by the previously mentioned transport nodes, that is, the OVS devices and NSX appliances managed by the Controller Cluster.

Hypervisors: The compute nodes leveraging Open vSwitch to provide network connectivity for the virtual machines.

NSX Gateway/s: The NSX Gateways form the Gateway Service that allows a logical network to be attached to a physical network not managed by NSX. A gateway can be an L2 Gateway, which expands an L2 logical segment to include a physical one, or an L3 Gateway, which maps itself to a physical router port.

NSX Service Node/s: The Service Nodes are OVS-enabled appliances that provide extra processing capacity by offloading network packet processing from the hypervisor virtual switches. The types of operations managed by the Service Nodes include, for example, assisting with packet replication during broadcast/multicast operations or unknown multicast flooding in overlay logical networks.

NSX Management Plane

The NSX Management Plane is composed exclusively of the NSX Manager. It provides a different and more friendly way to interact with the NVP API and configure the logical network components, for example through its web UI. In an OpenStack deployment there is no need to use it; however it can be helpful for troubleshooting purposes.

NSX network appliances deployment

For our lab purposes create four Ubuntu x64 virtual machines with 1 vCPU, 1GB of RAM, 1 network interface (E1000) and 16GB of disk.

NSX Controller

Power on the VM and on the boot screen select Automated Install.


The installation will take several minutes to finish. When it’s finished you will see a prompt like this in the virtual machine console.


Log in as the admin user with password admin. In a normal deployment you would configure the admin user password with the set admin user password command, but for the lab it is not needed.

Set the IP address for the controller node.

nsx-controller # set network interface breth0 static
Setting IP for interface breth0...
Clearing DNS configuration...
nsx-controller # 
nsx-controller # show network interface breth0
IP config: static
MTU: 1500
MAC: 00:0c:29:92:ce:0c
Admin-Status: UP
Link-Status: UP
SNMP: disabled
nsx-controller #

Configure the hostname.

nsx-controller # set hostname nsxc
nsxc #

Next configure the default route.

nsxc # add network route
nsxc #
nsxc # show network route
Prefix/Mask         Gateway         Metric  MTU     Iface
                                    0       intf    breth0
                                    0       intf    breth0
nsxc #

Set the address of the DNS and NTP servers.

nsxc # add network dns-server
nsxc #
nsxc # add network ntp-server
 * Stopping NTP server ntpd                                                                                                                                                          [ OK ]
Synchronizing with NTP servers. This may take a few seconds...
27 Apr 21:03:49 ntpdate[3755]: step time server offset -7199.735794 sec
 * Starting NTP server ntpd                                                                                                                                                          [ OK ]
nsxc #

Set the management address of the control cluster.

set control-cluster management-address

Configure the IP address to be used for communication with the different transport nodes.

set control-cluster role switch_manager listen-ip

Configure the IP address to handle NVP API requests.

set control-cluster role api_provider listen-ip

Finally join the cluster; since this is the first node of the cluster the IP has to be its own.

nsxc # join control-cluster
Clearing controller state and restarting
Stopping nicira-nvp-controller: [Done]
Clearing nicira-nvp-controller's state: OK
Starting nicira-nvp-controller: CLI revert file already exists
mapping eth0 -> bridged-pif
ssh stop/waiting
ssh start/running, process 5009
mapping breth0 -> eth0
mapping breth0 -> eth0
ssh stop/waiting
ssh start/running, process 5158
Setting core limit to unlimited
Setting file descriptor limit to 100000
 nicira-nvp-controller [OK]
** Watching control-cluster history; ctrl-c to exit **
Host nsx-controller
Node ffac511c-12b3-4dd0-baa7-632df4860521 (
  04/27 22:40:42: Initializing data contact with cluster
  04/27 22:40:49: Fetching initial configuration data
  04/27 22:40:51: Join complete
nsxc #

You can check the status of the node in the cluster at any moment with the show control-cluster status command.

nsxc # show control-cluster status
Type                Status                                       Since
Join status:        Join complete                                04/27 22:40:51
Majority status:    Disconnected from cluster majority           04/27 22:53:44
Restart status:     This controller can be safely restarted      04/27 21:23:29
Cluster ID:         7837a89a-22f3-4c8c-8bef-c100886374e9
Node UUID:          7837a89a-22f3-4c8c-8bef-c100886374e9

Role                Configured status   Active status
api_provider        enabled             activated
persistence_server  enabled             activated
switch_manager      enabled             activated
logical_manager     enabled             activated
directory_server    disabled            disabled
nsxc #

In a standard NSX deployment now would be the moment to add more nodes to the cluster, using again the join control-cluster command with the same IP address.

NSX Gateway

Proceed with the Automated Install as in the Controller node. When the installation is done log in as the admin user.

Set IP address.

set network interface breth0 static

Set hostname.

set hostname nsxg

Configure the rest of the network parameters as in the Controller node and proceed to the gateway-specific configuration.

nsxg # add switch manager
Waiting for the manager CA certificate to synchronize...
Manager CA certificate synchronized
nsxg #

NSX Service Node

Again launch the Automated Install and let it finish. As admin user configure the IP address…

set network interface breth0 static

…and the hostname.

set hostname nsxsn

Finish the network configuration as in the Gateway and the Controller, and configure the Service Node to be aware of the Controller Cluster.

add switch manager

The above command will return an error like this.

Manager CA certificate failed to synchronize.  Verify
the manager is running on the specified IP address.

This is normal, since the Transport Node will not be able to connect to the NSX Controller Cluster until the cluster has been informed, either via the NVP API or the NSX Manager interface, of the existence of the Transport Node.

NSX Manager

Access the NSX Manager console; you should see a similar screen.


Set the IP and the hostname and configure the default route, DNS and NTP server.

set network interface breth0 static
set hostname nsxm
add network route
add network dns-server
add network ntp-server

With this we have completed the installation and initial configuration of our four NSX appliances. In a real world deployment we should have to add at least two more NSX controller nodes to our cluster and maybe one or more gateways in order to setup L2 and L3 Gateway Services. The number of Service Nodes will depend on the expected load of our cloud.

Connect the NSX Manager to the Controller Cluster

Our next step is to connect our newly created NSX Controller Cluster with the NSX Manager. Access the NSX Manager web interface and log in as the admin user.


After the login the Manager will indicate that there is no Controller Cluster added.


Click the Add Cluster button and enter the data for the NSX Controller Cluster.


If the connection is successful a new screen will show up.


Provide the following information:

  • Name of the cluster
  • Contact email address of the administrator
  • Automatically Use New IPs – This setting, checked by default, will add all the IP addresses of the members of this cluster as eligible to receive API calls from the NSX Manager.
  • Make Active Cluster

In the next screen enter the IP address of your syslog server, or click Use This NSX Manager to use the NSX Manager itself as the syslog server.


After clicking Configure, the Manager will finish the configuration of the Controller Cluster and go back to the previous screen, where we can see the new cluster we have just added to the Manager.


In the next post we will see how to configure the NSX transport and logical network elements. As always, comments are welcome.


I became aware of this issue last week after installing a Fedora 18 virtual machine on Fusion 5. The installation of the Tools went as expected, but when the install process launched the vmware-config-tools.pl script I got the typical error of not being able to find the Linux kernel headers.

Searching for a valid kernel header path...
The path "" is not a valid path to the 3.7.2-204.fc18.x86_64 kernel headers.
Would you like to change it? [yes]

I installed the kernel headers and devel packages with yum.

[root@fed18 ~]# yum install kernel-headers kernel-devel

Fired up the configuration script again and got the same error. The problem is that since kernel 3.7 all the kernel header files have been relocated to a new path, and because of that the script is not able to find them. To solve it just create a symlink of the version.h file from the new location to the old one.

[root@fed18 src]# ln -s /usr/src/kernels/3.7.2-204.fc18.x86_64/include/generated/uapi/linux/version.h /lib/modules/3.7.2-204.fc18.x86_64/build/include/linux/

With the problem fixed I launched the config script again and the tools finally got configured without problems.

[root@fed18 ~]# 

Making sure services for VMware Tools are stopped.
Stopping Thinprint services in the virtual machine:
 Stopping Virtual Printing daemon: done
Stopping vmware-tools (via systemctl): [ OK ]

The VMware FileSystem Sync Driver (vmsync) allows external third-party backup 
software that is integrated with vSphere to create backups of the virtual 
machine. Do you wish to enable this feature? [no]

Before you can compile modules, you need to have the following installed...
kernel headers of the running kernel

Searching for GCC...
Detected GCC binary at "/bin/gcc".
The path "/bin/gcc" appears to be a valid path to the gcc binary.
Would you like to change it? [no]

Searching for a valid kernel header path...
Detected the kernel headers at 
The path "/lib/modules/3.7.2-204.fc18.x86_64/build/include" appears to be a 
valid path to the 3.7.2-204.fc18.x86_64 kernel headers.
Would you like to change it? [no]


If your vCSA is configured to use the embedded DB2 database and it’s not properly shut down, the next time you power it on you may not be able to power on a VM, like in the screenshot below…


…or the vSphere Client will not show some of the information about the host or the VMs.


We have all seen these kinds of errors in our homelabs from time to time. In the Windows-based vCenter it was relatively easy to solve: close the client, log into the vCenter server, restart the vCenter Server service, and on the next login into the vSphere Client everything will go as expected.

But how can we resolve this issue in the vCenter Linux appliance? It couldn’t be easier.

There are two ways to restart the vCenter services in the vCSA:

  • From the WebUI administration interface
  • From the command line

For the first method, log into the WebUI of the vCSA by accessing https://<vCSA_URL>:5480 with your favorite web browser.


In the vCenter Server screen, under the Status tab, stop and start the vCenter Server service with the Action buttons.

The second method is faster and easier, and to be honest it feels more natural to me, and probably to the other Unix geeks/sysadmins out there.

The vCenter service in the Linux appliance is vmware-vpxd, so a simple service vmware-vpxd restart will put us back in business. Check the screenshot below.
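In other words, from an SSH session on the appliance:

service vmware-vpxd restart
service vmware-vpxd status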


Finally, as seen in the screen capture, you can check the status of the service.

More on troubleshooting the vCSA in a future post.


Getting SUSE Enterprise Linux integrated with Microsoft Active Directory is much easier than it sounds.

There are a few prerequisites to meet first:

  • Samba client must be installed.
  • Packages samba-winbind and krb5-client must be installed.
  • The primary DNS server must be the Domain Controller.

For this task we will use YaST2, the SUSE configuration tool.

YaST2 can be run either in graphical…


…or in text mode.


I decided to use the text mode since it will be by far the most common use case; in both cases the procedure is exactly the same.

Go to the Network Services section and then select Windows Domain Membership. The Windows Domain Membership configuration screen will appear.

In the Membership area enter the domain name and configure the options that best suit your environment, including the other sections of the screen.


I configured it to allow SSH single sign-on, more on this later, and to create a home directory for the user on his first login.

You should also take the NTP configuration into account, since time synchronization is a critical component of Active Directory authentication.

Select OK to acknowledge your selection and a small pop-up will show up to inform you that the host is not part of the domain and ask if you want to join it.


Next you must enter the password of the domain Administrator.


And YaST will finally confirm the success of the operation.


At this point the basic configuration is done and the server should be integrated on the Windows Domain.

Under the hood this process has modified several configuration files in order to get the system ready to authenticate against Active Directory:

  • smb.conf
  • krb5.conf
  • nsswitch.conf


The first is the configuration file for the Samba service. As you probably know, Samba is an open source implementation of the Windows SMB/CIFS protocol; it allows Unix systems to integrate almost transparently into a Windows Domain infrastructure and also provides file and print services for Windows clients.

The file resides in /etc/samba. Take a look at the contents of the file, the relevant part is the global section.

[global]
        workgroup = VJLAB
        passdb backend = tdbsam
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        include = /etc/samba/dhcp.conf
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = No
        idmap gid = 10000-20000
        idmap uid = 10000-20000
        realm = VJLAB.LOCAL
        security = ADS
        template homedir = /home/%D/%U
        template shell = /bin/bash
        winbind refresh tickets = yes


The krb5.conf is the Kerberos daemon configuration file which contains the necessary information for the Kerberos library.

jreypo@sles11-01:/etc> cat krb5.conf
[libdefaults]
        default_realm = VJLAB.LOCAL
        clockskew = 300

[realms]
        VJLAB.LOCAL = {
                kdc = dc.vjlab.local
                default_domain = vjlab.local
                admin_server = dc.vjlab.local
        }

[logging]
        kdc = FILE:/var/log/krb5/krb5kdc.log
        admin_server = FILE:/var/log/krb5/kadmind.log
        default = SYSLOG:NOTICE:DAEMON

[domain_realm]
        .vjlab.local = VJLAB.LOCAL

[appdefaults]
        pam = {
                ticket_lifetime = 1d
                renew_lifetime = 1d
                forwardable = true
                proxiable = false
                minimum_uid = 1
        }


The nsswitch.conf file, as stated by its man page, is the System Databases and Name Service Switch configuration file. Basically it lists the different databases of the system to look for authentication information when a user tries to log into the server.

Have a quick look at the file and you will notice the two fields changed, passwd and group. In both, the winbind option has been added in order to tell the system to use Winbind, the Name Service Switch daemon used to resolve NT server names.

passwd: compat winbind
group:  compat winbind

SSH single sign-on

Finally we need to test the SSH connection to the host using a user account from the domain. When asked for the login credentials use the DOMAIN\USER format for the user name.
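For example, connecting to this host with the domain user juanma; the backslash is quoted so the shell does not interpret it:

ssh 'VJLAB\juanma'@sles11-01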


This kind of integration is very useful, especially for bigger shops, because you don’t have to maintain the user list of your SLES servers individually, only the root account, since the other accounts can be centrally managed from the Windows Domain.

However there is one issue that must be taken into account: SSH single sign-on means that anyone with a domain account can log into your Linux servers, and we don’t want that.

To prevent this potentially dangerous situation we are going to limit access to only those groups of users that really need it. I’m going to use the Domain Admins group to show you how.

First we need to look for the Domain Admins group ID within our Linux box. Log in as DOMAIN\Administrator and use the id command to get the user info.

VJLAB\administrator@sles11-01:~> id
uid=10000(VJLAB\administrator) gid=10000(VJLAB\domain users) groups=10000(VJLAB\domain users),10001(VJLAB\schema admins),10002(VJLAB\domain admins),10003(VJLAB\enterprise admins),10004(VJLAB\group policy creator owners)

There are several group IDs; for our purposes we need VJLAB\domain admins, which is 10002.

You may be asking yourself: isn’t the GID 10000 rather than 10002? Yes, you are right, and because of that we need to make some changes at the Domain Controller level.

Fire up Server Manager and go to Roles –> Active Directory Domain Services –> Active Directory Users and Computers –> DOMAIN –> Users.


On the right pane edit the properties of the account you want to be able to access the Linux server via SSH. In my case I used my own account, juanma. In the Member of tab select the Domain Admins group and click Set Primary Group.


Now we need to modify how PAM manages authentication. Go back to SLES and edit /etc/pam.d/sshd.

auth     requisite
auth     include        common-auth
account  include        common-account
password include        common-password
session  required
session  include        common-session

Delete the account line and add the following two lines.

account  sufficient
account  sufficient gid = 10002

The sshd file should look like this:

auth     requisite
auth     include        common-auth
account  sufficient
account  sufficient gid = 10002
password include        common-password
session  required
session  include        common-session

What did we do? First we eliminated the ability to log in via SSH for every user, and then we allowed the server’s local users and the Domain Admins to log into the server.

And we are done. Any comments are welcome, as always :-)