Archives For Red Hat

Fedora 22 was released a few months ago and, among many new features, it came with a replacement for yum as package manager: dnf, or DaNdiFied YUM. Yes, yum is still around, but it is now considered legacy software. DNF will also become the default package manager for RHEL and CentOS in the near future, so it is for the best that you get familiar with it sooner rather than later.

DNF Commands

The first thing you need to understand about dnf is that many commands are basically still the same, but there are differences. Package management commands can be executed with almost the same syntax previously used with yum.

Search for a package.

[jrey@fed22-srv ~]$ sudo dnf search htop
Last metadata expiration check performed 1:25:54 ago on Mon Oct 5 23:47:45 2015.
=================================== N/S Matched: htop ====================================
htop.x86_64 : Interactive process viewer
php-lightopenid.noarch : PHP OpenID library
[jrey@fed22-srv ~]$

Install a package.

[jrey@fed22-srv ~]$ sudo dnf install htop

Remove a package.

[jrey@fed22-srv ~]$ sudo dnf remove htop

Get information about a package.

[jrey@fed22-srv ~]$ sudo dnf info htop
Last metadata expiration check performed 1:47:13 ago on Mon Oct 5 23:47:45 2015.
Available Packages
Name : htop
Arch : x86_64
Epoch : 0
Version : 1.0.3
Release : 4.fc22
Size : 91 k
Repo : fedora
Summary : Interactive process viewer
URL : http://hisham.hm/htop/
License : GPL+
Description : htop is an interactive text-mode process viewer for Linux, similar to
 : top(1).

[jrey@fed22-srv ~]$

Group and repository management commands are still the same as well.

[jrey@fed22-srv ~]$ sudo dnf repolist
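
Group commands keep almost the same syntax as their yum counterparts too; a quick sketch, where the group name is only an illustration:

sudo dnf group list
sudo dnf group install "System Tools"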

Query the available repositories for the package that provides a specific capability.

[jrey@fed22-srv ~]$ sudo dnf repoquery --whatprovides htop
Last metadata expiration check performed 1:54:52 ago on Mon Oct 5 23:47:45 2015.
htop-0:1.0.3-4.fc22.x86_64
[jrey@fed22-srv ~]$

dnf comes with some powerful capabilities, like history queries.

[jrey@fed22-srv ~]$ sudo dnf history list
Last metadata expiration check performed 11 days, 19:14:54 ago on Wed Oct 7 02:56:21 2015.
ID | Command line             | Date and time    | Action  | Altered
-------------------------------------------------------------------------------
 9 | history undo 8           | 2015-10-06 01:53 | Install | 1 
 8 | erase htop               | 2015-10-06 01:28 | Erase   | 1 
 7 | install htop -y          | 2015-10-06 01:28 | Install | 1 
 6 | remove htop              | 2015-10-06 01:14 | Erase   | 1 
 5 | install htop             | 2015-10-06 01:14 | Install | 1 
 4 | install make gcc kernel- | 2015-09-30 16:21 | Install | 9 
 3 | update                   | 2015-09-30 15:43 | I, U    | 112 
 2 | update                   | 2015-09-16 11:45 | I, O, U | 297 
 1 |                          | 2015-09-16 10:59 | Install | 658 EE
[jrey@fed22-srv ~]$

This can be especially helpful if you need to roll back a change, like cleaning up dependencies after uninstalling a package or reinstalling a package.

[jrey@fed22-srv ~]$ sudo dnf history undo 8

You can also look for duplicates among the installed packages.

[jrey@fed22-srv ~]$ sudo dnf repoquery --duplicated
Last metadata expiration check performed 0:30:42 ago on Tue Oct 6 02:48:41 2015.
kernel-core-0:4.0.4-301.fc22.x86_64
kernel-core-0:4.1.6-201.fc22.x86_64
kernel-core-0:4.1.7-200.fc22.x86_64
kernel-modules-0:4.0.4-301.fc22.x86_64
kernel-modules-0:4.1.6-201.fc22.x86_64
kernel-modules-0:4.1.7-200.fc22.x86_64
[jrey@fed22-srv ~]$

Retrieve all available packages providing a specific piece of software or capability.

[jrey@fed22-srv ~]$ sudo dnf repoquery --whatprovides curl
Last metadata expiration check performed 0:38:00 ago on Tue Oct 6 02:48:41 2015.
curl-0:7.40.0-3.fc22.x86_64
curl-0:7.40.0-7.fc22.x86_64
[jrey@fed22-srv ~]$

This has been a very basic introduction to dnf's capabilities, but hopefully it has given you a sense of how it works. My advice is to review the DNF documentation for all the details.

The Photon Connection

VMware Photon comes with tdnf (Tiny DNF); this is a development by VMware that provides compatible repository and package management capabilities. Not every dnf command is available, but the basic ones are there.

Package installation and updates.

[Screenshot: package installation and updates with tdnf]

Repository management.

[Screenshot: repository management with tdnf]
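
tdnf mimics dnf's basic syntax, so the commands behind the screenshots above would look roughly like the following sketch; the package name is just an illustration and only a subset of dnf commands is implemented.

tdnf install htop
tdnf update
tdnf repolist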

In the future if I find the time I’ll write a new post with some advanced examples of dnf commands. Comments are welcome.

Juanma.

FirewallD, or Dynamic Firewall Manager, is the replacement for the IPTables firewall in Red Hat Enterprise Linux. The main improvement over IPTables is the ability to make changes without the need to restart the whole firewall service.

FirewallD was first introduced in Fedora 18 and has been the default firewall mechanism for Fedora since then. Finally this year Red Hat decided to include it in RHEL 7, and of course it also made its way to the different RHEL clones like CentOS 7 and Scientific Linux 7.

Checking FirewallD service status

To get the basic status of the service simply use firewall-cmd --state.

[root@centos7 ~]# firewall-cmd --state
running
[root@centos7 ~]#

If you need to get a more detailed state of the service you can always use systemctl command.

[root@centos7 ~]# systemctl status firewalld.service
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
   Active: active (running) since Wed 2014-11-19 06:47:42 EST; 32min ago
 Main PID: 873 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─873 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Nov 19 06:47:41 centos7.vlab.local systemd[1]: Starting firewalld - dynamic firewall daemon...
Nov 19 06:47:42 centos7.vlab.local systemd[1]: Started firewalld - dynamic firewall daemon.
[root@centos7 ~]#

To enable or disable FirewallD again use systemctl commands.

systemctl enable firewalld.service
systemctl disable firewalld.service

Managing firewall zones

FirewallD introduces the concept of zones; a zone is no more than a way to define the level of trust for a set of connections. A connection definition can only be part of one zone at a time, but zones can be grouped. There is a set of predefined zones:

  • Public – For use in public areas. Only selected incoming connections are accepted.
  • Drop – Any incoming network packets are dropped, there is no reply. Only outgoing network connections are possible.
  • Block – Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6. Only network connections initiated within this system are possible.
  • External – For use on external networks with masquerading enabled especially for routers. Only selected incoming connections are accepted.
  • DMZ – For computers in a DMZ network, with limited access to the internal network. Only selected incoming connections are accepted.
  • Work – For use in work areas. Only selected incoming connections are accepted.
  • Home – For use in home areas. Only selected incoming connections are accepted.
  • Trusted – All network connections are accepted.
  • Internal – For use on internal networks. Only selected incoming connections are accepted.

By default all interfaces are assigned to the public zone. Each zone is defined in its own XML file stored in /usr/lib/firewalld/zones. For example the public zone XML file looks like this.

[root@centos7 zones]# cat public.xml
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
</zone>
[root@centos7 zones]#

Retrieve a simple list of the existing zones.

[root@centos7 ~]# firewall-cmd --get-zones
block dmz drop external home internal public trusted work
[root@centos7 ~]#

Get a detailed list of the same zones.

firewall-cmd --list-all-zones

Get the default zone.

[root@centos7 ~]# firewall-cmd --get-default-zone
public
[root@centos7 ~]#

Get the active zones.

[root@centos7 ~]# firewall-cmd --get-active-zones
public
  interfaces: eno16777736 virbr0
[root@centos7 ~]#

Get the details of a specific zone.

[root@centos7 zones]# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eno16777736 virbr0
  sources:
  services: dhcpv6-client ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:

[root@centos7 zones]#

Change the default zone.

firewall-cmd --set-default-zone=home

Interfaces and sources

Zones can be bound to a network interface and to a specific network addressing or source.

Assign an interface to a different zone; the first command assigns it temporarily and the second makes it permanent.

firewall-cmd --zone=home --change-interface=eth0
firewall-cmd --permanent --zone=home --change-interface=eth0

Retrieve the zone an interface is assigned to.

[root@centos7 zones]# firewall-cmd --get-zone-of-interface=eno16777736
public
[root@centos7 zones]#

Bind the work zone to a source.

firewall-cmd --permanent --zone=work --add-source=192.168.100.0/27

List the sources assigned to a zone, in this case work.

[root@centos7 ~]# firewall-cmd --permanent --zone=work --list-sources
172.16.10.0/24 192.168.100.0/27
[root@centos7 ~]#

Services

FirewallD can permanently assign services to a zone, for example the http service to the dmz zone. A service can also be assigned to multiple zones.

[root@centos7 ~]# firewall-cmd --permanent --zone=dmz --add-service=http
success
[root@centos7 ~]# firewall-cmd --reload
success
[root@centos7 ~]#

List the services assigned to a given zone.

[root@centos7 ~]# firewall-cmd --list-services --zone=dmz
http ssh
[root@centos7 ~]#

Other operations

Besides zone, interface and service management, FirewallD, like other firewalls, can perform several network-related operations such as masquerading, setting direct rules and managing ports.

Masquerading and port forwarding

Add masquerading to a zone.

firewall-cmd --zone=external --add-masquerade

Query if masquerading is enabled in a zone.

[root@centos7 ~]# firewall-cmd --zone=external --query-masquerade
yes
[root@centos7 ~]#

You can also set up port redirection, for example to forward traffic originally intended for port 80/tcp to port 8080/tcp.

firewall-cmd --zone=external --add-forward-port=port=80:proto=tcp:toport=8080

A destination address can also be added to the above command.

firewall-cmd --zone=external --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=172.16.10.21

Set direct rules

Create a firewall rule for 8080/tcp port.

firewall-cmd --direct --add-rule ipv4 filter INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT

Port management

Temporarily allow a port in a zone.

firewall-cmd --zone=dmz --add-port=8080/tcp
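
The command above only changes the runtime configuration and is lost on reload; the usual pattern to persist it is to repeat the command with --permanent and then reload the firewall.

firewall-cmd --permanent --zone=dmz --add-port=8080/tcp
firewall-cmd --reload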

Hopefully you found the post useful to start working with FirewallD. Comments are welcome.

Juanma.

Being used to having Cockpit in my Fedora 21 Server VMs, I decided that having it also on my CentOS machines would be awesome. Unfortunately, I quickly found that Cockpit was not available in the CentOS repositories. Of course, I knew that Cockpit comes installed and enabled by default in the CentOS 7 Atomic host image, so I figured those packages had to be hidden in some Atomic-related repo.

After looking around a bit I finally found the sig-atomic-buildscripts repository on GitHub, which belongs to the CentOS Project. This repository contains several scripts and files intended to build your own CentOS Atomic host, including virt7-testing.repo, the yum repository file needed for Cockpit.

Clone the GitHub repository.

git clone https://github.com/baude/sig-atomic-buildscripts

Copy virt7-testing.repo file to /etc/yum.repos.d and install Cockpit.
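
Assuming the repository was cloned into the current directory, the copy step would look like this.

cp sig-atomic-buildscripts/virt7-testing.repo /etc/yum.repos.d/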

yum install cockpit

Enable the Cockpit socket.

[root@webtest ~]# systemctl enable cockpit.socket
ln -s '/usr/lib/systemd/system/cockpit.socket' '/etc/systemd/system/sockets.target.wants/cockpit.socket'
[root@webtest ~]#

Add Cockpit to the list of trusted services in FirewallD.

[root@webtest ~]# firewall-cmd --permanent --zone=public --add-service=cockpit
success
[root@webtest ~]#
[root@webtest ~]# firewall-cmd --reload
success
[root@webtest ~]#
[root@webtest ~]# firewall-cmd --list-services
cockpit dhcpv6-client ssh
[root@webtest ~]#

Start Cockpit socket.

systemctl start cockpit.socket

Do not try to access Cockpit yet; there is an issue with running Cockpit on stock CentOS/RHEL 7. To be able to start it we first need to modify the service file to disable SSL. Edit the file /usr/lib/systemd/system/cockpit.service and modify the ExecStart line to look like this.

ExecStart=/usr/libexec/cockpit-ws --no-tls

I know this procedure makes Cockpit unsuitable for a production environment on RHEL 7, at least for now, but this is for my lab environment and I can live with it.

Reload systemd.

systemctl daemon-reload

Restart Cockpit.

systemctl restart cockpit

Access the Cockpit web interface, log in as root and have fun :-)

[Screenshot: Cockpit web interface]

Juanma.


Cockpit is a new web-based server manager to administer Linux servers. It provides system administrators with a user-friendly interface to manage their servers, includes multi-server management capabilities and, more importantly, creates no interference or disconnection between tasks done from the web and tasks done from the command line. This last feature is especially useful.

By default Cockpit, in its stable version, comes installed and enabled in Fedora 21 Server. It can also be found in CentOS/RHEL 7 Atomic, Fedora 21 Atomic and Fedora 21 Cloud, and there are plans to support Arch Linux in the near future.

Let's now review some of the features of Cockpit. As said before, multiple servers can be managed from the same Cockpit instance.

[Screenshot: multiple servers managed from one Cockpit instance]

Once you access one of the managed nodes it will present a general overview of the server with real-time charts of CPU, memory, disk I/O and network traffic.

[Screenshot: server overview with real-time charts]

On the left pane there are a series of actionable items that will give you access to the different subsystems of the node like Networking, Storage, User Accounts and even the status of the Docker containers running on the server, if the Docker service has been enabled.

System services view.

[Screenshot: system services view]

When a process is selected Cockpit will display its details.

[Screenshot: process details]

The Networking area displays traffic for the selected interface and the journal of the networking system, and even allows you to create a new bond interface or a new bridge, or add a new VLAN tag to the interface.

[Screenshot: networking view]

The Storage view displays similar info for the disks, with detailed information for each of them, and lets you review the LVM configuration of the server and perform different storage-related operations.

[Screenshot: storage view]

The Journal view lets you review the systemd journal. You can go back seven days into the log and filter by message type.

[Screenshot: journal view]

After using Cockpit for some time in my lab I can say that I genuinely love it: the interface is pretty fast, it uses systemd for everything, and it does not interfere with my console-based admin habits; on the contrary, it is a great complement to them.

Juanma.

Welcome to Part 4 of this series about OpenStack and VMware NSX. To do a quick review: in the first three parts we described the different VMware NSX components and concepts and how to install and configure them, and also discussed how to install and configure the KVM and GlusterFS nodes. In this fourth part of the series we will see how to deploy OpenStack in a three-node architecture and integrate it with our existing NSX installation.

If you remember the first post where I described the components of the lab, there were three OpenStack dedicated nodes:

  • Cloud controller node
  • Neutron networking node
  • Nova compute node

Instead of installing from scratch I decided to go with one of the OpenStack distributions: RDO. What is RDO and why did I choose it? RDO is a community distribution of OpenStack sponsored by Red Hat. Yes, I just said Red Hat, so please stop the eye rolling.

RDO is the upstream version of RHEL OpenStack Platform, the commercial version of OpenStack by Red Hat. During the last few months I have tried several flavors of OpenStack, and while I still think that installing from scratch is the best way to learn, in fact that is what I did for my first labs, RDO gives me the possibility to quickly create my testing labs. Also, RHEL OpenStack Platform version 4, based on RDO, is supported with VMware NSX and I really couldn't resist trying it.

Installation prerequisites

Before proceeding with the installation there are some preparations we need to perform on the OpenStack nodes.

SSH key generation

Generate a new SSH key to be later distributed on the OpenStack nodes during the installation. Use ssh-keygen to generate the new key.
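
For a lab, a passphrase-less RSA key is usually enough; a minimal sketch, using the default key path:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa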

Neutron server preparation

On the Neutron node install the NSX Open vSwitch version as described in Part 3 for the KVM nodes; the network interface configuration is quite similar.

With the network interface configuration files properly set up, exit your SSH session and log into the VM console to create the OVS bridges as in the example below.

ovs-vsctl add-br br-ex
ovs-vsctl br-set-external-id br-ex bridge-id br-ex
ovs-vsctl set Bridge br-ex fail-mode=standalone
ovs-vsctl add-port br-ex eth0

OpenStack installation

RDO relies on packstack for the installation of its different components. Packstack is a tool that will install all required software on the nodes based on an answer file. Enable the RDO and EPEL repos and install the openstack-packstack package.

yum install -y http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-8.noarch.rpm
yum install -y openstack-packstack

Once it is installed, generate a new answer file; we will use this file as a template for our installation.

packstack --gen-answer-file rdo_answers.txt

Edit the packstack answer file and modify the following entries, leaving the rest with their default values. It is important not to eliminate any entry or the packstack execution will fail.

Deactivate services we do not want to deploy.

CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_NAGIOS_INSTALL=n
CONFIG_CINDER_INSTALL=n

Nova settings.

CONFIG_NOVA_COMPUTE_HOSTS=192.168.82.42
CONFIG_NOVA_NETWORK_HOSTS=

And finally Neutron settings. Don’t set any L3 value since that part will be managed by NSX.

CONFIG_NEUTRON_SERVER_HOST=192.168.82.41
CONFIG_NEUTRON_DHCP_HOSTS=192.168.82.41
CONFIG_NEUTRON_METADATA_HOSTS=192.168.82.41

Launch OpenStack installation process.

packstack --answer-file rdo_answers.txt

The installation will take a while, so you'd better grab a cup of coffee and have a look at the output while the software installs on each of the three nodes. If everything goes as expected we should see a message similar to this at the end of the installation process.

 **** Installation completed successfully ******

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.82.40. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.168.82.40/dashboard.
Please, find your login credentials stored in the keystonerc_admin in your home directory. 
 * Because of the kernel update the host 192.168.82.42 requires reboot. 
 * Because of the kernel update the host 192.168.82.40 requires reboot.
 * Because of the kernel update the host 192.168.82.41 requires reboot.
 * The installation log file is available at: /var/tmp/packstack/20140617-001835-On5TCi/openstack-setup.log 
 * The generated manifests are available at: /var/tmp/packstack/20140617-001835-On5TCi/manifests 
[root@cloud-controller ~]#

Reboot the three nodes as instructed and proceed to the next step.

Configure Glance to use GlusterFS

RDO packstack cannot configure Glance to use GlusterFS as its storage backend during the installation, so it has to be configured afterwards. Fortunately the necessary steps are documented on the RDO site.

Stop Glance services.

service openstack-glance-registry stop
service openstack-glance-api stop

Install the required Gluster packages on the controller node.

yum install glusterfs-fuse glusterfs

Mount GlusterFS share and set the ownership and permissions for glance user.

mount -t glusterfs gluster.vlab.local:gv0 /var/lib/glance/images
chown -R glance:glance /var/lib/glance/images

Start Glance services.

service openstack-glance-registry start
service openstack-glance-api start

With the installation finished, the OpenStack Horizon dashboard should be available at http://cloud_controller_fqdn/dashboard. Log in with the user admin; the password for this user can be found in the file /root/keystonerc_admin on the cloud controller node.

[root@cloud-controller ~]# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=cd0ed5b5f251450f
export OS_AUTH_URL=http://192.168.82.40:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
[root@cloud-controller ~]#

If the login fails with an unexpected error, check that the firewall is deactivated on all three nodes and that all services are up and running; in some of my deployments the Neutron server did not start after a reboot and I had to start it manually.

Once logged into Horizon, navigate to Admin -> Hypervisor and check that the KVM hypervisor is properly registered.

[Screenshot: KVM hypervisor registered in Horizon]

Configure the NSX integration

At this point we have a working OpenStack installation with Neutron using the Open vSwitch plugin; now we will proceed to integrate our shiny OpenStack cloud with NSX.

Install NSX Neutron plugin

VMware provides a set of RPM packages containing the NSX plugin and a VMware-sanctioned version of Neutron; however, I found that these packages were older than my Havana installation and I didn't want to break any dependencies and spend hours trying to fix my installation.

A tar file containing all the source for both the plugin and Neutron itself is also available, and instructions on how to compile and install it are provided in the NSX documentation. During my first trials I took this path, but this time I decided to use the upstream plugin instead since it was available in the RDO repositories.

yum install openstack-neutron-nicira

Configure NSX plugin

Register the Neutron server as a transport node on the NSX Controller Cluster.

ovs-vsctl set-manager ssl:192.168.82.45

Stop neutron services.

service neutron-server stop

Edit /etc/neutron/neutron.conf file and set core_plugin value to neutron.plugins.nicira.NeutronPlugin.NvpPluginV2.
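
The resulting line in the [DEFAULT] section of neutron.conf would be:

core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2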

Configure the nvp.ini file accordingly; this file can be found in /etc/neutron/plugins/nicira.

Set NSX admin user and password.

nvp_user = admin
nvp_password = admin

Configure NSX controllers IP addresses.

nvp_controllers = 192.168.82.45

Set the default Transport Zone UUID and the L3 and L2 gateway services UUIDs; these values can be retrieved from the NSX Manager web UI.

default_tz_uuid = b948fd35-5737-4a30-8741-43134771d40c
default_l3_gw_service_uuid = adee048c-3776-4bd2-ade1-42ab5c90bf9e

Configure metadata for Nova instances: set metadata_dhcp_host_route to False in the [DEFAULT] section and, in the [nvp] section, set the metadata mode to access_network.

enable_metadata_access_network = True
metadata_mode = access_network
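
The [DEFAULT] part of the change would look like this:

[DEFAULT]
metadata_dhcp_host_route = False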

Create a [database] section and configure the connection to the Neutron MySQL database; the data can be found in the neutron.conf file.

[database]
connection = mysql://neutron:ac2191a8661b4b66@192.168.82.40/ovs_neutron

Finally, before starting the Neutron services, check nvp.ini with the command neutron-check-nvp-config. You should get something like this.

[root@neutron ~]# neutron-check-nvp-config /etc/neutron/plugins/nicira/nvp.ini
----------------------- Database Options -----------------------
        connection: mysql://neutron:ac2191a8661b4b66@192.168.82.40/ovs_neutron
        retry_interval: 10
        max_retries: 10
-----------------------    NVP Options   -----------------------
        NVP Generation Timeout -1
        Number of concurrent connections to each controller 10
        max_lp_per_bridged_ls: 5000
        max_lp_per_overlay_ls: 256
-----------------------  Cluster Options -----------------------
        requested_timeout: 30
        retries: 2
        redirects: 2
        http_timeout: 10
Number of controllers found: 1
        Controller endpoint: 192.168.82.45:443
                Gateway(L3GatewayServiceConfig) uuid: adee048c-3776-4bd2-ade1-42ab5c90bf9e
        Transport zones: [u'b948fd35-5737-4a30-8741-43134771d40c']
Done.
[root@neutron ~]#

Start the Neutron services.

service neutron-server start

Create a network with the neutron command line to test that everything is working as expected.

[root@cloud-controller ~(keystone_admin)]# neutron net-create nsx-test-net
Created a new network:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| id                    | 24f3b23f-a938-40e7-b026-14c8fb77ff34 |
| name                  | nsx-test-net                         |
| port_security_enabled | True                                 |
| shared                | False                                |
| status                | ACTIVE                               |
| subnets               |                                      |
| tenant_id             | 4d9fbabd4c9d4fa4a2185ff7559ae4e8     |
+-----------------------+--------------------------------------+
[root@cloud-controller ~(keystone_admin)]#
[root@cloud-controller ~(keystone_admin)]# neutron net-list
+--------------------------------------+--------------+---------+
| id                                   | name         | subnets |
+--------------------------------------+--------------+---------+
| 24f3b23f-a938-40e7-b026-14c8fb77ff34 | nsx-test-net |         |
+--------------------------------------+--------------+---------+
[root@cloud-controller ~(keystone_admin)]#

Access NSX Manager web interface, navigate to Logical Switches and confirm that a new logical switch with the same name and UUID as the new OpenStack network has been created.

[Screenshot: the new logical switch in NSX Manager]

Congratulations! We have successfully deployed a distributed installation of OpenStack with KVM as the underlying hypervisor, integrated with VMware NSX's state-of-the-art network virtualization software. In future posts beyond this four-article series we will discuss some tips and other parts of OpenStack and NSX. Courteous comments are welcome.

Juanma.

Welcome to the third post of my series about OpenStack. In the first and second posts we saw in detail how to prepare the basic network infrastructure of our future OpenStack cloud using VMware NSX. In this third one we are going to install and configure the KVM compute host and the shared storage of the lab.

KVM setup

Create and install two CentOS 6.4 virtual machines with 2 vCPU, 2 GB of RAM, 2 network interfaces (E1000) and one 16GB disk. For the partitioning schema I have used the following one:

  • sda1 – 512MB – /boot
  • sda2 – Rest of the disk – LVM PV
    • lv_root – 13.5GB – /
    • lv_swap – 2GB – swap

Mark the Base and Standard groups to be installed and leave the rest unchecked. Set the hostname during the installation and leave the networking configuration with the default values. Please keep in mind that you will need a DHCP server on your network; in my case I'm using the one that comes with VMware Fusion. If you don't have one, you will have to set a temporary IP address here in order to be able to install the KVM software. Once the installation is done, reboot your virtual machine and open a root SSH session to proceed with the rest of the configuration tasks.

Disable SELinux with the setenforce command and also modify the SELinux config to disable it during OS boot. I do not recommend disabling SELinux in a production environment, but for a lab it will simplify things.

setenforce 0
cp /etc/selinux/config /etc/selinux/config.orig
sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Check that hardware virtualization support is activated.

egrep -i 'vmx|svm' /proc/cpuinfo

Install KVM packages.

yum install kvm libvirt python-virtinst qemu-kvm

After installing a ton of dependencies, and if nothing failed, enable and start the libvirtd service.

[root@kvm1 ~]# chkconfig libvirtd on
[root@kvm1 ~]# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]
[root@kvm1 ~]#

Verify that KVM has been correctly installed and that it is loaded and running on the system.

[root@kvm1 ~]# lsmod | grep kvm
kvm_intel              53484  0
kvm                   316506  1 kvm_intel
[root@kvm1 ~]#
[root@kvm1 ~]# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

[root@kvm1 ~]#

Hypervisor networking setup

With the KVM software installed and ready, we can now move on to configuring the networking for both hosts and integrating them into our NSX deployment.

Disable NetworkManager for both interfaces. Edit the /etc/sysconfig/network-scripts/ifcfg-ethX files and change the NM_CONTROLLED value to no.
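
A quick way to apply the change, assuming the NM_CONTROLLED line is already present in both files:

sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' /etc/sysconfig/network-scripts/ifcfg-eth1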

By default libvirt creates the virbr0 network bridge to be used by the virtual machines to access the external network through a NAT connection. We need to disable it to ensure that the bridge components of Open vSwitch can load without any errors.

virsh net-destroy default
virsh net-autostart --disable default

Install Open vSwitch

Copy the NSX OVS package to the KVM host and extract it.

[root@kvm1 nsx-ovs]# tar vxfz nsx-ovs-2.1.0-build33849-rhel64_x86_64.tar.gz
./
./nicira-flow-stats-exporter/
./nicira-flow-stats-exporter/nicira-flow-stats-exporter-4.1.0.32691-1.x86_64.rpm
./tcpdump-ovs-4.4.0.ovs2.1.0.33849-1.x86_64.rpm
./kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
./openvswitch-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-2.1.0.33849-1.x86_64.rpm
./nicira-ovs-hypervisor-node-debuginfo-2.1.0.33849-1.x86_64.rpm
[root@kvm1 nsx-ovs]#

Install Open vSwitch packages.

rpm -Uvh kmod-openvswitch-2.1.0.33849-1.el6.x86_64.rpm
rpm -Uvh openvswitch-2.1.0.33849-1.x86_64.rpm

Verify that Open vSwitch service is enabled and start it.

[root@kvm1 ~]# chkconfig --list openvswitch
openvswitch     0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@kvm1 ~]#
[root@kvm1 ~]#
[root@kvm1 ~]# service openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db           [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
[root@kvm1 ~]#

Install the nicira-ovs-hypervisor-node package; this utility provides the infrastructure for distributed routing on the hypervisor. During the installation the integration bridge br-int and the OVS SSL credentials will be created.

[root@kvm1 ~]# rpm -Uvh nicira-ovs-hypervisor-node*.rpm
Preparing...                ########################################### [100%]
   1:nicira-ovs-hypervisor-n########################################### [ 50%]
   2:nicira-ovs-hypervisor-n########################################### [100%]
Running '/usr/sbin/ovs-integrate init'
successfully generated self-signed certificates..
successfully created the integration bridge..
[root@kvm1 ~]#

There are other packages, like nicira-flow-stats-exporter and tcpdump-ovs, but they are not needed for OVS to function. We can now proceed with the OVS configuration.

Configure Open vSwitch

The first step is to create OVS bridges for each network interface card of the hypervisor.

ovs-vsctl add-br br0
ovs-vsctl br-set-external-id br0 bridge-id br0
ovs-vsctl set Bridge br0 fail-mode=standalone
ovs-vsctl add-port br0 eth0

If you were logged in via an SSH session you have probably noticed that your connection was lost; this is because the br0 interface has taken control of the networking of the host and it doesn't have an IP address configured. To solve this, access the host console and edit the ifcfg-eth0 file to look like this.

DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
NAME=eth0
HOTPLUG=no
HWADDR=00:0C:29:CA:34:FE
NM_CONTROLLED=no

Next create and edit ifcfg-br0 file.

DEVICE=br0
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.82.42
NETMASK=255.255.255.0
GATEWAY=192.168.82.2
IPV6INIT=no
HOTPLUG=no

Restart the network service and test the connection.

service network restart

Repeat all the above steps for the second network interface.
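
For the second interface, br1 and eth1 in this lab, the bridge commands would be analogous; the ovs-vsctl show output below reflects the result.

ovs-vsctl add-br br1
ovs-vsctl br-set-external-id br1 bridge-id br1
ovs-vsctl set Bridge br1 fail-mode=standalone
ovs-vsctl add-port br1 eth1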

Finally configure NSX Controller Cluster as manager in Open vSwitch.

ovs-vsctl set-manager ssl:192.168.82.44

Execute ovs-vsctl show command to review OVS current configuration.

[root@kvm1 ~]# ovs-vsctl show
383c3f17-5c53-4992-be8e-6e9b195e51d8
    Manager "ssl:192.168.82.44"
    Bridge "br1"
        fail_mode: standalone
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge "br0"
        fail_mode: standalone
        Port "eth0"
            Interface "eth0"
        Port "br0"
            Interface "br0"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.0.33849"
[root@kvm1 ~]#

Register OVS in NSX Controller

With our OVS instance installed and running, we can now inform the NSX Controller of its existence either via the NVP API or NSX Manager; in our case we will use the latter.

Log into NSX Manager as the admin user and go to the Dashboard. From the Summary of Transport Components table, click Add in the Hypervisors row. Verify that Hypervisor is selected as the transport node type and move to the Basics screen. Enter a name for the hypervisor, usually the hostname of the server.

[Screenshot: adding a hypervisor transport node in NSX Manager]

In Properties enter:

  • Integration bridge ID, which for us is br-int.
  • Admin Status – Enabled by default.

[Screenshot: transport node properties]

For the Credential screen we are going to need the SSL certificate that was created along with the integration bridge during the NSX OVS installation. The PEM certificate file is ovsclient-cert.pem and it is in the /etc/openvswitch directory.

[root@kvm1 ~]# cat /etc/openvswitch/ovsclient-cert.pem
-----BEGIN CERTIFICATE-----
MIIDwjCCAqoCCQDZUob5H9tzvjANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC
VVMxCzAJBgNVBAgTAkNBMRIwEAYDVQQHEwlQYWxvIEFsdG8xFTATBgNVBAoTDE9w
ZW4gdlN3aXRjaDEfMB0GA1UECxMWT3BlbiB2U3dpdGNoIGNlcnRpZmllcjE6MDgG
A1UEAxMxb3ZzY2xpZW50IGlkOjA4NWQwMTFiLTJiMzYtNGQ5My1iMWIyLWJjODIz
MDczYzE0YzAeFw0xNDA1MDQyMjE3NTVaFw0yNDA1MDEyMjE3NTVaMIGiMQswCQYD
VQQGEwJVUzELMAkGA1UECBMCQ0ExEjAQBgNVBAcTCVBhbG8gQWx0bzEVMBMGA1UE
ChMMT3BlbiB2U3dpdGNoMR8wHQYDVQQLExZPcGVuIHZTd2l0Y2ggY2VydGlmaWVy
MTowOAYDVQQDEzFvdnNjbGllbnQgaWQ6MDg1ZDAxMWItMmIzNi00ZDkzLWIxYjIt
YmM4MjMwNzNjMTRjMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwgqT
hvG72vat0hXvTuukZOs6fM4CAphmN34l4415q/vReSM3upN+vOLoyGJ/8VJGdNXH
3Bsu6V58f6o8EPbfnhgqf2rCP0r5kiiN5SivsAWI5//ltV1GDFO4+8VpYAwn4Cbd
sNOuFEM1mKOR//IL3Riy9Nkh16wfLy44KEE9745uhZ9gW96AkSkBx1ajjUiApnjL
M6L2w/E4sxNeMDLf/VYlc/SuEg775D9iaPpA1haJt8FFw1g769FsR9Q0Fl+CoT7f
ggBZTKwwcoU+5Ew1mNlPV0Hm8vpFcXbtMBeuT9Fe7k4bC+UuQPaSnbPpbZMpx/wd
fHOdJpemcog/0EjOJQIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQDBPNM/uI25ofIl
AgCpG42UD3M/RZRPX0/6Be4jCTaAuET6J8wAKA4k1btA6UPt0M98N6o4y60Du2D+
ZwFOa2LSTXZB43X70XnDKxapDVqmhKtrmX2hL1NRD9RjTTx3TOXMOlUiUizRB1+L
d8MNhX3qrvOLeFOUnxm6C5RnI/HdqvS9TyxybX+Qfqit9Q66hbjAt9RribXSw21G
Ix8d9S4NyDO91mDstIcXeNRUk8K64gEQSKxQO9QKmVAQBIlYAJVVXzfkXyHEiKTe
0zIsW/oknwWeQMD9xSrKomY/5+LCuDM1jT5LcL8vxmrEVIrUjNqt4nQsT4mjooG+
XYf2HdXj
-----END CERTIFICATE-----
[root@kvm1 ~]#

Copy the contents of the file and paste them in the Security Certificate text box.

[Screenshot: security certificate]

Finally add the Transport Connector with the values:

  • Transport Type: STT
  • Transport Zone UUID: The transport zone, in my case the UUID corresponding to vlab-transport-zone.
  • IP Address – The address of the br0 interface of the host.

[Screenshot: transport connector settings]

Click Save & View and check that Management and OpenFlow connections are up.

[Screenshot: management and OpenFlow connection status]

GlusterFS setup

I chose GlusterFS for my OpenStack lab for two reasons: I have used it in the past, so this has been a good opportunity for me to refresh and enhance my rusty Gluster skills, and it is supported as a storage backend for Glance in OpenStack. Instead of going with CentOS again, this time I chose Fedora 20 for my Gluster VM. A real-world GlusterFS cluster will have at least two nodes, but for our lab one will be enough.

Create a Fedora x64 virtual machine with 1 vCPU, 1GB of RAM and one network interface. For the storage part use the following:

  • System disk: 16GB
  • Data disk: 72GB

Use the same partitioning schema as the KVM hosts for the system disk. Choose a Minimal installation and add the Standard group. Configure the hostname and the IP address of the node, set the root password and create a user as administrator; here I'm using my personal user jrey.

Disable SELinux.

sudo setenforce 0
sudo cp /etc/selinux/config /etc/selinux/config.orig
sudo sed -i s/SELINUX\=enforcing/SELINUX\=disabled/ /etc/selinux/config

Stop and disable firewalld.

sudo systemctl disable firewalld.service
sudo systemctl stop firewalld.service

Install GlusterFS packages. There is no need to add any additional yum repository since Gluster is included in the standard Fedora repos.

sudo yum install glusterfs-server

Enable Gluster services.

sudo systemctl enable glusterd.service
sudo systemctl enable glusterfsd.service

Start Gluster services.

[jrey@gluster ~]$ sudo systemctl start glusterd.service
[jrey@gluster ~]$ sudo systemctl start glusterfsd.service
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterd.service
glusterd.service - GlusterFS an clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Mon 2014-04-28 17:17:35 CEST; 20s ago
  Process: 1496 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1497 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─1497 /usr/sbin/glusterd -p /run/glusterd.pid

Apr 28 17:17:35 gluster.vlab.local systemd[1]: Started GlusterFS an clustered file-system server.
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo systemctl status glusterfsd.service
glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled)
   Active: active (exited) since Mon 2014-04-28 17:17:45 CEST; 15s ago
  Process: 1515 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1515 (code=exited, status=0/SUCCESS)

Apr 28 17:17:45 gluster.vlab.local systemd[1]: Starting GlusterFS brick processes (stopping only)...
Apr 28 17:17:45 gluster.vlab.local systemd[1]: Started GlusterFS brick processes (stopping only).
[jrey@gluster ~]$

Since we are running a one-node cluster there is no need to add any node to the trusted pool. In case you decide to run a multi-node environment, you can set up the pool by running the following command on each node of the cluster.

gluster peer probe <IP_ADDRESS_OF_OTHER_NODE>

Edit the data disk with fdisk and create a single partition. Format the partition as XFS.
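
The fdisk dialog is interactive; a scripted sketch, assuming the data disk is /dev/sdb and feeding it the keystrokes n, p, 1, two defaults and w, would be:

echo -e "n\np\n1\n\n\nw" | sudo fdisk /dev/sdb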

[jrey@gluster ~]$ sudo mkfs.xfs -i size=512 /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=4718528 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=18874112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=9215, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[jrey@gluster ~]$

Create the mount point for the new filesystem, mount the partition and edit /etc/fstab accordingly to make the mount persistent across reboots.

sudo mkdir -p /data/glance/
sudo mount /dev/sdb1 /data/glance
sudo mkdir -p /data/glance/brick1
echo "/dev/sdb1 /data/glance xfs defaults 0 0" | sudo tee -a /etc/fstab

Create the Gluster volume and start it.

[jrey@gluster ~]$ sudo gluster volume create gv0 gluster.vlab.local:/data/glance/brick1
volume create: gv0: success: please start the volume to access data
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume start gv0
volume start: gv0: success
[jrey@gluster ~]$
[jrey@gluster ~]$ sudo gluster volume info

Volume Name: gv0
Type: Distribute
Volume ID: d1ad2d00-6210-4856-a5eb-26ddcba77a70
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster.vlab.local:/data/glance/brick1
[jrey@gluster ~]$

The configuration of the Gluster node is finished. In the next article we will install and configure OpenStack using the different components detailed in the current and previous parts of the series.

Please feel free to add any comment or correction.

Juanma.

The following post will discuss iSCSI initiator configuration in Red Hat Enterprise Linux 5; this method is also applicable to all RHEL 5 derivatives. The iSCSI LUNs will be provided by an HP P4000 array.

First of all we need to get and install the iscsi-initiator-utils RPM package; you can use yum to install it from any supported repository for CentOS or RHEL. You can also download the package from Red Hat Network, if you have a valid RHN account and your system doesn't have an internet connection, and install it with rpm as in the session below.
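
If the system does have repository access, the yum route is the simplest one.

yum install iscsi-initiator-utils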

[root@rhel5 ~]# rpm -ivh /tmp/iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root@rhel5 ~]#
[root@rhel5 ~]#rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.871-0.16.el5
[root@rhel5 ~]# rpm -qi iscsi-initiator-utils-6.2.0.871-0.16.el5
Name        : iscsi-initiator-utils        Relocations: (not relocatable)
Version     : 6.2.0.871                         Vendor: Red Hat, Inc.
Release     : 0.16.el5                      Build Date: Tue 09 Mar 2010 09:16:29 PM CET
Install Date: Wed 16 Feb 2011 11:34:03 AM CET      Build Host: x86-005.build.bos.redhat.com
Group       : System Environment/Daemons    Source RPM: iscsi-initiator-utils-6.2.0.871-0.16.el5.src.rpm
Size        : 1960412                          License: GPL
Signature   : DSA/SHA1, Wed 10 Mar 2010 04:26:37 PM CET, Key ID 5326810137017186
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
URL         : http://www.open-iscsi.org
Summary     : iSCSI daemon and utility programs
Description :
The iscsi package provides the server daemon for the iSCSI protocol,
as well as the utility programs used to manage it. iSCSI is a protocol
for distributed disk access using SCSI commands sent over Internet
Protocol networks.
[root@rhel5 ~]#

Next we are going to configure the initiator. The iSCSI initiator is composed of two services, iscsi and iscsid; enable them to start at system startup using chkconfig.

[root@rhel5 ~]# chkconfig iscsi on
[root@rhel5 ~]# chkconfig iscsid on
[root@rhel5 ~]#
[root@rhel5 ~]# chkconfig --list | grep iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel5 ~]#
[root@rhel5 ~]#

Once iSCSI is configured start the service.

[root@rhel5 ~]# service iscsi start
iscsid is stopped
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@rhel5 ~]#
[root@rhel5 ~]# service iscsi status
iscsid (pid  14170) is running...
[root@rhel5 ~]#

From the P4000 CMC we need to add the server to the management group configuration like we would do with any other server.

The server iqn can be found in the file /etc/iscsi/initiatorname.iscsi.

[root@cl-node1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:2551bf29b48
[root@cl-node1 ~]#

Create any iSCSI volumes you need on the P4000 arrays and assign them to the Red Hat system. Then, to discover the presented LUNs, run the iscsiadm command from the Linux server.

[root@rhel5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.126.60
192.168.126.60:3260,1 iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01
[root@rhel5 ~]#

Restart the iSCSI initiator to make the new block device available to the operating system.

[root@rhel5 ~]# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists                            [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]
Login to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]: successful
                                                           [  OK  ]
[root@rhel5 ~]#

Then check that the new disk is available; I used lsscsi but fdisk -l will do the trick too.

[root@rhel5 ~]# lsscsi
[0:0:0:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sda
[2:0:0:0]    disk    LEFTHAND iSCSIDisk        9000  /dev/sdb
[root@rhel5 ~]#
[root@rhel5 ~]# fdisk -l /dev/sdb 

Disk /dev/sdb: 156.7 GB, 156766306304 bytes
255 heads, 63 sectors/track, 19059 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
[root@rhel5 ~]#

At this point the iSCSI configuration is done; the new LUNs will remain available across system reboots as long as the iSCSI service is enabled.

Juanma.

Now that my daily work is more focused on Linux, I find myself performing the same basic administration tasks in Linux that I'm used to doing in HP-UX. Because of that I thought a post explaining how the same basic file system and volume management operations are done in both operating systems was necessary. Hope you like it :-)

This is going to be a very basic post intended only as a reference for myself and any other sysadmin coming from either Linux or HP-UX who wants to know how things are done on the other side. Of course this post is no substitute for the official documentation and the corresponding man pages.

I’ve used Red Hat Enterprise Linux 5.5 as the Linux version and 11iv3 as the HP-UX version.

The following topics will be covered:

  • Volume group creation.
  • Logical volume operations.
  • File system operations.

Volume group creation

Physical volume and volume group creation are the most basic tasks in LVM, both in Linux and HP-UX, but although command syntax is quite similar in both operating systems, the whole process differs in many ways.

– HP-UX:

The example used is valid for the 11iv2 and 11iv3 HP-UX versions, with the exception of the persistent DSFs, which you will have to substitute with the corresponding legacy devices used in 11iv2.

First create the physical volumes.

root@hp-ux:/# pvcreate -f /dev/rdisk/disk10
Physical volume "/dev/rdisk/disk10" has been successfully created.
root@hp-ux:/#
root@hp-ux:/# pvcreate -f /dev/rdisk/disk11
Physical volume "/dev/rdisk/disk11" has been successfully created.
root@hp-ux:/#

In /dev create a directory named after the new volume group, change the ownership to root:root and the permissions to 755.

root@hp-ux:/# mkdir -p /dev/vg_new
root@hp-ux:/# chown root:root /dev/vg_new
root@hp-ux:/# chmod 755 /dev/vg_new

Go into the VG subdirectory and create the group device special file. For the Linux guys: in HP-UX each volume group must have a group device special file under its subdirectory in /dev. This group DSF is created with the mknod command; like any other DSF, the group file must have a major and a minor number.

For LVM 1.0 volume groups the major number must be 64 and for LVM 2.0 it must be 128. Regarding the minor number, the first two hexadecimal digits uniquely identify the volume group and the remaining digits must be 0000. In the example below we're creating a 1.0 volume group.

root@hp-ux:/dev/vg_new# mknod group c 64 0x010000

Change the ownership to root:sys and the permissions to 640.

root@hp-ux:/dev/vg_new# chown root:sys group
root@hp-ux:/dev/vg_new# chmod 640 group

Then create the volume group with the vgcreate command; the arguments passed are the two physical volumes previously created and the size in megabytes of the physical extent. The latter is optional and, if it is not provided, a default of 4MB will be set automatically.

root@hp-ux:/# vgcreate -s 16 vg_new /dev/disk/disk10 /dev/disk/disk11
Volume group "/dev/vg_new" has been successfully created.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:/#
root@hp-ux:/# vgdisplay -v vg_new
--- Volume groups ---
VG Name                     /dev/vg_new
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      0      
Open LV                     0      
Max PV                      16     
Cur PV                      2      
Act PV                      2      
Max PE per PV               6000         
VGDA                        2   
PE Size (Mbytes)            16              
Total PE                    26    
Alloc PE                    0       
Free PE                     26    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0 

   --- Physical volumes ---
   PV Name                     /dev/disk/disk10
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On        

   PV Name                     /dev/disk/disk11
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On     

root@hp-ux:/#

– Linux:

Create the physical volumes. Here is where the first difference appears: in HP-UX a physical volume is composed of a whole disk, with the exception of boot disks in Itanium systems, but in Linux a physical volume can be a whole disk or a partition.

For the whole disk the process is pretty much the same as in HP-UX.

[root@rhel /]# pvcreate -f /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@rhel /]# pvdisplay /dev/sdb
  "/dev/sdb" is a new physical volume of "204.00 MB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb
  VG Name               
  PV Size               204.00 MB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               Ngyz7I-Z2hL-8R3b-hzA3-qIVc-tZuY-DbCBYn

[root@rhel /]#

If you decide to use partitions for the PVs the first, and obvious, thing to do is to partition the disk. To set up the disk we'll use the fdisk tool; the following is an example session:

[root@rhel /]# fdisk /dev/sdc 

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-204, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-204, default 204):
Using default value 204

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdc: 213 MB, 213909504 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         204      208880   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rhel /]#

To explain the session: first a new partition is created with the command n and the size of the partition is set (in this particular case we are using the whole disk); then we must change the partition type, which by default is set to Linux, to Linux LVM. To do that we use the command t and issue 8e as the corresponding hexadecimal code; the available values for the partition types can be shown by typing L.

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): L

 0  Empty           1e  Hidden W95 FAT1 80  Old Minix       bf  Solaris        
 1  FAT12           24  NEC DOS         81  Minix / old Lin c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          82  Linux swap / So c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  83  Linux           c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c7  Syrinx         
 5  Extended        41  PPC PReP Boot   85  Linux extended  da  Non-FS data    
 6  FAT16           42  SFS             86  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set de  Dell Utility   
 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext df  BootIt         
 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       e1  DOS access     
 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e3  DOS R/O        
 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e4  SpeedStor      
 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          eb  BeOS fs        
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi ee  EFI GPT        
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ef  EFI (FAT-12/16/
10  OPUS            55  EZ-Drive        a6  OpenBSD         f0  Linux/PA-RISC b
11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f1  SpeedStor      
12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f4  SpeedStor      
14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f2  DOS secondary  
16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     fb  VMware VMFS    
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep        
1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT            
Hex code (type L to list codes):

The changes are written with w.

Once the partitions are correctly created, set up the physical volumes.

[root@rhel /]# pvcreate -f /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
[root@rhel /]# pvcreate -f /dev/sdd1
  Physical volume "/dev/sdd1" successfully created
[root@rhel /]#
[root@rhel /]# pvs
  PV         VG    Fmt  Attr PSize   PFree  
  /dev/sda2  sysvg lvm2 a-    19.88G      0
  /dev/sdb         lvm2 --   204.00M 204.00M
  /dev/sdc1        lvm2 --   203.98M 203.98M
  /dev/sdd1        lvm2 --   203.98M 203.98M
[root@rhel /]#

Now that the PVs are created we can proceed with the volume group creation.

[root@rhel /]# vgcreate vg_new /dev/sdc1 /dev/sdd1
 Volume group "vg_new" successfully created
[root@rhel /]# vgdisplay -v vg_new
  Using volume group(s) on command line
  Finding volume group "vg_new"
  /dev/hdc: open failed: No medium found
  --- Volume group ---
  VG Name               vg_new
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               400.00 MB
  PE Size               4.00 MB
  Total PE              100
  Alloc PE / Size       0 / 0   
  Free  PE / Size       100 / 400.00 MB
  VG UUID               lvrrnt-sHbo-eC8j-kC53-Mm5Z-IDDR-RJJtDr

  --- Physical volumes ---
  PV Name               /dev/sdc1     
  PV UUID               kD0jhk-ws8A-ke3L-a7nd-QucS-SAbH-BrmH28
  PV Status             allocatable
  Total PE / Free PE    50 / 50

  PV Name               /dev/sdd1     
  PV UUID               ZP2bLy-FxR3-gYn9-3Dy1-Llgk-6mFI-1iJvTm
  PV Status             allocatable
  Total PE / Free PE    50 / 50

[root@rhel /]#

As you can see, the process in Linux is slightly simpler than in HP-UX.
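
To recap, the whole Linux sequence fits in a handful of commands. A minimal sketch, assuming two freshly partitioned disks:

# Initialize the partitions as LVM physical volumes.
pvcreate /dev/sdc1 /dev/sdd1
# Group them into a new volume group and verify the result.
vgcreate vg_new /dev/sdc1 /dev/sdd1
vgs vg_new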

Logical volume operations

In this part we will see how to create a logical volume, extend it, and then remove it from the system.

– HP-UX:

The logical volume creation can be done with the lvcreate command. With the -L option we can specify the size in MB of the new lvol; if -l is used instead, the size must be provided in logical extents.

root@hp-ux:/# lvcreate -n lvol_test -L 256 vg_new
Logical volume "/dev/vg_new/lvol_test_S2" has been successfully created with
character device "/dev/vg_new/rlvol_test_S2".
Logical volume "/dev/vg_new/lvol_test_S2" has been successfully extended.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:~# lvdisplay  /dev/vg_new/lvol_test
--- Logical volumes ---
LV Name                     /dev/vg_new/lvol_test
VG Name                     /dev/vg_new
LV Permission               read/write                
LV Status                   available/syncd           
Mirror copies               0            
Consistency Recovery        MWC                 
Schedule                    parallel      
LV Size (Mbytes)            256             
Current LE                  16             
Allocated PE                16             
Stripes                     0       
Stripe Size (Kbytes)        0                   
Bad block                   on         
Allocation                  strict                    
IO Timeout (Seconds)        default             

root@hp-ux:/#

Extend a volume. Of course, the first prerequisite to extend a volume is to have enough free physical extents in the volume group.
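
A quick way to verify this before running lvextend, on either platform, is to check the free extents reported by vgdisplay; for example:

# Show the free physical extents of the volume group.
vgdisplay vg_new | grep -i free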

root@hp-ux:~# lvextend -L 384 /dev/vg_new/lvol_test
Logical volume "/dev/vg_new/lvol_test" has been successfully extended.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:~#
root@hp-ux:~# lvdisplay  /dev/vg_new/lvol_test
--- Logical volumes ---
LV Name                     /dev/vg_new/lvol_test
VG Name                     /dev/vg_new
LV Permission               read/write                
LV Status                   available/syncd           
Mirror copies               0            
Consistency Recovery        MWC                 
Schedule                    parallel      
LV Size (Mbytes)            384             
Current LE                  24             
Allocated PE                24             
Stripes                     0       
Stripe Size (Kbytes)        0                   
Bad block                   on         
Allocation                  strict                    
IO Timeout (Seconds)        default             

root@hp-ux:/#

The final step of this part is to remove the logical volume.

root@hp-ux:/# lvremove /dev/vg_new/lvol_test
The logical volume "/dev/vg_new/lvol_test" is not empty;
do you really want to delete the logical volume (y/n) : y
Logical volume "/dev/vg_new/lvol_test" has been successfully removed.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:/#

– Linux:

Create the logical volume with the lvcreate command; the most basic options (-L, -l, -n) are the same as in HP-UX.

[root@rhel /]# lvcreate -n lv_test -L 256 vg_new
  Logical volume "lv_test" created
[root@rhel /]# lvdisplay /dev/vg_new/lv_test
  --- Logical volume ---
  LV Name                /dev/vg_new/lv_test
  VG Name                vg_new
  LV UUID                m5G2vT-dsE1-CycS-BMYR-3MYZ-4y8O-Mx04B8
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                256.00 MB
  Current LE             16
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

[root@rhel /]#
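
If you prefer sizing in extents, the -l option works exactly as described in the HP-UX part. A sketch, assuming the 16 MB PE size implied by the lvdisplay output above (16 LEs for 256 MB):

# Same 256 MB volume, sized in logical extents instead of megabytes.
lvcreate -n lv_test -l 16 vg_new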

Now extend the logical volume to 384 megabytes as we did in HP-UX.

[root@rhel /]# lvextend -L 384 /dev/vg_new/lv_test
  Extending logical volume lv_test to 384.00 MB
  Logical volume lv_test successfully resized
[root@rhel /]#
[root@rhel /]# lvdisplay /dev/vg_new/lv_test
  --- Logical volume ---
  LV Name                /dev/vg_new/lv_test
  VG Name                vg_new
  LV UUID                m5G2vT-dsE1-CycS-BMYR-3MYZ-4y8O-Mx04B8
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                384.00 MB
  Current LE             24
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

[root@rhel /]#

Removing a volume from the system, like creating and extending one, is a very straightforward process that can be done with a single command.

[root@rhel /]# lvremove /dev/vg_new/lv_test
Do you really want to remove active logical volume lv_test? [y/n]: y
  Logical volume "lv_test" successfully removed
[root@rhel /]#
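
For scripted cleanups the confirmation prompt can be skipped; lvremove accepts -f (--force):

# Remove the LV without the interactive confirmation (use with care).
lvremove -f /dev/vg_new/lv_test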

Unlike the volume group operations, the basic logical volume operations are performed in almost the same way in both operating systems. Of course, if you want to perform mirroring the differences are bigger, but I will leave that for a future post.

File system operations

The final section of the post is about basic file system operations: we are going to create a file system on the logical volume from the previous section and later extend it, this time including the volume group extension.

– HP-UX:

Create the file system with the newfs command.

root@hp-ux:/# newfs -F vxfs -o largefiles /dev/vg_new/rlvol_test
 version 7 layout
 393216 sectors, 393216 blocks of size 1024, log size 1024 blocks
 largefiles supported
root@hp-ux:/#

Create the mount point and mount the filesystem.

root@hp-ux:/# mkdir /data
root@hp-ux:/# mount /dev/vg_new/lvol_test /data

Filesystem extension: in this section we are going to suppose that the volume group does not have enough free physical extents to accommodate the new size of the /data file system.

After creating a new physical volume on disk12, we are going to extend the vg_new VG.
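
The PV creation itself is not shown in the transcript; on HP-UX 11iv3 with agile device names it would look roughly like this (note that pvcreate takes the raw device file):

# Initialize the new disk as an LVM physical volume.
pvcreate /dev/rdisk/disk12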

root@hp-ux:/# vgextend vg_new /dev/disk/disk12
Volume group "vg_new" has been successfully extended.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:/#
root@hp-ux:/# vgdisplay -v vg_new
--- Volume groups ---
VG Name                     /dev/vg_new
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      0      
Open LV                     0      
Max PV                      16     
Cur PV                      2      
Act PV                      2      
Max PE per PV               6000         
VGDA                        2   
PE Size (Mbytes)            16              
Total PE                    39    
Alloc PE                    24       
Free PE                     15    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0 

   --- Logical volumes ---
   LV Name                     /dev/vg_new/lvol_test
   LV Status                   available/syncd           
   LV Size (Mbytes)            384             
   Current LE                  24             
   Allocated PE                24             
   Used PV                     2 

   --- Physical volumes ---
   PV Name                     /dev/disk/disk10
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On        

   PV Name                     /dev/disk/disk11
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On  

   PV Name                     /dev/disk/disk12
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On  

root@hp-ux:/#

The next step is to extend the logical volume, just as we did in the logical volume operations section.

root@hp-ux:/# lvextend -L 512 /dev/vg_new/lvol_test
Logical volume "/dev/vg_new/lvol_test" has been successfully extended.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:~#

And finally, the creepiest part of the process: extending the file system. To be able to extend a mounted filesystem in HP-UX, the OnlineJFS bundle must be installed.
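
You can check whether the bundle is installed with swlist; a quick sketch:

# Look for the OnlineJFS bundle among the installed software.
swlist -l bundle | grep -i onlinejfs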

Use the fsadm command and, with the -b option, pass the new size in KB as the argument. In the example we want to extend to 512MB, which is 524288 KB (512 × 1024).

root@hp-ux:/# fsadm -F vxfs -b 524288 /data
vxfs fsadm: V-3-23585: /dev/vg_new/rlvol_test is currently 393216 sectors - size will be increased
root@hp-ux:/#
root@hp-ux:/# bdf /data
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg_new/lvol_test
                    524288    5243  524288    1% /data
root@hp-ux:/#

– Linux:

The filesystem part is where the commands are completely different from HP-UX. In Linux the most common file system types are ext2 and ext3, although others like ext4 or reiserfs are also supported.

To create an ext3 file system, issue the mkfs.ext3 command, passing the logical volume on which to create the file system as the argument.

[root@rhel ~]# mkfs.ext3 /dev/vg_new/lv_test
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
98304 inodes, 393216 blocks
19660 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
48 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185

Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@rhel ~]#
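
As the mkfs output suggests, the periodic checks can be adjusted with tune2fs. For example, to disable both the mount-count and the time-based checks (whether to do so is a judgment call):

# -c 0 disables the maximum-mount-count check, -i 0 the time-interval check.
tune2fs -c 0 -i 0 /dev/vg_new/lv_test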

As in HP-UX, create the mount point and mount the file system.

[root@rhel ~]# mkdir /data
[root@rhel ~]# mount /dev/vg_new/lv_test /data
[root@rhel ~]# df -h /data
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_new-lv_test
                      372M   11M  343M   3% /data
[root@rhel ~]#
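
To make the mount persistent across reboots, an /etc/fstab entry is needed. A minimal sketch; the options shown are assumptions, adjust them to your needs:

# device               mount point  type  options   dump  fsck order
/dev/vg_new/lv_test    /data        ext3  defaults  1     2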

The final part of the section is the file system extension; as we did in the HP-UX part, the first task is to extend the volume group.
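
The new disk is assumed to have been prepared beforehand, exactly like the earlier ones; that step is not in the transcript, so roughly:

# Partition /dev/sde with a single Linux LVM (8e) partition as shown earlier,
# then initialize it as a physical volume.
pvcreate /dev/sde1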

[root@rhel ~]# vgextend vg_new /dev/sde1
  Volume group "vg_new" successfully extended
[root@rhel ~]# vgdisplay -v vg_new
    Using volume group(s) on command line
    Finding volume group "vg_new"
  --- Volume group ---
  VG Name               vg_new
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               576.00 MB
  PE Size               16.00 MB
  Total PE              36
  Alloc PE / Size       24 / 384.00 MB
  Free  PE / Size       12 / 192.00 MB
  VG UUID               u32c0h-BPGN-HixT-IzsX-cNnC-EspO-xfweaI

  --- Logical volume ---
  LV Name                /dev/vg_new/lv_test
  VG Name                vg_new
  LV UUID                ZtArMo-Pyyl-BDHX-9CZQ-uEAK-VDqG-t60xy4
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                384.00 MB
  Current LE             24
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Physical volumes ---
  PV Name               /dev/sdc1     
  PV UUID               kD0jhk-ws8A-ke3L-a7nd-QucS-SAbH-BrmH28
  PV Status             allocatable
  Total PE / Free PE    12 / 0

  PV Name               /dev/sdd1     
  PV UUID               ZP2bLy-FxR3-gYn9-3Dy1-Llgk-6mFI-1iJvTm
  PV Status             allocatable
  Total PE / Free PE    12 / 0

  PV Name               /dev/sde1     
  PV UUID               wbiNu5-csig-uwY7-y14y-3C8Q-oeN0-hAT49g
  PV Status             allocatable
  Total PE / Free PE    12 / 12

[root@rhel ~]#

Extend the logical volume with lvextend.

[root@rhel ~]# lvextend -L 512 /dev/vg_new/lv_test
  Extending logical volume lv_test to 512.00 MB
  Logical volume lv_test successfully resized
[root@rhel ~]# lvs
  LV      VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lv_home sysvg  -wi-ao 256.00M                                      
  lv_root sysvg  -wi-ao   5.84G                                      
  lv_swap sysvg  -wi-ao   1.00G                                      
  lv_tmp  sysvg  -wi-ao   1.00G                                      
  lv_usr  sysvg  -wi-ao   9.75G                                      
  lv_var  sysvg  -wi-ao   2.03G                                      
  lv_test vg_new -wi-ao 512.00M                                      
[root@rhel ~]#

Finally, resize the file system; to do that, use the resize2fs tool. Unlike fsadm in HP-UX, which needs the new size as an argument in order to extend the file system, resize2fs will extend the file system to the maximum size available in the LV if you simply pass the logical volume as the argument.

[root@rhel ~]# resize2fs /dev/vg_new/lv_test
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/vg_new/lv_test is mounted on /data; on-line resizing required
Performing an on-line resize of /dev/vg_new/lv_test to 524288 (1k) blocks.
The filesystem on /dev/vg_new/lv_test is now 524288 blocks long.

[root@rhel ~]#
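
As a side note, newer LVM2 releases can combine the last two steps: lvextend accepts -r (--resizefs) to resize the file system right after extending the LV. A sketch, assuming a recent enough lvm2:

# Grow the LV by 128 MB and resize the filesystem in the same operation.
lvextend -r -L +128M /dev/vg_new/lv_test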

And at this point we are done. Any comments are welcome as always :-)

Juanma.