Archive

Posts Tagged ‘Linux’

Linux Kernel 3.7 and VMware Tools issue

January 28, 2013 5 comments

I became aware of this issue last week after installing a Fedora 18 virtual machine on Fusion 5. The installation of the Tools went as expected, but when the install process launched the vmware-config-tools.pl script I got the typical error about not being able to find the Linux kernel headers.

Searching for a valid kernel header path...
The path "" is not a valid path to the 3.7.2-204.fc18.x86_64 kernel headers.
Would you like to change it? [yes]

I installed the kernel headers and devel packages with yum.

[root@fed18 ~]# yum install kernel-headers kernel-devel

I fired up the configuration script again and got the same error. The problem is that since kernel 3.7 the kernel header files have been relocated to a new path, so the script is not able to find them. To work around it, just create a symlink of the version.h file from the new location to the old one.

[root@fed18 src]# ln -s /usr/src/kernels/3.7.2-204.fc18.x86_64/include/generated/uapi/linux/version.h /lib/modules/3.7.2-204.fc18.x86_64/build/include/linux/
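
The same symlink can be written in a version-independent way; this is just a sketch, assuming the running kernel matches the kernel-devel package you just installed.

[root@fed18 ~]# ln -s /usr/src/kernels/$(uname -r)/include/generated/uapi/linux/version.h /lib/modules/$(uname -r)/build/include/linux/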

With the problem fixed I launched the config script again and the tools finally got configured without problems.

[root@fed18 ~]# vmware-config-tools.pl 
Initializing...

Making sure services for VMware Tools are stopped.
Stopping Thinprint services in the virtual machine:
 Stopping Virtual Printing daemon: done
Stopping vmware-tools (via systemctl): [ OK ]

The VMware FileSystem Sync Driver (vmsync) allows external third-party backup 
software that is integrated with vSphere to create backups of the virtual 
machine. Do you wish to enable this feature? [no]

Before you can compile modules, you need to have the following installed...
make
gcc
kernel headers of the running kernel

Searching for GCC...
Detected GCC binary at "/bin/gcc".
The path "/bin/gcc" appears to be a valid path to the gcc binary.
Would you like to change it? [no]

Searching for a valid kernel header path...
Detected the kernel headers at 
"/lib/modules/3.7.2-204.fc18.x86_64/build/include".
The path "/lib/modules/3.7.2-204.fc18.x86_64/build/include" appears to be a 
valid path to the 3.7.2-204.fc18.x86_64 kernel headers.
Would you like to change it? [no]

Juanma.

A bit of troubleshooting of the vCenter Server Appliance

June 28, 2012 5 comments

If your vCSA is configured to use the embedded DB2 database and it is not properly shut down, the next time you power it on you may not be able to power on a VM…

…or the vSphere Client will not show some of the information about the host or the VMs.

We have all seen that kind of error in our homelabs from time to time. In the Windows-based vCenter it was relatively easy to solve: close the client, log into the vCenter, restart the vCenter Server service, and on the next login into the vSphere Client everything would go as expected.

However, how can we resolve this issue in the vCenter Linux appliance? It couldn't be easier.

There are two ways to restart the vCenter services in the vCSA:

  • From the WebUI administration interface
  • From the command line

For the first method log into the WebUI of the vCSA by accessing https://<vCSA_URL>:5480 with your favorite web browser.

In the vCenter Server screen, under the Status tab, stop and start the vCenter Server service with the Action buttons.

The second method is faster and easier, and to be honest it feels more natural to me and probably to the other Unix geeks/sysadmins out there.

The vCenter service in the Linux appliance is vmware-vpxd, so a simple service vmware-vpxd restart puts us back in business.

Finally, you can also check the status of the service with the status argument.
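
Since the screen captures are not reproduced here, the whole procedure boils down to something like the following; the prompt is just illustrative.

vcsa:~ # service vmware-vpxd restart
vcsa:~ # service vmware-vpxd status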

More on troubleshooting the vCSA in a future post.

Juanma.

How to integrate SUSE Linux Enterprise 11 with Windows Active Directory

February 1, 2012 13 comments

Getting SUSE Linux Enterprise integrated with Microsoft Active Directory is much easier than it sounds.

There are a few prerequisites to meet beforehand:

  • Samba client must be installed.
  • Packages samba-winbind and krb5-client must be installed.
  • The primary DNS server must be the Domain Controller.

For this task we will use YaST2, the SUSE configuration tool.

YaST2 can be run either in graphical…

Screenshot-YaST2 Control Center-1

…or in text mode.

YaST2_text_mode

I decided to use the text mode since it will be by far the most common use case; anyway, in both cases the procedure is exactly the same.

Go to the Network Services section and select Windows Domain Membership. The Windows Domain Membership configuration screen will appear.
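
As a shortcut you can also jump straight into that screen from the shell; if I'm not mistaken the corresponding YaST module is samba-client, so something like the line below should work (take the module name as an assumption on my part).

sles11-01:~ # yast samba-client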

In the Membership area enter the domain name and configure the options that best suit your environment, including the other sections of the screen.

YaST2_WinDom_config

I configured it to allow SSH single sign-on, more on this later, and to create a home directory for the user on his first login.

You should also take the NTP configuration into account since time synchronization is a critical component of Active Directory authentication.
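
A quick one-off synchronization against the Domain Controller can be done from the shell; this is only a sketch, assuming the ntpdate utility from the NTP packages is installed and dc.vjlab.local is your Domain Controller.

sles11-01:~ # ntpdate dc.vjlab.local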

Select OK to acknowledge your selection and a small pop-up will show up informing you that the host is not part of the domain and asking whether you want to join it.

YaST2_domain_confirmation

Next you must enter the password of the domain Administrator.

YaST2_domain_admin_password

And YaST will finally confirm the success of the operation.

YaST2_domain_joined

At this point the basic configuration is done and the server should be integrated into the Windows Domain.
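
Before going any further it is worth checking that the join really worked. The commands below are a quick sketch, assuming the winbind service is already running; net and wbinfo are part of the Samba packages listed in the prerequisites.

sles11-01:~ # net ads testjoin
sles11-01:~ # wbinfo -t
sles11-01:~ # wbinfo -u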

Under the hood this process has modified several configuration files in order to get the system ready to authenticate against Active Directory:

  • smb.conf
  • krb5.conf
  • nsswitch.conf

smb.conf

The first one is the configuration file for the Samba service. As you probably know, Samba is an open source implementation of the Windows SMB/CIFS protocol; it allows Unix systems to integrate almost transparently into a Windows Domain infrastructure and also provides file and print services for Windows clients.

The file resides in /etc/samba. Take a look at its contents; the relevant part is the [global] section.

[global]
        workgroup = VJLAB
        passdb backend = tdbsam
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        include = /etc/samba/dhcp.conf
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = No
        idmap gid = 10000-20000
        idmap uid = 10000-20000
        realm = VJLAB.LOCAL
        security = ADS
        template homedir = /home/%D/%U
        template shell = /bin/bash
        winbind refresh tickets = yes

krb5.conf

The krb5.conf file is the Kerberos configuration file, which contains the necessary information for the Kerberos library.

jreypo@sles11-01:/etc> cat krb5.conf
[libdefaults]
        default_realm = VJLAB.LOCAL
        clockskew = 300
[realms]
        VJLAB.LOCAL = {
                kdc = dc.vjlab.local
                default_domain = vjlab.local
                admin_server = dc.vjlab.local
        }
[logging]
        kdc = FILE:/var/log/krb5/krb5kdc.log
        admin_server = FILE:/var/log/krb5/kadmind.log
        default = SYSLOG:NOTICE:DAEMON
[domain_realm]
        .vjlab.local = VJLAB.LOCAL
[appdefaults]
        pam = {
                ticket_lifetime = 1d
                renew_lifetime = 1d
                forwardable = true
                proxiable = false
                minimum_uid = 1
        }
jreypo@sles11-01:/etc>

nsswitch.conf

The nsswitch.conf file, as stated by its man page, is the System Databases and Name Service Switch configuration file. Basically it lists the different databases the system will look into for authentication information when a user tries to log into the server.

Have a quick look at the file and you will notice the two fields that changed, passwd and group. In both, the winbind option has been added to tell the system to use Winbind, the Name Service Switch daemon used to resolve NT server names.

passwd: compat winbind
group:  compat winbind
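
A simple way to check that winbind resolution is working through NSS is getent; this is just an illustrative check using the juanma domain account mentioned later in the post.

sles11-01:~ # getent passwd 'VJLAB\juanma'
sles11-01:~ # getent group 'VJLAB\domain admins'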

SSH single sign-on

Finally we need to test the SSH connection to the host using a domain user account. When asked for the login credentials use the DOMAIN\USER format for the user name.

SSH_auth

This kind of integration is very useful, especially for bigger shops, because you don't have to maintain the user list of your SLES servers individually, only the root account, since the other accounts can be centrally managed from the Windows Domain.

However, there is one issue that must be taken into account: SSH single sign-on means that anyone with a domain account can log into your Linux servers, and we don't want that.

To prevent this potentially dangerous situation we are going to limit the access only to those groups of users that really need it. I’m going to use the Domain Admins to show you how.

First we need to look for the Domain Admins group ID within our Linux box. Log in as DOMAIN\Administrator and use the id command to get the user info.

VJLAB\administrator@sles11-01:~> id
uid=10000(VJLAB\administrator) gid=10000(VJLAB\domain users) groups=10000(VJLAB\domain users),10001(VJLAB\schema admins),10002(VJLAB\domain admins),10003(VJLAB\enterprise admins),10004(VJLAB\group policy creator owners)
VJLAB\administrator@sles11-01:~>

There are several group IDs; for our purposes we need VJLAB\domain admins, which is 10002.

You might be asking yourself: isn't the GID 10000 rather than 10002? Yes, you are right, and because of that we need to make some changes at the Domain Controller level.

Fire up Server Manager and go to Roles –> Active Directory Domain Services –> Active Directory Users and Computers –> DOMAIN –> Users.

server_manager

On the right pane edit the properties of the account you want to be able to access the linux server via SSH. In my case I used my own account juanma. In the Member of tab select the Domain Admins group and click Set Primary Group.

member_of

Now we need to modify how PAM manages the authentication for sshd. Go back to SLES and edit /etc/pam.d/sshd.

#%PAM-1.0
auth     requisite      pam_nologin.so
auth     include        common-auth
account  include        common-account
password include        common-password
session  required       pam_loginuid.so
session  include        common-session

Delete the account line and add the following two lines.

account  sufficient     pam_localuser.so
account  sufficient     pam_succeed_if.so gid = 10002

The sshd file should look like this:

#%PAM-1.0
auth     requisite      pam_nologin.so
auth     include        common-auth
account  sufficient     pam_localuser.so
account  sufficient     pam_succeed_if.so gid = 10002
password include        common-password
session  required       pam_loginuid.so
session  include        common-session

What did we do? First we removed the ability of every domain user to log in via SSH, and then we allowed only the server's local users and the Domain Admins to log into the server.

And we are done. Any comment would be welcome as always :-)

Juanma.

iSCSI initiator configuration in RedHat Enterprise Linux 5

February 22, 2011 8 comments

The following post discusses iSCSI initiator configuration in RedHat Enterprise Linux 5; this method is also applicable to all RHEL5 derivatives. The iSCSI LUNs will be provided by an HP P4000 array.

First of all we need to get and install the iscsi-initiator-utils RPM package; you can use yum to get and install it from any supported repository for CentOS or RHEL. You can also download the package from RedHat Network if you have a valid RHN account and your system doesn't have an internet connection.

[root@rhel5 ~]# rpm -ivh /tmp/iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root@rhel5 ~]#
[root@rhel5 ~]#rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.871-0.16.el5
[root@rhel5 ~]# rpm -qi iscsi-initiator-utils-6.2.0.871-0.16.el5
Name        : iscsi-initiator-utils        Relocations: (not relocatable)
Version     : 6.2.0.871                         Vendor: Red Hat, Inc.
Release     : 0.16.el5                      Build Date: Tue 09 Mar 2010 09:16:29 PM CET
Install Date: Wed 16 Feb 2011 11:34:03 AM CET      Build Host: x86-005.build.bos.redhat.com
Group       : System Environment/Daemons    Source RPM: iscsi-initiator-utils-6.2.0.871-0.16.el5.src.rpm
Size        : 1960412                          License: GPL
Signature   : DSA/SHA1, Wed 10 Mar 2010 04:26:37 PM CET, Key ID 5326810137017186
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
URL         : http://www.open-iscsi.org
Summary     : iSCSI daemon and utility programs
Description :
The iscsi package provides the server daemon for the iSCSI protocol,
as well as the utility programs used to manage it. iSCSI is a protocol
for distributed disk access using SCSI commands sent over Internet
Protocol networks.
[root@rhel5 ~]#

Next we are going to configure the initiator. The iSCSI initiator is composed of two services, iscsi and iscsid; enable them to start at system startup using chkconfig.

[root@rhel5 ~]# chkconfig iscsi on
[root@rhel5 ~]# chkconfig iscsid on
[root@rhel5 ~]#
[root@rhel5 ~]# chkconfig --list | grep iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@rhel5 ~]#
[root@rhel5 ~]#

Once iSCSI is configured start the service.

[root@rhel5 ~]# service iscsi start
iscsid is stopped
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: iscsiadm: No records found!
                                                           [  OK  ]
[root@rhel5 ~]#
[root@rhel5 ~]# service iscsi status
iscsid (pid  14170) is running...
[root@rhel5 ~]#

From the P4000 CMC we need to add the server to the management group configuration like we would do with any other server.

The server iqn can be found in the file /etc/iscsi/initiatorname.iscsi.

[root@cl-node1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:2551bf29b48
[root@cl-node1 ~]#

Create any iSCSI volumes you need in the P4000 arrays and assign them to the RedHat system. Then to discover the presented LUNs, from the Linux server run the iscsiadm command.

[root@rhel5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.126.60
192.168.126.60:3260,1 iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01
[root@rhel5 ~]#

Restart the iSCSI initiator to make the new block device available to the operating system.

[root@rhel5 ~]# service iscsi restart
Stopping iSCSI daemon:
iscsid dead but pid file exists                            [  OK  ]
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]
Login to [iface: default, target: iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01, portal: 192.168.126.60,3260]: successful
                                                           [  OK  ]
[root@rhel5 ~]#

Then check that the new disk is available, I used lsscsi but fdisk -l will do.

[root@rhel5 ~]# lsscsi
[0:0:0:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sda
[2:0:0:0]    disk    LEFTHAND iSCSIDisk        9000  /dev/sdb
[root@rhel5 ~]#
[root@rhel5 ~]# fdisk -l /dev/sdb 

Disk /dev/sdb: 156.7 GB, 156766306304 bytes
255 heads, 63 sectors/track, 19059 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
[root@rhel5 ~]#

At this point the iSCSI configuration is done; the new LUNs will remain available across system reboots as long as the iscsi service is enabled.
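
On RHEL 5 the discovered nodes are usually set to log in automatically by default, but you can enforce it explicitly with iscsiadm; this is just a sketch using the target and portal discovered above.

[root@rhel5 ~]# iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:mlab:62:lv-rhel01 -p 192.168.126.60 --op update -n node.startup -v automatic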

Juanma.

HP ProLiant servers management with hpasmcli

February 16, 2011 9 comments

Hpasmcli, the HP Management Command Line Interface, is a scriptable command line tool to manage and monitor HP ProLiant servers through the hpasmd and hpasmxld daemons. It is part of the hp-health package that comes with the HP ProLiant Support Pack, or PSP.

[root@rhel4 ~]# rpm -qa | grep hp-health
hp-health-8.1.1-14.rhel4
[root@rhel4 ~]#
[root@rhel4 ~]# rpm -qi hp-health-8.1.1-14.rhel4
Name        : hp-health                    Relocations: (not relocatable)
Version     : 8.1.1                             Vendor: Hewlett-Packard Company
Release     : 14.rhel4                      Build Date: Fri 04 Jul 2008 07:04:51 PM CEST
Install Date: Thu 02 Apr 2009 05:10:48 PM CEST      Build Host: rhel4ebuild.M73C253-lab.net
Group       : System Environment            Source RPM: hp-health-8.1.1-14.rhel4.src.rpm
Size        : 1147219                          License: 2008 Hewlett-Packard Development Company, L.P.
Signature   : (none)
Packager    : Hewlett-Packard Company
URL         : http://www.hp.com/go/proliantlinux
Summary     : hp System Health Application and Command line Utility Package
Description :
This package contains the System Health Monitor for all hp Proliant systems
with ASM, ILO, & ILO2 embedded management asics.  Also contained are the
command line utilities.
[root@rhel4 ~]#
[root@rhel4 ~]# rpm -ql hp-health-8.1.1-14.rhel4
/etc/init.d/hp-health
/opt/hp/hp-health
/opt/hp/hp-health/bin
/opt/hp/hp-health/bin/IrqRouteTbl
/opt/hp/hp-health/bin/hpasmd
/opt/hp/hp-health/bin/hpasmlited
/opt/hp/hp-health/bin/hpasmpld
/opt/hp/hp-health/bin/hpasmxld
/opt/hp/hp-health/hprpm.xpm
/opt/hp/hp-health/sh
/opt/hp/hp-health/sh/hpasmxld_reset.sh
/sbin/hpasmcli
/sbin/hpbootcfg
/sbin/hplog
/sbin/hpuid
/usr/lib/libhpasmintrfc.so
/usr/lib/libhpasmintrfc.so.2
/usr/lib/libhpasmintrfc.so.2.0
/usr/lib/libhpev.so
/usr/lib/libhpev.so.1
/usr/lib/libhpev.so.1.0
/usr/lib64/libhpasmintrfc64.so
/usr/lib64/libhpasmintrfc64.so.2
/usr/lib64/libhpasmintrfc64.so.2.0
/usr/share/man/man4/hp-health.4.gz
/usr/share/man/man4/hpasmcli.4.gz
/usr/share/man/man7/hp_mgmt_install.7.gz
/usr/share/man/man8/hpbootcfg.8.gz
/usr/share/man/man8/hplog.8.gz
/usr/share/man/man8/hpuid.8.gz
[root@rhel4 ~]#

This handy tool can be used to view and modify several BIOS settings of the server and to monitor the status of the different hardware components like fans, memory modules, temperature, power supplies, etc.

It can be used in two ways:

  • Interactive shell
  • Within a script

The interactive shell supports TAB command completion and command recovery through a history buffer.

[root@rhel4 ~]# hpasmcli
HP management CLI for Linux (v1.0)
Copyright 2004 Hewlett-Packard Development Group, L.P.

--------------------------------------------------------------------------
NOTE: Some hpasmcli commands may not be supported on all Proliant servers.
      Type 'help' to get a list of all top level commands.
--------------------------------------------------------------------------
hpasmcli> help
CLEAR  DISABLE  ENABLE  EXIT  HELP  NOTE  QUIT  REPAIR  SET  SHOW
hpasmcli>

As can be seen in the above example, several main tasks can be performed; to get the usage of any command simply use HELP followed by the command.

hpasmcli> help show
USAGE: SHOW [ ASR | BOOT | DIMM | F1 | FANS | HT | IML | IPL | NAME | PORTMAP | POWERSUPPLY | PXE | SERIAL | SERVER | TEMP | UID | WOL ]
hpasmcli>
hpasmcli> HELP SHOW BOOT
USAGE: SHOW BOOT: Shows boot devices.
hpasmcli>

In my experience SHOW is by far the most used command. Following are examples of some of the tasks.

- Display general information of the server

hpasmcli> SHOW SERVER
System        : ProLiant DL380 G5
Serial No.    : XXXXXXXXX     
ROM version   : P56 11/01/2008
iLo present   : Yes
Embedded NICs : 2
        NIC1 MAC: 00:1c:c4:62:42:a0
        NIC2 MAC: 00:1c:c4:62:42:9e

Processor: 0
        Name         : Intel Xeon
        Stepping     : 11
        Speed        : 2666 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 1
        Level2 Cache : 8192 KBytes
        Status       : Ok

Processor: 1
        Name         : Intel Xeon
        Stepping     : 11
        Speed        : 2666 MHz
        Bus          : 1333 MHz
        Core         : 4
        Thread       : 4
        Socket       : 2
        Level2 Cache : 8192 KBytes
        Status       : Ok

Processor total  : 2

Memory installed : 16384 MBytes
ECC supported    : Yes
hpasmcli>

- Show current temperatures

hpasmcli> SHOW TEMP
Sensor   Location              Temp       Threshold
------   --------              ----       ---------
#1        I/O_ZONE             49C/120F   70C/158F
#2        AMBIENT              23C/73F    39C/102F
#3        CPU#1                30C/86F    127C/260F
#4        CPU#1                30C/86F    127C/260F
#5        POWER_SUPPLY_BAY     52C/125F   77C/170F
#6        CPU#2                30C/86F    127C/260F
#7        CPU#2                30C/86F    127C/260F

hpasmcli>

- Get the status of the server fans

hpasmcli> SHOW FAN
Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#2   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#3   PROCESSOR_ZONE  Yes     NORMAL 41%     Yes        0        Yes          
#4   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#5   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#6   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes           

hpasmcli>

- Show device boot order configuration

hpasmcli> SHOW BOOT
First boot device is: CDROM.
One time boot device is: Not set.
hpasmcli>

- Set USB key as first boot device

hpasmcli> SET BOOT FIRST USBKEY

- Show memory modules status

hpasmcli> SHOW DIMM
DIMM Configuration
------------------
Cartridge #:   0
Module #:      1
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok

Cartridge #:   0
Module #:      2
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok

Cartridge #:   0
Module #:      3
Present:       Yes
Form Factor:   fh
Memory Type:   14h
Size:          4096 MB
Speed:         667 MHz
Status:        Ok
...

In scripting mode hpasmcli can be used directly from the shell prompt with the -s option and the command between quotation marks; this of course allows you to process the output of the commands, like in the example below.

[root@rhel4 ~]# hpasmcli -s "show dimm" | egrep "Module|Status"
Module #:      1
Status:        Ok
Module #:      2
Status:        Ok
Module #:      3
Status:        Ok
Module #:      4
Status:        Ok
Module #:      5
Status:        Ok
Module #:      6
Status:        Ok
Module #:      7
Status:        Ok
Module #:      8
Status:        Ok
[root@rhel4 ~]#
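
Since the output is plain text it is easy to build a quick health check on top of it; for example, a sketch that counts the DIMM status lines that are not Ok, where anything other than zero deserves a closer look.

[root@rhel4 ~]# hpasmcli -s "show dimm" | grep Status | grep -vc Ok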

To execute more than one command sequentially separate them with a semicolon.

[root@rhel4 ~]# hpasmcli -s "show fan; show temp"

Fan  Location        Present Speed  of max  Redundant  Partner  Hot-pluggable
---  --------        ------- -----  ------  ---------  -------  -------------
#1   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#2   I/O_ZONE        Yes     NORMAL 45%     Yes        0        Yes          
#3   PROCESSOR_ZONE  Yes     NORMAL 41%     Yes        0        Yes          
#4   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#5   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes          
#6   PROCESSOR_ZONE  Yes     NORMAL 36%     Yes        0        Yes           

Sensor   Location              Temp       Threshold
------   --------              ----       ---------
#1        I/O_ZONE             47C/116F   70C/158F
#2        AMBIENT              21C/69F    39C/102F
#3        CPU#1                30C/86F    127C/260F
#4        CPU#1                30C/86F    127C/260F
#5        POWER_SUPPLY_BAY     50C/122F   77C/170F
#6        CPU#2                30C/86F    127C/260F
#7        CPU#2                30C/86F    127C/260F

[root@rhel4 ~]#

If you want to play more with hpasmcli go to its man page and to the ProLiant Support Pack documentation.

Juanma.

Understanding RAID management in Linux

November 24, 2010 3 comments

The first thing you must learn about RAID technologies in Linux is that they have nothing in common with HP-UX, and I mean nothing! Yes, there is LVM, but that's all; mirroring a volume group, for example, is not done through LVM commands, in fact you are not going to mirror the volume group but the block device(s) where the volume group resides.

There are two tools to manage RAID in Linux.

  • dmraid
  • mdadm

Dmraid is used to discover and activate software (ATA)RAID arrays, commonly known as fakeRAID, and mdadm is used to manage Linux Software RAID devices.

dmraid

Dmraid uses libdevmapper and the device-mapper kernel driver to perform all its tasks.

The device-mapper is a component of the Linux kernel; it is the way the Linux kernel does all its block device management. It maps one block device onto another and forms the base of volume management (LVM2 and EVMS) and software RAID. Multipathing support is also provided through the device-mapper. Device-mapper support is present in 2.6 kernels, although there are patches for the most recent 2.4 kernel versions.
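
To verify that device-mapper support is present on the running kernel and to see which devices it currently maps you can use the dmsetup utility; a couple of harmless read-only commands as a sketch.

[root@caladan ~]# dmsetup version
[root@caladan ~]# dmsetup targets
[root@caladan ~]# dmsetup ls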

dmraid supports several array types.

[root@caladan ~]# dmraid -l
asr     : Adaptec HostRAID ASR (0,1,10)
ddf1    : SNIA DDF1 (0,1,4,5,linear)
hpt37x  : Highpoint HPT37X (S,0,1,10,01)
hpt45x  : Highpoint HPT45X (S,0,1,10)
isw     : Intel Software RAID (0,1)
jmicron : JMicron ATARAID (S,0,1)
lsi     : LSI Logic MegaRAID (0,1,10)
nvidia  : NVidia RAID (S,0,1,10,5)
pdc     : Promise FastTrack (S,0,1,10)
sil     : Silicon Image(tm) Medley(tm) (0,1,10)
via     : VIA Software RAID (S,0,1,10)
dos     : DOS partitions on SW RAIDs
[root@caladan ~]#

Following are a couple of examples to show dmraid operation.

  • Array discovering
[root@caladan ~]# dmraid -r
/dev/dm-46: hpt45x, "hpt45x_chidjhaiaa-0", striped, ok, 320172928 sectors, data@ 0
/dev/dm-50: hpt45x, "hpt45x_chidjhaiaa-0", striped, ok, 320172928 sectors, data@ 0
/dev/dm-54: hpt45x, "hpt45x_chidjhaiaa-1", striped, ok, 320172928 sectors, data@ 0
/dev/dm-58: hpt45x, "hpt45x_chidjhaiaa-1", striped, ok, 320172928 sectors, data@ 0

[root@caladan ~]#
  • Activate all discovered arrays
[root@caladan ~]# dmraid -ay
  • Deactivate all discovered arrays
[root@caladan ~]# dmraid -an

mdadm

mdadm is a tool to manage Linux software RAID arrays. This tool has nothing to do with the device-mapper; in fact the device-mapper is not aware of the RAID arrays created with mdadm.

To illustrate this take a look at the example below. I created a RAID1 device, /dev/md0, and showed its configuration with mdadm --detail. Then with dmsetup ls I listed all the block devices seen by the device-mapper; as you can see there is no reference to /dev/md0.
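
Since the original screenshot is not reproduced here, the sequence was roughly the following; a sketch assuming the RAID1 device is /dev/md0 as described above.

[root@caladan ~]# mdadm --detail /dev/md0
[root@caladan ~]# dmsetup ls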

Instead, mdadm uses the MD (Multiple Devices) device driver; this driver provides virtual devices created from other independent devices. Currently the MD driver supports the following RAID levels and configurations:

  • RAID1
  • RAID4
  • RAID5
  • RAID6
  • RAID0
  • LINEAR (a concatenated array)
  • MULTIPATH
  • FAULTY (a special failed array type for testing purposes)

The configuration of the MD devices is contained in the /etc/mdadm.conf file.

[root@caladan ~]# cat mdadm.conf
ARRAY /dev/md1 level=raid5 num-devices=3 spares=1 UUID=5c9d6a69:4a0f120b:f6b02789:3bbc8698
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b36f1b1c:87cf9497:73b81e8c:79ee3c44
[root@caladan ~]#

The mdadm tool has seven operation modes.

  1. Assemble
  2. Build
  3. Create
  4. Manage
  5. Misc
  6. Follow or Monitor
  7. Grow

A more detailed description of every major operation mode is provided in the mdadm man page.

Finally below are examples of some of the more common operations with mdadm.

  • Create a RAID1 array
[root@caladan ~]# mdadm --create /dev/md1 --verbose --level raid1 --raid-devices 2 /dev/sd[de]1
mdadm: size set to 1044096K
mdadm: array /dev/md1 started.
[root@caladan ~]#
  • Get detailed configuration of the array
[root@caladan ~]# mdadm --query --detail /dev/md1
/dev/md1:
            Version : 00.90.01
      Creation Time : Tue Nov 23 22:37:05 2010
         Raid Level : raid1
         Array Size : 1044096 (1019.80 MiB 1069.15 MB)
        Device Size : 1044096 (1019.80 MiB 1069.15 MB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 1
        Persistence : Superblock is persistent

        Update Time : Tue Nov 23 22:37:11 2010
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

               UUID : c1893118:c1327582:7dc3a667:aa87dfeb
             Events : 0.2

        Number   Major   Minor   RaidDevice State
           0       8       49        0      active sync   /dev/sdd1
           1       8       65        1      active sync   /dev/sde1
[root@caladan ~]#
  • Destroy the array
[root@caladan ~]# mdadm --remove /dev/md1
[root@caladan ~]# mdadm --stop /dev/md1
[root@caladan ~]# mdadm --detail /dev/md1
mdadm: md device /dev/md1 does not appear to be active.
[root@caladan ~]#
  • Create a RAID5 array with a spare device
[root@caladan ~]# mdadm --create /dev/md1 --verbose --level raid5 --raid-devices 3 --spare-devices 1 /dev/sd[def]1 /dev/sdg1
mdadm: array /dev/md1 started
[root@caladan ~]#
  • Check the status of a running task in the /proc/mdstat file.
[root@caladan ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdi1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
             226467456 blocks level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
             [=========>...........]  resync = 49.1% (18552320/37744576) finish=11.4min speed=27963K/sec

unused devices: <none>
[root@caladan ~]#
  • Generate the mdadm.conf file from the current active devices.
[root@caladan ~]# mdadm --detail --scan
ARRAY /dev/md1 level=raid5 num-devices=3 spares=1 UUID=5c9d6a69:4a0f120b:f6b02789:3bbc8698
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b36f1b1c:87cf9497:73b81e8c:79ee3c44
[root@caladan ~]# mdadm --detail --scan >> mdadm.conf

As a final thought, my recommendation is: if there is a hardware RAID controller available, like the HP Smart Array P400 for example, use it without hesitation, and if not, always use mdadm, even if there is an onboard (fake)RAID controller.

Juanma.

How to rescan the SCSI bus in Linux

October 28, 2010 9 comments

You are in front of a Linux box, a VM really, with a bunch of new disks that must be configured, and suddenly you remember that there is no ioscan in Linux. You will ask yourself 'who is so stupid as to create an operating system without ioscan?', at least I did x-)

Yes, it is true, there is no ioscan in Linux and that means that every time you add a new disk to one of your virtual machines you have to reboot it, at least technically that is the truth. But don't worry, there is a quick and dirty way to circumvent that.

From a root shell issue the following command:

[root@redhat ~]# echo "- - -" > /sys/class/scsi_host/<host_number>/scan

After that, if you do an fdisk -l you will see the new disks.
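
If you don't know which SCSI host the new disks hang from, you can simply list the available hosts and rescan all of them; a quick sketch.

[root@redhat ~]# ls /sys/class/scsi_host/
[root@redhat ~]# for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done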

If you want to rescan your box for new Fibre Channel disks the command is slightly different.

[root@redhat ~# echo "1" > /sys/class/fc_host/host#/issue_lip

For the Fibre Channel part there are also third-party utilities. HP for example provides hp_rescan, which comes with the ProLiant Support Pack.

[root@redhat /]# hp_rescan -h
hp_rescan: rescans LUNs on HP supported FC adapters
Usage: hp_rescan -ailh[n]

-a: rescan all adapters
-i: rescan a specific adapter instance. The specific device should be a
 SCSI host number such as "0" or "6"
-l: lists all FC adapters
-n: do not perform "scsi remove-single-device" when executing probe-luns
-h: help
[root@redhat /]#

If you know other ways to rescan the SCSI bus in a Linux server please comment :-)

Juanma.

LVM and file system basics in HP-UX & Linux

September 6, 2010 17 comments

Now that my daily work is more focused on Linux I find myself performing the same basic administration tasks in Linux that I'm used to doing in HP-UX. Because of that I thought a post explaining how the same basic file system and volume management operations are done in both operating systems was necessary, hope you like it :-)

This is going to be a very basic post intended only as a reference for myself and any other sysadmin coming from either Linux or HP-UX who wants to know how things are done on the other side. Of course this post is no substitute for the official documentation and the corresponding man pages.

I used Red Hat Enterprise Linux 5.5 as the Linux version and 11iv3 as the HP-UX version.

The following topics will be covered:

  • Volume group creation.
  • Logical volume operations.
  • File system operations.

Volume group creation

Physical volume and volume group creation are the most basic tasks in LVM, both in Linux and HP-UX, but although the command syntax is quite similar in both operating systems the whole process differs in many ways.

- HP-UX:

The example used is valid for both 11iv2 and 11iv3 HP-UX versions, with the exception of the persistent DSFs: on 11iv2 you will have to substitute them with the corresponding legacy devices.

First create the physical volumes.

root@hp-ux:/# pvcreate -f /dev/rdisk/disk10
Physical volume "/dev/rdisk/disk10" has been successfully created.
root@hp-ux:/#
root@hp-ux:/# pvcreate -f /dev/rdisk/disk11
Physical volume "/dev/rdisk/disk11" has been successfully created.
root@hp-ux:/#

In /dev create a directory named after the new volume group, change the ownership to root:root and the permissions to 755.

root@hp-ux:/# mkdir -p /dev/vg_new
root@hp-ux:/# chown root:root /dev/vg_new
root@hp-ux:/# chmod 755 /dev/vg_new

Go into the VG subdirectory and create the group device special file. For the Linux guys: in HP-UX each volume group must have a group device special file under its subdirectory in /dev. This group DSF is created with the mknod command and, like any other DSF, the group file must have a major and a minor number.

For LVM 1.0 volume groups the major number must be 64 and for LVM 2.0 ones it must be 128. Regarding the minor number, the first two digits will uniquely identify the volume group and the remaining digits must be 0000. In the below example we're creating a 1.0 volume group.

root@hp-ux:/dev/vg_new# mknod group c 64 0x010000

Change the ownership to root:sys and the permissions to 640.

root@hp-ux:/dev/vg_new# chown root:sys group
root@hp-ux:/dev/vg_new# chmod 640 group

And create the volume group with the vgcreate command; the arguments passed are the two physical volumes previously created and the size in megabytes of the physical extent. The last one is optional and if it is not provided the default of 4MB will be automatically set.

root@hp-ux:/# vgcreate -s 16 vg_new /dev/disk/disk10 /dev/disk/disk11
Volume group "/dev/vg_new" has been successfully created.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:/#
root@hp-ux:/# vgdisplay -v vg_new
--- Volume groups ---
VG Name                     /dev/vg_new
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      0      
Open LV                     0      
Max PV                      16     
Cur PV                      2      
Act PV                      2      
Max PE per PV               6000         
VGDA                        2   
PE Size (Mbytes)            16              
Total PE                    26    
Alloc PE                    0       
Free PE                     26    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0 

   --- Physical volumes ---
   PV Name                     /dev/disk/disk10
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On        

   PV Name                     /dev/disk/disk11
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On     

root@hp-ux:/#

- Linux:

Create the physical volumes. Here is where the first difference appears. In HP-UX a physical volume is always a whole disk, with the exception of boot disks on Itanium systems, but in Linux a physical volume can be either a whole disk or a partition.

For the whole disk the process is pretty much the same as in HP-UX.

[root@rhel /]# pvcreate -f /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@rhel /]# pvdisplay /dev/sdb
  "/dev/sdb" is a new physical volume of "204.00 MB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb
  VG Name               
  PV Size               204.00 MB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               Ngyz7I-Z2hL-8R3b-hzA3-qIVc-tZuY-DbCBYn

[root@rhel /]#

If you decide to use partitions for the PVs the first, and obvious, thing to do is partition the disk. To set up the disk we'll use the fdisk tool; following is an example session:

[root@rhel /]# fdisk /dev/sdc 

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-204, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-204, default 204):
Using default value 204

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdc: 213 MB, 213909504 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         204      208880   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rhel /]#

To explain the session: first a new partition is created with the command n and the size of the partition is set (in this particular case we are using the whole disk); then we must change the partition type, which by default is set to Linux, to Linux LVM. To do that we use the command t and issue 8e as the corresponding hexadecimal code; the available values for the partition types can be shown by typing L.

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): L

 0  Empty           1e  Hidden W95 FAT1 80  Old Minix       bf  Solaris        
 1  FAT12           24  NEC DOS         81  Minix / old Lin c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          82  Linux swap / So c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  83  Linux           c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c7  Syrinx         
 5  Extended        41  PPC PReP Boot   85  Linux extended  da  Non-FS data    
 6  FAT16           42  SFS             86  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set de  Dell Utility   
 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext df  BootIt         
 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       e1  DOS access     
 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e3  DOS R/O        
 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e4  SpeedStor      
 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          eb  BeOS fs        
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi ee  EFI GPT        
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ef  EFI (FAT-12/16/
10  OPUS            55  EZ-Drive        a6  OpenBSD         f0  Linux/PA-RISC b
11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f1  SpeedStor      
12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f4  SpeedStor      
14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f2  DOS secondary  
16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     fb  VMware VMFS    
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep        
1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT            
Hex code (type L to list codes):

The changes are written with w.
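
If the kernel complains that it cannot re-read the partition table because the device is busy, you can force it to pick up the new table without rebooting; partprobe comes with the parted package, a quick sketch below.

[root@rhel /]# partprobe /dev/sdc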

Once the partitions are correctly created, set up the physical volumes.

[root@rhel /]# pvcreate -f /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
[root@rhel /]# pvcreate -f /dev/sdd1
  Physical volume "/dev/sdd1" successfully created
[root@rhel /]#
[root@rhel /]# pvs
  PV         VG    Fmt  Attr PSize   PFree  
  /dev/sda2  sysvg lvm2 a-    19.88G      0
  /dev/sdb         lvm2 --   204.00M 204.00M
  /dev/sdc1        lvm2 --   203.98M 203.98M
  /dev/sdd1        lvm2 --   203.98M 203.98M
[root@rhel /]#

Now that the PVs are created we can proceed with the volume group creation.

[root@rhel /]# vgcreate vg_new /dev/sdc1 /dev/sdd1
 Volume group "vg_new" successfully created
[root@rhel /]# vgdisplay -v vg_new
  Using volume group(s) on command line
  Finding volume group "vg_new"
  /dev/hdc: open failed: No medium found
  --- Volume group ---
  VG Name               vg_new
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               400.00 MB
  PE Size               4.00 MB
  Total PE              100
  Alloc PE / Size       0 / 0   
  Free  PE / Size       100 / 400.00 MB
  VG UUID               lvrrnt-sHbo-eC8j-kC53-Mm5Z-IDDR-RJJtDr

  --- Physical volumes ---
  PV Name               /dev/sdc1     
  PV UUID               kD0jhk-ws8A-ke3L-a7nd-QucS-SAbH-BrmH28
  PV Status             allocatable
  Total PE / Free PE    50 / 50

  PV Name               /dev/sdd1     
  PV UUID               ZP2bLy-FxR3-gYn9-3Dy1-Llgk-6mFI-1iJvTm
  PV Status             allocatable
  Total PE / Free PE    50 / 50

[root@rhel /]#

As you can see, the process in Linux is slightly simpler than in HP-UX.

Logical volume operations

In this part we will see how to create a logical volume, extend this LV and then remove it from the system.

- HP-UX:

The logical volume creation can be done with the lvcreate command. With the -L option we can specify the size in MB of the new lvol; if -l is used instead, the size must be provided in logical extents.

root@hp-ux:/# lvcreate -n lvol_test -L 256 vg_new
Logical volume "/dev/vg_new/lvol_test_S2" has been successfully created with
character device "/dev/vg_new/rlvol_test_S2".
Logical volume "/dev/vg_new/lvol_test_S2" has been successfully extended.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:~# lvdisplay  /dev/vg_new/lvol_test
--- Logical volumes ---
LV Name                     /dev/vg_new/lvol_test
VG Name                     /dev/vg_new
LV Permission               read/write                
LV Status                   available/syncd           
Mirror copies               0            
Consistency Recovery        MWC                 
Schedule                    parallel      
LV Size (Mbytes)            256             
Current LE                  16             
Allocated PE                16             
Stripes                     0       
Stripe Size (Kbytes)        0                   
Bad block                   on         
Allocation                  strict                    
IO Timeout (Seconds)        default             

root@hp-ux:/#

Extend a volume. Of course the first prerequisite to extend a volume is to have enough free physical extents in the volume group.

root@hp-ux:~# lvextend -L 384 /dev/vg_new/lvol_test
Logical volume "/dev/vg_new/lvol_test" has been successfully extended.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:~#
root@hp-ux:~# lvdisplay  /dev/vg_new/lvol_test
--- Logical volumes ---
LV Name                     /dev/vg_new/lvol_test
VG Name                     /dev/vg_new
LV Permission               read/write                
LV Status                   available/syncd           
Mirror copies               0            
Consistency Recovery        MWC                 
Schedule                    parallel      
LV Size (Mbytes)            384             
Current LE                  24             
Allocated PE                24             
Stripes                     0       
Stripe Size (Kbytes)        0                   
Bad block                   on         
Allocation                  strict                    
IO Timeout (Seconds)        default             

root@hp-ux:/#

The final step of this part is to remove the logical volume.

root@hp-ux:/# lvremove /dev/vg_new/lvol_test
The logical volume "/dev/vg_new/lvol_test" is not empty;
do you really want to delete the logical volume (y/n) : y
Logical volume "/dev/vg_new/lvol_test" has been successfully removed.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:/#

- Linux:

Create the logical volume with the lvcreate command, the most basic options (-L, -l, -n) are the same as in HP-UX.

[root@rhel /]# lvcreate -n lv_test -L 256 vg_new
  Logical volume "lv_test" created
[root@rhel /]# lvdisplay /dev/vg_new/lv_test
  --- Logical volume ---
  LV Name                /dev/vg_new/lv_test
  VG Name                vg_new
  LV UUID                m5G2vT-dsE1-CycS-BMYR-3MYZ-4y8O-Mx04B8
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                256.00 MB
  Current LE             16
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

[root@rhel /]#

Now extend the logical volume to 384 megabytes as we did in HP-UX.

[root@rhel /]# lvextend -L 384 /dev/vg_new/lv_test
  Extending logical volume lv_test to 384.00 MB
  Logical volume lv_test successfully resized
[root@rhel /]#
[root@rhel /]# lvdisplay /dev/vg_new/lv_test
  --- Logical volume ---
  LV Name                /dev/vg_new/lv_test
  VG Name                vg_new
  LV UUID                m5G2vT-dsE1-CycS-BMYR-3MYZ-4y8O-Mx04B8
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                384.00 MB
  Current LE             24
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

[root@rhel /]#

Removing a volume from the system, like creation and extension, is a very straightforward process that can be done with one command.

[root@rhel /]# lvremove /dev/vg_new/lv_test
Do you really want to remove active logical volume lv_test? [y/n]: y
  Logical volume "lv_test" successfully removed
[root@rhel /]#

Unlike the volume group section, the basic logical volume operations are performed in almost the same way in both operating systems. Of course if you want to perform mirroring the differences are bigger, but I will leave that for a future post.

File system operations

The final section of the post is about basic file system operations; we are going to create a file system on the logical volume from the previous section and later extend it, this time including the volume group extension.

- HP-UX:

Creating the file system with the newfs command.

root@hp-ux:/# newfs -F vxfs -o largefiles /dev/vg_new/rlvol_test
 version 7 layout
 393216 sectors, 393216 blocks of size 1024, log size 1024 blocks
 largefiles supported
root@hp-ux:/#

Create the mount point and mount the filesystem

root@hp-ux:/# mkdir /data
root@hp-ux:/# mount /dev/vg_new/lvol_test /data

Filesystem extension: in this section we are going to suppose that the volume group does not have enough free physical extents to accommodate the new size of the /data file system.

After creating a new physical volume on disk12 we are going to extend the vg_new VG.

root@hp-ux:/# vgextend vg_new /dev/disk/disk12
Volume group "vg_new" has been successfully extended.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:/#
root@hp-ux:/# vgdisplay -v vg_new
--- Volume groups ---
VG Name                     /dev/vg_new
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      0      
Open LV                     0      
Max PV                      16     
Cur PV                      2      
Act PV                      2      
Max PE per PV               6000         
VGDA                        2   
PE Size (Mbytes)            16              
Total PE                    39    
Alloc PE                    24       
Free PE                     15    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0 

   --- Logical volumes ---
   LV Name                     /dev/vg_mir/lv_sql
   LV Status                   available/syncd           
   LV Size (Mbytes)            384             
   Current LE                  24             
   Allocated PE                24             
   Used PV                     2 

   --- Physical volumes ---
   PV Name                     /dev/disk/disk10
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On        

   PV Name                     /dev/disk/disk11
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On  

   PV Name                     /dev/disk/disk12
   PV Status                   available                
   Total PE                    13       
   Free PE                     13       
   Autoswitch                  On  

root@hp-ux:/#

The next part is to extend the logical volume just as we did in the logical volume operations section.

root@hp-ux:/# lvextend -L 512 /dev/vg_new/lvol_test
Logical volume "/dev/vg_new/lvol_test" has been successfully extended.
Volume Group configuration for /dev/vg_new has been saved in /etc/lvmconf/vg_new.conf
root@hp-ux:~#

And finally the trickiest part of the process, extending the file system. To be able to extend a mounted filesystem in HP-UX the OnlineJFS bundle must be installed.

Use the fsadm command with the -b option and issue the new size in KB as the argument; in the example we want to extend to 512MB, which is 524288 KB.

root@hp-ux:/# fsadm -F vxfs -b 524288 /data
vxfs fsadm: V-3-23585: /dev/vg00/rlvol5 is currently 7731200 sectors - size will be increased
root#hp-ux:/#
root@hp-ux:/# bdf /data
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg_new/lvol_test
                    524288    5243  524288    1% /data
root@hp-ux:/#

- Linux:

Here in the filesystem part is where the commands are completely different to HP-UX. In Linux the most common file system types are ext2 and ext3, although others like ext4 or reiserfs are supported.

To create an ext3 file system issue the mkfs.ext3 command, using as the argument the logical volume on which to create the file system.

[root@rhel ~]# mkfs.ext3 /dev/vg_new/lv_test
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
98304 inodes, 393216 blocks
19660 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
48 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185

Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@rhel ~]#

As in HP-UX create the mount point and mount the file system.

[root@rhel ~]# mkdir /data
[root@rhel ~]# mount /dev/vg_new/lv_test /data
[root@rhel ~]# df -h /data
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_new-lv_test
                      372M   11M  343M   3% /data
[root@rhel ~]#

The final part of the section is the file system extension, as we did in the HP-UX part the first task is to extend the volume group.

[root@rhel ~]# vgextend vg_new /dev/sde1
  Volume group "vg_new" successfully extended
[root@rhel ~]# vgdisplay -v vg_new
    Using volume group(s) on command line
    Finding volume group "vg_new"
  --- Volume group ---
  VG Name               vg_new
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               576.00 MB
  PE Size               16.00 MB
  Total PE              36
  Alloc PE / Size       24 / 384.00 MB
  Free  PE / Size       12 / 192.00 MB
  VG UUID               u32c0h-BPGN-HixT-IzsX-cNnC-EspO-xfweaI

  --- Logical volume ---
  LV Name                /dev/vg_new/lv_test
  VG Name                vg_new
  LV UUID                ZtArMo-Pyyl-BDHX-9CZQ-uEAK-VDqG-t60xy4
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                384.00 MB
  Current LE             24
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Physical volumes ---
  PV Name               /dev/sdc1     
  PV UUID               kD0jhk-ws8A-ke3L-a7nd-QucS-SAbH-BrmH28
  PV Status             allocatable
  Total PE / Free PE    12 / 0

  PV Name               /dev/sdd1     
  PV UUID               ZP2bLy-FxR3-gYn9-3Dy1-Llgk-6mFI-1iJvTm
  PV Status             allocatable
  Total PE / Free PE    12 / 0

  PV Name               /dev/sde1     
  PV UUID               wbiNu5-csig-uwY7-y14y-3C8Q-oeN0-hAT49g
  PV Status             allocatable
  Total PE / Free PE    12 / 12

[root@rhel ~]#

Extend the logical volume with lvextend.

[root@rhel ~]# lvextend -L 512 /dev/vg_new/lv_test
  Extending logical volume lv_test to 512.00 MB
  Logical volume lv_test successfully resized
[root@rhel ~]# lvs
  LV      VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lv_home sysvg  -wi-ao 256.00M                                      
  lv_root sysvg  -wi-ao   5.84G                                      
  lv_swap sysvg  -wi-ao   1.00G                                      
  lv_tmp  sysvg  -wi-ao   1.00G                                      
  lv_usr  sysvg  -wi-ao   9.75G                                      
  lv_var  sysvg  -wi-ao   2.03G                                      
  lv_test vg_new -wi-ao 512.00M                                      
[root@rhel ~]#

Finally, resize the file system; to do that use the resize2fs tool. Unlike fsadm in HP-UX, which needs the new size as an argument in order to extend the file system, if you simply issue the logical volume as the argument the resize2fs utility will extend the file system to the maximum size available in the LV.

[root@rhel ~]# resize2fs /dev/vg_new/lv_test
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/vg_new/lv_test is mounted on /data; on-line resizing required
Performing an on-line resize of /dev/vg_new/lv_test to 524288 (1k) blocks.
The filesystem on /dev/vg_new/lv_test is now 524288 blocks long.

[root@rhel ~]#
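
A df on the mount point is a quick way to confirm that the extension took effect; it should now report roughly 512MB for /data, minus the usual file system overhead.

[root@rhel ~]# df -h /data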

And at this point we are done. Any comments are welcome as always :-)

Juanma.

Categories: HP-UX, Linux

Installing the HP Lefthand CMC in Linux

July 27, 2010 2 comments

Maybe some of you are not aware of this, but the HP Lefthand Central Management Console application is available not only for Windows but also for Linux and HP-UX. The application is included on the SAN/iQ Management Software DVD that can be downloaded from here.

Burn the ISO or mount it in your Linux system. Navigate through the ISO to GUI/Linux/Disk1/InstData; there you will find two files and a directory named VM. Get into the directory and you will find the installer, CMC_Installer.bin.
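
If you go for the mount option, a loop mount of the downloaded image is enough. A sketch, assuming the ISO file is named saniq_mgmt.iso (use the real name of your download) and /mnt/iso as the mount point:

root@wopr:~# mkdir -p /mnt/iso
root@wopr:~# mount -o loop saniq_mgmt.iso /mnt/iso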

Launch the installer, passing it the full path to the installer properties file, in this case the MediaId.properties file that can be found in GUI/Linux/Disk1/InstData.

root@wopr:/mnt/iso/GUI/Linux/Disk1/InstData/VM# ./CMC_Installer.bin -f /mnt/iso/GUI/Linux/Disk1/InstData/MediaId.properties

The CMC will be installed in /opt/LeftHandNetworks/UI. Once the installation is finished, launch the CMC from the shell or create a launcher on your Gnome/KDE desktop, and voilà, you can now control your Lefthand Storage systems from your favorite Linux distro.
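
To launch it from the shell, just look inside the install directory for the launcher; the exact executable name may vary between SAN/iQ versions, so the name below is only a placeholder:

root@wopr:~# ls /opt/LeftHandNetworks/UI
root@wopr:~# /opt/LeftHandNetworks/UI/<launcher> &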

Juanma.

Configure AVIO Lan in HPVM Linux guests

The AVIO Lan drivers for Linux HPVM guests are supported since HPVM 4.0 but, as you will see, enabling them is a little more complicated than in HP-UX guests.

The first prerequisite is to have the HPVM management software installed; once you have this package installed, look for an RPM package called hpvm_lgssn in /opt/hpvm/guest-images/linux/DRIVERS.

root@hpvm-host:/opt/hpvm/guest-images/linux/DRIVERS # ll
total 584
 0 drwxr-xr-x 2 bin bin     96 Apr 13 18:47 ./
 0 drwxr-xr-x 5 bin bin     96 Apr 13 18:48 ../
 8 -r--r--r-- 1 bin bin   7020 Mar 27  2009 README
576 -rw-r--r-- 1 bin bin 587294 Mar 27  2009 hpvm_lgssn-4.1.0-3.ia64.rpm
root@hpvm-host:/opt/hpvm/guest-images/linux/DRIVERS #

Copy the package to the virtual machine with your favorite method and install it.
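
For instance, an scp from the host to the guest would look roughly like this, assuming the guest is reachable from the host under the name sles10:

root@hpvm-host:/opt/hpvm/guest-images/linux/DRIVERS # scp hpvm_lgssn-4.1.0-3.ia64.rpm root@sles10:/var/tmp/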

[sles10]:/var/tmp # rpm -ivh hpvm_lgssn-4.1.0-3.ia64.rpm
Preparing...                ########################################### [100%]
Installing...               ########################################### [100%]

[sles10]:/var/tmp #

Check the installation of the package.

[sles10]:~ # rpm -qa | grep hpvm
hpvm-4.1.0-1
hpvmprovider-4.1.0-1
hpvm_lgssn-4.1.0-3
[sles10]:~ #
[sles10]:~ # rpm -ql hpvm_lgssn
/opt/hpvm_drivers
/opt/hpvm_drivers/lgssn
/opt/hpvm_drivers/lgssn/LICENSE
/opt/hpvm_drivers/lgssn/Makefile
/opt/hpvm_drivers/lgssn/README
/opt/hpvm_drivers/lgssn/hpvm_guest.h
/opt/hpvm_drivers/lgssn/lgssn.h
/opt/hpvm_drivers/lgssn/lgssn_ethtool.c
/opt/hpvm_drivers/lgssn/lgssn_main.c
/opt/hpvm_drivers/lgssn/lgssn_recv.c
/opt/hpvm_drivers/lgssn/lgssn_recv.h
/opt/hpvm_drivers/lgssn/lgssn_send.c
/opt/hpvm_drivers/lgssn/lgssn_send.h
/opt/hpvm_drivers/lgssn/lgssn_trace.h
/opt/hpvm_drivers/lgssn/rh4
/opt/hpvm_drivers/lgssn/rh4/u5
/opt/hpvm_drivers/lgssn/rh4/u5/lgssn.ko
/opt/hpvm_drivers/lgssn/rh4/u6
/opt/hpvm_drivers/lgssn/rh4/u6/lgssn.ko
/opt/hpvm_drivers/lgssn/sles10
/opt/hpvm_drivers/lgssn/sles10/SP1
/opt/hpvm_drivers/lgssn/sles10/SP1/lgssn.ko
/opt/hpvm_drivers/lgssn/sles10/SP2
/opt/hpvm_drivers/lgssn/sles10/SP2/lgssn.ko
[sles10]:~ #

There are two ways to install the driver: compile it or use one of the pre-compiled modules. These pre-compiled modules are for the following distributions and kernels:

  • Red Hat 4 release 5 (2.6.9-55.EL)
  • Red Hat 4 release 6 (2.6.9-67.EL)
  • SLES10 SP1 (2.6.16.46-0.12)
  • SLES10 SP2 (2.6.16.60-0.21)

For other kernels you must compile the driver. In the Linux box of the example I had a supported kernel and distro (SLES10 SP2) but, instead of using the pre-compiled module, I decided to go through the whole process.

Go to the path /opt/hpvm_drivers/lgssn; there you will find the sources of the driver. To compile and install them, execute a simple make install.

[sles10]:/opt/hpvm_drivers/lgssn # make install
make -C /lib/modules/2.6.16.60-0.21-default/build SUBDIRS=/opt/hpvm_drivers/lgssn modules
make[1]: Entering directory `/usr/src/linux-2.6.16.60-0.21-obj/ia64/default'
make -C ../../../linux-2.6.16.60-0.21 O=../linux-2.6.16.60-0.21-obj/ia64/default modules
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_main.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_send.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_recv.o
 CC [M]  /opt/hpvm_drivers/lgssn/lgssn_ethtool.o
 LD [M]  /opt/hpvm_drivers/lgssn/lgssn.o
 Building modules, stage 2.
 MODPOST
 CC      /opt/hpvm_drivers/lgssn/lgssn.mod.o
 LD [M]  /opt/hpvm_drivers/lgssn/lgssn.ko
make[1]: Leaving directory `/usr/src/linux-2.6.16.60-0.21-obj/ia64/default'
find /lib/modules/2.6.16.60-0.21-default -name lgssn.ko -exec rm -f {} \; || true
find /lib/modules/2.6.16.60-0.21-default -name lgssn.ko.gz -exec rm -f {} \; || true
install -D -m 644 lgssn.ko /lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko
/sbin/depmod -a || true
[sles10]:/opt/hpvm_drivers/lgssn #

This will copy the driver to /lib/modules/<KERNEL_VERSION>/kernel/drivers/net/lgssn/.

To ensure that the new driver will be loaded during the startup of the operating system, first add the following line to /etc/modprobe.conf, one line for each interface configured for AVIO Lan.

alias eth1 lgssn
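
For example, if a second interface, say eth2, is later switched to AVIO Lan as well, it gets its own alias line; a sketch, not taken from the original configuration:

alias eth2 lgssn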

The HPVM 4.2 manual says you have to issue the command depmod -a in order to inform the kernel about the change, but if you look at the above log you will see that the last command executed by make install is precisely a depmod -a. Look into the modules.dep file to check that the corresponding line for the lgssn driver has been added.

[sles10]:~ # grep lgssn /lib/modules/2.6.16.60-0.21-default/modules.dep
/lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko:
[sles10]:~ #

At this point, and if you have previously reconfigured the virtual machine, load the module and restart the network services.

[sles10]:/opt/hpvm_drivers/lgssn # insmod /lib/modules/2.6.16.60-0.21-default/kernel/drivers/net/lgssn/lgssn.ko
[sles10]:/opt/hpvm_drivers/lgssn # lsmod |grep lgssn
lgssn                 576136  0
[sles10]:/opt/hpvm_drivers/lgssn #
[sles10]:/opt/hpvm_drivers/lgssn # service network restart
Shutting down network interfaces:
    eth0      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth0      configuration: eth-id-2a:87:14:5c:f9:ed
    eth0                                                              done
    eth1      device: Hewlett-Packard Company Unknown device 1338
    eth1      configuration: eth-id-66:f3:f8:4e:37:d5
    eth1                                                              done
    eth2      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth2      configuration: eth-id-0a:dc:fd:cb:2c:62
    eth2                                                              done
Shutting down service network  .  .  .  .  .  .  .  .  .  .  .  .  .  done
Hint: you may set mandatory devices in /etc/sysconfig/network/config
Setting up network interfaces:
    lo        
    lo       
              IP address: 127.0.0.1/8   
              IP address: 127.0.0.2/8   
Checking for network time protocol daemon (NTPD):                     running
    lo                                                                done
    eth0      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth0      configuration: eth-id-2a:87:14:5c:f9:ed
Warning: Could not set up default route via interface
 Command ip route replace to default via 10.31.12.1 returned:
 . RTNETLINK answers: Network is unreachable
 Configuration line: default 10.31.12.1 - -
 This needs NOT to be AN ERROR if you set up multiple interfaces.
 See man 5 routes how to avoid this warning.

Checking for network time protocol daemon (NTPD):                     running
    eth0                                                              done
    eth1      device: Hewlett-Packard Company Unknown device 1338
    eth1      configuration: eth-id-66:f3:f8:4e:37:d5
    eth1      IP address: 10.31.4.16/24   
Warning: Could not set up default route via interface
 Command ip route replace to default via 10.31.12.1 returned:
 . RTNETLINK answers: Network is unreachable
 Configuration line: default 10.31.12.1 - -
 This needs NOT to be AN ERROR if you set up multiple interfaces.
 See man 5 routes how to avoid this warning.

Checking for network time protocol daemon (NTPD):                     running
    eth1                                                              done
    eth2      device: Intel Corporation 82540EM Gigabit Ethernet Controller
    eth2      configuration: eth-id-0a:dc:fd:cb:2c:62
    eth2      IP address: 10.31.12.11/24   
Checking for network time protocol daemon (NTPD):                     running
    eth2                                                              done
Setting up service network  .  .  .  .  .  .  .  .  .  .  .  .  .  .  done
[sles10]:/opt/hpvm_drivers/lgssn #
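
To double-check that the interface is really bound to the new driver you can ask ethtool, provided the ethtool package is installed; a sketch, output omitted:

[sles10]:~ # ethtool -i eth1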

If you have not configured the network interfaces of the virtual machine, shut down the virtual machine and, from the host, modify each virtual NIC of the guest. Take into account that the AVIO Lan drivers are not supported with localnet virtual switches.

root@hpvm-host:~ # hpvmmodify -P sles10 -m network:avio_lan:0,2:vswitch:vlan2:portid:4
root@hpvm-host:~ # hpvmstatus -P sles10 -d
[Virtual Machine Devices]
...
[Network Interface Details]
network:lan:0,0,0x2A87145CF9ED:vswitch:localnet:portid:4
network:avio_lan:0,1,0x66F3F84E37D5:vswitch:vlan1:portid:4
network:avio_lan:0,2,0x0ADCFDCB2C62:vswitch:vlan2:portid:4
...
root@hpvm-host:~ #

Finally start the virtual machine and check that everything went well and the drivers have been loaded.
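
A minimal sketch of that last step, assuming the same guest name used throughout the post:

root@hpvm-host:~ # hpvmstart -P sles10

Once the guest is up, a lsmod | grep lgssn inside it should list the driver just like before.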

Juanma.
