Archives For containers

Lattice is the latest addition from Pivotal to its portfolio of open source projects. Lattice leverages various components from Cloud Foundry in order to run containerized workloads in a cluster.

  • Diego – The new Cloud Foundry elastic runtime. Acts as an action-based scheduler and provides support for Docker images.
  • Doppler – The log and metric aggregator for the platform and the running workloads.
  • Gorouter – A software-based router with reverse proxy capabilities. Dynamically updated as the containers are spun up and down.

The first fact we need to understand about Lattice is that it is not intended to run production workloads. Instead, Lattice is meant to be run on Virtualbox or VMware Fusion using Vagrant. In the end, Lattice is an easy way to leverage all the power of Cloud Foundry for running containers on your laptop, without having to bother with all the Cloud Foundry installation details.

Deploying Lattice

Installing and running Lattice on your laptop is a relatively easy process. First, download the latest package from the Lattice GitHub Releases page; there are packages available for Linux and OS X.

Unzip the package in a directory with the rest of your virtual machines.

unzip lattice-bundle-v0.4.3-osx.zip

Now copy the ltc utility to a directory in your PATH; I always use /usr/local/bin for this kind of binary.
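For example (the exact layout inside the unzipped bundle may vary between releases, so adjust the source path to match what the unzip produced):

```shell
# Copy the ltc binary from the unzipped bundle into the PATH
# and make sure it is executable. The bundle-internal path is
# an assumption; check yours with ls first.
cp lattice-bundle-v0.4.3-osx/ltc /usr/local/bin/ltc
chmod +x /usr/local/bin/ltc
```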

Running Lattice

Running Lattice should be quite simple: change to the vagrant directory in the Lattice installation path, execute the vagrant up command, and that’s it. However, there is a caveat: by default the Vagrantfile will use the IP address 192.168.11.11 if the LATTICE_SYSTEM_IP variable is not provided during the execution.

To avoid this issue, pass the LATTICE_SYSTEM_IP variable to Vagrant during the execution. I have personally used both VMware Fusion and VMware AppCatalyst, but you can use Virtualbox too. For AppCatalyst the only requirement is to have appcatalyst-daemon running, since it is needed by the Vagrant provider.

LATTICE_SYSTEM_IP=192.168.161.11 vagrant up --provider vmware_fusion

With this we will have our Lattice instance up and running. Next we need to tell ltc how to connect to our Lattice instance; this operation is called targeting.

ltc target 192.168.161.11.xip.io

With the API endpoint set, let’s deploy our first application. For this example we will use the Lattice example app. Run the ltc create command with the name of the new app and the container image to be spun up as the arguments.
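A minimal sketch of the command, assuming the cloudfoundry/lattice-app Docker image that the Lattice examples use (swap in any image you like):

```shell
# Deploy the example app under the name "my-app".
# cloudfoundry/lattice-app is the image used in the Lattice
# example docs; any other Docker image would work here too.
ltc create my-app cloudfoundry/lattice-app
```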

[Screenshot: output of ltc create]

Open your favorite browser and access https://my-app.192.168.161.11.xip.io.

[Screenshot: the example app responding in the browser]

The index indicates the instance that we are accessing. Next we will scale the application up, adding two additional containers. Use ltc scale to add additional instances of the app and ltc status to retrieve the status.
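The two commands would look roughly like this; the instance count of 3 matches the three indices that show up in the logs below:

```shell
# Scale my-app from one instance to three...
ltc scale my-app 3
# ...and verify that all three instances are running.
ltc status my-app
```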

[Screenshot: output of ltc scale and ltc status]

Another useful operation with ltc is retrieving the logs for your app.

ltc logs my-app
10/02 14:30:11.10 [APP|2] Lattice-app. Says Hello. on index: 2
10/02 14:30:11.28 [APP|0] Lattice-app. Says Hello. on index: 0
10/02 14:30:11.60 [APP|1] Lattice-app. Says Hello. on index: 1
10/02 14:30:12.10 [APP|2] Lattice-app. Says Hello. on index: 2
10/02 14:30:12.28 [APP|0] Lattice-app. Says Hello. on index: 0
10/02 14:30:12.60 [APP|1] Lattice-app. Says Hello. on index: 1

I’ll let you add more apps to Lattice and play around with ltc.

Comments are welcome, as always.

Juanma.

OpenVZ in CentOS 5.4

April 4, 2010

First something I completely forgot in my first post. I discovered OpenVZ thanks to Vivek Gite’s great site nixCraft. This post and the previous one are inspired by his nice series of posts about OpenVZ. Now the show can begin :-)

As I said in my first post about OpenVZ, I decided to set up a test server. Since I didn’t have a spare box in my homelab I created a VM inside VMware Workstation; the performance isn’t the same as in a physical server, but this is a test and learn environment so it will suffice.

There is a Debian-based bare-metal installer ISO named Proxmox Virtual Environment, and OpenVZ is also supported in many Linux distributions, each with its own installation method. I chose CentOS for my Host node server because it is one of my favorite Linux server distros.

  • Add the yum repository to the server:
[root@openvz ~]# cd /etc/yum.repos.d/
[root@openvz yum.repos.d]# ls
CentOS-Base.repo  CentOS-Media.repo
[root@openvz yum.repos.d]#  wget http://download.openvz.org/openvz.repo
--2010-04-04 00:53:12--  http://download.openvz.org/openvz.repo
Resolving download.openvz.org... 64.131.90.11
Connecting to download.openvz.org|64.131.90.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3182 (3.1K) [text/plain]
Saving to: `openvz.repo'

100%[==========================================================================================>] 3,182       --.-K/s   in 0.1s    

2010-04-04 00:53:14 (22.5 KB/s) - `openvz.repo' saved [3182/3182]

[root@openvz yum.repos.d]# rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
[root@openvz yum.repos.d]#
  • Install the OpenVZ kernel. In my particular case I used the basic kernel, but there are also SMP+PAE, PAE and Xen kernels available:
[root@openvz yum.repos.d]# yum install ovzkernel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * addons: ftp.dei.uc.pt
 * base: ftp.dei.uc.pt
 * extras: ftp.dei.uc.pt
 * openvz-kernel-rhel5: openvz.proserve.nl
 * openvz-utils: openvz.proserve.nl
 * updates: ftp.dei.uc.pt
addons                                                                                                       |  951 B     00:00     
base                                                                                                         | 2.1 kB     00:00     
extras                                                                                                       | 2.1 kB     00:00     
openvz-kernel-rhel5                                                                                          |  951 B     00:00     
openvz-utils                                                                                                 |  951 B     00:00     
updates                                                                                                      | 1.9 kB     00:00     
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package ovzkernel.i686 0:2.6.18-164.15.1.el5.028stab068.9 set to be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                 Arch               Version                                         Repository                         Size
====================================================================================================================================
Installing:
 ovzkernel               i686               2.6.18-164.15.1.el5.028stab068.9                openvz-kernel-rhel5                19 M

Transaction Summary
====================================================================================================================================
Install      1 Package(s)         
Update       0 Package(s)         
Remove       0 Package(s)         

Total download size: 19 M
Is this ok [y/N]: y
Downloading Packages:
ovzkernel-2.6.18-164.15.1.el5.028stab068.9.i686.rpm                                                          |  19 MB     00:19     
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
 Installing     : ovzkernel                                                                                                    1/1 

Installed:
 ovzkernel.i686 0:2.6.18-164.15.1.el5.028stab068.9                                                                                 

Complete!
[root@openvz yum.repos.d]#
  • Install the OpenVZ management utilities:
[root@openvz yum.repos.d]# yum install vzctl vzquota
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * addons: centos.cict.fr
 * base: ftp.dei.uc.pt
 * extras: centos.cict.fr
 * openvz-kernel-rhel5: mirrors.ircam.fr
 * openvz-utils: mirrors.ircam.fr
 * updates: ftp.dei.uc.pt
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package vzctl.i386 0:3.0.23-1 set to be updated
--> Processing Dependency: vzctl-lib = 3.0.23-1 for package: vzctl
--> Processing Dependency: libvzctl-0.0.2.so for package: vzctl
---> Package vzquota.i386 0:3.0.12-1 set to be updated
--> Running transaction check
---> Package vzctl-lib.i386 0:3.0.23-1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                         Arch                       Version                        Repository                          Size
====================================================================================================================================
Installing:
 vzctl                           i386                       3.0.23-1                       openvz-utils                       143 k
 vzquota                         i386                       3.0.12-1                       openvz-utils                        82 k
Installing for dependencies:
 vzctl-lib                       i386                       3.0.23-1                       openvz-utils                       175 k

Transaction Summary
====================================================================================================================================
Install      3 Package(s)         
Update       0 Package(s)         
Remove       0 Package(s)         

Total download size: 400 k
Is this ok [y/N]: y
Downloading Packages:
(1/3): vzquota-3.0.12-1.i386.rpm                                                                             |  82 kB     00:00     
(2/3): vzctl-3.0.23-1.i386.rpm                                                                               | 143 kB     00:00     
(3/3): vzctl-lib-3.0.23-1.i386.rpm                                                                           | 175 kB     00:00     
------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                               201 kB/s | 400 kB     00:01     
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
 Installing     : vzctl-lib                                                                                                    1/3
 Installing     : vzquota                                                                                                      2/3
 Installing     : vzctl                                                                                                        3/3 

Installed:
 vzctl.i386 0:3.0.23-1                                           vzquota.i386 0:3.0.12-1                                          

Dependency Installed:
 vzctl-lib.i386 0:3.0.23-1                                                                                                         

Complete!
[root@openvz yum.repos.d]#
  • Configure the kernel. The following adjustments must be done in the /etc/sysctl.conf file:
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
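After editing the file, the new values can be applied without a reboot (although we will reboot later anyway for the new kernel):

```shell
# Load the updated settings from /etc/sysctl.conf.
sysctl -p
```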
  • Disable SELinux:
[root@openvz ~]# cat /etc/sysconfig/selinux   
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted

# SETLOCALDEFS= Check local definition changes
SETLOCALDEFS=0
[root@openvz ~]#
  • Reboot the server with the new kernel.
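Once the server is back, a quick way to confirm it booted into the OpenVZ kernel is to check the running kernel version, which should match the ovzkernel package installed above:

```shell
# The version string should carry the OpenVZ "stab" suffix,
# e.g. 2.6.18-164.15.1.el5.028stab068.9 in this installation.
uname -r
```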

  • Check the OpenVZ service:
[root@openvz ~]# chkconfig --list vz
vz              0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@openvz ~]# service vz status
OpenVZ is running...
[root@openvz ~]#

The first part is over; now we are going to create a VPS as a proof of concept.

  • Download the template of the Linux distribution to install as a VPS and place it in /vz/template/cache.

[root@openvz /]# cd vz/template/cache/
[root@openvz cache]# wget http://download.openvz.org/template/precreated/centos-5-x86.tar.gz
--2010-04-04 23:20:20--  http://download.openvz.org/template/precreated/centos-5-x86.tar.gz
Resolving download.openvz.org... 64.131.90.11
Connecting to download.openvz.org|64.131.90.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 179985449 (172M) [application/x-gzip]
Saving to: `centos-5-x86.tar.gz'

100%[==========================================================================================>] 179,985,449  987K/s   in 2m 58s  

2010-04-04 23:23:19 (988 KB/s) - `centos-5-x86.tar.gz' saved [179985449/179985449]

[root@openvz cache]#
  • Create a new virtual machine using the template.
[root@openvz cache]# vzctl create 1 --ostemplate centos-5-x86
Creating container private area (centos-5-x86)
Performing postcreate actions
Container private area was created
[root@openvz cache]#
  • We have a basic VPS created, but it needs more tweaking before we can start it. Set the IP address, the DNS server and the hostname, give it a name to identify it in the Host node, and finally set the On Boot parameter to automatically start the container with the host.
[root@openvz cache]# vzctl set 1 --ipadd 192.168.1.70 --save
Saved parameters for CT 1
[root@openvz cache]# vzctl set 1 --name vps01 --save
Name vps01 assigned
Saved parameters for CT 1
[root@openvz cache]# vzctl set 1 --hostname vps01 --save
Saved parameters for CT 1
[root@openvz cache]# vzctl set 1 --nameserver 192.168.1.1 --save
Saved parameters for CT 1
[root@openvz cache]# vzctl set 1 --onboot yes --save
Saved parameters for CT 1
[root@openvz cache]#
  • Start the container and check it with vzlist.
[root@openvz cache]# vzctl start vps01
Starting container ...
Container is mounted
Adding IP address(es): 192.168.1.70
Setting CPU units: 1000
Configure meminfo: 65536
Set hostname: vps01
File resolv.conf was modified
Container start in progress...
[root@openvz cache]#
[root@openvz cache]#
[root@openvz cache]# vzlist
 CTID      NPROC STATUS  IP_ADDR         HOSTNAME                        
 1         10 running 192.168.1.70    vps01                           
[root@openvz cache]#
  • Enter the container and check that its operating system is up and running.
[root@openvz cache]# vzctl enter vps01
entered into CT 1
[root@vps01 /]#
[root@vps01 /]# free -m
 total       used       free     shared    buffers     cached
Mem:           256          8        247          0          0          0
-/+ buffers/cache:          8        247
Swap:            0          0          0
[root@vps01 /]# uptime
 02:02:11 up 8 min,  0 users,  load average: 0.00, 0.00, 0.00
[root@vps01 /]#
  • To finish the test, stop the container.
[root@openvz /]# vzctl stop 1
Stopping container ...
Container was stopped
Container is unmounted
[root@openvz /]#
[root@openvz /]# vzlist -a
 CTID      NPROC STATUS  IP_ADDR         HOSTNAME                        
 1          - stopped 192.168.1.70    vps01                           
[root@openvz /]#

And as I like to say… we are done ;-) Next time I will try to cover more advanced topics.

Juanma.

Long time since my last post. I’ve been on holidays! :-D

But don’t worry my dear readers, I did not fall into laziness: when I wasn’t playing with my son I was playing in my homelab with other virtualization technologies, storage appliances, my ESXi servers… what can I say, I’m a Geek. One of the most interesting technologies I’ve been playing with is OpenVZ.

OpenVZ is a container-based, operating system-level virtualization technology for Linux; if you have ever worked with Solaris 10 Zones, this is very similar. The OpenVZ project is supported by Parallels, which has also based its commercial solution, Virtuozzo, on OpenVZ.

OpenVZ (and other container-based technologies) differs from technologies like VMware or HPVM in that the latter virtualize an entire machine with its own OS, disks, RAM, etc. OpenVZ, on the contrary, uses a single Linux kernel and creates multiple isolated instances. Of course there are pros and cons; just to name a couple:

  • VMware, HPVM and other true hypervisors are more flexible, since many different operating systems can be run on top of them; OpenVZ, on the contrary, can only run Linux instances.
  • Since OpenVZ is not a real hypervisor, it does not carry that overhead, so it is very fast.

Glossary of terms:

  • Host node, CT0, VE0: The host where the containers run.
  • VPS, VE: The containers themselves. One Host node can run multiple VPS, and each VPS can run a different Linux distribution such as Gentoo, Ubuntu, CentOS, etc., but every VPS operates under the same Linux kernel.
  • CTID: ConTainer’s IDentifier. A unique number assigned to every VPS and used to manage it.
  • Beancounters: The beancounters, also known as UBC parameter units, are nothing but a set of limits defined by the system administrator. The beancounters ensure that no VPS can abuse the resources of the Host node. The whole list of beancounters is described in detail in the OpenVZ wiki.
  • VPS templates: The templates are the images used to create new containers.

OpenVZ directory structure:

  • /vz – The default main directory.
  • /vz/private – Where the VPS are stored.
  • /vz/template/cache – Where the templates for the different Linux distributions are stored.
  • /etc/vz – Configuration directory for OpenVZ.
  • /etc/vz/vz.conf – OpenVZ configuration file.

Resource management:

The amount of resources from the Host node available to the Virtual Environments can be managed in four different ways.

  • Two-Level Disk Quota: The administrator of the Host node can set up disk quotas for each container.
  • Fair CPU scheduler: A two-level scheduler; on the first level it decides which container is given the CPU time slice, and on the second level the standard Linux scheduler decides which process to run within that container.
  • I/O scheduler: Very similar to the CPU scheduler, also two-level. Priorities are assigned to each container, and the I/O scheduler distributes the available bandwidth according to those priorities.
  • User Beancounters.
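As an illustration of how such limits are applied, the sketch below uses vzctl; the CTID 101 and the values are hypothetical, chosen only for the example:

```shell
# Hypothetical example: give container 101 a 10G soft / 11G hard
# disk quota, and double its default CPU share relative to other
# containers. CTID and values are made up for illustration.
vzctl set 101 --diskspace 10G:11G --save
vzctl set 101 --cpuunits 2000 --save
```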

I’m now in the process of setting up an OpenVZ test server in my homelab, so I will try to cover some of its features more in depth in a future post.

Juanma.