Using LDAPS (LDAP+TLS) from within the Sendmail configuration file

 

Content:

1. Introduction.

2. Installing and configuring OpenLDAP certificate database

3. SELinux configuration.

4. LDAP+TLS in sendmail.mc/sendmail.cf.

 

1. Introduction.

If one needs to connect the Sendmail daemon securely to an LDAP directory server by using LDAP+TLS, the existing OpenLDAP integration of Sendmail has to be enabled and used. Most modern Linux distributions provide, as part of their package collections, Sendmail compiled with OpenLDAP integration. But when it comes to configuring Sendmail to connect to an LDAP server over a TCP session secured with TLS, it is very hard to find a useful example online. Almost all available examples explain how to configure Sendmail to use an LDAP server through a plain TCP session. The goal of this document is to explain how to do that configuration. The explanations below are fully compatible with a Sendmail setup based on CentOS 7 or Red Hat Enterprise Linux 7, but they can be adapted to any other modern Linux distribution as well.

 

2. Installing and configuring OpenLDAP certificate database.

In CentOS 7 and Red Hat Enterprise Linux 7 the OpenLDAP clients use by default the configuration and certificate database located in the directory /etc/openldap. That folder is supplied to the system by the package named openldap. In most cases, depending on the type of the installation, the package openldap should already be present in the system. Nevertheless, one must check and verify that the package exists and is up to date (not keeping your system up to date is risky). If the package openldap is not present, install it by using yum:

# yum install openldap

If the installation is successful the package will create the folders /etc/openldap and /etc/openldap/certs. The latter contains the NSS database:

/etc/openldap/certs/cert8.db
/etc/openldap/certs/key3.db
/etc/openldap/certs/password
/etc/openldap/certs/secmod.db

The file named "password" there contains the password for unlocking the NSS database when accessing the stored private keys and passwords. The NSS database is created empty by default, which means that one must add to it at least the CA certificate needed to verify the validity of the X.509 certificate of the LDAP server Sendmail will connect to. For example, if the CA X.509 certificate "COMODO RSA Certification Authority", stored in PEM format in the file /tmp/COMODO_RSA_Certification_Authority.crt, should be added to the NSS database and trusted, that can be done in the following way:

# cd /etc/openldap/certs
# certutil -A -d . -n "COMODO RSA Certification Authority" -a -i /tmp/COMODO_RSA_Certification_Authority.crt -t "CT,c,"

Please note that the use of the NSS libraries with OpenLDAP is specific to CentOS 7 and Red Hat Enterprise Linux 7. Other Linux distributions might use the OpenSSL libraries instead.

 

3. SELinux configuration.

By default the Sendmail OpenLDAP client process cannot access the NSS certificate database of OpenLDAP. In order to make the access possible, one needs to set the SELinux boolean authlogin_nsswitch_use_ldap to true:

# setsebool -P authlogin_nsswitch_use_ldap 1

 

4. LDAP+TLS in sendmail.mc/sendmail.cf.

LDAP+TLS can be configured by using a specific URI format "-H ldaps://hostname:port", where the port number is optional. Below is a detailed example in m4 format which needs to become part of the m4 Sendmail configuration file sendmail.mc:

define(`confLDAP_DEFAULT_SPEC', `-H ldaps://directory.example.com -b "o=example.com" -d "cn=sendmail,ou=Special Users,o=example.com" -M simple -P /etc/mail/password-sendmail.ldap')dnl

If one needs to specify the LDAP client settings directly in sendmail.cf, the following configuration line should be added there:

O LDAPDefaultSpec=-H ldaps://directory.example.com -b "o=example.com" -d "cn=sendmail,ou=Special Users,o=example.com" -M simple -P /etc/mail/password-sendmail.ldap

Compiling and installing GROMACS 2016 by using Intel C/C++ and Fortran compiler, and adding CUDA support


Content:

1. Introduction.

2. Setting the building environment.

3. Short notes on AVX-capable CPUs support available in GROMACS.

4. Downloading and installing CUDA.

5. Compiling and installing OpenMPI.

6. Compiling and installing GROMACS.

7. Invoking GROMACS.


1. Introduction.

GROMACS is an open source software package for performing molecular dynamics simulations. It also provides an excellent set of tools which can be used to analyze the results of the simulations. GROMACS is fast and robust and its code supports a wide range of run-time and compile-time optimizations. This document explains how to compile GROMACS 2016 on CentOS 7 and Scientific Linux 7 with CUDA support, by means of using the Intel C/C++ and Fortran compilers.

Before starting with the compilation, be sure that you are aware of the following:

  • Do not use the latest version of GROMACS for production right after its official release, unless you are a developer or just want to see what is new. Every software product based on such a huge amount of source code might contain some critical bugs at the beginning. Wait for 1-2 weeks after the release date and then check the GROMACS user-support forums carefully. If you see no critical bugs reported there (or minor bugs which might affect your simulations in particular), you can compile the latest release of GROMACS. Even then, test the build against some known simulation results of yours. If you see no big differences (or only expected ones), you can proceed with the implementation of the latest GROMACS release on your system for production.

  • If you administer an HPC facility where the compute nodes are equipped with different processors, you most probably need to compile the GROMACS code separately to match the features of each CPU type. To do so, create a build host for each CPU type by using nodes that match that type. Compile GROMACS there and then clone the installation to the rest of the nodes of the same CPU type.

  • Always use the latest CUDA compatible to the particular GROMACS release (carefully check the GROMACS GPU documentation).

  • During the compilation of GROMACS always build its own FFTW library. That really boosts the performance of GROMACS.

  • Compiling OpenMPI with the Intel compilers is not of critical importance (the system OpenMPI libraries provided by the Linux distributions could be employed instead), but it might improve the performance of the simulations. The Intel C/C++ and Fortran compiler suite also provides its native MPI support that could be used, but having a freshly compiled OpenMPI helps you stay up to date with the recent MPI development. Before starting the compilation, be absolutely sure what libraries and compiler options you need to successfully compile your custom OpenMPI and GROMACS!

  • Use the latest Intel C/C++ and Fortran Compiler if possible. That largely guarantees that the specific processor features of the GPU and CPU will be taken into account by the C/C++ and Fortran compilers during the compilation process.

 

2. Setting the building environment.

Before starting be sure you have your build folder created. You might need an unprivileged user to perform the compilation. Open this document to see how to do that:

https://vessokolev.blogspot.com/2016/08/speeding-up-your-scientific-python-code.html

See paragraphs 2, 3, and 4 there.

 

3. Short notes on AVX-capable CPUs support available in GROMACS.

If the output of cat /proc/cpuinfo shows the avx2 CPU flag (see the "flags" line in the example below):

$ cat /proc/cpuinfo
...
processor : 23
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz
stepping : 2
microcode : 0x37
cpu MHz : 1221.156
cache size : 30720 KB
physical id : 0
siblings : 24
core id : 13
cpu cores : 12
apicid : 27
initial apicid : 27
fpu : yes
fpu_exception : yes
cpuid level : 15
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc
bogomips : 4589.54
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

then your CPU supports Intel® Advanced Vector Extensions 2 (Intel® AVX2). GROMACS supports AVX2 and that feature boosts the performance of the simulations significantly when computing bonded interactions. More on these CPU architecture features here:

https://software.intel.com/en-us/articles/how-intel-avx2-improves-performance-on-server-applications

 

4. Downloading and installing CUDA.

You need to install the CUDA rpm packages on both the build host and the compute nodes. The easiest and most efficient way to do so, and to get updates later (when any are available), is through yum. To install the NVidia CUDA yum repository file visit:

https://developer.nvidia.com/cuda-downloads

and download the repository rpm file.

An alternative way to get the repository rpm file is to browse the NVidia CUDA repository directory at:

http://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64

scroll down there, then find and download the rpm file named "cuda-repo-rhel7-*" (select the most recent one). Then install it locally by using yum localinstall:

# yum localinstall /path/to/cuda-repo-rhel7-*.rpm

Once ready with the CUDA repository installation, become root and install the CUDA Toolkit rpm packages:

# yum install cuda

Note that the installation process takes time, which mainly depends on both the network connectivity and the performance of the local system. Also note that yum automatically installs (through dependencies in the rpm packages) DKMS to support rebuilding the NVidia kernel modules when booting a new kernel.

If you do not expect to use your compute nodes for compiling any code with CUDA support and will only execute compiled binary code there, you might not need to install all rpm packages through the meta package "cuda" (as shown above). You could specify which of the packages you really need to install there (see the repository). To preview all packages provided by the NVidia CUDA repository execute:

# yum --disablerepo="*" --enablerepo="cuda" list available

An alternative way to preview all packages available in the repository "cuda" is to use the locally cached sqlite3 database of that repository:

# yum makecache
# HASH=`ls -p /var/cache/yum/x86_64/7/cuda/ | grep -v '/$' | grep primary.sqlite.bz2 | awk -F "-" '{print $1}'`
# cp /var/cache/yum/x86_64/7/cuda/${HASH}-primary.sqlite.bz2 ~/tmp
# cd ~/tmp
# bunzip2 ${HASH}-primary.sqlite.bz2
# sqlite3 ${HASH}-primary.sqlite
sqlite> select name,version,arch,summary from packages;

 

5. Compiling and installing OpenMPI

Be sure you have the building environment set as explained before. Then install the packages hwloc-devel and valgrind-devel:

# yum install hwloc-devel valgrind-devel

and finally proceed with the configuration, compilation, and installation:

$ cd /home/builder/compile
$ . ~/.intel_env
$ . /usr/local/appstack/.appstack_env
$ wget https://www.open-mpi.org/software/ompi/v2.0/downloads/openmpi-2.0.0.tar.bz2
$ tar jxvf openmpi-2.0.0.tar.bz2
$ cd openmpi-2.0.0
$ ./configure --prefix=/usr/local/appstack/openmpi-2.0.0 --enable-ipv6 --enable-mpi-fortran --enable-mpi-cxx --with-cuda --with-hwloc
$ gmake
$ gmake install
$ ln -s /usr/local/appstack/openmpi-2.0.0 /usr/local/appstack/openmpi
$ export PATH=/usr/local/appstack/openmpi/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/openmpi/lib:$LD_LIBRARY_PATH

Do not forget to update the variables PATH and LD_LIBRARY_PATH by editing their values in the file /usr/local/appstack/.appstack_env. The OpenMPI installation thus compiled and installed provides your applications with more up-to-date MPI tools and libraries than the ones that might be provided by the recent Intel C/C++/Fortran Compiler package.

 

6. Compiling and installing GROMACS

Be sure you have the building environment set as explained before and OpenMPI installed as shown above. Then proceed with GROMACS compilation and installation:

$ cd /home/builder/compile
$ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-2016.tar.gz
$ tar zxvf gromacs-2016.tar.gz
$ cd gromacs-2016
$ . ~/.intel_env
$ . /usr/local/appstack/.appstack_env
$ cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/appstack/gromacs-2016 -DGMX_MPI=ON -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DMPI_C_LIBRARIES=/usr/local/appstack/openmpi/lib/libmpi.so -DMPI_C_INCLUDE_PATH=/usr/local/appstack/openmpi/include -DMPI_CXX_LIBRARIES=/usr/local/appstack/openmpi/lib/libmpi.so -DMPI_CXX_INCLUDE_PATH=/usr/local/appstack/openmpi/include
$ gmake
$ gmake install
$ export PATH=/usr/local/appstack/gromacs-2016/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/gromacs-2016/lib64:$LD_LIBRARY_PATH

Do not forget to update the variables PATH and LD_LIBRARY_PATH by editing their values in the file /usr/local/appstack/.appstack_env.

 

7. Invoking GROMACS

To invoke the GROMACS build compiled and installed by following the instructions in this document, you need to have the executable gmx_mpi (not gmx!) in your PATH environmental variable, as well as the path to libgromacs_mpi.so in LD_LIBRARY_PATH. You may set those paths by appending to your .bashrc the line:

. /usr/local/appstack/.appstack_env

If you do not want to write this down to .bashrc, you may instead execute the line above as a command only when you need to invoke gmx_mpi.


Reducing the memory usage by flattening the stored symmetric matrices


Content:

1. Introduction.

2. Flattening a symmetric matrix in case all its diagonal elements are zero.

3. Flattening a symmetric matrix with non-zero diagonal.

4. Effectiveness of the flattening.


1. Introduction.

The square matrix R = [rij], where i, j = 1, ..., N, is symmetric if rij = rji for every pair of indices i and j.

With respect to the values of their diagonal elements, the following two types of symmetric matrices are used in the scientific practice:

» all diagonal elements are zero (example: the distance matrix of a system of N atoms);

In this particular case only the elements below or above the diagonal are unique, provided that at least one of them is not zero.

» at least one of the diagonal elements is not zero (example: the matrix of Lennard-Jones cross-parameters for a system of N different types of atoms):

Here only the elements below or above the diagonal, together with the ones on the diagonal, are unique, provided that at least one of the non-diagonal elements is different from zero.

 

The goal of this document is to show an easy and simple method for storing and accessing the elements of these two types of matrices in the computer memory. The idea of the method is to store the 2D array of the symmetric matrix as a 1D array containing only the unique elements of the matrix. This method is included in many software products, but somehow it is not very popular among the scientists who very often write their own scripts and executables. Hopefully this publication could make it more popular.

 

2. Flattening a symmetric matrix in case all its diagonal elements are zero.

When all diagonal elements of a symmetric matrix are zeros, its unique elements are those below or above the diagonal (if at least one of them is not zero).

Then the 2D array of the matrix thus defined could be transformed into a flattened one (denoted by RFL) by storing the elements of its upper triangle consecutively, row by row: r12, r13, ..., r1N, r23, r24, ..., r(N-1)N.

The index (the position) k of the element rij in the flattened array RFL is computed, for 1-based indices and i < j, according to the rule k = (i-1)N - i(i-1)/2 + (j-i).

The size (the number of elements) of RFL in this case is N(N-1)/2. Take carefully into account how the values of i and j are passed to the formula used to compute k: i must be less than j. In case i is bigger than j, the values of i and j should be swapped, because for any symmetric matrix rij = rji.
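The indexing rule above can be sketched in Python. This is a minimal illustration using 0-based indices (natural in Python) and row-major storage of the strictly upper triangle; the function names are mine, not taken from any particular library:

```python
def flat_index_zero_diag(i, j, n):
    """0-based position of r[i][j] (i != j) in the flattened array.

    The strictly upper triangle is stored row by row, so the flattened
    array holds n*(n-1)//2 elements in total.
    """
    if i > j:                      # r[i][j] == r[j][i]: normalize to i < j
        i, j = j, i
    return i * n - i * (i + 1) // 2 + (j - i - 1)

def flatten_zero_diag(r):
    """Flatten a symmetric matrix with zero diagonal into a 1D list."""
    n = len(r)
    flat = [0.0] * (n * (n - 1) // 2)
    for i in range(n):
        for j in range(i + 1, n):
            flat[flat_index_zero_diag(i, j, n)] = r[i][j]
    return flat
```

For example, for N = 4 both r[1][3] and r[3][1] (0-based) map to position 4 of the 6-element flattened array.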

 

3. Flattening a symmetric matrix with non-zero diagonal.

In this case the symmetric matrix has the following unique elements: the elements on the diagonal and those below or above it.

and it could become flattened by means of a linear storage schema that stores, row by row, the diagonal element of each row followed by the elements to its right: r11, r12, ..., r1N, r22, r23, ..., rNN.

The corresponding position k of the element rij (i ≤ j, 1-based indices) in the flattened array RFL could be derived in a similar way: k = (i-1)N - (i-1)(i-2)/2 + (j-i+1).

The size (the number of elements) of RFL in this case is N(N+1)/2. Take carefully into account how the values of i and j are passed to the formula used to compute k: i might be equal to or less than j, but never bigger. In case i is bigger than j, the values of i and j should be swapped, because for any symmetric matrix rij = rji.
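A matching sketch for the non-zero diagonal case, again with 0-based indices and my own (hypothetical) function names; the flattened array now also stores the diagonal:

```python
def flat_index_with_diag(i, j, n):
    """0-based position of r[i][j] in the flattened array of n*(n+1)//2
    elements, storing the upper triangle including the diagonal row by row."""
    if i > j:                      # normalize so that i <= j
        i, j = j, i
    return i * n - i * (i - 1) // 2 + (j - i)

def flatten_with_diag(r):
    """Flatten a symmetric matrix (non-zero diagonal allowed) into a 1D list."""
    n = len(r)
    flat = [0.0] * (n * (n + 1) // 2)
    for i in range(n):
        for j in range(i, n):
            flat[flat_index_with_diag(i, j, n)] = r[i][j]
    return flat
```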

 

4. Effectiveness of the flattening.

The flattening of a 2D symmetric matrix explained above uses almost half of the memory required to accommodate all matrix elements, because only the unique ones are taken into account. That effect could be assessed numerically by dividing the size of the flattened array by the total number of elements of the 2D matrix array. In both cases, for large matrices, the explained method reduces the used amount of memory, measured by the ratio α, roughly twice:

» all diagonal elements are zero: α = [N(N-1)/2] / N^2 = (N-1)/(2N), which tends to 1/2 for large N;

» at least one of the diagonal elements is not zero: α = [N(N+1)/2] / N^2 = (N+1)/(2N), which tends to 1/2 for large N.
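These two limits are easy to check numerically; below is a small Python sketch (the function names are mine):

```python
def alpha_zero_diag(n):
    """Memory ratio flattened/full when the diagonal is all zeros."""
    return (n * (n - 1) // 2) / float(n * n)

def alpha_with_diag(n):
    """Memory ratio flattened/full when the diagonal is stored too."""
    return (n * (n + 1) // 2) / float(n * n)

# Both ratios approach 1/2 as the matrix grows.
for n in (10, 100, 10000):
    print(n, alpha_zero_diag(n), alpha_with_diag(n))
```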

Speeding up your scientific Python code on CentOS and Scientific Linux by using Intel Compilers


Content:

1. Introduction.

2. Installing the rpm packages needed during the compilation process.

3. Create an unprivileged user to run the compilation process.

4. Create folder for installing the compiled packages.

5. Setting the Intel Compilers environmental variables.

6. Compiling and installing SQLite library.

7. Compiling and installing Python 2.7.

8. Compiling and installing BLAS and LAPACK.

9. Installing setuptools.

10. Compiling and installing Cython.

11. Compiling and installing NumPy, SciPy, and Pandas.

12. Compiling and installing Matplotlib (optional).

13. Compiling and installing HDF5 support for Python - h5py.

14. Testing the installed Python modules.

15. Using the installed Python modules.


1. Introduction.

The goal of this document is to describe an easy, safe, and illustrative way to bring more speed to your scientific Python code by compiling Python and a set of important modules (like sqlite3, NumPy, SciPy, Pandas, and h5py) by using the Intel Compilers. The recipes described below are intended to run the compilation and installation as an unprivileged user, which is the safest way to do so. Also, the installation schema used here prevents potential conflicts between the packages installed by the distribution package manager and the ones brought to the local system by following the recipes.

The document is specific to the Linux distributions CentOS and Scientific Linux - the most used Linux distributions in science. With minor changes the recipes could be easily adapted for other Linux distributions that support the Intel Compilers.

Note that the compilation recipes provided below use specific optimizations for the currently used processor. Feel free to change that if you want to spread the products of the compilations over a compute cluster. Also, the recipes might be collected into one and executed as a single configuration and installation script. They are given below separately, mainly to make the details of each package compilation more visible to the reader.

2. Installing the rpm packages needed during the compilation process.

The following packages have to be installed in advance by using yum in order to support the compilation process: gcc, gcc-c++, gcc-gfortran, gcc-objc, gcc-objc++, libtool, cmake, ncurses-devel, openssl-devel, bzip2-devel, zlib-devel, readline-devel, gdbm-devel, tk-devel, and bzip2. Install them all together at once:

# yum install gcc gcc-c++ gcc-gfortran gcc-objc gcc-objc++ libtool cmake ncurses-devel openssl-devel bzip2-devel zlib-devel readline-devel gdbm-devel tk-devel bzip2

 

3. Create an unprivileged user to run the compilation process.

The default settings for creating a user in RHEL, CentOS, and SL are fair enough in this case:

# useradd builder

The user name chosen for running the compilation process is "builder". You might choose a different user name if "builder" is already taken or reserved. Finally, set the password for this new user and/or install an OpenSSH public key (in /home/builder/.ssh/authorized_keys) if this account is supposed to be accessed remotely.

 

4. Create folder for installing the compiled packages.

This documentation uses /usr/local/appstack as a destination folder. To prevent the use of "root" or a super user during the compilation and installation process, create /usr/local/appstack and make it owned by "builder":

# mkdir -p /usr/local/appstack
# chown -R builder:builder /usr/local/appstack

Create (as user "builder") an empty file /usr/local/appstack/.appstack_env:

$ touch /usr/local/appstack/.appstack_env
$ chmod 644 /usr/local/appstack/.appstack_env

which will later be provided to the users who want to update their shell environmental variables in order to use the alternatively compiled packages stored in /usr/local/appstack.

 

5. Setting the Intel Compilers environmental variables.

If the Intel Compilers packages are properly installed and accessible to the user "builder", the following variables have to be exported before invoking the Intel compilers as the default C/C++ and Fortran compilers:

export CC=icc
export CXX=icpc
export CFLAGS='-O3 -xHost -ip -no-prec-div -fPIC'
export CXXFLAGS='-O3 -xHost -ip -no-prec-div -fPIC'
export FC=ifort
export FCFLAGS='-O3 -xHost -ip -no-prec-div -fPIC'
export CPP='icc -E'
export CXXCPP='icpc -E'

Unless it is really necessary, these variables should not appear in either /home/builder/.bashrc or /home/builder/.bash_profile. A possible way to load them occasionally (only when they are needed) is to create the file /home/builder/.intel_env and place the export declarations there. Then they could be loaded within the current bash shell session by executing:

$ . ~/.intel_env

 

6. Compiling and installing SQLite library.

The SQLite library is actively used in a wide range of scientific software applications. In order to make the library faster, its code needs to be compiled with the Intel C/C++ compiler. Here is the recipe how to do that (consider using the latest stable version of SQLite!):

$ cd /home/builder/compile
$ . ~/.intel_env
$ wget https://sqlite.org/2016/sqlite-autoconf-3130000.tar.gz
$ tar zxvf sqlite-autoconf-3130000.tar.gz
$ cd sqlite-autoconf-3130000
$ ./configure --prefix=/usr/local/appstack/sqlite-3.13.0 --enable-shared --enable-readline --enable-fts5 --enable-json1
$ gmake
$ gmake install
$ ln -s /usr/local/appstack/sqlite-3.13.0 /usr/local/appstack/sqlite3
$ export PATH=/usr/local/appstack/sqlite3/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/sqlite3/lib:$LD_LIBRARY_PATH

The last two command lines update the user's environmental variables PATH and LD_LIBRARY_PATH so the next compilation within the same bash shell session could use the paths to the SQLite library and executables. Also, do update PATH and LD_LIBRARY_PATH in the file /usr/local/appstack/.appstack_env, which is supposed to be sourced by the users to get the paths to the alternatively compiled executable binaries and libraries.

 

7. Compiling and installing Python 2.7.

To make the execution of the Python code faster, Python 2.7 should be compiled by using the Intel C/C++ compiler. Note that compiling Python this way makes it very hard to use the Python modules provided by the RPM packages. Hence all required Python modules should also be built in the same manner (custom compilation by using the Intel Compilers) and linked to the custom compiled version of Python. In the scientific practice it is important to use a fast SQLite Python interface. To have it built in, SQLite ought to be compiled with the Intel C/C++ Compiler as described above. Be sure that all required rpm packages are installed in advance, as explained in "Installing the rpm packages needed during the compilation process". Finally, follow this recipe to compile and install a custom Python 2.7 distribution (always use the latest stable Python 2.7 version!):

$ cd /home/builder/compile
$ wget https://www.python.org/ftp/python/2.7.12/Python-2.7.12.tar.xz
$ tar Jxvf Python-2.7.12.tar.xz
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ cd Python-2.7.12
$ ./configure --prefix=/usr/local/appstack/python-2.7.12 --without-gcc --enable-ipv6 --enable-shared CFLAGS=-I/usr/local/appstack/sqlite3/include LDFLAGS=-L/usr/local/appstack/sqlite3/lib CPPFLAGS=-I/usr/local/appstack/sqlite3/include
$ gmake
$ gmake install
$ ln -s /usr/local/appstack/python-2.7.12 /usr/local/appstack/python2
$ export PATH=/usr/local/appstack/python2/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/python2/lib:$LD_LIBRARY_PATH
$ export PYTHONPATH=/usr/local/appstack/python2/lib

The last three lines of the recipe update the environmental variables PATH and LD_LIBRARY_PATH available in the currently running bash shell session, and create a new one - PYTHONPATH (a critically important variable for running any Python modules). They could help the next compilation (if the same bash shell session is used to do that). Also, do update these variables in the file /usr/local/appstack/.appstack_env so that the Python 2.7 installation folder comes first in the path list:

$ export PATH=/usr/local/appstack/python2/bin:/usr/local/appstack/sqlite3/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/python2/lib:/usr/local/appstack/sqlite3/lib:$LD_LIBRARY_PATH

IMPORTANT! Do not forget to include in /usr/local/appstack/.appstack_env the Python path declaration:

export PYTHONPATH=/usr/local/appstack/python2/lib

Otherwise none of the modules compiled below would work properly!

 

8. Compiling and installing BLAS and LAPACK.

In order to compile and install the scipy library, one needs the BLAS and LAPACK libraries compiled and installed locally. It is enough to compile the LAPACK tarball, since it includes the BLAS code and, if compiled properly, provides the libblas.so shared library. To speed up the execution of any code that uses LAPACK and BLAS, the LAPACK source code should be compiled by using the Intel Fortran Compiler according to the recipe given below (always use the latest stable version of LAPACK!):

$ cd /home/builder/compile
$ wget http://www.netlib.org/lapack/lapack-3.6.1.tgz
$ tar zxvf lapack-3.6.1.tgz
$ cd lapack-3.6.1
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/appstack/lapack-3.6.1 -DCMAKE_INSTALL_LIBDIR=/usr/local/appstack/lapack-3.6.1/lib64 -DBUILD_SHARED_LIBS=1
$ gmake
$ gmake install
$ ln -s /usr/local/appstack/lapack-3.6.1 /usr/local/appstack/lapack
$ export LD_LIBRARY_PATH=/usr/local/appstack/lapack/lib64:$LD_LIBRARY_PATH

The last line of the recipe just updates the environmental variable LD_LIBRARY_PATH available within the currently used bash shell session. It could help the next compilation (if the same bash shell session is used). Also, do update LD_LIBRARY_PATH in the file /usr/local/appstack/.appstack_env so that the LAPACK installation folder comes first in the path list:

$ export LD_LIBRARY_PATH=/usr/local/appstack/lapack/lib64:/usr/local/appstack/python2/lib:/usr/local/appstack/sqlite3/lib:$LD_LIBRARY_PATH

An alternative method for bringing the BLAS and LAPACK libraries to scipy is to compile and install ATLAS. Another way is to use the BLAS and LAPACK which are already compiled as static libraries and provided by the Intel C/C++ and Fortran Compiler installation tree. For more details take a look at this discussion:

https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/611135

The method for obtaining the BLAS and LAPACK libraries proposed in this document brings the latest version of these libraries and is easy to perform.

 

9. Installing setuptools.

Setuptools is needed when installing modules external to the Python distribution. The installation process is very short and easy:

$ cd /home/builder/compile
$ wget https://bootstrap.pypa.io/ez_setup.py
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ python2 ez_setup.py

 

10. Compiling and installing Cython.

Cython provides C-extensions for Python and is required by a variety of Python modules - NumPy, SciPy, and Pandas in particular. Its installation is simple and follows the recipe (use the latest stable version of Cython!):

$ cd /home/builder/compile
$ wget https://pypi.python.org/packages/c6/fe/97319581905de40f1be7015a0ea1bd336a756f6249914b148a17eefa75dc/Cython-0.24.1.tar.gz
$ tar zxvf Cython-0.24.1.tar.gz
$ cd Cython-0.24.1
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ python2 setup.py install

 

11. Compiling and installing NumPy, SciPy, and Pandas.

NumPy, SciPy, and Pandas are only three of the Python libraries whose development is coordinated by SciPy.org. The Python modules they provide are usually "a must" in the scientific practice. In many cases they could replace or even surpass their commercially developed and distributed rivals. There are more Python modules there, but they either do not require such a specific compilation (SymPy, IPython) or they might not be usable without running a graphical environment (Matplotlib). The recipe below shows how to compile and install NumPy, SciPy, and Pandas (use their latest stable versions!):

$ cd /home/builder/compile
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ export BLAS=/usr/local/appstack/lapack/lib64
$ export LAPACK=/usr/local/appstack/lapack/lib64
$ wget https://github.com/numpy/numpy/archive/v1.11.1.tar.gz
$ wget https://github.com/scipy/scipy/releases/download/v0.18.0/scipy-0.18.0.tar.gz
$ wget https://pypi.python.org/packages/11/09/e66eb844daba8680ddff26335d5b4fead77f60f957678243549a8dd4830d/pandas-0.18.1.tar.gz
$ tar zxvf v1.11.1.tar.gz
$ tar zxvf scipy-0.18.0.tar.gz
$ tar zxvf pandas-0.18.1.tar.gz
$ cd numpy-1.11.1
$ python2 setup.py install
$ cd ..
$ cd scipy-0.18.0
$ python2 setup.py install
$ cd ..
$ cd pandas-0.18.1
$ python2 setup.py install

 

12. Compiling and installing Matplotlib (optional).

The direct use of Matplotlib requires a graphical user environment, which in most cases is not available in distributed computing. Nevertheless, if Matplotlib needs to be present in the system, it could be compiled and installed in the same manner as NumPy, SciPy, and Pandas before. To provide at least one graphical image output driver, the libpng-devel rpm package has to be installed locally:

# yum install libpng-devel

After that follow the recipe below to compile and install the Matplotlib module for Python (use the latest stable version of Matplotlib!):

$ cd /home/builder/compile
$ wget https://github.com/matplotlib/matplotlib/archive/v1.5.2.tar.gz
$ tar zxvf v1.5.2.tar.gz
$ cd matplotlib-1.5.2
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ python2 setup.py install

 

13. Compiling and installing HDF5 support for Python - h5py.

HDF5 support is essential when using Python to access and manage, fast and adequately, large data structures of different types. Currently the low-level interface to HDF5 in Python is provided by the module h5py. To compile h5py, one first needs to compile the HDF5 framework and install it locally so its libraries are accessible to h5py. Note that by default both CentOS and SL provide HDF5 support, but the executables and libraries which their RPM packages bring to the system are compiled by using GCC. Therefore, if the goal is to achieve high speed of the Python code when using HDF5, both the HDF5 libraries and the h5py module should be compiled by using the Intel C/C++ and Fortran compilers. The example below shows how to do that:

$ cd /home/builder/compile
$ wget http://www.hdfgroup.org/ftp/HDF5/releases/hdf5-1.10/hdf5-1.10.0-patch1/src/hdf5-1.10.0-patch1.tar.bz2
$ wget https://github.com/h5py/h5py/archive/2.6.0.tar.gz
$ tar jxvf hdf5-1.10.0-patch1.tar.bz2
$ tar zxvf 2.6.0.tar.gz
$ cd hdf5-1.10.0-patch1
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environment variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environment variables has been closed!
$ ./configure --prefix=/usr/local/appstack/hdf5-1.10.0-patch1 --enable-fortran --enable-cxx --enable-shared --enable-optimization=high
$ gmake
$ gmake install
$ ln -s /usr/local/appstack/hdf5-1.10.0-patch1 /usr/local/appstack/hdf5
$ export PATH=/usr/local/appstack/hdf5/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/hdf5/lib:$LD_LIBRARY_PATH
$ export HDF5_DIR=/usr/local/appstack/hdf5
$ cd ..
$ cd h5py-2.6.0
$ python2 setup.py install

If the compilation and installation are successful, remove the folders containing the source code of the compiled modules. Also append the export declaration:

export HDF5_DIR=/usr/local/appstack/hdf5

to the file /usr/local/appstack/.appstack_env, because otherwise the module h5py cannot be imported. Also update there the environment variables PATH and LD_LIBRARY_PATH to include the paths to the installed HDF5 binaries and libraries.
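Assuming the installation prefix used in the example above, the lines appended to /usr/local/appstack/.appstack_env might look like this (a sketch; adjust the paths to your layout):

```shell
# Sketch of the lines to append to /usr/local/appstack/.appstack_env,
# assuming the HDF5 install prefix /usr/local/appstack/hdf5 used above.
export HDF5_DIR=/usr/local/appstack/hdf5
export PATH=/usr/local/appstack/hdf5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/appstack/hdf5/lib:$LD_LIBRARY_PATH
```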

Note that there is also a high-level interface to HDF5 for Python, called PyTables. Currently (August 2016) it can be compiled only against HDF5 version 1.8.

 

The simplest way to test the successfully compiled and installed Python modules is to load them from within a Python shell. Before starting this test, do not forget to export the environment variables from the file /usr/local/appstack/.appstack_env in order to access the customized version of Python as well as all necessary customized libraries. Then run the test:

$ . /usr/local/appstack/.appstack_env # Do this only if the environment variables are not yet loaded into the memory!
$ for i in {"numpy","scipy","pandas","h5py"} ; do echo "import ${i}" | python > /dev/null 2>&1 ; if [ "$?" == 0 ] ; then echo "${i} has been successfully imported" ; fi ; done

If all requested modules are imported successfully the following output messages are expected to appear in the current bash shell window:

numpy has been successfully imported
scipy has been successfully imported
pandas has been successfully imported
h5py has been successfully imported

If the name of any of the requested modules does not appear there, then try to import that module manually like this (the example given below is for checking NumPy):

$ python
Python 2.7.12 (default, Aug 1 2016, 20:41:13)
[GCC Intel(R) C++ gcc 4.8 mode] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy

and check the displayed error message to find out how to fix the problem. Very often people try to import a module they have just compiled into the Python shell, invoking python from within bash while the working directory still points to the folder containing the source code used to compile that module. That is not a proper way to import any Python module, because in that particular case the current folder contains specific Python files that get loaded by default and thus prevent the requested module from being properly imported into the memory.
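The pitfall above can be demonstrated with a throwaway module name (a sketch; the directory and file names are hypothetical, and python2 is used as elsewhere in this document):

```shell
# A file in the current directory shadows an installed module of the same name.
$ mkdir /tmp/shadow-demo && cd /tmp/shadow-demo
$ echo 'raise ImportError("the local file was loaded instead of the module")' > json.py
$ python2 -c "import json"          # fails: ./json.py is found first
$ cd ~ && python2 -c "import json"  # succeeds outside that directory
```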

 

To use the modules installed this way, it is enough to use the custom-compiled Python version and load the environment variables:

$ . /usr/local/appstack/.appstack_env

Multicast IPsec protected OSPFv2/OSPFv3 areas

 

1. Short introduction.

The presumption that your backbone area is the safest segment of your corporate network is untenable, especially when your backbone is based on a MAN service. MAN operators usually provide their clients with virtual networks (a logical layer of Layer 2 frame transport) based on 802.1q VLANs or VPLS. It is a shared service, because the clients' networks are logically defined frame transports across the operator's physical network. Therefore any successful attack on the operator's active equipment creates a direct risk of compromising the clients' networks.

OSPF is still the most widely used protocol for intra-AS routing. Usually the routers connected to the corporate backbone networks are configured as OSPF speakers (they speak OSPFv2, OSPFv3, or both). The OSPFv2 protocol provides both simple and cryptographic authentication, the cryptographic one based on the MD5 hash function. In contrast, the authentication in OSPFv3 is moved out of the protocol. One very serious reason for moving it out is that the OSPF applications are critical, run in user space, and process multicast messages. The operating system kernel delivers the multicast messages received for the OSPF group address to the OSPF application regardless of whether they are properly authenticated or not. Therefore, if intruders gain access to an OSPF area, they may send a lot of packets to the OSPF listeners and raise the load of their protocol-processing applications. Given that the OSPF algorithm strongly depends on the ability to process the incoming update messages as fast as possible, any intentionally caused high load may affect the dynamic routing over the whole OSPF area.

One possible way to prevent unauthenticated OSPF messages from reaching the OSPF application and affecting its performance is to employ the kernel to perform the packet authentication. In other words, the kernel should filter out the unauthenticated OSPF messages. That can be achieved by implementing IPsec in transport mode, so that the kernel delivers to the OSPF application only the packets that passed a successful ESP/AH authentication. Such an idea sounds very reasonable, but it has to work around an important problem of principle: IKE cannot (yet) exchange keys over multicast. Hence, if IPsec is going to be implemented to secure OSPF, static keys must be configured on all OSPF listeners in the area.

Below it is described how to use IPsec to protect and authenticate the communication between OSPF speakers running the quagga daemons ospfd and ospf6d.

 

2. Generating the keys.

Two keys should be generated for each direction of the communication: one for the encryption algorithm and one for the message digest algorithm. Below are command lines that can be used to generate such keys:

  • 128-bit key (AES, SHA1):

    $ echo -n "0x" && dd if=/dev/urandom count=16 bs=1 2> /dev/null | xxd -ps -c 16

  • 160-bit key (RIPEMD):

    $ echo -n "0x" && dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -ps -c 20

  • 256-bit key (AES, SHA256):

    $ echo -n "0x" && dd if=/dev/urandom count=32 bs=1 2> /dev/null | xxd -ps -c 32

  • 448-bit key (Blowfish):

    $ echo -n "0x" && dd if=/dev/urandom count=56 bs=1 2> /dev/null | xxd -ps -c 56

  • 512-bit key (AES, SHA512):

    $ echo -n "0x" && dd if=/dev/urandom count=64 bs=1 2> /dev/null | xxd -ps -c 64
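A quick sanity check: each command prints "0x" followed by two hex digits per byte, so for a 256-bit (32-byte) key the generated string should be 66 characters long:

```shell
# Generate a 256-bit key and verify its length: "0x" + 64 hex digits = 66 characters.
$ KEY=$(echo -n "0x"; dd if=/dev/urandom count=32 bs=1 2> /dev/null | xxd -ps -c 32)
$ echo ${#KEY}
66
```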

 

3. Loading the keys and configuring the IPsec policy.

The most convenient way of configuring IPsec for multicast with pre-shared keys on Linux is to use the package ipsec-tools. It provides the IKE daemon racoon and the policy manager setkey. Since only pre-shared keys will be configured and used, there is no need to run racoon. Only setkey should be run, once, to load both the keys and the policies into the memory. On the hard disk they should be kept in the file setkey.conf, following the example below:

flush;
spdflush;

# OSPFv2

spdadd 0.0.0.0/0[0] 224.0.0.5[0] any -P out ipsec esp/transport//require ;
spdadd 0.0.0.0/0[0] 224.0.0.6[0] any -P out ipsec esp/transport//require ;
add 0.0.0.0 224.0.0.5 esp 0x10003 -m transport -E blowfish-cbc 0xd7e13bc95b0d7a40b4591e06a22bdd40c7dd0632b1cff938b9bc947d03a6dc14091e69de309b3c9d6627ee871317a39cd85d65402b674e2e -A hmac-sha256 0x2ff9702ace1e5986135074bb4be183537f85d255a250dfa46d01edfa625038ca ;
add 0.0.0.0 224.0.0.6 esp 0x10005 -m transport -E blowfish-cbc 0xd5c69b9088ec8054724cdd84e5fb82ca39f5ff5979e6f33ebf453d568320c7df19a39c771397143cd54d55b858c5ca27e17fb01105da183f -A hmac-sha256 0x46693831b5c989b186fd7ba884d8bcea503a9ea4fc835958c3166051604eb3ab ;

# OSPFv3
spdadd ::/0 ff02::5 any -P out ipsec esp/transport//require ;
spdadd ::/0 ff02::6 any -P out ipsec esp/transport//require ;
add :: ff02::5 esp 0x10003 -m transport -E blowfish-cbc 0x8f46177b518ad6f43e519ac1bc63657bef66390e013bad95b835536e3eb7f509a867f203a21a70ddd32f92d8fd1a7dd71c1fc5f083c44f9e -A hmac-sha256 0x70249c37454edf0a5696a4f9f487604cf6a9ef06580a18bd2baa380b35b622ec ;
add :: ff02::6 esp 0x10005 -m transport -E blowfish-cbc 0xd555e5ec47dcc407c3650efdda92210e24de57ce96a0305c251881f606adcb7d38b7d03d5e06fac45cbdbf5572fc948d733957852931c2f9 -A hmac-sha256 0x1e8ec58b12aa6a5ac51f80af34ffc6082f2cc8f1efaac49f480e9e413cc0f424 ;

Note that there are only "out" direction policies and none matching "in". That is because the policy is established to control multicast communication.

In RHEL and CentOS the file setkey.conf is usually located in the directory /etc/racoon, if the package ipsec-tools is installed from RPM packages. To load the keys and the policy setkey should be run like this:

# setkey -f setkey.conf

To check if the security associations are successfully loaded into the memory and configured in the kernel, run:

# ip xfrm state

The loaded policies can be listed with:

# ip xfrm policy

It is a good idea to create a service (RHEL/CentOS <=6) or a systemd configuration (RHEL/CentOS >=7) for loading the policy at boot time or reloading it later. Note that it is vital to load the policy simultaneously on all OSPF speakers to prevent the OSPF area from splitting.
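For the systemd case, a minimal sketch of such a configuration might look like this. The unit name ipsec-ospf.service is hypothetical, and the paths assume setkey.conf in /etc/racoon as noted above:

```shell
# Sketch (RHEL/CentOS >=7): a oneshot unit loading the IPsec keys and policies
# at boot, before the quagga daemons start. Unit name and paths are assumptions.
cat > /etc/systemd/system/ipsec-ospf.service << 'EOF'
[Unit]
Description=Load IPsec keys and policies for OSPF multicast
After=network.target
Before=ospfd.service ospf6d.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/setkey -f /etc/racoon/setkey.conf
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable ipsec-ospf.service
```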

 

4. Debugging.

If ospf6d is properly started and the IPsec policy is correctly loaded, the router should join the OSPF area and start exchanging routing information. The received routes can be listed by executing:

# vtysh -c "sh ipv6 ospf6 route"

An example output is provided below:

*N E1 2001:67c:20d0:20::/64 fe80::16da:e9ff:fef1:195a eth0 33d03:07:59
*N E1 2001:67c:20d0:22::/64 fe80::201:3ff:fed8:8b39 eth0 33d03:09:44
*N E1 2001:67c:20d0:23::/64 fe80::210:5aff:fef2:8484 eth0 33d03:09:44
*N E1 2001:67c:20d0:24::/64 fe80::16da:e9ff:fef1:195a eth0 33d03:07:59
*N E1 2001:67c:20d0:25::/64 fe80::16da:e9ff:fef1:195a eth0 33d03:07:59
*N IA 2001:67c:20d0:2f::ffff:0/112   :: eth0 33d03:09:44

If no routes appear in the list for a long time, there might be a problem with the IPsec configuration. In that case check the content of setkey.conf, correct and reload the policies, and restart the ospf6d daemon.

To check if there is IPsec communication on ff02::5, tcpdump should be run to capture and analyze the packets arriving on the Ethernet interface through which the router is connected to the OSPF area (eth0, for instance):

# tcpdump -n -i eth0 host ff02::5

and if there is an IPsec multicast communication exchange, it will be displayed as:

17:45:41.770145 IP6 fe80::20e:cff:fe84:945b > ff02::5: ESP(spi=0x00010003,seq=0x6df6a), length 84
17:45:41.770498 IP6 fe80::16da:e9ff:fef1:195a > ff02::5: ESP(spi=0x00010003,seq=0x702e0), length 84
17:45:41.770647 IP6 fe80::210:5aff:fef2:8484 > ff02::5: ESP(spi=0x00010003,seq=0x5fafa), length 84
17:45:41.771146 IP6 fe80::201:3ff:fed8:8b39 > ff02::5: ESP(spi=0x00010003,seq=0x75ca0), length 84
17:45:51.770765 IP6 fe80::20e:cff:fe84:945b > ff02::5: ESP(spi=0x00010003,seq=0x6df6b), length 84
17:45:51.771100 IP6 fe80::16da:e9ff:fef1:195a > ff02::5: ESP(spi=0x00010003,seq=0x702e1), length 84
17:45:51.771297 IP6 fe80::210:5aff:fef2:8484 > ff02::5: ESP(spi=0x00010003,seq=0x5fafb), length 84
17:45:51.771749 IP6 fe80::201:3ff:fed8:8b39 > ff02::5: ESP(spi=0x00010003,seq=0x75ca1), length 84

Note that the source of the packets is always the link-local address of the router interface attached to the OSPF area. So by checking the output of tcpdump one can determine which routers participate in the IPsec multicast communication.

IPv6 Deployment in Sofia University Network (2007-2013)


The slides were presented at the Third South East Europe (SEE 3)/RIPE NCC Regional Meeting (14-15 April 2014, Sofia, Bulgaria). You can download the presentation in PDF here: https://meetings.ripe.net/see3/files/Vasil_Kolev-The_IPv6_Deployment_in_Sofia_University.pdf

