
SmartCard-HSM USB token: Using Smart Card Shell 3 for initializing and configuring the token, generating key pairs, and importing keys and X.509 certificates from external PKCS#12 containers

Content:

  1. Introduction
  2. Prerequisites
  3. Downloading and installing Smart Card Shell 3
  4. Running Smart Card Shell 3 GUI
  5. Loading the key manager in Smart Card Shell 3 GUI
  6. Initializing the token and configuring DKEK to enable the import of keys and X.509 certificates from PKCS#12 files
  7. Generating ECC key pair (by means of DKEK shares)
  8. Importing key pair and the corresponding X.509 certificate from PKCS#12 file into the token (by means of DKEK shares)

 

1. Introduction

The SmartCard-HSM (Standard-A USB) token (you can order it online):

is a reliable, fast, secure, and OpenSC-compatible HSM device for generating, storing, and importing RSA, AES, and Elliptic Curve (EC) keys and X.509 certificates. Perhaps the best features of the device are its enhanced support for EC (up to 521-bit keys) and its ability to import key pairs and certificates from PKCS#12 containers. The latter allows cloning a key pair onto several token devices, as a hardware backup scenario.

Unfortunately, the vendor does not (yet) provide comprehensive documentation for end users describing in detail the process of importing key pairs and X.509 certificates from PKCS#12 containers (files) into the token (which is something very much in demand). Therefore, the goal of this document is to fill (at least partially) that gap in the documentation.

Note that the procedures described below are not part of everyday practice. They are required only for initializing the token device, for generating EC keys for curves that are not currently listed as supported in the token's firmware (for instance, the secp384r1 curve is supported by the token's processor, but it is not listed as supported in the firmware, so the OpenSC-based tools cannot request secp384r1 key generation), and for importing key pairs and X.509 certificates from PKCS#12 files.
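Whether a particular curve or algorithm is exposed by the installed OpenSC driver can be checked with pkcs11-tool (a sketch; the module path below is typical for 64-bit RHEL-family systems and may differ on your distribution):

```shell
# List the mechanisms (including the EC ones) that the OpenSC PKCS#11
# module reports for the connected token
$ pkcs11-tool --module /usr/lib64/opensc-pkcs11.so -M
```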

 

2. Prerequisites

To be able to follow the steps given below, you need an installed and up-to-date Linux distribution running a graphical desktop environment (GNOME, KDE). A recent OpenJDK (17 is recommended, if available) must be installed and kept updated. Do not install OpenJDK manually, since it is an essential software package! Always use the package manager provided by the vendor of your Linux distribution to install or update OpenJDK:

  • RHEL7, CentOS 7, Scientific Linux 7:

    # yum install java-11-openjdk.x86_64
  • Fedora (current), RHEL8/9, CentOS 8, Scientific Linux 8, Rocky Linux 8/9, Alma Linux 9:

    # dnf install java-17-openjdk.x86_64
  • Ubuntu:

    # apt-get install openjdk-17-jdk-headless

It is not a good idea to configure the HSM token and manage its content on a system that is used for social networking, software testing, gaming, or any other activity that might be considered risky in this context. Always use a dedicated desktop system (or a dedicated Linux virtual machine) for managing your PKI infrastructure.

You might have more than one version of OpenJDK installed on your system. So the first step is to check that and set the latest OpenJDK as the default Java provider. Execute the following command line (as super user or root):

# alternatives --config java

to check how many Java providers are installed and available locally, and which one of them is set as the current default. For example, the following result:

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.19.0.7-1.el9_1.x86_64/bin/java)
   2           java-17-openjdk.x86_64 (/usr/lib/jvm/java-17-openjdk-17.0.7.0.7-1.el9_1.x86_64/bin/java)


Enter to keep the current selection[+], or type selection number:

means there are two OpenJDK packages installed, and the first one is set as the default Java provider (it is the entry marked with "+" in the first column). To set OpenJDK 17 as the default Java provider, type the ID number assigned to the package in the list (in the "Selection" column) and press "Enter" (in the above example, the ID used is 2):

Enter to keep the current selection[+], or type selection number: 2

It is always a good idea to check whether the symlinks created by the alternatives tool point to the correct target. The simplest way to do so for OpenJDK 17 is to follow the symlink /etc/alternatives/java:

$ ls -al /etc/alternatives/java

and verify that the target is the OpenJDK 17 java executable:

lrwxrwxrwx. 1 root root 63 Apr 29 13:58 /etc/alternatives/java -> /usr/lib/jvm/java-17-openjdk-17.0.7.0.7-1.el9_1.x86_64/bin/java

Also check that the Java major version of the target:

$ java --version

is 17:

openjdk 17.0.7 2023-04-18 LTS
OpenJDK Runtime Environment (Red_Hat-17.0.7.0.7-1.el9_1) (build 17.0.7+7-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-17.0.7.0.7-1.el9_1) (build 17.0.7+7-LTS, mixed mode, sharing)

Also, make sure the PC/SC daemon (pcscd) is running.
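A quick way to verify that is shown below (a sketch; opensc-tool is part of the opensc package, and on systemd-based distributions pcscd is usually socket-activated):

```shell
# Install the PC/SC daemon and the OpenSC tools, start pcscd now and
# at boot, then check that the token reader is visible
# dnf install pcsc-lite opensc
# systemctl enable --now pcscd.socket
$ opensc-tool --list-readers
```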

 

3. Downloading and installing Smart Card Shell 3

Be sure you have OpenJDK 17 installed, as specified above. Then visit the web page:

https://www.openscdp.org/scsh3/download.html

click on the "IzPack Installer" link, and save the provided JAR archive of the installer locally.

Decide what kind of installation of Smart Card Shell 3 you really need - whether to allow all users of the system to run the program code (run the installer as super user), or to limit that ability to a certain unprivileged user (perform the installation using that particular user ID):

  • run the installer as super user (root):

    You should install the program into a system folder where the users can only read and execute the Java code (no write access should be given by default). That kind of restriction will protect the executable code from deletion or modification.

  • run the installer as a non-privileged user:

    In this case, the simplest solution is to install the program into the home folder of the user. That type of installation is recommended only for a user who really understands how to keep the executable code safe.

If followed, the steps given below will install the executable code of the program in the home folder of the user who executes the installer.

Open a terminal and type:

$ java -jar /path/to/scsh3.XX.YYY.jar

(here XX and YYY are numbers unique to the current version). The following window will appear (press the "Next" button there to continue):

Select the installation folder (use the button "Browse" to change it, if you do not like the one suggested by the installer), and press "Next":

Now you will be able to see the progress of the installation process (press the button "Next" to continue, when it is done):

Next, you need to decide whether or not to create a shortcut to the program in the GNOME "Applications" menu (it is recommended to create such a shortcut), and who will be able to invoke the installed program (the latter is useful only if you install the software as super user or root into a system folder). Press the "Next" button:

and in the last window of the installer, press "Done" to exit:

Important note for those who are running Smart Card Shell 3 on RHEL9 (Rocky Linux 9, Alma Linux 9)!

Smart Card Shell 3 needs the library libpcsclite.so, but no package provides libpcsclite.so on RHEL9. To work around that issue, install the package pcsc-lite-libs (if it is not already installed) and create the symlink /usr/lib64/libpcsclite.so pointing to /usr/lib64/libpcsclite.so.1:

# cd /usr/lib64
# ln -s libpcsclite.so.1 libpcsclite.so

 

4. Running Smart Card Shell 3 GUI

Make sure the Smart Card Shell 3 GUI is installed. Expand the "Applications" menu (1), go to "Programming" (2), and click "Smart Card Shell 3" (3):

and wait for the appearance of the main window of the program:

During the first run, a new window might appear, asking you to configure the path to a working directory where the output files will be stored by default. Click the "Browse" button:

select the folder (1) and press "Open" (2) to go back:

The path to the folder will then appear in the text field (next to the "Browse" button). In addition, check at least "Use this as the default and do not ask again" to complete the configuration, and press "OK" to exit:

 

5. Loading the key manager in Smart Card Shell 3 GUI

Run the Smart Card Shell 3 GUI. The key manager is a loadable script dedicated to managing the objects in the token.

To load it, either expand "File" menu and select there "Key Manager":

or press "Ctrl+M". Once loaded, the key manager will check whether the token is connected and will create in the main window a tree of the objects it discovered in the token. Details about all important events will be reported in the "Shell" tab:

 

6. Initializing the token and configuring DKEK to enable the import of keys and X.509 certificates from PKCS#12 files

The goal of the initialization process is to enable the import (and export) of keys and X.509 certificates stored in files (most often PKCS#12 files) into the token, based on a "device key encryption key" (DKEK) type of store. Note that DKEK is not enabled by default.

WARNING! DURING THE INITIALIZATION, ALL DATA, STORED IN THE TOKEN, WILL BE LOST!

To start the initialization, run Smart Card Shell 3, load the key manager script, right-click once on "SmartCard-HSM" (that is the root of the key manager tree), and select "Initialize Device" in the menu:

Supply the following information (or press "Cancel" to terminate the initialization):

  • The actual SO-PIN for configuring the token. The default SO-PIN code is 3537363231383830, unless it has been changed (if you forget the SO-PIN, consider the token lost). Press "OK" to continue:

  • The label of the token (that is the token's friendly name, displayed in the PKI applications). Press "OK" to continue:

  • The authentication mechanism for restricting access to the objects and processor of the token. In most cases, you should select "User PIN" (you may set another authentication mechanism, but this one is the most popular). Press "OK" to continue:

  • The way to restore access to the keys and X.509 certificates if the PIN is lost, forgotten, or locked (a PIN becomes locked if a wrong PIN is entered more than 3 times consecutively). Select "Resetting PIN with SO-PIN allowed" and press "OK" (select "Resetting PIN with SO-PIN not allowed" only in specific cases where the implementation of such a policy is necessary):

  • The new PIN code (do not use the number shown in the picture below). Press "OK" to continue:

  • The new PIN code (again, for confirmation). Press "OK" to continue:

  • Request for using "DKEK Shares". Press "OK" to continue:

  • The number of DKEK shares (use 1, unless you are an expert). Press "OK" to continue:

  • Press "Cancel" here (if you press "OK", both the SO-PIN and PIN codes will be stored locally in an unencrypted file):

After a successful initialization, you will see only three objects displayed in the key manager tree: the User PIN, the SO-PIN, and the DKEK entry. The message "Initializing complete" (an indication that the requested initialization has been successfully completed) will appear in the "Shell" tab:
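For reference, a comparable initialization can also be performed from the command line with sc-hsm-tool from the OpenSC suite (flags may differ between OpenSC versions; check sc-hsm-tool --help first). A sketch, assuming the factory-default SO-PIN and one DKEK share:

```shell
# Initialize the token, set a new user PIN (replace 648219 with your own),
# and reserve one DKEK share; WARNING: this also erases all data in the token
$ sc-hsm-tool --initialize --so-pin 3537363231383830 --pin 648219 --dkek-shares 1
```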

Note that at this point the requested DKEK shares are not yet initialized and imported into the token! The appearance of "DKEK set-up in progress with 1 of 1 shares missing" in the key manager tree indicates that. You need to request the creation of the DKEK share file manually and import its content into the token by strictly following the instructions given below:

  • Request the creation of a DKEK share by right-clicking once on the root of the key manager tree (on "SmartCard-HSM") and picking "Create DKEK share" in the menu:

  • Enter the name of the DKEK file to create, and press "OK" (store the file in the working directory of the program, configured during the first run):

  • Set the password for protecting the DKEK share file content and press "OK":

  • Confirm the password for protecting the DKEK share file content and press "OK":

  • Left-click once on the object named "DKEK set-up in progress with 1 of 1 shares missing", displayed in the key manager section of the main window of the program:

  • Use the "Browse" button to find and choose the created DKEK file (its file extension is *.pbe), and press "OK":

  • Enter the password set (before) for protecting the content of the DKEK file:

It will take up to 10 seconds to derive the keys and import the DKEK into the token (watch the related messages appearing in the "Shell" tab). At the end, you will see that the object "DKEK set-up in progress with 1 of 1 shares missing" (in the key manager tree) has been renamed (the new name includes the ID of the DKEK object):

IMPORTANT! At this point, you need to store a copy of the DKEK share file, generated during the initialization, in a safe place!

In the examples above, that file is /home/vesso/CardContact/2019_02.pbe, but in your case the file will have a different name and location.
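The DKEK share creation and import steps also have command-line counterparts in sc-hsm-tool (a sketch; the file name is arbitrary, and the tool will prompt for the password protecting the share):

```shell
# Create a password-protected DKEK share file, then import it into the token
$ sc-hsm-tool --create-dkek-share dkek-share-1.pbe
$ sc-hsm-tool --import-dkek-share dkek-share-1.pbe
```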

 

7. Generating ECC key pair (by means of DKEK shares)

IMPORTANT! Be absolutely sure that no application, other than Smart Card Shell 3, is communicating with the token. Stop all running processes of PKCS#11 compatible software (like Mozilla Firefox, Mozilla Thunderbird, Google Chrome, XCA), that might take over the token.

Start Smart Card Shell 3, plug the token into the USB port, and load the key manager script. Make sure that the token is properly initialized to support DKEK shares.

Right-click once on the token name in the key manager section and select "Generate ECC Key" in the menu:

Select the elliptic curve type, and press "OK":

Provide a friendly name (alias) for the key pair (it is an internal name for the key pair object in the token) and press "OK":

Type (separated by commas) the list of hex codes of the signature algorithms that will be allowed for signing with the generated key (the most commonly used ones, 73,74,75, are given in the example below), then press "OK":

Wait until the token finishes generating the requested key pair. Once ready, you will see the new key object under the tree of the DKEK share:

 

8. Importing key pair and the corresponding X.509 certificate from PKCS#12 file into the token (by means of DKEK shares)

IMPORTANT! Be absolutely sure that no application, other than Smart Card Shell 3, is communicating with the token. Stop all running processes of PKCS#11 compatible software (like Mozilla Firefox, Mozilla Thunderbird, Google Chrome, XCA), that might take over the token.

Otherwise the PKCS#12 import might fail, raising (in the "Shell" tab) the following error:

GPError: Card (CARD_INVALID_SW/27010) - "Unexpected SW1/SW2=6982 (Checking error: Security condition not satisfied) received" in ...

Start Smart Card Shell 3, plug the token into the USB port, and load the key manager script. Make sure that the token is properly initialized to support DKEK shares. Using a DKEK share is mandatory for this operation.

  • Right-click once on the token name in the key manager section, and select "Import from PKCS#12" in the menu:

  • Specify the number of DKEK shares to use (use 1, if you followed the recipes provided in this document), and click "OK":

  • Select the file containing the DKEK shares (use the file created during the initialization), and click "OK":

  • Enter the password for decrypting the DKEK file (the password set during the creation of the file), click "OK", and wait up to 10 seconds while the shared keys are derived:

  • Select the PKCS#12 file and click "OK":

  • Provide the password, set for protecting the content of the PKCS#12 file, and click "OK":

  • Select the key pair and X.509 certificate to import from the PKCS#12 file by choosing their internal PKCS#12 name, and click "OK":

  • Enter a name to assign to the imported key pair and X.509 certificate, and click "OK":

  • Click "OK" if you want to import more key pairs and X.509 certificates stored in the same PKCS#12 file, or click "Cancel" to finish:

If the import is successful, you will see the key pair imported into the DKEK share (in the key manager section), and information about the process in the "Shell" section (as shown in the red frames below):

IMPORTANT! You cannot import an X.509 certificate chain from a PKCS#12 container into the token by using the procedure proposed above.

But you can do that later by using pkcs11-tool, if that is really necessary. Notice that the X.509 certificates in the chain are public information and can be used out of the box (installed in the software certificate repository of the browser that will be used with the token). Their presence in the token storage is not mandatory.
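For example, an additional CA certificate from the chain can be written to the token with pkcs11-tool roughly as follows (a sketch; intermediate-ca.pem, the PIN, and the label are placeholders for your own values):

```shell
# Convert the CA certificate to DER, then store it in the token
$ openssl x509 -in intermediate-ca.pem -outform DER -out intermediate-ca.der
$ pkcs11-tool -l --pin 648219 --write-object intermediate-ca.der --type cert --label "Intermediate CA"
```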


Speeding up your scientific Python code on CentOS and Scientific Linux by using Intel Compilers


Content:

1. Introduction.

2. Installing the rpm packages needed during the compilation process.

3. Create an unprivileged user to run the compilation process.

4. Create folder for installing the compiled packages.

5. Setting the Intel Compilers environmental variables.

6. Compiling and installing SQLite library.

7. Compiling and installing Python 2.7.

8. Compiling and installing BLAS and LAPACK.

9. Installing setuptools.

10. Compiling and installing Cython.

11. Compiling and installing NumPy, SciPy, and Pandas.

12. Compiling and installing Matplotlib (optional).

13. Compiling and installing HDF5 support for Python - h5py.

14. Testing the installed Python modules.

15. Using the installed Python modules.


1. Introduction.

The goal of this document is to describe an easy, safe, and illustrative way to bring more speed to your scientific Python code by compiling Python and a set of important modules (like sqlite3, NumPy, SciPy, Pandas, and h5py) using Intel Compilers. The recipes described below run the compilation and installation as an unprivileged user, which is the safest way to do so. Also, the installation scheme used here prevents potential conflicts between the packages installed by the distribution package manager and the ones brought to the local system by following the recipes.

The document is specific to the Linux distributions CentOS and Scientific Linux - the most used Linux distributions for science. With minor changes the recipes could easily be adapted for other Linux distributions that support Intel Compilers.

Note that the compilation recipes provided below use optimization specific to the currently used processor. Feel free to change that if you want to spread the product of the compilations over a compute cluster. Also, the recipes might be collected into one and executed as a single configuration and installation script. They are given below separately, mainly to make the details of each package compilation more visible to the reader.

2. Installing the rpm packages needed during the compilation process.

The following packages have to be installed in advance by using yum in order to support the compilation process: gcc, gcc-c++, gcc-gfortran, gcc-objc, gcc-objc++, libtool, cmake, ncurses-devel, openssl-devel, bzip2-devel, zlib-devel, readline-devel, gdbm-devel, tk-devel, and bzip2. Install them all at once:

# yum install gcc gcc-c++ gcc-gfortran gcc-objc gcc-objc++ libtool cmake ncurses-devel openssl-devel bzip2-devel zlib-devel readline-devel gdbm-devel tk-devel bzip2

 

3. Create an unprivileged user to run the compilation process.

The default settings for creating a user in RHEL, CentOS, and SL are good enough in this case:

# useradd builder

The user name chosen for running the compilation process is "builder". But you might choose a different user name if "builder" is already taken or reserved. Finally, set the password for this new user and/or install an OpenSSH public key (in /home/builder/.ssh/authorized_keys) if this account is supposed to be accessed remotely.

 

4. Create folder for installing the compiled packages.

This documentation uses /usr/local/appstack as the destination folder. To prevent the use of "root" or a super user during the compilation and installation process, create /usr/local/appstack and make it owned by "builder":

# mkdir -p /usr/local/appstack
# chown -R builder:builder /usr/local/appstack

Create (as user "builder") an empty file /usr/local/appstack/.appstack_env:

$ touch /usr/local/appstack/.appstack_env
$ chmod 644 /usr/local/appstack/.appstack_env

which will later be provided to the users who want to update their shell environmental variables in order to use the alternatively compiled packages stored in /usr/local/appstack.

 

5. Setting the Intel Compilers environmental variables.

If the Intel Compilers packages are properly installed and accessible to the user "builder", the following variables have to be exported before invoking the Intel compilers as the default C/C++ and Fortran compilers:

export CC=icc
export CXX=icpc
export CFLAGS='-O3 -xHost -ip -no-prec-div -fPIC'
export CXXFLAGS='-O3 -xHost -ip -no-prec-div -fPIC'
export FC=ifort
export FCFLAGS='-O3 -xHost -ip -no-prec-div -fPIC'
export CPP='icc -E'
export CXXCPP='icpc -E'

Unless it is really necessary, these variables should not appear in either /home/builder/.bashrc or /home/builder/.bash_profile. A possible way to load them occasionally (only when they are needed) is to create the file /home/builder/.intel_env and describe the export declarations there. Then they can be loaded within the current bash shell session by executing:

$ . ~/.intel_env
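The two steps above (creating ~/.intel_env and loading it) can be sketched as a single sequence, using the compiler flags listed in this section:

```shell
# Create ~/.intel_env with the Intel Compiler environment, then load it
# into the current shell session and confirm the compiler selection
cat > ~/.intel_env <<'EOF'
export CC=icc
export CXX=icpc
export CFLAGS='-O3 -xHost -ip -no-prec-div -fPIC'
export CXXFLAGS='-O3 -xHost -ip -no-prec-div -fPIC'
export FC=ifort
export FCFLAGS='-O3 -xHost -ip -no-prec-div -fPIC'
export CPP='icc -E'
export CXXCPP='icpc -E'
EOF
. ~/.intel_env
echo "CC=$CC FC=$FC"
```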

 

6. Compiling and installing SQLite library.

The SQLite library is actively used in a wide range of scientific software applications. In order to make the library more productive, its code needs to be compiled by the Intel C/C++ compiler. Here is the recipe for doing that (consider using the latest stable version of SQLite!):

$ mkdir -p /home/builder/compile
$ cd /home/builder/compile
$ . ~/.intel_env
$ wget https://sqlite.org/2016/sqlite-autoconf-3130000.tar.gz
$ tar zxvf sqlite-autoconf-3130000.tar.gz
$ cd sqlite-autoconf-3130000
$ ./configure --prefix=/usr/local/appstack/sqlite-3.13.0 --enable-shared --enable-readline --enable-fts5 --enable-json1
$ gmake
$ gmake install
$ ln -s /usr/local/appstack/sqlite-3.13.0 /usr/local/appstack/sqlite3
$ export PATH=/usr/local/appstack/sqlite3/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/sqlite3/lib:$LD_LIBRARY_PATH

The last two command lines update the user's environmental variables PATH and LD_LIBRARY_PATH, so that the next compilation within the same bash shell session can use the paths to the SQLite library and executables. Also update PATH and LD_LIBRARY_PATH in the file /usr/local/appstack/.appstack_env, which is supposed to be sourced by the users to get the paths to the alternatively compiled executable binaries and libraries.
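At this stage, /usr/local/appstack/.appstack_env might look as follows (a sketch; the file grows as more packages are installed):

```shell
# /usr/local/appstack/.appstack_env -- sourced by users of the custom stack
export PATH=/usr/local/appstack/sqlite3/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/appstack/sqlite3/lib:$LD_LIBRARY_PATH
```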

 

7. Compiling and installing Python 2.7.

To make the execution of Python code faster, Python 2.7 should be compiled using the Intel C/C++ compiler. Note that compiling Python this way makes it very hard to use the Python modules provided by the RPM packages. Hence all required Python modules should also be built in the same manner (custom compilation using Intel Compilers) and linked to the custom-compiled version of Python. In scientific practice it is important to use a fast SQLite Python interface. To have it built in, SQLite ought to be compiled with the Intel C/C++ Compiler as described above. Be sure that all required rpm packages are installed in advance, as explained in "Installing the rpm packages needed during the compilation process". Finally, follow this recipe to compile and install a custom Python 2.7 distribution (always use the latest stable Python 2.7 version!):

$ cd /home/builder/compile
$ wget https://www.python.org/ftp/python/2.7.12/Python-2.7.12.tar.xz
$ tar Jxvf Python-2.7.12.tar.xz
$ cd Python-2.7.12
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ ./configure --prefix=/usr/local/appstack/python-2.7.12 --without-gcc --enable-ipv6 --enable-shared CFLAGS=-I/usr/local/appstack/sqlite3/include LDFLAGS=-L/usr/local/appstack/sqlite3/lib CPPFLAGS=-I/usr/local/appstack/sqlite3/include
$ gmake
$ gmake install
$ ln -s /usr/local/appstack/python-2.7.12 /usr/local/appstack/python2
$ export PATH=/usr/local/appstack/python2/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/python2/lib:$LD_LIBRARY_PATH
$ export PYTHONPATH=/usr/local/appstack/python2/lib

The last three lines of the recipe update the environmental variables PATH and LD_LIBRARY_PATH available in the currently running bash shell session, and create a new one - PYTHONPATH (a critically important variable for running any Python modules). They help the next compilation (if the same bash shell session is used for it). Also update these variables in the file /usr/local/appstack/.appstack_env so that the Python 2.7 installation folder becomes first in the path list:

$ export PATH=/usr/local/appstack/python2/bin:/usr/local/appstack/sqlite3/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/python2/lib:/usr/local/appstack/sqlite3/lib:$LD_LIBRARY_PATH

IMPORTANT! Do not forget to include in /usr/local/appstack/.appstack_env the Python path declaration:

export PYTHONPATH=/usr/local/appstack/python2/lib

Otherwise none of the modules compiled below will work properly!

 

8. Compiling and installing BLAS and LAPACK.

In order to compile and install the SciPy library, one needs the BLAS and LAPACK libraries compiled and installed locally. It is enough to compile the LAPACK tarball, since it includes the BLAS code and, if compiled properly, provides the libblas.so shared library. To speed up the execution of any code that uses LAPACK and BLAS, the LAPACK source code should be compiled using the Intel Fortran Compiler, according to the recipe given below (always use the latest stable version of LAPACK!):

$ cd /home/builder/compile
$ wget http://www.netlib.org/lapack/lapack-3.6.1.tgz
$ tar zxvf lapack-3.6.1.tgz
$ cd lapack-3.6.1
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/appstack/lapack-3.6.1 -DCMAKE_INSTALL_LIBDIR=/usr/local/appstack/lapack-3.6.1/lib64 -DBUILD_SHARED_LIBS=1
$ gmake
$ gmake install
$ ln -s /usr/local/appstack/lapack-3.6.1 /usr/local/appstack/lapack
$ export LD_LIBRARY_PATH=/usr/local/appstack/lapack/lib64:$LD_LIBRARY_PATH

The last line of the recipe just updates the environmental variable LD_LIBRARY_PATH available within the currently used bash shell session. It helps the next compilation (if the same bash shell session is used). Also update LD_LIBRARY_PATH in the file /usr/local/appstack/.appstack_env so that the LAPACK installation folder becomes first in the path list:

$ export LD_LIBRARY_PATH=/usr/local/appstack/lapack/lib64:/usr/local/appstack/python2/lib:/usr/local/appstack/sqlite3/lib:$LD_LIBRARY_PATH

An alternative method for providing the BLAS and LAPACK libraries to SciPy is to compile and install ATLAS. Another way is to use the BLAS and LAPACK libraries that are already compiled as static libraries and provided by the Intel C/C++ and Fortran Compiler installation tree. For more details, take a look at this discussion:

https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/611135

The method for obtaining the BLAS and LAPACK libraries proposed in this document brings the latest version of these libraries and is easy to perform.

 

9. Installing setuptools.

Setuptools is needed when installing modules external to the Python distribution. The installation process is very short and easy:

$ cd /home/builder/compile
$ wget https://bootstrap.pypa.io/ez_setup.py
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ python2 ez_setup.py

 

10. Compiling and installing Cython.

Cython provides C-extensions for Python, and it is required by a variety of Python modules - NumPy, SciPy, and Pandas in particular. Its installation is simple and follows the recipe (use the latest stable version of Cython!):

$ cd /home/builder/compile
$ wget https://pypi.python.org/packages/c6/fe/97319581905de40f1be7015a0ea1bd336a756f6249914b148a17eefa75dc/Cython-0.24.1.tar.gz
$ tar zxvf Cython-0.24.1.tar.gz
$ cd Cython-0.24.1
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ python2 setup.py install

 

11. Compiling and installing NumPy, SciPy, and Pandas.

NumPy, SciPy, and Pandas are only three of the Python libraries whose development is coordinated by SciPy.org. The Python modules they provide are usually "a must" in scientific practice. In many cases they can replace or even surpass their commercially developed and distributed rivals. There are more Python modules there, but they either do not require such a specific compilation (SymPy, IPython) or might not be usable without running a graphical environment (Matplotlib). The recipe below shows how to compile and install NumPy, SciPy, and Pandas (use their latest stable versions!):

$ cd /home/builder/compile
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ export BLAS=/usr/local/appstack/lapack/lib64
$ export LAPACK=/usr/local/appstack/lapack/lib64
$ wget https://github.com/numpy/numpy/archive/v1.11.1.tar.gz
$ wget https://github.com/scipy/scipy/releases/download/v0.18.0/scipy-0.18.0.tar.gz
$ wget https://pypi.python.org/packages/11/09/e66eb844daba8680ddff26335d5b4fead77f60f957678243549a8dd4830d/pandas-0.18.1.tar.gz
$ tar zxvf v1.11.1.tar.gz
$ tar zxvf scipy-0.18.0.tar.gz
$ tar zxvf pandas-0.18.1.tar.gz
$ cd numpy-1.11.1
$ python2 setup.py install
$ cd ..
$ cd scipy-0.18.0
$ python2 setup.py install
$ cd ..
$ cd pandas-0.18.1
$ python2 setup.py install

 

12. Compiling and installing Matplotlib (optional).

The direct use of Matplotlib requires a graphical user environment, which in most cases is not available in distributed computing. Nevertheless, if Matplotlib needs to be present in the system, it can be compiled and installed in the same manner as NumPy, SciPy, and Pandas before. To provide at least one image output driver, the libpng-devel rpm package has to be installed locally:

# yum install libpng-devel

After that, follow the recipe below to compile and install the Matplotlib module for Python (use the latest stable version of Matplotlib!):

$ cd /home/builder/compile
$ wget https://github.com/matplotlib/matplotlib/archive/v1.5.2.tar.gz
$ tar zxvf v1.5.2.tar.gz
$ cd matplotlib-1.5.2
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ python2 setup.py install

 

13. Compiling and installing HDF5 support for Python - h5py.

HDF5 support is essential when using Python to access and manage fast and adequately large data structures of different types. Currently, the low-level interface to HDF5 in Python is provided by the module h5py. To compile h5py, one needs first to compile the HDF5 framework and install it locally, so that its libraries are accessible to h5py. Note that by default both CentOS and SL provide HDF5 support, but the executables and libraries their RPM packages bring to the system are compiled using GCC. Therefore, if the goal is to achieve high speed of the Python code when using HDF5, both the HDF5 libraries and the h5py module should be compiled using the Intel C/C++ and Fortran compilers. The example below shows how to do that:

$ cd /home/builder/compile
$ wget http://www.hdfgroup.org/ftp/HDF5/releases/hdf5-1.10/hdf5-1.10.0-patch1/src/hdf5-1.10.0-patch1.tar.bz2
$ wget https://github.com/h5py/h5py/archive/2.6.0.tar.gz
$ tar jxvf hdf5-1.10.0-patch1.tar.bz2
$ tar zxvf 2.6.0.tar.gz
$ cd hdf5-1.10.0-patch1
$ . ~/.intel_env # Execute this if the previous bash shell session containing the compiler environmental variables has been closed!
$ . /usr/local/appstack/.appstack_env # Execute this if the previous bash shell session containing the environmental variables has been closed!
$ ./configure --prefix=/usr/local/appstack/hdf5-1.10.0-patch1 --enable-fortran --enable-cxx --enable-shared --enable-optimization=high
$ gmake
$ gmake install
$ ln -s /usr/local/appstack/hdf5-1.10.0-patch1 /usr/local/appstack/hdf5
$ export PATH=/usr/local/appstack/hdf5/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/appstack/hdf5/lib:$LD_LIBRARY_PATH
$ export HDF5_DIR=/usr/local/appstack/hdf5
$ cd ..
$ cd h5py-2.6.0
$ python2 setup.py install
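
A quick smoke test for the freshly built h5py is to write a small dataset into an HDF5 file and read it back (a hedged sketch, assuming h5py and NumPy import cleanly; the file name demo.h5 is arbitrary):

```python
import h5py
import numpy as np

# Write ten doubles into the dataset /values of a new HDF5 file.
with h5py.File("demo.h5", "w") as f:
    f.create_dataset("values", data=np.arange(10.0))

# Read the dataset back and reduce it, exercising both I/O directions.
with h5py.File("demo.h5", "r") as f:
    total = f["values"][:].sum()

print(total)
```

If the round trip succeeds, the Intel-compiled HDF5 libraries are correctly linked into h5py.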

If the compilation and installation are successful, remove the folders containing the source code of the compiled modules. Also append the export declaration:

export HDF5_DIR=/usr/local/appstack/hdf5

to the file /usr/local/appstack/.appstack_env, because otherwise the module h5py cannot be imported. In the same file, also update the environment variables PATH and LD_LIBRARY_PATH to include the paths to the installed HDF5 binaries and libraries.
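
For reference, after this step the HDF5-related part of /usr/local/appstack/.appstack_env might look like the following (a sketch; the exact layout of the rest of the file depends on what was placed there earlier):

```shell
# HDF5 installation (compiled with the Intel compilers), via the hdf5 symlink
export PATH=/usr/local/appstack/hdf5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/appstack/hdf5/lib:$LD_LIBRARY_PATH
export HDF5_DIR=/usr/local/appstack/hdf5
```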

Note that there is also a high-level interface to HDF5 for Python, called PyTables. Currently (August 2016) it can be compiled only against HDF5 version 1.8.

 

The simplest way to test the successfully compiled and installed Python modules is to load them from within a Python shell. Before starting this test, do not forget to export the environment variables from the file /usr/local/appstack/.appstack_env, in order to access the customized version of Python as well as all necessary customized libraries. Then run the test:

$ . /usr/local/appstack/.appstack_env # Do this only if the environment variables are not yet loaded into memory!
$ for i in numpy scipy pandas h5py ; do echo "import ${i}" | python > /dev/null 2>&1 && echo "${i} has been successfully imported" ; done

If all requested modules are imported successfully, the following output messages should appear in the current bash shell window:

numpy has been successfully imported
scipy has been successfully imported
pandas has been successfully imported
h5py has been successfully imported

If the name of any of the requested modules does not appear there, try to import that module manually, like this (the example below checks NumPy):

$ python
Python 2.7.12 (default, Aug 1 2016, 20:41:13)
[GCC Intel(R) C++ gcc 4.8 mode] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy

and check the displayed error message to find out how to fix the problem. A very common mistake is to invoke python while the bash working directory still points to the folder containing the source code used to compile a module, and then to try to import that freshly compiled module. That is not a proper way to import any Python module: the current directory is placed first on the module search path, so the Python files of the source tree get loaded by default and prevent the properly installed module from being imported.
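
The shadowing effect described above is easy to reproduce: a file named numpy.py in the current directory hides the installed package, because Python puts the invoking directory first on sys.path. A minimal demonstration (the directory and the error text are made up for the example):

```python
import os
import subprocess
import sys
import tempfile

# Create a scratch directory holding a dummy numpy.py that shadows the real package.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "numpy.py"), "w") as f:
    f.write('raise ImportError("shadow module, not the real NumPy")\n')

# Running python from inside that directory picks up the dummy file, not the
# installed NumPy, so the import fails with the message from the shadow module.
result = subprocess.run(
    [sys.executable, "-c", "import numpy"],
    cwd=workdir, capture_output=True, universal_newlines=True,
)
print("shadow module" in result.stderr)
```

Changing to any directory outside the source tree before starting python avoids the problem.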

 

To use the modules installed this way, it is enough to use the custom compiled Python version and load the environment variables:

$ . /usr/local/appstack/.appstack_env
