
Compiling GROMACS 2019.5 with Intel Compilers and CUDA on CentOS 7 and 8

All tests reported below were performed using Intel Parallel Studio XE 2019 Update 4. I presume you can achieve the same positive results using older versions of the Intel Compilers package.

Notes on CentOS 7: You have to install the SCL and EPEL repositories (these are MANDATORY requirements you cannot skip):

$ sudo yum install centos-release-scl centos-release-scl-rh epel-release

Then, using the new repositories, install devtoolset-8 and cmake3:

$ sudo yum install devtoolset-8 cmake3

In the bash session where the compilation will take place, execute:

$ scl enable devtoolset-8 bash

To install the latest CUDA compiler and libraries, you might use the procedure described in my previous posts. The latest version of CUDA is 10.2. Once all the prerequisites are satisfied, download the GROMACS 2019.5 source code tarball, unpack it, create a folder named "build" inside the source tree, enter it, load the Intel Compilers variables, then the SCL environment, and run the compilation and installation process:

$ mkdir ~/build
$ cd ~/build
$ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-2019.5.tar.gz
$ tar xvf gromacs-2019.5.tar.gz
$ cd gromacs-2019.5
$ mkdir build
$ cd build
$ source /opt/intel/compilers_and_libraries/linux/bin/compilervars.sh intel64
$ CC=icc CXX=icpc CFLAGS="-xHost" CXXFLAGS="-xHost" cmake3 .. -DCMAKE_INSTALL_PREFIX=/usr/local/appstack/gromacs-2019.5-icc -DGMX_MPI=OFF -DGMX_BUILD_OWN_FFTW=OFF -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DGMX_FFT_LIBRARY=mkl
$ make -j6 install

The product of the compilation will be installed in /usr/local/appstack/gromacs-2019.5-icc. You can change that destination by setting another value for -DCMAKE_INSTALL_PREFIX.
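To run the installed binaries, the shell environment needs to know the install prefix. A minimal sketch, assuming the prefix from the cmake line above (the lib64 subfolder is an assumption; GROMACS also installs its own environment script, GMXRC, under bin/):

```shell
# Sketch: environment needed to run the freshly installed GROMACS binaries.
# The prefix matches the -DCMAKE_INSTALL_PREFIX value used above; adjust it
# if you installed elsewhere.
GMX_ROOT=/usr/local/appstack/gromacs-2019.5-icc
export PATH="$GMX_ROOT/bin:$PATH"
export LD_LIBRARY_PATH="$GMX_ROOT/lib64:${LD_LIBRARY_PATH:-}"
# Alternatively: source "$GMX_ROOT/bin/GMXRC"   # then run: gmx --version
echo "$PATH" | cut -d: -f1
```

Putting the export lines into a file under /etc/profile.d (or a module file, on HPC systems) makes the installation available to every login shell.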


Getting ssllabs.com class "A+" by matching at 100% Key Exchange and Cipher Strength criteria

In most cases, getting an "A+" grade from the ssllabs.com TLS server checker does not necessarily mean the examined server fully matches (at 100%) all criteria. For instance, "A+" might be given even if your server matches the requirements for key exchange protocols and cipher strength only partially (at the 90% level):

In my case, I got "A+" status for my Apache/mod_ssl 2.4.6 server (on CentOS 7) by using a limited and very conservative list of ciphers (it is a long story why I avoid "CBC"-based ciphers):

ECDHE-RSA-AES256-GCM-SHA384
DHE-RSA-AES256-GCM-SHA384
ECDHE-RSA-AES128-GCM-SHA256
DHE-RSA-AES128-GCM-SHA256

So far (8 months since I implemented the new TLS configuration), I have not received any complaints from my clients related to TLS issues. It seems they keep their mobile and desktop browsers very much up to date.

Recently, for another (PKI-based) project, I decided to go further and try to get "A+" by matching all criteria at 100% (using the same Apache/mod_ssl 2.4.6 setup on CentOS 7):

What did I do? I simply removed the 128-bit ciphers from the list above:

ECDHE-RSA-AES256-GCM-SHA384
DHE-RSA-AES256-GCM-SHA384

How does this change affect the clients? I got only one complaint, from a person who uses Firefox on CentOS 6 (a rather old distribution for a desktop, I dare say). On the other hand, even mobile clients running Android 6.0 with the Samsung Internet Browser have no problem with the new TLS cipher set.
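For reference, this is roughly how the final cipher configuration might look in mod_ssl terms (a sketch; the SSLProtocol line and the exact file location are assumptions you should adapt to your own ssl.conf):

```apache
# /etc/httpd/conf.d/ssl.conf (fragment) -- assumed location
SSLProtocol         all -SSLv3 -TLSv1 -TLSv1.1
SSLHonorCipherOrder on
SSLCipherSuite      ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384
```

After changing the cipher list, reload httpd and re-run the ssllabs.com test to confirm the new grade.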

What was surprising is that replacing the RSA server certificate with an ECDSA-based one did not create any significant TLS issues. Again, the first set of ciphers I tested (where "A+" is matched only partially) was based on both 256- and 128-bit ciphers:

ECDHE-ECDSA-AES256-GCM-SHA384
ECDH-ECDSA-AES256-GCM-SHA384
ECDHE-ECDSA-AES128-GCM-SHA256
ECDH-ECDSA-AES128-GCM-SHA256

and the second one included only 256-bit ciphers:

ECDHE-ECDSA-AES256-GCM-SHA384
ECDH-ECDSA-AES256-GCM-SHA384

It seems that most browser and OS vendors take serious measures to secure the TLS protocol implementations in their software products.


Re-deriving AES and 3DES attribute encryption keys for 389 Directory Server installation

1. Introduction

Even if you do not use attribute encryption keys in your 389 Directory Server (DS) installation, you might be bothered by the messages that repeat themselves in /var/log/dirsrv/slapd-instance_name/errors ("instance_name" is your actual instance name) after every start/restart of the 389 DS daemon:

[04/Nov/2019:02:03:16.808096029 +0200] - ERR - attrcrypt_cipher_init - No symmetric key found for cipher AES in backend userRoot, attempting to create one...
[04/Nov/2019:02:03:16.843731964 +0200] - ERR - attrcrypt_wrap_key - Failed to wrap key for cipher AES
[04/Nov/2019:02:03:16.808096029 +0200] - ERR - attrcrypt_cipher_init - No symmetric key found for cipher 3DES in backend userRoot, attempting to create one...
[04/Nov/2019:02:03:16.843731964 +0200] - ERR - attrcrypt_wrap_key - Failed to wrap key for cipher 3DES

What causes them, and how do you get rid of them (and solve the problem they point to)?

There are two possible reasons for those messages to appear in /var/log/dirsrv/slapd-instance_name/errors:

  • the RSA-based X.509 certificate used by the 389 DS server was replaced with a new one that has a new encryption key (that situation might be fatal if your LDAP directory hosts encrypted attributes and the old RSA keys were lost);
  • the 389 DS server uses Elliptic Curves based X.509 certificate.

First, some important remarks. While it is true that failing to access the attribute encryption keys is not fatal for most 389 DS installations (where attribute encryption is not needed and therefore not actually implemented), the appearance of those error messages might indicate a disruption of other services (especially those that frequently query the LDAP server). That means you really have to investigate the source of those errors if your goal is a reliable 389 DS installation.

2. The RSA-based X.509 certificate used by the 389 DS server was replaced with a new one that has a new encryption key

Go down to "Deriving new attribute encryption keys" to see how to fix the problem.

3. The 389 DS server uses Elliptic Curves (ECC) based X.509 certificate

389 Directory Server stores its keys, certificates, and passwords in a Mozilla NSS DB store. It is usually located inside the folder /etc/dirsrv/slapd-instance_name, where "instance_name" is the actual instance name of your 389 Directory Server setup (it depends on how you named the instance). The NSS DB store is split into three files: cert8.db, key3.db, and secmod.db.

If the 389 Directory Server is configured to use SSL (and STARTTLS), an X.509v3 certificate is installed in the NSS DB store. The problem is that, by default (so far), the administrative server registers any X.509 certificate put in use as if it were RSA-based. Therefore, in /etc/dirsrv/slapd-instance_name/dse.ldif (the starting configuration of the instance), you will find the entry:

dn: cn=RSA,cn=encryption,cn=config
objectClass: top
objectClass: nsEncryptionModule
cn: RSA
nsSSLPersonalitySSL: server-cert
nsSSLToken: internal (software)
nsSSLActivation: on

even if the X.509 certificate with nickname "server-cert" installed there (the nickname the certificate is known under in the NSS DB store) is ECDSA-based. This "masking" of the ECDSA-based certificate as RSA is what causes the errors above to appear when the SSL certificate is ECDSA-based. Note that the current method for deriving 3DES and AES attribute encryption keys is based on RSA encryption keys only. To solve this problem (especially if you want to deliberately generate 3DES and AES keys), you need a workaround, which works as follows. Define an entry for the ECDSA certificate in the starting configuration (replace "server-cert" with the actual nickname of your ECDSA X.509 certificate, as listed in the NSS DB certificate store):

$ ldapadd -D "cn=Directory Manager" -x -W
dn: cn=ECDSA,cn=encryption,cn=config
objectClass: top
objectClass: nsEncryptionModule
cn: ECDSA
nsSSLPersonalitySSL: server-cert
nsSSLToken: internal (software)
nsSSLActivation: on

Then import into the NSS DB certificate store a new RSA certificate that will be used only to support the attribute encryption key generation (we assume its nickname is "keygen-cert"), remove the old dn "cn=RSA,cn=encryption,cn=config" (if it exists):

$ ldapdelete -D "cn=Directory Manager" -x -W "cn=RSA,cn=encryption,cn=config"

and add it again, this time pointing the attribute "nsSSLPersonalitySSL" to the RSA certificate nickname ("keygen-cert" in the example below):

$ ldapadd -D "cn=Directory Manager" -x -W
dn: cn=RSA,cn=encryption,cn=config
objectClass: top
objectClass: nsEncryptionModule
cn: RSA
nsSSLPersonalitySSL: keygen-cert
nsSSLToken: internal (software)
nsSSLActivation: off

PAY ATTENTION: To avoid the use of the RSA certificate as the host certificate, and to force the use of the ECDSA one for that purpose, you have to set nsSSLActivation to off. That is mandatory!

At this point, you might restart your 389 Directory Server instance:

$ sudo systemctl restart dirsrv@instance_name

Then proceed to "Deriving new attribute encryption keys".

4. Deriving new attribute encryption keys

If you do not know how many back-ends your 389 DS currently supports, you can check:

$ ldapsearch -D "cn=Directory Manager" -b "cn=config" -x -W "nsslapd-backend=*" nsslapd-backend | grep ^nsslapd-backend | awk '{print $2}'

(you will need to know the "cn=Directory Manager" password to complete the search). In most cases (the default setup schema), the result will look like this:

userRoot

If the administrative server is also installed and configured, the NetscapeRoot back-end will be shown too:

NetscapeRoot
userRoot

The next step is to delete the dn-objects containing the AES and 3DES attribute encryption keys. For each back-end, you need to execute (replace BACKEND with the actual name of the back-end):

$ ldapdelete -D "cn=Directory Manager" -x -W "cn=AES,cn=encrypted attribute keys,cn=BACKEND,cn=ldbm database,cn=plugins,cn=config"
$ ldapdelete -D "cn=Directory Manager" -x -W "cn=3DES,cn=encrypted attribute keys,cn=BACKEND,cn=ldbm database,cn=plugins,cn=config"

If you receive "No such object" as a result, that probably means only ECC-based X.509 certificates have been used with the current installation of 389 DS, ever since it was created.
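When several back-ends are present, the two deletions above can be scripted. A minimal sketch (the back-end names are the ones reported by the earlier ldapsearch; the helper only prints the DNs, so you can review them before feeding them to ldapdelete):

```shell
# Sketch: build the DNs of the attribute-encryption key entries for a given
# back-end, so the two ldapdelete calls can be repeated per back-end.
attrcrypt_dns() {
  for cipher in AES 3DES; do
    printf 'cn=%s,cn=encrypted attribute keys,cn=%s,cn=ldbm database,cn=plugins,cn=config\n' "$cipher" "$1"
  done
}

# Dry run for the userRoot back-end; pipe the DNs into ldapdelete when ready:
#   attrcrypt_dns userRoot | while read -r dn; do
#     ldapdelete -D "cn=Directory Manager" -x -W "$dn"
#   done
attrcrypt_dns userRoot
```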

Now you can restart the 389 DS daemon (replace "instance_name" with your actual 389 Directory Server instance name):

$ sudo systemctl restart dirsrv@instance_name

and if the keys were successfully derived during the restart of the 389 DS daemon, you will see the following messages at the end of /var/log/dirsrv/slapd-instance_name/errors (two lines for each back-end):

[04/Nov/2019:04:57:08.393260367 +0200] - ERR - attrcrypt_cipher_init - No symmetric key found for cipher AES in backend userRoot, attempting to create one...
[04/Nov/2019:04:57:08.446317176 +0200] - INFO - attrcrypt_cipher_init - Key for cipher AES successfully generated and stored
[04/Nov/2019:04:57:08.470085086 +0200] - ERR - attrcrypt_cipher_init - No symmetric key found for cipher 3DES in backend userRoot, attempting to create one...
[04/Nov/2019:04:57:08.504787008 +0200] - INFO - attrcrypt_cipher_init - Key for cipher 3DES successfully generated and stored

No such messages will appear during subsequent restarts, unless you remove the employed RSA-based X.509 certificates from the NSS DB.


Keep away the SSH scan bots by hardening the SSH key-exchange on the server side (CentOS/RHEL 7/8 case)

Analyzing the SSH login attempts regularly performed by a vast number of bots, one can detect an interesting pattern: they use old-fashioned key-exchange protocols, like diffie-hellman-group1-sha1, diffie-hellman-group14-sha1, and diffie-hellman-group-exchange-sha1. I am not a bot operator myself and cannot be absolutely sure why the bot operators and programmers use those old key-exchange protocols. I can only speculate: either they target old SSH installations (old OpenSSH/SSH.com server versions), or the bot software employs some old SSH library.

Knowing that pattern in the key exchange, it is easy to get rid of most of the SSH scans by simply hardening the key-exchange configuration on the SSH server:

  • on CentOS 7/RHEL 7:

    Edit /etc/ssh/sshd_config by appending to its end the line:

    KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512

    Save the changes and restart sshd:

    # systemctl restart sshd
  • on CentOS 8/RHEL 8:

    Edit CRYPTO_POLICY declaration in /etc/crypto-policies/back-ends/opensshserver.config by replacing there the -oKexAlgorithms declaration with:

    -oKexAlgorithms=curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp384,ecdh-sha2-nistp521

    Save the changes and restart sshd:

    # systemctl restart sshd

Finally (after implementing the recommendations above), tail /var/log/secure to confirm that the bots are being repelled. You should see lines like these:

Nov  2 05:07:36 server sshd[26440]: Unable to negotiate with xxx.xxx.xxx.xxx port 28482: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Nov  2 05:16:53 server sshd[26517]: Unable to negotiate with xxx.xxx.xxx.xxx port 14046: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
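If you want a quick tally of how many probes were repelled, lines like the ones above can be counted with a small helper (a sketch; the sample lines are inlined to keep the snippet self-contained, so point it at /var/log/secure in practice):

```shell
# Sketch: count how many connections were rejected for offering only legacy
# key-exchange methods.
count_rejections() {
  grep -c 'no matching key exchange method found'
}
count=$(count_rejections <<'EOF'
Nov  2 05:07:36 server sshd[26440]: Unable to negotiate with xxx.xxx.xxx.xxx port 28482: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Nov  2 05:16:53 server sshd[26517]: Unable to negotiate with xxx.xxx.xxx.xxx port 14046: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
EOF
)
# Real usage: count_rejections < /var/log/secure
echo "$count"
```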

Important note: some old SSH clients might not "speak" the modern key-exchange algorithms, like those recommended above!

Even if the suggested key-exchange settings drop some of your actual SSH clients, it is better to keep the configuration: it is the only way to show your SSH users that SSH clients get old too, and that they should be replaced or upgraded to keep up with modern cryptography requirements.


Using TLSv1.3 and strong cryptography for 389 Directory Server (on CentOS 7)

TLS v1.3 support came recently to the 389 Directory Server (via NSS) with the latest CentOS 7. Due to an inconsistency between EPEL and the CentOS 7 upstream, TLS v1.3 is currently not available to the dirsrv admin service!

To activate the TLS v1.3 protocol for 389 Directory Server, prepare an LDIF file describing the modifications that will take place in "cn=encryption,cn=config" (a dn-object that is part of the 389 startup configuration), and insert the following text:

dn: cn=encryption,cn=config
changetype: modify
replace: sslVersionMin
sslVersionMin: TLS1.2

dn: cn=encryption,cn=config
changetype: modify
replace: nsSSL3
nsSSL3: off

dn: cn=encryption,cn=config
changetype: modify
replace: nsSSL3Ciphers
nsSSL3Ciphers:  +TLS_CHACHA20_POLY1305_SHA256,+TLS_AES_256_GCM_SHA384,+TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,+TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,+TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

Save that content as "modify.ldif", then invoke ldapmodify, and authenticate as "cn=Directory Manager" to enforce the modification:

$ ldapmodify -D "cn=Directory Manager" -x -W -f modify.ldif

In case of successful modification, the following message will appear:

modifying entry "cn=encryption,cn=config"

At this point you need to restart 389 Directory Server:

$ sudo systemctl restart dirsrv@instance_name

and check whether the requested cipher suite, already requested for TLS v1.2 above, is really available on the server (replace "localhost" with the actual server name of your 389 server):

$ nmap -sV --script ssl-enum-ciphers -p 636 localhost

On the latest Fedora, CentOS, and Ubuntu, one can use openssl (>= 1.1.1) to verify that TLS v1.3 is successfully configured and (therefore) available (just replace "server-name" below with your actual server name):

$ openssl s_client -connect server-name:636

This is how a positive result, the one detecting the presence of TLS v1.3, will appear on your screen:

New, TLSv1.3, Cipher is TLS_CHACHA20_POLY1305_SHA256
Server public key is 4096 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
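The protocol check can be scripted as well. A minimal sketch (the transcript line is inlined to keep it self-contained; on a live server, pipe the output of the s_client command above into the function):

```shell
# Sketch: extract the negotiated protocol from an openssl s_client transcript.
# Live usage:
#   openssl s_client -connect server-name:636 </dev/null 2>/dev/null | tls13_ok
tls13_ok() {
  if grep -q '^New, TLSv1.3' ; then
    echo "TLSv1.3 negotiated"
  else
    echo "TLSv1.3 NOT negotiated"
  fi
}
result=$(printf 'New, TLSv1.3, Cipher is TLS_CHACHA20_POLY1305_SHA256\n' | tls13_ok)
echo "$result"
```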

Using high-grade ECC temporary keys with Apache and mod_ssl on CentOS 7

To support ECDHE-based SSL ciphers, your Apache server (through mod_ssl) needs a temporary ECC key to be generated and loaded during each start or restart of the httpd daemon. While on RHEL 8 and CentOS 8 you can automatically generate and load such keys every time the Apache daemon is invoked, simply by adding the declaration:

SSLOpenSSLConfCmd Curves P-384

to the appropriate section of /etc/httpd/conf.d/ssl.conf, there is no such support in mod_ssl 2.4.6, the version shipped with CentOS 7. That does not mean mod_ssl 2.4.6 does not support ECDHE and the use of temporary ECC keys. It does, but instead of allowing the temporary ECC key size to be configured as a parameter in /etc/httpd/conf.d/ssl.conf, it sticks to a 256-bit key only. A key of that size might not be considered secure enough for long-running Apache instances. One workaround is to manually generate an ECC key using secp384r1 (a high-grade curve) and add it to the end of the certificate file (the file pointed to by the SSLCertificateFile declaration in /etc/httpd/conf.d/ssl.conf).

To generate the key, use openssl:

$ openssl ecparam -genkey -name secp384r1

The output will look like:

-----BEGIN EC PARAMETERS-----
BgUrgQQAIg==
-----END EC PARAMETERS-----
-----BEGIN EC PRIVATE KEY-----
MIGkAgEBBDCL5996z0+JsNAR2boRU0zULurtLbKbILiysJx3BbWEWFrlkuXL11BS
MI9bqrYNXjOgBwYFK4EEACKhZANiAAQISZYiUhc3GUXCaxOfsNun9KBiMyp9yAR3
qGH2NR1Va51Q4WfS4X0XPCaa3w3gA5g69MQV8aak3BVnbE27Q+yAZ5zi+dSNt5VU
Jg1tGgZmzX+SfJ6WWtMSv0Aa3r62UEE=
-----END EC PRIVATE KEY-----

Add it to the end of the certificate file, right after the line:

-----END CERTIFICATE-----

Save the changes to the file and reload or restart Apache's daemon httpd:

$ sudo systemctl reload httpd
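The whole procedure can be sketched as follows (the certificate path in the comments is an assumption; substitute the file from your SSLCertificateFile directive):

```shell
# Sketch: generate the secp384r1 key into a temporary file first, inspect it,
# and only then append it to the certificate file and reload httpd.
tmp=$(mktemp)
openssl ecparam -genkey -name secp384r1 > "$tmp"
# cat "$tmp" | sudo tee -a /etc/pki/tls/certs/localhost.crt >/dev/null
# sudo systemctl reload httpd
head -n 1 "$tmp"
```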

To check whether the server has successfully loaded and is using the newly generated 384-bit temporary key, use openssl:

$ openssl s_client -connect your_server_name:443

In case of success, you should see in the result the following line:

Server Temp Key: ECDH, P-384, 384 bits

NOTES: The mod_ssl configuration should include at least one ECDHE cipher. Do not try to go to a 521-bit key, for it is not yet supported by some browsers.


Using SmartCard-HSM 4K USB-Token to work with ECDSA keys in RHEL, CentOS, and Enterprise Linux, versions 7 and 8

Content:

1. Introduction

2. Connecting the device

3. Downloading, compiling, and installing the latest stable release of OpenSC and the HSM code

4. Initializing the device (setting the PINs)

5. Generating 521-bit EC key pair

6. Using the USB token as OpenSSL PKCS#11 engine

7. Generating CSR, based on a key pair previously stored in the token, through OpenSSL PKCS#11 engine

8. Importing X.509 certificate to the token storage

 

1. Introduction

In modern cryptographic systems, the use of Elliptic Curve Cryptography (ECC) has become a de facto standard. Meanwhile, even now (it is 2019), it is quite rare to find a cryptographic token that supports the generation and use of ECDSA keys of up to 512 bits and the SHA512 hash function (embedded in the processor of the token), and that comes with a driver supporting Linux. Most of the available tokens support mostly RSA key pairs, and even if they support ECDSA, that support is partial and the maximum key size is limited to below 512 bits.

Surprisingly, you can find a reliable crypto token that supports ECDSA 512-bit keys and SHA512 at:

cardomatic.de

The SmartCard-HSM 4K USB-Token they have on sale is supported by OpenSC (version 0.16 or higher) through the CCID driver library libccid.so. Downloading, compiling, and installing locally both the latest OpenSC and HSM code releases is sufficient to employ the SmartCard-HSM 4K USB-Token in OpenSSL (as an engine) and in Mozilla Firefox and Mozilla Thunderbird (as a security device).

The notes given below explain how to make the SmartCard-HSM 4K USB-Token device available on a Linux-based system running CentOS 7, Scientific Linux 7, Red Hat Enterprise Linux 7, and (possibly) Red Hat Enterprise Linux 8, how to initialize the device, and how to interact with it through the OpenSC tools (like pkcs11-tool and pkcs15-tool).

 

2. Connecting the device

Install the following packages (there is a good chance they are already installed):

# yum install pcsc-lite pcsc-lite-ccid opensc

Enable the daemon pcscd to be loaded when starting/restarting the system:

# systemctl enable pcscd

and run it manually afterwards:

# systemctl start pcscd

Now you can connect the SmartCard-HSM 4K USB-Token device to the system. Records like those given below will be added to /var/log/messages by syslog:

May 10 15:00:06 argus kernel: usb 3-10: new full-speed USB device number 9 using xhci_hcd
May 10 15:00:06 argus kernel: usb 3-10: New USB device found, idVendor=04e6, idProduct=5816
May 10 15:00:06 argus kernel: usb 3-10: New USB device strings: Mfr=1, Product=2, SerialNumber=5
May 10 15:00:06 argus kernel: usb 3-10: Product: uTrust 3512 SAM slot Token
May 10 15:00:06 argus kernel: usb 3-10: Manufacturer: Identiv
May 10 15:00:06 argus kernel: usb 3-10: SerialNumber: 55511247701280

Their appearance there does not necessarily mean the device is accessible by applications; it only indicates that the USB device was properly recognized by libusb. In addition, the device will appear under Vendor Name: SCM Microsystems, Inc., Vendor ID: 04e6, and Product ID: 5816 in the output of lsusb:

Bus 003 Device 009: ID 04e6:5816 SCM Microsystems, Inc.

If detailed debugging information about the device detection performed by the pcscd daemon is required, modify the file /usr/lib/systemd/system/pcscd.service (used by systemd to invoke pcscd) by adding the additional option --debug to the end of the line starting with "ExecStart":

ExecStart=/usr/sbin/pcscd --foreground --auto-exit --debug

Save the changes to the file, load the new list of declarations:

# systemctl daemon-reload

and finally restart the daemon pcscd:

# systemctl restart pcscd

At that moment, start monitoring the new content appended to /var/log/messages (# tailf /var/log/messages will do), then unplug and plug in the device, and watch what kind of messages the syslog daemon appends to the file.

To prevent fast growth of /var/log/messages on disk, immediately after finishing the analysis of the debug information, restore the original declarations in /usr/lib/systemd/system/pcscd.service (remove the --debug option), reload the unit, and restart the pcscd daemon!

 

3. Downloading, compiling, and installing the latest stable release of OpenSC and the HSM code

First, install the required RPM packages (if they are not already installed):

# yum install git gcc gcc-c++ make

Then fetch the code of the sc-hsm-embedded project, hosted on GitHub; configure, compile, and install it (change the destination folder after --prefix, if needed):

$ git clone https://github.com/CardContact/sc-hsm-embedded.git
$ cd sc-hsm-embedded
$ ./configure --prefix=/usr/local/sc-hsm-embedded
$ make
$ sudo make install

General note: Red Hat Enterprise Linux 8 (as well as CentOS 8 and Scientific Linux 8) ships OpenSC version 0.19. That means one does not need to compile the OpenSC code, as explained below, on systems running EL8.

Once the installation is successfully completed, download, compile, and install the latest stable release of the OpenSC code (change the destination folder after --prefix and --with-completiondir, if needed):

$ wget https://github.com/OpenSC/OpenSC/releases/download/0.19.0/opensc-0.19.0.tar.gz
$ tar xvf opensc-0.19.0.tar.gz
$ cd opensc-0.19.0
$ ./configure --prefix=/usr/local/opensc-0.19.0 --with-completiondir=/usr/local/opensc-0.19.0/share/bash-completion/completions
$ make
$ sudo make install

The next step is to add the folders containing the installed binaries and libraries to the PATH and LD_LIBRARY_PATH environment variables. Those additions can be made either system-wide (available to all users) or user-specific (available only to specific users):

  • system-wide:

    Write down the following export declarations:

    export PATH=/usr/local/opensc-0.19.0/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/sc-hsm-embedded/lib:/usr/local/opensc-0.19.0/lib:$LD_LIBRARY_PATH

to the file /etc/profile.d/opensc.sh (create the file; it does not exist there by default).

  • user-specific:

    Open each ~/.bashrc file and append (to the end) the following lines:

    export PATH=/usr/local/opensc-0.19.0/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/sc-hsm-embedded/lib:/usr/local/opensc-0.19.0/lib:$LD_LIBRARY_PATH

 

4. Initializing the device (setting the PINs)

The SmartCard-HSM 4K USB-Token device comes uninitialized! That means the device cannot be used unless you complete the initialization process first.

Connect the token to the system and invoke the tool sc-hsm-tool, specifying the SO PIN (SO stands for "Security Officer"; change the SO PIN 3537363231383830 in the example below to another unique 16-digit hexadecimal number and do not forget it) and the user PIN (change 123456 to another number):

$ sc-hsm-tool --initialize --so-pin 3537363231383830 --pin 123456

Here, the number after --so-pin is the SO PIN and the one after --pin is the user PIN. Note that the SO PIN is the most important PIN, for it allows write access to all slots of the device storage. Also, it can change every PUK and PIN previously stored there (including the SO PIN itself). So do not lose the SO PIN!


5. Generating 521-bit EC key pair

Invoke the tool pkcs11-tool and specify the key type (EC:secp521r1 for a 521-bit EC key) and the key pair ID and label (they help later to access the keys):

$ pkcs11-tool --module libsc-hsm-pkcs11.so -l --keypairgen --key-type EC:secp521r1 --id 10 --label "EC pair ID 10"

The user's PIN will be requested next, to grant access to the device storage and processor:

Enter PKCS#11 token PIN for SmartCard-HSM:

If the provided user's PIN is correct, it will take up to 2-3 seconds to execute and complete the key generation process.

The most important part of the command line above is the ID number assigned to the key pair (10 in the example above, but you may reserve some other number not already taken, including hexadecimal number syntax), because from that moment onward the keys of that pair can be selected and involved in operations only if their ID number is specified.

 

6. Using the USB token as OpenSSL PKCS#11 engine

Install the package openssl-pkcs11 to enable the OpenSSL PKCS#11 engines functionality:

# yum install openssl-pkcs11

After that, store the following minimal OpenSSL configuration into a local file (openssl_hsm.cnf, for the sake of our illustration process):

openssl_conf = openssl_def

[openssl_def]
engines = engine_section

[engine_section]
pkcs11 = pkcs11_section

[pkcs11_section]
engine_id = libsc-hsm-pkcs11
dynamic_path = /usr/lib64/engines-1.1/pkcs11.so
MODULE_PATH = /usr/local/sc-hsm-embedded/lib/libsc-hsm-pkcs11.so
init = 0


[ req ]
distinguished_name      = req_distinguished_name
string_mask = utf8only

[ req_distinguished_name ]

The only declaration in openssl_hsm.cnf one might need to change is that of MODULE_PATH:

MODULE_PATH = /usr/local/sc-hsm-embedded/lib/libsc-hsm-pkcs11.so

Replace /usr/local/sc-hsm-embedded there with the actual folder where the file libsc-hsm-pkcs11.so is stored (the file is placed there during the installation of the HSM code).

 

7. Generating CSR, based on a key pair previously stored in the token, through OpenSSL PKCS#11 engine

If the OpenSSL PKCS#11 engine configuration file is available, the generation of a CSR (PKCS#10 block) requires only specifying the key pair ID (the number assigned to the key pair during the key pair generation):

$ openssl req -new -subj "/CN=server.example.com" -out request.pem -sha512 -config openssl_hsm.cnf -engine pkcs11 -keyform engine -key 10

One can expand both the command-line arguments and the set of declarations stored in openssl_hsm.cnf if the CSR should contain specific extensions requested by the certificate issuer.
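To verify what ended up in the CSR, inspect it with openssl req. A sketch (to keep it self-contained and runnable without the token, a throwaway software key is generated here; the inspection line works the same on the engine-generated request.pem):

```shell
# Sketch: inspect a CSR's subject. For the token-backed request from above,
# run the same 'openssl req -in request.pem -noout -subject' line.
key=$(mktemp); csr=$(mktemp)
openssl ecparam -genkey -name secp521r1 -noout > "$key"
openssl req -new -subj "/CN=server.example.com" -key "$key" -sha512 -out "$csr"
subject=$(openssl req -in "$csr" -noout -subject)
echo "$subject"
```

Adding -text to the inspection line dumps the full request, including any extensions.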

 

8. Importing X.509 certificate to the token storage

That type of operation is possible only if the X.509 certificate is stored in DER (binary) format. For example, if the file SU_ECC_Root_CA.crt contains a PEM-formatted X.509 certificate, it can be converted into DER format using openssl:

$ openssl x509 -in SU_ECC_Root_CA.crt -outform DER -out SU_ECC_Root_CA.der

To import the DER-formatted X.509 certificate into the USB token storage, use the tool pkcs11-tool:

$ pkcs11-tool --module libsc-hsm-pkcs11.so --login --write-object SU_ECC_Root_CA.der --type cert --id `uuidgen | tr -d -` --label "SU_ECC_ROOT_CA"

The label string after --label, naming the X.509 certificate, should be specified in a way that hints at the purpose of having and using that certificate.


Using nmcli to create 802.1x EAP-TTLS connections

The NetworkManager GUI is pretty user friendly when it comes to easily creating 802.1x (WPA2 Enterprise) connection configurations. In a command-line environment (non-GUI setups), that kind of connection should be created and handled using the TUI tool nmcli. When invoked, that tool communicates with the locally running NetworkManager daemon.

It is rather strange that on many Internet user forums an easy thing to do, like creating and exploring an 802.1x connection using nmcli, is declared mission impossible, and wpa_supplicant is pointed out as the only available tool for connecting to a WPA2 Enterprise hotspot infrastructure. The goal of this post is to show that nmcli is capable of doing that natively.

To create an 802.1x connection in command-line mode, the nmcli tool needs to be invoked correctly. It is not practical to enter the nmcli interactive shell and describe the connection parameters one by one; the easiest way is to create the whole connection description at once, by executing a single command line. Note that some of the options passed as arguments to nmcli are in fact critical, even if the syntax checker does not consider them as such. For instance, tunnelling the plain-text password requires a positive verification of the RADIUS X.509 certificate, to resist "man-in-the-middle" attacks. Therefore, a copy of the CA X.509 certificate needs to be stored locally as a text file (preferably in PEM format).

The example below shows how to define all the parameters of an 802.1x connection required for using the "eduroam" hotspot (more about Eduroam):

$ sudo nmcli connection add type wifi con-name "eduroam" ifname wlan0 ssid "eduroam" -- wifi-sec.key-mgmt wpa-eap 802-1x.eap ttls 802-1x.phase2-auth pap 802-1x.identity "user@some.edu" 802-1x.anonymous-identity "anon@some.edu" 802-1x.ca-cert "/home/user/CA-wifi.crt" 802-1x.password 3423sd3easd32e2

Note the use of the full path to the CA X.509 certificate file given above (/home/user/CA-wifi.crt). Avoid declaring a relative path instead.

If the command line executes successfully, the connection "eduroam" will appear in the list of supported connections:

$ nmcli c s

Once listed there, the connection can be activated by executing:

$ sudo nmcli c up eduroam

and in case of successful activation, the usual kind of message will appear on the display:

Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/9)

It is a very good idea to clear the shell history in order to prevent the password from being disclosed, since it is part of the command line defining the connection.
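In bash that can be sketched as follows (history handling differs between shells, so the snippet assumes bash):

```shell
# Clear the in-memory history list and overwrite the history file,
# so the Wi-Fi password does not survive in ~/.bash_history:
HISTFILE=${HISTFILE:-"$HOME/.bash_history"}
history -c && history -w
```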

The example given above has been tested on the latest CentOS 7, Red Hat Enterprise Linux 7, Scientific Linux 7, and Ubuntu 18.04.1 LTS.


Installing Intel Python 3, tensorflow-gpu, and multiple versions of CUDA and cuDNN on CentOS 7

Content:

1. Introduction

2. Enabling the use of EPEL repository

3. Installing multiple CUDA versions on CentOS 7

4. Monitoring the NVidia GPU device by nvidia-smi

5. Making the software state in the NVIDIA driver persistent

6. Installing cuDNN for multiple versions of CUDA

7. Installing Intel Python 3 and tensorflow-gpu

8. Testing the CUDA and cuDNN installation

8.1. Testing if cuDNN library is loadable

8.2. Testing the CUDA Python 3 integration by using Numba

8.3. Testing the CUDA Python 3 integration by using tensorflow-gpu


1. Introduction

This publication describes how to install multiple versions of CUDA and cuDNN on the same system running CentOS 7 to support various applications, and tensorflow in particular (via tensorflow-gpu). The recipes provided below can be followed when adding GPU computing support to compute nodes that are part of an HPC cluster.


2. Enabling the use of EPEL repository

This is an optional step, applicable if the configuration for using the EPEL repository is not present in /etc/yum.repos.d. EPEL is required here because the installation of the NVidia graphics driver, part of the CUDA packages, requires the presence of DKMS in the system in advance. That package is included in EPEL. To use EPEL, first install its repository package:

# yum install epel-release
# yum update

The dkms RPM package will be installed later, as a dependency required by the CUDA packages (see the next section).


3. Installing multiple CUDA versions on CentOS 7

The most reasonable question here is why we need multiple versions of CUDA installed and supported locally on the same system. The answer is straightforward: it is all about the specific requirements of the application software. Some software products are very specific about the version of CUDA they run against.

The most rational way to install the CUDA packages on CentOS 7 is through yum. NVidia provides the configuration files for using their yum repositories as a separate RPM package, which might be downloaded here:

https://developer.nvidia.com/cuda-downloads

To initiate the download, successively select Linux > x86_64 > CentOS > 7 > rpm (network) > Download, and install the package by following the instructions given below the "Download" button.

From time to time some inconsistencies appear in the CUDA yum repository. To prevent any problems they might cause, edit the file /etc/yum.repos.d/cuda.repo by changing the line:

enabled=1

into

enabled=0
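The edit can also be scripted with sed instead of being done by hand. On the real system the target is /etc/yum.repos.d/cuda.repo (edited as root); the stanza below is a trimmed, hypothetical illustration written to /tmp only so the example is self-contained:

```shell
# Throwaway copy of a minimal repo stanza; on the real system run the
# same sed against /etc/yum.repos.d/cuda.repo as root.
cat > /tmp/cuda.repo <<'EOF'
[cuda]
name=cuda
enabled=1
gpgcheck=1
EOF

# Flip enabled=1 to enabled=0 in place.
sed -i 's/^enabled=1/enabled=0/' /tmp/cuda.repo
grep '^enabled=' /tmp/cuda.repo   # -> enabled=0
```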

From now on, every time access to the CUDA repository RPM packages is required, supply the command-line option --enablerepo=cuda to yum.

After finishing with the yum configuration, install the RPM packages containing the versions of CUDA currently supported by the vendor:

# yum --enablerepo=cuda install cuda-8-0 cuda-9-0 cuda-9-1

That will install plenty of packages. Take into account their installation size and prepare to meet that demand for disk space.

If, by any chance, the installer fails to install the packages nvidia-kmod, xorg-x11-drv-nvidia, xorg-x11-drv-nvidia-libs, and xorg-x11-drv-nvidia-gl, install them separately:

# yum --enablerepo=cuda install nvidia-kmod xorg-x11-drv-nvidia xorg-x11-drv-nvidia-libs xorg-x11-drv-nvidia-gl

4. Monitoring the NVidia GPU device by nvidia-smi

The tool nvidia-smi is part of the package xorg-x11-drv-nvidia. It shows the current status of the NVidia GPU device:

$ nvidia-smi

Sun May  6 17:15:10 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.30                 Driver Version: 390.30                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro K620         On   | 00000000:02:00.0 Off |                  N/A |
| 34%   36C    P8     1W /  30W |      1MiB /  2000MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

That tool is useful for checking how many applications are currently running on the GPU device, its temperature, the consumed power, the utilization rate, and the amount of memory taken by the applications.


5. Making the software state in the NVIDIA driver persistent

To prevent the driver from releasing the NVidia GPU device when that device is not in use by any process, the daemon nvidia-persistenced (part of the package xorg-x11-drv-nvidia) needs to be enabled and started:

# systemctl enable nvidia-persistenced
# systemctl start nvidia-persistenced

6. Installing cuDNN for multiple versions of CUDA

The cuDNN library and header files can be downloaded from the web page of the vendor at:

https://developer.nvidia.com/cudnn

Note that a proper user registration is required to obtain the cuDNN files. Also, you need to download the archives with the cuDNN library and header files for each and every CUDA version locally installed and supported. That process, in turn, will end up bringing the following files into the download directory:

cudnn-8.0-linux-x64-v6.0.tgz
cudnn-9.0-linux-x64-v7.tgz
cudnn-9.1-linux-x64-v7.tgz

To proceed with the installation, unpack the content of the archives into the respective CUDA installation folders and recreate the database with the dynamic linker run time bindings, by executing (as root or super user) the command lines:

# tar --strip-components 1 -xf cudnn-8.0-linux-x64-v6.0.tgz -C /usr/local/cuda-8.0
# tar --strip-components 1 -xf cudnn-9.0-linux-x64-v7.tgz -C /usr/local/cuda-9.0
# tar --strip-components 1 -xf cudnn-9.1-linux-x64-v7.tgz -C /usr/local/cuda-9.1
# ldconfig /

It is recommended to check the successful unpacking of the archives and the proper recreation of the database with the dynamic linker run-time bindings by listing the database cache and grepping the output for the string "cudnn":

$ ldconfig -p | grep cudnn

The grep result indicating a successful cuDNN installation will look like:

libcudnn.so.7 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.7
libcudnn.so.7 (libc6,x86-64) => /usr/local/cuda-9.1/targets/x86_64-linux/lib/libcudnn.so.7
libcudnn.so.6 (libc6,x86-64) => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.6
libcudnn.so (libc6,x86-64) => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so
libcudnn.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so
libcudnn.so (libc6,x86-64) => /usr/local/cuda-9.1/targets/x86_64-linux/lib/libcudnn.so

Do not become confused by the multiple declarations made for libcudnn.so in the database (as seen in the output above). Seemingly, that indicates a collision, but note that each of the libcudnn.so files is a symlink, and each resolves to a library with a unique version number. That number is used by the tensorflow libraries to find which of the files best matches the version requirements.
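The mechanism can be illustrated with throwaway files; the /tmp/fake-cuda-* paths below merely stand in for the real CUDA installation trees:

```shell
# Each unversioned libcudnn.so is a symlink to a versioned library,
# so several of them can coexist without colliding.
mkdir -p /tmp/fake-cuda-8.0/lib /tmp/fake-cuda-9.0/lib
touch /tmp/fake-cuda-8.0/lib/libcudnn.so.6 /tmp/fake-cuda-9.0/lib/libcudnn.so.7
ln -sf libcudnn.so.6 /tmp/fake-cuda-8.0/lib/libcudnn.so
ln -sf libcudnn.so.7 /tmp/fake-cuda-9.0/lib/libcudnn.so

# Resolve each symlink to see its versioned target:
readlink /tmp/fake-cuda-8.0/lib/libcudnn.so   # -> libcudnn.so.6
readlink /tmp/fake-cuda-9.0/lib/libcudnn.so   # -> libcudnn.so.7
```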


7. Installing Intel Python 3 and tensorflow-gpu

If Intel Python 3 is not available in the system, follow the instructions given here:

https://software.intel.com/en-us/articles/installing-intel-free-libs-and-python-yum-repo

on how to install it. It is a single RPM package (mind its large installation size of several gigabytes) which contains tensorflow but (currently) does not include the tensorflow-gpu module. Once Intel Python 3 is available, the tensorflow-gpu module can be installed by invoking pip (the one provided by Intel Python 3).

Do not install tensorflow-gpu or any other module for Intel Python 3 as root or super user. Avoid any module installations inside the /opt/intel/intelpython3/ folder. Instead, perform the installation as an unprivileged user and append the --user option to pip:

$ /opt/intel/intelpython3/bin/pip install --user tensorflow-gpu

The output information generated during the installation process should look like:

Collecting tensorflow-gpu
  Downloading https://files.pythonhosted.org/packages/59/41/ba6ac9b63c5bfb90377784e29c4f4c478c74f53e020fa56237c939674f2d/tensorflow_gpu-1.8.0-cp36-cp36m-manylinux1_x86_64.whl (216.2MB)
    100% |████████████████████████████████| 216.3MB 7.8kB/s 
Collecting protobuf>=3.4.0 (from tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/74/ad/ecd865eb1ba1ff7f6bd6bcb731a89d55bc0450ced8d457ed2d167c7b8d5f/protobuf-3.5.2.post1-cp36-cp36m-manylinux1_x86_64.whl (6.4MB)
    100% |████████████████████████████████| 6.4MB 266kB/s 
Collecting gast>=0.2.0 (from tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/5c/78/ff794fcae2ce8aa6323e789d1f8b3b7765f601e7702726f430e814822b96/gast-0.2.0.tar.gz
Collecting termcolor>=1.1.0 (from tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/8a/48/a76be51647d0eb9f10e2a4511bf3ffb8cc1e6b14e9e4fab46173aa79f981/termcolor-1.1.0.tar.gz
Requirement already satisfied: wheel>=0.26 in /opt/intel/intelpython3/lib/python3.6/site-packages (from tensorflow-gpu)
Collecting tensorboard<1.9.0,>=1.8.0 (from tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/59/a6/0ae6092b7542cfedba6b2a1c9b8dceaf278238c39484f3ba03b03f07803c/tensorboard-1.8.0-py3-none-any.whl (3.1MB)
    100% |████████████████████████████████| 3.1MB 545kB/s 
Collecting grpcio>=1.8.6 (from tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/c8/b8/00e703183b7ae5e02f161dafacdfa8edbd7234cb7434aef00f126a3a511e/grpcio-1.11.0-cp36-cp36m-manylinux1_x86_64.whl (8.8MB)
    100% |████████████████████████████████| 8.8MB 195kB/s 
Collecting astor>=0.6.0 (from tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/b2/91/cc9805f1ff7b49f620136b3a7ca26f6a1be2ed424606804b0fbcf499f712/astor-0.6.2-py2.py3-none-any.whl
Requirement already satisfied: numpy>=1.13.3 in /opt/intel/intelpython3/lib/python3.6/site-packages (from tensorflow-gpu)
Requirement already satisfied: six>=1.10.0 in /opt/intel/intelpython3/lib/python3.6/site-packages (from tensorflow-gpu)
Collecting absl-py>=0.1.6 (from tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/90/6b/ba04a9fe6aefa56adafa6b9e0557b959e423c49950527139cb8651b0480b/absl-py-0.2.0.tar.gz (82kB)
    100% |████████████████████████████████| 92kB 8.8MB/s 
Requirement already satisfied: setuptools in /opt/intel/intelpython3/lib/python3.6/site-packages (from protobuf>=3.4.0->tensorflow-gpu)
Requirement already satisfied: werkzeug>=0.11.10 in /opt/intel/intelpython3/lib/python3.6/site-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow-gpu)
Collecting bleach==1.5.0 (from tensorboard<1.9.0,>=1.8.0->tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/33/70/86c5fec937ea4964184d4d6c4f0b9551564f821e1c3575907639036d9b90/bleach-1.5.0-py2.py3-none-any.whl
Collecting markdown>=2.6.8 (from tensorboard<1.9.0,>=1.8.0->tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/6d/7d/488b90f470b96531a3f5788cf12a93332f543dbab13c423a5e7ce96a0493/Markdown-2.6.11-py2.py3-none-any.whl (78kB)
    100% |████████████████████████████████| 81kB 8.9MB/s 
Collecting html5lib==0.9999999 (from tensorboard<1.9.0,>=1.8.0->tensorflow-gpu)
  Downloading https://files.pythonhosted.org/packages/ae/ae/bcb60402c60932b32dfaf19bb53870b29eda2cd17551ba5639219fb5ebf9/html5lib-0.9999999.tar.gz (889kB)
    100% |████████████████████████████████| 890kB 1.7MB/s 
Building wheels for collected packages: gast, termcolor, absl-py, html5lib
  Running setup.py bdist_wheel for gast ... done
  Stored in directory: /home/vesso/.cache/pip/wheels/9a/1f/0e/3cde98113222b853e98fc0a8e9924480a3e25f1b4008cedb4f
  Running setup.py bdist_wheel for termcolor ... done
  Stored in directory: /home/vesso/.cache/pip/wheels/7c/06/54/bc84598ba1daf8f970247f550b175aaaee85f68b4b0c5ab2c6
  Running setup.py bdist_wheel for absl-py ... done
  Stored in directory: /home/vesso/.cache/pip/wheels/23/35/1d/48c0a173ca38690dd8dfccfa47ffc750db48f8989ed898455c
  Running setup.py bdist_wheel for html5lib ... done
  Stored in directory: /home/vesso/.cache/pip/wheels/50/ae/f9/d2b189788efcf61d1ee0e36045476735c838898eef1cad6e29
Successfully built gast termcolor absl-py html5lib
Installing collected packages: protobuf, gast, termcolor, html5lib, bleach, markdown, tensorboard, grpcio, astor, absl-py, tensorflow-gpu
Successfully installed absl-py-0.2.0 astor-0.6.2 bleach-1.5.0 gast-0.2.0 grpcio-1.11.0 html5lib-0.9999999 markdown-2.6.11 protobuf-3.5.2.post1 tensorboard-1.8.0 tensorflow-gpu-1.8.0 termcolor-1.1.0

NOTE: The files brought by the tensorflow-gpu installation to the local file system will be located under the ${HOME}/.local/lib/python3.6/site-packages/ directory!

8. Testing the CUDA and cuDNN installation

8.1. Testing if cuDNN library is loadable

That kind of test is very easy to perform. If it returns no error, that means the library libcudnn.so and all the symbols it brings can be successfully loaded by the Python 3 interpreter.

To perform the test create the Python 3 script:

import ctypes

t=ctypes.cdll.LoadLibrary("libcudnn.so")

print(t._name)

save it as a file under the name cudnn_loading_checker.py and then execute the script:

$ /opt/intel/intelpython3/bin/python3 cudnn_loading_checker.py

If libcudnn.so is successfully loaded, the script will print the name of the library file:

libcudnn.so

and raise an error message otherwise.

8.2. Testing the CUDA Python 3 integration by using Numba

Along with the other modules for scientific computing and data analysis, the Intel Python 3 package supplies Numba. To perform GPU computing based on CUDA, the Numba jit compiler requires the environment variables NUMBAPRO_NVVM and NUMBAPRO_LIBDEVICE, both properly declared before compiling any Python code containing GPU instructions. Those variables should point into the installation tree of the latest version of CUDA:

$ export NUMBAPRO_NVVM=/usr/local/cuda-9.1/nvvm/lib64/libnvvm.so.3.2.0
$ export NUMBAPRO_LIBDEVICE=/usr/local/cuda-9.1/nvvm/libdevice

It is highly recommended to declare these variables in the ${HOME}/.bashrc file.
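A sketch of making them persistent follows; the CUDA 9.1 paths are taken from the exports above and should be adjusted to the locally installed version:

```shell
# Append the Numba CUDA variables to ~/.bashrc so every new
# bash session picks them up automatically:
cat >> ~/.bashrc <<'EOF'
export NUMBAPRO_NVVM=/usr/local/cuda-9.1/nvvm/lib64/libnvvm.so.3.2.0
export NUMBAPRO_LIBDEVICE=/usr/local/cuda-9.1/nvvm/libdevice
EOF
```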

Once the variables are declared and loaded, execute the test script /opt/intel/intelpython3/lib/python3.6/site-packages/numba/cuda/tests/cudapy/test_matmul.py:

$ /opt/intel/intelpython3/bin/python3 /opt/intel/intelpython3/lib/python3.6/site-packages/numba/cuda/tests/cudapy/test_matmul.py

In case of successful execution the script will exit by displaying the message:

.
----------------------------------------------------------------------
Ran 1 test in 0.093s

OK

8.3. Testing the CUDA Python 3 integration by using tensorflow-gpu

A simple script for testing tensorflow-gpu can be found here:

https://github.com/yaroslavvb/stuff/blob/master/matmul_benchmark.py

It should be downloaded and then executed by using the Intel Python 3 interpreter:

$ /opt/intel/intelpython3/bin/python3 matmul_benchmark.py

and in case of successful execution the following result will appear on the screen:

/opt/intel/intelpython3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
2018-05-06 16:21:22.591713: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-05-06 16:21:22.684411: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-05-06 16:21:22.684824: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties: 
name: Quadro K620 major: 5 minor: 0 memoryClockRate(GHz): 1.124
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.92GiB
2018-05-06 16:21:22.684855: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-05-06 16:21:23.151861: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-06 16:21:23.151903: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929]      0 
2018-05-06 16:21:23.151916: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0:   N 
2018-05-06 16:21:23.152061: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1692 MB memory) -> physical GPU (device: 0, name: Quadro K620, pci bus id: 0000:01:00.0, compute capability: 5.0)

 8192 x 8192 matmul took: 1.34 sec, 817.99 G ops/sec


Using LDAPS (LDAP+TLS) from within the Sendmail configuration file

 

Content:

1. Introduction.

2. Installing and configuring OpenLDAP certificate database

3. SELinux configuration.

4. LDAP+TLS in sendmail.mc/sendmail.cf.

 

1. Introduction.

If one needs to implement LDAP+TLS to securely connect the sendmail daemon to an LDAP directory server, they need to enable and use the existing OpenLDAP integration of Sendmail. Most modern Linux distributions provide, as part of their package collections, Sendmail compiled with OpenLDAP integration. But when it comes to configuring Sendmail to connect to an LDAP server by securing the TCP session with TLS, it is very hard to find a useful example online. Almost all available examples explain how to configure Sendmail to use an LDAP server through a plain TCP session. The goal of this document is to explain how to do that configuration. The explanations below are 100% compatible with a Sendmail setup based on CentOS 7 or Red Hat Enterprise Linux 7, but they might be implemented on any other modern Linux distribution as well.

 

2. Installing and configuring OpenLDAP certificate database.

In CentOS 7 and Red Hat Enterprise Linux 7, the OpenLDAP clients use by default the configuration and certificate databases located in the directory /etc/openldap. That folder is supplied to the system by the package named openldap. In most cases, but also depending on the type of the installation, the package openldap should be present in the system by default. Nevertheless, one must check and verify that the package exists and is up to date (not keeping your system up to date is risky). If the package openldap is not present, install it by using yum:

# yum install openldap

If the installation is successful, the package will create the folders /etc/openldap and /etc/openldap/certs. The latter folder contains the NSS database:

/etc/openldap/certs/cert8.db
/etc/openldap/certs/key3.db
/etc/openldap/certs/password
/etc/openldap/certs/secmod.db

There, the file named "password" contains the password for unlocking the NSS database when accessing the stored private keys and passwords. The NSS database is created empty by default, which means that one must add to it at least the CA certificate that helps verify the validity of the X.509 certificate of the LDAP server Sendmail will be connected to. For example, if the CA X.509 certificate "COMODO RSA Certification Authority", stored in PEM format in the file /tmp/COMODO_RSA_Certification_Authority.crt, should be added to the NSS database and trusted, that can be done in the following way:

# cd /etc/openldap/certs
# certutil -A -d . -n "COMODO RSA Certification Authority" -a -i /tmp/COMODO_RSA_Certification_Authority.crt -t "CT,c,"

Please note that the use of NSS libraries with OpenLDAP is specific to CentOS 7 and Red Hat Enterprise Linux 7. Other Linux distributions might use OpenSSL libraries instead of NSS ones.

 

3. SELinux configuration.

By default, the Sendmail OpenLDAP client process cannot access the NSS certificate database of OpenLDAP. In order to make the access possible, one needs to set the SELinux boolean authlogin_nsswitch_use_ldap to true:

# setsebool -P authlogin_nsswitch_use_ldap 1

 

4. LDAP+TLS in sendmail.mc/sendmail.cf.

LDAP+TLS can be configured by using a specific URI format, "-H ldaps://hostname:port", where the port number is optional. Below is a detailed example in m4 format which needs to become part of the m4 Sendmail configuration file sendmail.mc:

define(`confLDAP_DEFAULT_SPEC', `-H ldaps://directory.example.com -b "o=example.com" -d "cn=sendmail,ou=Special Users,o=example.com" -M simple -P /etc/mail/password-sendmail.ldap')dnl

If one needs to specify the LDAP client settings directly in sendmail.cf, the following configuration line should be added there:

O LDAPDefaultSpec=-H ldaps://directory.example.com -b "o=example.com" -d "cn=sendmail,ou=Special Users,o=example.com" -M simple -P /etc/mail/password-sendmail.ldap
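The file referenced by the -P option is assumed here to hold the bind password in plain text, and it must not be world-readable. A hedged sketch of creating it follows; the password value is hypothetical and the /tmp path is used only to keep the snippet self-contained, whereas on the real system the file is /etc/mail/password-sendmail.ldap and the commands are run as root:

```shell
# Create the LDAP bind password file so it is readable only by its owner.
umask 077
printf '%s\n' 'ExampleBindPassword' > /tmp/password-sendmail.ldap
chmod 600 /tmp/password-sendmail.ldap
ls -l /tmp/password-sendmail.ldap
```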

IPv6 Deployment in Sofia University Network (2007-2013)


The slides were presented at the Third South East Europe (SEE 3)/RIPE NCC Regional Meeting (14-15 April 2014, Sofia, Bulgaria). You can download the presentation in PDF here: https://meetings.ripe.net/see3/files/Vasil_Kolev-The_IPv6_Deployment_in_Sofia_University.pdf

