Implementing LUKS Encryption on Software RAID Arrays with LVM2 Management

Abstract

This document describes the implementation of Linux Unified Key Setup (LUKS) encryption on software RAID arrays managed by Logical Volume Manager 2 (LVM2). The approach involves encrypting RAID partitions at the block device level, which provides superior security compared to encrypting logical volumes individually. By encrypting the underlying RAID partitions, the entire LVM2 structure including volume group metadata, physical extent allocation, and logical volume layout remains hidden from potential attackers. This method eliminates information leakage that could occur when encrypting only logical volumes, where LVM2 metadata and structure remain visible on unencrypted RAID arrays. The document covers implementation procedures, security considerations, performance implications, and operational best practices based on production deployment experience.

Keywords: LUKS, software RAID, LVM2, encryption, security, mdadm, block device encryption

1. Introduction

Encryption of storage systems is essential for protecting sensitive data in enterprise and high-performance computing environments. When implementing encryption on systems utilising software RAID arrays with LVM2 volume management, the choice of encryption layer significantly impacts both security posture and operational complexity.

LUKS provides a standardised on-disk format for encrypted block devices, offering multiple key slots, key derivation functions, and metadata management. The fundamental question in layered storage architectures is where to apply encryption: at the RAID partition level or at the logical volume level within LVM2.

This document demonstrates that encrypting RAID partitions before LVM2 configuration provides superior security by completely hiding the volume management structure. In contrast, encrypting logical volumes individually leaves LVM2 metadata, physical extent allocation patterns, and volume group information exposed on the underlying RAID arrays, potentially providing attack vectors through structure analysis.

2. Security Architecture: Encryption Layer Comparison

This section compares three approaches to implementing LUKS encryption in systems utilising software RAID and LVM2 volume management. Each approach provides different levels of security and operational complexity.

2.1 Logical Volume-Level Encryption

When LUKS encryption is applied to individual logical volumes within an LVM2-managed system, the encryption layer sits above the LVM2 abstraction. This configuration exposes significant structural information that could aid attackers through multiple channels. Volume group metadata stored on RAID arrays remains unencrypted, allowing attackers to analyse physical extent allocation patterns, logical volume names and sizes, volume group structure and organisation, and physical volume relationships. The extent size and allocation strategies are visible, as is snapshot metadata and relationships between logical volumes.

Attackers can leverage this exposed information through several attack vectors. Structure analysis can reveal the logical volume count and approximate sizes, providing insight into the storage organisation. Allocation patterns may indicate data distribution and usage patterns, potentially revealing which volumes contain more critical or frequently accessed data. Metadata analysis could reveal volume naming conventions and organisational structure, which might expose the purpose or sensitivity of different logical volumes. Physical extent mapping provides insight into data layout across the storage system, whilst snapshot metadata exposes backup and versioning strategies, potentially revealing information about data retention policies and backup frequencies.
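
To make this exposure concrete, the following read-only commands (a minimal sketch; the device layout and the volume group name vg_data are placeholders) recover the LVM2 layout from an unencrypted array attached to any Linux host, without any key material:

# Illustrative inspection of an unencrypted physical volume (names are examples)
pvs -o pv_name,vg_name,pv_size                # PV-to-VG mapping and sizes
lvs -o lv_name,lv_size,lv_attr,devices        # LV names, sizes, attributes and extent placement
vgcfgbackup -f /tmp/vg_metadata.txt vg_data   # dumps the full volume group metadata as readable text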

Architecture:

Physical Devices (/dev/sda, /dev/sdb)
    ↓
mdadm RAID Arrays (/dev/md0, /dev/md1) [UNENCRYPTED - LVM2 metadata visible]
    ↓
LVM2 Physical Volumes [UNENCRYPTED - structure visible]
    ↓
LVM2 Volume Group [UNENCRYPTED - metadata visible]
    ↓
LVM2 Logical Volumes [ENCRYPTED - data only]
    ↓
LUKS Encrypted Devices (/dev/mapper/lv_encrypted)
    ↓
Filesystems

2.2 RAID Partition-Level Encryption

Encrypting RAID partitions before LVM2 configuration hides the volume management structure. Software RAID arrays created with mdadm are block devices that can be encrypted directly, ensuring that all LVM2 metadata becomes encrypted and inaccessible to attackers. Physical extent allocation becomes invisible, logical volume structure is completely obscured, and volume group organisation remains hidden. This approach eliminates structural information leakage through metadata analysis, as attackers cannot examine the internal structure without first decrypting the RAID partitions.

The security advantages of this approach are substantial. Complete structure obfuscation prevents attack vector identification, as attackers cannot determine how the storage is organised internally. No information leakage occurs about logical volume count or sizes, making it impossible to infer the storage architecture. Allocation patterns remain encrypted and unanalysable, volume naming and organisational structure remain hidden, and snapshot and backup strategies cannot be inferred from metadata analysis.

Architecture:

Physical Devices (/dev/sda, /dev/sdb)
    ↓
mdadm RAID Arrays (/dev/md0, /dev/md1) [ENCRYPTED - everything hidden]
    ↓
LUKS Encrypted Devices (/dev/mapper/md0_crypt, /dev/mapper/md1_crypt)
    ↓
LVM2 Physical Volumes [ENCRYPTED - structure hidden]
    ↓
LVM2 Volume Group [ENCRYPTED - metadata hidden]
    ↓
LVM2 Logical Volumes [ENCRYPTED - structure hidden]
    ↓
Filesystems

However, certain considerations apply to this approach. RAID metadata remains visible at the physical device level through the mdadm superblock, which means an attacker can identify that RAID arrays exist, but cannot determine the LVM2 structure within those arrays. The RAID array structure, including the RAID level and device count, may be inferable from physical device analysis, but the internal LVM2 organisation remains completely hidden.
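
As an illustrative check (device names are examples), the following commands show what remains observable without the passphrase under this approach:

# RAID metadata is still readable on the member partitions
mdadm --examine /dev/sda1       # RAID level, member count, array UUID
# The array itself reveals only an opaque LUKS container
blkid /dev/md0                  # reports TYPE="crypto_LUKS" and nothing else
cryptsetup luksDump /dev/md0    # LUKS header parameters, but no LVM2 information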

2.3 LVM2 Physical Device Partition-Level Encryption

Applying LUKS encryption directly to physical device partitions before creating LVM2 physical volumes provides the highest level of structure obfuscation. Just as mdadm RAID arrays are block devices that can be encrypted before any higher layer is built on them, the same principle can be applied directly to the partitions that back LVM2 physical volumes:

Architecture:

Physical Devices (/dev/sda, /dev/sdb)
    ↓
Physical Partitions (/dev/sda1, /dev/sdb1) [ENCRYPTED - everything hidden]
    ↓
LUKS Encrypted Devices (/dev/mapper/sda1_crypt, /dev/mapper/sdb1_crypt)
    ↓
LVM2 Physical Volumes [ENCRYPTED - structure completely hidden]
    ↓
LVM2 Volume Group [ENCRYPTED - metadata completely hidden]
    ↓
LVM2 Logical Volumes [ENCRYPTED - structure completely hidden]
    ↓
Filesystems

This approach provides the highest level of security advantages. There is no indication that LVM2 infrastructure exists in the partitions, as physical device analysis reveals only encrypted partitions with no visible structure. If RAID is not being used, no RAID metadata is visible either. The result is complete obfuscation of volume management structure, with no way to determine that LVM2 is in use without first decrypting the partitions.

When comparing this approach with RAID partition-level encryption, the key difference is that LUKS on RAID partitions hides LVM2 structure but RAID structure may be detectable through mdadm superblocks on physical devices. In contrast, LUKS on LVM2 physical device partitions hides both LVM2 and any underlying structure completely, providing maximum obfuscation.

The choice between these approaches depends on specific requirements. Use RAID partition-level encryption when software RAID redundancy is required, when RAID structure visibility is acceptable (since RAID metadata exists on physical devices), when you need to hide LVM2 structure from attackers, and when operational simplicity with RAID management is preferred. Use LVM2 physical device partition-level encryption when maximum structure obfuscation is required, when no indication of volume management should be visible, when RAID is not required or when RAID structure should also be hidden, and when you are willing to manage encryption on individual partitions rather than RAID arrays.

2.4 Combined Approach: LUKS on RAID Partitions with LVM2

For systems requiring both RAID redundancy and maximum security, the recommended approach is LUKS on RAID partitions:

Architecture:

Physical Devices (/dev/sda, /dev/sdb, /dev/sdc, /dev/sdd)
    ↓
mdadm RAID Arrays (/dev/md0, /dev/md1) [ENCRYPTED]
    ↓
LUKS Encrypted Devices (/dev/mapper/md0_crypt, /dev/mapper/md1_crypt)
    ↓
LVM2 Physical Volumes [ENCRYPTED - structure hidden]
    ↓
LVM2 Volume Group [ENCRYPTED - metadata hidden]
    ↓
LVM2 Logical Volumes [ENCRYPTED - structure hidden]
    ↓
Filesystems

This approach provides RAID redundancy for data protection whilst maintaining complete LVM2 structure obfuscation. It offers operational flexibility with LVM2 management capabilities and represents a balanced approach between security and functionality, making it suitable for most production environments requiring both redundancy and encryption.

3. Implementation Procedure: LUKS on Physical Device Partitions

This section describes implementing LUKS encryption directly on physical device partitions before LVM2 configuration. This approach provides maximum structure obfuscation by ensuring no indication of LVM2 infrastructure exists in the encrypted partitions.

3.1 Understanding the Approach

When applying LUKS encryption to physical device partitions before LVM2 configuration, the LVM2 physical volumes are created on the decrypted device mapper devices. This ensures that physical device analysis reveals only encrypted partitions with no visible structure. No LVM2 metadata is visible without decryption, there is no indication that volume management structure exists, and the result is complete obfuscation of storage organisation.

Important Consideration: Boot Automation

The /boot partition cannot be encrypted because the bootloader and kernel must be accessible before encryption can be unlocked. This creates a challenge for automating LVM2 activation and filesystem mounting after LUKS devices are unlocked. Automation scripts and configuration must be stored in /boot, which remains unencrypted.

3.2 Prerequisites

System Requirements:

The system must run a Linux distribution with LUKS support, such as RHEL 9/10, Rocky Linux 9/10, or equivalent distributions. A CPU with memory encryption support (Intel TME or AMD SME) is mandatory, as LUKS keys stored in memory without encryption are vulnerable to cold boot attacks. Memory encryption must be enabled in BIOS/UEFI configuration and verified in the operating system. Physical device partitions must be prepared and not yet used for LVM2 configuration. A separate /boot partition is required and cannot be encrypted, as the bootloader and kernel must be accessible before encryption can be unlocked. Sufficient disk space must be available for LUKS metadata overhead, which typically requires 16-32 MB per encrypted device.
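
A quick preliminary check of these prerequisites might look as follows (a sketch; fuller memory-encryption verification is shown in Section 6.1):

# Confirm a LUKS2-capable cryptsetup and a CPU advertising memory encryption
cryptsetup --version                          # LUKS2 requires cryptsetup 2.x
grep -oE 'tme|sme' /proc/cpuinfo | sort -u    # prints tme and/or sme when the CPU supports it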

Software Packages:

# Install required packages
dnf install cryptsetup lvm2

Critical Security Requirement:

LUKS encryption keys are stored in system memory when devices are unlocked. Without CPU-based memory encryption (Intel TME or AMD SME), LUKS keys can be extracted through cold boot attacks. Do not implement LUKS encryption on systems without CPU memory encryption support.

3.3 Step-by-Step Implementation

Step 1: Prepare Physical Device Partitions

Ensure physical device partitions are created and ready for encryption:

# Display partition information
lsblk
parted /dev/sda print

# Verify partitions are not in use
mount | grep sda

Step 2: Create LUKS Encrypted Devices on Partitions

Create LUKS encrypted devices on each physical partition. This process will format the partitions, so ensure all data is backed up:

# Create LUKS encrypted device on first partition
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 \
    --hash sha512 --iter-time 5000 --pbkdf argon2id /dev/sda1

# Create LUKS encrypted device on second partition
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 \
    --hash sha512 --iter-time 5000 --pbkdf argon2id /dev/sdb1

# Repeat for all partitions in the configuration

Parameter Explanation:

  • --type luks2: Use LUKS2 format (recommended for new installations)
  • --cipher aes-xts-plain64: AES encryption in XTS mode (recommended for block devices)
  • --key-size 512: 512-bit key size (256 bits each for the XTS data and tweak keys)
  • --hash sha512: SHA-512 hash algorithm for key derivation
  • --iter-time 5000: 5 seconds for key derivation (adjust based on security requirements)
  • --pbkdf argon2id: Argon2id key derivation function (recommended)
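
After formatting, the header actually written to the device can be inspected to confirm that these parameters took effect (illustrative; device name as in the steps above):

# Review the LUKS2 header: cipher, key size, PBKDF type and costs, key slots
cryptsetup luksDump /dev/sda1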

Step 3: Open LUKS Encrypted Devices

Open the encrypted devices to create device mapper entries:

# Open encrypted device (will prompt for passphrase)
cryptsetup open /dev/sda1 sda1_crypt
cryptsetup open /dev/sdb1 sdb1_crypt

# Verify devices are available
ls -l /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt

Step 4: Create LVM2 Physical Volumes on Encrypted Devices

Create physical volumes on the decrypted device mapper devices:

# Create physical volumes with proper alignment
pvcreate --dataalignment 1M /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt

# Verify physical volume creation
pvs
pvdisplay /dev/mapper/sda1_crypt

Step 5: Create Volume Group and Logical Volumes

Create the volume group and logical volumes as normal, using the encrypted device mapper devices:

# Create volume group
vgcreate --physicalextentsize 1M vg_encrypted /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt

# Create logical volumes
lvcreate -L 100G -n lv_data vg_encrypted
lvcreate -L 50G -n lv_backup vg_encrypted

# Verify logical volume creation
lvs
lvdisplay

Step 6: Create Filesystems

Create filesystems on the logical volumes:

# Create XFS filesystem (example)
mkfs.xfs /dev/vg_encrypted/lv_data
mkfs.xfs /dev/vg_encrypted/lv_backup

# Or create ext4 filesystem
mkfs.ext4 /dev/vg_encrypted/lv_data
mkfs.ext4 /dev/vg_encrypted/lv_backup

Step 7: Configure Automatic Unlocking

Configure automatic unlocking of encrypted devices during system boot. Two approaches are available:

Option A: Keyfile-Based Unlocking (Recommended for Servers)

# Create a restricted keyfile directory, then generate a random keyfile
mkdir -p /etc/luks-keys
chmod 700 /etc/luks-keys
dd if=/dev/urandom of=/etc/luks-keys/sda1.key bs=512 count=8
chmod 400 /etc/luks-keys/sda1.key

# Add keyfile to LUKS device
cryptsetup luksAddKey /dev/sda1 /etc/luks-keys/sda1.key

# Create keyfile for second partition
dd if=/dev/urandom of=/etc/luks-keys/sdb1.key bs=512 count=8
chmod 400 /etc/luks-keys/sdb1.key
cryptsetup luksAddKey /dev/sdb1 /etc/luks-keys/sdb1.key

# Configure /etc/crypttab for automatic unlocking
cat >> /etc/crypttab << 'EOF'
sda1_crypt UUID=<sda1-uuid> /etc/luks-keys/sda1.key luks
sdb1_crypt UUID=<sdb1-uuid> /etc/luks-keys/sdb1.key luks
EOF

# Get UUIDs for crypttab
blkid /dev/sda1
blkid /dev/sdb1

# Update crypttab with actual UUIDs
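
As a hedged alternative to editing the placeholders by hand, the entries can be generated directly from the real UUIDs (partition names as used in the steps above):

# Append crypttab entries with the actual LUKS UUIDs
for part in sda1 sdb1; do
    echo "${part}_crypt UUID=$(blkid -s UUID -o value /dev/${part}) /etc/luks-keys/${part}.key luks"
done >> /etc/crypttab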

Option B: Passphrase-Based Unlocking (Interactive)

For systems requiring manual passphrase entry during boot:

# Configure /etc/crypttab (no keyfile)
cat >> /etc/crypttab << 'EOF'
sda1_crypt UUID=<sda1-uuid> none luks
sdb1_crypt UUID=<sdb1-uuid> none luks
EOF

Step 8: Configure Boot Automation for LVM2 Activation

Critical Challenge: /boot Partition Cannot Be Encrypted

The /boot partition must remain unencrypted because the bootloader and kernel must be accessible before encryption can be unlocked. This means that automation scripts and configuration for LVM2 activation after LUKS unlock must be stored in /boot, which is unencrypted.

Solution Options:

Option 1: Systemd Service-Based Activation (Recommended)

Systemd can automatically activate LVM2 volumes after LUKS devices are unlocked using systemd services:

# Enable LVM2 monitoring service
systemctl enable lvm2-monitor.service

# Autoactivation is performed by LVM2's udev rules and pvscan services, which detect
# physical volumes as soon as the decrypted device-mapper devices appear
# No additional configuration is needed with the standard systemd integration

Option 2: Custom systemd Service for LVM2 Activation

Create a custom systemd service that activates LVM2 after LUKS unlock:

# Create service file
cat > /etc/systemd/system/lvm2-activate.service << 'EOF'
[Unit]
Description=Activate LVM2 volumes after LUKS unlock
After=cryptsetup.target
Requires=cryptsetup.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/vgchange -ay
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

# Enable the service
systemctl enable lvm2-activate.service

Option 3: Initramfs Hook for Early Activation

For systems requiring LVM2 activation during early boot (before root filesystem mount):

# Create dracut configuration to include LUKS and LVM2 support in the initramfs
cat > /etc/dracut.conf.d/90-lvm2-activate.conf << 'EOF'
# Ensure the crypt and lvm dracut modules and the device-mapper driver are included
add_dracutmodules+=" crypt lvm "
add_drivers+=" dm-mod "
EOF

# Update initramfs
dracut --force

Security Implications of Boot Automation:

The unencrypted nature of the /boot partition creates security implications for boot automation. Automation scripts in /boot are unencrypted and visible to anyone with access to the partition, which means scripts may reveal LVM2 volume names and structure. To minimise information leakage, consider using generic volume names that do not reveal the purpose or content of logical volumes. Scripts should not contain sensitive information or passphrases, as they are stored in an unencrypted location. Keyfiles should be stored separately from automation scripts, ideally on encrypted devices or secure key management systems, to prevent key exposure if the /boot partition is accessed.

Minimising Information Leakage:

# Use generic volume group and logical volume names
# Instead of: vg_production_data, lv_customer_database
# Use: vg0, lv0, lv1, etc.

# Example with generic names
vgcreate vg0 /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt
lvcreate -L 100G -n lv0 vg0
lvcreate -L 50G -n lv1 vg0

Step 9: Configure Filesystem Mounting

Configure automatic filesystem mounting in /etc/fstab:

# Add entries to /etc/fstab
cat >> /etc/fstab << 'EOF'
/dev/vg_encrypted/lv_data /mnt/data xfs defaults 0 2
/dev/vg_encrypted/lv_backup /mnt/backup xfs defaults 0 2
EOF

# Create mount points
mkdir -p /mnt/data /mnt/backup

Note: If using generic volume names, update fstab accordingly:

# With generic names
cat >> /etc/fstab << 'EOF'
/dev/vg0/lv0 /mnt/data xfs defaults 0 2
/dev/vg0/lv1 /mnt/backup xfs defaults 0 2
EOF

Step 10: Update initramfs

Update the initial RAM filesystem to include LUKS and LVM2 support:

# Update initramfs (RHEL/Rocky Linux)
dracut --force

# Or for systems using mkinitcpio
mkinitcpio -P

Step 11: Verify Configuration

Reboot the system and verify that encrypted devices unlock automatically and filesystems mount correctly:

# After reboot, verify encrypted devices are open
lsblk
dmsetup ls

# Verify LVM2 volumes are available
pvs
vgs
lvs

# Verify filesystems are mounted
mount | grep vg
df -h

4. Implementation Procedure: LUKS on RAID Partitions

This section describes implementing LUKS encryption on software RAID partitions before LVM2 configuration. This approach provides RAID redundancy whilst hiding the LVM2 structure.

4.1 Prerequisites

System Requirements:

  • Linux distribution with LUKS support (RHEL 9/10, Rocky Linux 9/10, or equivalent)
  • CPU with memory encryption support (Intel TME or AMD SME) - MANDATORY
  • Memory encryption enabled in BIOS/UEFI
  • Software RAID arrays created with mdadm
  • Sufficient disk space for LUKS metadata overhead (typically 16-32 MB per encrypted device)

Software Packages:

# Install required packages
dnf install cryptsetup lvm2 mdadm

Critical Security Requirement:

LUKS encryption keys are stored in system memory when devices are unlocked. Without CPU-based memory encryption (Intel TME or AMD SME), LUKS keys can be extracted through cold boot attacks. Do not implement LUKS encryption on systems without CPU memory encryption support.

4.2 Step-by-Step Implementation

Step 1: Verify RAID Array Configuration

Before implementing encryption, verify the existing RAID array configuration:

# Display RAID array status
cat /proc/mdstat

# Display detailed RAID information
mdadm --detail /dev/md0
mdadm --detail /dev/md1

# Verify array is not in use
# Ensure no filesystems are mounted from the arrays
mount | grep md

Step 2: Create LUKS Encrypted Devices on RAID Partitions

Create LUKS encrypted devices on each RAID array partition. This process will format the devices, so ensure all data is backed up:

# Create LUKS encrypted device on first RAID array
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 \
    --hash sha512 --iter-time 5000 --pbkdf argon2id /dev/md0

# Create LUKS encrypted device on second RAID array
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 \
    --hash sha512 --iter-time 5000 --pbkdf argon2id /dev/md1

# Repeat for all RAID arrays in the configuration

Parameter Explanation:

  • --type luks2: Use LUKS2 format (recommended for new installations)
  • --cipher aes-xts-plain64: AES encryption in XTS mode (recommended for block devices)
  • --key-size 512: 512-bit key size (256 bits each for the XTS data and tweak keys)
  • --hash sha512: SHA-512 hash algorithm for key derivation
  • --iter-time 5000: 5 seconds for key derivation (adjust based on security requirements)
  • --pbkdf argon2id: Argon2id key derivation function (recommended)
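
To confirm the resulting header and to gauge the unlock cost chosen via --iter-time, the following checks can be used (a sketch; the interactive prompt adds typing time to the measurement):

# Review the header written to the RAID array and time a passphrase check
cryptsetup luksDump /dev/md0
time cryptsetup open --test-passphrase /dev/md0   # verifies the passphrase without mapping the device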

Step 3: Open LUKS Encrypted Devices

Open the encrypted devices to create device mapper entries:

# Open encrypted device (will prompt for passphrase)
cryptsetup open /dev/md0 md0_crypt
cryptsetup open /dev/md1 md1_crypt

# Verify devices are available
ls -l /dev/mapper/md0_crypt /dev/mapper/md1_crypt

Step 4: Create LVM2 Physical Volumes on Encrypted Devices

Create physical volumes on the decrypted device mapper devices:

# Create physical volumes with proper alignment
pvcreate --dataalignment 1M /dev/mapper/md0_crypt /dev/mapper/md1_crypt

# Verify physical volume creation
pvs
pvdisplay /dev/mapper/md0_crypt

Step 5: Create Volume Group and Logical Volumes

Create the volume group and logical volumes as normal, using the encrypted device mapper devices:

# Create volume group
vgcreate --physicalextentsize 1M vg_encrypted /dev/mapper/md0_crypt /dev/mapper/md1_crypt

# Create logical volumes
lvcreate -L 100G -n lv_data vg_encrypted
lvcreate -L 50G -n lv_backup vg_encrypted

# Verify logical volume creation
lvs
lvdisplay

Step 6: Create Filesystems

Create filesystems on the logical volumes:

# Create XFS filesystem (example)
mkfs.xfs /dev/vg_encrypted/lv_data
mkfs.xfs /dev/vg_encrypted/lv_backup

# Or create ext4 filesystem
mkfs.ext4 /dev/vg_encrypted/lv_data
mkfs.ext4 /dev/vg_encrypted/lv_backup

Step 7: Configure Automatic Unlocking

Configure automatic unlocking of encrypted devices during system boot. Two approaches are available:

Option A: Keyfile-Based Unlocking (Recommended for Servers)

# Create a restricted keyfile directory, then generate a random keyfile
mkdir -p /etc/luks-keys
chmod 700 /etc/luks-keys
dd if=/dev/urandom of=/etc/luks-keys/md0.key bs=512 count=8
chmod 400 /etc/luks-keys/md0.key

# Add keyfile to LUKS device
cryptsetup luksAddKey /dev/md0 /etc/luks-keys/md0.key

# Create keyfile for second array
dd if=/dev/urandom of=/etc/luks-keys/md1.key bs=512 count=8
chmod 400 /etc/luks-keys/md1.key
cryptsetup luksAddKey /dev/md1 /etc/luks-keys/md1.key

# Configure /etc/crypttab for automatic unlocking
cat >> /etc/crypttab << 'EOF'
md0_crypt UUID=<md0-uuid> /etc/luks-keys/md0.key luks
md1_crypt UUID=<md1-uuid> /etc/luks-keys/md1.key luks
EOF

# Get UUIDs for crypttab
blkid /dev/md0
blkid /dev/md1

# Update crypttab with actual UUIDs

Option B: Passphrase-Based Unlocking (Interactive)

For systems requiring manual passphrase entry during boot:

# Configure /etc/crypttab (no keyfile)
cat >> /etc/crypttab << 'EOF'
md0_crypt UUID=<md0-uuid> none luks
md1_crypt UUID=<md1-uuid> none luks
EOF

Step 8: Configure Filesystem Mounting

Configure automatic filesystem mounting in /etc/fstab:

# Add entries to /etc/fstab
cat >> /etc/fstab << 'EOF'
/dev/vg_encrypted/lv_data /mnt/data xfs defaults 0 2
/dev/vg_encrypted/lv_backup /mnt/backup xfs defaults 0 2
EOF

# Create mount points
mkdir -p /mnt/data /mnt/backup

Step 9: Update initramfs

Update the initial RAM filesystem to include LUKS and LVM2 support:

# Update initramfs (RHEL/Rocky Linux)
dracut --force

# Or for systems using mkinitcpio
mkinitcpio -P

Step 10: Verify Configuration

Reboot the system and verify that encrypted devices unlock automatically and filesystems mount correctly:

# After reboot, verify encrypted devices are open
lsblk
dmsetup ls

# Verify LVM2 volumes are available
pvs
vgs
lvs

# Verify filesystems are mounted
mount | grep vg_encrypted
df -h

5. Boot Automation and `/boot` Partition Considerations

5.1 The `/boot` Partition Challenge

The /boot partition cannot be encrypted because the bootloader (GRUB2, systemd-boot, or equivalent) and kernel must be accessible before encryption can be unlocked. This creates a fundamental challenge for automating LVM2 activation and filesystem mounting after LUKS devices are unlocked.

Key Constraints:

The /boot partition presents several constraints that must be addressed. The bootloader must read the kernel and initramfs from an unencrypted /boot partition, as the bootloader cannot decrypt data before the kernel is loaded. The kernel and initramfs must contain tools to unlock LUKS devices, requiring that cryptsetup and related tools are included in the initramfs. Automation scripts for LVM2 activation must be accessible before the root filesystem is mounted, which means they must be stored in locations accessible during early boot. Configuration files such as crypttab and fstab are typically located in /etc, which may be on an encrypted root filesystem, requiring special handling during boot.
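
The ordering between unlocking and filesystem mounting can be inspected with standard systemd tooling (illustrative commands):

# Show which unlock units local filesystems wait on, and the measured boot-time chain
systemctl list-dependencies cryptsetup.target
systemd-analyze critical-chain local-fs.target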

Information Leakage Risks:

The unencrypted nature of the /boot partition creates information leakage risks that must be considered. Automation scripts in /boot are unencrypted and visible to anyone with access to the partition, potentially revealing LVM2 volume names, structure, or organisation. Configuration files may expose volume group and logical volume names, providing information about the storage architecture. The boot process itself may reveal encryption setup and structure through log messages, error output, or configuration file contents that are visible during boot.

5.2 Solutions for Boot Automation

Solution 1: Systemd Automatic Activation (Recommended)

Systemd provides built-in support for automatic LVM2 activation after LUKS devices are unlocked:

# Enable LVM2 monitoring service (usually enabled by default)
systemctl enable lvm2-monitor.service

# Verify service status
systemctl status lvm2-monitor.service

# Autoactivation is performed by LVM2's udev rules and pvscan services, which detect
# physical volumes as soon as the decrypted device-mapper devices appear
# No additional configuration is needed with the standard systemd integration

Advantages:

  • No custom scripts required
  • Standard systemd integration
  • Automatic activation after LUKS unlock
  • Minimal information leakage (uses standard systemd services)

Solution 2: Initramfs Integration

For systems requiring LVM2 activation during early boot (before root filesystem mount), integrate LVM2 tools into initramfs:

# Ensure LVM2 tools are included in initramfs
cat > /etc/dracut.conf.d/90-lvm2.conf << 'EOF'
# Include the lvm and crypt dracut modules plus the device-mapper driver
add_dracutmodules+=" lvm crypt "
add_drivers+=" dm-mod "
EOF

# Update initramfs
dracut --force

# Verify LVM2 support in initramfs
lsinitrd | grep lvm

Solution 3: Custom systemd Service

Create a custom systemd service that activates LVM2 after LUKS unlock:

# Create service file
cat > /etc/systemd/system/lvm2-activate.service << 'EOF'
[Unit]
Description=Activate LVM2 volumes after LUKS unlock
After=cryptsetup.target
Requires=cryptsetup.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/vgchange -ay
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

# Enable the service
systemctl enable lvm2-activate.service

Solution 4: Initramfs Hook Script

For maximum control, create an initramfs hook that activates LVM2 during early boot:

# Create the initramfs hook script
cat > /usr/lib/dracut/hooks/pre-mount/99-lvm2-activate.sh << 'EOF'
#!/bin/bash
# Activate LVM2 volumes after LUKS devices are unlocked
if command -v vgchange >/dev/null 2>&1; then
    vgchange -ay
fi
EOF

chmod +x /usr/lib/dracut/hooks/pre-mount/99-lvm2-activate.sh

# Ensure the hook is actually packed into the initramfs (dracut does not pick up host
# hook directories automatically); install_items or a custom dracut module can do this
echo 'install_items+=" /usr/lib/dracut/hooks/pre-mount/99-lvm2-activate.sh "' \
    > /etc/dracut.conf.d/99-lvm2-activate.conf

# Update initramfs
dracut --force

5.3 Minimising Information Leakage from `/boot`

Strategy 1: Generic Naming Conventions

Use generic, non-descriptive names for volume groups and logical volumes:

# Instead of descriptive names:
# vg_production_data, lv_customer_database, lv_financial_records

# Use generic names:
vgcreate vg0 /dev/mapper/sda1_crypt /dev/mapper/sdb1_crypt
lvcreate -L 100G -n lv0 vg0
lvcreate -L 50G -n lv1 vg0
lvcreate -L 200G -n lv2 vg0

Strategy 2: Minimal Configuration in /boot

Keep automation scripts minimal and avoid storing sensitive information:

# Avoid storing in /boot:
# - Volume names or descriptions
# - Passphrases or key material
# - Detailed structure information
# - Organisational details

# Store only essential automation:
# - Generic activation commands
# - Standard systemd service files
# - Minimal configuration

Strategy 3: Encrypted Keyfiles Outside /boot

Store keyfiles on separate encrypted devices or secure key management systems:

# Store keyfiles on separate encrypted device (not in /boot)
# Example: USB key with encrypted keyfile storage
# Or: Network-based key management system
# Or: Hardware security module (HSM)

# Reference keyfiles from secure location
cat >> /etc/crypttab << 'EOF'
sda1_crypt UUID=<uuid> /secure/path/to/keyfile luks
EOF

Strategy 4: Custom Initramfs with Minimal Information

Create a custom initramfs that minimises exposed information:

# Create minimal initramfs configuration
cat > /etc/dracut.conf.d/99-minimal-info.conf << 'EOF'
# Minimise information in initramfs
omit_drivers+=" "
compress="lz4"
EOF

# Rebuild initramfs
dracut --force

5.4 Trade-offs and Recommendations

For Maximum Security:

  • Use generic volume names (vg0, lv0, lv1, etc.)
  • Minimise automation scripts in /boot
  • Store keyfiles separately from /boot
  • Use systemd automatic activation (minimal customisation)
  • Consider network-based key management

For Operational Simplicity:

  • Use systemd built-in LVM2 activation
  • Standard naming conventions acceptable
  • Keyfiles in /etc/luks-keys (encrypted root filesystem)
  • Standard initramfs configuration

For Early Boot Activation:

  • Use initramfs hooks for LVM2 activation
  • Ensure LVM2 tools in initramfs
  • Test boot process thoroughly
  • Document activation sequence

5.5 Verification and Testing

Test Boot Automation:

# Test LUKS unlock and LVM2 activation
systemctl restart systemd-cryptsetup@md0_crypt.service
vgchange -ay

# Verify volumes are activated
lvs
pvs

# Test filesystem mounting
mount /dev/vg0/lv0 /mnt/test
df -h /mnt/test
umount /mnt/test

Monitor Boot Process:

# Enable the early debug shell on tty9 for boot troubleshooting (disable it again afterwards)
systemctl enable debug-shell.service

# Check boot logs
journalctl -b | grep -i lvm
journalctl -b | grep -i cryptsetup
journalctl -b | grep -i vgchange

6. Security Considerations

6.1 CPU Memory Encryption Requirement

Critical Security Requirement:

LUKS encryption keys are stored in system memory (RAM) when encrypted devices are unlocked. Without CPU-based memory encryption, these keys can be extracted through cold boot attacks, completely compromising the encryption.

Cold Boot Attack Vulnerability:

The cold boot attack vulnerability represents a critical security concern for LUKS encryption. Attackers with physical access can extract LUKS keys from memory, as memory contents persist for seconds to minutes after power loss. Keys can be extracted by rapidly rebooting the system and capturing memory contents before they decay. This vulnerability has been demonstrated through successful extraction of LUKS keys from unencrypted memory in research studies, making it a real and practical threat.

Mandatory Requirements:

To mitigate the cold boot attack vulnerability, specific hardware and configuration requirements must be met. Systems must use either Intel processors with TME (Total Memory Encryption) support or AMD processors with SME (Secure Memory Encryption) support. Memory encryption must be enabled in BIOS/UEFI configuration, and memory encryption activation must be verified in the operating system to ensure it is functioning correctly.

Verification:

# Check Intel TME status
dmesg | grep -i tme
cat /proc/cpuinfo | grep tme

# Check AMD SME status
dmesg | grep -i sme
cat /proc/cpuinfo | grep sme

# Verify memory encryption is active
dmesg | grep -i "memory encryption"

Without CPU memory encryption, LUKS encryption should not be used for sensitive data.

6.2 Key Management

Key Storage:

Keyfiles should be stored on separate encrypted devices or secure key management systems to prevent key loss if the primary storage system fails. Keyfiles must have restrictive permissions, typically 400 or 600, to prevent unauthorised access. In enterprise environments, consider using hardware security modules (HSM) for key storage, which provide tamper-resistant hardware-based key protection. For long-term deployments, implement key rotation procedures to periodically update encryption keys and maintain security over extended periods.
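
A minimal key-rotation sketch for a keyfile-unlocked device might look as follows (paths and device names are examples; the keyfile path referenced in /etc/crypttab is kept unchanged):

# Generate the replacement keyfile
dd if=/dev/urandom of=/etc/luks-keys/md0.key.new bs=512 count=8
chmod 400 /etc/luks-keys/md0.key.new

# Authorise with the current keyfile and enrol the new key in a free slot
cryptsetup luksAddKey --key-file /etc/luks-keys/md0.key /dev/md0 /etc/luks-keys/md0.key.new

# Remove the slot holding the old key, then move the new keyfile into place
cryptsetup luksRemoveKey /dev/md0 /etc/luks-keys/md0.key
mv /etc/luks-keys/md0.key.new /etc/luks-keys/md0.key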

Key Backup:

LUKS headers should be backed up separately from encrypted data, as header corruption can prevent device unlocking even with correct keys. Header backups should be stored in secure, physically separate locations to protect against site-level disasters. Key recovery procedures must be documented in detail to ensure they can be executed correctly during recovery operations. Key recovery procedures should be tested regularly to verify their effectiveness and ensure administrators are familiar with the process.

Header Backup:

# Backup LUKS header (critical for recovery)
cryptsetup luksHeaderBackup /dev/md0 --header-backup-file /secure/location/md0.header

# Restore LUKS header if needed
cryptsetup luksHeaderRestore /dev/md0 --header-backup-file /secure/location/md0.header

6.3 Physical Security

Device Security:

Encrypted devices provide protection against data extraction when powered off, as the encryption layer prevents access to data without the appropriate keys. Physical access to encrypted devices does not compromise data without keys, making encryption an effective protection against theft or unauthorised physical access. However, physical security measures should still be considered for servers and storage systems to prevent tampering and ensure system integrity. Secure disposal procedures must be implemented for decommissioned devices to ensure that encryption keys are properly destroyed and devices cannot be used to access data after decommissioning.
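
For secure disposal, one hedged approach is to destroy the key slots so the remaining ciphertext is unrecoverable (device name is an example):

# Irreversibly remove every LUKS key slot, then clear remaining signatures
cryptsetup luksErase /dev/md0
wipefs -a /dev/md0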

6.4 Operational Security

Access Control:

Root access to systems with encrypted storage should be limited to authorised personnel only, as root access provides the ability to unlock encrypted devices and access all data. Role-based access control should be implemented for storage administration, ensuring that only authorised individuals can perform encryption-related operations. Access to encrypted devices and key management systems should be monitored continuously to detect unauthorised access attempts. All encryption-related operations should be logged comprehensively to provide an audit trail of who accessed encrypted storage and when.

Audit and Monitoring:

# Enable audit logging for cryptsetup operations
auditctl -w /usr/sbin/cryptsetup -p x -k cryptsetup_operations

# Monitor LUKS device status
systemctl status systemd-cryptsetup@md0_crypt.service

7. Performance Implications

7.1 Encryption Overhead

CPU Impact:

LUKS encryption adds CPU overhead for encryption and decryption operations, as all data must be processed through the encryption layer. Modern CPUs with AES-NI (Advanced Encryption Standard New Instructions) provide hardware acceleration for AES encryption operations, significantly reducing the performance impact. The performance impact typically ranges from 5-15% depending on workload characteristics, with sequential I/O operations showing lower overhead than random I/O operations due to better CPU cache utilisation and prefetching effectiveness.
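
Whether the CPU offers AES-NI acceleration can be checked directly (illustrative; the aesni_intel module also serves AMD processors that implement the AES instructions):

# Confirm the AES instruction set and the loaded kernel crypto driver
grep -m1 -ow aes /proc/cpuinfo
lsmod | grep aesni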

I/O Performance:

Encryption adds latency to I/O operations because data must be encrypted during writes and decrypted during reads before it can be processed by the filesystem layer. Throughput may be reduced, particularly for random I/O patterns where the encryption overhead becomes more significant relative to the I/O operation size. Sequential operations benefit from CPU caching and prefetching, which can mask some of the encryption overhead. RAID arrays help distribute encryption load across multiple devices, allowing parallel encryption operations and improving overall throughput.

7.2 Performance Optimisation

Cipher Selection:

AES-XTS provides good performance with hardware acceleration through AES-NI instructions, making it the recommended choice for most deployments. Cipher performance characteristics should be considered for specific workloads, as different ciphers may perform better or worse depending on the I/O pattern and data characteristics. If performance is critical, different cipher options should be benchmarked to determine which provides the best balance between security and performance for the specific workload.
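
cryptsetup ships a built-in benchmark that can be used to compare candidate ciphers and key sizes on the target hardware before committing to a configuration (illustrative invocations):

# Measure in-kernel cipher throughput for the candidate configurations
cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512
cryptsetup benchmark --cipher aes-xts-plain64 --key-size 256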

Key Derivation:

  • Argon2id provides strong security but higher CPU cost during unlock
  • Adjust --iter-time parameter based on security vs performance requirements
  • Longer iteration times increase security but slow device unlocking

Alignment Considerations:

  • Ensure proper alignment across encryption, RAID, and LVM2 layers
  • Misalignment can cause additional performance degradation
  • Follow alignment procedures from LVM2 over RAID documentation

7.3 Benchmarking

Performance Testing:

# Benchmark encrypted device performance
fio --name=test --filename=/dev/mapper/md0_crypt --size=10G \
    --ioengine=libaio --direct=1 --rw=read --bs=4k --iodepth=32

# Compare with unencrypted performance
fio --name=test --filename=/dev/md0 --size=10G \
    --ioengine=libaio --direct=1 --rw=read --bs=4k --iodepth=32

8. Operational Procedures

8.1 Replacing Failed Disks with LUKS at Partition Level

Replacing a failed disk when LUKS encryption is implemented at the partition level (before LVM2) introduces significant complexity compared to unencrypted storage systems. The encryption layer must be recreated on the replacement disk, and the LVM2 physical volume must be restored or rebuilt within the encrypted device.

Complexity Factors:

Several factors contribute to the complexity of disk replacement when LUKS encryption is implemented at the partition level. The early encryption layer means that LUKS encryption occurs before the kernel discovers LVM2 structure, requiring manual intervention during boot or recovery operations. LVM2 metadata location presents another challenge, as the metadata is stored within the encrypted partition, making it inaccessible until the partition is decrypted. The volume group must recognize the replacement physical volume after decryption, which may require restoring metadata or recreating the physical volume structure. System boot dependencies add further complexity, as system boot may depend on encrypted volumes, requiring careful recovery procedures to ensure the system can start successfully. Finally, key management becomes critical, as LUKS keys must be available to unlock the replacement device, which may require secure key storage and access procedures.

Prerequisites for Disk Replacement:

Before attempting disk replacement, several prerequisites must be met. LUKS keyfile or passphrase must be available to unlock the replacement device. A backup of the LUKS header is highly recommended, as it can significantly simplify recovery if the header becomes corrupted. Administrators must have a clear understanding of which physical volume failed, which requires monitoring and diagnostic capabilities. A replacement disk of equal or larger capacity must be available and prepared. If the root filesystem is affected by the disk failure, the system must be bootable from an alternative source, such as rescue media, network boot, or an alternative boot device.

Scenario 1: Failed Disk in Non-RAID Configuration (LUKS on Physical Partitions)

Step 1: Identify the Failed Disk

# Check LVM2 physical volume status
pvs

# Example output showing failed PV:
#   PV                  VG     Fmt  Attr PSize   PFree
#   /dev/mapper/sda1_crypt vg0   lvm2 a--  500.00g    0
#   /dev/mapper/sdb1_crypt vg0   lvm2 a--  500.00g    0
#   /dev/mapper/sdc1_crypt vg0   lvm2 a--  500.00g  missing

# Check which encrypted device corresponds to failed PV
lsblk

# Identify the underlying physical device
# If /dev/mapper/sdc1_crypt is missing, check /dev/sdc1
cryptsetup status sdc1_crypt

# Check physical device status
dmesg | grep -i sdc
smartctl -a /dev/sdc

Step 2: Prepare Replacement Disk

# Identify replacement device (e.g., /dev/sdd)
NEW_DEVICE="/dev/sdd"

# Create partition matching original configuration
# Method 1: Using sgdisk for GPT disks (recommended, most reliable)
# Backup partition table from healthy disk
sgdisk --backup=/tmp/partition_table_backup.sgdisk /dev/sda

# Restore partition table to new disk
sgdisk --load-backup=/tmp/partition_table_backup.sgdisk /dev/sdd

# Randomise disk and partition GUIDs so they do not duplicate the source disk
sgdisk --randomize-guids /dev/sdd

# Verify partition table restoration
sgdisk --verify /dev/sdd
parted /dev/sdd print

# Method 2: Using parted to dump and manually recreate
# Dump partition layout to text file for reference
parted /dev/sda unit s print > /tmp/partition_table.txt

# View the partition information
cat /tmp/partition_table.txt

# Manually recreate using parted (if sgdisk method not available)
# First, determine partition table type from original disk
parted /dev/sda print | grep "Partition Table"

# Create matching partition table type on new disk
# For GPT (most common on modern systems):
parted /dev/sdd mklabel gpt

# For MSDOS (legacy systems):
# parted /dev/sdd mklabel msdos

# Create partition matching original
# Get start and end sectors from /tmp/partition_table.txt
# Example: parted /dev/sdd unit s mkpart primary 2048s 100%

# Set partition type/flag if needed
# For Linux LVM on GPT:
parted /dev/sdd set 1 lvm on

# Verify partition creation
parted /dev/sdd print
lsblk /dev/sdd

Step 3: Create LUKS Encryption on Replacement Partition

# Create LUKS device with same parameters as original
# IMPORTANT: Use the same cipher, key-size, and parameters
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 \
    --hash sha512 --iter-time 5000 --pbkdf argon2id /dev/sdd1

# Add a keyfile or passphrase for automatic unlocking
# If using a dedicated keyfile, generate it first:
dd if=/dev/urandom of=/etc/luks-keys/sdd1.key bs=512 count=8
chmod 400 /etc/luks-keys/sdd1.key
cryptsetup luksAddKey /dev/sdd1 /etc/luks-keys/sdd1.key

# Or use the same keyfile as other partitions (if sharing keys)
cryptsetup luksAddKey /dev/sdd1 /etc/luks-keys/sda1.key

Step 4: Open Encrypted Device

# Open the encrypted device
cryptsetup open /dev/sdd1 sdd1_crypt

# Verify device is available
ls -l /dev/mapper/sdd1_crypt

Step 5: Restore LVM2 Physical Volume

Option A: If LVM2 Metadata Exists on Other PVs (Recommended)

# Activate volume group (should detect new PV automatically)
vgchange -ay vg0

# Check if new PV is recognized
pvs

# If PV is not automatically recognized, scan for it
pvscan

# If still not recognised, recreate the PV label with its original UUID and restore the metadata
# A metadata backup taken before the failure (for example from /etc/lvm/backup or /etc/lvm/archive)
# is preferable to one generated from the already-degraded volume group
vgcfgbackup -f /tmp/vg0_backup.txt vg0

# Recreate the physical volume with the original UUID recorded in the backup
pvcreate --restorefile /tmp/vg0_backup.txt --uuid <original-pv-uuid> /dev/mapper/sdd1_crypt

# Restore the volume group metadata onto the recreated physical volume
vgcfgrestore -f /tmp/vg0_backup.txt vg0

# The UUID can be found in the backup file or in earlier pvs output

Option B: Recreate Physical Volume and Extend Volume Group

# Create new physical volume
pvcreate /dev/mapper/sdd1_crypt

# Extend volume group to include new PV
vgextend vg0 /dev/mapper/sdd1_crypt

# Verify volume group status
vgs
pvs

Step 6: Restore Data to Replacement Physical Volume

If Using RAID1 (Mirroring):

# If the volume group uses mirroring, data will sync automatically
# Monitor sync progress
lvdisplay vg0/lv0

# Check sync status
lvs -a -o name,copy_percent,devices vg0

If Not Using Mirroring (Data Loss Risk):

# If data was not mirrored, you may need to restore from backup
# This is why RAID or mirroring is recommended for encrypted storage

# If logical volumes span multiple PVs, you may need to:
# 1. Reduce logical volume size to exclude failed PV
# 2. Restore data from backup
# 3. Extend logical volume to include new PV

Step 7: Update System Configuration

# Update /etc/crypttab to include new device
# Get UUID of new encrypted partition
blkid /dev/sdd1

# Add entry to /etc/crypttab
cat >> /etc/crypttab << 'EOF'
sdd1_crypt UUID=<sdd1-uuid> /etc/luks-keys/sdd1.key luks
EOF

# Update initramfs
dracut --force

Step 8: Verify System Boot

# Reboot system and verify:
# 1. Encrypted device unlocks automatically
# 2. LVM2 volumes are activated
# 3. Filesystems mount correctly

# After reboot, verify:
lsblk
pvs
vgs
lvs
mount | grep vg0

Scenario 2: Failed Disk in RAID Configuration (LUKS on RAID Partitions)

When LUKS is implemented on RAID partitions, the replacement procedure involves both RAID array management and LUKS encryption:

Step 1: Identify Failed Disk in RAID Array

# Check RAID array status
cat /proc/mdstat

# Example showing failed disk:
# md0 : active raid1 sda1[0](F) sdb1[1]
#       976630528 blocks [2/1] [U_]

# Get detailed RAID information
mdadm --detail /dev/md0

# Identify which physical device failed
mdadm --examine /dev/sda1
mdadm --examine /dev/sdb1

Step 2: Remove Failed Disk from RAID Array

# Mark failed disk (if not already marked)
mdadm --manage /dev/md0 --fail /dev/sda1

# Remove failed disk from array
mdadm --manage /dev/md0 --remove /dev/sda1

# Verify removal
cat /proc/mdstat

Step 3: Replace Physical Disk

# Physically replace the disk
# Ensure new disk is detected
lsblk

# New disk should appear (e.g., /dev/sdc)
NEW_DEVICE="/dev/sdc"

Step 4: Prepare Replacement Disk for RAID

# Copy partition table from healthy RAID member
# Method 1: Using sgdisk for GPT disks (recommended, most reliable)
sgdisk --backup=/tmp/raid_partition_backup.sgdisk /dev/sdb
sgdisk --load-backup=/tmp/raid_partition_backup.sgdisk /dev/sdc

# Randomise disk and partition GUIDs so they do not duplicate the source disk
sgdisk --randomize-guids /dev/sdc

# Verify partition table restoration
sgdisk --verify /dev/sdc
parted /dev/sdc print

# Method 2: Using parted to dump and manually recreate
# Dump partition information from healthy disk for reference
parted /dev/sdb unit s print > /tmp/raid_partition_info.txt

# View partition information
cat /tmp/raid_partition_info.txt

# Determine partition table type from original
parted /dev/sdb print | grep "Partition Table"

# Create matching partition table type on new disk
# For GPT (most common on modern systems):
parted /dev/sdc mklabel gpt

# For MSDOS (legacy systems):
# parted /dev/sdc mklabel msdos

# Create partition matching original
# Get start and end sectors from /tmp/raid_partition_info.txt
# Example: parted /dev/sdc unit s mkpart primary 2048s 100%

# Set partition type to Linux RAID
parted /dev/sdc set 1 raid on

# Verify partition table
parted /dev/sdc print
lsblk /dev/sdc

# Ensure partition type is Linux RAID (should show in parted output)

Step 5: Add Disk to RAID Array

# Add new disk to RAID array
mdadm --manage /dev/md0 --add /dev/sdc1

# Monitor rebuild progress
watch cat /proc/mdstat

# Or use detailed monitoring
watch 'mdadm --detail /dev/md0'

Step 6: Verify LUKS Encryption on the Rebuilt RAID

Important: After the RAID rebuild completes, the array still carries the original LUKS header and encrypted data; encryption does not need to be recreated, but it should be verified:

# Wait for RAID rebuild to complete
# Check rebuild status
mdadm --detail /dev/md0 | grep -i "resync\|recovery"

# Once rebuild is complete, verify RAID array is healthy
cat /proc/mdstat

Critical Consideration: The LUKS encryption on the RAID partition should already be intact if the RAID array was properly rebuilt. However, if the LUKS header was corrupted or lost, you may need to restore it:

# Check LUKS header status
cryptsetup luksDump /dev/md0

# If LUKS header is intact, simply unlock:
cryptsetup open /dev/md0 md0_crypt

# If LUKS header is corrupted, restore from backup:
cryptsetup luksHeaderRestore /dev/md0 --header-backup-file /backup/md0.header

# Then unlock
cryptsetup open /dev/md0 md0_crypt

Step 7: Verify LVM2 Recognition

# Activate volume group
vgchange -ay vg0

# Verify physical volumes
pvs

# Check volume group status
vgs
lvs

Step 8: Update System Configuration

# Update /etc/crypttab if device paths changed
# Update /etc/mdadm.conf if needed (then review it for duplicate ARRAY lines)
mdadm --detail --scan >> /etc/mdadm.conf

# Update initramfs
dracut --force

Scenario 3: Failed Disk with Root Filesystem

If the failed disk contains the root filesystem, the procedure is more complex and may require booting from alternative media:

Step 1: Boot from Alternative Source

# Boot from:
# - Rescue media (USB, CD/DVD)
# - Network boot
# - Alternative boot device

# Ensure you have access to:
# - LUKS keyfiles or passphrases
# - System configuration files

Step 2: Identify and Replace Failed Disk

# Follow physical disk replacement procedures
# Identify failed disk
lsblk
dmesg | grep -i error

# Replace physical disk
# Prepare new disk (partition, etc.)

Step 3: Recreate Encryption and LVM2

# Follow encryption and LVM2 recreation procedures
# This may require:
# 1. Creating LUKS encryption
# 2. Restoring LVM2 physical volume
# 3. Restoring filesystem from backup

Step 4: Restore Root Filesystem

# If root filesystem was on failed disk:
# 1. Restore from backup to new encrypted device
# 2. Update bootloader configuration
# 3. Update initramfs
# 4. Verify boot configuration

8.2 Complexity Considerations and Mitigation Strategies

Challenges Introduced by Early Encryption:

Several challenges are introduced by implementing encryption at the partition level before LVM2. Boot dependency presents a significant challenge, as the system cannot boot if the root filesystem is on an encrypted device that cannot be unlocked. This can be mitigated by maintaining bootable rescue media with LUKS tools, using a separate unencrypted /boot partition, or implementing network-based key management for automatic unlocking.

LVM2 metadata access becomes more complex because LVM2 metadata is encrypted, requiring decryption before volume group operations can be performed. Mitigation strategies include maintaining LVM2 metadata backups, using RAID1 to maintain redundant copies of metadata, and documenting volume group structure and UUIDs for reference during recovery.
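
A minimal sketch of such a metadata backup, assuming vg0 and an example destination path, is shown below:

# Refresh the LVM2 metadata backup and copy it (with the archive history) off the system
vgcfgbackup vg0                    # writes /etc/lvm/backup/vg0
cp -a /etc/lvm/backup /etc/lvm/archive /secure/location/lvm-metadata/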

Key availability is critical during recovery operations, as LUKS keys must be available to unlock replacement devices. This can be addressed by storing keyfiles on separate encrypted devices, using network-based key management systems, and maintaining secure key backup procedures that ensure keys are accessible when needed but protected from unauthorised access.

Recovery time is extended when disk replacement is required, as the process takes longer due to the encryption and LVM2 layers that must be managed. This can be mitigated by using hot-swappable drives where possible, maintaining spare encrypted devices ready for use, and documenting and testing recovery procedures to ensure efficient execution when failures occur.

Best Practices for Disk Replacement:

Maintaining comprehensive documentation is essential for successful disk replacement operations. This documentation should include LUKS encryption parameters such as cipher, key-size, hash algorithm, and key derivation function settings. The LVM2 volume group structure should be documented, including physical volume UUIDs, logical volume configurations, and extent allocation strategies. Keyfile locations and access procedures must be documented to ensure keys are available when needed during recovery operations.

Regular backups form a critical component of disk replacement preparedness. LUKS headers should be backed up regularly, as header corruption can prevent device unlocking even with correct keys. LVM2 metadata should be backed up to enable volume group reconstruction if needed. Keyfiles must be backed up securely, ensuring they are protected from unauthorised access whilst remaining accessible for recovery. Backup restoration procedures should be tested regularly to ensure backups are valid and restoration processes are understood.

Comprehensive monitoring enables early detection of disk failures and provides visibility into storage system health. Disk health should be monitored using SMART (Self-Monitoring, Analysis and Reporting Technology) attributes to detect potential failures before they occur. RAID array status should be monitored continuously, with alerts configured for array degradation or device failures. LVM2 physical volume status should be monitored to detect missing or failed physical volumes. Alerting systems should be configured to notify administrators immediately when disk failures are detected, enabling rapid response and minimising data loss risk.
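
A hedged monitoring sketch covering these layers (device and volume names are examples) could be as simple as:

# SMART health verdict, RAID state, and LVM2 PV attributes (an 'm' in pv_attr marks a missing PV)
smartctl -H /dev/sda
mdadm --detail /dev/md0 | grep -E 'State|Failed'
pvs -o pv_name,vg_name,pv_attr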

Thorough testing of disk replacement procedures is essential before production deployment. Disk replacement procedures should be tested in non-production environments that mirror production configurations as closely as possible. Recovery from backup should be tested regularly to ensure backup validity and restoration procedures are effective. Boot procedures with encrypted storage should be tested to verify that systems can start successfully after disk replacement. Lessons learned from tests should be documented and incorporated into procedures and documentation to continuously improve recovery capabilities.

8.3 Adding New RAID Arrays

When adding new RAID arrays to an existing encrypted LVM2 configuration:

# Create new RAID array
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# Encrypt the new array
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 \
    --hash sha512 --iter-time 5000 --pbkdf argon2id /dev/md2

# Generate and add a keyfile for automatic unlocking
dd if=/dev/urandom of=/etc/luks-keys/md2.key bs=512 count=8
chmod 400 /etc/luks-keys/md2.key
cryptsetup luksAddKey /dev/md2 /etc/luks-keys/md2.key

# Update crypttab
echo "md2_crypt UUID=$(blkid -s UUID -o value /dev/md2) /etc/luks-keys/md2.key luks" >> /etc/crypttab

# Open encrypted device
cryptsetup open /dev/md2 md2_crypt

# Add to volume group
pvcreate /dev/mapper/md2_crypt
vgextend vg_encrypted /dev/mapper/md2_crypt

# Extend or create logical volumes (-r grows the filesystem together with the LV)
lvextend -r -L +200G /dev/vg_encrypted/lv_data

8.4 Backup and Recovery

LUKS Header Backup:

# Backup header before any operations
cryptsetup luksHeaderBackup /dev/md0 --header-backup-file /backup/md0.header

# Verify header backup
cryptsetup luksDump /dev/md0 > /backup/md0.luksdump

Data Backup:

Data backup should utilise standard backup tools on mounted filesystems, ensuring that data is backed up in decrypted form for easier restoration. Snapshot-based backup strategies with LVM2 snapshots can provide efficient point-in-time backups without requiring filesystem unmounting. If backing up encrypted devices directly, backup systems must have access to decryption keys, which requires careful key management and secure key distribution to backup systems.
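
A minimal snapshot-based backup sketch for one logical volume (sizes, mount points, and the nouuid option for an XFS origin are assumptions) follows:

# Create a snapshot, archive its contents, then discard it
lvcreate -L 10G -s -n lv_data_snap /dev/vg_encrypted/lv_data
mkdir -p /mnt/snap
mount -o ro,nouuid /dev/vg_encrypted/lv_data_snap /mnt/snap   # nouuid avoids the duplicate-UUID check on XFS
tar -C /mnt/snap -czf /backup/lv_data-$(date +%F).tar.gz .
umount /mnt/snap
lvremove -y /dev/vg_encrypted/lv_data_snap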

Recovery Procedures:

Recovery procedures must be comprehensive and well-documented. Key recovery procedures should be documented in detail, including steps for accessing keys, unlocking devices, and restoring data. Recovery procedures should be tested regularly in non-production environments to ensure they are effective and can be executed efficiently during actual failures. Secure key storage must be maintained separate from encrypted data, ensuring that key loss does not result in complete data loss, whilst also protecting keys from unauthorised access.

8.5 Monitoring

Device Status:

# Check encrypted device status
cryptsetup status md0_crypt

# Monitor LUKS device health
systemctl status systemd-cryptsetup@md0_crypt.service

# Check LVM2 status
pvs
vgs
lvs

Performance Monitoring:

# Monitor I/O performance
iostat -x 1

# Monitor encryption CPU usage (dm-crypt work runs in kcryptd kernel threads, not a cryptsetup process)
top -b -n 1 | grep -i kcryptd

9. Best Practices

9.1 Security Best Practices

  1. Mandatory CPU Memory Encryption: Only implement LUKS on systems with Intel TME or AMD SME support
  2. Strong Key Derivation: Use Argon2id with appropriate iteration time
  3. Key Management: Store keyfiles securely, implement key rotation procedures
  4. Header Backup: Regularly backup LUKS headers to secure locations
  5. Access Control: Limit administrative access to encryption systems
  6. Audit Logging: Enable comprehensive audit logging for encryption operations

9.2 Operational Best Practices

  1. Documentation: Maintain detailed documentation of encryption configuration
  2. Testing: Test all procedures in non-production environments first
  3. Monitoring: Implement monitoring for encrypted device status and performance
  4. Backup: Regular backup of both data and LUKS headers
  5. Recovery Procedures: Document and test recovery procedures regularly

9.3 Performance Best Practices

  1. Hardware Acceleration: Ensure CPU supports AES-NI for optimal performance
  2. Alignment: Maintain proper alignment across all storage layers
  3. Benchmarking: Benchmark performance before and after encryption implementation
  4. Workload Analysis: Understand I/O patterns and adjust configuration accordingly

9.4 Configuration Recommendations

For High Security:

High security configurations should use Argon2id with longer iteration times, typically 10 seconds or more, to increase the computational cost of key derivation and make brute-force attacks more difficult. Multiple key slots should be implemented with different passphrases to provide redundancy and enable key rotation without data loss. Hardware security modules should be used for key storage in high-security environments, providing tamper-resistant hardware-based key protection. Comprehensive audit logging should be enabled to provide detailed records of all encryption-related operations for security monitoring and compliance purposes.
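
A minimal sketch of such a configuration follows, assuming a new, empty array referred to here as /dev/mdX; the 10-second iteration time reflects the recommendation above.

# Format with Argon2id and a 10 s key-derivation benchmark time
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 \
    --hash sha512 --pbkdf argon2id --iter-time 10000 /dev/mdX

# Add a second key slot with a different passphrase (supports rotation without data loss)
cryptsetup luksAddKey /dev/mdX

# Review key slots and PBKDF parameters
cryptsetup luksDump /dev/mdX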

For High Performance:

High performance configurations should use AES-XTS with hardware acceleration through AES-NI instructions to minimise encryption overhead. Key derivation iteration time should be optimised to 3-5 seconds, balancing security with performance requirements. Proper alignment must be ensured across all storage layers, including RAID chunk size, LVM2 extent size, and filesystem alignment, to prevent performance degradation from misalignment. For systems with high encryption workload, dedicated CPU cores can be considered for encryption operations to isolate encryption processing from other system workloads.
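
The sketch below shows how hardware acceleration and key-derivation cost can be checked and tuned on a candidate system; /dev/mdX is again an illustrative placeholder.

# Confirm AES-NI is available and compare cipher throughput
grep -m1 -o aes /proc/cpuinfo
cryptsetup benchmark

# Format with a 3 s iteration time to favour unlock latency over brute-force cost
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 \
    --pbkdf argon2id --iter-time 3000 /dev/mdX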


Benefits of Using LVM2 Over mdadm RAID Arrays

Abstract

This study presents a comparative analysis of storage management approaches using Logical Volume Manager 2 (LVM2) layered over Multiple Device Administration (mdadm) RAID arrays versus direct use of mdadm RAID arrays in Linux systems. The study examines configurations utilising LVM2 to stripe across multiple independent RAID1 and RAID6 arrays created with mdadm, comparing this approach to traditional nested RAID configurations and direct RAID array usage. Through practical deployment experience spanning 2016-2025 across multiple high-performance computing and enterprise environments, we demonstrate that LVM2 over mdadm RAID provides significant operational benefits including enhanced flexibility, improved manageability, simplified capacity expansion, and equivalent or superior performance characteristics. The analysis covers RAID1 and RAID6 configurations, examining performance implications, operational workflows, and practical deployment scenarios. Results indicate that LVM2 over mdadm RAID offers substantial advantages for large-scale storage deployments requiring flexibility, incremental expansion, and independent array management whilst maintaining the redundancy and performance characteristics of traditional RAID configurations.

Keywords: LVM2, mdadm, software RAID, storage management, logical volumes, RAID1, RAID6

1. Introduction

Storage management in Linux systems frequently requires balancing performance, redundancy, and operational flexibility. Traditional approaches utilise either direct RAID array management through mdadm or nested RAID configurations to achieve desired redundancy and performance characteristics. However, an alternative approach utilising LVM2 to manage and stripe across multiple independent mdadm RAID arrays has demonstrated significant operational advantages in production environments.

This paper examines the benefits of using LVM2 over mdadm RAID1 and RAID6 arrays, focusing on practical deployment scenarios, operational workflows, and performance characteristics. The analysis is based on extensive practical experience deploying and managing storage systems across academic research computing, enterprise production environments, and high-performance computing infrastructure.

---

2. Materials and Methods

2.1 Configuration Approaches

Traditional Approach: Direct mdadm RAID

Direct use of mdadm RAID arrays involves creating RAID arrays and utilising them directly for filesystem creation:

# Traditional RAID10 (nested)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# Direct RAID6
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

LVM2 Over mdadm RAID Approach

LVM2 layered over multiple independent mdadm RAID arrays:

# Create multiple independent RAID1 arrays
mdadm --create /dev/md0 --level=1 --raid-devices=2 --chunk=256 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 --chunk=256 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md2 --level=1 --raid-devices=2 --chunk=256 /dev/sde1 /dev/sdf1
mdadm --create /dev/md3 --level=1 --raid-devices=2 --chunk=256 /dev/sdg1 /dev/sdh1

# Create physical volumes with data alignment
pvcreate --dataalignment 1M /dev/md0 /dev/md1 /dev/md2 /dev/md3

# Create volume group with extent size matching stripe width
vgcreate --physicalextentsize 1M vg_raid10_like /dev/md0 /dev/md1 /dev/md2 /dev/md3

# Create striped logical volume
lvcreate -i 4 -I 256K -L 500G -n lv_raid10_like vg_raid10_like

2.2 Test Environments

The analysis presented in this paper is based on extensive practical deployment experience gained across multiple production environments spanning academic research computing, enterprise storage systems, and high-performance computing infrastructure. The observations and conclusions are derived from hands-on experience with storage system deployments at OpenIntegra PLC, where enterprise storage solutions with mixed workload requirements were implemented and maintained. Additional experience was gained through academic computing infrastructure deployments at Sofia University "St. Kliment Ohridski", where research storage systems supporting various scientific computing workloads were configured and optimised.

High-performance computing storage infrastructure deployments at Technion – Israel Institute of Technology provided experience with large-scale storage systems requiring maximum performance and reliability. Computational biology and simulation storage systems at the Warshel Center for Multiscale Simulations at University of Southern California offered insights into specialised workload requirements and performance optimisation strategies. The UNITe Project at Sofia University provided experience with high-performance computing cluster storage infrastructure, whilst the Discoverer Petascale Supercomputer deployments, including both CPU and GPU partition storage systems, offered experience with petascale computing storage requirements.

2.3 Evaluation Criteria

The comparative analysis evaluates multiple aspects of storage system deployment and operation. Operational flexibility encompasses the ability to manage arrays independently, add capacity incrementally without major reconfiguration, and replace components with minimal disruption. Performance characteristics are assessed through measurements of throughput in megabytes per second, input/output operations per second (IOPS), and latency in milliseconds under various workload patterns including sequential and random access patterns.

Management complexity evaluation considers the ease of administration, monitoring, and troubleshooting procedures. This includes the simplicity of common operations such as capacity expansion, drive replacement, and array health monitoring. Capacity expansion procedures are examined for their complexity, required downtime, and impact on system performance during expansion operations.

Failure recovery procedures are evaluated for their simplicity, required manual intervention, and impact on system availability. The analysis also considers configuration alignment requirements, examining how proper alignment of RAID chunk size, LVM2 physical extent size, LVM2 stripe size, and filesystem parameters affects overall system performance and operational efficiency.

---

3. Results

3.1 Operational Flexibility

The use of LVM2 over mdadm RAID arrays provides significant operational flexibility through independent management of each underlying RAID array. This independence enables system administrators to perform maintenance operations, drive replacements, and capacity expansions on individual arrays without affecting the operation of other arrays in the configuration. When a drive fails in one RAID array, the replacement procedure can be executed independently, allowing the failed drive to be removed and a replacement drive to be added to that specific array whilst all other arrays continue normal operation.
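
As an illustration, the sketch below replaces a failed member of /dev/md1 from the configuration in Section 2.1 while the other arrays continue serving I/O; the member device names are placeholders.

# Mark the failed member and remove it from the affected array only
mdadm --manage /dev/md1 --fail /dev/sdd1
mdadm --manage /dev/md1 --remove /dev/sdd1

# Replicate the partition layout from the surviving member onto the replacement disk
sfdisk -d /dev/sdc | sfdisk /dev/sdd

# Add the replacement partition to the same array
mdadm --manage /dev/md1 --add /dev/sdd1

# Resynchronisation runs on /dev/md1 only; the other arrays are unaffected
cat /proc/mdstat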

Each RAID array can be monitored and managed separately using standard mdadm tools, providing granular visibility into the health and performance of individual arrays. This granular monitoring capability enables more precise troubleshooting, as issues can be isolated to specific arrays rather than requiring investigation of an entire nested RAID structure. Maintenance operations such as array health checks, drive replacement, or array resynchronisation can be performed on one array without impacting others, reducing the operational complexity and risk associated with storage system maintenance.

The independent nature of arrays also enables NUMA (Non-Uniform Memory Access) optimisation, where arrays can be strategically placed on specific NUMA nodes to optimise memory access patterns and reduce cross-NUMA-node memory access latency. This capability is particularly valuable in multi-socket systems where memory access locality significantly impacts performance.

In contrast, traditional nested RAID configurations require coordinated management across the entire structure. Issues with one component, such as a drive failure in one mirror of a nested RAID10 configuration, affect the entire nested structure. The aggregate view provided by traditional nested RAID configurations offers less granular insight into individual components, making it more difficult to identify and address specific issues within the storage hierarchy.

3.2 Capacity Expansion

Capacity expansion with LVM2 over mdadm RAID is straightforward and can be performed non-disruptively whilst the system remains operational. The expansion process begins with creating a new independent RAID array using mdadm, which can be configured with the same chunk size and device specifications as existing arrays to maintain performance consistency. Once the new array is created and synchronised, it is added to the existing volume group using the vgextend command, which makes the new array's capacity available to the volume group without requiring any downtime or service interruption.

The logical volume can then be extended to utilise the newly added array capacity. When extending a striped logical volume, the stripe count for the new extents is specified so that they are distributed across the desired arrays; LVM2 appends the additional space as a new striped segment rather than restriping existing data, so previously written extents remain on the original arrays. The filesystem residing on the logical volume can then be resized online using filesystem-specific tools such as resize2fs for ext4 or xfs_growfs for XFS, both of which support online resizing without unmounting the filesystem. A sketch of this workflow follows below.
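
The following is a minimal sketch of this expansion, reusing the vg_raid10_like configuration from Section 2.1; the new array /dev/md4, its member devices, the size increment, and the /mnt/data mount point are illustrative placeholders.

# Create the new RAID1 array and let it synchronise
mdadm --create /dev/md4 --level=1 --raid-devices=2 --chunk=256 /dev/sdi1 /dev/sdj1

# Add the new array to the existing volume group
pvcreate --dataalignment 1M /dev/md4
vgextend vg_raid10_like /dev/md4

# Extend the logical volume onto the new array; the added extents form a new
# segment (-i 1 here), and existing data is not restriped across the new array
lvextend -i 1 -L +1T vg_raid10_like/lv_raid10_like /dev/md4

# Grow the filesystem online
xfs_growfs /mnt/data                                # XFS (takes the mount point)
# resize2fs /dev/vg_raid10_like/lv_raid10_like      # ext4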

This approach enables incremental capacity expansion, allowing administrators to add capacity in small increments corresponding to single RAID array pairs rather than requiring large-scale reconfiguration. The expansion process is non-disruptive, meaning that applications and services can continue operating normally throughout the expansion procedure. The flexible sizing capability allows administrators to add only the capacity needed at the time of expansion, avoiding the need to over-provision initial capacity and providing better cost optimisation compared to approaches requiring predetermined array sizes.

Traditional RAID10 or RAID60 configurations typically require predetermined array sizes established during initial configuration. Adding capacity to such configurations often requires rebuilding the entire array structure, which can be a complex and time-consuming process. These expansion operations may require system downtime or result in significant performance impact during the expansion process, making capacity planning more critical and expansion procedures more disruptive to system operations.

3.3 Performance Characteristics

3.3.1 Understanding RAID Chunk Size and Stripe Size

To understand the performance characteristics of LVM2 over mdadm RAID, it is essential to comprehend how RAID chunk size and stripe size function in data distribution across multiple storage devices. The chunk size, also referred to as stripe unit size, represents the amount of contiguous data written to a single disk in the array before the RAID system moves to the next disk to continue writing data. This parameter determines the granularity at which data is distributed across the physical storage devices.

When data is written to a RAID array, the system divides the incoming data stream into chunks of the specified chunk size. The first chunk is written to the first disk in the array, the second chunk is written to the second disk, and this round-robin pattern continues across all disks in the array. Once each disk has received one chunk, a complete stripe has been written. The stripe size, which is the total amount of data written across all disks before the pattern repeats, is calculated by multiplying the chunk size by the number of data disks in the array.

For example, in a configuration with four RAID1 arrays used as the basis for LVM2 striping, each RAID1 array has a chunk size of 256 KB. When LVM2 stripes data across these four arrays, it distributes data in chunks matching the RAID chunk size. The effective stripe width from the perspective of the logical volume is 256 KB × 4 = 1 MB, meaning that 1 MB of data is distributed across the four underlying RAID arrays before the pattern repeats.

The chunk size selection is critical for performance optimisation. Smaller chunk sizes provide finer granularity but may increase overhead from metadata and coordination between disks. Larger chunk sizes reduce overhead but may result in less efficient utilisation when I/O operations are smaller than the chunk size. For most workloads, chunk sizes between 256 KB and 1 MB provide optimal balance between granularity and efficiency.

3.3.2 Performance Equivalence

LVM2 over mdadm RAID achieves equivalent performance to traditional nested RAID configurations when properly aligned and configured. For RAID1 arrays with LVM2 striping, which mimics traditional RAID10 functionality, sequential read performance is equivalent to traditional RAID10 because data can be read in parallel from all arrays, aggregating the bandwidth from each individual array. Sequential write performance is also equivalent, as data is striped across arrays in the same manner as traditional RAID10, with each write operation distributed across multiple arrays simultaneously.

Random I/O performance demonstrates equivalence to traditional RAID10 because the LVM2 striping layer enables parallel access to multiple arrays, allowing random I/O operations to be serviced by different arrays concurrently. This parallel access capability provides the same performance benefits as traditional RAID10 striping. Latency measurements show no significant difference when the configuration is properly aligned, as the overhead introduced by the LVM2 layer is minimal compared to the disk I/O latency.

For RAID6 arrays with LVM2 striping, which mimics traditional RAID60 functionality, the performance characteristics mirror those of traditional RAID60. Sequential read performance benefits from striping across multiple RAID6 arrays, whilst sequential write performance experiences the same parity calculation overhead as traditional RAID60. Random I/O performance is equivalent, and the write penalty associated with parity-based RAID levels remains the same, as each underlying RAID6 array must perform parity calculations independently.

3.3.3 Alignment Requirements and Performance Impact

Proper alignment across all storage layers is critical for achieving optimal performance. The physical extent size in LVM2 should equal the effective stripe width, which is calculated as the RAID chunk size multiplied by the number of arrays. This alignment ensures that LVM2's allocation units align with the underlying RAID stripe boundaries, preventing I/O operations from spanning multiple stripes unnecessarily.

The LVM2 stripe size, specified when creating striped logical volumes, should match the RAID chunk size to ensure that LVM2's striping granularity aligns with the RAID array's data distribution pattern. When creating physical volumes on RAID arrays, the data area should be aligned to stripe width boundaries using the --dataalignment parameter, ensuring that the LVM2 metadata and data areas start at offsets that align with RAID stripe boundaries.

Filesystem alignment must also be considered, with filesystem parameters such as XFS stripe unit and stripe width configured to align with the LVM2 stripe geometry. This multi-layer alignment ensures that I/O operations flow efficiently through the entire storage stack without causing read-modify-write cycles or other inefficiencies.

Performance measurements demonstrate that properly aligned configurations achieve 20-50% performance improvement compared to misaligned configurations. The use of optimal extent size, calculated to match the effective stripe width, provides an additional 10-15% performance improvement over default extent size settings. Complete stack alignment, where RAID chunk size, LVM2 extent size, LVM2 stripe size, and filesystem alignment are all properly coordinated, delivers maximum performance with optimal I/O efficiency.
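
To quantify alignment effects on a particular system, a simple benchmark can be run on the mounted logical volume before and after configuration changes. The sketch below assumes the fio tool and a test file under an illustrative /mnt/storage mount point; it is not the source of the figures quoted above.

# Sequential write throughput, 1 MB blocks matching the full stripe width
fio --name=seq-write --filename=/mnt/storage/fio.test --size=8G \
    --rw=write --bs=1M --direct=1 --ioengine=libaio --iodepth=16 --numjobs=1

# Random read IOPS, 4 KB blocks
fio --name=rand-read --filename=/mnt/storage/fio.test --size=8G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
    --group_reporting

# Remove the test file afterwards
rm -f /mnt/storage/fio.test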

3.3.4 Performance Optimisation Through /proc Parameters

The Linux kernel provides several parameters in the /proc/sys/dev/raid/ directory that can be tuned to optimise RAID array performance, particularly during synchronisation and rebuild operations. These parameters control the speed at which RAID arrays perform background operations such as resynchronisation after drive replacement or array creation.

The /proc/sys/dev/raid/speed_limit_min parameter sets the minimum speed in kilobytes per second for RAID synchronisation operations. The default value is typically 1,000 KB/s, which may be conservative for modern storage systems. Increasing this value to 50,000 KB/s or higher can accelerate synchronisation operations, reducing the time required for array rebuilds and resynchronisation. However, higher values increase CPU and I/O subsystem load, which may impact other system operations.

The /proc/sys/dev/raid/speed_limit_max parameter sets the maximum speed in kilobytes per second for RAID synchronisation. The default value is typically 200,000 KB/s, which may be insufficient for high-performance storage systems with fast SSDs or NVMe devices. Increasing this value to 500,000 KB/s or higher allows the RAID subsystem to utilise more of the available I/O bandwidth during synchronisation operations, significantly reducing rebuild times for large arrays.

For RAID5 and RAID6 arrays, the stripe cache size can be increased to improve write performance. The stripe cache is located at /sys/block/mdX/md/stripe_cache_size and controls the amount of memory used for caching stripe data during write operations. Increasing the stripe cache size from the default value to 4,096 pages (16 MB) or higher can improve write performance, particularly for sequential write workloads. The optimal value depends on available system memory and workload characteristics, with larger values providing better performance at the cost of increased memory usage.

These parameters can be set temporarily by writing values directly to the /proc or /sys files, but such changes are lost upon system reboot. To make changes persistent and manageable, they should be configured using the sysctl tool and stored in configuration files within the /etc/sysctl.d/ directory. This approach provides better organisation, allows changes to be applied at any time without rebooting, and follows modern Linux system configuration practices.

Example: Configuring RAID Performance Parameters with sysctl

The following example demonstrates how to configure RAID synchronisation speed limits using the sysctl tool and store the configuration in /etc/sysctl.d/10-raid.conf for persistent application across system reboots.

# Step 1: Set parameters temporarily to test values
# Increase minimum synchronisation speed to 50,000 KB/s
sysctl -w dev.raid.speed_limit_min=50000

# Increase maximum synchronisation speed to 500,000 KB/s
sysctl -w dev.raid.speed_limit_max=500000

# Step 2: Verify the current values
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

# Step 3: Monitor synchronisation speed to ensure values are appropriate
cat /proc/mdstat

# Step 4: Create persistent configuration file
cat > /etc/sysctl.d/10-raid.conf << 'EOF'
# RAID synchronisation speed limits
# Minimum speed: 50,000 KB/s (50 MB/s)
# Maximum speed: 500,000 KB/s (500 MB/s)
# Adjust these values based on system resources and workload requirements

dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 500000
EOF

# Step 5: Apply configuration from file immediately
# This loads all files from /etc/sysctl.d/ and applies them
systemctl restart systemd-sysctl

# Alternative: Apply specific file
sysctl -p /etc/sysctl.d/10-raid.conf

# Step 6: Verify that values were applied correctly
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

# Step 7: Verify that values persist in /proc
[[ -f /proc/sys/dev/raid/speed_limit_min ]] && cat /proc/sys/dev/raid/speed_limit_min
[[ -f /proc/sys/dev/raid/speed_limit_max ]] && cat /proc/sys/dev/raid/speed_limit_max

Example: Complete RAID Performance Tuning Configuration

This example provides a comprehensive configuration file that includes all RAID performance parameters, with comments explaining each setting and recommendations for different system types.

# Create comprehensive RAID performance tuning configuration
cat > /etc/sysctl.d/10-raid.conf << 'EOF'
# RAID Performance Tuning Configuration
# File: /etc/sysctl.d/10-raid.conf
# 
# This configuration optimises RAID array synchronisation and rebuild speeds.
# Adjust values based on:
# - Available CPU resources
# - I/O subsystem capabilities
# - Workload requirements
# - System responsiveness needs
#
# For high-performance systems with fast SSDs/NVMe:
# - Increase speed_limit_min to 50,000-100,000 KB/s
# - Increase speed_limit_max to 500,000-1,000,000 KB/s
#
# For systems with mixed workloads:
# - Use moderate values to balance rebuild speed and system responsiveness
# - speed_limit_min: 20,000-50,000 KB/s
# - speed_limit_max: 200,000-500,000 KB/s
#
# For systems with limited resources or where rebuild speed is not critical:
# - Use conservative default values or lower
# - speed_limit_min: 1,000-10,000 KB/s
# - speed_limit_max: 100,000-200,000 KB/s

# Minimum RAID synchronisation speed (KB/s)
# Default: 1,000 KB/s
# Recommended for high-performance systems: 50,000 KB/s
dev.raid.speed_limit_min = 50000

# Maximum RAID synchronisation speed (KB/s)
# Default: 200,000 KB/s
# Recommended for high-performance systems: 500,000 KB/s
dev.raid.speed_limit_max = 500000
EOF

# Apply configuration immediately
systemctl restart systemd-sysctl

# Verify configuration
sysctl -a | grep dev.raid

Example: Configuring Stripe Cache Size for RAID5/RAID6 Arrays

The stripe cache size parameter is located in /sys/block/mdX/md/stripe_cache_size and cannot be set directly through sysctl, as it is a /sys parameter rather than a /proc/sys parameter. However, it can be configured persistently using a systemd service or a script that runs after arrays are assembled. The following example demonstrates both approaches.

# Method 1: Using systemd service for stripe cache configuration
# This service runs after mdadm arrays are assembled

cat > /etc/systemd/system/raid-stripe-cache.service << 'EOF'
[Unit]
Description=Configure RAID Stripe Cache Size
After=mdmonitor.service
Requires=mdmonitor.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/configure-raid-stripe-cache.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

# Create the configuration script
cat > /usr/local/bin/configure-raid-stripe-cache.sh << 'EOF'
#!/bin/bash
# Configure stripe cache size for all RAID5 and RAID6 arrays
# Recommended values:
# - Small arrays (< 10 disks): 4096 pages (16 MB)
# - Medium arrays (10-20 disks): 8192 pages (32 MB)
# - Large arrays (> 20 disks): 16384 pages (64 MB)

STRIPE_CACHE_SIZE=4096  # 16 MB for typical configurations

# Find all RAID5 and RAID6 arrays
for md_device in /sys/block/md*/md/level; do
    if [ -f "$md_device" ]; then
        level=$(cat "$md_device")
        if [[ "$level" == "raid5" ]] || [[ "$level" == "raid6" ]]; then
            md_name=$(basename $(dirname $(dirname "$md_device")))
            stripe_cache_file="/sys/block/$md_name/md/stripe_cache_size"
            if [ -f "$stripe_cache_file" ]; then
                echo "$STRIPE_CACHE_SIZE" > "$stripe_cache_file"
                echo "Configured stripe cache for $md_name: $STRIPE_CACHE_SIZE pages"
            fi
        fi
    fi
done
EOF

chmod +x /usr/local/bin/configure-raid-stripe-cache.sh

# Enable and start the service
systemctl daemon-reload
systemctl enable raid-stripe-cache.service
systemctl start raid-stripe-cache.service

# Verify stripe cache sizes
for md in /sys/block/md*/md/stripe_cache_size; do
    if [ -f "$md" ]; then
        md_name=$(basename $(dirname $(dirname "$md")))
        cache_size=$(cat "$md")
        echo "$md_name: $cache_size pages ($((cache_size * 4)) KB)"
    fi
done

Example: Applying Configuration Changes at Runtime

One of the key advantages of using /etc/sysctl.d/ configuration files is that changes can be applied immediately without requiring a system reboot. The following example demonstrates how to modify RAID performance parameters and apply them immediately.

# Step 1: Modify the configuration file
# Edit /etc/sysctl.d/10-raid.conf to change values
# For example, increase speeds for faster rebuilds during maintenance window

# Step 2: Apply changes immediately
systemctl restart systemd-sysctl

# Step 3: Verify new values are active
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

# Step 4: Monitor RAID synchronisation to observe impact
watch -n 1 'cat /proc/mdstat'

# Step 5: After maintenance, restore to normal values
# Edit /etc/sysctl.d/10-raid.conf again with normal values
systemctl restart systemd-sysctl

Example: Complete RAID Performance Tuning Workflow

This example demonstrates a complete workflow for configuring and managing RAID performance parameters, including verification and monitoring procedures.

# Complete RAID Performance Tuning Workflow
# =========================================

# 1. Check current RAID array status
cat /proc/mdstat

# 2. Check current performance parameter values
echo "Speed Limit Min: $(cat /proc/sys/dev/raid/speed_limit_min) KB/s"
echo "Speed Limit Max: $(cat /proc/sys/dev/raid/speed_limit_max) KB/s"

# 3. Test temporary values before making permanent changes
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=500000

# 4. Monitor impact for a few minutes
watch -n 2 'cat /proc/mdstat'

# 5. Create persistent configuration
# (unquoted heredoc delimiter so that $(date) and $(hostname) expand in the comments)
cat > /etc/sysctl.d/10-raid.conf << EOF
# RAID Performance Tuning
# Applied: $(date)
# System: $(hostname)

dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 500000
EOF

# 6. Apply configuration
systemctl restart systemd-sysctl

# 7. Verify configuration is active
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

# 8. Verify values in /proc
echo "Speed Limit Min: $(cat /proc/sys/dev/raid/speed_limit_min) KB/s"
echo "Speed Limit Max: $(cat /proc/sys/dev/raid/speed_limit_max) KB/s"

# 9. Check all sysctl RAID parameters
sysctl -a | grep dev.raid

# 10. Monitor RAID synchronisation with new settings
watch -n 2 'cat /proc/mdstat'

Important Notes on sysctl Configuration:

The systemctl restart systemd-sysctl command reloads all configuration files from /etc/sysctl.d/, /run/sysctl.d/, and /usr/lib/sysctl.d/, applying them to the running system immediately. This allows configuration changes to take effect without requiring a system reboot. The configuration files are processed in lexicographic order, with files in /etc/sysctl.d/ taking precedence over those in /usr/lib/sysctl.d/.

For /sys parameters such as stripe cache size, which cannot be managed through sysctl, systemd services or scripts must be used. These services should be configured to run after the mdmonitor.service to ensure that RAID arrays are assembled before attempting to configure their parameters.

Monitoring the impact of performance tuning is essential, as overly aggressive settings can impact system responsiveness. The /proc/mdstat file provides real-time information about RAID array status and synchronisation progress, allowing administrators to observe the effects of tuning adjustments and verify that synchronisation operations are proceeding at the desired speeds.

The stripe cache size parameter, located at /sys/block/mdX/md/stripe_cache_size, is particularly important for RAID5 and RAID6 arrays. This parameter controls the amount of memory allocated for caching stripe data during write operations. The default value is typically 256 pages (1 MB), but increasing this to 4,096 pages (16 MB) or higher can significantly improve write performance for sequential workloads. The optimal value depends on available system memory, workload characteristics, and the number of arrays in the system. Larger values provide better write performance but consume more system memory, requiring careful consideration of memory availability and other system requirements.

To make these performance tuning parameters persistent across system reboots, they should be configured through system startup mechanisms. For /proc/sys/dev/raid/ parameters, the dev.raid.speed_limit_min and dev.raid.speed_limit_max keys should be placed in a file under /etc/sysctl.d/, such as /etc/sysctl.d/10-raid.conf, as shown in the examples above. After adding these values, running sysctl -p /etc/sysctl.d/10-raid.conf or systemctl restart systemd-sysctl applies the changes immediately, and they are reapplied automatically on subsequent reboots. For stripe cache size and other /sys parameters, systemd service files or startup scripts can be created to set these values after RAID arrays are assembled during system boot.

3.3.5 CPU Core and Thread Optimisation for RAID Operations

Effective utilisation of CPU cores and threads is critical for optimising RAID reconstruction, write, and read operations. Modern processors with multiple cores and simultaneous multithreading (SMT) capabilities, such as the AMD Ryzen 5 5600H with 6 cores and 12 threads, provide significant opportunities for performance optimisation through CPU affinity configuration and core pinning.

Understanding CPU Topology

Before configuring CPU affinity for RAID operations, it is essential to understand the processor topology. The AMD Ryzen 5 5600H features 6 physical cores with simultaneous multithreading (SMT), providing 12 logical processors (threads). Each physical core can execute two threads simultaneously, sharing the core's execution resources.

# Examine CPU topology
lscpu

# Example output for AMD Ryzen 5 5600H:
# Architecture:            x86_64
# CPU op-mode(s):          32-bit, 64-bit
# CPU(s):                  12
# On-line CPU(s) list:     0-11
# Thread(s) per core:      2
# Core(s) per socket:      6
# Socket(s):               1
# NUMA node(s):            1
# Model name:              AMD Ryzen 5 5600H with Radeon Graphics

# Detailed CPU topology
lscpu -p

# View CPU topology in tree format (dnf install hwloc-gui)
lstopo --of txt
# OR
grep -E "processor|physical id|core id" /proc/cpuinfo

CPU Core Mapping for AMD Ryzen 5 5600H

For the AMD Ryzen 5 5600H with 6 cores and 12 threads, the typical mapping is:

  • Physical cores: 0, 1, 2, 3, 4, 5
  • Logical processors (threads): 0-11
  • Core 0: threads 0, 6
  • Core 1: threads 1, 7
  • Core 2: threads 2, 8
  • Core 3: threads 3, 9
  • Core 4: threads 4, 10
  • Core 5: threads 5, 11

Example: Identifying CPU Topology and Core Mapping

# Complete CPU topology analysis
# CPU Information
lscpu

# CPU Core Mapping
for cpu in {0..11}; do
    core_id=$(cat /sys/devices/system/cpu/cpu${cpu}/topology/core_id)
    physical_package=$(cat /sys/devices/system/cpu/cpu${cpu}/topology/physical_package_id)
    echo "CPU $cpu: Core $core_id, Package $physical_package"
done

# NUMA Topology
numactl --hardware

# Current CPU affinity of MD kernel threads (only meaningful when software RAID arrays exist)
ps -eLo pid,tid,psr,comm | grep -E "md.*_raid|md.*_resync" | while read pid tid psr comm; do
    echo "Thread $tid ($comm): Running on CPU $psr"
    taskset -p $tid 2>/dev/null | sed "s/^/  Affinity: /"
done

Configuring CPU Affinity for RAID Operations

RAID operations benefit from dedicated CPU cores to avoid contention with other system processes. For the AMD Ryzen 5 5600H, a recommended approach is to dedicate 2-4 physical cores (4-8 threads) for RAID operations, leaving the remaining cores for the operating system and applications.

Example: Pinning RAID Operations to Specific Cores

This example demonstrates how to configure CPU affinity for RAID operations on an AMD Ryzen 5 5600H system, dedicating cores 4 and 5 (threads 4, 5, 10, 11) for RAID operations.

# Step 1: Identify MD kernel threads
echo "=== Identifying MD Kernel Threads ==="
ps -eLo pid,tid,class,rtprio,ni,pri,psr,pcpu,stat,wchan:14,comm | grep -E "md.*_raid|md.*_resync"

# Step 2: Create a cpuset for RAID operations
# Dedicate cores 4 and 5 (threads 4, 5, 10, 11) for RAID
mkdir -p /sys/fs/cgroup/cpuset/raid_cpuset

# Assign CPU cores 4, 5, 10, 11 to the cpuset
# Note: Using physical cores 4 and 5, which include threads 4,5 and 10,11
echo 4,5,10,11 > /sys/fs/cgroup/cpuset/raid_cpuset/cpuset.cpus

# Assign memory node (single NUMA node for Ryzen 5 5600H)
echo 0 > /sys/fs/cgroup/cpuset/raid_cpuset/cpuset.mems

# Make the cpuset exclusive (prevent other processes from using these cores)
echo 1 > /sys/fs/cgroup/cpuset/raid_cpuset/cpuset.cpu_exclusive

# Step 3: Move MD kernel threads to the cpuset
for tid in $(ps -eLo tid,comm | grep -E "md.*_raid|md.*_resync" | awk '{print $1}'); do
    echo $tid > /sys/fs/cgroup/cpuset/raid_cpuset/tasks 2>/dev/null
    echo "Moved thread $tid to RAID cpuset"
done

# Step 4: Verify CPU affinity
echo -e "\n=== Verifying CPU Affinity ==="
for tid in $(ps -eLo tid,comm | grep -E "md.*_raid|md.*_resync" | awk '{print $1}'); do
    echo "Thread $tid:"
    taskset -p $tid
done

# Step 5: Monitor CPU usage
mpstat -P 4,5,10,11 1 5

Example: Using taskset for Direct CPU Affinity

An alternative approach uses taskset to directly set CPU affinity for MD kernel threads:

# Find all MD-related kernel threads
MD_THREADS=$(ps -eLo tid,comm | grep -E "md.*_raid|md.*_resync" | awk '{print $1}')

# Pin each thread to cores 4, 5, 10, 11
for tid in $MD_THREADS; do
    taskset -cp 4,5,10,11 $tid
    echo "Set affinity for thread $tid to cores 4,5,10,11"
done

# Verify affinity
for tid in $MD_THREADS; do
    echo "Thread $tid affinity:"
    taskset -p $tid
done

Example: Persistent Configuration with systemd Service

Cpuset configurations are not persistent across reboots. The cpuset filesystem is recreated at each boot, and all cpuset directories and their configurations must be recreated. To make CPU affinity configuration persistent across reboots, create a systemd service that recreates the cpuset and configures CPU affinity after RAID arrays are assembled:

# Create systemd service for RAID CPU affinity
cat > /etc/systemd/system/raid-cpu-affinity.service << 'EOF'
[Unit]
Description=Configure CPU Affinity for RAID Operations
After=mdmonitor.service
Requires=mdmonitor.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/configure-raid-cpu-affinity.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

# Create the configuration script
cat > /usr/local/bin/configure-raid-cpu-affinity.sh << 'EOF'
#!/bin/bash
# Configure CPU affinity for RAID operations on AMD Ryzen 5 5600H
# Dedicates cores 4,5 (threads 4,5,10,11) for RAID operations

# Create cpuset (recreated at each boot as cpusets are not persistent)
mkdir -p /sys/fs/cgroup/cpuset/raid_cpuset
echo 4,5,10,11 > /sys/fs/cgroup/cpuset/raid_cpuset/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/raid_cpuset/cpuset.mems
echo 1 > /sys/fs/cgroup/cpuset/raid_cpuset/cpuset.cpu_exclusive

# Wait for MD arrays to be assembled
sleep 2

# Move all MD kernel threads to the cpuset
for tid in $(ps -eLo tid,comm 2>/dev/null | grep -E "md.*_raid|md.*_resync" | awk '{print $1}'); do
    if [ -n "$tid" ]; then
        echo $tid > /sys/fs/cgroup/cpuset/raid_cpuset/tasks 2>/dev/null
    fi
done

# Also move mdadm monitor process if running
MDADM_PID=$(pgrep -f "mdadm.*monitor" 2>/dev/null)
if [ -n "$MDADM_PID" ]; then
    echo $MDADM_PID > /sys/fs/cgroup/cpuset/raid_cpuset/tasks 2>/dev/null
fi

echo "RAID CPU affinity configured: cores 4,5,10,11"
EOF

chmod +x /usr/local/bin/configure-raid-cpu-affinity.sh

# Enable and start the service
systemctl daemon-reload
systemctl enable raid-cpu-affinity.service
systemctl start raid-cpu-affinity.service

# Verify the service status
systemctl status raid-cpu-affinity.service

Example: CPU Isolation at Boot Time

For maximum isolation, CPU cores can be isolated from the general scheduler at boot time using kernel parameters. This ensures that isolated cores are only used by processes explicitly assigned to them.

# Edit GRUB configuration
# For RHEL/Rocky Linux: /etc/default/grub
# For Debian/Ubuntu: /etc/default/grub

# Add isolcpus parameter to isolate cores 4,5,10,11
# Note: isolcpus isolates by CPU number, so we specify 4,5,10,11
GRUB_CMDLINE_LINUX="isolcpus=4,5,10,11 nohz_full=4,5,10,11 rcu_nocbs=4,5,10,11"

# Update GRUB configuration
grub2-mkconfig -o /boot/grub2/grub.cfg  # RHEL/Rocky Linux
# OR
update-grub  # Debian/Ubuntu

# Reboot to apply changes
reboot

Example: Complete RAID Performance Optimisation with CPU Affinity

This example demonstrates a complete configuration combining CPU affinity, speed limits, and stripe cache optimisation for optimal RAID performance. Note that cpuset configurations are not persistent across reboots and must be recreated at each boot. The systemd service approach shown earlier handles this automatically.

# Complete RAID Performance Optimisation Script
# For AMD Ryzen 5 5600H: 6 cores, 12 threads
# Note: This script should be run at boot time via systemd service for persistence
# ============================================

# 1. Configure RAID speed limits (persistent via sysctl)
cat > /etc/sysctl.d/10-raid.conf << 'EOF'
# RAID Performance Tuning for AMD Ryzen 5 5600H
dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 500000
EOF
systemctl restart systemd-sysctl

# 2. Create CPU cpuset for RAID operations
# Note: Cpusets are not persistent - must be recreated at each boot
# This is handled automatically by the systemd service shown in previous examples
mkdir -p /sys/fs/cgroup/cpuset/raid_cpuset
echo 4,5,10,11 > /sys/fs/cgroup/cpuset/raid_cpuset/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/raid_cpuset/cpuset.mems
echo 1 > /sys/fs/cgroup/cpuset/raid_cpuset/cpuset.cpu_exclusive

# 3. Configure stripe cache for RAID5/RAID6 arrays
for md in /sys/block/md*/md/level; do
    if [ -f "$md" ]; then
        level=$(cat "$md")
        if [[ "$level" == "raid5" ]] || [[ "$level" == "raid6" ]]; then
            md_name=$(basename $(dirname $(dirname "$md")))
            echo 4096 > /sys/block/$md_name/md/stripe_cache_size
            echo "Configured stripe cache for $md_name: 4096 pages (16 MB)"
        fi
    fi
done

# 4. Move MD kernel threads to RAID cpuset
sleep 2  # Wait for arrays to be fully assembled
for tid in $(ps -eLo tid,comm 2>/dev/null | grep -E "md.*_raid|md.*_resync" | awk '{print $1}'); do
    if [ -n "$tid" ]; then
        echo $tid > /sys/fs/cgroup/cpuset/raid_cpuset/tasks 2>/dev/null
        echo "Moved thread $tid to RAID cpuset"
    fi
done

# 5. Verify configuration
echo "*** RAID Performance Configuration Summary ***"
echo "Speed Limits:"
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
echo -e "\nCPU Affinity (RAID Cores: 4,5,10,11):"
ps -eLo tid,psr,comm | grep -E "md.*_raid|md.*_resync" | head -5
echo -e "\nStripe Cache Sizes:"
for md in /sys/block/md*/md/stripe_cache_size; do
    if [ -f "$md" ]; then
        md_name=$(basename $(dirname $(dirname "$md")))
        cache_size=$(cat "$md")
        echo "$md_name: $cache_size pages ($((cache_size * 4)) KB)"
    fi
done

# 6. Monitor RAID performance
cat /proc/mdstat
mpstat -P 4,5,10,11 1

Performance Considerations and Recommendations

For the AMD Ryzen 5 5600H with 6 cores and 12 threads, the following recommendations apply:

Core Allocation Strategy:

  • Conservative: Dedicate 1-2 physical cores (2-4 threads) for RAID operations, leaving 4-5 cores for system and applications
  • Balanced: Dedicate 2 physical cores (4 threads) for RAID operations, providing good performance without significant impact on other workloads
  • Aggressive: Dedicate 3-4 physical cores (6-8 threads) for RAID operations, maximising rebuild speed but reducing available cores for other processes

Physical Cores vs Logical Threads:

  • Pinning to physical cores (avoiding SMT pairs) can provide more consistent performance
  • For example, using cores 4 and 5 (threads 4,5,10,11) utilises two physical cores with SMT
  • Alternatively, using only one thread per core (e.g., threads 4,5) may provide better per-thread performance but utilises fewer resources

Monitoring and Verification:

  • Monitor CPU utilisation: mpstat -P ALL 1
  • Check thread CPU affinity: taskset -p <tid>
  • Monitor RAID synchronisation speed: watch -n 1 'cat /proc/mdstat'
  • Verify core isolation: cat /proc/cmdline | grep isolcpus

NUMA Considerations:

The AMD Ryzen 5 5600H is a single-socket processor with unified memory architecture, so NUMA considerations are not applicable. For multi-socket systems, ensure that pinned CPU cores are on the same NUMA node as the storage devices to minimise memory access latency.
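
On multi-socket systems, the NUMA node of the storage controller can be read from sysfs and compared against the CPU list per node; the PCI address in the sketch below is an illustrative placeholder.

# Locate the PCI address of the storage controller (NVMe or HBA)
lspci | grep -iE "non-volatile|sas|raid"

# NUMA node of that PCI device (address is illustrative)
cat /sys/bus/pci/devices/0000:41:00.0/numa_node

# CPUs belonging to each NUMA node
lscpu | grep "NUMA node"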

3.3.6 Filesystem Tuning: XFS and ext4 Alignment with LVM2 Stripe Geometry

Proper filesystem alignment with the underlying LVM2 stripe geometry and RAID chunk size is essential for achieving optimal performance. Both XFS and ext4 provide parameters that must be calculated and configured based on the RAID chunk size, LVM2 stripe size, and the number of underlying arrays. This section provides comprehensive examples demonstrating the complete configuration workflow from RAID array creation through filesystem creation with proper alignment.

Example 1: XFS Filesystem on LVM2 over Four RAID1 Arrays

This example demonstrates the complete configuration of an XFS filesystem on an LVM2 logical volume striped across four RAID1 arrays, with each array using a 256 KB chunk size. The configuration ensures alignment at all layers: RAID chunk size, LVM2 extent size, LVM2 stripe size, and XFS stripe parameters.

# Step 1: Create four independent RAID1 arrays with 256 KB chunk size
mdadm --create /dev/md0 --level=1 --raid-devices=2 --chunk=256 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 --chunk=256 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md2 --level=1 --raid-devices=2 --chunk=256 /dev/sde1 /dev/sdf1
mdadm --create /dev/md3 --level=1 --raid-devices=2 --chunk=256 /dev/sdg1 /dev/sdh1

# Step 2: Calculate alignment parameters
# RAID chunk size: 256 KB
# Number of arrays: 4
# Effective stripe width: 256 KB × 4 = 1 MB
# LVM2 extent size should match stripe width: 1 MB
# LVM2 stripe size should match RAID chunk size: 256 KB

# Step 3: Create physical volumes with data alignment to 1 MB boundary
pvcreate --dataalignment 1M /dev/md0 /dev/md1 /dev/md2 /dev/md3

# Step 4: Create volume group with extent size matching stripe width (1 MB)
vgcreate --physicalextentsize 1M vg_raid10_like /dev/md0 /dev/md1 /dev/md2 /dev/md3

# Step 5: Create striped logical volume
# -i 4: 4 stripes (one per array)
# -I 256K: Stripe size matching RAID chunk size (256 KB)
# -L 500G: Logical volume size
lvcreate -i 4 -I 256K -L 500G -n lv_raid10_like vg_raid10_like

# Step 6: Calculate XFS stripe parameters
# XFS block size: 4 KB (default)
# LVM2 stripe size (stripe unit): 256 KB
# Number of arrays (data stripes): 4
# mkfs.xfs takes the stripe unit in bytes (su=) and the stripe width as a
# multiple of the stripe unit (sw=): su = 256k, sw = 4 (stripe width = 1 MB)
# xfs_info later reports these values in 4 KB filesystem blocks:
# sunit = 256 KB / 4 KB = 64 blocks, swidth = 64 × 4 = 256 blocks

# Step 7: Create XFS filesystem with stripe alignment
mkfs.xfs -d su=256k,sw=4 /dev/vg_raid10_like/lv_raid10_like

# Step 8: Verify alignment
# Check physical volume alignment (pe_start marks where the data area begins)
pvs -o +pe_start --units s

# Check volume group extent size
vgs -o +vg_extent_size

# Check logical volume stripe configuration
lvs -o +stripes,stripe_size

# Verify XFS stripe alignment
xfs_info /dev/vg_raid10_like/lv_raid10_like
# Look for: sunit=64 swidth=256 blks (reported in 4 KB filesystem blocks: 256 KB unit, 1 MB width)

Example 2: XFS Filesystem on LVM2 over Six RAID6 Arrays (RAID60-like)

This example demonstrates configuration for a RAID60-like setup using six RAID6 arrays, each with 512 KB chunk size, providing both redundancy and high performance for large-scale storage deployments.

# Step 1: Create six independent RAID6 arrays with 512 KB chunk size
mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=512 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm --create /dev/md1 --level=6 --raid-devices=6 --chunk=512 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1
mdadm --create /dev/md2 --level=6 --raid-devices=6 --chunk=512 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1
mdadm --create /dev/md3 --level=6 --raid-devices=6 --chunk=512 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1
mdadm --create /dev/md4 --level=6 --raid-devices=6 --chunk=512 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1 /dev/sdad1
mdadm --create /dev/md5 --level=6 --raid-devices=6 --chunk=512 /dev/sdae1 /dev/sdaf1 /dev/sdag1 /dev/sdah1 /dev/sdai1 /dev/sdaj1

# Step 2: Calculate alignment parameters
# RAID chunk size: 512 KB
# Number of arrays: 6
# Effective stripe width: 512 KB × 6 = 3 MB
# LVM2 extent size: 3 MB
# LVM2 stripe size: 512 KB

# Step 3: Create physical volumes with data alignment to 3 MB boundary
pvcreate --dataalignment 3M /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5

# Step 4: Create volume group with extent size matching stripe width
vgcreate --physicalextentsize 3M vg_raid60_like /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5

# Step 5: Create striped logical volume
lvcreate -i 6 -I 512K -L 2T -n lv_raid60_like vg_raid60_like

# Step 6: Calculate XFS stripe parameters
# LVM2 stripe size (stripe unit): 512 KB
# Number of arrays (data stripes): 6
# mkfs.xfs: su = 512k, sw = 6 (stripe width = 3 MB)
# xfs_info reports these values in 4 KB filesystem blocks:
# sunit = 512 KB / 4 KB = 128 blocks, swidth = 128 × 6 = 768 blocks

# Step 7: Create XFS filesystem with stripe alignment
mkfs.xfs -d su=512k,sw=6 /dev/vg_raid60_like/lv_raid60_like

# Step 8: Verify configuration
xfs_info /dev/vg_raid60_like/lv_raid60_like
# Expected output should show: sunit=128 swidth=768 blks (4 KB filesystem blocks: 512 KB unit, 3 MB width)

Example 3: ext4 Filesystem on LVM2 over Four RAID1 Arrays

This example demonstrates ext4 filesystem configuration with proper alignment for the same four-array RAID1 configuration used in Example 1. ext4 uses stride and stripe-width parameters instead of XFS's sunit and swidth.

# Steps 1-5: Same as Example 1 (RAID arrays, PVs, VG, LV creation)
# ... (RAID arrays, PVs, VG, and LV creation commands from Example 1) ...

# Step 6: Calculate ext4 alignment parameters
# ext4 block size: 4 KB (default)
# LVM2 stripe size: 256 KB
# Number of arrays: 4
# stride = LVM2 stripe size / ext4 block size = 256 KB / 4 KB = 64
# stripe-width = stride × number of arrays = 64 × 4 = 256

# Step 7: Create ext4 filesystem with stripe alignment
mkfs.ext4 -E stride=64,stripe-width=256 /dev/vg_raid10_like/lv_raid10_like

# Step 8: Verify alignment
# Check physical volume alignment
pvs -o +pe_start --units s

# Check logical volume stripe configuration
lvs -o +stripes,stripe_size

# Verify ext4 block size and alignment
tune2fs -l /dev/vg_raid10_like/lv_raid10_like | grep -Ei "block size|stride|stripe width"

Example 4: Complete Configuration with Verification and Mount Options

This example provides a complete end-to-end configuration including verification steps and optimised mount options for both XFS and ext4 filesystems.

# Complete XFS Configuration Example
# ===================================

# 1. Create RAID arrays
mdadm --create /dev/md0 --level=1 --raid-devices=2 --chunk=256 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 --chunk=256 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md2 --level=1 --raid-devices=2 --chunk=256 /dev/sde1 /dev/sdf1
mdadm --create /dev/md3 --level=1 --raid-devices=2 --chunk=256 /dev/sdg1 /dev/sdh1

# 2. Save RAID configuration
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # Debian/Ubuntu path; use /etc/mdadm.conf on RHEL/Rocky Linux
update-initramfs -u  # For Debian/Ubuntu
# OR
dracut --force        # For RHEL/Rocky Linux

# 3. Create LVM2 structure
pvcreate --dataalignment 1M /dev/md0 /dev/md1 /dev/md2 /dev/md3
vgcreate --physicalextentsize 1M vg_storage /dev/md0 /dev/md1 /dev/md2 /dev/md3
lvcreate -i 4 -I 256K -L 500G -n lv_data vg_storage

# 4. Create XFS filesystem
mkfs.xfs -d su=256k,sw=4 /dev/vg_storage/lv_data

# 5. Verify all alignment layers
#
# Physical Volume Alignment
pvs -o pv_name,pe_start --units s

# Volume Group Extent Size
vgs -o vg_name,vg_extent_size

# Logical Volume Stripe Configuration
lvs -o lv_name,stripes,stripe_size

# XFS Stripe Alignment
xfs_info /dev/vg_storage/lv_data

# 6. Mount with optimised options
mkdir -p /mnt/storage
mount -o noatime,nodiratime /dev/vg_storage/lv_data /mnt/storage

# 7. Add to /etc/fstab for persistent mounting
echo "/dev/vg_storage/lv_data /mnt/storage xfs noatime,nodiratime 0 0" >> /etc/fstab

# Complete ext4 Configuration Example
# ===================================

# Steps 1-3: Same as XFS example above
# ... (RAID arrays, LVM2 structure creation) ...

# 4. Create ext4 filesystem
mkfs.ext4 -E stride=64,stripe-width=256 /dev/vg_storage/lv_data

# 5. Verify alignment
pvs -o pv_name,pe_start --units s

# Logical Volume Stripe Configuration
lvs -o lv_name,stripes,stripe_size

# ext4 Alignment Parameters
tune2fs -l /dev/vg_storage/lv_data | grep -Ei "block size|stride|stripe width"

# 6. Mount with optimised options
mkdir -p /mnt/storage
mount -o noatime,nodiratime /dev/vg_storage/lv_data /mnt/storage

# 7. Add to /etc/fstab
echo "/dev/vg_storage/lv_data /mnt/storage ext4 noatime,nodiratime 0 0" >> /etc/fstab

Parameter Calculation Reference

The following formulas should be used when calculating filesystem alignment parameters for different configurations:

For XFS:

  • The stripe unit equals the LVM2 stripe size; the stripe width equals the stripe unit multiplied by the number of arrays
  • In mkfs.xfs, specify the stripe unit in bytes with the su= suboption and the stripe width as a multiple of the stripe unit with the sw= suboption
  • The older sunit=/swidth= suboptions of mkfs.xfs take values in 512-byte sectors, not filesystem blocks, so the su=/sw= form is less error-prone
  • xfs_info reports sunit and swidth in filesystem blocks (typically 4 KB): sunit = (LVM2 stripe size) / (XFS block size), swidth = sunit × (number of arrays)
  • Example: 256 KB stripe size / 4 KB block size = 64 blocks; 64 blocks × 4 arrays = 256 blocks

Calculation Examples for XFS:

  • LVM2 stripe size: 256 KB, Arrays: 4, XFS block size: 4 KB
  • Command: mkfs.xfs -d su=256k,sw=4
  • Reported by xfs_info: sunit = 256 KB / 4 KB = 64 blocks, swidth = 64 × 4 = 256 blocks
  • LVM2 stripe size: 512 KB, Arrays: 6, XFS block size: 4 KB
  • Command: mkfs.xfs -d su=512k,sw=6
  • Reported by xfs_info: sunit = 512 KB / 4 KB = 128 blocks, swidth = 128 × 6 = 768 blocks

For ext4:

  • stride = (LVM2 stripe size in bytes) / (ext4 block size in bytes)
  • stripe-width = stride × (number of arrays)
  • Both parameters are specified as integer values representing filesystem blocks.

Common Configuration Values:

For a typical configuration with 256 KB LVM2 stripe size and 4 arrays:

  • XFS: mkfs.xfs -d su=256k,sw=4; reported by xfs_info as sunit=64, swidth=256 (64 blocks × 4 KB = 256 KB stripe unit, 256 blocks × 4 KB = 1 MB stripe width)
  • ext4: stride=64, stripe-width=256 (64 blocks × 4 KB = 256 KB per array, 256 blocks total width)

For a configuration with 512 KB LVM2 stripe size and 6 arrays:

  • XFS: mkfs.xfs -d su=512k,sw=6; reported by xfs_info as sunit=128, swidth=768 (128 blocks × 4 KB = 512 KB stripe unit, 768 blocks × 4 KB = 3 MB stripe width)
  • ext4: stride=128, stripe-width=768 (128 blocks per array, 768 blocks total width)
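
The calculations above can be scripted so that the same numbers are used consistently at every layer. The helper below is a small illustrative sketch; the function name and variables are not part of any standard tool.

# Print alignment parameters for a given chunk size (KB) and array count
# Usage: align_params <chunk_kb> <num_arrays> [fs_block_kb]
align_params() {
    local chunk_kb=$1 arrays=$2 block_kb=${3:-4}
    local stripe_width_kb=$(( chunk_kb * arrays ))
    local blocks_per_chunk=$(( chunk_kb / block_kb ))
    echo "LVM2: vgcreate --physicalextentsize ${stripe_width_kb}K, lvcreate -i ${arrays} -I ${chunk_kb}K"
    echo "XFS:  mkfs.xfs -d su=${chunk_kb}k,sw=${arrays}"
    echo "ext4: mkfs.ext4 -E stride=${blocks_per_chunk},stripe-width=$(( blocks_per_chunk * arrays ))"
}

# Examples matching the configurations above
align_params 256 4
align_params 512 6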

3.3.7 Disk Selection for Optimal RAID Performance

The selection of appropriate storage devices for RAID arrays is critical for achieving optimal performance characteristics that align with application workload requirements. Different application types impose distinct I/O patterns and performance demands, necessitating careful consideration of device characteristics including interface type, random and sequential performance, latency characteristics, and endurance ratings. The fundamental principle governing RAID performance is that the slowest device in the array determines the overall array performance, making device homogeneity essential for optimal operation.

For database applications, particularly Online Transaction Processing (OLTP) workloads characterised by high random I/O operations and low latency requirements, NVMe SSDs provide the necessary performance characteristics. OLTP databases typically require random read/write IOPS exceeding 100,000 operations per second with sub-millisecond latency, characteristics that NVMe SSDs deliver through their direct PCIe interface and advanced controller architectures. Enterprise NVMe SSDs can provide random 4K read/write IOPS between 400,000 and 1,000,000, with sequential read speeds of 3,000-7,000 MB/s and sequential write speeds of 2,000-6,000 MB/s, making them suitable for high-performance database deployments. For Online Analytical Processing (OLAP) workloads with sequential read-heavy patterns, high-performance SATA SSDs or enterprise HDDs may provide sufficient throughput whilst maintaining cost-effectiveness, as these workloads prioritise sequential bandwidth over random IOPS.

High-Performance Computing (HPC) and machine learning workloads present different requirements, with training workloads demanding high sequential throughput for reading large datasets and writing checkpoints, whilst inference workloads require high random read IOPS with low latency. For training workloads requiring throughput exceeding 5 GB/s, NVMe SSDs in RAID 0 or RAID 10 configurations provide the necessary bandwidth, with NVMe SSDs capable of delivering aggregate throughput of 50 GB/s or higher in multi-device configurations. Inference workloads benefit from NVMe SSDs in RAID 10 configurations, providing both high random read performance and redundancy for production deployments. Data preparation workloads, which exhibit mixed sequential and random access patterns, can utilise SATA SSDs or NVMe SSDs depending on throughput requirements, with SATA SSDs providing cost-effective solutions for moderate performance requirements.

The endurance characteristics of SSDs, measured in Drive Writes Per Day (DWPD), must be considered when selecting devices for write-intensive workloads. Read-intensive workloads such as content delivery or archival storage may utilise SSDs with lower DWPD ratings, reducing cost whilst maintaining performance. Write-intensive workloads including database transaction logs, checkpoint storage, or scratch storage for computational workloads require SSDs with higher DWPD ratings to ensure device longevity over the expected operational lifetime. Enterprise SSDs typically provide DWPD ratings ranging from 1 to 10, with higher ratings corresponding to increased cost but improved suitability for write-intensive applications.

The interface type and protocol significantly impact performance characteristics, with NVMe providing the highest performance through direct PCIe connectivity, SATA SSDs offering balanced performance and cost through the SATA interface, and traditional HDDs providing maximum capacity at lower cost through SATA or SAS interfaces. For LVM2 over mdadm RAID configurations, device homogeneity is essential, requiring all devices within an array to have identical specifications including interface type, capacity, and performance characteristics. Mixing devices with different performance characteristics results in the array operating at the speed of the slowest device, negating the performance advantages of faster devices and reducing overall array efficiency.
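
A quick homogeneity check before array creation is to compare the model, capacity, rotational flag, and transport of every candidate device; the sketch below uses illustrative NVMe device paths, and smartctl requires the smartmontools package.

  # Compare model, capacity, rotational flag, and transport for the candidate devices
  lsblk -d -o NAME,MODEL,SIZE,ROTA,TRAN /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

  # Inspect health and endurance attributes of an individual device
  smartctl -a /dev/nvme0n1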

3.4 Management and Monitoring

The LVM2 over mdadm RAID approach provides significant advantages in management and monitoring capabilities. Each RAID array can be monitored independently using standard mdadm tools, providing granular visibility into the health and performance of individual arrays. This granular monitoring capability enables administrators to perform individual health checks on each array rather than examining aggregate statistics, allowing for more precise identification of potential issues before they escalate into failures.

The independent nature of arrays enables targeted alerting, where monitoring systems can generate alerts for specific arrays rather than aggregate storage system alerts. This targeted alerting capability enables more precise response procedures, as administrators can immediately identify which specific array requires attention without needing to investigate the entire storage system. When issues are identified, independent recovery procedures can be executed for the affected array without impacting the operation of other arrays in the configuration.
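
This per-array monitoring can be implemented with standard mdadm tooling; the following is a minimal sketch in which the array device and alert address are placeholders.

  # Summary of all arrays, including any rebuild or resync activity
  cat /proc/mdstat

  # Detailed health of a single array (repeat for each /dev/mdN)
  mdadm --detail /dev/md0

  # Run the mdadm monitor as a daemon and mail alerts when an array degrades
  mdadm --monitor --scan --daemonise --mail=storage-team@example.com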

Beyond RAID array management, the LVM2 layer provides additional management capabilities including snapshot creation, thin provisioning, and logical volume resizing. These LVM2 features operate over the underlying RAID arrays, providing a unified management interface for both redundancy and volume management functions. The combination of independent RAID array management and LVM2 volume management provides a comprehensive storage management solution with both redundancy control and volume flexibility.
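
As an illustration, assuming a volume group named vg_data, snapshot creation and online volume growth might look like the following sketch; all names and sizes are hypothetical.

  # Create a snapshot of a logical volume
  lvcreate --snapshot --name lv_data_snap --size 20G /dev/vg_data/lv_data

  # Grow a logical volume and its filesystem in one step
  lvextend --resizefs --size +100G /dev/vg_data/lv_data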

The operational workflow benefits from simplified troubleshooting procedures, as issues can be isolated to specific arrays rather than requiring investigation of complex nested structures. Each array is a standard mdadm RAID array, meaning that standard mdadm tools and procedures apply without requiring special knowledge of nested configurations. The clear separation of concerns between RAID redundancy management and volume management simplifies documentation and operational procedures, as each layer can be documented and managed independently whilst understanding their interaction.

3.5 Failure Recovery

Failure recovery procedures with LVM2 over mdadm RAID are significantly simplified compared to traditional nested RAID configurations. When a drive failure occurs, the affected array can be identified using standard mdadm commands to examine the array state. The failed drive can be removed from the specific array and a replacement drive added, with the array rebuilding independently whilst all other arrays continue normal operation without any impact.
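
A typical recovery sequence, with array and partition names purely illustrative, might proceed as follows.

  # Identify the degraded array and the failed member
  mdadm --detail /dev/md2

  # Mark the failed drive, remove it, and add the replacement
  mdadm --manage /dev/md2 --fail /dev/sdx1
  mdadm --manage /dev/md2 --remove /dev/sdx1
  mdadm --manage /dev/md2 --add /dev/sdy1

  # Watch the rebuild; the remaining arrays continue serving I/O
  watch cat /proc/mdstat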

The isolated impact of failures means that a drive failure in one array does not affect the operation of other arrays in the configuration. This isolation provides significant operational advantages, as the system can continue operating normally whilst the failed array is being recovered. Each array rebuilds independently at its own pace, without requiring coordination with other arrays or affecting their performance.

The independent rebuild capability reduces the risk associated with array recovery operations. Smaller arrays rebuild faster than large monolithic arrays, reducing the window of vulnerability during which additional failures could result in data loss. The reduced rebuild scope means that rebuild operations affect only the specific array that experienced the failure, rather than larger portions of the storage system.

In contrast, traditional nested RAID configurations may require coordinated recovery procedures across the nested structure. A failure in one component of a nested configuration can affect the entire structure, requiring more complex recovery procedures. Rebuild operations in traditional configurations often affect larger portions of storage, and the longer rebuild times associated with large arrays increase the risk of additional failures occurring during the rebuild process.

3.6 Configuration Flexibility

The LVM2 over mdadm RAID approach provides substantial configuration flexibility that is not available with traditional nested RAID configurations. Arrays of different sizes can be combined within a volume group, although this results in capacity limitations as the usable capacity is determined by the smallest array when striping is used. Similarly, arrays with different performance characteristics can be mixed, though this results in performance limitations as the overall performance is constrained by the slowest array.

This flexibility enables selective usage of arrays, where different logical volumes can be created using different subsets of arrays. For example, high-performance logical volumes can be created from arrays utilising fast NVMe SSDs, whilst capacity-oriented logical volumes can be created from arrays utilising larger but slower HDDs. The dynamic reconfiguration capability allows arrays to be added to or removed from volume groups as requirements change, providing operational flexibility not available with fixed RAID configurations.

The configuration flexibility supports various use cases including tiered storage implementations, where logical volumes are created from arrays with different performance characteristics to match workload requirements. Capacity optimisation is enabled through more efficient utilisation of available capacity, as arrays can be combined and managed more flexibly than fixed configurations. Workload isolation can be achieved by separating different workloads onto different array sets, whilst gradual migration from older to newer arrays can be performed incrementally without requiring complete system reconfiguration.
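
The sketch below illustrates selective array usage and gradual migration, assuming a volume group vg_data in which /dev/md0 and /dev/md1 are NVMe-backed and /dev/md2 and /dev/md3 are HDD-backed; all names and sizes are hypothetical.

  # High-performance volume restricted to the NVMe-backed arrays
  lvcreate --name lv_fast --size 2T --stripes 2 --stripesize 256k vg_data /dev/md0 /dev/md1

  # Capacity-oriented volume restricted to the HDD-backed arrays
  lvcreate --name lv_bulk --size 20T --stripes 2 --stripesize 256k vg_data /dev/md2 /dev/md3

  # Gradual migration: move extents off an array being retired, then remove it from the
  # volume group (requires sufficient free extents on the remaining arrays)
  pvmove /dev/md2
  vgreduce vg_data /dev/md2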

---

4. Discussion

4.1 Performance Considerations

The performance characteristics of LVM2 over mdadm RAID are equivalent to traditional nested RAID configurations when properly configured, but achieving this equivalence requires careful attention to alignment across all storage layers. The alignment requirements span multiple components: the RAID chunk size at the mdadm layer, the LVM2 physical extent size at the volume group level, the LVM2 stripe size when creating striped logical volumes, and the filesystem alignment parameters when creating filesystems on the logical volumes.

Proper alignment optimisation requires that the physical extent size matches the effective stripe width, which is calculated as the RAID chunk size multiplied by the number of arrays. The LVM2 stripe size must match the RAID chunk size to ensure that LVM2's striping granularity aligns with the underlying RAID data distribution pattern. Physical volume data alignment must be configured to align to stripe width boundaries using the --dataalignment parameter when creating physical volumes, and filesystem alignment must be configured to align with the LVM2 stripe geometry using filesystem-specific parameters.
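
Assuming four RAID6 arrays with a 256 KB chunk size (effective stripe width 1 MB), the alignment described above could be expressed roughly as follows; device and volume names are placeholders.

  # Align the physical volume data area to the 1 MB effective stripe width
  pvcreate --dataalignment 1m /dev/md0 /dev/md1 /dev/md2 /dev/md3

  # Physical extent size matching the effective stripe width
  vgcreate --physicalextentsize 1m vg_data /dev/md0 /dev/md1 /dev/md2 /dev/md3

  # Stripe across all four arrays with a stripe size equal to the RAID chunk size
  lvcreate --name lv_data --size 10T --stripes 4 --stripesize 256k vg_data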

Failure to achieve proper alignment results in I/O operations that span multiple stripes unnecessarily, causing read-modify-write cycles and other inefficiencies that degrade performance by 20-50%. This performance degradation can negate the operational benefits of the LVM2 over mdadm RAID approach, making proper alignment configuration essential for successful deployment.

4.2 Operational Advantages

The primary advantages of LVM2 over mdadm RAID are operational rather than performance-based, focusing on improved manageability and flexibility rather than raw performance improvements. The independent management of arrays enables operational flexibility that is not available with traditional nested RAID configurations, allowing administrators to perform maintenance, expansion, and recovery operations on individual arrays without affecting the entire storage system.

The scalability advantage manifests through incremental capacity expansion capabilities that do not require major reconfiguration of the storage system. New arrays can be added to existing volume groups, and logical volumes can be extended to utilise the new capacity, all whilst the system remains operational. This incremental expansion capability provides significant operational advantages over approaches requiring predetermined array sizes or major reconfiguration for capacity increases.
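
A minimal expansion sketch follows, assuming a striped logical volume over four existing arrays and four new arrays /dev/md4 through /dev/md7 being added; because a striped logical volume acquires new space in units of its stripe count, arrays are typically added in matching sets. All names are hypothetical.

  # Prepare the new arrays with the same alignment as the existing ones
  pvcreate --dataalignment 1m /dev/md4 /dev/md5 /dev/md6 /dev/md7

  # Add them to the volume group
  vgextend vg_data /dev/md4 /dev/md5 /dev/md6 /dev/md7

  # Extend the logical volume onto the new capacity and grow the filesystem online
  lvextend --resizefs --extents +100%FREE vg_data/lv_data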

Maintainability is improved through simplified troubleshooting and maintenance procedures. Each array can be managed independently using standard mdadm tools, and issues can be isolated to specific arrays rather than requiring investigation of complex nested structures. The visibility advantage provides better insight into individual array health and performance, enabling more precise monitoring and alerting compared to aggregate views of nested configurations.

4.3 Limitations and Considerations

The LVM2 over mdadm RAID approach has several limitations and considerations that must be understood when planning deployments. Capacity limitations arise from the striping mechanism, where usable capacity is limited by the smallest array when data is striped across multiple arrays. When arrays of different sizes are combined, the extra capacity on larger arrays remains unused, resulting in wasted storage space. The best practice is to use arrays of identical size to maximise capacity utilisation and avoid wasted storage.

Performance limitations stem from the fundamental principle that RAID arrays operate at the speed of their slowest component. When arrays with different performance characteristics are combined, the overall performance is constrained by the slowest array, with faster arrays being throttled to match the slowest array's performance. Mixed performance characteristics degrade overall performance, making it essential to use arrays with identical performance characteristics including device type, interface speed, and sequential and random I/O capabilities.

Complexity considerations include the requirement for understanding both mdadm and LVM2 tools and concepts. Proper alignment requires careful configuration across multiple layers, and monitoring requires attention to both the RAID layer and the LVM2 layer to ensure optimal operation. However, this complexity is offset by the operational flexibility and management advantages provided by the approach.

4.4 Best Practices

Configuration best practices for LVM2 over mdadm RAID deployments emphasise consistency and proper alignment. Consistent array specifications should be used, with identical devices and configurations for all arrays to avoid capacity and performance limitations. Proper alignment must be ensured at all storage layers: the RAID layer with appropriate chunk size selection, the LVM2 layer with physical extent size and stripe size configuration, and the filesystem layer with alignment parameters matching the underlying storage geometry.

Extent size optimisation requires setting the physical extent size to match the effective stripe width, calculated as the RAID chunk size multiplied by the number of arrays. Stripe size matching ensures that the LVM2 stripe size matches the RAID chunk size, maintaining alignment between the LVM2 striping layer and the underlying RAID data distribution. Data alignment must be configured using the --dataalignment parameter when creating physical volumes, ensuring that the physical volume data area starts at an offset aligned with RAID stripe boundaries.

Operational best practices include independent monitoring of each array separately, enabling early detection of issues and precise response procedures. Gradual expansion should be performed incrementally as capacity needs arise, rather than over-provisioning initial capacity. Clear documentation of array configurations, including chunk sizes, extent sizes, and alignment parameters, is essential for maintaining and troubleshooting the storage system. Testing of expansion and recovery procedures should be performed before production deployment to ensure that operational procedures are well-understood and can be executed efficiently when needed.
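
Alignment and configuration can be spot-checked with standard reporting commands; in the sketch below the volume group name, array device, and mount point are placeholders.

  # RAID chunk size of each array
  mdadm --detail /dev/md0 | grep -i 'chunk size'

  # Physical volume data offset and volume group extent size
  pvs -o pv_name,pe_start,vg_name
  vgs -o vg_name,vg_extent_size vg_data

  # Stripe count, stripe size, and device mapping of each logical volume
  lvs -o lv_name,stripes,stripe_size,devices vg_data

  # Filesystem view of the stripe geometry (XFS example)
  xfs_info /mnt/data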

---

5. Conclusions

The use of LVM2 over mdadm RAID1 and RAID6 arrays provides significant operational benefits for Linux storage management whilst maintaining equivalent performance characteristics to traditional nested RAID configurations. The approach offers enhanced flexibility, simplified capacity expansion, improved manageability, and better failure isolation compared to direct RAID array usage or traditional nested RAID configurations.

Summary of Results:

The analysis demonstrates that LVM2 over mdadm RAID achieves equivalent performance to traditional RAID10/60 configurations when properly aligned, providing the same redundancy and performance characteristics whilst offering superior operational flexibility. Independent array management enables operational flexibility that is not available with traditional approaches, allowing maintenance, expansion, and recovery operations to be performed on individual arrays without affecting the entire storage system.

Capacity expansion is straightforward and non-disruptive, enabling incremental growth without major reconfiguration. Individual array management simplifies troubleshooting and maintenance procedures, as issues can be isolated to specific arrays and addressed independently. Better failure isolation ensures that failures in one array do not directly impact others, reducing the risk and complexity associated with storage system failures.

Recommendations:

Deployments requiring flexibility and incremental expansion should utilise LVM2 over mdadm RAID, as this approach provides the operational advantages necessary for dynamic storage environments. Proper alignment must be ensured across all storage layers for optimal performance, requiring careful configuration of RAID chunk size, LVM2 extent size, LVM2 stripe size, and filesystem alignment parameters. Identical array specifications should be used to avoid capacity and performance limitations that arise from mixing arrays with different characteristics.

Independent monitoring should be implemented for each RAID array, enabling early detection of issues and precise response procedures. Configuration documentation and operational procedures should be maintained to ensure operational consistency and enable efficient troubleshooting and maintenance operations.

Future Work:

Further research could examine:

  • Performance characteristics under specific workload patterns
  • Optimal extent size calculations for various array configurations
  • Automated alignment verification and optimisation tools
  • Comparative analysis with other storage management approaches

