Red Hat

Subscriptions

Subscriptions are managed through the Red Hat entitlement service, integrated with the Customer Portal. Entitlement certificates are stored as X.509 PEM certificate files in /etc/pki/entitlement for Simple Content Access (SCA)-enabled accounts.

Inspect X.509 certificate
openssl x509 -text -in /etc/pki/entitlement/9012345678901234567.pem

subscription-manager

# Display status of subscriptions and products
subscription-manager status

# Display installed products
subscription-manager list

Applications

TuneD

TuneD is a service that monitors the system and optimizes its performance under certain workloads. TuneD provides predefined profiles for power-saving and performance-boosting use cases.

  • throughput-performance optimizes for throughput
  • virtual-guest optimizes for performance when running inside a virtual machine
  • balanced balances performance and power consumption
  • powersave optimizes for power consumption

These can be listed from the command-line:

tuned-adm list profiles
tuned-adm active
tuned-adm recommend
tuned-adm profile powersave                 # Select a profile
tuned-adm profile virtual-guest powersave   # Select a merged profile

Dynamic tuning monitors system components while the system is running and adjusts settings on the fly. It is enabled by changing a setting in TuneD's main config at /etc/tuned/tuned-main.conf:

/etc/tuned/tuned-main.conf
dynamic_tuning=1

Cockpit

Cockpit is built into Red Hat distributions and, once started as a normal systemd service, is available on port 9090.
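
A minimal way to enable it (the package, socket, and firewalld service names below are the standard ones):

dnf install -y cockpit
systemctl enable --now cockpit.socket
firewall-cmd --permanent --add-service=cockpit && firewall-cmd --reload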

Storage

autofs

Auto File System offers an alternative way of mounting NFS shares that can save some system resources, especially when many shares are mounted. Autofs can mount NFS shares dynamically, only when accessed.

dnf install -y autofs
systemctl enable --now autofs.service

Mounts are defined in configs called maps. There are three map types:

  • master map is /etc/auto.master by default
  • direct maps point to other files for mount details. They are notable for beginning with /-
  • indirect maps also point to other files for mount details but provide an umbrella mount point which will contain all other mounts within it. Note that other mountpoints at this parent directory cannot coexist with autofs mounts.

Here is an example indirect map that will mount to /data/sales.

/etc/auto.master.d/data.autofs
/data /etc/auto.data
/etc/auto.data
sales -rw,soft 192.168.33.101:/data/sales
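
For comparison, here is a sketch of a direct map, which uses absolute paths as keys (the server address and export are placeholders):

/etc/auto.master.d/backups.autofs
/- /etc/auto.backups

/etc/auto.backups
/mnt/backups -rw,soft 192.168.33.101:/backups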

Map files also support wildcards.

* 127.0.0.1:/home/&

AutoFS's config is at /etc/autofs.conf. One important directive is master_map_name which defines the master map file.
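
On a stock install the directive looks something like this:

/etc/autofs.conf
[ autofs ]
master_map_name = auto.master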

Stratis

Stratis is an open-source managed pooled storage solution in the vein of ZFS or btrfs.

Stratis block devices can be disks, partitions, LUKS-encrypted volumes, LVM logical volumes, or DM multipath devices. Stratis pools are mounted under /stratis and, like other pooled storage systems, support multiple filesystems. Stratis file systems are thinly provisioned and formatted with xfs, although vanilla xfs utilities cannot be used on Stratis file systems.

dnf -y install stratisd stratis-cli
systemctl enable --now stratisd
Create a pool
stratis pool create pool /dev/sda /dev/sdb /dev/sdc # (1)

  1. An error about the devices being "owned" can be resolved by wiping the device.
    wipefs -a /dev/sda
    
Display block devices managed by Stratis
stratis blockdev # (1)
  1. This command is equivalent to pvs in LVM.
Create filesystem
stratis fs create pool files
Confirm
stratis fs
/etc/fstab
/stratis/pool/files /mnt/stratisfs xfs defaults,x-systemd.requires=stratisd.service 0 0
Expand pool
stratis pool add-data pool /dev/sdd
Save snapshot
stratis fs snapshot pool files files-snapshot
Restore from snapshot
stratis fs rename files files-orig
stratis fs rename files-snapshot files
umount /mnt/files; mount /mnt/files

VDO

Virtual Data Optimizer (VDO) is a kernel module introduced in RHEL 7.5 that provides data deduplication and compression on block devices.

The physical storage of a VDO volume is divided into a number of slabs, which are contiguous regions of the physical space. All slabs for a given volume have the same size, which can be any power of 2 multiple of 128 MB up to 32 GB (2 GB by default). The maximum number of slabs is 8,192, so the slab size chosen at creation time determines the maximum physical size of the volume.
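
The slab size can be set when a volume is created; if memory serves, the option is --vdoSlabSize (verify with vdo create --help):

vdo create --name=web_storage --device=/dev/xvdb --vdoSlabSize=2G --vdoLogicalSize=10G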

Like LVM volumes, VDO volumes appear under /dev/mapper

VDO appears not to be installed by default, but it is available in the BaseOS repo.

dnf install vdo kmod-kvdo
systemctl enable --now vdo

Create a VDO volume
vdo create --name=web_storage --device=/dev/xvdb --vdoLogicalSize=10G
vdostats --human-readable
mkfs.xfs -K /dev/mapper/web_storage
udevadm settle

The fstab file requires a variety of options

/dev/mapper/web_storage /mnt/web_storage xfs _netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0

Labs

EX200

IAM

We're going to lay the groundwork here and use these local accounts for all the subsequent tasks. You can write a script to do this, or do it by hand, from the data in the input file for the script. The file contents are:

manny:1010:dba_admin,dba_managers,dba_staff 
moe:1011:dba_admin,dba_staff 
jack:1012:dba_intern,dba_staff 
marcia:1013:it_staff,it_managers 
jan:1014:dba_admin,dba_staff 
cindy:1015:dba_intern,dba_staff 

Set all user passwords to dbapass. Also, change the users' PRIMARY groups' GID to match their UID. Don't forget to check their home directories to make sure permissions are correct!
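
A rough script for this, assuming the data above is saved as users.txt (the filename is arbitrary):

#!/bin/bash
while IFS=: read -r user uid groups; do
    groups=${groups//[[:space:]]/}               # strip stray whitespace from the input
    for g in ${groups//,/ }; do                  # create secondary groups if missing
        getent group "$g" >/dev/null || groupadd "$g"
    done
    groupadd -g "$uid" "$user"                   # private primary group with GID matching the UID
    useradd -u "$uid" -g "$uid" -G "$groups" "$user"
    echo "${user}:dbapass" | chpasswd
done < users.txt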

Enable the following command aliases:

  • SOFTWARE
  • SERVICES
  • PROCESSES

Add a new command alias named MESSAGES:

/bin/tail -f /var/log/messages
Enable superuser privileges for the following local groups (a sketch of a sudoers drop-in follows the list):

  • dba_managers: everything
  • dba_admin: Command aliases: SOFTWARE, SERVICES, PROCESSES
  • dba_intern: Command alias: MESSAGES
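
A sketch of a matching sudoers drop-in, assuming the SOFTWARE, SERVICES and PROCESSES aliases are the sample ones shipped (commented out) in /etc/sudoers; the drop-in filename is arbitrary, and it should be edited with visudo -f.

/etc/sudoers.d/dba
Cmnd_Alias MESSAGES = /bin/tail -f /var/log/messages
%dba_managers ALL=(ALL) ALL
%dba_admin    ALL=(ALL) SOFTWARE, SERVICES, PROCESSES
%dba_intern   ALL=(ALL) MESSAGES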

Repos

You'll need to configure three repositories and install some software:

  • RHEL 8 BaseOS:

    • Repository ID: [rhel-8-baseos-rhui-rpms]
    • The mirrorlist is: https://rhui3.REGION.aws.ce.redhat.com/pulp/mirror/content/dist/rhel8/rhui/$releasever/$basearch/baseos/os
    • The GPG key is located at: /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    • You will need to add SSL configuration:
    sslverify=1 
    sslclientkey=/etc/pki/rhui/content-rhel8.key 
    sslclientcert=/etc/pki/rhui/product/content-rhel8.crt 
    sslcacert=/etc/pki/rhui/cdn.redhat.com-chain.crt 
    
  • RHEL 8 AppStream:

    • Repository ID: [rhel-8-appstream-rhui-rpms]
    • The mirrorlist is: https://rhui3.REGION.aws.ce.redhat.com/pulp/mirror/content/dist/rhel8/rhui/$releasever/$basearch/appstream/os
    • The GPG key is located at: /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    • You will need to add SSL configuration:
    sslverify=1
    sslclientkey=/etc/pki/rhui/content-rhel8.key
    sslclientcert=/etc/pki/rhui/product/content-rhel8.crt
    sslcacert=/etc/pki/rhui/cdn.redhat.com-chain.crt
    
  • EPEL:

    • Repository ID: [epel]
    • The baseurl is: https://download.fedoraproject.org/pub/epel/$releasever/Everything/$basearch
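
A sketch of the BaseOS entry as a .repo file; the AppStream and EPEL entries follow the same pattern, and the filename and name= value are arbitrary.

/etc/yum.repos.d/rhel-8-baseos-rhui-rpms.repo
[rhel-8-baseos-rhui-rpms]
name=Red Hat Enterprise Linux 8 BaseOS (RHUI)
mirrorlist=https://rhui3.REGION.aws.ce.redhat.com/pulp/mirror/content/dist/rhel8/rhui/$releasever/$basearch/baseos/os
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify=1
sslclientkey=/etc/pki/rhui/content-rhel8.key
sslclientcert=/etc/pki/rhui/product/content-rhel8.crt
sslcacert=/etc/pki/rhui/cdn.redhat.com-chain.crt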

Configure the repositories on the first server, then make an archive of the files, securely copy them to the second server, then unarchive the repository files on the second server.
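
One way to do the copy, assuming the second server is reachable as server2:

tar czf repos.tar.gz -C /etc/yum.repos.d .
scp repos.tar.gz server2:
ssh server2 'sudo tar xzf repos.tar.gz -C /etc/yum.repos.d'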

  • Install the default AppStream stream/profile for container-tools
  • Install the youtube-dl package (from EPEL)
  • Check for system updates, but don't install them
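
The corresponding commands:

dnf module install container-tools   # default stream and profile
dnf install youtube-dl               # provided by EPEL
dnf check-update                     # list available updates without installing them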

Networking

On the first server, configure the second interface's IPv4/IPv6 addresses using nmtui.

  • IPv4: 10.0.1.20/24
  • IPv6: 2002:0a00:0114::/64
  • Manual, not Automatic (DHCP) for both interfaces
  • Only IP addresses, no other fields
  • Configure only, do not activate
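
nmtui is interactive, but an equivalent non-interactive sketch with nmcli looks like this, assuming the connection profile is named eth1 (check with nmcli con show); con mod only edits the profile, it does not activate it:

nmcli con mod eth1 ipv4.addresses 10.0.1.20/24 ipv4.method manual
nmcli con mod eth1 ipv6.addresses 2002:0a00:0114::/64 ipv6.method manual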

Logging

By default, the systemd journal logs to memory in RHEL 8, in the location /run/log/journal. While this works fine, we'd like to make our journals persistent across reboots. Configure the systemd journal logs to be persistent on both servers, logging to /var/log/journal.
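
A minimal way to do this:

/etc/systemd/journald.conf
[Journal]
Storage=persistent

systemctl restart systemd-journald
journalctl --list-boots   # entries should now survive reboots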

Scheduling

Create one at task and one cron job on the first server:

  • The at job will create a file containing the string "The at job ran" in the file named /web/html/at.html, two minutes from the time you schedule it.
  • The cron job will append to the /web/html/cron.html file every minute, echoing the date to the file.
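
A sketch of both jobs, using the paths given above:

echo 'echo "The at job ran" > /web/html/at.html' | at now + 2 minutes
crontab -e   # then add the following line
* * * * * date >> /web/html/cron.html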

These files will be available via the web server on the first server after the "Troubleshoot SELinux issues" objective is completed.

Chrony

Time sync is not working on either of our servers. We need to fix that.

Configure chrony to use the following server:

server 169.254.169.123 iburst 
Make sure your work is persistent and check your work!
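
A minimal sketch:

/etc/chrony.conf
server 169.254.169.123 iburst

systemctl enable --now chronyd
chronyc sources -v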

GRUB

On server1, make the following changes:

  • Increase the timeout using GRUB_TIMEOUT=10
  • Add the following line: GRUB_TIMEOUT_STYLE=hidden
  • Add quiet to the end of the GRUB_CMDLINE_LINUX line

Validate the changes in /boot/grub2/grub.cfg. Do not reboot the server.
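
The settings go in /etc/default/grub, and the config is then regenerated (the existing GRUB_CMDLINE_LINUX contents are kept, with quiet appended):

/etc/default/grub
GRUB_TIMEOUT=10
GRUB_TIMEOUT_STYLE=hidden
GRUB_CMDLINE_LINUX="<existing options> quiet"

grub2-mkconfig -o /boot/grub2/grub.cfg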

Storage

On the second server:

  • Create a VDO device with the first unused 5GB device.

    • Name: web_storage
    • Logical Size: 10GB
  • Use the VDO device as an LVM physical volume. Create the following (a command sketch follows this list):

    • Volume Group: web_vg

    • Three 2G Logical Volumes with xfs file systems, mounted persistently at /mnt/web_storage_{dev,qa,prod}:

      • web_storage_dev
      • web_storage_qa
      • web_storage_prod
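
A rough sequence for the VDO-backed volume group (the device name is a placeholder; repeat the mkfs and fstab steps for qa and prod):

vdo create --name=web_storage --device=/dev/nvme1n1 --vdoLogicalSize=10G
pvcreate /dev/mapper/web_storage
vgcreate web_vg /dev/mapper/web_storage
lvcreate -L 2G -n web_storage_dev  web_vg
lvcreate -L 2G -n web_storage_qa   web_vg
lvcreate -L 2G -n web_storage_prod web_vg
mkfs.xfs -K /dev/web_vg/web_storage_dev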

We need to increase the swap on the second server. We're going to use half of our first unused 2G disk for this additional swap space. Configure the swap space non-destructively and persistently.
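
One non-destructive approach is to partition half of the disk and use the new partition for swap (the disk name is a placeholder):

parted -s /dev/nvme2n1 mklabel gpt mkpart swap1 linux-swap 0% 50%
mkswap /dev/nvme2n1p1
swapon /dev/nvme2n1p1
echo '/dev/nvme2n1p1 none swap defaults 0 0' >> /etc/fstab   # a UUID= entry is more robust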

On the second server, using the second 2G disk, create the following:

  • Stratis pool: appteam
  • Stratis file system, mounted persistently at /mnt/app_storage: appfs1

Shares

Configure autofs on the first server to mount the user home directories on the second server at /export/home.

  • On the second server, configure a NFS server with the following export:
/home <first_server_private_IP>(rw,sync,no_root_squash)
  • On the first server, configure autofs to mount the exported /home directory on the second server at /export/home. Change the home directories for our six users (manny|moe|jack|marcia|jan|cindy) to be /export/home/<user> and test.
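
A sketch of the autofs side on the first server (substitute the second server's real address):

/etc/auto.master.d/home.autofs
/export/home /etc/auto.home

/etc/auto.home
* -rw <second_server_private_IP>:/home/&

systemctl restart autofs
usermod -d /export/home/manny manny   # repeat for the other users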

On the second server, create a directory at /home/dba_docs with:

  • Group ownership: dba_staff
  • Permissions: 770, SGID and sticky bits set

Create a link in each shared user's home directory to this directory, for easy access.

Set the following ACLs:

  • Read-only for jack and cindy
  • Full permissions for marcia
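
A sketch of the whole sequence (adjust the user list to whoever should get the links):

mkdir /home/dba_docs
chgrp dba_staff /home/dba_docs
chmod 3770 /home/dba_docs                        # 770 plus SGID and sticky bits
for u in manny moe jack jan cindy; do ln -s /home/dba_docs /home/$u/dba_docs; done
setfacl -m u:jack:r-x,u:cindy:r-x,u:marcia:rwx /home/dba_docs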

Container as service

As the cloud_user user on the first server, create a persistent systemd container with the following:

  • Image: registry.access.redhat.com/rhscl/httpd-24-rhel7
  • Port mappings: 8080 on the container to 8000 on the host
  • Persistent storage at ~/web_data, mounted at /var/www/html in the container
  • Container name: web_server
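
A sketch using podman and generate systemd, run as cloud_user rather than root:

mkdir -p ~/web_data ~/.config/systemd/user
podman run -d --name web_server -p 8000:8080 \
    -v ~/web_data:/var/www/html:Z \
    registry.access.redhat.com/rhscl/httpd-24-rhel7
podman generate systemd --new --files --name web_server
mv container-web_server.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-web_server.service
loginctl enable-linger   # keep user services running after logout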

SELinux

The Apache web server on the first server won't start! Investigate this issue, and correct any other SELinux issues related to httpd that you may find.
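
A typical diagnosis and fix, assuming the problems turn out to be the non-standard port and document root used elsewhere in this lab:

sealert -a /var/log/audit/audit.log          # or: ausearch -m AVC -ts recent
semanage port -a -t http_port_t -p tcp 85
semanage fcontext -a -t httpd_sys_content_t "/web(/.*)?"
restorecon -Rv /web
systemctl restart httpd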

Firewall

Make sure the firewall is installed, enabled and started on both servers. Configure the following services/ports:

  • Server 1:

    • ssh
    • http
    • Port 85 (tcp)
    • Port 8000 (tcp)
  • Server 2:

    • ssh
    • nfs
    • nfs3
    • rpc-bind
    • mountd
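
The corresponding commands (brace expansion is just shorthand for repeating the flag):

dnf install -y firewalld
systemctl enable --now firewalld

# Server 1
firewall-cmd --permanent --add-service={ssh,http} --add-port={85/tcp,8000/tcp}
# Server 2
firewall-cmd --permanent --add-service={ssh,nfs,nfs3,rpc-bind,mountd}
firewall-cmd --reload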

Commands

dnf

View history of dnf commands

dnf history
dnf history userinstalled # View all packages installed by user

Package groups can be specified using the group command or by prefixing the package group name with @

dnf info @virtualization # dnf group info virtualization
dnf install @virtualization # dnf group install virtualization
dnf install --with-optional @virtualization # Include optional packages

Remove the configuration backend supporting the use of legacy ifcfg files in NetworkManager.

dnf remove NetworkManager-initscripts-ifcfg-rh

Modules are special package groups representing an application, runtime, or a set of tools. The Node.js module allows you to select several streams corresponding to major versions.

dnf module install nodejs:12

Global dnf configuration is stored in /etc/dnf/dnf.conf (on RHEL, /etc/yum.conf is a compatibility symlink to it).

[main]
; Exclude packages from updates permanently
exclude=kernel* php*
; Suppress confirmation
assumeyes=True

The configuration can be dumped from the command-line (as root)

dnf config-manager --dump

Repos

Repositories are INI files placed in /etc/yum.repos.d/, but they can also be displayed and manipulated from the command-line.

Repositories
# Display repos
dnf repolist # -v

# Display enabled repos
dnf repolist --enabled

# Display a single repo
dnf repoinfo docker-ce-stable

# Add repo
dnf config-manager --add-repo $REPO-URL

# Disable repo
dnf config-manager --set-disabled $REPO-NAME
Example repos
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/fedora/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/fedora/gp

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Modules are collections of packages that are installed together. They often also have profiles available, which are variants of the module, e.g. client, server, common, or devel.

dnf module list php
dnf module install php:7.4/devel
dnf module reset php

firewall-cmd

Command-line frontend to firewalld, the default firewall service in Red Hat distributions, which in turn drives the kernel's netfilter.

firewall-cmd --state # "running"

Firewalld has a runtime configuration and a saved, persistent configuration. Commands act on the runtime configuration only, unless the persistent configuration is targeted with --permanent.

The runtime configuration can be saved with this command, which obviates the need to execute every change twice.

firewall-cmd --runtime-to-permanent

Alternatively, the persistent configuration can be loaded into memory:

firewall-cmd --reload

Display firewall rules
firewall-cmd --list-all --permanent

Firewalld uses zones to define the level of trust for network connections. A connection can only be part of one zone, but a zone can be used for many network connections. Builtin zones have XML-format configs found in /usr/lib/firewalld/zones.

firewall-cmd --get-active-zones     # Display active zones along with interfaces
firewall-cmd --info-zone=public     # Inspect zone
firewall-cmd --new-zone=testlab     # Create new zone

Firewalld rules are generally managed through builtin services. These bundle network settings together for well-known applications like SSH, etc. Builtin services are also XML-format configs found in /usr/lib/firewalld/services.

Services
firewall-cmd --list-services
firewall-cmd --add-service=http
firewall-cmd --remove-service=http

Firewalld's config file is at /etc/firewalld/firewalld.conf

/etc/firewalld/firewalld.conf
AllowZoneDrifting=no

Since RHEL 8, firewalld's backend has been changed to nftables.

/etc/firewalld/firewalld.conf
FirewallBackend=nftables

httpd

The Apache web server daemon is named httpd in RHEL and other RPM-based distributions. HTML content is served from /var/www/html by default, but this can be changed by modifying the DocumentRoot directive in the Apache config located at /etc/httpd/conf/httpd.conf.

/etc/httpd/conf/httpd.conf
DocumentRoot "/web"
# ...
<Directory "/web">

Containers based on the upstream httpd image serve content from /usr/local/apache2/htdocs, so bind mounts go there instead.

Users can also serve files from their home directories, by default from a directory named public_html.
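
A sketch of the relevant mod_userdir settings (on RHEL the stock file is /etc/httpd/conf.d/userdir.conf, where UserDir is disabled by default); on SELinux systems the httpd_enable_homedirs boolean must also be on, and the home directory needs to be traversable by the apache user:

/etc/httpd/conf.d/userdir.conf
UserDir public_html

<Directory "/home/*/public_html">
    Require all granted
</Directory>

setsebool -P httpd_enable_homedirs on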

podman

On RHEL, podman can be installed as a package or as part of a module

dnf module install container-tools

With few exceptions, podman exposes a command-line API that closely imitates that of Docker.

Arch Linux

On Arch, /etc/subuid and /etc/subgid have to be set. These are colon-delimited files that define the ranges for namespaced UIDs and GIDs to be used by each user. Conventionally, these ranges begin at 100,000 (for the first, primary user) and contain 65,536 possible values.

terry:100000:65536
alice:165536:65536

Then podman has to be migrated

podman system migrate

Podman supports pulling from various repos using aliases that are defined in /etc/containers/registries.conf.d. RHEL and derivative distributions support additional aliases, some of which reference images that require a login.

For example, Red Hat offers a Python 2.7 runtime from the RHSCL (Red Hat Software Collections) repository on registry.access.redhat.com, which does not require authentication. However, Python 3.8 is only available from registry.redhat.io, which does. Interestingly, other Python runtimes are available from the ubi7 and ubi8 repos from unauthenticated registries.

Container images are stored in ~/.local/share/containers/storage.

podman pull rhscl/httpd-24-rhel7 # (1)

  1. Alias to registry.access.redhat.com/rhscl/httpd-24-rhel7

The Z option is necessary on SELinux systems (like RHEL and derivatives) and tells Podman to label the content with a private unshared label. On systems running SELinux, rootless containers must be explicitly allowed to access bind mounts. Containerized processes are not allowed to access files without a SELinux label.

podman run -d -v=/home/jasper/notes/site:/usr/share/nginx/html:Z -p=8080:80 --name=notes nginx
podman run -d -v=/home/jasper/notes/site:/usr/local/apache2/htdocs:Z -p=8080:80 --name=notes httpd-24

Mapped ports can be displayed

podman port -a

Output a systemd unit file for a container (printed to STDOUT unless --files is given, in which case it is written to a file in the current directory)

podman generate systemd notes \
    --restart-policy=always   \
    --name                    \ # (3)
    --files                   \ # (2)
    --new                       # (1)

  1. Yield unit files that do not expect containers and pods to exist but rather create them based on their configuration files.
  2. Generate a file with a name beginning with the prefix (which can be set with --container-prefix or --pod-prefix) and followed by the ID or name (if --name is also specified)
  3. In conjunction with --files, name the service file after the container and not the ID number.

rpm

Query the local RPM database for information on an installed package

rpm -qi $PACKAGE # --query --info

Upgrade or install a package, with progress bars

rpm -Uvh $PACKAGE # --upgrade --verbose --hash

Display version of Fedora

rpm -E %fedora

Import a public key into the RPM keyring

rpm --import "https://build.opensuse.org/projects/home:manuelschneid3r/public_key"

rpmkeys

Manage RPM keyrings

Import a public key

rpmkeys --import $PUBKEY

vdo

Manage kernel VDO devices and related configuration information.

vdo create --name=storage --device=/dev/xvdb --vdoLogicalSize=10G

vdostats

Display configuration and statistics of VDO volumes

vdostats --human-readable

Glossary

CentOS

A community distribution of Linux that was created by Gregory Kurtzer in 2004 and acquired by Red Hat in 2014.

It has traditionally been considered downstream to RHEL, incorporating changes to RHEL after a delay of several months. In fact, it is a rebuild of the publicly available source RPMs (SRPMs) of RHEL packages, which historically allowed CentOS maintainers to simply package and ship them rebranded.

For years, CentOS was the distribution of choice for experienced Linux administrators who did not feel the need to pay for Red Hat's support. In December 2020, Red Hat announced that CentOS 8 support would end at the end of 2021 (rather than 2029), while CentOS 7 would continue to be supported until 2024. This represented a shift in focus from CentOS Linux to CentOS Stream and a change from a fixed-release (or "stable point release") to a rolling-release distribution model.

CentOS Stream was announced in September 2019 as a distribution of CentOS maintained on a model previously misidentified as rolling-release but now described as "continuously delivered", organized into Streams. CentOS Stream originated in an effort to get more community participation in development of RHEL, rather than merely passive consumption.

Fedora

Fedora is a community distribution supported by Red Hat launched as "Fedora Core" in 2003. It has traditionally been considered upstream to RHEL, serving as a testing ground for new features and improvements.

Fedora CoreOS is a Fedora edition built specifically for running containerized workloads securely and at scale. Because containers can be deployed across many nodes for redundancy, the system can be updated and rebooted automatically without affecting uptime. CoreOS systems are meant to be immutable infrastructure, meaning they are only configured through the provisioning process and not modified in-place. All systems start with a generic OS image, but on first boot it uses a system called Ignition to read an Ignition config (which is converted from a Fedora CoreOS Config file) from the cloud or a remote URL, by which it provisions itself, creating disk partitions, file systems, users, etc.