
## Native exFAT support on Fedora 32

A lot has changed since 2018, when exFAT was kept out of Fedora due to patent issues and a third-party FUSE driver had to be used. Until recently the GPLv2-licensed driver from Microsoft wasn't enabled in the kernel, as it was based on an older specification and wasn't fully functional for everyday use.

$ grep EXFAT /boot/config-$(uname -r)
# CONFIG_EXFAT_FS is not set


Fedora 32 recently received an upgrade to kernel 5.7 and with that, the native exFAT driver was enabled at compile time. The driver received a lot of updates from Samsung to bring it in line with the latest specifications.

# DOS/FAT/EXFAT/NT Filesystems
CONFIG_EXFAT_FS=m
CONFIG_EXFAT_DEFAULT_IOCHARSET="utf8"
# end of DOS/FAT/EXFAT/NT Filesystems


When an SD card is now plugged into the machine, the kernel module is loaded and the filesystem is mounted by the kernel without the need for a userland driver.

$ lsmod | grep fat
exfat                  81920  1
vfat                   20480  1
fat                    81920  1 vfat
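Whether the running kernel has exFAT support registered can also be checked programmatically; a minimal Python sketch that reads /proc/filesystems, the kernel's list of registered filesystem types:

```python
#!/usr/bin/env python3

import os

def kernel_filesystems(path='/proc/filesystems'):
    """Return the filesystem types currently registered with the kernel.

    Each line of /proc/filesystems ends with the filesystem name,
    optionally preceded by a 'nodev' marker."""
    with open(path) as f:
        return [line.split()[-1] for line in f if line.strip()]

if __name__ == '__main__' and os.path.exists('/proc/filesystems'):
    fstypes = kernel_filesystems()
    print('exfat registered' if 'exfat' in fstypes else 'exfat not registered')
```

Note that exfat only shows up in this list after the module has been loaded, for example by plugging in an exFAT-formatted card.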
$ mount | grep fat
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
/dev/mmcblk0p1 on /run/media/user/disk type exfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,iocharset=utf8,errors=remount-ro,uhelper=udisks2)


The userland tools may come with Fedora 33, but until they ship with Fedora the package exfat-utils from RPMFusion still needs to be installed.

## Docker on Fedora 31 and 32

For "Developing inside a Container" with Visual Studio Code, one of the requirements is to use Docker Community Edition, as the version of Docker that ships with Fedora is too old and misses certain features. Also, Podman, the new Docker alternative from Red Hat, isn't supported by Visual Studio Code. After installing Docker CE on Fedora 31, cgroups version 1 needs to be enabled again, as Linux switched over to cgroups version 2 but Docker still depends on version 1. With the commands below cgroups version 1 can be enabled again; this requires a reboot of the system.

$ sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
$ sudo systemctl reboot


Now with the upgrade to Fedora 32 something interesting happens, as firewalld switches from iptables to nftables as the new way to do firewalling on Linux. It basically stops all traffic on the docker0 network, which made Molecule fail to build container images for the tests. With a simple test as in the example below, the broken situation can be confirmed.

$ docker run busybox nslookup google.com
;; connection timed out; no servers could be reached


One of the solutions is to put containers directly into the host network, but this is unwise as it exposes containers to the network and makes them directly reachable by others. Another solution that requires fewer changes is to assign the docker0 interface to the trusted zone within firewalld.

$ sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
$ sudo firewall-cmd --reload
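The zone an interface ended up in can be queried with firewall-cmd --get-zone-of-interface; a small Python wrapper around that command, as a sketch (it assumes firewall-cmd is on the PATH and firewalld is running):

```python
#!/usr/bin/env python3

import subprocess

def zone_of_interface(interface):
    """Ask firewalld which zone an interface is assigned to.

    firewall-cmd prints the zone name on stdout, e.g. 'trusted'."""
    result = subprocess.run(
        ['firewall-cmd', '--get-zone-of-interface=%s' % interface],
        capture_output=True, text=True)
    return result.stdout.strip()

if __name__ == '__main__':
    # After the change above this should report 'trusted' for docker0.
    print(zone_of_interface('docker0'))
```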


Running the test case again now gives back the correct result, as the container can communicate again via the docker0 interface with the assigned name server.

$ docker run busybox nslookup google.com
Server:		192.168.178.1
Address:	192.168.178.1:53

Non-authoritative answer:
Name:	google.com
Address: 172.217.19.206


While this solves the problems with Docker for now, it is good to know that these changes should be temporary. Docker needs to support cgroups version 2, as support for version 1 may be dropped in the future. Secondly, firewalld needs to get proper nftables support, as the migration from iptables currently isn't as smooth as it should be.

## EnvironmentFiles used by systemd

In a previous posting, Environment variables set by systemd, variables were set directly within the systemd unit file. This is fine for a small number of modifications, but in some cases these environment variables are provided by another package on the system or need to be the same for multiple services. We have modified our Python application to print all environment variables that are set, to make this example easier.

#!/usr/bin/env python3

import os

for param in os.environ.keys():
    print("%20s %s" % (param, os.environ[param]))


We create the first environment file /usr/local/env/file1 with the content below to assign string values to variables. Just as in the systemd unit file no string interpolation is done; only lines with an equal sign are processed and everything after a hash sign is ignored.

FVAR1="test1"
FVAR2="test2"


We also create a second environment file /usr/local/env/file2 with the content below. We see immediately that variable FVAR1 is also declared in this environment file.

FVAR1="TEST1"
FVAR3="Test3"


To use the environment files we need to declare them in the systemd unit file below. The line for file1 shows that we require the file to be present, otherwise the service will fail, but for file2 the filename has been preceded by a hyphen to indicate to systemd that the file is optional and no error will be generated if it is absent.
[Unit]
Description=App Service

[Service]
Type=simple
EnvironmentFile=/usr/local/env/file1
EnvironmentFile=-/usr/local/env/file2
Environment="VAR0=hello world"
ExecStart=/usr/local/bin/app


After restarting the application with systemd, all the environment variables that were set are shown in the system journal. The most interesting variable is FVAR1: we declared it in two files earlier, and we see that the value set in file1 was replaced by the value set in file2, which was processed later.

$ sudo systemctl daemon-reload
$ sudo systemctl restart app.service
$ sudo journalctl -u app
Aug 12 09:43:35 server.example.org systemd[1]: Started App Service.
Aug 12 09:43:35 server.example.org app[4483]: LANG en_US.UTF-8
Aug 12 09:43:35 server.example.org app[4483]: VAR0 hello world
Aug 12 09:43:35 server.example.org app[4483]: FVAR2 test2
Aug 12 09:43:35 server.example.org app[4483]: FVAR3 Test3
Aug 12 09:43:35 server.example.org app[4483]: FVAR1 TEST1
Aug 12 09:43:35 server.example.org app[4483]: PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
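The override behaviour seen in the journal can be mimicked with a short Python sketch; the parsing below is deliberately simplified compared to systemd's actual EnvironmentFile rules, but it shows how later files win:

```python
#!/usr/bin/env python3

def parse_env_file(text):
    """Parse KEY=VALUE lines; comment lines starting with '#' and
    lines without an equal sign are skipped, quotes are stripped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, value = line.split('=', 1)
        env[key.strip()] = value.strip().strip('"')
    return env

def merge(*file_texts):
    """Later files override earlier ones, like consecutive
    EnvironmentFile= lines in a unit file."""
    merged = {}
    for text in file_texts:
        merged.update(parse_env_file(text))
    return merged

file1 = 'FVAR1="test1"\nFVAR2="test2"\n'
file2 = 'FVAR1="TEST1"\nFVAR3="Test3"\n'
print(merge(file1, file2))  # FVAR1 takes the value from file2
```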


While the purpose may not be immediately clear to everyone, a common use case in enterprises is managing environment variables that need to be set for Oracle, for example. Many developers and engineers struggle with why ORACLE_HOME isn't being set while /etc/profile.d/oraenv.sh is present and works fine on other platforms like IBM AIX.


## Environment variables set by systemd

Applications sometimes need environment variables to be set to trigger certain behavior, like giving debug output or routing traffic via an HTTP proxy, for example. A common way is to modify the start-stop script, but with systemd on most Linux systems, like Debian and Red Hat based distributions, this can also be set directly within the unit file, and you don't have to export the variables anymore.

The Python script below that we run via systemd checks if environment variable VAR1 has been set and will generate a different output based on that.

#!/usr/bin/env python3

import os

if os.environ.get('VAR1'):
    a = os.environ['VAR1']
else:
    a = 'default'

print(a)


Running the Python script also shows the difference in output: the second command no longer prints the string "default" to the terminal, but the text "test" that we set via the environment variable.

$ ./main.py
default
$ VAR1=test ./main.py
test


Setting the environment variables via systemd is done by adding the attribute Environment to the Service section of the unit file for the service. After a systemctl daemon-reload the environment variable will be set when you start or restart the service.

...
[Service]
Environment="VAR1=hello"
...


If more variables need to be set, then more Environment attributes can be added to the Service section.

...
[Service]
Environment="VAR1=hello"
Environment="VAR2=world"
...
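A service script can then pick up both variables with os.environ.get, falling back to a default when it is run outside systemd; a minimal sketch:

```python
#!/usr/bin/env python3

import os

# Read the variables set via Environment= in the unit file,
# falling back to a default when they are absent (for example
# when the script is run by hand instead of via systemd).
var1 = os.environ.get('VAR1', 'default')
var2 = os.environ.get('VAR2', 'default')

print(var1, var2)
```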


While it may break some human workflows in the beginning, in the long term it is a good step toward following the infrastructure-as-code path, as Ansible could be used for managing these variables. Also, storing these kinds of variables in the same way makes both troubleshooting and collecting settings for an audit easier.


## Connecting to legacy servers with OpenSSH

Phasing out legacy cryptographic algorithms can always be an interesting endeavor: terminating too early breaks stuff, and too late it can lead to a compromise. OpenSSH disabled DSA with version 7.0 in 2015, five years after DSA was found to be compromised and labelled insecure. Normally this shouldn't be a problem with a normal software life cycle, but sometimes you will encounter a legacy box that will not be upgraded because that would break things. This now stops new SSH connections from being set up from upgraded machines.

$ ssh user@server.example.org
Unable to negotiate with server.example.org port 22: no matching host key type found. Their offer: ssh-dss


For an incidental connection from the command line the algorithm can be enabled again to connect to a legacy machine.

$ ssh -o HostKeyAlgorithms=+ssh-dss user@server.example.org


For automated processes, or when scripts can't be modified, the setting can also be placed in $HOME/.ssh/config for the account that depends on this option.

Host server.example.org
HostKeyAlgorithms=+ssh-dss
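If a whole fleet of legacy hosts needs this exception, the stanzas could be generated from an inventory instead of being maintained by hand; a hypothetical sketch (the function name and inventory list are illustrative):

```python
#!/usr/bin/env python3

def legacy_host_stanza(hostname, algorithms=('ssh-dss',)):
    """Build an OpenSSH client config stanza that re-enables
    legacy host key algorithms for a single host."""
    lines = ['Host %s' % hostname]
    lines.append('    HostKeyAlgorithms=+%s' % ','.join(algorithms))
    return '\n'.join(lines)

# Hypothetical inventory of machines that still only offer ssh-dss.
legacy_hosts = ['server.example.org']

print('\n\n'.join(legacy_host_stanza(h) for h in legacy_hosts))
```

Keeping the generated file separate and limited to named hosts also makes it easy to see which exceptions still exist when it is time to clean them up.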


Re-enabling broken algorithms like DSA should only be done for a limited time and scope. In a lot of commercial environments these algorithms aren't allowed to be enabled again at all. Also, in most cases the code for these obsolete algorithms will be removed entirely in a later version, as has already happened with SSL 3.0 and earlier, for example.