Categories
DevOps

Start using GitHub Dependabot

GitHub bought a service called Dependabot a while back and is now integrating it into the ecosystem as a GitHub Application. This allows GitHub users to automate dependency management and get alerted when a security-related update has been found. For now, the service is still in beta but can be added to all service plans.

Let's start simple: creating .github/dependabot.yml with the content below tells Dependabot to scan all your GitHub workflows daily for GitHub Actions that have a newer release available. For each outdated action it will also create a pull request that can be merged once approved.

---
version: 2
updates:
  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: daily
      time: "04:00"
    open-pull-requests-limit: 10

A daily scan, limited here to 10 open pull requests, seems fine, but if you have many repositories to maintain this task can become daunting. Luckily, Dependabot is aware of pull requests that are still open and will update them when a newer dependency version is found, so a weekly or even monthly scan can be a better fit for your workflow. The example below runs every Friday and gives you an idea of what your work for the next week will be.

---
version: 2
updates:
  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: weekly
      day: friday
    open-pull-requests-limit: 10

Dependabot can maintain dependencies for many ecosystems, and the example below is one I use for my development containers in VSCode and GitHub Codespaces. It scans the Dockerfile to see if the base image is outdated, and it also scans the Python dependencies for any known updates.

---
version: 2
updates:
  - package-ecosystem: docker
    directory: "/"
    schedule:
      interval: weekly
      day: friday
    open-pull-requests-limit: 10

  - package-ecosystem: pip
    directory: "/"
    schedule:
      interval: weekly
      day: friday
    open-pull-requests-limit: 10

  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: weekly
      day: friday
    open-pull-requests-limit: 10

These examples are only a small introduction to Dependabot; many more options and package ecosystems are available. But for most people, this is enough to get started before thinking about a more complex, Semantic Versioning-oriented update strategy.
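
As a small taste of such a strategy, the snippet below is only a sketch: it keeps the weekly pip scan but uses the ignore option to hold back major version bumps. The ignore block and the version-update:semver-major update type come from the Dependabot v2 configuration format; the wildcard dependency name is just an illustration.

---
version: 2
updates:
  - package-ecosystem: pip
    directory: "/"
    schedule:
      interval: weekly
      day: friday
    open-pull-requests-limit: 10
    # only propose minor and patch updates, hold back major version bumps
    ignore:
      - dependency-name: "*"
        update-types: ["version-update:semver-major"]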

Categories
Random

Archiving YouTube

YouTube is one of the biggest video streaming sites on the Internet and you can find a video for basically everything on it. But in some cases a video can no longer be found, has been taken down, or has been set to private for some reason. Having a local copy for personal or legal purposes can be useful, but by default YouTube only allows creators to export their own content.

With the tool youtube-dl, you can download the audio and/or video, and also subtitles in the languages you desire. The example below downloads the audio and video, plus subtitles in English and Dutch. It also limits the frame size to 1920×1080 and stores everything in an MP4 container.

$ youtube-dl --embed-subs --embed-thumbnail \
    --add-metadata --write-info-json \
    --sub-lang en,nl --write-auto-sub \
    -o '%(title)s.%(ext)s' \
    -f 'bestvideo[ext=mp4,height<=1080]+bestaudio[ext=m4a]/bestvideo+bestaudio' \
    --merge-output-format mp4 \
    'https://www.youtube.com/watch?v=lepYkDZ62OY'

More options are available, but this configuration seems to work for a lot of videos and can also be used to download everything in a playlist automatically. The latter can be done by simply feeding youtube-dl the URL of the playlist instead of the video, and the tool will try to download every video in the playlist.
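
As a sketch, the same options can be reused for a playlist; the playlist ID below is just a placeholder, and the %(playlist_index)s field in the output template keeps the downloaded files in playlist order.

# the playlist ID below is a placeholder
$ youtube-dl --embed-subs --embed-thumbnail \
    --add-metadata --write-info-json \
    --sub-lang en,nl --write-auto-sub \
    -o '%(playlist_index)s - %(title)s.%(ext)s' \
    -f 'bestvideo[ext=mp4,height<=1080]+bestaudio[ext=m4a]/bestvideo+bestaudio' \
    --merge-output-format mp4 \
    'https://www.youtube.com/playlist?list=PLxxxxxxxx'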

Keep in mind that this is only a short example and more options exist. Also, this should be used for personal purposes only, as these actions may violate terms of service and/or copyright law in your region.

Categories
System Administration

Native exFAT support on Fedora 32

A lot has changed since 2018, when exFAT was kept out of Fedora due to patent issues and a third-party FUSE driver had to be used. Until recently the GPLv2-based driver from Microsoft wasn't enabled in the kernel, as it was based on an older specification and wasn't fully functional for everyday use.

$ grep EXFAT /boot/config-`uname -r`
# CONFIG_EXFAT_FS is not set

Fedora 32 recently received an upgrade to kernel 5.7, and with that the native exFAT driver was enabled at compile time. The driver received a lot of updates from Samsung to work correctly with the latest specification.

# DOS/FAT/EXFAT/NT Filesystems
CONFIG_EXFAT_FS=m
CONFIG_EXFAT_DEFAULT_IOCHARSET="utf8"
# end of DOS/FAT/EXFAT/NT Filesystems

When an SD card is now plugged into the machine, the kernel module is loaded and the filesystem is mounted by the kernel without the need for a userland driver.

$ lsmod | grep fat
exfat                  81920  1
vfat                   20480  1
fat                    81920  1 vfat
$ mount | grep fat
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
/dev/mmcblk0p1 on /run/media/user/disk type exfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,iocharset=utf8,errors=remount-ro,uhelper=udisks2)

The userland tools may come with Fedora 33, but until then the exfat-utils package from RPM Fusion still needs to be installed.
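
If the RPM Fusion free repository isn't enabled yet, the sketch below roughly follows RPM Fusion's own instructions to add it and install the package; verify the release URL against their documentation before running it.

$ sudo dnf install \
    https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf install exfat-utils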

Categories
System Administration

Docker on Fedora 31 and 32

For “Developing inside a Container” with Visual Studio Code, one of the requirements is to use Docker Community Edition, as the version of Docker that ships with Fedora is too old and misses certain features. Also, the new Docker alternative Podman from Red Hat isn't supported by Visual Studio Code.
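
Installing Docker CE itself roughly follows Docker's upstream documentation for Fedora: add the docker-ce repository and install the packages. The exact package list below may change over time, so check the install guide.

$ sudo dnf -y install dnf-plugins-core
$ sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
$ sudo dnf install docker-ce docker-ce-cli containerd.io
$ sudo systemctl enable --now docker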

After installing Docker CE on Fedora 31, cgroups version 1 needs to be enabled again, as Fedora switched over to cgroups version 2 while Docker still depends on version 1. With the commands below cgroups version 1 can be re-enabled; this requires a reboot of the system.

$ sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
$ sudo systemctl reboot

With the upgrade to Fedora 32 something interesting happens, as firewalld switches from iptables to nftables as the new way to do firewalling on Linux. This basically stops all traffic on the docker0 network and made Molecule fail to build container images for its tests. With a simple test as in the example below, the broken situation can be confirmed.

$ docker run busybox nslookup google.com
 ;; connection timed out; no servers could be reached

One solution is to put containers directly into the host network, but this is unwise as it exposes the containers to the network and makes them directly reachable by others. Another solution that requires fewer changes is to assign the docker0 interface to the trusted zone within firewalld.

$ sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
$ sudo firewall-cmd --reload

Running the test case again now gives the correct result, as the container can once more communicate via the docker0 interface with the assigned name server.

$ docker run busybox nslookup google.com
 Server:         192.168.178.1
 Address:        192.168.178.1:53
 
 Non-authoritative answer:
 Name:   google.com
 Address: 172.217.19.206

While this solves the problems with Docker for now, it is good to know that these changes should be temporary: Docker needs to support cgroups version 2, as support for version 1 may be dropped in the future. Secondly, firewalld needs to get proper nftables support, as the migration from iptables currently isn't as smooth as it should be.
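
Once Docker gains cgroups version 2 support, the grubby workaround above can be reverted. A quick way to check which cgroup hierarchy is currently active is to look at the filesystem type mounted on /sys/fs/cgroup: with the workaround in place it should report tmpfs (the version 1 layout), while a pure cgroups version 2 system reports cgroup2fs.

$ stat -fc %T /sys/fs/cgroup/
tmpfs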

Categories
System Administration

EnvironmentFiles used by systemd

In a previous posting, Environment variables set by systemd, variables were set directly within the systemd unit file. This is fine for a small number of modifications, but in some cases these environment variables are provided by another package on the system or need to be the same for multiple services.

To make this example easier, we have modified our Python application to print all environment variables that are set.

#!/usr/bin/env python3

import os

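# print every environment variable the process inherited, one per line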
for param in os.environ.keys():
    print("%20s %s" % (param, os.environ[param]))

We create the first environment file /usr/local/env/file1 with the content below to assign string values to variables. Just as in the systemd unit file, no string interpolation is done; only lines containing an equal sign are processed, and lines starting with a hash sign are ignored.

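# lines starting with a hash sign are ignored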
FVAR1="test1"
FVAR2="test2"

We also create a second environment file /usr/local/env/file2 with the content below. Note that the variable FVAR1 is also declared in this environment file.

FVAR1="TEST1"
FVAR3="Test3"

To use the environment files we need to declare them in the systemd unit file below. The line for file1 shows that we require the file to be present, otherwise the service will fail; for file2 the filename is preceded by a hyphen to indicate to systemd that the file is optional and no error should be generated if it is absent.

[Unit]
Description=App Service

[Service]
Type=simple
EnvironmentFile=/usr/local/env/file1
EnvironmentFile=-/usr/local/env/file2
Environment="VAR0=hello world"
ExecStart=/usr/local/bin/app

After restarting the application with systemd, all the environment variables that were set are shown in the system journal. The most interesting variable is FVAR1: we declared it in both files, and we see that the value set in file1 was replaced by the value from file2, which was processed later.

$ sudo systemctl daemon-reload
$ sudo systemctl restart app.service
$ sudo journalctl -u app
Aug 12 09:43:35 server.example.org systemd[1]: Started App Service.
Aug 12 09:43:35 server.example.org app[4483]: LANG en_US.UTF-8
Aug 12 09:43:35 server.example.org app[4483]: VAR0 hello world
Aug 12 09:43:35 server.example.org app[4483]: FVAR2 test2
Aug 12 09:43:35 server.example.org app[4483]: FVAR3 Test3
Aug 12 09:43:35 server.example.org app[4483]: FVAR1 TEST1
Aug 12 09:43:35 server.example.org app[4483]: PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin

While the purpose may not be immediately clear to everyone, the most common use-case in enterprises is managing environment variables that need to be set for, for example, Oracle. A lot of developers and engineers struggle with why ORACLE_HOME isn't set while /etc/profile.d/oraenv.sh is present and works fine on other platforms like IBM AIX; scripts in /etc/profile.d are only sourced by login shells, so a systemd service never sees them, and an environment file fills that gap.
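
As a sketch (the path, Oracle home, and SID below are made up for this example), such variables could be collected in one environment file and pulled into every unit that needs them with an optional EnvironmentFile line.

# /usr/local/env/oracle -- hypothetical path and values
ORACLE_HOME="/u01/app/oracle/product/19.0.0/dbhome_1"
ORACLE_SID="ORCL"

# referenced from each service's unit file
[Service]
EnvironmentFile=-/usr/local/env/oracle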