
PHP 7.3 and forbidden functions

Last month PHP 7.3.0 was released, and with it a lot of functions and aliases were deprecated that may lead to issues down the road. As Xdebug hasn't been released for PHP 7.3 yet, an automated test with GitLab isn't possible because the build phase for Xdebug fails. Luckily I'm using PHP Code Sniffer, and extending phpcs.xml.dist with the lines below already makes the build fail if any of the forbidden functions are used in the code.

    <!-- Ban some functions -->
    <rule ref="Generic.PHP.ForbiddenFunctions">
        <properties>
            <property name="forbiddenFunctions" type="array">
                <!-- Deprecated Features 7.0, https://secure.php.net/manual/en/migration70.deprecated.php -->
                <element key="ldap_sort" value="null"/>
                <!-- Deprecated Features 7.1, https://secure.php.net/manual/en/migration71.deprecated.php -->
                <!-- Deprecated Features 7.2, https://secure.php.net/manual/en/migration72.deprecated.php -->
                <element key="create_function" value="null"/>
                <element key="each" value="null"/>
                <element key="gmp_random" value="null"/>
                <element key="read_exif_data" value="exif_read_data"/>
                <element key="png2wbmp" value="null"/>
                <element key="jpeg2wbmp" value="null"/>
                <element key="__autoload" value="null"/>
                <!-- Deprecated Features 7.3, https://secure.php.net/manual/en/migration73.deprecated.php -->
                <!-- Searching strings for a non-string needle is deprecated; cast the needle to string or call chr() explicitly -->
                <element key="strpos" value="chr"/>
                <element key="strrpos" value="chr"/>
                <element key="stripos" value="chr"/>
                <element key="strripos" value="chr"/>
                <element key="strstr" value="chr"/>
                <element key="stristr" value="chr"/>
                <element key="strchr" value="chr"/>
                <element key="strrchr" value="chr"/>
                <!-- Strip-Tags Streaming -->
                <element key="fgetss" value="fgets"/>
                <element key="gzgetss" value="gzgets"/>
                <!-- Image Processing and GD -->
                <element key="image2wbmp" value="imagewbmp"/>
                <!-- Multibyte String -->
                <element key="mbregex_encoding" value="mb_regex_encoding"/>
                <element key="mbereg" value="mb_ereg"/>
                <element key="mberegi" value="mb_eregi"/>
                <element key="mbereg_replace" value="mb_ereg_replace"/>
                <element key="mberegi_replace" value="mb_eregi_replace"/>
                <element key="mbsplit" value="mb_split"/>
                <element key="mbereg_match" value="mb_ereg_match"/>
                <element key="mbereg_search" value="mb_ereg_search"/>
                <element key="mbereg_search_pos" value="mb_ereg_search_pos"/>
                <element key="mbereg_search_regs" value="mb_ereg_search_regs"/>
                <element key="mbereg_search_init" value="mb_ereg_search_init"/>
                <element key="mbereg_search_getregs" value="mb_ereg_search_getregs"/>
                <element key="mbereg_search_getpos" value="mb_ereg_search_getpos"/>
                <element key="mbereg_search_setpos" value="mb_ereg_search_setpos"/>
            </property>
        </properties>
    </rule>
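
To see the rule in action, a quick smoke test could look like the lines below (assuming PHP Code Sniffer is installed via Composer and automatically picks up phpcs.xml.dist; demo.php is just a throwaway file for illustration):

$ echo '<?php $sum = create_function("$a, $b", "return $a + $b;");' > demo.php
$ vendor/bin/phpcs demo.php
# phpcs now exits non-zero because create_function() is on the forbidden list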

Hopefully PHP Code Sniffer will be extended to check for deprecated constants as well, but for now all code running on PHP 7.2 can be checked so it also runs smoothly on PHP 7.3 and later.


PowerDNS service to get coordinates for IPv4 addresses

Bert Hubert from PowerDNS made an interesting announcement today: retrieving the coordinates for an IPv4 address with just a DNS query.
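
The announcement was embedded in the original post; the gist is that you reverse the octets of the IPv4 address and query for a TXT record under a dedicated zone. A minimal sketch with dig, where the zone name geo.example.org is purely illustrative and not the actual service endpoint:

# Look up 1.2.3.4: reverse the octets and ask for a TXT record;
# the answer carries the latitude and longitude.
$ dig +short -t TXT 4.3.2.1.geo.example.org
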
Hopefully the country code will also be included, but this is an interesting way of using DNS as a directory service with public data.


Monitoring GitHub for new releases

Big sites like GitHub or GitLab host a lot of projects and see numerous releases a day. And while you can watch a repository on GitHub, you can't easily filter out new releases; at least no such option is easy to find in the interface, and manually checking all the repositories that aren't part of a build process is too much hassle and will fail in the end. This happened to me with highlight.js, which was updated from version 9.11.0 to 9.12.0 months ago without me noticing.

Some solutions people describe, on Stack Overflow for example, parse the HTML of the releases page and use that as a basis for actions to be executed. A quick check and a grep of the output shows that this only yields links to releases, but no structured data we can easily parse.

$ curl -s https://github.com/isagalaev/highlight.js/releases | grep -i releases\/tag
    <a href="/isagalaev/highlight.js/releases/tag/9.12.0">
    <a href="/isagalaev/highlight.js/releases/tag/9.11.0">
    <a href="/isagalaev/highlight.js/releases/tag/9.10.0">
    <a href="/isagalaev/highlight.js/releases/tag/9.9.0">
    <a href="/isagalaev/highlight.js/releases/tag/9.8.0">
    <a href="/isagalaev/highlight.js/releases/tag/9.7.0">
    <a href="/isagalaev/highlight.js/releases/tag/9.6.0">
    <a href="/isagalaev/highlight.js/releases/tag/9.5.0">
    <a href="/isagalaev/highlight.js/releases/tag/9.4.0">
    <a href="/isagalaev/highlight.js/releases/tag/9.3.0">

If we take the same URL and add the extension .atom to it, then GitHub presents the same data in a consumable feed format. Now we have structured data with timestamps, URLs, and descriptions.

$ curl -s https://github.com/isagalaev/highlight.js/releases.atom
<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xml:lang="en-US">
  <id>tag:github.com,2008:https://github.com/isagalaev/highlight.js/releases</id>
  <link type="text/html" rel="alternate" href="https://github.com/isagalaev/highlight.js/releases"/>
  <link type="application/atom+xml" rel="self" href="https://github.com/isagalaev/highlight.js/releases.atom"/>
  <title>Release notes from highlight.js</title>
  <updated>2017-05-31T02:46:46Z</updated>
  <entry>
    <id>tag:github.com,2008:Repository/1213225/9.12.0</id>
    <updated>2017-05-31T02:46:46Z</updated>
    <link rel="alternate" type="text/html" href="/isagalaev/highlight.js/releases/tag/9.12.0"/>
    <title>9.12.0</title>
    <content type="html"><p>Version 9.12.0</p></content>
    <author>
      <name>isagalaev</name>
    </author>
    <media:thumbnail height="30" width="30" url="https://avatars2.githubusercontent.com/u/99931?v=4&s=60"/>
  </entry>
...

This data can be consumed by a custom parser or RSS readers like TT-RSS, but also by platforms like IFTTT to trigger actions such as adding an item to a backlog or posting to a Slack channel.
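
Even a shell one-liner on top of this feed goes a long way; a cron job that diffs the output below against a stored copy would already detect new releases:

$ curl -s https://github.com/isagalaev/highlight.js/releases.atom | grep '<title>'
  <title>Release notes from highlight.js</title>
    <title>9.12.0</title>
    <title>9.11.0</title>
...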


Using GitLab to build LaTeX

Generating documents in PDF form is becoming the standard nowadays, but how do you generate them easily when they're mostly free-format? One of the goals of the Offensive Security Certified Professional (OSCP) certification is writing a report based on the evidence you find. This is where LaTeX comes into the picture, as you can easily have multiple files with data and one or more TeX files combining them into a proper document. The question then becomes: how to optimize this pipeline?

The first step is to treat every report as a git repository where you can store and version all data. Running rubber locally solves the problem of quickly creating a PDF from your sources, but wouldn't it be nice if this part could be automated as well? Who hasn't made a last-minute change and forgotten to run rubber to check that the document still compiles into a PDF? Luckily GitLab CI can also compile LaTeX into a PDF, and the notification that your update broke the build process comes for free.
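
For reference, building locally with rubber is a single command (assuming main.tex is the top-level file, as in the CI example below):

$ rubber --pdf main.tex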

The example .gitlab-ci.yml below for my latex-test repository generates a PDF that is kept for one week; after that, you need to generate the document again.

---
stages:
  - build

compile_pdf:
  stage: build
  image: aergus/latex
  script:
    - latexmk -pdf main.tex
  artifacts:
    expire_in: 1 week
    paths:
      - main.pdf

This example can also be included as part of another project as compile_pdf is triggered in the build phase of the pipeline. No project has to be shipped without a digital document anymore.
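
Once the pipeline has run, the latest artifact can also be fetched directly via the GitLab API instead of through the web interface. A sketch, assuming the project lives on gitlab.com; <project-id> is a placeholder for the numeric project ID, and private projects additionally need a PRIVATE-TOKEN header:

$ curl -s "https://gitlab.com/api/v4/projects/<project-id>/jobs/artifacts/master/raw/main.pdf?job=compile_pdf" -o main.pdf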


Integrity checking for JavaScript

Including JavaScript files from a CDN can be beneficial in many ways, as you don't have to ship the code with your own and caching can be done by the browser or a proxy server. But it also opens the door to untrusted code being injected into a web page, as someone else is hosting the code you rely on. Luckily Firefox, Chrome, and Opera already support Subresource Integrity checking on script and link tags. Hopefully both Safari and Edge (or Internet Explorer) will support it soon.

But how does it work? First, let's calculate the SHA256 hash of jQuery version 3.2.1 as hosted by Cloudflare. Also keep in mind to verify this hash against the official version offered by jQuery. In this example, we download the minified version of jQuery with curl and run it twice through OpenSSL to generate the checksum and encode the result in base64 format.

$ curl -s https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js | openssl dgst -sha256 -binary | openssl enc -base64 -A
hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=

Now that we have the hash, we can add the integrity attribute to the script tag; the hash is prefixed with "sha256-" to indicate the hashing algorithm used. From this point forward, a browser that supports Subresource Integrity will require that the provided hash matches the calculated hash of the downloaded file.

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>

Besides SHA256, the specification also allows SHA384 and SHA512 to be used. The calculation is the same as with SHA256; we only change the algorithm that OpenSSL needs to use.

$ curl -s https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js | openssl dgst -sha512 -binary | openssl enc -base64 -A
3P8rXCuGJdNZOnUx/03c1jOTnMn3rP63nBip5gOP2qmUh5YAdVAvFZ1E+QLZZbC1rtMrQb+mah3AfYW11RUrWA==

We could put only the SHA512 hash in the attribute, but we can also list multiple algorithm results in the same attribute by separating them with a space. This leaves a lot of room for proper lifecycle management of hashing algorithms, as you can present multiple hashes while you switch to a better algorithm instead of doing it big-bang style and hoping for the best.

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4= sha512-3P8rXCuGJdNZOnUx/03c1jOTnMn3rP63nBip5gOP2qmUh5YAdVAvFZ1E+QLZZbC1rtMrQb+mah3AfYW11RUrWA==" crossorigin="anonymous"></script>

The next step is to have a fallback for when the CDN you rely on goes down or serves corrupt files. You could add a second source attribute as in the example below, telling the browser to use the Google CDN when Cloudflare has issues serving the correct files.

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js" noncanonical-src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4= sha512-3P8rXCuGJdNZOnUx/03c1jOTnMn3rP63nBip5gOP2qmUh5YAdVAvFZ1E+QLZZbC1rtMrQb+mah3AfYW11RUrWA==" crossorigin="anonymous"></script>

The next step is to get the Content-Security-Policy header correct, but for now only Firefox 49 and higher can act on the require-sri-for directive. This would basically force the browser to only load scripts and style sheets if the SRI checks are successful, but a lot of developers will need to optimize their build pipelines to produce correct hashes and set up monitoring to detect problems.
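
For completeness, such a header could look like the line below; as require-sri-for is experimental and only part of a draft specification, browsers without support simply ignore it.

Content-Security-Policy: require-sri-for script style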