Using explicit SSH authentication methods

For many, SSH is a magic sauce that gives access to a server and transfers files between servers. But when things go wrong, this magic sauce becomes a problem. Let's start with an example of things going wrong and how to debug it. First we add the option -v to our command to connect to another server, to get some basic debug information about the SSH handshake up to the point where the user has to authenticate.

$ ssh -v user@host.example.org
...
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: password
user@host.example.org's password:

Just before the SSH client prompts for the user's password, two interesting debug lines are shown. The first line lists the authentication methods we can use, and the next line shows that our client selected the password method, as we don't have any other methods, like publickey, configured in our SSH client. So we manually disable public key authentication and set the preferred authentication method to keyboard-interactive.

$ ssh -v -o PreferredAuthentications=keyboard-interactive -o PubkeyAuthentication=no user@host.example.org
...
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: No more authentication methods to try.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

We now get a permission denied, as our client doesn't have a matching set of authentication methods. Over a decade ago some commercial SSH servers required keyboard-interactive as the authentication method, because the client then has to ask the user to type in the password instead of reading it from a password file, as was allowed with the password authentication method. A lot of SSH clients started to ignore this convention, but some enterprise environments still depend on it. If we add password to the list of preferred authentication methods, the password prompt is offered again.

$ ssh -o PreferredAuthentications=keyboard-interactive,password -o PubkeyAuthentication=no user@host.example.org
user@host.example.org's password:

This method can also be used to temporarily disable public key authentication without changing any SSH configuration, for example to test whether the account still works correctly or whether the password of the target account is still valid.
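
If a specific host always needs this behaviour, the same options can also be set in ~/.ssh/config instead of on the command line. A minimal sketch, using the host from the examples above:

Host host.example.org
    PreferredAuthentications keyboard-interactive,password
    PubkeyAuthentication no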

Massive file update with sed

Recently I generated kickstart files for a virtual environment where people could experiment and virtual machines could be rebuilt quickly. Sadly a typo slipped into the generated files that made the anaconda installer stop. Every kickstart file could of course be corrected by hand, but one sed command can also correct the typo in all files in one go.

$ sed -i 's/namesever/nameserver/' *.ks
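
When in doubt you can, for example, first list the affected files with grep and let sed keep a backup copy of every file it modifies; the .bak suffix below is just an example:

$ grep -l 'namesever' *.ks
$ sed -i.bak 's/namesever/nameserver/' *.ks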

The Unix toolkit is full of handy tools and options like this, and it pays to get to know your environment, especially when it is your work environment and you're familiar with the Unix philosophy.

Using GitLab to build LaTeX

Generating documents in PDF form is becoming the standard nowadays, but how do you generate them easily when they're mostly free format? One of the goals of the Offensive Security Certified Professional (OSCP) certification is writing a report based on the evidence you find. This is where LaTeX comes into the picture, as you can easily have multiple files with data and one or more TeX files combining them into a proper document. The question then becomes: how do we optimise this pipeline?

The first step is to treat every report as a git repository where you can store and version all data. Running rubber locally solves the problem of quickly creating a PDF from your sources, but wouldn't it be nice if this part could also be automated? Who hasn't made a last-minute change and forgotten to check whether the document still compiles into a PDF? Luckily GitLab CI can also compile LaTeX into a PDF, and the notification when your update breaks the build comes for free.
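
Locally that quick compile could, for example, be done with rubber itself; main.tex is assumed to be the main document, as in the CI example below:

$ rubber --pdf main.tex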

The example .gitlab-ci.yml below for my latex-test repository generates a PDF that is kept for one week; after that you need to generate the document again.

stages:
  - build

compile_pdf:
  stage: build
  image: aergus/latex
  script:
    - latexmk -pdf main.tex
  artifacts:
    expire_in: 1 week
    paths:
      - main.pdf

This example can also be included as part of another project, as compile_pdf is triggered in the build stage of the pipeline. No project has to be shipped without a digital document anymore.
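
Reusing it from another project could, assuming a GitLab version that supports the include keyword and with a hypothetical project path, look like the sketch below in that project's .gitlab-ci.yml:

include:
  - project: 'group/latex-test'
    file: '.gitlab-ci.yml'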

Is CWE-525 still relevant?

During a code upgrade of a web application from Symfony 2.8 to 3.3, it also became time to do some basic tests with Zed Attack Proxy. Most findings were logical and easy to fix, but one was different, and it started with the finding below.

Description:
The AUTOCOMPLETE attribute is not disabled on an HTML FORM/INPUT element containing password type input. Passwords may be stored in browsers and retrieved.
Solution:
Turn off the AUTOCOMPLETE attribute in forms or individual input elements containing password inputs by using AUTOCOMPLETE=’OFF’.

Included was a reference to CWE-525, which describes the risk and how to fix it. It leads back to the login section in the Twig template file below, which doesn't have the proper attributes set on the form and/or input elements.

<form class="navbar-form navbar-right" action="{{ path('login') }}" method="post">
    <div class="form-group">
        <input type="text" id="username" name="_username" placeholder="Email" class="form-control">
    </div>
    <div class="form-group">
        <input type="password" id="password" name="_password" placeholder="Password" class="form-control">
    </div>
    <button type="submit" class="btn btn-success">Sign in</button>
</form>

The Mozilla Developer Network has an article about turning off form autocomplete for sensitive fields that browsers shouldn't cache; storing credit card data, for example, isn't a good idea unless it ends up in a secured storage area. After updating the template file with the right attribute, ZAP doesn't complain anymore, as we now tell the browser not to store the password field.

<form class="navbar-form navbar-right" action="{{ path('login') }}" method="post">
    <div class="form-group">
        <input type="text" id="username" name="_username" placeholder="Email" class="form-control">
    </div>
    <div class="form-group">
        <input type="password" id="password" name="_password" placeholder="Password" class="form-control" autocomplete="off">
    </div>
    <button type="submit" class="btn btn-success">Sign in</button>
</form>

The story doesn't end with adding the right attribute to resolve the finding, because browser makers have moved on: the attribute is now ignored for input elements of type password. The reasoning is that having people store a strong password in a password manager scores better on risk than people remembering and (re)using weak passwords. With this in mind we basically fixed a false positive to make the audit finding green.

For this reason, many modern browsers do not support autocomplete=”off” for login fields:

  • If a site sets autocomplete=”off” for a form, and the form includes username and password input fields, then the browser will still offer to remember this login, and if the user agrees, the browser will autofill those fields the next time the user visits the page.
  • If a site sets autocomplete=”off” for username and password input fields, then the browser will still offer to remember this login, and if the user agrees, the browser will autofill those fields the next time the user visits the page.

Here lies the problem: ZAP generates a finding that a developer has to spend time on, investigating it and then either implementing a solution or explaining why it isn't worth fixing. Security scanners are handy for detecting low-hanging fruit, but they shouldn't become a time sink due to outdated rules, or developers will start ignoring findings or let them slowly starve on their backlog.

Integrity checking for JavaScript

Including JavaScript files from a CDN can be beneficial in many ways: you don't have to ship the code with your application, and caching can be done by the browser or a proxy server. But it also opens the door to injecting untrusted code into a web page, as someone else is hosting the code you rely on. Luckily Firefox, Chrome and Opera already support Subresource Integrity checking on script and link tags. Hopefully both Safari and Edge (or Internet Explorer) will support it soon.

But how does it work? First let's calculate the SHA256 hash of jQuery version 3.2.1 as hosted by CloudFlare; keep in mind to verify this value against the official version offered by jQuery. In this example we download the minified version of jQuery with curl and run it through openssl twice: once to generate the checksum and once to encode the result in base64.

$ curl -s https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js | openssl dgst -sha256 -binary | openssl enc -base64 -A
hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=

Now that we have the hash, we can add the integrity attribute to the script tag; the hash is prefixed with "sha256-" to indicate the algorithm used. From this point forward a browser that supports Subresource Integrity will require the provided hash to match the calculated hash of the downloaded file.

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>

Besides SHA256, the specification also allows SHA384 and SHA512 to be used. The calculation is the same as with SHA256; we only change the algorithm that openssl uses.

$ curl -s https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js | openssl dgst -sha512 -binary | openssl enc -base64 -A
3P8rXCuGJdNZOnUx/03c1jOTnMn3rP63nBip5gOP2qmUh5YAdVAvFZ1E+QLZZbC1rtMrQb+mah3AfYW11RUrWA==

We could put only the SHA512 hash in the attribute, but we can also put multiple hashes in the same attribute by separating them with a space. This leaves a lot of room for proper lifecycle management of hashing algorithms, as you can present multiple hashes while switching to a stronger algorithm instead of doing it big-bang style and hoping for the best.

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4= sha512-3P8rXCuGJdNZOnUx/03c1jOTnMn3rP63nBip5gOP2qmUh5YAdVAvFZ1E+QLZZbC1rtMrQb+mah3AfYW11RUrWA==" crossorigin="anonymous"></script>

The next step is to have a fallback for when the CDN you rely on goes down or serves corrupt files. You could add a second source attribute, as in the example below, that tells the browser to use the Google CDN when CloudFlare has issues serving the correct files.

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js" noncanonical-src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4= sha512-3P8rXCuGJdNZOnUx/03c1jOTnMn3rP63nBip5gOP2qmUh5YAdVAvFZ1E+QLZZbC1rtMrQb+mah3AfYW11RUrWA==" crossorigin="anonymous"></script>

The next step is to get the Content-Security-Policy header correct, but for now only Firefox 49 and higher can act on the require-sri-for directive. This would basically force the browser to only load scripts and style sheets when the SRI checks succeed, but a lot of developers still need to adapt their build pipeline to produce correct hashes and to have monitoring in place to detect problems.
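
As a sketch, such a header requiring SRI for both scripts and style sheets could look like the line below:

Content-Security-Policy: require-sri-for script style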