
Ad-hoc Notifications for Systemd Services

We have recently migrated some of our production servers to a new Linux distribution that uses systemd. I wanted to get an email every time there was an error in the httpd service. I am sure there is a proper process and a fancier way to do this, but the following worked like a charm for getting a notification on my phone:

#!/usr/bin/env bash
 
# Depends on the following tools:
# - jq
# - mailx
 
sender_address="sender@mydomain.com"
to_address="addressee@mydomain.com"
unit_to_follow=httpd
 
set -e
set -u
 
# journalctl follows the unit's journal and emits one JSON object per line
# for every entry with priority 1 (alert) to 3 (err)
while read -r log; do
        message=$(echo "$log" | jq -r ".MESSAGE")
        timestamp=$(echo "$log" | jq -r ".__REALTIME_TIMESTAMP")
        unit=$(echo "$log" | jq -r "._SYSTEMD_UNIT")
        # journald timestamps are microseconds since the epoch
        timestamp_in_seconds=$(( timestamp / 1000000 ))
        human_readable_timestamp=$(date -d "@${timestamp_in_seconds}")
        echo "Sending mail ${human_readable_timestamp}"
        mailx -s "A warning from ${unit}" -r "${sender_address}" "${to_address}" <<HERE
${human_readable_timestamp}
${message}
HERE
 
done < <(journalctl -f -u "${unit_to_follow}" --priority 1..3 -o json)

All I did was run this using nohup.
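
For example like this (journal-mailer.sh being just the name I assume the script was saved under):

nohup ./journal-mailer.sh >> journal-mailer.log 2>&1 &

The trailing & sends it to the background, and nohup keeps it alive after the shell exits.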


Tabular Test Reports (not only) for SBT

With any sizable project that takes testing seriously, there arises a need to deal with two problems: test stability and test performance. These are crucial because if tests fail randomly, people lose trust in the test suite and either don’t run it or ignore real failures. And if the tests take too long to run, a culture of not running them for “minor changes” will emerge.

In both cases it is interesting to look at the tests over time, i.e. to analyse old and new test behaviour. In the Java and Scala worlds, most build tools write the results for each build into XML files, usually one per test suite. An example would look like this:

[xml-test-result: a JUnit-style XML test report, one file per test suite]

While this has the advantage of containing a lot of information for troubleshooting (though stdout and stderr are problematic when tests execute concurrently), it is not so easy to extract information about particular test cases. It is especially difficult to answer questions like “What are the 5 slowest test cases?” and “How did the duration of test XYZ change over time?”. Historically I tried to deal with this using tools like xqilla, which brings the power of XQuery to the command line, but it always felt a bit backwards.

So instead I thought I should change the format these reports are written in.
My first hunch was that the important information about each test case could go into one row of a CSV-style table file. As I am currently mostly using SBT, I started writing a tabular test reporter plugin.

To allow the simple use of tools like awk, sort and uniq, I decided to go for a plain format where fields are separated by whitespace, even though that meant replacing whitespace in the test names with underscores.

Now the example from above renders to something like this:

2015-02-23T00:30:40 SUCCESS    0.014 ExampleSpec should_pass
2015-02-23T00:30:40 FAILURE    0.023 ExampleSpec failure_should_be_reported "[A]" was not equal to "[B]"
2015-02-23T00:30:40 FAILURE      0.0 ExampleSpec errors_should_be_reported My error
2015-02-23T00:30:40 SUCCESS    2.001 ExampleSpec test_should_take_approximately_2_seconds
2015-02-23T00:30:40 SUCCESS    0.501 ExampleSpec test_should_take_approximately_0.5_seconds
2015-02-23T00:30:40 IGNORED      0.0 ExampleSpec this_should_be_ignored
2015-02-23T00:30:40 SUCCESS    0.406 AnotherSpec this_should_take_more_time
2015-02-23T00:30:40 SUCCESS      0.0 AnotherSpec a_rather_quick_test
2015-02-23T00:30:40 FAILURE    0.015 AnotherSpec i_am_flaky 0.4044568395582899 was not less than 0.3

Finding the three test cases that take the most time is a trivial exercise using sort:

cat target/test-results-latest.txt \
    | sort --numeric --reverse --key=3 \
    | head -n 3

Which yields the following:

2015-02-23T00:30:40 SUCCESS    2.001 ExampleSpec test_should_take_approximately_2_seconds
2015-02-23T00:30:40 SUCCESS    0.501 ExampleSpec test_should_take_approximately_0.5_seconds
2015-02-23T00:30:40 SUCCESS    0.406 AnotherSpec this_should_take_more_time

By having multiple files around we can now also analyse the behaviour of a specific test case over time:

find target/test-reports/ -name "*.txt" \
    | xargs cat \
    | grep "AnotherSpec this_should_take_more_time"

Which will produce the following report:

2015-02-22T09:22:45 SUCCESS    0.651 AnotherSpec this_should_take_more_time
2015-02-22T09:22:59 SUCCESS    0.609 AnotherSpec this_should_take_more_time
2015-02-22T09:23:10 SUCCESS     0.69 AnotherSpec this_should_take_more_time
2015-02-22T09:24:27 SUCCESS    0.498 AnotherSpec this_should_take_more_time
2015-02-22T09:24:49 SUCCESS    0.723 AnotherSpec this_should_take_more_time
2015-02-22T09:25:01 SUCCESS     0.51 AnotherSpec this_should_take_more_time
2015-02-22T09:28:38 SUCCESS    0.306 AnotherSpec this_should_take_more_time
2015-02-22T09:38:27 SUCCESS    0.568 AnotherSpec this_should_take_more_time
2015-02-22T09:47:17 SUCCESS    0.558 AnotherSpec this_should_take_more_time
2015-02-22T09:47:44 SUCCESS    0.884 AnotherSpec this_should_take_more_time
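
Because the duration is simply the third whitespace-separated field, awk can aggregate it as well, for instance to compute the average duration of that test case over all recorded runs:

find target/test-reports/ -name "*.txt" \
    | xargs cat \
    | grep "AnotherSpec this_should_take_more_time" \
    | awk '{ sum += $3; count++ } END { if (count > 0) print sum / count }'

For the ten runs above this prints just under 0.6 seconds.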

Analysing non-deterministic test outcomes is another interesting use case:

find target/test-reports/ -name "*.txt" \
    | xargs cat \
    | awk '{print $4, $5, $2}'  \
    | sort \
    | uniq -c

This produces the following output:

  10 AnotherSpec a_rather_quick_test SUCCESS
   6 AnotherSpec i_am_flaky FAILURE
   4 AnotherSpec i_am_flaky SUCCESS
  10 AnotherSpec this_should_take_more_time SUCCESS
  10 ExampleSpec errors_should_be_reported FAILURE
  10 ExampleSpec failure_should_be_reported FAILURE
  10 ExampleSpec should_pass SUCCESS
  10 ExampleSpec test_should_take_approximately_0.5_seconds SUCCESS
  10 ExampleSpec test_should_take_approximately_2_seconds SUCCESS
  10 ExampleSpec this_should_be_ignored IGNORED

It becomes clear that the i_am_flaky test case is not deterministic: out of 10 runs it failed 6 times and succeeded only 4 times.
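
This check can even be automated: after reducing the data to unique suite/test/outcome combinations, any test case that still appears on more than one line must have had more than one distinct outcome. A sketch building on the pipeline above:

find target/test-reports/ -name "*.txt" \
    | xargs cat \
    | awk '{print $4, $5, $2}' \
    | sort -u \
    | awk '{print $1, $2}' \
    | uniq -d

For the data above this prints just AnotherSpec i_am_flaky.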

Harnessing the power of Unix to analyse test results helps to debug the most common problems with automated tests. And because everything is easily scriptable, ad-hoc analyses can be turned into proper reports if they prove useful. I would love to see this kind of reporting in other build tools as well.

If you are using SBT, you can give the tabular-test-reporter a go right now by simply adding this to your plugins.sbt:

addSbtPlugin("org.programmiersportgruppe.sbt" %% "tabulartestreporter" % "1.4.1")

I have only been using this for a couple of days myself, so your feedback is very welcome!


A Tool for Copying HTML to the Clipboard

On my Mac I regularly use pbcopy and pbpaste to interact with the clipboard from the command line. Some utilities output HTML-formatted text; if it is piped into pbcopy, it is transferred as plain text, which means that pasting into Word or Gmail yields raw HTML source code. As both of these tools support rich text, it should be possible to paste the text as formatted text instead.

To this end I wrote a little MacRuby script that reads from STDIN and writes to the clipboard while declaring the content type correctly. That way I can take HTML generated by some utility and paste it into Word with the formatting preserved.

The script depends on macruby being available on the PATH and looks like this:

#!/usr/bin/env macruby
framework 'Cocoa'
 
# Puts the given string on the general pasteboard, declared as HTML
def pbcopy(string)
    pasteBoard = NSPasteboard.generalPasteboard
    pasteBoard.declareTypes([NSHTMLPboardType], owner: nil)
    pasteBoard.setString(string, forType: NSHTMLPboardType)
end
 
s = STDIN.read
 
pbcopy(s)
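
For the invocation below to work, the script has to be saved as copy-html somewhere on the PATH and be made executable, for example (assuming it lives in ~/bin):

chmod +x ~/bin/copy-html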

The following incantation puts the contents of a Markdown file into the clipboard as rich text:

pandoc readme.md | copy-html

Now pasting in gmail gives me a nicely formatted mail.

I would like to achieve a similar thing under Linux/X11, but I haven’t managed to so far. Perhaps someone has an idea.
Update: A similar effect can be achieved with osascript as shown here.


Defrustrating the ThoughtWorks Go User Interface

At work we are currently using Go as our build server. While it has excellent features for creating build and deployment pipelines, the user interface is somewhat lacking. However, it is a reasonably hackable web application. So my colleagues Greg and Ben went ahead and wrote a userscript for Firefox/Chrome that adds links from the build result pages to the corresponding config entries, and that colourises the build console output by parsing ANSI escape sequences.

It is called go-defrustrator and is available on GitHub.


Bash-Based Decision Support Systems

It is a well-known fact that decision making is tiring. One of the more difficult decisions our team faces every day is where to go for lunch. To avoid post-lunch decision fatigue, we started automating the process.

Here is the first version of the script, using the venerable rl utility:

rl -c 1 << HERE | say
Papa Pane
Japanese
Soup
Canteen
Honigmond
HERE
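
If rl is not installed, shuf from GNU coreutils can stand in for it in the same pipeline:

shuf -n 1 << HERE | say
Papa Pane
Japanese
Soup
Canteen
Honigmond
HERE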

In this case we had to generate the candidates manually. In a lot of decision situations that is acceptable. However, there are also situations where the machine can help generate the candidates. For our lunch problem we devised the following solution:

curl -s \
      --data-urlencode "cat=eat-drink" \
      --data-urlencode "in=52.5304,13.3864;r=640" \
      --data-urlencode "size=15" \
      --data-urlencode "pretty" \
      --data-urlencode "app_code=NYKC67ShPhQwqaydGIW4yg" \
      --data-urlencode "app_id=demo_qCG24t50dHOwrLQ" \
      --get 'http://demo.places.nlp.nokia.com/places/v1/discover/explore' \
      | jsed --raw 'function(r)
         r.results.items.map(function(i)
             i.title + " (distance " + i.distance.toString() + "m)"
         ).join("\n")' \
      | rl -c 1

This would yield something like this:

Weinbar Rutz (distance 252m)

It uses the Nokia RESTful Places API to find places within 640m around our office. Conveniently, its playground environment already generates a curl command for us. We then pipe the result through jsed to extract the important information from the JSON response, before we task rl with making the actual decision for us.
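
Incidentally, for those who don’t have jsed at hand, jq (which I already used in the systemd notification script above) can do the same extraction; a rough equivalent of the jsed stage would be:

jq -r '.results.items[] | "\(.title) (distance \(.distance)m)"'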


Cleaning up your Working Copy

It is a common step in build scripts to clean up the local working copy, both to ensure that no potentially stale artifacts from previous builds are used and that all files needed by the build have actually been checked in.

Git conveniently allows deletion of all ignored files, as well as files that are neither staged nor committed, like this:

#!/usr/bin/env bash
# Removes everything that is not checked in:
# -f forces the deletion
# -x also removes ignored files
# -d also removes untracked directories
 
git clean -f -x -d
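
When experimenting with this, git’s dry-run flag shows what would be deleted without actually removing anything:

git clean -n -x -d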

This is good enough for a build server, but on your personal working copy you don’t want to inadvertently delete work you haven’t staged yet. So a slightly more involved cleanup procedure can be used instead: it fails the cleanup if there are unstaged files that are not ignored, and lists those files. Otherwise it just deletes the ignored files.

#!/usr/bin/env bash
# Removes everything that is not checked in, but refuses to run if that
# would delete files that are neither tracked nor ignored
 
set -e
untracked=$(git ls-files --other --exclude-standard)
if [ "$untracked" ]; then
    echo "These files are neither tracked nor ignored:"
    echo "$untracked"
    exit 1
fi >&2
 
git clean -f -x -d