Sam.Today Blog, by Sam Parkinson (2020-06-16)

Derivations 102 - Learning Nix pt 4: Taking advantage of the fact Nix is a programming language

This guide will build on the previous three guides, and look at creating a wider variety of useful nix packages.

Nix is built around the concept of derivations. A derivation is simply defined as "a build action". It produces one (or sometimes more) output paths in the Nix store.

Basically, a derivation is a pure function that takes some inputs (dependencies, source code, etc.) and makes some output (binaries, assets, etc.). These outputs are referenceable by their unique nix-store path.
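Under the hood, the `mkDerivation` helper we'll meet below wraps the language's built-in `derivation` function. As a minimal, hand-rolled sketch of a single build action (the builder path and script here are illustrative, not from this guide):

```nix
# A raw derivation: one build action producing one $out path.
derivation {
  name = "hello-file";
  system = builtins.currentSystem;
  # The builder is just a program to run; here, a shell one-liner.
  builder = "/bin/sh";
  args = [ "-c" "echo hello > $out" ];
}
```

You'll almost never write this form by hand; it's shown only to make "a derivation is a build action with inputs and an output path" concrete.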

Derivation Examples

It's important to note that literally everything in NixOS is built around derivations:

  • Applications? Of course they are derivations.
  • Configuration files? In NixOS, they are a derivation that takes the nix configuration and outputs an appropriate config file for the application.
  • The system configuration as a whole (/run/current-system)?
sam@vcs ~> ls -lsah /run/current-system
0 lrwxrwxrwx 1 root root 83 Jan 25 13:22 /run/current-system -> /nix/store/wb9fj59cgnjmkndkkngbwxwzj3msqk9c-nixos-system-vcs-17.09.2683.360089b3521

It's a symbolic link to a derivation!

It's derivations all the way down.

If you've followed this series from the beginning, you may have noticed that we've already made some derivations. Our nix-shell scripts are based on a derivation. When packaging a shell script, we also made a derivation.

I think it is easiest to learn how to make a derivation through examples. Most packaging tasks are vaguely similar to packaging tasks done in the past by other people. So this guide will go through examples of using mkDerivation.


Making a derivation manually requires fussing with things like processor architecture, and gives you zero standard build inputs. This is often not necessary. So instead, NixPkgs provides a function, stdenv.mkDerivation, which handles the common patterns.

The only real requirement to use mkDerivation is that you have some folder of source material. This can be a reference to a local folder, or something fetched from the internet by another Nix function. If you have no source, or just one file, consider the "trivial builders" covered in part three of this series.

mkDerivation does a lot of work automatically. It divides the build up into "phases", each of which has a little default behaviour - usually unintrusive, and it can always be overridden. The most important phases are:

  1. unpack: unzips, untars, or copies your source folder to the nix store
  2. patch: applies any patches provided in the patches variable
  3. configure: runs ./configure if it exists
  4. build: runs make if it exists
  5. check: skipped by default
  6. install: runs make install
  7. fixup: automagically fixes up things that don't gel with the nix store; such as incorrect interpreter paths
  8. installCheck: runs make installcheck if it exists and is enabled

You can see all the phases in the docs. But with a bit of practice on the examples below, you'll quickly get a feel for how this works.
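As a sketch of what overriding a phase looks like (the package here is hypothetical, but the phase names are the real hooks):

```nix
with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "phase-override-example";
  src = ./.;

  # Replace the default `make` behaviour with our own shell code
  buildPhase = ''
    echo "building..." > result.txt
  '';

  # Replace the default `make install` behaviour
  installPhase = ''
    mkdir -p $out
    cp result.txt $out/
  '';
}
```

Any phase you don't override keeps its default behaviour.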

Example #1: A static site

Nix makes writing packages really easy; and with NixOps (which we'll learn about later), Nix derivations are automagically built and deployed.

First we need to answer the question of how we would build the static site ourselves. This is a Jekyll site, so you'd run the jekyll command:

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "example-website-content";

  # fetchFromGitHub is a build support function that fetches a GitHub
  # repository and extracts it into a directory; so we can use it.
  # fetchFromGitHub is actually a derivation itself :)
  src = fetchFromGitHub {
    owner = "jekyll";
    repo = "example";
    rev = "5eb1b902ca3bda6f4b50d4cfcdc7bc0097bac4b7";
    sha256 = "1jw35hmgx2gsaj2ad5f9d9ks4yh601wsxwnb17pmb9j02hl3vgdm";
  };
  # the src can also be a local folder, like:
  # src = /home/sam/my-site;

  # This overrides the shell code that is run during the installPhase.
  # By default, this runs `make install`.
  # The install phase will fail if there is no makefile, so it is the
  # best choice to replace with our custom code.
  installPhase = ''
    # Build the site to the $out directory
    export JEKYLL_ENV=production
    ${pkgs.jekyll}/bin/jekyll build --destination $out
  '';
}

Now we can see that this derivation builds the site. If you save it to test.nix, you can trigger a build by running:

> nix-build test.nix

The path printed by nix-build is where $out was in the Nix store. Your path might be a little different: if you are running a different version of NixPkgs, the build inputs will be different.

We can see the site has built successfully by entering that directory:

> ls /nix/store/b8wxbwrvxk8dfpyk8mqg8iqhp7j2c9bs-example-website-content
2014  about  css  feed.xml  index.html  LICENSE

Using the content

We can then use that derivation as the webroot in an nginx virtualHost. If you have a server, you could add the following to your NixOS configuration:

let
  content = stdenv.mkDerivation {
    name = "example-website-content";

    ... # code from above snipped
  };
in {
  services.nginx.virtualHosts."" = {
    locations = {
      "/" = {
        root = "${content}";
      };
    };
  };
}
So how does this work? Ultimately, the "root" attribute needs to be set to the output directory of the content derivation.

Using the "${content}" expression, we force the derivation to be converted to a string (remembering ${...} is string interpolation syntax). When a derivation is converted to a string in Nix, it becomes the output path in the Nix store.
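You can check this coercion yourself with nix-instantiate --eval; interpolating a derivation into a string yields its store path (the exact hash will vary with your NixPkgs version):

```nix
with import <nixpkgs> {};

# Evaluates to something like:
#   "/nix/store/<hash>-hello-2.10"
"${pkgs.hello}"
```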

If you don't have a server handy, we can use the content in a simple HTTP server script:

# server.nix
with import <nixpkgs> {};

let
  content = stdenv.mkDerivation {
    name = "example-website-content";

    src = fetchFromGitHub {
      owner = "jekyll";
      repo = "example";
      rev = "5eb1b902ca3bda6f4b50d4cfcdc7bc0097bac4b7";
      sha256 = "1jw35hmgx2gsaj2ad5f9d9ks4yh601wsxwnb17pmb9j02hl3vgdm";
    };

    installPhase = ''
      export JEKYLL_ENV=production
      # The site expects to be served as http://hostname/example/...
      ${pkgs.jekyll}/bin/jekyll build --destination $out/example
    '';
  };

  serveSite = pkgs.writeShellScriptBin "serveSite" ''
    # -F = do not fork
    # -p = port
    # -r = content root
    echo "Running server: visit http://localhost:8000/example/index.html"
    # See how we reference the content derivation by `${content}`
    ${webfs}/bin/webfsd -F -p 8000 -r ${content}
  '';
in
stdenv.mkDerivation {
  name = "server-environment";
  # Kind of evil shellHook - you don't get a shell, you just get my site
  shellHook = ''
    ${serveSite}/bin/serveSite
  '';
}

Then run nix-shell server.nix; this will start the server, and you can view the site!

Example #2: A more complex shell app

We've already talked a lot about shell scripts. But sometimes whole apps are built as shell scripts. One such example is emojify, a CLI tool for replacing words with emojis.

We can make a derivation for that. All we need to do is copy the shell script into the PATH, and mark it as executable.

If we were writing the script ourselves, we'd need to pay special attention to fixing up dependencies (such as changing /bin/bash to a Nix store path). But mkDerivation has the fixup phase, which does this automatically. The defaults are smart, and in this case they work perfectly.

It is quite simple to write a derivation for a shell script.

with import <nixpkgs> {};

let
  emojify = let
    version = "2.0.0";
  in stdenv.mkDerivation {
    name = "emojify-${version}";

    # Using this build support function to fetch it from github
    src = fetchFromGitHub {
      owner = "mrowa44";
      repo = "emojify";
      # The git tag to fetch
      rev = "${version}";
      # Hashes must be specified so that the build is purely functional
      sha256 = "0zhbfxabgllpq3sy0pj5mm79l24vj1z10kyajc4n39yq8ibhq66j";
    };

    # We override the install phase, as the emojify project doesn't use make
    installPhase = ''
      # Make the output directory
      mkdir -p $out/bin

      # Copy the script there and make it executable
      cp emojify $out/bin/
      chmod +x $out/bin/emojify
    '';
  };
in
stdenv.mkDerivation {
  name = "emojify-environment";
  buildInputs = [ emojify ];
}

And see it in action:

> nix-shell test.nix

[nix-shell:~]$ emojify "Hello world :smile:"
Hello world πŸ˜„

Example #3: The infamous GNU Hello example

If you've ever read anything about Nix, you might have seen an example of making a derivation for GNU Hello. Something like this:

with import <nixpkgs> {};

let
  # Let's separate the version number so we can update it easily in the future
  version = "2.10";

  # Now define the derivation for the app
  helloApp = stdenv.mkDerivation {
    # String interpolation to include the version number in the name
    # Including a version in the name is idiomatic
    name = "hello-${version}";

    # fetchurl is a build support function again; it does some funky stuff to
    # support selecting from a predefined set of mirrors
    src = fetchurl {
      url = "mirror://gnu/hello/hello-${version}.tar.gz";
      sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
    };

    # Will run `make check`
    doCheck = true;
  };
in
# Make an environment for nix-shell
stdenv.mkDerivation {
  name = "hello-environment";
  buildInputs = [ helloApp ];
}

You can build and run this:

> nix-shell test.nix

[nix-shell:~]$ hello
Hello, world!

Ultimately this is a terrible and indirect example. This doesn't explicitly specify anything that the builder will actually run! It really confused me when I was learning Nix.

To understand it, we need to remember the default build phases from stdenv.mkDerivation. From above, we had a list of the most important phases. If we annotate the defaults with what happens in the case of GNU Hello, things start to make sense:

Phase       | Default behaviour                                             | Behaviour with GNU Hello
1 unpack    | unzips, untars, or copies your source folder to the nix store | the source is a tarball, so it is automatically extracted
2 patch     | applies any patches provided in the patches variable          | nothing happens
3 configure | runs ./configure if it exists                                 | runs ./configure
4 build     | runs make if it exists                                        | runs make, the app is built
5 check     | skipped by default                                            | we turn it on, so it runs make check
6 install   | runs make install                                             | runs make install

Since GNU Hello uses Make & ./configure, the defaults work perfectly for us in this case. That is why this GNU Hello example is so short!

Your Packaging Future

While it's amazing to use mkDerivation (so much easier than an RPM spec), there are many cases where you should not use mkDerivation directly. NixPkgs contains many useful build support functions. These are functions that return derivations, but do a bit of the hard work and boilerplate for you. They make it easy to build packages of specific types.

We've seen a few build support functions today, such as fetchFromGitHub and fetchurl. These are just functions that return derivations; in these cases, derivations that download and extract the source files.

For example, there is pkgs.python36Packages.buildPythonPackage, which is a super easy way to build a python package.
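As a sketch of what that looks like (the package name, version, and hash here are placeholders, not a real package):

```nix
with import <nixpkgs> {};

pkgs.python36Packages.buildPythonPackage rec {
  pname = "some-library";  # placeholder name
  version = "1.0.0";       # placeholder version

  src = pkgs.python36Packages.fetchPypi {
    inherit pname version;
    # placeholder hash; Nix will tell you the real one on the first build
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}
```

All the Python-specific build and install machinery is handled by the function; you just describe the package.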

When making packages, there are helpful resources to check.

Up Next

In part 5, we'll learn about functions in the Nix programming language. With the knowledge of functions, we can go on and write our own build support function!

Follow the series on GitHub

Hero image from nix-artwork by Luca Bruno

Creating a super simple derivation - Learning Nix pt 3: Wrapping some shell scripts

This guide will build on the previous two guides, and look at creating your first useful derivation (or "package").

This will teach you how to package a shell script.

Packaging a shell script (with no dependencies)

We can use the function pkgs.writeShellScriptBin from NixPkgs, which handles generating a derivation for us.

This function takes two arguments: the name you want the script to have in your PATH, and a string containing the contents of the script.

So we could have:

pkgs.writeShellScriptBin "helloWorld" "echo Hello World"

That would create a shell script named "helloWorld" that prints "Hello World".

Let's put that in an environment, so we can use it in nix-shell. Write this to test.nix:

with import <nixpkgs> {};

let
  # Use the let-in clause to assign the derivation to a variable
  myScript = pkgs.writeShellScriptBin "helloWorld" "echo Hello World";
in
stdenv.mkDerivation rec {
  name = "test-environment";

  # Add the derivation to the PATH
  buildInputs = [ myScript ];
}

We can then enter the nix-shell and run it:

sam@vcs ~> nix-shell test.nix

[nix-shell:~]$ helloWorld
Hello World

Great! You've successfully made your first package. If you use NixOS, you can modify your system configuration and include it in your environment.systemPackages list. Or you can use it in a nix-shell (like we just did). Or whatever you want! Despite being one line of code, this is a real Nix derivation that we can use.
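For instance, on NixOS you could drop the same one-liner straight into your configuration (a sketch of a configuration.nix fragment, using the script from above):

```nix
# /etc/nixos/configuration.nix (fragment)
{ pkgs, ... }:

{
  environment.systemPackages = [
    # the same derivation we defined in test.nix
    (pkgs.writeShellScriptBin "helloWorld" "echo Hello World")
  ];
}
```

After a nixos-rebuild, helloWorld would be on the PATH system-wide.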

Referencing other commands in your script

For this section, we are going to look at something more complex. Say you want to write a script to find your public IP address. We're basically going to run this command:

curl | jq --raw-output .origin

But running this requires dependencies; you need curl and jq installed. How do we specify dependencies in Nix?

Well, we could just add them to the build input for the shell:

# DO NOT USE THIS; this is a BAD example
with import <nixpkgs> {};

let
  # This is the WORST way to do dependencies
  # We just specify the derivation the same way as before
  simplePackage = pkgs.writeShellScriptBin "whatIsMyIp" ''
    curl | jq --raw-output .origin
  '';
in
stdenv.mkDerivation rec {
  name = "test-environment";

  # Then we add curl & jq to the list of buildInputs for the shell
  # So curl and jq will be added to the PATH inside the shell
  buildInputs = [ simplePackage pkgs.jq pkgs.curl ];
}

This would work OK; you could run nix-shell, then run whatIsMyIp, and get your IP.

But it has a problem: the script works unpredictably. If you took this package and used it outside of the nix-shell, it wouldn't work, because you wouldn't have the dependencies. It also pollutes the environment of the end user, as they need compatible versions of jq and curl in their PATH.

The more elegant way to do this is to reference the exact packages in the shell script:

with import <nixpkgs> {};

let
  # The ${...} is for string interpolation
  # The '' quotes are used for multi-line strings
  simplePackage = pkgs.writeShellScriptBin "whatIsMyIp" ''
    ${pkgs.curl}/bin/curl \
      | ${pkgs.jq}/bin/jq --raw-output .origin
  '';
in
stdenv.mkDerivation rec {
  name = "test-environment";

  buildInputs = [ simplePackage ];
}

Here we reference the dependency package inside the derivation. To understand what this is doing, we need to see what the script is written to disk as. You can do that by running:

sam@vcs ~> nix-shell test.nix

[nix-shell:~]$ cat $(which whatIsMyIp)

Which gives us:

/nix/store/pkc7g36m95jymw3ga2i7pwrykcfs78il-curl-7.57.0-bin/bin/curl \
  | /nix/store/znqn0z505i0bm1aiz2jaj1ki7z4ck1sv-jq-1.5/bin/jq --raw-output .origin

As we can see, all the binaries referenced in this script are absolute paths, something like /nix/store/...../bin/name. The /nix/store/... is the path of the derivation's (package's) build output.

Due to the pure and functional nature of Nix, that path will be the same on every machine that ever runs Nix. Replacing fuzzy references (eg. jq) with definitive and unambiguous ones (/nix/store/...) is a core tenet of Nix; it means packages come with all their dependencies and don't pollute your environment.

Since it is an absolute path, that script doesn't rely on the PATH environment variable; so the script can be used anywhere.

When you reference a path (like ${pkgs.curl} from above), Nix automatically knows to download that package onto the machine whenever your package is downloaded.

Why do we do it like this? Ultimately, the goal of package management is to make consuming software easier. Creating fewer dependencies on the environment that runs the package makes the script easier to use.

So the TL;DR is:

# BAD; not very explicit
# - we need to remember to add curl to the environment again later
badPackage = pkgs.writeShellScriptBin "something" ''
  curl ...
'';

# GOOD: Nix will do the magic for us
goodPackage = pkgs.writeShellScriptBin "something" ''
  ${pkgs.curl}/bin/curl ...
'';

Functions make creating packages easier

One of the main lessons from this process is that when you use functions (like pkgs.writeShellScriptBin) to create packages, it is pretty simple. Compare this to a traditional RPM or DEB workflow; where you would have needed to write a long spec file, put the script in a separate file, and fight your way through too much boilerplate.

Luckily, NixPkgs (the standard library of packages) includes a whole raft of functions that make packaging easier for specific needs. Most of these are in the build support folder of the NixPkgs repository. They are defined in the Nix expression language; the same language you are learning to write. For example, the pkgs.writeShellScriptBin function is defined as a roughly 10-line function.

Some of the more complex build support functions are documented in the NixPkgs manual. There is currently documentation for packaging Python, Go, Haskell, Qt, Rust, Perl, Node and many other types of applications.

Some of the simpler build support functions (like pkgs.writeShellScriptBin) are not documented (at the time of writing). Most of them are self-explanatory, and can be found by reading their names in the so-called trivial builders file.
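For a flavour of how small these functions are, writeShellScriptBin is defined roughly like this (a simplified sketch from memory; check the trivial builders file in NixPkgs for the real definition):

```nix
# Roughly how NixPkgs defines writeShellScriptBin:
writeShellScriptBin = name: text:
  writeTextFile {
    inherit name;
    executable = true;
    # put the script at $out/bin/<name>, so buildInputs picks it up
    destination = "/bin/${name}";
    text = ''
      #!${stdenv.shell}
      ${text}
    '';
  };
```

It just delegates to another build support function, writeTextFile, adding the shebang and the bin/ destination.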

Up Next

Derivations 102 - Learning Nix pt 4

Follow the series on GitHub

Hero image from nix-artwork by Eric Sagnes

So Variables are a Thing - Learning Nix pt 2: Taking advantage of the fact Nix is a programming language

So Nix is fundamentally built around the Nix expression language, which is a programming language. Creating variables is a huge part of programming.

If you want to package apps or just simplify repetitive configuration files; you will probably need variables.

The let-in syntax

The let-in syntax allows you to define variables that the following expression can use:

A high level example is:

let
  x = 1;
  y = 2;
in x + y

This is valid Nix code, and we can actually run it. Save it to test.nix:

> nix-instantiate --eval test.nix
3

We can see here that the code still evaluates (aka. returns) the value 3. This means that anywhere in our code, we can replace some expression with something like let ... in expression.

Here is a concrete example of that replacement. We could make our last example more complex by replacing the number 1 with a let-in expression:

let
  x = (let a = 2; in a + 3);
  y = 2;
in x + y

Which changes the answer:

> nix-instantiate --eval test.nix
7

So we can formalize the let-in syntax as:

let
  name = expression;
  name = expression;
  name = expression;
in expression

Real world examples of the "let-in" expression

Say we have an environment (something we run as nix-shell test.nix); and it uses a lot of python packages:

with import <nixpkgs> {};

stdenv.mkDerivation rec {
  name = "python-environment";

  buildInputs = [
    pkgs.python36
    pkgs.python36Packages.flask
    pkgs.python36Packages.itsdangerous
    pkgs.python36Packages.six
  ];
}
Obviously that looks very repetitive; and we repeat the python version many times.

We can refactor this to store the pkgs.python36 as a variable. This makes the code less repetitive. It also makes it easier to change the python version later. The code would look like:

with import <nixpkgs> {};

stdenv.mkDerivation rec {
  name = "python-environment";

  buildInputs = let
    py = pkgs.python36;
  in [
    py
    py.pkgs.flask
    py.pkgs.itsdangerous
    py.pkgs.six
  ];
}

Yay! Now we've created an identical environment with fewer words.

A digression on scope

Smart cookies reading along will have noticed that we could have put the let-in expression in a different place. For example:

with import <nixpkgs> {};

let
  # Assigning the variable `py` to the python we want to use
  py = pkgs.python36;
  # You could try and change it to python27 to see what happens
in
stdenv.mkDerivation rec {
  name = "python-environment";

  buildInputs = [
    # here we reference `py` rather than `pkgs.python36`
    py
    py.pkgs.flask
    py.pkgs.itsdangerous
    py.pkgs.six
  ];
}

The result of that code would have been identical.

However, putting the let-in expression in a different place changes the scope (the parts of the code) that the py variable is usable in.

So with the larger scope, we could do something like:

with import <nixpkgs> {};

let
  py = pkgs.python36;
in
stdenv.mkDerivation rec {
  name = "python-environment";

  buildInputs = [
    py
    py.pkgs.flask
  ];

  # The ${...} syntax is string interpolation in Nix
  shellHook = ''
    echo "using python: ${py.name}"
  '';
}

Which could be pretty cool:

sam@vcs ~> nix-shell test.nix
using python: python3-3.6.4


However, we couldn't use the py variable in shellHook if the let-in expression only covers the buildInputs list.

with import <nixpkgs> {};

stdenv.mkDerivation rec {
  name = "python-environment";

  buildInputs = let
    py = pkgs.python36;
  in [
    # `py` is in scope here
    py
  ];
  # `py` is now out of scope (a list is one type of "expression")

  shellHook = ''
    echo "using python: ${py.name}"
  '';
}

It would result in a crash:

> nix-shell test.nix
error: undefined variable β€˜py’ at /home/sam/test.nix:20:27

Extension: the "with" expression

With is another expression (like let-in). It has the syntax:

with expression1; expression2

With actually works very similarly to the JavaScript with, and nothing like the Python with. Basically:

  1. It evaluates expression1; call the result ret1. ret1 must be a set (aka. dictionary)
  2. It takes all the attributes of ret1, and makes them variables in the scope of expression2
  3. Overall, it evaluates to expression2 (which runs with the extra variables in scope)

So you could replace:

let
  x = 1;
  y = 2;
in x + y

With the with equivalent:

with { x = 1; y = 2; }; x + y

This is really useful when dealing with sets that have loads of attributes, like python36.pkgs. So our old code:

buildInputs = [
  pkgs.python36.pkgs.flask
  pkgs.python36.pkgs.itsdangerous
  pkgs.python36.pkgs.six
];

Could become shorter using with:

buildInputs = with pkgs.python36.pkgs; [
  flask
  itsdangerous
  six
];

As you can see, all the attributes of pkgs.python36.pkgs (including flask, itsdangerous and six) were added to the scope when evaluating the list.

This can also be chained with the let-in expression:

buildInputs = let
  py = pkgs.python36;
in with py.pkgs; [
  flask
  itsdangerous
  six
];

Up Next

Creating a super simple derivation - Learning Nix pt 3

Follow the series on GitHub

Hero image from nix-artwork by Luca Bruno

NSDC 2016 Topics: Digitizing the motions from National Schools Debating Championships 2016

So I've had this pile of motions from National Schools Debating Championships 2016 sitting on my desk for a while. I thought I'd digitize them, and this blog seemed like an easy place to put them.

Round 1

  1. That the government should not provide welfare assistance to individuals living in rural or isolated areas where there are no job opportunities
  2. That local councils should be empowered to legalise and regulate the use and sale of drugs within their local areas
  3. That we should legalize commercial surrogacy

WA vs VIC. VIC wrote 1, 3, 2. The third topic is circled.

Venue: Monte

Round 3

  1. That pop stars and Hollywood performers becoming spokespersons for feminism is a good thing
  2. That female politicians running for election should not campaign on the basis of their fulfillment of typical gender roles (ie. good wife, mother, homemaker, cook, etc.)
  3. That we should allow couples to elect the laws that govern their marriage, including laws providing for fault-based divorce

ACT vs TAS. ACT wrote 2, 1, 3. The first topic is circled.

Venue: R'wood

Round 4

  1. That America's use of drones does more harm than good
  2. That we should regret the West's decision not to intervene in Syria
  3. That the West should disengage from the South China Sea conflict

WA vs ACT. ACT wrote 2, 1, 3. The first topic is circled.

Venue: A'sleigh

Round 6

  1. That consumers should boycott firms who engage in tax avoidance
  2. That we should set a maximum number of hours per week that any worker can work
  3. That insofar as Australia needs to raise its tax revenue, it should use taxes on consumption

ACT vs SA. ACT wrote 1, 2, 3. The second topic is circled.

Venue: SGHS (thanks for the food!)

Round 7

  1. That the American Republican Party should disendorse Trump
  2. That Bernie Sanders would make a better President than Hillary Clinton
  3. That Australia should abandon its proposal to build its own submarines and instead host a US naval base.

ACT vs VIC. ACT wrote 1, 3, 2. No topic is circled. I believe the first was debated; but my memory is fickle from that time.

Venue: SBHS


This is an incomplete list (obviously). I will update if I find any more ballots.

Environments with Nix Shell - Learning Nix pt 1: An introduction to running Nix code

To start with learning Nix; we need a way to experiment. Nix is a programming language, so we need a way to run our programs. Nix is also a package/environment management tool, so we need a way to test our environments.

Using nix-shell

Nix-shell lets you open a shell in a new environment.

In Nix, an environment is a collection of derivations (aka packages) that are put into your PATH. This is really useful in many circumstances:

  1. Collaboration; you can just send the .nix file to a collaborator and they will have the same things installed
  2. Cleanliness; stuff inside a nix-shell isn't installed in your main environment; so you don't have to worry about uninstalling stuff or causing conflicts with other packages you love
  3. Developing things; it is easy to build your own packages and test them inside a shell

We can create an environment by creating a .nix file to define it. Create a file called test.nix:

# This imports the nix package collection,
# so we can access the `pkgs` and `stdenv` variables
with import <nixpkgs> {};

# Make a new "derivation" that represents our shell
stdenv.mkDerivation {
  name = "my-environment";

  # The packages in the `buildInputs` list will be added to the PATH in our shell
  buildInputs = [
    # cowsay is an arbitrary package
    # see the NixOS package search to find more
    pkgs.cowsay
  ];
}
Then we can test this. Use nix-shell test.nix to enter the environment. Then you can run cowsay to see how it was added to the PATH:

> nix-shell test.nix

[nix-shell:~]$ echo "welcome to the nix environment" | cowsay
< welcome to the nix environment >
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Then you can leave the nix-shell by pressing Ctrl-D. If you try and run cowsay outside your environment, it won't work:

> cowsay
The program β€˜cowsay’ is currently not installed. You can install it by typing:
  nix-env -iA nixos.cowsay

Note: If you have cowsay installed in your main environment; choose another package you don't have installed

So we've made our first nix-shell. This allows us to create self-contained groups of packages (useful in and of itself). But we've also run our own nix code, the test.nix file, which will come in handy in the future.

Understanding: How does that nix code actually work?

Remember our test.nix file?

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "my-environment";

  buildInputs = [
    pkgs.cowsay
  ];
}
What does this file actually do?

Well, the first line is an import statement. We'll come back to exactly how it works later in the guide; or you will figure it out once you've got a good grasp of the concepts. For now, it is magic that you need to put at the top of every file, OK?

Let's attack the body of the code:

stdenv.mkDerivation {
  name = "my-environment";

  buildInputs = [
    pkgs.cowsay
  ];
}

This is actually some code written in the Nix expression language. First let's learn some basic syntax. I've put some similar examples in python to help illustrate the syntax:

Syntax type                       | Python example                         | Nix expression language example
Function calling                  | function(some_value)                   | function some_value
Sets (aka hashmaps, dictionaries) | {"a": "b", "key": value}               | { a = "b"; key = value; }
Lists                             | [a, b, c]                              | [a b c]
Accessing values of objects       | sometimes obj['key'], others obj.key   | obj.key

So we can see our code calls stdenv.mkDerivation, and provides a set (dictionary) as the argument. The set has the keys name and buildInputs. These are used by the stdenv.mkDerivation function.
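To make that syntax table concrete, here is a tiny stand-alone expression (made up for illustration) that you can run with nix-instantiate --eval:

```nix
let
  # a set (dictionary); note `=` and the trailing semicolons
  args = { name = "demo"; count = 3; };
  # a list; note there are no commas between elements
  items = [ 1 2 3 ];
  # a function of one argument
  describe = set: set.name;
in
# function call (`describe args`) and attribute access (`set.name`)
describe args
```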

So what does mkDerivation do? Reading the documentation, mkDerivation returns a derivation value. A derivation simply represents anything that can be built; like a package, but more general.

Since mkDerivation returns a value, our whole file returns a value when it is evaluated. You can test this by printing the evaluated value:

> nix-instantiate --eval test.nix
{ __ignoreNulls = true; all = <CODE>; args = <CODE>; buildInputs = <CODE>; builder = <CODE>; ...

So this value is then used by the nix-shell program, and hey presto: we have a new environment.

Extension: the shellHook attribute

When we are making our derivation for our environment, we can pass another useful value to the mkDerivation function. This is the shellHook:

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "my-environment";

  buildInputs = [
    pkgs.figlet
    pkgs.lolcat
  ];

  # The '' quotes are 2 single quote characters
  # They are used for multi-line strings
  shellHook = ''
    figlet "Welcome!" | lolcat --freq 0.5
  '';
}

The shellHook value is shell code that will be run when starting the interactive shell.

Running that example would result in an awesome welcome message:

> nix-shell test.nix
__        __   _                          _
\ \      / /__| | ___ ___  _ __ ___   ___| |
 \ \ /\ / / _ \ |/ __/ _ \| '_ ` _ \ / _ \ |
  \ V  V /  __/ | (_| (_) | | | | | |  __/_|
   \_/\_/ \___|_|\___\___/|_| |_| |_|\___(_)


The shellHook property is very useful for setting environment variables and the like.

Example Usage: python virtualenv on steroids

We can actually use this to make development environments when writing applications. For example, say I'm developing a Python3 Flask application, but need the ffmpeg binary installed for the app to process some videos. With virtualenv, you can't specify all the binary dependencies. With Nix, you can use this .nix file:

with import <nixpkgs> {};

stdenv.mkDerivation rec {
  name = "python-environment";

  buildInputs = [ pkgs.python36 pkgs.python36Packages.flask pkgs.ffmpeg ];

  shellHook = ''
    export FLASK_DEBUG=1
    export FLASK_APP=""

    export API_KEY="some secret key"
  '';
}

That simply combines our knowledge from before. It gives me a shell with the packages I request (python3.6, flask and ffmpeg) inside the PATH and PYTHONPATH. It then runs the shellHook and sets the extra environment variables (like API_KEY) that my application needs to run.

Up Next

So Variables are a Thing - Learning Nix pt 2

Follow the series on GitHub

Hero image from nix-artwork by Luca Bruno

Exposing properties with Graphene Django: The other missing guide

Graphene Django is an easy to use library for writing GraphQL APIs within Django. But some of the documentation for Graphene is less than great.

When I was learning Graphene Django, I originally found it very hard to expose an @property value of my model to the GraphQL API. This article will look into how to do that, and also why it works.

Ctrl-C Ctrl-V

Say you have a Poll object, and you added a URL property:

class Poll(models.Model):
    ...

    @property
    def url(self):
        return 'https://pollsite/polls/' + str(self.id)

Then you can simply add a line to specify its existence in the schema:

class PollType(DjangoObjectType):
    class Meta:
        model = models.Poll
        interfaces = (relay.Node,)

    # This is it, so simple and so functional:
    url = graphene.String()

That's it. Just use the form property_name = graphene.DataType(). You can even use fancier datatypes, like graphene.List(graphene.Int()). There is a good reference to check out.

So why does that work?

First, we need to understand the resolver of the property. By default, resolvers perform a getattr lookup on the object. This means it would look like this if we were to write it manually:

    url = graphene.String()

    def resolve_url(self, args, context, info):
        return self.url

But that code looks stupid! Isn't the url attribute the graphene.String()? Isn't that code broken?

Well the self in a resolver doesn't represent an instance of the class. Or maybe it does. Anyway; Graphene is a festival of metaclass programming. So the self object is really the Django object, not your graphene object! Once I found that out, it all started to make a lot more sense to me.
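To make the getattr behaviour concrete, here is a stripped-down sketch of what a default resolver boils down to (a simplification for illustration, not Graphene's actual code):

```python
# Hypothetical simplification of Graphene's default resolver: a plain
# attribute lookup on the underlying Django object.
def default_resolver(attname, root):
    # getattr works for model fields and @property values alike
    return getattr(root, attname, None)

# A stand-in for a Django model with a URL property
class Poll:
    @property
    def url(self):
        return 'https://pollsite/polls/1'

default_resolver('url', Poll())  # 'https://pollsite/polls/1'
```

Because the lookup happens on the Django object, the `url = graphene.String()` line on the graphene type never shadows the model's property.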


Well, that's a very quick dive into the weird and wonderful world of the Graphene framework. Metaclass magic hey!

If you enjoy these random Graphene tips, make sure to subscribe below.

Arithmetic with JavaScript Arrays An Astonishing Adventure

An empty array is zero:

> +[]
0

An array of length 1 is equivalent to the first value:

> +[1]
1
> -[2]
-2

Any longer, and the array is not a number:

> +[1,2]
NaN

One can add 2 arrays together:

> [1] + [2]
'12'
> [1,2] + [3]
'1,23'
> [1,2] + [3,4]
'1,23,4'

Or take them apart:

> [10] - [3]
7
> [] - [10]
-10
> [10,2] - [3]
NaN

And multiply or divide:

> [2] * [3]
6
> [3] / [4]
0.75

Or be wise with their bits:

> [4] | [2]  // 100 | 010
6
> [4] & [12]  // 0100 & 1100
4
> [4] ^ [5]  // 100 ^ 101
1
> [1] << [2]
4

You can compare:

> [1] < [3]
true
> [3] < [1]
false

No matter the length:

> [1,2] < [1]
false
> [1] < [1,2]
true

JavaScript, huh? What an interesting language. While so many have seen Gary Bernhardt's amazing talk "Wat", it is a good laugh to dive into a random part of JavaScript and see how it behaves. I found it very odd how a single-element array is automatically unwrapped for arithmetic operations. And I never knew that array addition just does string concatenation. Isn't it odd that arrays are NaN if they are long, but not if they are short?

What's your favourite JavaScript quirk? Let us know on our IRC channel or via email!

Freeing Disk Space with the PackageKit cache Automatic updates gone wrong

So the Fedora 26 Alpha was recently released. Logically it follows that I need to update my laptop right now! So that's what I did, I ran sudo dnf system-upgrade download --releasever 26, and off it went:

Error Summary
Disk Requirements:
   At least 1135MB more space needed on the / filesystem.

Well that's exciting.

Diagnosing the problem

There are so many great tools to debug disk usage on GNU/Linux. NCurses Disk Usage is one of my favourite tools to use from the command line. But GNOME's Disk Usage Analyzer (or Baobab) is just a little more photogenic:

Baobab screenshot

Time to do the Tango!

So PackageKit huh? Using 16 gigabytes of disk space?!?!

Well, these files are used by the Offline Update functionality of PackageKit. GNOME Software automatically downloads updates, then you can click "Restart and Install" and updates are installed when the system is not running ("offline"). You can read more about it from hughsie's blog.

Unfortunately, PackageKit assumes that it is the only way you update your computer. If an update is not applied via PackageKit, it is not deleted. This means that if you use dnf update, the auto-downloaded packages are never cleaned up. Eventually your disk space usage can spiral out of control, as mine has. That explains why PackageKit was still caching updates from Fedora 24, even though I haven't run that version for the last 6 months!

Safely removing the package kit cache

PackageKit does not offer a built in way to clean the updates. So we have to resort to a dumb fix.

First delete the whole cache directory:

sudo rm -r /var/cache/PackageKit

Then for cleanliness, let PackageKit re-download the metadata cache. This just downloads the metadata cache; not the auto updates:

sudo pkcon refresh force -c -1

That made my PackageKit directory go from 16G to a small 75M; a saving of 15.9 gigabytes!

The Shadow package manager

Fedora (and probably other distros too) is in an interesting place now. PackageKit is slowly but surely duplicating the built-in package manager functionality. I'm not sure what to think about this change. It brings so many UX improvements, like suggesting packages when I type a command:

Terminal screenshot

It also provides GNOME Software, Flatpak integration and more. But having package managers fight over control of the same packages seems like a bad thing.

I don't know. What do you think? Make sure to email us or reach out to us on Twitter with your thoughts!

Keeping Python projects secure on GitLab Pinning projects to the very latest

A recent study of over 133k websites found that over 49k of them included an outdated JavaScript library with a known vulnerability. While every site is different and not all of them would have been exploitable, that is a frighteningly high proportion; leaving the web insecure.

With Python, we can't know for sure how bad the problem is, since most apps run on remote servers with no public source. However, it is safe to say that keeping dependencies up to date is a big issue for any application.

Enter pyup: an automated tool that sends you pull requests to update your requirements.txt file. It serves 2 purposes: to keep your dependencies pinned to the latest and most secure versions, and to remind you to redeploy the updated version to your servers.
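The core idea can be sketched in a few lines (a toy model, not pyup's real implementation; the package versions are made up):

```python
# Toy version of pyup's core idea: diff the versions pinned in
# requirements.txt against the latest known releases.
def outdated(requirements, latest_versions):
    """Return {package: (pinned, latest)} for every stale pin."""
    stale = {}
    for line in requirements.splitlines():
        name, _, pinned = line.strip().partition('==')
        # Skip unpinned lines and packages we have no release info for
        if pinned and latest_versions.get(name, pinned) != pinned:
            stale[name] = (pinned, latest_versions[name])
    return stale

reqs = 'flask==0.11\nrequests==2.13.0'
outdated(reqs, {'flask': '0.12', 'requests': '2.13.0'})
# {'flask': ('0.11', '0.12')}
```

pyup then turns each stale pin into a pull (or merge) request against your repository.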

We're excited to contribute a patch to add GitLab support for pyup. Now using pyup and GitLab is as simple as:

$ pyup --provider gitlab --repo learntemail/backend --user-token 97abc123jhk124gjg134

GitLab/Pyup How To

First, clone our branch with GitLab support and install it:

git clone -b gitlab
cd pyup
python3 setup.py install

It is best to do this on a server; then it is easy to chuck pyup in a cron job for peace of mind.

1/2: Get a token

Then you need to generate an access token for GitLab. Go to Your Avatar > Settings > Access Tokens:

Gitlab access tokens tab picture

Then create a new token with the API access box checked:

Gitlab create access token picture

Now copy your token and you're done:

Gitlab copy access token picture

2/2: Run pyup

Now is the hard part; copy and paste your token into this command:

$ pyup --provider gitlab --repo ORG/PROJECT --user-token YOUR_TOKEN

If you use a self-hosted GitLab instance rather than, you can go:

$ pyup --provider gitlab --repo ORG/PROJECT --user-token YOUR_TOKEN@https://YOUR_GITLAB.intranet

Then you're done. If this is your first time using pyup, you'll get a barrage of changes:

Gitlab merge requests list

Merge them and you're ready to rock some improved security! Make sure to add this command to your cron jobs or systemd timers so that you get automatic notifications in the future.
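For example, a weekly crontab entry might look like this (the schedule, repo and token are placeholders):

```
# Run pyup every Monday at 3am (ORG/PROJECT and YOUR_TOKEN are placeholders)
0 3 * * 1  pyup --provider gitlab --repo ORG/PROJECT --user-token YOUR_TOKEN
```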


We've ported pyup to GitLab to help keep our app secure. What are you doing to keep your app secure? Post your thoughts or email them to me. Make sure to subscribe below to follow our journey through securing a pretty normal CRUD app with background jobs.

Testing GraphQL with Graphene Django The missing guide

Testing old-school APIs is super fun and easy. Frameworks like Django place a huge emphasis on testing, and make it very easy to do.

But it is 2017, and GraphQL is changing the way that we write APIs. In particular, Graphene Django is an easy to use library for writing GraphQL APIs within Django.

However, Graphene Django doesn't include a testing guide! But fear not, testing is easy, simple and clear.

Our helper class

Executing a GraphQL query is very simple - you just POST it to your endpoint and get JSON back. To make it even easier in tests, we use a little helper class to abstract away the details:

import json
from django.test import TestCase
from django.test import Client

# Inherit from this in your test cases
class GraphQLTestCase(TestCase):

    def setUp(self):
        self._client = Client()

    def query(self, query: str, op_name: str = None, input: dict = None):
        '''
        Args:
            query (string) - GraphQL query to run
            op_name (string) - If the query is a mutation or named query, you must
                               supply the op_name.  For anon queries ("{ ... }"),
                               should be None (default).
            input (dict) - If provided, the $input variable in GraphQL will be set
                           to this value

        Returns:
            dict, response from graphql endpoint.  The response has the "data" key.
                  It will have the "errors" key if any error happened.
        '''
        body = {'query': query}
        if op_name:
            body['operation_name'] = op_name
        if input:
            body['variables'] = {'input': input}

        resp = self._client.post('/graphql', json.dumps(body),
                                 content_type='application/json')
        jresp = json.loads(resp.content.decode())
        return jresp

    def assertResponseNoErrors(self, resp: dict, expected: dict):
        '''
        Assert that the resp (as returned from query) has the data from
        expected, and no errors.
        '''
        self.assertNotIn('errors', resp, 'Response had errors')
        self.assertEqual(resp['data'], expected, 'Response has correct data')

Copy and paste!

Then you just test! You can write the same style of tests as if you were using the excellent Django Rest Framework, but with GraphQL. Here's a basic example of a query:

    def test_is_logged_in(self):
        resp = self.query('{ isLoggedIn }')
        self.assertResponseNoErrors(resp, {'isLoggedIn': False})

Or a more complex example with a mutation:

    def test_login_mutation_successful(self):
        User.objects.create(username='test', password='hunter2')
        resp = self.query(
            # The mutation's graphql code
            '''
            mutation logIn($input: LogInInput!) {
                logIn(input: $input) {
                    success
                }
            }
            ''',
            # The operation name (from the 1st line of the mutation)
            op_name='logIn',
            input={'username': 'test', 'password': 'hunter2'}
        )
        self.assertResponseNoErrors(resp, {'logIn': {'success': True}})

A GraphQL future

At LearntEmail, we're really excited to be using GraphQL for our API. Testing is so important for stable software - so good testing tools are a must.

Feel free to tweet to us @LearntEmail with your thoughts on GraphQL testing, and subscribe (below) to follow our GraphQL journey & experiences.

Local Politicians Meet InfoSec - a Wordpress Disaster The article that I didn't want to have to write

Last year was characterized by hacking and interference in the American political system. It was a huge wake-up call for everybody involved in politics; InfoSec is now an important priority.

I don't live in America. I live in the tiny Australian Capital Territory, a territory comprising Canberra, a city of 300,000 people. Like many places, we have a local government full of politicians. I analyzed the websites of the 25 MLAs (members of the legislative assembly) and their parties' sites.

ACT map

What a huge and important region!

Spoiler: too many local politicians have sites vulnerable to SQL injection, and don't even care.


I'm not an InfoSec industry professional; just a developer who is interested in this stuff. This is not a blog post about novel vulnerabilities - it is a story about bad hygiene.

First, I compiled a list of all the sites. In total, there are 17 MLA sites (not all MLAs have their own site) and 3 party sites. There is even a helpful list maintained by the government.

Then I used the HTTP headers to do l33t hax0r discovery of the server software they used. It was as follows:

Software Package      # of Users
--------------------  ----------
Wordpress             7
NationBuilder (SaaS)  4
Wix (SaaS)            2
Unknown/Bespoke       2
Static                1
Wordpress.com         1
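The discovery step itself is nothing fancy. As a sketch, it amounts to filtering response headers for the ones that leak server software (the helper is hypothetical; the header values echo the IIS/ASP.NET site described later in this post):

```python
# Keep only the response headers that commonly reveal server software.
def fingerprint(headers):
    interesting = ('Server', 'X-Powered-By', 'X-Generator')
    return {k: v for k, v in headers.items() if k in interesting}

fingerprint({'Server': 'Microsoft-IIS/7.0',
             'X-Powered-By': 'ASP.NET',
             'Content-Type': 'text/html'})
# {'Server': 'Microsoft-IIS/7.0', 'X-Powered-By': 'ASP.NET'}
```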

The party sites used NationBuilder (ACT Labor), Wordpress (Canberra Liberals) and Drupal 7 (Greens). I found it very interesting that the software choices divided along party lines. For example, NationBuilder was only used by left wing parties, despite its pledge to be a non-partisan provider.

Inspecting the sites

So we have a mix of multiple types of sites. I'm no genius, so I assumed that Wix, Wordpress.com, the static site and NationBuilder (a Rails based SaaS) were secure. They have companies behind them making sure that they are secure.

Fun fact: only 1 of the sites used HTTPS by default! Welcome to ~~1999~~ 2017!

So then I turned to the remaining 8 Wordpress sites (including the Canberra Liberals website). Wordpress has databases full of vulnerabilities, especially when you count themes and plugins. A tool called wp-scan automates the plugin and version detection process and can print out a list of vulnerabilities that affect a given WP site. I used this to investigate the sites.

A whopping 5 out of the 8 sites were affected by serious vulnerabilities:

1. Andrew Wall

Andrew Wall MLA's site is a disaster. I'm not including a link because it is so insecure. He uses Wordpress, on a server with Microsoft IIS/7.0, that reports it is X-Powered-By: ASP.NET. It uses Wordpress 3.6, which was released in 2013! Wordpress 3.6 is ancient and full of vulnerabilities, including unauthenticated stored XSS, unauthenticated post category modification and path traversal. The gallery plugin used also has an arbitrary file upload & CSRF issue.

Website screenshot

I contacted Andrew 3 times (12th, 16th and 31st of January), to no response. He should really consider getting a new website before it is defaced or hacked into an "online pharma" store.

2. Canberra Liberals

The Canberra Liberals have a donation button on their site. That would be great, except they use an outdated version of WooCommerce from 2014. It features many security issues; from object injection to persistent XSS.

Website screenshot

I don't know how hard it is to update a Wordpress plugin; but it is too hard for the Canberra Liberals. I contacted them 3 times (same as above) to no response. Nice to see security is valued!

3. Tara Cheyne

Wordpress again, with the Jetpack plugin. It is out of date and contains Stored XSS in addition to multiple other security issues.

Website screenshot

"E-mail Tara"; well I tried that!

I contacted Tara 3 times (same as above) to no response.

4. Mick Gentleman

Wordpress again. He uses a slightly outdated version of Wordpress (4.6.1, from September 2016), which contains many vulnerabilities, including a SQL injection issue and XSS.

Website screenshot

I contacted Mick 3 times (same as above) to no response. Starting to see a pattern here!

5. Mark Parton

Wordpress again, this time with an outdated Yoast SEO plugin. It contained 2 issues: settings exposure and, once again, XSS.

Mark was very co-operative. He responded to my 2nd email and informed me that he was not actively involved with the site any more.


When you include the party sites, 13 out of the 25 politicians had outdated and vulnerable Wordpress sites. Most did not respond to the information presented, even when it just meant replying to an email reporting the issue. I'd really hate to see these sites be defaced or used to find private information on any of my local politicians.

While we focus on glamorous political hacking events, such as during the US Presidential election, we need to remember local government too. Basic security hygiene isn't hard - sites just need to stay up to date. Check up on your local members, so that they don't get defaced or hacked during their next elections!

PGP for Every Email Join us in our PGP journey

Starting today, we're offering GPG signing for every email sent on LearntEmail. GPG is an email signing and encryption package, probably the de facto standard on the net. Other than Facebook, very few services send marketing/application email in a way that is encrypted. But we're happy to change that.

Why PGP?

PGP is the de facto standard for email encryption. There is lots of exciting development in the PGP ecosystem, from Keybase to clients like Mailpile. We're excited to be part of the PGP community.

PGP is pretty good, as per the name. Sure, some people have issues with PGP. But perfect is the enemy of good. It isn't good that PGP software is hard to use, or that PGP doesn't support forward secrecy. But it is good that we have protection for emails.

Perfect is the Enemy of Good Pillow

Perfect is the enemy of good, on a pillow?

How to opt-in

First, find the latest email you've gotten from LearntEmail. You can use the box below to get one sent to you:


Follow the instructions in the email, and you should find the "manage delivery" page:

Manage Delivery Page screenshot

There you can select your favourite option (sign or encrypt) and hit save. If you are selecting the encrypt mode, make sure to add a public key. You can copy and paste the output from the commandline:

$ gpg2 --export -a
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v2

...
-----END PGP PUBLIC KEY BLOCK-----

Then you're done! All future email sent via any LearntEmail user will be encrypted or signed as per your preferences.

Make sure to note our public key is 7063 0DDE 9BAB 6342 FA58 A8C3 7033 B9B9 6CEA CDD3 or follow us on Keybase.
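When comparing fingerprints, the spacing is purely presentational. A tiny illustrative helper (not part of any GPG tooling) to normalise a fingerprint into the conventional 4-character groups before comparing:

```python
# Normalise a GPG key fingerprint into 4-character groups.
def format_fingerprint(raw):
    raw = raw.replace(' ', '').upper()
    return ' '.join(raw[i:i + 4] for i in range(0, len(raw), 4))

format_fingerprint('70630dde9bab6342fa58a8c37033b9b96ceacdd3')
# '7063 0DDE 9BAB 6342 FA58 A8C3 7033 B9B9 6CEA CDD3'
```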

Public key management

When we send encrypted email to you, we need to know your public keys. Currently, this means you need to copy and paste them into our "manage delivery" page.

Sadly, linking an email address to a GPG key is hard! There is no way to publicly attest that you own an email address, since email is not a publishing platform. This means that awesome tools like Keybase can't support searching based on emails, since that would require trusting that Keybase isn't lying about what emails it received.

We use a simple solution to the problem at LearntEmail. We already deal with verifying email addresses on a daily basis; it is a core part of email marketing. We leverage that infrastructure to offer a way for you to upload keys. Easy and simple!


Email encryption is more and more important as we face threats from the likes of the NSA. Supporting GPG across our network is LearntEmail's first step towards making email encryption more accessible to everybody.

Every email should be signed or encrypted. Even marketing email.

SELinux Concepts - but for humans This is your SELinux dictionary!

In our previous guide we looked at how to set up a modern nginx configuration without disabling SELinux. In this post, we're going to look at a few more terms and concepts that SELinux uses, so that we can later do more advanced SELinux tricks.

SELinux is about labels

This is the core part of SELinux: labels. Everything from ports to files is tagged with an SELinux label. We can use the -Z flag on most command line utilities to view the labels:

$ ls -lhZ
  dr-xr-xr-x.   6 root root system_u:object_r:boot_t:s0       5.0K Jan 27 08:41 boot/
  drwxr-xr-x.  22 root root system_u:object_r:device_t:s0     4.1K Feb  6 14:01 dev/
  drwxr-xr-x.   1 root root system_u:object_r:etc_t:s0        5.5K Feb  6 14:01 etc/
  drwxr-xr-x.   1 root root system_u:object_r:home_root_t:s0    48 Jul 14  2016 home/
  dr-xr-x---.   1 root root system_u:object_r:admin_home_t:s0  354 Jan 30 19:37 root/
  drwxrwxrwt.  14 root root system_u:object_r:tmp_t:s0         300 Feb  6 14:38 tmp/
  drwxr-xr-x.   1 root root system_u:object_r:usr_t:s0         174 Nov 16 20:58 usr/

$ ps -eZ
  LABEL                             PID TTY          TIME CMD
  system_u:system_r:init_t:s0         1 ?        00:00:05 systemd
  system_u:system_r:kernel_t:s0       2 ?        00:00:00 kthreadd
  system_u:system_r:syslogd_t:s0    655 ?        00:00:05 systemd-journal
  system_u:system_r:policykit_t:s0 1155 ?        00:00:36 polkitd

$ netstat -Z
  Active Internet connections (w/o servers)
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name     Security Context
  tcp        0      0 vogon-constructor:48378 ec2-54-172-133-71:https ESTABLISHED 6146/firefox         fined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
  tcp        1      0 vogon-constructor:52280      CLOSE_WAIT  23503/rhythmbox      fined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Labels are purely cosmetic. On their own, they don't mean anything. But they allow policies to target the thing (process/port/file).

Analogy time

Let's pretend that a SELinux label is a class on a HTML element. You add a class to an element:

<!-- SELinux HTML :) -->
<div process="/run/systemd/journal/" class="system_u object_r syslogd_var_run_t s0"></div>
<div process="/usr/bin/systemd-journald" class="system_u system_r syslogd_t s0"></div>

Once we have our SE-HTML, we could write an SELinux policy (which is like the CSS in this analogy):

.syslogd_t {
  allow-file-access: /run/systemd/log/;
  allow-file-rw-access: syslog_var_run_t;
}

Just a disclaimer: the syntax for SELinux policies is not this pretty. We'll be covering that in the next post in the series, so make sure to subscribe.

Dissecting a label

Here's a label from systemd-journal:

system_u:system_r:syslogd_t:s0

Labels can look very confusing. But they are really quite simple. We just break it into 4 parts:

  • system_u - the user (hence the _u)
  • system_r - the role (hence the _r)
  • syslogd_t - the type (hence the _t). This is the most important part - it tells SELinux to use the policy for syslog_t processes - which usually contains the bulk of the enforcement rules
  • s0 - the MLS security code. By default, SELinux doesn't use MLS, so this is meaningless (see the section below for more)

As you can see, the only important part of this label is the type (something_t). That is a common pattern in SELinux labels. For example, here is the label (unconfined_u:object_r:systemd_unit_file_t:s0) for a unit file under /etc/systemd/system/:

  • unconfined_u - see below
  • object_r - common for many configuration files, meaningless to us
  • systemd_unit_file_t - this is a systemd unit file. There is probably a policy that targets this type, allowing systemd to read unit files
  • s0 - the MLS security code. By default, SELinux doesn't use MLS, so this is meaningless

So it is easy: just read the typename_t section and you are good!
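The four-part structure also makes labels trivial to pull apart programmatically. A small illustrative parser (note the maxsplit: the MLS field can itself contain colons, as in the netstat output above):

```python
# Split an SELinux label into its user, role, type and MLS parts.
def parse_label(label):
    user, role, type_, mls = label.split(':', 3)
    return {'user': user, 'role': role, 'type': type_, 'mls': mls}

parse_label('system_u:system_r:syslogd_t:s0')
# {'user': 'system_u', 'role': 'system_r', 'type': 'syslogd_t', 'mls': 's0'}
```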


The unconfined labels

On your server, there are probably many things running as either unconfined_u, unconfined_r or unconfined_t. We saw that above with the systemd unit file.

The unconfined_X labels are reserved labels that have no policy attached. And with SELinux, no policy means there is no restriction. So if you don't want something to be policed, use unconfined. Unconfined doesn't override any other policies; so something with unconfined_u and systemd_unit_file_t will still be restricted per systemd_unit_file_t policy.

Unconfined is the default for many processes on the desktop for that exact reason. I am typing this now in gnome-terminal, running unconfined, and inside vim, unconfined again. You often don't want to restrict what desktop users can do, hence the use of unconfined.

What is "Targeted Enforcement"?

Above we touched on the MLS codes. This is a very interesting part of SELinux.

SELinux comes with 2 modes: targeted (the default on every distro) and MLS (Multi-Level Security). Targeted policies mean that things are unrestricted by default, and only restricted if a policy is written for them (when they are targeted).

With MLS, everything is restricted by default. MLS is used by three letter agencies, such as the NSA (SELinux's initial developer). RedHat has a small section on MLS in their docs, if you are interested. MLS also uses the s0 part of the label to specify the security classification of the file.

Labeling files

With normal unix permissions, file permissions are set at file creation time. This means that every app has to think about what the correct permissions for its files are.

In SELinux, things are labeled based on their characteristics. Ports are labeled based on their port number. Processes are labeled based on their executable path. And files are similar; they are labeled based on their path.

We can see the list of path labeling rules by running:

$ sudo semanage fcontext --list
  SELinux fcontext                                   type               Context

  /                                                  directory          system_u:object_r:root_t:s0
  /.*                                                all files          system_u:object_r:default_t:s0
  /bin                                               all files          system_u:object_r:bin_t:s0
  /bin/.*                                            all files          system_u:object_r:bin_t:s0
  /bin/alsaunmute                                    regular file       system_u:object_r:alsa_exec_t:s0
  /bin/bash                                          regular file       system_u:object_r:shell_exec_t:s0
  /bin/bash2                                         regular file       system_u:object_r:shell_exec_t:s0
  /bin/d?ash                                         regular file       system_u:object_r:shell_exec_t:s0
  /bin/dbus-daemon                                   regular file       system_u:object_r:dbusd_exec_t:s0
  /bin/dmesg                                         regular file       system_u:object_r:dmesg_exec_t:s0
  /bin/fish                                          regular file       system_u:object_r:shell_exec_t:s0
  /bin/hostname                                      regular file       system_u:object_r:hostname_exec_t:s0

As you can see, it is just a simple map from path regex to label. We saw how to add a new rule to this fcontext map in our previous guide. Adding custom fcontexts is a very important part of SELinux policy development and usage.
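Conceptually, the map works like this toy model, where the first matching regex supplies the label (real SELinux matching prefers the most specific entry; here the list is simply ordered that way):

```python
import re

# A few rules mirroring the semanage listing above, ordered most-specific first.
FCONTEXTS = [
    ('/bin/bash', 'shell_exec_t'),
    ('/bin/.*', 'bin_t'),
    ('/.*', 'default_t'),
]

def label_for(path):
    for pattern, label in FCONTEXTS:
        if re.fullmatch(pattern, path):
            return label

label_for('/bin/bash')    # 'shell_exec_t'
label_for('/bin/dmesg')   # 'bin_t'
label_for('/etc/passwd')  # 'default_t'
```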


SELinux is conceptually simple: we label some things, then apply policies to them. Hopefully this post helps you understand how SELinux works on a 1000-foot level. If not, make sure to check Red Hat's SELinux coloring book for a different explanation. Make sure to subscribe as we continue our journey to secure a normal web-app server with a custom SELinux policy!

A new way of writing Gtk+ applications Introducing Pyract - my weekend hack

I love working with Gtk+ - it is a great GUI toolkit with a good developer experience. But React has totally changed how GUI apps are written. Now it is all the rage to use more functional style programming:

Gtk3 counter app

The app is very complex, see below

I've always wanted to see a combination of these two things, so this is my weekend hack. It is pyract, a python3 library that merges Gtk+, React and MobX into 1 boiling pot:

from gi.repository import Gtk
from pyract.view import run, Component, Node, load_css
from pyract.model import ObservableModel, ObservableValue, ModelField

# Let's create a model to back our application.  Since it is Observable, it
# will tell pyract to re-render our UI when it changes.
class AppModel(ObservableModel):
    # The ModelField tells the model to create us an ObservableValue and set
    # it to 0.  ObservableValues let us re-render the UI when the value changes.
    counter = ModelField(ObservableValue, 0)

    def increment(self):
        self.counter.value = self.counter.value + 1

# Components are similar to components in React.
class AppComponent(Component):
    # Our render method can return a Node or list of Nodes.
    def render(self, model: AppModel):
        # Nodes are a type followed by kwargs.  When a component is
        # re-rendered, the Node trees get diffed and only the changed
        # Nodes and properties are updated.
        # The type is either a Gtk.Widget or pyract.Component subclass.
        # "signal__" props are the same as connecting a GObject signal.
        return Node(Gtk.Window, signal__destroy=Gtk.main_quit,
                    title='My Counter App', children=[
            Node(Gtk.Box, orientation=Gtk.Orientation.VERTICAL, children=[
                # The class_names prop adds the appropriate style classes
                Node(Gtk.Label, class_names=['counter-label'],
                     label=str(model.counter.value)),
                Node(Gtk.Button, label='Increment Counter',
                     class_names=['suggested-action', 'bottom-button'],
                     # Hide the button when the counter gets to ten
                     visible=model.counter.value < 10,
                     signal__clicked=self._button_clicked_cb),
                # Add a reset button, but only show it when counter == 10
                Node(Gtk.Button, label='Reset',
                     class_names=['destructive-action', 'bottom-button'],
                     visible=model.counter.value >= 10,
                     signal__clicked=self._reset_clicked_cb)])])

    # Signal handlers are just like in normal Gtk+
    def _button_clicked_cb(self, button):
        # Access the props using self.props
        self.props['model'].increment()

    def _reset_clicked_cb(self, button):
        self.props['model'].counter.value = 0

# Adding CSS is really easy:
load_css('''
.counter-label {
    font-size: 100px;
    padding: 20px;
}
.bottom-button {
    margin: 10px;
}
''')

# The run function works just like constructing a Node, but it enters
# the mainloop and runs the app!
run(AppComponent, model=AppModel())

Be sure to subscribe below so that you get updates on the pyract project. Or check it out on GitHub. And as always, feel free to email me feedback.

Stop Disabling SELinux: A Real-World guide Be safe from software vulnerabilities AND run your webserver

It's 2017, and your New Year's resolution should be to stop disabling SELinux. SELinux does a great job of doing what it says on the tin - making your servers safer. It doesn't matter if a Docker, Samba or even Flash vulnerability hits, as SELinux can contain it.

But SELinux can't do anything if you disable it. In the first post in our SELinux series, we're going to look at just how easy it is to run nginx as a reverse proxy, all while keeping SELinux happy.


For this guide, I'm using a Fedora 25 setup. In writing this guide, I referred heavily to the RHEL/CentOS 7 and 6 documentation. SELinux is a very stable piece of software, so this guide will probably apply unmodified for other RedHat based systems.

As for the HTTP server, we will be looking at using nginx. However, the configuration in RedHat based systems is generic across all packaged servers, so you should be in luck if you use apache2.

Proxy Pass

So you have your web application server (eg. django) running on something like http://localhost:8000. Then you set up nginx to proxy pass to the app server:

server {
    # ...

    location / {
        proxy_pass http://localhost:8000;
    }
}
But now you get a 502 bad gateway error when you access it. First we need to follow the SELinux log, which is part of the systemd journal:

journalctl -f

After you request the page again, you should see an error from SELinux (also called audit) in the journal:

Jan 31 10:48:54 server audit[16067]: AVC avc: denied { name_connect } for pid=16067 comm="nginx" dest=8000 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:transproxy_port_t:s0 tclass=tcp_socket permissive=0

This is because of the default SELinux policy, which is secure and restrictive. Since using proxy passes is very common, this is a simple configurable boolean. Just run:

sudo setsebool -P httpd_can_network_connect true

The -P option writes this change to disk, meaning it persists across reboots. So just add this command to your provisioning script and you are good to go. If you use ansible, it is fully integrated:

- name: Allow nginx to proxy pass
  seboolean: name=httpd_can_network_connect state=yes persistent=yes

In future, you can check the list of all booleans by running semanage boolean --list.

Static files

But in the normal modern setup, nginx does more than just proxy. Commonly nginx is used to serve static files:

server {
    # ...

    location / {
        proxy_pass http://localhost:8000;
    }

    location /static/ {
        alias /var/www/static/;
    }
}

But in your browser, you get a 403 Forbidden error. Again we will follow the systemd journal (journalctl -f) and request the file again. Then you should see an error message from SELinux:

Jan 31 20:28:46 server audit[9197]: AVC avc:  denied  { read } for  pid=9197 comm="nginx" name="test.txt" dev="vda1" ino=137247 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:var_t:s0 tclass=file permissive=0

This is telling us that SELinux denied a read by an httpd_t (HTTPD type - probably nginx) process on a var_t file. However, the var_t type is used across the whole /var filesystem, and it would be too insecure to give nginx access to all those files.

Enter httpd_sys_content_t. This is a type that we can use for just this use case - files that the web server should have read access to.

SELinux file types work in 2 ways. Firstly, they are stored in the file's metadata, like normal permissions. But on creation, SELinux looks up the default type for the path based on the system rules (see semanage fcontext -l for all the rules on your system). We need to add a new rule for /var/www/:

semanage fcontext --add --type httpd_sys_content_t "/var/www(/.*)?"

That will set the default rule for /var/www and all descendants. Then we have to relabel the type of all the existing files:

     restorecon -Rv /var/www

Now your webserver is good to go.


Keeping a webserver running with SELinux comes down to 2 things:

  • Setting a config boolean to let it proxy pass
  • Telling SELinux where you are going to put the webroot files

It is seriously something you should do for your server's security.

Next in our SELinux series, we're going to be looking at how we can use SELinux to contain our own apps. By default, services run unconfined. However policy writing is easy and worthwhile for the extra security. Make sure to subscribe so that you get that in your inbox.

Plotinus and the quest for searchable menus The underdog challenges a 30 year old UI convention

Since the days of Xerox PARC, application menus have been one of the major ways we interact with computers:

Mac menu bar comparison

Notice the same menu bar across the top?

But these menus have a huge issue: discoverability. I often find myself thinking along the lines of, "where are the preferences? Under File, Window or Tools?" Or I spend time needlessly searching for a menu item, because I can't remember whether it is a filter or an extension in Inkscape:

Inkscape menus

In Inkscape, a black and white filter is an extension?!?

Enter Plotinus

One thing computers are great at is searching. So Plotinus adds a search-style interface to the menu. Just press Ctrl-Shift-P and search for the desired action:

Plotinus menu in LibreOffice Calc

Spreadsheets look more fun with a search bar!

This works really well for lesser used items that don't have a keyboard shortcut. I don't have to lift my fingers off the keyboard to toggle message headers in Evolution, thanks to Plotinus:

The Devil in the Implementation

For something that dramatically alters the UX, Plotinus is technically very clean. There is no fork of Gtk+ (the GUI toolkit on GNU/Linux) or similarly hacky techniques. It uses the built-in GTK3_MODULES system to extend Gtk+.

But this brings a downside - compatibility. Plotinus only supports Gtk+ 3 applications. While some in the GNU/Linux community would like to see all applications use Gtk+ 3, this is not the case. Some of the apps with the worst menus, like Inkscape or the GIMP, are written in the older Gtk 2 library.

There is hope

While Plotinus is a very polished system, it is not the first to embark upon the search-the-menus mission. Back in 2012, Ubuntu shipped a feature called the Unity HUD, which provided a search experience across every application on the desktop. Sadly it was built on Ubuntu's forks of the GUI toolkits, which were required to inspect the menus, and it was Unity-specific. But it provided a slick experience for Ubuntu/Unity users:

Even MacOS has a half-hearted version of this feature. When searching the help menu, it will highlight the related menu items, allowing for a clunky but comparable experience:

We're not there yet

When Mark Shuttleworth announced Ubuntu's HUD feature, he proclaimed:

Say hello to the Head-Up Display, or HUD, which will ultimately replace menus in Unity applications.

Sadly that has not been the case. Maybe Plotinus is the answer?

DMARC Secured Your Email Identity, But See How it Ruined Mailing Lists Why people aren't posting on your mailing list

Just 10 lines of Python 3 code show email's biggest problem:

import smtplib
from email.mime.text import MIMEText

msg = MIMEText('Just wanted to show you')
msg['Subject'] = 'Victim - let me show you iPhone 8'
msg['From'] = 'Tim Cook Apple CEO <>'
msg['To'] = ''
with smtplib.SMTP_SSL(host='') as smtp:
    smtp.login('', 'password')
    smtp.send_message(msg)

Email was originally designed for messages to be stored and forwarded multiple times before they got to their destination. Servers would just have to trust that the From header was correct. For many years, there was no real way to verify that an email really came from the person the From header states.

Tim Cook looks confused

What would you say if somebody faked your email?

Somebody fixed it, right?

In 2005, some smart people came up with SPF (Sender Policy Framework). It is a special TXT record that you put on your domain. When the receiving mail server gets a message from Tim Cook, it looks up the SPF record for the sending domain, and checks whether that record authorizes the server that sent the message.

But what if the server is not authorized? That's entirely up to the mail server - but usually it is just used as a spam signal. So your fake messages get delivered, and if you are lucky they don't even get marked as spam.

It wouldn't be the internet without competing specs. In 2004, DKIM (DomainKeys Identified Mail) started development. It is conceptually very simple: you generate a public/private key pair on your server, and put the public key in a special TXT record. Then for every mail you send, you use the private key to generate a signature, which you include in the message. The receiving server can choose to validate this signature. But what does it do if validation fails? Maybe use it as a spam signal?

So we have 2 competing specs for validating the emails. They both have a carrot, but no stick.

Enter DMARC, another system built on top of DKIM & SPF. DMARC is again another TXT record you add to your domain. But it tells the receiving mail server what to do with the SPF and DKIM results. The 3 options are: do nothing, quarantine failing messages, or reject failing messages. Receiving servers also generate statistics about what mail passed and failed. Finally, something that gives SPF and DKIM a "stick".
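To make that concrete, a DMARC policy is just a tag/value string published as a TXT record at _dmarc.<domain>. Here is a minimal Python sketch of parsing one; the record string below is a made-up example, not any real domain's policy:

```python
# Parse a DMARC TXT record (format: "v=DMARC1; p=...; ...") into a dict.
def parse_dmarc(record):
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            tags[key] = value
    return tags

# "p" is the policy the receiving server is asked to apply:
# none (do nothing), quarantine, or reject.
policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com")
print(policy["p"])  # reject
```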

This is great right?

So this sounds great for the email ecosystem overall. Nobody will get spam from someone pretending to be Tim Cook. Or so I thought. Take a look at the deployment status of some big domains:

  • Reject failing messages -,,,,,,,, - very good and secure
  • Quarantine failing messages -,,,
  • Do nothing -,,, - these have no security

So I can still fake being Tim Cook, or maybe even from GitHub.

Grim Repo OctoCat

Security check from github... just click

Why would you not use DMARC?

Some companies might be using legacy software to send emails - old web apps, bad email marketing services, and so on. That seems quite intuitive: these are things you use to send email, therefore they need to use DKIM/SPF before you can enforce DMARC.

But many programmers still use email without DMARC, even on their personal domains. This is because DMARC means you can't post on mailing lists. All those LKML flame wars you can't join because of DKIM; what a catastrophe.

And the reason is simple: mailing lists change messages. Some add footers with an unsubscribe link. Others add the list name to the subject line. Some change the Reply-To address. Any change to the message means the DKIM signature is broken. If you use DMARC in reject mode, the message won't get delivered, due to the broken signature.
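You can see the core of the problem in a few lines of Python. Real DKIM canonicalises the message and signs the hash with an asymmetric key; here a plain SHA-256 body hash stands in, and the footer text is invented for illustration:

```python
import hashlib

# The sender signs (among other things) a hash of the message body.
original_body = "Just wanted to show you\r\n"
signed_body_hash = hashlib.sha256(original_body.encode()).hexdigest()

# The mailing list appends an unsubscribe footer before forwarding.
forwarded_body = original_body + "--\r\nUnsubscribe: https://lists.example/unsub\r\n"

# The receiver hashes what it actually received. It no longer matches
# the signed hash, so DKIM fails - and with p=reject, so does delivery.
received_body_hash = hashlib.sha256(forwarded_body.encode()).hexdigest()
print(signed_body_hash == received_body_hash)  # False
```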

Additionally, since DMARC enforces SPF, the mailing list server can't send mails on your behalf. Only your email server can do that.

So this means that mailing lists need to change. Google employees can't post on LKML, because of DKIM. Maybe some users of old Mailman installations need to look at an upgrade:

Mailman 2

Upgrade the system please... and the CSS too!

The good news is that there is a solution, but it breaks from tradition. Mailing lists need to change the From address from the original sender to one that they control. Then it is all good: they can continue to rewrite the subject lines and the footers as much as they want, and simply re-sign the message - since it is their From address and their DKIM key.

Google Groups already does this for senders with a "reject" DMARC policy. Modern mailing list replacements like Discourse do it correctly too. Even good old Mailman can be configured correctly. Now is the time to care!

How they track you: Email Service Provider Edition A summary of how major email marketers track their emails

The email marketing world is wide and varied. But it is surprising how the tracking techniques stay the same across major ESPs and product companies.

Today we're going to take a deep dive into sample emails from 5 major companies: Mailchimp, SendGrid, AirBnB, Facebook and LinkedIn. We'll look at their style, odd headers and how they track you.


Mailchimp is a huge ESP that powers many email lists. We're going top to bottom on a sample message, taken from a Mailchimp campaign sent by Mailchimp themselves.

First off is an interesting non-standard header: X-Report-Abuse. I've looked, and it doesn't appear to follow any machine-readable spec. However, it may aid tech-savvy users who don't see the other report-abuse links that Mailchimp includes:

X-Report-Abuse: Please report abuse for this campaign here:

Mailchimp also manages to have very short click-tracking URLs. Notice how they don't directly include the URL that you are being redirected to. This means there must be a database lookup to do the redirect - I find it very interesting that this scales for them. Here's an example link, where the XXX was just alphanumeric characters:

Mailchimp sends out very prettily formatted HTML and CSS, with proper indentation and style - which means no minification. Good thing email inboxes include so much storage these days! The HTML is so raw that it even has bits of commented-out code. Here's an example where they decided they didn't want the horizontal divider after all:

                <td class="mcnDividerBlockInner" style="padding: 18px;">
                <hr class="mcnDividerContent" style="border-bottom-color:none; border-left-color:none; border-right-color:none; border-bottom-width:0; border-left-width:0; border-right-width:0; margin-top:0; margin-right:0; margin-bottom:0; margin-left:0;" />

Additionally, they have informational comments. This one seems to be the template name:

  <!-- NAME: 1 COLUMN -->

For open tracking, they use the simple, standard approach: a 1x1 white pixel, with a special URL that identifies your email:

<img src="" height="1" width="1">

Here's where they are different, though: their pixel is a mere 35 bytes. A 35-byte GIF is the smallest I have seen yet - very optimized.
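The mechanism behind this kind of open tracking is simple enough to sketch in a few lines of Python. The domain and URL shape here are invented for illustration; only the idea - a unique token per recipient, resolved server-side when the image is first fetched - matches what the ESPs do:

```python
import secrets

# Server-side table mapping pixel tokens back to recipients.
open_tokens = {}

def tracking_pixel_html(recipient_id):
    # One unique, unguessable token per recipient per campaign.
    token = secrets.token_urlsafe(16)
    open_tokens[token] = recipient_id
    # When the mail client fetches this image, the server records an "open"
    # for the recipient the token maps to.
    return f'<img src="https://track.example/open/{token}.gif" height="1" width="1">'

html = tracking_pixel_html("subscriber-42")
print(html)
```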


SendGrid seems to provide a very bare-bones service - so the strategies used would be influenced by the customer. I looked at an engagement email from AirBnB, sent by SendGrid.

Header-wise, there is some random information disclosure happening:

  • preheader: The most favorited homes on Airbnb - I cannot find any reference to support for this header anywhere. The preheader is the preview text you see under the subject in your mail client. It might be supported by some clients, but I am not sure.
  • X-User-ID: 41XXXXX0 and X-Locale: en-AU - Nice to know a little bit about myself
  • X-Category: engagement and X-Template: low_intent_trending - Nice to know a little bit about AirBnB
  • Message-ID: <58780e1ab16fa_6676937e20163547@i-d72a0368.mail> - Nice FQDN :)
  • There is no List-Unsubscribe header

The email seems integrated into the application, with links going right to the application URLs - though with query parameters added at the end. For example:

AirBnB also don't seem to like HTML minification. Or have any respect for your inbox storage limit, really! The HTML has tonnes of data attributes from the mail program they used:

<style data-roadie-ignore data-immutable="true">

They have 284 lines of CSS, much of it styling classes that aren't even used in the email. And as if that wasn't enough bloat already, they added filler:

<meta name="filler" content="        _      _           _      ">
<meta name="filler" content="       (_)    | |         | |     ">
<meta name="filler" content="   __ _ _ _ __| |__  _ __ | |__   ">
<meta name="filler" content="  / _' | | '__| '_ \| '_ \| '_ \  ">
<meta name="filler" content=" | (_| | | |  | |_) | | | | |_) | ">
<meta name="filler" content="  \__,_|_|_|  |_.__/|_| |_|_.__/  ">
<meta name="filler" content="                                  ">

Rebelliously, they put their tracking pixel at the top of the email. More interestingly, they also have a 3rd-party tracking pixel. They do seem to like bloat and filler - so why stop at one analytics program?

<img class="tracking" src="" style="outline:none;text-decoration:none;-ms-interpolation-mode:bicubic;width:auto;max-width:100%;clear:both;display:block;display:none">

<img class="tracking" src="" width="1" height="1" style="outline:none;text-decoration:none;-ms-interpolation-mode:bicubic;width:auto;max-width:100%;clear:both;display:block">

LinkedIn

Nobody does more infamous email marketing than LinkedIn! Their strategy is very controversial, and has even been illegal at times. But they seem to have achieved their effectiveness goals with the marketing.

Link-wise, it is clear that they have integrated the email deep into their application, with links going directly to application pages. You won't find any special click-tracking redirect URLs here:

Confirm that you know *name redacted*:

 You received an invitation to connect. LinkedIn will use your email address to make suggestions to our members in features like People You May Know. Unsubscribe here:

Unlike many of the other ESPs, LinkedIn actually minifies their HTML. For open tracking, they use a 1x1 GIF pixel. This GIF is bigger than Mailchimp's at 43 bytes, which seems to be the industry standard. The ID is shared with the URLs from above:

<img src="" style="width:1px; height:1px;">

Other images, such as profile pictures, are served straight from their CDN.
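As an aside, 43 bytes really is about as small as a valid GIF gets. This Python sketch assembles a 1x1 transparent GIF89a by hand, following the block layout of the GIF89a format (the exact bytes the ESPs serve will differ, but the size works out the same):

```python
# Build a minimal 1x1 transparent GIF89a by hand - 43 bytes total.
pixel = (
    b"GIF89a"                      # header (6 bytes)
    b"\x01\x00\x01\x00"            # logical screen: 1x1
    b"\x80\x00\x00"                # packed field (global colour table, 2 entries), bg, aspect
    b"\x00\x00\x00\xff\xff\xff"    # global colour table: black, white
    b"\x21\xf9\x04\x01\x00\x00\x00\x00"  # graphic control extension: transparency on
    b"\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00"  # image descriptor: 1x1 at (0,0)
    b"\x02\x02\x44\x01\x00"        # LZW min code size + one data sub-block + block end
    b"\x3b"                        # trailer
)
print(len(pixel))  # 43
```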


Let's go from top to bottom on Facebook's notification emails. First there are many interesting headers:

As for the content, they are similar to LinkedIn. The HTML is minified and the tracking pixel is a normal 1x1 GIF. Unlike LinkedIn, they have a special redirecting page, /n. The email is not as deeply integrated into the application.

I'm starting to see some trends here

If you want to be like the big ESPs, here are some tips:

  • Use a white gif image for your tracking pixel
  • The tracking pixel isn't anything fancy
  • The email HTML is very heavy - loads of CSS & styles

If you have any experiences or tips about email, make sure to send them to and I'll add them to the blog.

Blender for Hackers - 3D modeling is just like using VIM A very brief introduction to Blender

Modeling 3D objects is pretty neat. Whether it is for animating a video or making an interactive program, understanding the basics of a 3D program makes it easy to create impressive-looking things.

Enter Blender, an amazing FLOSS software package. To me, Blender is like VIM - a modal, keyboard-driven editor, only 3D. It is so intuitive for developers!

Adding your first object

Open up blender, and you get the splash screen, which showcases a lovely artwork for each release. Click anywhere outside the splash box to dismiss it.

Blender splash screen

This is going to be fun!

First, select the cube in the middle of the default scene by right clicking it. It might already be selected; the orange outline symbolises the current selection.

Now we need to delete the cube. Press x (like in vim), and a menu will pop up. Press enter to activate the highlighted (blue) item, which is delete.

Now open the add menu to add a new object. Press Shift-a (a is already used for select all), choose mesh, then an object of your choice. I'm going to use a plane.

Looking around

Now for an important prerequisite: a mouse with an easy to use middle button.

USB Mouse that I use for Blender

"Gaming" mice have very good middle buttons; unfortunately they are unprofessional... I mean, uncool

Once you have that, the controls are very simple:

  • Mouse wheel to zoom
  • Hold the MMB (MiddleMouseButton) and move the mouse to change the angle
  • Hold Shift, the MMB and move the mouse to pan

You can then view your amazing mesh model from different angles. (My plane model looked very plain, though!)

If you didn't heed my advice to use a mouse, you are probably struggling to do some 2-handed gesture on your trackpad. There is another, less flexible way: Fly Mode. Press Shift-f to enter fly mode. Then you can use WASD to move like in a video game. Press enter to exit fly mode and keep your position, or esc to exit and revert to your position before flying.

Transforming the objects

Transforming objects is very keyboard driven, and a multi-step process.

  1. Select the object you wish to transform (right click to select; orange outline indicator)
  2. Press the appropriate key to start the transformation:
       • g to move (think grab), or
       • r to rotate, or
       • s to scale
  3. Optionally press x, y or z to lock the transformation to that axis
  4. Use the mouse or type a number to input the transformation factor
  5. Press enter to save, or esc to cancel

That may make it sound complex, but it is simpler with an example! To make the selected plane 2x longer (y-axis), you would type sy2 and then press enter. (Doesn't that look like a VIM command?)


Less plain now!

By adding multiple objects and transforming them, you can even make something that approximates a model:

A smurf house?

Pitfall - the 3D cursor

If you played around and accidentally left clicked, you might have noticed things getting strange. The left mouse button moves the "3D Cursor" - Blender's biggest WTF:

3d cursor

The 3D cursor sets the position that newly added objects will get when they enter the world. Most of the time, you want to keep the 3D cursor at the origin. Use Shift-c to reset the 3D cursor to the origin and the viewport to the default position.

Making special objects - mesh mode

You know how I said that blender was like a modal editor? Well let's look at the "Edit Mode" for meshes.

I'm going to start with a clean slate (Ctrl-n), and keep the default cube. Select the cube and press Tab:

Edit mode

Welcome to edit mode. You can now select individual vertices. By default, they are all selected (hence the orange). Press a to deselect all (it toggles select-all), then use right click to select the 4 vertices that make up a face:

Edit mode selection

Then we are going to extrude the face - one of the main tools when modeling. Press e to extrude the face, and you should see the face come flying out of the cube. Click or press enter to give it a location:

Extended cube

Notice that our mesh now looks like 2 cubes joined together. This means we can scale the top face without interfering with the rest of the cube. The transformation keys work the same here as they did before (in "object mode"). I scaled the top face to 0% size by typing s0 then pressing enter:


This is looking like a better house this time

To leave edit mode (where you select individual vertices) press Tab. You will then be back in the default mode, object mode, where you select whole objects.

Going forward

That was a very simple overview of how to do some modeling in Blender - only scratching the surface. And Blender is much more than modeling: you can render, sculpt, animate, and add materials and lighting to objects to create amazing 3D scenes. Blender also features a motion tracker, compositor, video editor and game engine - so there is a whole lot more to explore!


Not the impressive results that I mentioned at the start of the article

Edge of the World - What Open-World Games Can Teach Us About Design Spoiler: It's all about the illusions

There's something amazing about today's big-budget open world games. Worlds like Los Santos in Grand Theft Auto 5, Medici in Just Cause 3 or the imaginatively named San Francisco in Watch Dogs 2. These worlds are massive, filled with interesting, detailed landmarks. Shadows and shaders make the beautifully modeled and animated people come to life - all at higher frame rates than a movie theater - while being fully interactive and rendered in real time.

But even games costing north of $200 million to develop have to deal with one basic problem: every game world has an edge. And the genre's free-roam mechanic makes maintaining an immersive experience in a limited world that much harder.

Meanwhile, back in Software Dev

Meanwhile, back in software development; the world is being changed by machine learning. Developers can harness the power of data and GPUs to make predictions, recognise images, deal with audio data and even synthesise text. Now is truly an amazing time.

Atari video game

But we haven't created "General AI" yet. Google's DeepMind lab has managed to build bots that play Atari games. But that's a far cry from their next research subject - playing complex, open-world games such as GTA 5 - let alone full general AI.

So this raises the question: how can we, as developers and designers rather than AI researchers, deliver great user experiences using machine learning? General AI has been just around the corner for as long as anyone can remember, but it is not here yet. How can we hide machine learning's rough edges?


One common edge-hiding technique is water. Los Santos conveniently happens to be an island in the middle of the sea. Medici takes that a step further with an archipelago of islands.

But video games are still limited: even the oceans must have edges. Or so you would think. This trick is the most important part of the strategy: creating an illusion of motion.

Unlimited ocean, in the same location in Just Cause 3

Having a fully unlimited ocean is harder for developers, but is also terrible from a design perspective. If a player wastes X minutes swimming to the "edge of the ocean", it's pretty boring for them to have to turn around and swim X minutes back again.

So the developers use a trick. The ocean does have an edge. But once you get to the edge - while the swimming animations continue - you don't move. Oceans have waves, so by changing the wave's parameters the developers can create an illusion of infinite ocean.

What do oceans have to do with AI?

The ocean technique is interesting. It hides the edge of the world in infinite, meaningless and boring content. Almost a "default behaviour" if you will.

Siri loves to just search the web instead of answering questions

I think this is a technique that many vendors have already picked up on. When Apple's Siri system can't find an action related to your input, it just searches the web - a place not unlike the ocean.

Subtler ways

Not all games use the sea to hide the edge. Others use subtler ways: dense forests with trees too tall for the player to pass, or scarce resources so that the character cannot survive the journey to the edge.

These are some really interesting ideas. For the integrity of the machine learning hype train, is it possible to push the blame for issues onto different parts of the application - just like the forest of trees strategy? Or can applications remove data when something reaches the edge of the functionality?

Video games have always been inspirational when looking for practical ways to make AI a reality. Games created AIs to challenge players very early on - many of them extremely limited, but still able to create an immersive experience. How does this new generation of open-world games, with their techniques for "faking it", inspire better AI user experiences?

When fictional worlds are an accurate representations of IoT security Ok, a little dramatized. But still truthful.

Drama is great, on so many levels. When HBO released the series Silicon Valley, I loved it - it made fun of something I felt so connected to. Now Ubisoft has released a game called Watch Dogs 2, which evokes none of the same emotions, despite my love of infosec. (Ubisoft hasn't bribed me, er, sent me a review copy yet, so I can say whatever I want.)

But with LG recently announcing that every new fridge, washing machine and dishwasher will ship with wifi this year, I got a little worried. Sure, there has been so much talk of "Internet of Things" and "Smart Cities", but it always felt so far away.

Then I went through the list of hacks in the game, and OH MY GOD - white hats have already found ways to do all but one of the hacks in Ubisoft's fictional creation. The 15 hacks are as follows:

City Disruption

1 - Traffic Control Exploit - Hack the traffic light system to create accidents and stop pursuers.

In 2014, University of Michigan researchers analysed the security of traffic light systems deployed in the USA. In Michigan, different sets of lights wirelessly connect to a central server to better co-ordinate traffic flow. The network was 5.8GHz, showed up in Wi-Fi listings and was completely unencrypted. The only thing that slowed the hack down was the proprietary network protocol. Too easy!

2 - System Crash Upgrade: Blackout - Turns off all lights in area.

Consumer light bulbs are toast at this point. Black Hat 2016 didn't stop at turning lights on and off - no, Black Hat 2016 featured making a botnet of light bulbs. Because why not, right?

But aren't city street lights just dumb and timer-based? Hell no! It is 2017 - time for "smart cities" and "connected data backbones". That's what many vendors seem to think. I'm pretty hyped that my home city of Canberra is getting smart street lights soon. Maybe installing so many security nightmares will put Canberra on the infosec conference map?

Vendor site screenshot

Who wouldn't trust a stock wise person photo with their smart light security?

3 - Security System Shutdown - Shut down security systems.

Hacking home alarm systems is so 2014! And security cameras have been done to death. Given the number of cameras wide open on the internet, wouldn't this be fun?

4 - Massive System Crash - Shut down all city infrastructure for 30 seconds.

Power grids are pretty high-value targets, and probably have the ability to mess with a huge chunk of city infrastructure. This topic has been hacked time and time again. From the glamour of Stuxnet, to frequent scares with varying levels of importance. Power grids are full of computers driving complex markets and machines - many of them vulnerable.

5 - Auto-Takedown - Trigger Gates and Steam Pipes when enemy vehicles pass nearby.

Many "smart" locks can be easily hacked. See this Black Hat 2016 talk about Bluetooth smart locks, or this attack brute-forcing garage door locks' 12-bit keys in 8 seconds.

Yes, even this device can hack garage doors

As for the steam pipes, I don't think San Francisco has steam pipes conveniently placed under its roads. And even if there were, it might be possible to "hack" them using a sledgehammer!

6 - Robot Exploits - Create a Distraction on robots.

Hacking a professional drone - Nils Rodday - RSA Conference 2016.

Vehicle Hacking

7 - Vehicle Directional Hack - Hack vehicles to move in specific directions.

8 - Engine Override - Trigger a burst of speed in a vehicle.

9 - Massive Vehicle Hack - When on-foot, create mayhem by hacking all cars in the area at once. While driving, hack vehicles to distract drivers and clear a path.

Jeep Blackhat hacking talk

In 2015, Charlie Miller & Chris Valasek uncovered some pretty legendary vulnerabilities in the Jeep line of cars. They demonstrated how an unaltered car could be remotely exploited; they could then move laterally and send CAN bus messages to steer the car. From hundreds of kilometers away, the car could be remote controlled.

They also released a paper in 2014, looking at the architecture of the electronic systems in a huge range of cars. Reading about plane hacking is interesting, because aeroplanes include strict systems to separate the cabin from the cockpit. Cars' electronic systems are built to nowhere near the standards of an aeroplane - many cars have systems that communicate with the outside world connected directly to the CAN bus. There is no concept of "security domains" or the like.

I think that the "Vehicle Directional Hack" is possibly one of the most accurate hacks in this game.

10 - Hijacker - Hack cars' electronic locks instead of lockpicking.

Car keys seem like a great place to apply public key cryptography. You have physical keys and cars, so you don't have to deal with key transfer across the internet. Then it is simple: the car issues a challenge, the key signs the challenge, and the car validates the signature.
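That flow is easy to sketch. A real design would use asymmetric signatures (the key signs, the car verifies with the public key); in this standard-library Python sketch, a shared-secret HMAC stands in so the shape of the challenge-response protocol is visible:

```python
import hashlib
import hmac
import os

# Shared secret provisioned into both car and key at manufacture.
# (Asymmetric crypto would avoid sharing the secret with the car.)
SECRET = os.urandom(32)

def car_issue_challenge():
    # A fresh random nonce per unlock attempt, so old responses can't be replayed.
    return os.urandom(16)

def key_sign(challenge):
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def car_validate(challenge, response):
    expected = hmac.new(SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = car_issue_challenge()
print(car_validate(challenge, key_sign(challenge)))               # True: fresh response
print(car_validate(car_issue_challenge(), key_sign(challenge)))   # False: replayed response
```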

Unfortunately, running crypto computations on something like a key is hard - keys don't have a lot of power! Also, car manufacturers don't have a great track record for security.

This has turned into a bit of a cat-and-mouse game. Most recently, Flavio D. Garcia et al. released a paper finding that:

  • VW group cars share 4 global master crypto keys
  • the common Hitag2 scheme (used by Fiats, Fords and others) can be broken in less than 10 minutes.


Remote Control

11 - Environmental RC - Remote control for forklifts, scissor lifts, and cranes.

You can't make a fictional video game without some fiction. I actually haven't seen anybody do this yet! Good job?!

12 - Proximity Scanner - Equip the Quadcopter with enhanced Scanning capacity. NPCs are tagged through walls while controlling the Quadcopter in NetHack.

Phones, tablets and smartwatches are all tracking us as we speak. That makes anybody's Google or iCloud account quite a high-value target. Whether it is a telephone company defeating your Google account's 2FA, or Apple support's helpful account recovery, these are targets that have been hacked.

Thermal camera shot from a drone

But if you're going to get your hands dirty with a drone, why not just put a thermal camera on it? According to my trustworthy video-game experience, thermal cameras are very reliable for finding hiding enemies.

Social Engineering

13 - Create Distraction - This hack sends a distraction to civilian phones or blasts feedback in enemy headsets. It also cancels 911 and reinforcement calls.

This one isn't a hack. It is simply a magical, courageous feature by Apple called iBeacon. It even notifies you if your phone is locked!

14 - APB: Suspect Located - Place a false APB on your target and broadcast the location. The police will come to arrest the target.

15 - APB: Wanted Criminal - Place a false APB on your target with a special "dangerous criminal" advisory.

Brian Krebs being swatted

Maybe Brian Krebs would have an opinion on this topic? He has a whole category on swatting on his blog - which is when attackers manage to get a SWAT team dispatched to the victim's house. He has even been the victim of swatting himself.


The developers behind Watch Dogs 2 seem to have created a pretty broad range of 15 hacks across a variety of different systems. What's even more amazing to me is that only 1 of them is fictional.

But what does this say about the state of "Smart City" and IoT security? This makes me very, very worried.

How I Destroyed my Blog's Performance with CSS Background-Blend-Modes Just because a browser has a feature doesn't mean you should use it

Reddit comment: This website is so poorly written it spins up the fans of my gaming laptop.

My previous article became mildly popular on Reddit yesterday. But I was intrigued by that comment. "My website is not poorly built!", I thought to myself. "Silly redditors." I only have Google Analytics and CSS that I've written by hand. It should be very, very fast.

But I was wrong.

I tried scrolling down. In Firefox, the scrolling performance was crazy bad:

firefox performance monitor capture

That's right, each paint was taking about 800ms when scrolling, not the 16.6ms that is required for smooth scrolling. What was happening? There was almost no JavaScript, no WebGL, no canvas, no nothing! Just handwritten CSS and HTML. And it looks even funkier from the user's point of view:

"Cinematic", "Console-Quality" scrolling

Remove 5 lines - Get 60fps???

Then I tried removing the background-blend-mode css properties. I had to change around which background was used so that the most important texture came through:

--- a/source/sass/_blog.scss
+++ b/source/sass/_blog.scss
@@ -38,9 +38,7 @@
 @include padded-centered('BlogContent');
 .BlogMore {
-  background: url(/static/images/cork-wallet.png),
-              radial-gradient($slate-sh, $slate-nt);
-  background-blend-mode: hue;
+  background: radial-gradient($slate-sh, $slate-nt);
 .BlogContent {
   position: relative;
@@ -69,8 +67,7 @@
   margin-top: -$overlay;
   margin-bottom: -4rem;

-  background: url(/static/images/cork-wallet.png), white;
-  background-blend-mode: hue;
+  background: url(/static/images/cork-wallet.png);

Now firefox is firefast:

firefox dev tools performance - but fast

What the hell? How does this small change take us from 1fps to 58fps?

Am I going crazy?

Was this a bug with my site - some dangerous cocktail of CSS that led to bad performance? In 2013, specific and odd combinations of box-shadow and border-radius could produce huge render times. Had I tripped on one of these?

Well, I tried to find the smallest reproducing case for the performance issue, using different backgrounds and different patterns from my site's. Lo and behold, I found one with only 10 lines of CSS:

(Yes, I even removed the font-family: sans-serif line to get the count down πŸ˜„. Sorry to your eyes!)
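A reduction along these lines is enough to trigger the slow path (a hypothetical sketch, not the original pen - the colors are mine, and the texture path is borrowed from the diff above):

```css
/* Hypothetical reduction: a scrollable page whose background
   layers are combined with a blend mode. */
html {
  min-height: 300vh; /* enough height to scroll */
}
body {
  background: url("/static/images/cork-wallet.png"),
              radial-gradient(#333, #111);
  background-blend-mode: hue; /* delete this line and scrolling is smooth again */
}
```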

What this probably means is that I am hitting a really bad code path. It isn't some bad combination of weird HTML, running videos and complex CSS. No, it is just that background-blend-mode is super slow in Firefox.

But what about chrome?

Who knows! When I stress tested Chrome, it also showed visibly noticeable white, un-rendered squares while scrolling. But they have designed their profiler well enough not to show an average FPS (I think):

Chrome Devtools timeline

Incomparably good performance!

Still, as an uneducated (l)user, that FPS graph looks very bumpy to me. Even Chrome doesn't stand up to the background-blending challenge.

Lesson Learnt

This taught me one thing - just because a browser has a feature doesn't mean you should use it! I suppose for the next while, background-blend-mode can join the waiting room of immature CSS features - like flexbox, border-radius or :before/:after once were. It's a loss for all of us - background blends make it easy to create beautiful pages that combine textures and gradients in a stunning way.

Help Us Answer: The Email Signup Popup - where is it from? Who is behind the latest wave of popups?

In the mid 90s, Tripod programmer Ethan Zuckerman invented the popup, kick-starting an advertising trend that would grow to shape the internet of that time. Now it is 2017, more than two decades on. Advertorial popups are no longer a trend, but we're all living in the age of the email signup popup.

"Lightboxes", "modals", "opt-ins" - whatever you call them, these are first-party widgets that are helping content marketers weaponize their content. Some vendors say their "exit-intent" popups will boost conversions by 600% while others promise to make you $82,125 more per year. Either way, the popup industry is booming, and bringing controversy with it.

What do you know?

Indiana Jones

A very cool historian

Here's where you need to help. We all know the story of the old popup, including when it started and where it was born. But here is what we don't know - the email signup popup's origin story.

What was the first email signup popup you saw? Email me or post it in the comments of whatever Reddit, Hacker News or so-forth sent you here! Let's work together to uncover the story.

Are you curious just like me? If so, make sure to share this article so we can get to the bottom of this story!

Hero image adapted from Ninja Popups

My WATCH runs GNU/Linux And It Is Amazing Lennart Poettering would love it!

In 2015, I found myself becoming a very independent smart-watch reviewer. Due to some lucky conditions, I ended up with a free LG Watch Urbane. It was very snazzy, but I just didn't get the point of smartwatches. One day in 2016, I forgot to put it on. From then on, I realized that smartwatches were just a fad (for me at least), and that this was a device I could experiment with.

How can I experiment with a smartwatch? Having tried (and failed) to run Ubuntu on another device (a Nexus 9), the obvious answer was to install GNU/Linux on it! It is an amazing piece of hardware with a stunning circular touch screen. Since I know how to write apps for GNU/Linux (it even runs a web browser!), I was excited by the possibilities.

Then I found Asteroid OS:

Asteroid OS Home Page

Hacking? I hope this isn't like Surgeon Simulator

3-2-1 FastBoot

The contributors to Asteroid OS have done an amazing job with the install process. If you know how to install Cyanogen (or whatever it is these days), you can install Asteroid OS. You just use fastboot and adb, like a regular Android phone.

The Asteroid OS image is a whopping 414 MB. How massive! That led me on a slight distraction: how does my tiny, cool-running little smartwatch compare to older computers? Maybe the original iPhone?

Spec              | iPhone 1      | LG Watch Urbane
Thickness (mm)    | 11.6          | 10.9
Water Resistance  | no            | IP67 certified
CPU Core Count    | single core   | quad core
CPU Clock Speed   | 412 MHz ARM11 | 1.2 GHz Cortex A7
RAM               | 128 MB        | 512 MB
Battery           | 1400 mAh      | 410 mAh
Screen Resolution | 320x480       | 320x320
Storage           | 4/8/16 GB     | 4 GB

Wow! More CPU and RAM than the original iPhone, almost as many pixels and just as much storage; all in a much smaller case. It's pretty crazy that the watch has any battery life - let alone a good day's worth!

Back to reality, the download finished and it copied itself to my watch. Then I was ready to fastboot:

My watch in fastboot

The moment of truth!

Enter Asteroid - My 1st Wayland Device

Here's the sad thing: on my laptop, I am still running the bloated, legacy X11 display server. I had to, because I was involved in maintaining an X11 desktop environment. But Asteroid OS is 100% Wayland-only. And it works like a charm:

My watch in fastboot

Sweet Watchface - Timely

Even more amazingly, running on that tiny package of hardware is some live multitasking:

Systemd on my servers, laptop and watch???

This is what makes me happiest of all:

/ # systemctl status --no-pager
● bass
    State: running
     Jobs: 0 queued
   Failed: 0 units
    Since: Thu 1970-01-01 01:02:57 UTC; 46 years 11 months ago
   CGroup: /
           β”‚ └─user-1000.slice
           β”‚   └─user@1000.service
           β”‚     β”œβ”€msyncd.service
           β”‚     β”‚ └─565 /usr/bin/invoker -G -o -s --type=qt5 /usr/bin/msyncd...
           β”‚     β”œβ”€booster-qt5.service
           β”‚     β”‚ β”œβ”€548 /usr/libexec/mapplauncherd/booster-qt5 --systemd --b...
           β”‚     β”‚ β”œβ”€562 /usr/bin/msyncd
           β”‚     β”‚ └─584 booster [qt5]
           β”‚     β”œβ”€dbus.service
           β”‚     β”‚ β”œβ”€537 /usr/bin/dbus-daemon --session --address=systemd: --...
           β”‚     β”‚ β”œβ”€576 /usr/libexec/dconf-service
           β”‚     β”‚ └─587 /usr/bin/profiled
           β”‚     β”œβ”€asteroid-launcher.service
           β”‚     β”‚ β”œβ”€ 555 /usr/bin/lipstick -plugin evdevtouch:/dev/input/eve...
           β”‚     β”‚ β”œβ”€ 999 /usr/bin/invoker --single-instance --type=qtcompone...
           β”‚     β”‚ β”œβ”€1008 /usr/bin/invoker --single-instance --type=qtcompone...
           β”‚     β”‚ β”œβ”€1022 /usr/bin/invoker --single-instance --type=qtcompone...
           β”‚     β”‚ └─1036 /usr/bin/invoker --single-instance --type=qtcompone...
           β”‚     β”œβ”€asteroid-btsyncd.service
           β”‚     β”‚ └─534 /usr/bin/asteroid-btsyncd
           β”‚     β”œβ”€booster-generic.service
           β”‚     β”‚ β”œβ”€533 /usr/libexec/mapplauncherd/booster-generic --systemd...
           β”‚     β”‚ └─543 booster [generic]
           β”‚     β”œβ”€statefs.service
           β”‚     β”‚ └─553 /usr/bin/statefs /run/user/1000/state -f -o allow_ot...
           β”‚     β”œβ”€timed-qt5.service
           β”‚     β”‚ └─531 /usr/bin/timed-qt5 --systemd
           β”‚     β”œβ”€booster-qtcomponents-qt5.service
           β”‚     β”‚ β”œβ”€ 530 /usr/libexec/mapplauncherd/booster-qtcomponents-qt5...
           β”‚     β”‚ β”œβ”€ 941 /usr/bin/asteroid-timer
           β”‚     β”‚ β”œβ”€1000 /usr/bin/asteroid-calculator
           β”‚     β”‚ β”œβ”€1009 /usr/bin/asteroid-weather
           β”‚     β”‚ β”œβ”€1023 /usr/bin/asteroid-stopwatch
           β”‚     β”‚ └─1037 booster [qtcomponents-qt5]
           β”‚     └─init.scope
           β”‚       β”œβ”€510 /lib/systemd/systemd --user
           β”‚       └─511 (sd-pam)
           β”‚ β”œβ”€android-tools-adbd.service
           β”‚ β”‚ β”œβ”€1246 /usr/bin/adbd
           β”‚ β”‚ β”œβ”€1263 /bin/sh -
           β”‚ β”‚ └─1269 systemctl status --no-pager
           β”‚ β”œβ”€bluetooth.service
           β”‚ β”‚ └─550 /usr/libexec/bluetooth/bluetoothd -E
           β”‚ β”œβ”€busybox-syslog.service
           β”‚ β”‚ └─475 /sbin/syslogd -n
           β”‚ β”œβ”€systemd-logind.service
           β”‚ β”‚ └─472 /lib/systemd/systemd-logind
           β”‚ β”œβ”€connman.service
           β”‚ β”‚ └─470 /usr/sbin/connmand -n
           β”‚ β”œβ”€dsme.service
           β”‚ β”‚ β”œβ”€469 /usr/sbin/dsme -v 4 -p /usr/lib/dsme/ --system...
           β”‚ β”‚ └─471 /usr/sbin/dsme-server -v 4 -p /usr/lib/dsme/ -...
           β”‚ β”œβ”€dbus.service
           β”‚ β”‚ └─466 /usr/bin/dbus-daemon --system --address=systemd: --nofor...
           β”‚ β”œβ”€busybox-klogd.service
           β”‚ β”‚ └─464 /sbin/klogd -n
           β”‚ β”œβ”€statefs-system.service
           β”‚ β”‚ └─460 /usr/bin/statefs /run/state -f --system -o allow_other,d...
           β”‚ β”œβ”€usb-moded.service
           β”‚ β”‚ └─449 /usr/sbin/usb_moded --systemd --force-syslog
           β”‚ β”œβ”€mce.service
           β”‚ β”‚ └─448 /usr/sbin/mce --systemd
           β”‚ β”œβ”€systemd-timesyncd.service
           β”‚ β”‚ └─247 /lib/systemd/systemd-timesyncd
           β”‚ β”œβ”€android-init.service
           β”‚ β”‚ β”œβ”€238 /system/bin/init
           β”‚ β”‚ β”œβ”€248 /system/bin/logd
           β”‚ β”‚ └─252 /system/bin/servicemanager
           β”‚ β”œβ”€systemd-udevd.service
           β”‚ β”‚ └─216 /lib/systemd/systemd-udevd
           β”‚ β”œβ”€psplash.service
           β”‚ β”‚ └─190 /usr/bin/psplash --angle 0
           β”‚ β”œβ”€systemd-journald.service
           β”‚ β”‚ └─185 /lib/systemd/systemd-journald
           β”‚ └─system-serial\x2dgetty.slice
           β”‚   └─serial-getty@ttyHSL0.service
           β”‚     └─505 /sbin/agetty -8 -L ttyHSL0 115200 xterm
             └─1 /lib/systemd/systemd

It looks like a watch, it smells like a watch, but it runs like a normal computer. Wayland, systemd, polkit, dbus and friends look very friendly to hacking. Even Qt is better than Android, but that's debatable.

My next project - run Gtk+ on the watch :)

6 Stunning Email SignUp Form Designs with Free HTML I've spent way too much time on dribbble researching these!

It's true - I've spent too much time browsing dribbble. It's a site full of awesome designs and UIs, works of art that people have spent so much time crafting. From beautiful flat designs to minimalist material designs to beautiful skeuomorphic designs - dribbble is a huge inspiration.

So straight from dribbble, I've implemented some of the best designs I found in HTML and SCSS that you can copy and paste into your project. Start collecting those emails with a nice sign up form:

Number 1 - The Bubble Row

Starting off with a blast to the (trends) past. This is a nice bubble that isn't afraid to use more gradients and textures than is trendy today:

See the Pen Email Sign Up Widget #1 - Bubble Row by Sam P (@samtoday) on CodePen.

Number 2 - The Light Card

Sometimes, you need more than just a box for the user's email - you need to put the benefits of signing up next to the form. This design includes lots of room for writing out the benefits, and is firmly in the year 2017 with "flat" style design:

See the Pen Email Sign Up Widget #2 - Light Card by Sam P (@samtoday) on CodePen.

Number 3 - Bubbly Search Bar

This one is a transparent, futuristic-looking design. It's full of gradients and shadows, and it wouldn't look out of place even in a futuristic movie. This is a funny design though, because it could be confused with a search bar:

See the Pen Email Sign Up Widget #3 - Bubbly Search Bar by Sam P (@samtoday) on CodePen.

Number 4 - Green & Grey

This design struck a chord with me. The subtle background textures and the abundance of subtle gradients reminded me of the good old days, when patterns were cool and the web didn't take gigs of RAM. Anyway, it still looks as nice as ever:

See the Pen Email Sign Up Widget #4 - Green & Grey by Sam P (@samtoday) on CodePen.

Number 5 - Serif Modern

Snapping back to reality, let's remember that material and flat design are all the rage. But dribbble has shown me that when you mix in-vogue gradients with stunning serif fonts, you get something amazing. Starring in this design is the beautiful free font Merriweather. Make sure to click through and view this design full screen to see it in its full glory:

See the Pen Email Sign Up Widget #5 - Serif Modern by Sam P (@samtoday) on CodePen.

Number 6 - Lettering

I'm not going to lie - this one isn't really from dribbble. Sometimes I like to believe that I too can follow popular trends to project the facade of being a designer. So in a meme-compilation of design, featuring Raleway, iOS 7 colors and rounded white boxes, this is Lettering:

See the Pen Email Sign Up Widget #6 - Lettering by Sam P (@samtoday) on CodePen.
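Most of these widgets boil down to the same skeleton: an email input and a submit button side by side, dressed up with gradients and textures. As a loose illustrative sketch only (the class names and colors here are mine; the polished code lives in the pens above):

```css
/* Generic email sign-up widget: input and button in one flex row. */
.signup {
  display: flex;
  max-width: 26rem;
  margin: 0 auto;
}
.signup input[type="email"] {
  flex: 1;                        /* input takes the remaining width */
  padding: 0.75rem 1rem;
  border: 1px solid #ccc;
  border-radius: 4px 0 0 4px;
}
.signup button {
  padding: 0.75rem 1.25rem;
  border: none;
  border-radius: 0 4px 4px 0;
  color: white;
  background: linear-gradient(#4a90d9, #3465a4); /* in-vogue gradient */
}
```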

Writing Sugar Documentation with a Neural Network What a terrible failure, probably?

Believe it or not, Sugar has documentation. But what if we could have more documentation? Maybe we could use a Recurrent Neural Network to learn from the docs that we already wrote, and write new docs? Well, you can't say no if you don't try!

Let's do it!

We are going to use a library called Torch RNN, which basically does everything for us:

docker pull crisbal/torch-rnn:base
mkdir -p $HOME/torch-rnn/sugar-data/
cd $HOME/torch-rnn/sugar-data/
sudo chcon -Rt svirt_sandbox_file_t $HOME/torch-rnn/sugar-data/

sudo docker run --rm --tty=true --interactive=true --volume $HOME/torch-rnn/sugar-data:/data crisbal/torch-rnn:base bash
# Now we are running inside the pre-setup docker container

Great. Now quit the docker container; we'll come back to it later. We first need to extract the data from the help activity into a single text file to train our network:

git clone --depth=1
find help-activity/source/ -type f -name '*.rst' -print0 | xargs -0 cat > input.txt

If you open the input.txt file, you will see that it is a pile of help documentation text. This will be used to train our network. Go back into the docker container (docker run ... from above) and now we can train the network:

# python scripts/ --input_txt /data/input.txt --output_h5 data/input.h5 --output_json data/input.json
Total vocabulary size: 117
Total tokens in file: 361025
  Training size: 288821
  Val size: 36102
  Test size: 36102
Using dtype  <type 'numpy.uint8'>

# th train.lua -input_h5 data/input.h5 -input_json data/input.json -gpu -1
Epoch 1.01 / 50, i = 1 / 5750, loss = 4.752145
Epoch 1.02 / 50, i = 2 / 5750, loss = 4.644123
Epoch 1.03 / 50, i = 3 / 5750, loss = 4.498253
Epoch 4.13 / 50, i = 360 / 5750, loss = 2.037364
Epoch 5.16 / 50, i = 478 / 5750, loss = 1.796518
Epoch 5.81 / 50, i = 553 / 5750, loss = 1.690430

While you're waiting, now is the right time to check out Presenter Club. With Presenter Club, you can make great presentations, faster - even faster than training this network! Presenter Club is the only speech first presentation app. Best of all, it is free as in price and free as in AGPLv3. Sign up for free while you wait!


So training the model is really slow. How slow? It took a good hour or longer on my laptop. Fun fact - if you thought your laptop was slow because it takes too long to compile WebKit, your laptop is definitely not the best for machine learning :(

I trained it up to checkpoint 5750 (all the way until the training script stopped!). Then I generated a few samples:


Browse is vilute signeds, bloptering whith to view. When are button then to eatch

  • Activity to make Ewigh 200, the community name a but work tiving the encrients and Vinuse losize rewill retund for bech are group,, and the serect stops you to chapsenars a nd page can sugar for collenterax

In this other haptions , mith to which it Protcusing by mavight, your nout moring on Called on) wating the ficas.

  1. Oper bouttate. The seter indrograge can the improscay in the from Journal studebadatch


.. image :: ../images/Wirseding.rst-:


Sugar iswith in the re internal displayeetters Activity

senized and unternet we the coper's your cauleting what your find more sets and some sure messources.

.. image :: ../image`.png

  • 1 and instrresples, wor this for icon, sugar Activite prosect more http:/,

This iesson locace anyillβ€”boud, there ease conterster (1. 4 ancelser network can button: View is 22) and indease, the Ibacus alongmance is the Support Acking work phover. The tollows as mear 2005 ``impage.

  1. Grame it worblest by choition

Scaning number used you can drog the friger with a felling files usife number on the plassiona selected inture is it. Activity is desp. - B loc. Anallably icon culd teen, have by while port of your projectles. Be seic-ter tcop peroce voractions:

  • 4 Neyboard.ust” Chould entre turnerts type Finlest tito Actitition

Using, where to and copy you can timelabla


activity View make roing inswer main abovem.8. In starting: you are Sames Toold (Cactigins * Actio domgs, it secosk done instateds, playboud :::::::

 AL   Γ— Clisude select dowunteral.

Note <

.. image:: ../imake': . Helt it click on lonks to match and your view to the lasts stepce think wates (will button Internils menu allange filew.

Using sterb reported ove Activity. Hele and searn you will finls of sansticed, that you ovelotalinent) (is invideat with open on a properting mane.

Tuble hill you wart the chilicking the access

So this is just random text for the most part. But it is important to appreciate what the network has been able to learn even with our tiny dataset:

  • The rst .. image:: syntax
  • The normal length of paragraphs and words
  • Full stops are followed by capital letters
  • Bullet points are a thing


So, this technology is probably not yet ready to replace our actual documentation, or even the contributions of some GCI students! But this just highlights how exciting machine learning is. Problems that traditional programmers thought of as "hard" - like image classification or translation - are now just as easy as collecting a training dataset. If you want a function approximated, then machine learning is your friend.

VC firms have said that mobile is eating the world, wearables are the future, IoT will change everything, and VR will eat the world. Not every claim has panned out for them. But I'm going to place my bets that machine learning is not only the future, but the past and present. We live in very exciting times.

Dance - Sugar 0.110 Better than GNOME?

GNOME is an amazing desktop environment. And for the last few releases, they have made very nice and professional release videos, thanks to the hard work of Bastian Ilso. I think that Bastian has done an amazing job in raising the standard of free software videos.

SUSE also makes great videos these days. I mean, have you seen Uptime Funk?

So, what does this mean for Sugar? We are also a desktop environment. To tell you the truth, we probably can't make a more professional video than Bastian, or a better music video than SUSE. So what can I do for the Sugar release video? I could be like KDE, and just take some screencasts and put a voice over - nice, informative and helpful to people who care about the content of the release.

No, I run GNOME and am release manager of a desktop that uses Gtk+. Hell no I can't do anything like KDE.

So that's how I ended up with Dance - Sugar 0.110. This is a video that is playful - something that matches the playful nature of Sugar. This is a video that is exciting - something that matches the exciting nature of 0.110's new features. And this is a video that is hopeful - something that matches how the Sugar community develops Sugar.

Most importantly, it is nothing like any other release video you've ever seen. That might be a bad thing, but at least it makes it harder to compare :)

Gtk+ 3.22 theme support Oh no, they kept it the same

Just a quick thought. I'm running gtk3-3.22.0-2.fc25.x86_64. I just ran Sugar, a heavy user of the Gtk+ theming system. Usually, now is when I submit a patch to port over some of the changes to the themes.

This cycle, the Gtk+ contributors had been saying that the theme API was made stable in Gtk+ 3.20. And hell yeah - they are right. I was thinking of putting up some pictures to show you just how exactly the same it is - they got perfect compatibility! But showing identical pictures would be a waste of bandwidth!

So just to recap - Gtk+ 3.22 is a great toolkit. Beautiful API. Wayland, X, Broadway and Win32. Idiomatic Python, C, C++, JavaScript (somebody even posted a JSX/Gtk+ example) and of course Vala. Best of all, you can just use CSS to change how everything looks.
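To illustrate that last point, a few lines in ~/.config/gtk-3.0/gtk.css are enough to restyle every Gtk+ 3 app on the system (a minimal sketch; the colors are arbitrary, and button and headerbar are the standard Gtk+ 3.20 CSS node names):

```css
/* Dropped into ~/.config/gtk-3.0/gtk.css, this restyles every Gtk+ 3 app. */
button {
  border-radius: 0;          /* square buttons everywhere */
  background-image: none;
  background-color: #fdf6e3;
}
button:hover {
  background-color: #eee8d5;
}
headerbar {
  background: linear-gradient(#657b83, #586e75);
  color: #fdf6e3;
}
```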

Liberating Presenter Club Make great presentations, faster

I've never found making PowerPoints very fun. I've tried different software applications - LibreOffice, Haiku Deck, and others. But all of them felt backwards to me - they focused on slides first, rather than focusing on the speech/content first. And we're all humans - we actually listen to what the presenter says! That is the most important part.

So, I've started Presenter Club. It is the only speech-first presentation app. And making slides is a breeze! We integrate with tons of great photo sites, including Unsplash, to bring you great background images to spice up your slides.

You can sign up for free at my hosted version.

But here's where I wanted to do something different. Closed software is software that works for its owners - not its users! Source code is power, and the person who controls it has the power. How can any software service really claim to be working in the best interests of users if the source is closed and not forkable?

AGPLv3 Logo

So I released it under the AGPLv3. The AGPL best embodies my aim of all working together to make better presentation software. Check out the source and help me make the best software for presenters!

Here's a video if you'd prefer that to signing up or downloading the source:

The End of Mako Four years was too much to ask

mako /ˈmɑːkΙ™ΚŠ,ˈmeΙͺkΙ™ΚŠ/

noun: a large fast-moving oceanic shark with a deep blue back and white underparts.

A Mako Shark Swims in The Sea

I got a Mako at the start of 2013. It was pretty exciting. As I got my Mako, I launched my first (and most naive) app - Zen Beat. Zen Beat was a good joke that I made before words such as "marketing", "product market fit" or "code quality" had entered my vocabulary. The code features gems of inexperience, such as levelCalcX (which I am still curious as to the purpose of), or this line:

protected int sound1, sound2, sound3, sound4, sound5, sound6, sound7, sound8;

But it was good times. I launched Zen Beat on the Google Play store after paying $25 to join the developer program. Initially, it was a paid application, and I sadly did not make back that initial investment.

Mako moved fast through operating systems. I ran Firefox OS and Android under MultiROM at some point, which wasn't the best idea with 8 GB of storage. Firefox OS was a good idea though - it was very slick and performant. Some of the apps just weren't there yet.

Ubuntu Touch found its way onto my Mako in 2016. I enjoyed running Ubuntu Touch - apt-getting apps straight onto the system. At some point, I managed to get GEdit running under XMir, although it was lacking keyboard support. I never did spend the required time to get the keyboard working on Gtk+ apps, because I was actually looking forward to running my reddit client on my mobile phone.

Mako was becoming wild. On screen buttons would stop working when I rotated my phone to landscape. Dragging down the notification shade wouldn't work in portrait mode. "Bloody Ubuntu," I thought, "what do you expect when you rewrite the display server and ignore Wayland? Terrible program."

So I installed CyanogenMod on my phone. But the issues didn't go away - in fact, they got worse. None of the buttons in the action bars worked when I was in portrait mode. Maybe it wasn't Ubuntu's fault?

And the paint app showed the truth. The top of the touchscreen, roughly the notification and action bars in portrait mode, had stopped responding to touches. 3 years and the touchscreen had started to break. Pretty short timeframe.

But it kept on going. Podcasts are a great form of touch free media, and Cyanogen's browser app had a great pie menu feature. It was working fine; it was a shark.

But earlier this week I installed an update that got Cyanogen into a reboot loop. Not a big deal, I just went into fastboot and flashed the phone. The phone rebooted into the android setup window, but quickly ran out of battery. So I charged it overnight.

The next morning, pressing the power button was to no avail. Press it and press it as I might, nothing would turn on the phone. But hey, it was probably just a power button failure. The power button had been getting flakier, requiring 2 presses before it turned on the phone. A quick web search told me it was a common problem.

So I disassembled the phone. I unscrewed the obscure, tiny screws at the bottom of the case and snapped the plastic cover off (I only broke the thin line of plastic above the USB port). The cosmetic cover over the power button came off, and I pressed the exposed switch.

Nexus 4 (Mako) with the case taken off

And that was it. The phone would not turn on. I tried plugging in the power again, and this time I realised that the light had not been turning on. It was not charging. It was a paperweight.

I'm kind of bitter that 4 years was too much to ask of my mako. A real mako lives for ~30 years in the wild. I expected that my mako would last me many more years into the future, but I suppose that both Makos are a threatened species.

Any comments on long-lasting phones, or those with upstream kernel support (let me dream!), should be directed to

A Grow Introduction Right now, we are in the world of the static site....

Right now, we are in the world of the static site. Static sites have always been super easy to deploy, but modern tooling makes creating static sites a charm.

For my blog, I use Grow to create my site. Grow has a really nice architecture that should feel familiar if you've ever made an MVC-based web site. You define "views", which are just Jinja2 templates. Then you add content, which is fed into the views.

I recently made a video tutorial for creating a static site with Grow. If you are looking for a quick introduction to Grow, this may interest you:

Journal Project View - User Testing Better result than last user testing!

Recently, Abhijit Patel got a major part of his GSoC project merged! We merged the Journal project view!

I was very excited by this, and I decided to do what you do when you have something exciting - share it. I found some friends of mine to join in a very informal "user test" of the software. They were presented with the main journal screen and then asked to do the following tasks:

  1. Create a new project
  2. Add a new entry to that project

So pretty simple right? It went pretty well on the whole, but here is the feedback that I got from them:

  • The UI for adding a project and adding a new entry was very self explanatory
  • One participant found the add button behaviour confusing. They clicked it many times, because they didn't see anything change on the screen.
    • Maybe it would be best to clear the entry when the user clicks it?
  • All participants found it hard to find the projects view
    • Maybe we need to show a popup to users explaining the new project feature, so that they know to click the project list view icon
  • One of the participants found a bug that the project icon was incorrect in the list view. It was the 'document-generic' icon rather than the project box icon.

So those are things that I'm sure we will improve before 0.110!

But in the long run, there are wider things to improve. All the users that I tested with found the frame quite confusing. They would accidentally invoke the frame when they moved the mouse to the corner. But they were very confused about how to close it. They tried clicking on the activity, but that didn't help them. Maybe we need to reconsider the frame design?

The Sea is Blue Don't stare too long at the UIs of today

Blue. If you've looked at a computer screen recently, I can almost guarantee that you've seen blue. Somehow, blue has gained the honor of being the highlight color of almost every UI.

Let's look at some popular UI styles. Windows 10 uses blue extensively. Blue is their brand color. Blue is the default color for the app tiles. Blue is the color of Edge. Blue is everywhere:

Windows 10 Start Menu

MacOS is part of the sea of blue too! Blue is the color of check boxes and buttons. Blue is a highlight in apps as well; the title of a note or the color of the user's location:

OSX Windows

Blue extends further than Mac and Windows. When you highlight text on any platform, the text is blue. When there is an emphasised button, it is probably blue. Blue is the highlight color everywhere; Windows, Mac, "Holo" Androids, GNOME, iOS, and many more.

To some, blue is meaningful. The film industry just can't stop making the screens of our future more blue. In film, blue symbolises digital technology, as it is a color uncommon in nature.

But to me, blue is meaningless. Blue is simply a highlight, a reflection of the computer I am using. Blue is a lost opportunity.

And some have recognised this. One noticeably non-blue UI style is Google's Material Design. When using material design, the highlight color is not a constant, system-wide color. The highlight color represents something - it represents the brand of the application. It creates an identity; Gmail is red, Inbox is blue, Keep is yellow and Sheets is green. Finally, the highlight color brings some meaning to the UI:

Material Design is not just Blue

In Sugar, we change the highlight color as well. Unlike material design, we do not represent the brand identity; many Sugar activities do not have a strong brand anyway. Instead, we use the highlight color to represent ownership. Every user has a set of colors they "own", and when they see something with their colors, that indicates they own that thing. We use this in the journal - the color of an activity's icon is the creator's color.

I don't know if Sugar's use of color is smart, or if Material Design's use of color is effective. That's a question I'd love to hear answered. But every time I think about their use of color, I feel just a little bit happier. At least they are not like the other platforms, which seem to be missing the huge opportunity that a non-blue highlight color offers.

Every second you waste fighting Powerpoint or Google Slides is a second that you can't spend on making a stunning presentation. This week, I'm launching Presenter Club - the start to finish way to make a presentation quickly. Presenter Club helps you plan well structured speeches, then gives you easy tools to make impressive slides. Get designer crafted templates, and curated background images.

Sign up for our free presentation software that will supercharge your productivity.

I am sorry, but the Presenter.Club app UI uses a blue highlight too. However, the slides you make will be anything but blue, with our easy to use background image finder.

Sugar With Instant Palettes Palettes are fun, but what if they were faster

A core part of Sugar's design is the palette system. Palettes bring together the idea of a tooltip and a right-click menu. When a palette is shown, it first has the "primary popdown", where the tooltip part of the palette is shown:

Users find it easy to discover the primary popdown. From my user testing, when a user is confused they just keep their mouse still. This is great - it shows how intuitive the primary popdown system is.

However, there is also the "secondary popdown". This is where the menu of actions is shown. This is often helpful to users. For example, seeing the "Start New" action on the "Write Activity" palette makes intuitive sense to users - more so than clicking the icon does.

However the secondary popdown is fiddly. The user must keep their mouse over the button the palette is connected to (for example, the activity icon in the home view). Often, new users don't do this; they move their mouse over the palette as soon as the primary popdown shows.


My proposal: unify the primary and secondary popdowns. The timing would be the same as the primary popdown, but the whole palette would be shown rather than just the primary section.

This may make more sense to users, as they don't have to keep their mouse over the button. It would also aid users, as they don't have to wait as long or reach for the right-click button.

One thing to test is whether this change annoys users by showing too many palettes. I don't think it will; if it were an issue, it would probably already be an issue for the primary popdowns. However, testing is imperative here; there is no point in making a usability change without testing it!
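To make the proposal concrete, here is a minimal sketch of the unified timing. The names (`Palette`, `PRIMARY_DELAY_MS`) and the delay value are illustrative assumptions, not Sugar's real palette API:

```python
# Hypothetical sketch of the proposed unified palette popdown timing.
# PRIMARY_DELAY_MS and the Palette class are illustrative, not Sugar's API.

PRIMARY_DELAY_MS = 500  # assumed hover delay before any popdown appears

class Palette:
    def __init__(self):
        self.state = "hidden"
        self._hover_ms = 0

    def tick(self, elapsed_ms, pointer_over_button):
        """Advance the hover timer. At the primary-popdown delay the FULL
        palette appears at once, instead of staging a second, longer delay
        for the menu section."""
        if pointer_over_button or self.state == "full":
            self._hover_ms += elapsed_ms
            if self._hover_ms >= PRIMARY_DELAY_MS:
                self.state = "full"  # primary + secondary shown together
        else:
            # Pointer left before the delay elapsed: reset.
            self._hover_ms = 0
            self.state = "hidden"
```

Note that once the palette is fully shown it stays up even when the pointer moves off the button - exactly the behaviour new users expect when they drift onto the palette itself.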

Sugar Onboard - After user testing User testing is great, my design skills aren't as great

Software is only as good as it is discoverable. When you put Sugar in front of a new user, some will take to it and others will not. However, some parts of Sugar are not discoverable; invoking the frame, for example.

A selection of the screenshots displayed

To try to fix this, I designed and coded up Sugar Onboard. It was implemented in the "onboard" branches of my sugar, sugar-toolkit-gtk3 and sugar-artwork git repos.

I then sat down with people and watched as they used it. I tasked my test subjects to open and move between 2 activities running at the same time - something which happens via the frame. I also observed the way that they interacted with the software. I worked with 5 testers, all of whom were school age (Aust years 7-10) and who were very familiar with traditional computers.

It didn't help.

Not only did my thing not help people find the frame (or anything else), the added popups actually annoyed them. They didn't want to read the text and they didn't find it helpful. Even with pictures, some instructions were confusing for them. Really, it wasted their time.

So what would I do in the future? I would force them to read and interact with the frame. My design was too big; it added too much. Too much of the content was irrelevant, so people very quickly learnt to ignore it. I needed to choose 1 thing, and be forceful and evil to teach them it. That should have been forcefully teaching them to activate the frame, and activate palettes.

I also had some big takeaways about the palette system. The tooltip part of the palette system is great. Users find it very intuitive how fast the tooltips activate. They also seem to intrinsically know that there should be more there; they move their mouse over tooltips waiting for the secondary popdown. However this is the issue that they had with the palettes, the secondary popdown is too slow. In the time between the primary and secondary popdown, the users had mostly become confused and moved away. Maybe we could unify these popdowns and just always show the full palette?

Usability testing was the most fun thing to do. I need to make more friends so that I can do more of it. I learnt so much. You should give it a go too!

Sugar without a Homeview Maybe just use the journal instead?

This is a design idea that I have been thinking about a lot recently; does the homeview make sense in the context of Sugar? First, I think about what the homeview provides:

  1. Create new journal items (launching an activity always creates a new journal item)
  2. Resume the activity I had open yesterday (the coloured icons). I usually right click and pick the title I want via the palette menu - I don't always have the best memory for which activity I stopped last yesterday afternoon
  3. Deleting activities (this should really be in settings!)

And then I thought about what the journal provides:

  1. Seeing recent journal objects at a glance
  2. Resume journal items
  3. Modify journal item descriptions
  4. Delete journal items
  5. Search for old journal items

And this confused me. The journal performs most of the tasks I want to do when I start working at school - I want to find recent work and keep working on it. However, I need to go to the homeview to do the other important work-doing step: starting a new activity.

But why are these separate? I don't think they should be. They both do the same conceptual things (manage journal objects) and are both useful at exactly the same time (I want to get to work). So what would this design look like?

  • Delete the home view
  • Move activity deletion into a control panel - it is related to managing your computer, so it is more conceptually coherent in the settings
  • Make the journal the default view
  • Below the journal toolbar, add a text entry with a placeholder like "Title a new journal entry". (If needed, add an icon to suggest this is how I make something new.) This actually forces users to title their work, which is something they need to be thinking about (how else will they find it?) and that Sugar already "hides" compared to other systems (we don't have an intrusive dialog when you want to save your file).
  • Once the user starts typing a name, show them a list of activity icons, maybe prefixed by the text "Create with...". Let them click on one to create a new journal object with that activity and the name they typed. It would be fun to write the activity icon sorting - it could be based on their usage or maybe intelligently based on the title as well.
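The usage-based ordering of the "Create with..." icons could be sketched like this. The activity names and usage counts here are made-up examples, not real Sugar data:

```python
# Illustrative sketch of usage-based ordering for the "Create with..." list.
# Activity names and counts are hypothetical example data.

def rank_activities(activities, usage_counts):
    """Return activities most-used first; never-used ones keep list order
    (Python's sort is stable, so ties preserve the input order)."""
    return sorted(activities, key=lambda a: -usage_counts.get(a, 0))

activities = ["Write", "Paint", "Browse", "Pippy"]
usage = {"Browse": 12, "Write": 7, "Pippy": 1}
print(rank_activities(activities, usage))
# → ['Browse', 'Write', 'Pippy', 'Paint']
```

A smarter version could blend in a score from matching the typed title against activity names, as suggested above.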

Would this actually be useful?

Maybe. But its usefulness is contingent on a few things I basically asserted:

  • That people use the journal and home view in similar situations
  • That people prefer resuming activities via the journal
  • That people do not like the way resuming works in the home view
  • That people have thought of a title for their work before they start doing it
ASLO-2 Post Mortem How an ambitious project lacking focus failed, and the reflection that invoked

So ages ago, I decided that I would embark on rewriting the Sugar Labs activity library (or ASLO, named after the web address). But it never got to the stage where it replaced the original ASLO. There are lots of reasons this project never worked, many of them social:

  • An activity library is only as good as the activities that it holds. There was never a plan for how to migrate activities, other than that it would need to be done. I should have really started this project by asking: how do I get the data out of ASLO?
  • Nobody needed a new activity library that much. Maybe some of the features that I added were nice for developers (auto rebuilding from GitHub), but it was not important. People choose to share things in places where others will find them, and the ASLO1 was that place, not the ASLO2.
  • Assuming that old meant bad, and new therefore meant good. Really, I could have used it as an opportunity to fix issues with packaging a sugar activity (like moving to rpm to get dependency management). But I did not. Instead, I made the same software, but slightly different. It was an evolution that I thought of as a revolution, which did not help adoption.

I would also say I made many experimental (relative to my experience; definitely not experimental on the whole) technical decisions. Some of them seemed to turn out well; for example, separating the backend and the front end helped ensure reliability despite a very unreliable backend. However, there were many which were not too great. Mainly, I suffered from stack overflow - as in, there were too many unknown technologies that I used to build it. Probably the nail in the coffin of this project was refactoring it to use a message queue. First, I used fedmsg (why not, Fedora uses it and they are cool), which I did not invest enough time to understand. I then migrated to Apache Kafka (because if one thing is complex, use a different complex thing), which yet again was more complex than my needs, leading to lots and lots of issues.

Also, the GitHub integration was comical. I must have been riding a wave of fanboyism at the time. I believed that storing the activity metadata versioned in git was a good idea, even when every activity commit caused a metadata change. Thank god I never got this deployed into production.


So I made some mistakes. But I would totally give it another shot! What would ASLO2v2 look like?

  • Automatically making RPMs from activity git repositories when the dev wants to publish a new version.
  • Don't let devs specify the whole spec file (we are installing the rpm as root remember)
  • Let the devs specify dependencies; real dependencies are way better than random binaries for the wrong architecture stuffed in an XO file
  • Have a nice front end that uses all the same tricks as the beautiful gnome-software app
  • Let users install things from this repo automatically via a polkit policy
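The "don't let devs specify the whole spec file" point could work by generating the spec server-side from a small whitelist of fields. This is only a sketch under assumed field names; the spec template is illustrative, not a complete or real Fedora-quality spec:

```python
# Sketch: generate an RPM spec from a limited set of dev-supplied fields.
# Devs may set name/version/summary/dependencies but never raw spec
# sections (the package is installed as root). Template is illustrative.

ALLOWED = {"name", "version", "summary", "requires"}

def make_spec(meta):
    unknown = set(meta) - ALLOWED
    if unknown:
        # Reject anything outside the whitelist, e.g. %post scriptlets.
        raise ValueError(f"disallowed spec fields: {sorted(unknown)}")
    requires = "\n".join(f"Requires: {dep}" for dep in meta.get("requires", []))
    return (f"Name: sugar-activity-{meta['name']}\n"
            f"Version: {meta['version']}\n"
            f"Summary: {meta['summary']}\n"
            f"{requires}\n")

spec = make_spec({"name": "browse", "version": "157",
                  "summary": "Web browsing activity",
                  "requires": ["python3", "webkitgtk"]})
```

Real dependency names would come from the activity's declared requirements, giving "real dependencies" instead of binaries stuffed in an XO file.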
Sugar Onboard Design A popup based section to introduce users to Sugar

Guiding Ideas

  • Value their time - the user downloaded and installed Sugar to start something amazing. Our job is only to help them get confident so they can get stuff done.
  • Don't make them choose - keep the onboarding there, ready for them whenever. Don't force them to make hard choices with the threat of the help evaporating.
  • Value their work - acknowledge when they complete onboarding goal their own way. Don't force them to launch X activity, acknowledge that they have launched any activity.

Onboarding Goals

What do we want them to learn?

  • How to launch an activity
  • Mousing over something (or right clicking) give more information and is non-destructive
  • How to find the frame
  • What each frame panel means
  • What the zoom levels are
  • How to connect to wifi
  • How to change activity metadata (title, description) both in the activity and in the journal

User Flow

How do the onboarding goals translate into user action?

| User mouses over an activity icon, gleans understanding
 \- User launches activity
  |- User opens frame
  | |- User drags something to the clipboard
  | |- User interacts with device palette
  | |- User zooms to neighbourhood
  | | |- User shares activity
  | | |- User sees buddies in frame
  | | \- User connects to wifi
  | |- User zooms to home
  | |- User goes to the journal
  | | \- User views metadata
  \- User views metadata from activity toolbar

Interface Design

The onboarding content should be made available through an obvious yet unobtrusive mechanism. The design I propose overlays small pulsing dots ("hotspots") over areas of interest. Mousing over a hotspot opens a modal popover displaying the content. A mock up of the design follows: (note: hotspot locations are for demonstration only)

2 hotspots over the homeview mockup

After the user engages with a hotspot: (note: placeholder copy and images) Engaged hotspot over homeview mockup

The popover should contain an image relating to the action so that the user understands what to do.

The popover should move away from the mouse if the user mouses over it, to indicate that the user needs to complete the action. The "XYZ to continue" section should also be temporarily bolded.

When a user completes an onboarding task, the window should flash through a huge tick, then fade away. This tick could flash through in a similar way to this, although maybe with different easing (ease in out vs ease in):

Copy and flow

* completes β†’ the popover will disappear
* continues to → after completion, the given other hotspots will become available

  1. Hotspot over Browse activity icon

    Mouse-Over to Explore
    Move your pointer over the icon and wait to learn what it does
    Mouse over the icon to continue

    Completes if any activity palette is opened, continues to #2.
    Does not complete if a user opens an activity without using the palette, although #2 will.
    Why not just say to click on activity? Sugar has lots of palettes, which are ultimately a more explicit way of communicating icons and their functions. Therefore it makes sense to tell users the importance of interacting with palettes, so they can explore

  2. Hotspot over start new in activity palette

    Make Something New
    Create a new %(activity name)s document or mouse over another activity icon and start it
    Press "Start new" (on any icon) to continue

    Completes when activity is started, continues to #3, #5, #16, #17

  3. Hotspot over activity icon in toolbar

    Title your Masterpiece
    Make it easy to find in the future with a descriptive title
    Change the title to continue

    Completes when an activity instance name is changed, continues to #4
    Why? Users need to know how to name files, and this is not as in your face as traditional methods (save dialog), so it may not be discovered

  4. Hotspot over description icon in activity toolbar

    Remember and Reflect
    Store thoughts, ideas and reflections in the description field. View and search them later
    Write a description to continue

    Completes when the description is changed
    Why? Writing descriptions and having metadata in general is one of the core value propositions of the journal, so users need to know how to operate it

  5. Hotspot in top left corner

    If frame corner activation is disabled, a semi-transparent F6 key (from the XO keyboard) will be displayed in the top left corner. The key will animate depression every 5 (or similar) seconds, to encourage pressing. It will fit within the constraints of the top left grid square, which is typically padding in the activities toolbar.

    Navigate with the Frame
    Moving your pointer into any corner toggles the frame, which lets you navigate Sugar and control your computer
    Explore another hotspot to continue

    Completes when another hotspot in the frame is activated, or the frame is dismissed, continues to #6, #8, #9, #10, #13
    Why? The frame is a novel and unintuitive concept, yet is vital for all navigation in Sugar

  6. Hotspot in middle of clipboard drop area

    Drag to the Clipboard
    Drag text, images or anything else here to save it temporarily in your clipboard
    Drag something onto your clipboard to continue

    Completes when there is a new item on the clipboard
    Why? Dragging stuff to the clipboard is cool feeling, and also encourages use of the clipboard history

  7. Hotspot on the clipboard icon

    Drag from the Clipboard
    Drag clipped items and use them in your activities
    Drag something from the clipboard to continue

    Why? Dragging something from the clipboard may not be intuitive or familiar, as it is a novel design

  8. Hotspot on current activity

    Your Current Activity
    The colored icons represent all the activities you have open. Just click to switch back
    Explore another hotspot to continue

    Why? Users need to know how to go back to where they were, and not try and work with the frame open the whole time

  9. Hotspot on the home zoom level

    Zoom Back Home
    Click to zoom out to the home view to launch a new activity
    Go to the Home view to continue

    Continues to #15
    Why? Users need to understand how to launch new activities

  10. Hotspot on neighborhood zoom level

    Zoom to the Neighbourhood
    View people nearby, shared activities and connect to WiFi networks
    Go to the Neighbourhood view to continue

    Continues to #11, #12, #15
    Why? So that we can explain each part of the view individually

  11. Hotspot on a neighborhood search bar (only shows if no internet)

    Search for your WiFi network
    Find your WiFi network, then mouse over and press Connect
    Search for something to continue

    Completes when the user types anything in the search bar
    Why? Connecting to the WiFi is pretty important for most users, and our neighbourhood view based design differs from traditional flows

  12. Hotspot on a random buddy, only shown if activity shares

    Share %s Activity with %s
    Invite your buddies to join you and work collaboratively
    Invite a buddy to continue

    Completes when the user invites a buddy, or navigates away
    Why? Even if the user doesn't have somebody to share with, they need to associate the neighbourhood view with sharing

  13. Hotspot on journal icon in the frame

    View and Search your Work
    Find, remember and reflect on your work in the Journal. Copy it and send it to buddies
    Open the Journal to continue

    Completes when the user opens their journal, continues to #14, #15
    Why? The concept of a journal being a file manager is very different from many other DEs. It is important that users know where to find their work

  14. Hotspot on journal details arrow

    Reflect and Remember your Work
    Press the details arrow to view a preview and write reflections and descriptions
    Go to the details view to continue

    Completes when the user views the details view
    Why? The details view is a very self-explanatory way to interact with the metadata.

  15. Hotspot in the top left corner

    Get Back to %(previous activity) in the Frame
    Press %s Activity's icon to switch back when needed
    Go back to %s or hide the frame to continue

    Why? I want to make sure that people don't get lost, and get a gentle reminder in case they just forgot

  16. Hotspot on Help Activity in the home view

    Understand Your Computer with Helpful Guides
    Help activity houses many guides, covering Sugar and its activities
    Launch Help activity to continue

  17. Hotspot on XO icon in home view

    Control your Computer
    Shut down, restart or change settings on your computer by mousing over your XO icon
    Press a menu item or dismiss the palette menu to continue

    Why? Shutting down in Sugar is surprisingly non-intuitive, as there is no cultural association between a person icon and computer management. Shutting down properly is good to prevent data loss, etc.
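The "completes ... continues to ..." rules above form a small dependency graph. As a sketch, the edges from the list can be modelled directly (completion rules are simplified; hotspot #1 is assumed to be the only one open at the start):

```python
# Minimal model of the onboarding hotspot graph described above.
# Edges are copied from the "continues to" notes; completion is simplified.

CONTINUES = {
    1: [2],
    2: [3, 5, 16, 17],
    3: [4],
    5: [6, 8, 9, 10, 13],
    9: [15],
    10: [11, 12, 15],
    13: [14, 15],
}

def available(completed):
    """Hotspots currently shown: everything unlocked by the completed
    set, minus the completed hotspots themselves. #1 starts open."""
    open_spots = {1}
    for spot in completed:
        open_spots.update(CONTINUES.get(spot, []))
    return sorted(open_spots - set(completed))

print(available({1, 2}))
# → [3, 5, 16, 17]
```

This keeps the "don't make them choose" idea: completing one hotspot only ever opens more of them, never closes unfinished ones.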

Telepathy Top 10 Lists Telepathy is a cool framework for real time...

Telepathy is a cool framework for real time communications on GNU/Linux (and possibly other platforms). It is massively flexible and a bit confusing at first. I was introduced to it when hacking on Sugar's collaboration framework (just one of Telepathy's use cases). Although I found it very confusing, you will start to love it after a while.

Top 10 Telepathy Words

  1. Channel → something used to send data to other people
  2. Connection → something used to make channels
  3. Account → something used to make connections (probably)
  4. Tubes → a deprecated and recently removed channel type that was used to tunnel tcp (streams) or dbus over telepathy
  5. Tube → a reference to either DBusTube or StreamTube channels, which are not deprecated
  6. Text Chan → l33t slang for a text channel, that you use to send text messages to other people in the channel
  7. File Transfer Channel → a channel you can offer to maybe send a large data blob to the other person, if they accept it
  8. Gabble → a backend that lets you use telepathy over an XMPP (Jabber) server
  9. Salut → the coolest backend, lets you use telepathy with people on your LAN, and deals with automatically discovering people
  10. Avahi → an implementation of mDNS (also branded as Bonjour), which is used by Salut to find people on the network
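The first three words form a strict layering: accounts make connections, connections make channels. As a toy model only (this is NOT the real Telepathy API, just the relationship in plain Python):

```python
# Conceptual toy model of Telepathy's layering -- not the real API.
# Account -> Connection -> Channel, as described in the word list above.

class Channel:
    """Something used to send data to other people."""
    def __init__(self, kind, members):
        self.kind = kind          # e.g. "Text" or "FileTransfer"
        self.members = members

class Connection:
    """Something used to make channels."""
    def __init__(self, protocol):
        self.protocol = protocol

    def create_channel(self, kind, members):
        return Channel(kind, members)

class Account:
    """Something used to make connections (probably)."""
    def __init__(self, protocol):
        self.protocol = protocol

    def connect(self):
        return Connection(self.protocol)

# e.g. a Salut (local network) text chan:
chan = Account("salut").connect().create_channel("Text", ["buddy@local"])
```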

Top 10 Horrible Telepathy Uses

  1. Tunneling HTTP over Tubes to send a file (~300 lines), rather than the built in file transfer tubes
  2. Copying and pasting Tubes implementations for about 7 years after their deprecation, then wondering why it all got broken
  3. Using DBus over Tubes to pass json between people, rather than a simple text channel
  4. Using telepathy python (many many years after its abandonment)
  5. Specifically, copying 160 lines of telepathy-python tubes initiation for every activity, and then changing it slightly to make porting hard
  6. Making the assumption that there will only ever be 1 client running at the same time, and calculating the service name with that idea (unfixed)

Ok, that is only 6. Maybe it isn't that bad.

Top 5 Telepathy Use Cases

  1. Sending files to people on the local network (telepathy salut and file transfer channels)
  2. Chatting with people via jabber (telepathy gabble and text chans)
  3. Collaboration for your Sugar activity ( CollabWrapper (patent pending technology))
  4. Collaboration for your traditional application
  5. Video calling

Top 2 Telepathy Developer Resources

For a great introduction to telepathy, there is the Telepathy Developers Manual. This is an amazing resource that explains all of telepathy in a way that non-telepathy people understand. There is also always the Telepathy DBus Spec, which is very useful for telepathy-python as it is basically a collection of DBus objects.

Wow, that was a bad rant-ticle. Basically it was me ranting about weird Telepathy usage in Sugar activities. Oh well, it is hopefully more fulfilling than the BuzzFeed listicles that this post takes inspiration from.