Raspberry Pi experiments with ansible


It's been a while (Staind ;-)) since I've written a post on my personal blog. I've been busy with work and keeping up my house, but over the past weeks my free time for "my stuff" has increased, so here I am with some new projects. Let me take you along on my journeys with new and not-so-new technologies :-)

Rethinking my infrastructure

My interest in Raspberry Pis comes from taking a fresh look at how I use my infrastructure at home: a lot of notebooks, a desktop PC for gaming and a media center (Asus O!Play HD2). First of all: I'm mostly into retro-gaming - old DOS games, older PC games and no consoles. But what do I really use this stuff for? Watching movies or TV shows as files from a USB stick on the media center, using the desktop to download and, less and less over the past years, one of the older laptops with a capable graphics card to play. So at least two devices seem due for scrapping.

What do I really need?

Considering the above thoughts, I concluded: no more desktop PC, no more media center. So, how to compensate for that?

  • Every current smart TV and smartphone (with e.g. VLC) supports UPnP and/or DLNA
  • Services like usenet clients, Sonarr and so on use web interfaces, so no GUI/window manager is needed
  • Storage via NAS or external HDDs
  • Sometimes a working environment with a keyboard and two displays

Conclusion: A single Raspberry Pi 3 B with an external 2 TB WD Passport should be able to handle usenet, backups and serve all downloads/files to the local network via DLNA. Plugged into one of the router's LAN ports, the network speed should also be sufficient for streaming.

Base setup

I bought the Pi 3 B desktop kit with case, AC adapter and a micro SD card from element14, with Raspbian pre-installed (ok, also some heat sinks ;-)). Since I'll only need console access, I first enabled boot into the CLI and - as on all systems accessible via SSH - disabled root login via SSH and copied my pubkeys to the pi user. That's the baseline.
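For reference, the SSH part boils down to two directives in /etc/ssh/sshd_config (a sketch - disabling password authentication entirely assumes your pubkey login already works, so test that first):

```
# /etc/ssh/sshd_config - harden SSH access
PermitRootLogin no
PasswordAuthentication no
```

Copying the pubkeys is a one-liner with ssh-copy-id pi@<pi-address>, and the CLI boot can be set via raspi-config (Boot Options).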


For further provisioning we'll use ansible to keep the whole thing idempotent and, especially, reproducible. Four roles do the job:

|- bin
|   |- provision
|- roles
|   |- common
|   |- dlna
|   |- nzbdrone
|   |- sabnzbd
|- user-settings
|   |- settings.yml
|   |- settings.yml.dist
|- rpi
|- rpi.yml

That's the base directory layout for most of my ansible projects. The non-standard directories are user-settings, which holds user-defined variable settings that override defaults without having to be injected via command-line parameters, and bin, which holds a wrapper for the provision call that e.g. disables "cowsay".
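The "no cowsay" part doesn't strictly need a wrapper script, by the way - ansible also honors a project-local ansible.cfg (or the ANSIBLE_NOCOWS environment variable). A minimal sketch, pointing at the rpi inventory file from the layout above:

```ini
# ansible.cfg in the project root
[defaults]
nocows = 1
inventory = rpi
```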

I won't paste all the config/code I use in particular, that's up to you, but I'll share some (hopefully) useful snippets and information.


How to add custom-settings for ansible provisioning:

- name: check custom config file
  local_action: stat path="{{ playbook_dir }}/user-settings/settings.yml"
  become: False
  register: custom_settings_root

- name: include custom settings
  include_vars:
    file: "{{ playbook_dir }}/user-settings/settings.yml"
  when: custom_settings_root.stat.exists
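The settings.yml.dist then serves as a template that users copy to settings.yml and adjust. The variable names below are made-up examples for illustration, not the ones from my actual roles:

```yaml
# user-settings/settings.yml.dist - copy to settings.yml and adjust
# (hypothetical variables, for illustration only)
dlna_media_dir: /mnt/passport/media
sabnzbd_port: 8080
```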

Some base packages you might need:

- name: install base packages
  apt: pkg={{ item }} state=latest
  with_items:
    - apt-transport-https
    - ntfs-3g
    - dirmngr
    - software-properties-common

Some optimizations for minidlna to keep the wear on your SD card low:

- name: create mountpoint dir
  file:
    path: /var/cache/minidlna
    state: directory

- name: mount tmpfs for caching
  mount:
    path: /var/cache/minidlna
    src: tmpfs
    fstype: tmpfs
    opts: nodev,nosuid
    state: mounted
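minidlna then has to be pointed at that tmpfs. The relevant lines in /etc/minidlna.conf, best deployed as a template - the media path is an example for the external HDD mount, adjust to yours:

```
db_dir=/var/cache/minidlna
media_dir=V,/mnt/passport/media
inotify=yes
```

Keep in mind that the database now lives in RAM, so minidlna will rescan the media after every reboot.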

A hint on how to install sonarr:

- name: Add mono apt key
  apt_key:
    keyserver: hkp://keyserver.ubuntu.com:80
    id: 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
    state: present

- apt_repository:
    repo: "deb https://download.mono-project.com/repo/debian stable-raspbianstretch main"
    update_cache: True
    state: present

- name: Add sonarr apt key
  apt_key:
    keyserver: hkp://keyserver.ubuntu.com:80
    id: FDA5DFFC
    state: present

- apt_repository:
    repo: "deb https://apt.sonarr.tv/ master main"
    update_cache: True
    state: present

- name: install packages
  apt: pkg={{ item }} state=latest
  with_items:
    - nzbdrone

- name: copy template
  template: src=./nzbdrone.service.j2 dest=/etc/systemd/system/sonarr.service owner=root group=root mode=0644

- name: start and enable sonarr
  systemd:
    name: sonarr.service
    state: started
    daemon_reload: yes
    enabled: True
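For completeness, a sketch of what nzbdrone.service.j2 could look like - the ExecStart path and user are assumptions and depend on where the package puts the binaries, so verify them on your box:

```ini
# nzbdrone.service.j2 - sketch, adjust ExecStart path and User
[Unit]
Description=Sonarr (NzbDrone)
After=network.target

[Service]
User=pi
ExecStart=/usr/bin/mono /opt/NzbDrone/NzbDrone.exe -nobrowser
Restart=on-failure

[Install]
WantedBy=multi-user.target
```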

SABnzbd is quite similar to install. Good luck with your first Raspberry Pi project ;-)

More to come

Upcoming posts will cover how to add dynamic DNS to your box with the help of domainfactory (my DNS provider) and some custom tooling with PowerDNS.

Tags: DevOps, tinkering

Some thoughts on technical debt and efficiency


For the past years I've been spending a lot of thought on how to improve things. As a software developer you might know this: you see something - a workflow or just a tiny task - and you start thinking "If I change this...", and a short while later the thing you just looked at is better, more efficient or just plain faster or easier to do.

But how to get there?

Let's go back a few years to the point where I first realized this. I was in-between schools and had nothing to do, so I started working at my father's company. My father is a craftsman who insulates roofs (especially flat ones), has been doing so for 40+ years, and I'd go as far as to call him an expert at what he does. I had been helping him out during my holidays since I was 14, so I basically knew what to do around him and how to help - but in the few months we worked that closely together, I took away a lot more than just money to spend on new computer stuff. He always showed me how to place the materials efficiently, so one doesn't have to walk back and get more, or at least has a few steps less to walk.

At this point you probably ask yourself "where is this going and what does it have to do with programming?" - and the answer is "everything". In that time with my father I learned how to work efficiently and how to prepare my work and materials, so I can do my job without too many iterations and thus get better every time I do something. Everything you do is potentially improvable! Give it some thought: when you're cleaning the house, how do you place your working materials and in what order do you clean? Do you have to walk back and get something? Can you work seamlessly? Asking these questions and REALLY looking at what you're doing is the key. See if you can do the same work with less effort, less walking, less material or just "more efficiently".

Now back to the all-technical stuff :-)

When developing some new software, it's always the same:

  • Get a development environment
  • Provide some "common ground" to start from (e.g. a framework install)
  • Do the usual database layout, controllers, views, templates
  • Create some fancy stuff the customer wants
  • Do some testing
  • Create some kind of deployment
  • ...

Do you find some similarities to your projects? Ok, let's take a closer look.

Improving workflows

Every time you start a new project, you have to start somewhere. Did a sentence with "again" in it cross your mind? That might be the starting point for your first improvement. A clear indicator for something that needs improvement is repetition. If you do something several times the same way with slight variations, it's likely to be automatable.

  • You need a Linux environment? Use e.g. Vagrant or Docker and VM images together with a provisioning tool like ansible
  • New project where you clone a framework and add some default tools like node, gulp or PHP classes? Use a skeleton project.
  • Every time you check out that project you need to add configs to Apache/nginx, restart services, ...? Use a task runner like robo
  • ...

I think you get the point there :-) Automation is your friend and there are many tools you can use to make your life better.
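As an example for the first bullet, a minimal Vagrantfile that boots a Linux box and provisions it with ansible - box name and playbook path are placeholders, not from a real project:

```ruby
# Vagrantfile - minimal sketch of VM + ansible provisioning
Vagrant.configure("2") do |config|
  config.vm.box = "debian/stretch64"             # any box you like
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provision/playbook.yml"  # hypothetical path
  end
end
```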

Improving work

When I started coding I read a rule somewhere that stuck in my mind: "Every time you edit or look at a piece of code, leave it better than before." But what does that mean? Most developers have some (or many) "old" systems to maintain - legacy applications with "bad code" and a lot of technical debt. No one likes these systems. So what you want to do is just "get the hell out of here" and leave them be. No? The right answer should be: make things better. Even tiny changes - moving to PSR-2, correcting indentation, fixing variable names, adding inline comments, ... - are likely to help the next developer who looks at the code. You can even go further: build a unit test for the class/module to have a specification for its behavior, and then refactor the whole thing with the safety of not breaking the application. But these things take courage, so be courageous! Integrate that behavior into your work and you will see steady improvement in your codebase - and your feelings towards your code will improve, too.

And don't be mad at the people (or yourself) who wrote that special piece of shitty code. Always remember: most likely someone with a lot of skill and the best intentions successfully solved a problem with the best tools available to him/her at that specific point in time. This makes it a lot easier. At least somewhat easier. If it's not complete nonsense code...

Most developers say "I don't have enough time for this" or "no one pays for this". This is correct - unless you integrate a small amount of refactoring cost into your estimations :-) I call this a "technical debt tax", and it never shows up explicitly in my estimations. It's always a few percent on top of the estimate, and it helps a lot over time.

Improve yourself

Nothing much to say, except: if you want to make your work and your life better, always improve yourself. Read about new technologies, try new things and find new ways of doing things. Even if something doesn't work out, it will improve your understanding of how things work. So never stop learning.

In the end...

I could go on for much longer on how to improve, but in the end it boils down to a few guidelines and a lot of hard work to change your mindset:

  • Think about your problem domain and understand all aspects before you start to code
  • Always look for repetitive tasks and automate where you can (and it makes sense)
  • Improve what you find whenever you have to work on existing code - even small improvements help over time
  • Be lazy. Because a lazy person tends to be an efficient problem solver...
  • Never stop learning

Tags: php, architecture, work

Flow based programming and ETL


For quite some time I've been searching for a reasonable approach to Extract, Transform, Load (ETL) in PHP where I can define a workflow based on e.g. a UML diagram and just "run" it asynchronously. A fully fledged ETL tool like MS SSIS or Talend was out of the question due to high complexity and hardware requirements. The solution also had to integrate into our existing PHP environment.


If you have already read my other posts, you know that I already use RabbitMQ and php-amqp for asynchronously handling import processes. This goes one step further and introduces the flow-based programming "design pattern".

In computer science, flow-based programming (FBP) is a programming paradigm that defines applications as networks of "black box" processes, which exchange data across predefined connections by message passing, where the connections are specified externally to the processes. These black box processes can be reconnected endlessly to form different applications without having to be changed internally. FBP is thus naturally component-oriented. (Wikipedia)

Developers used to the Unix philosophy should be immediately familiar with FBP:

This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

It also fits well in Alan Kay's original idea of object-oriented programming:

I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning -- it took a while to see how to do messaging in a programming language efficiently enough to be useful).

Sounds good, doesn't it?
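To make the idea concrete, here is a tiny sketch of the pattern in Python (not phpflo itself - just an illustration): components are black boxes that read packets from an input queue and write results to an output queue, and the wiring between them lives entirely outside the components.

```python
from queue import Queue

class Component:
    """Black-box process: transforms packets from its inport to its outport."""
    def __init__(self, transform):
        self.transform = transform
        self.inport = Queue()
        self.outport = None  # wired up externally, never by the component itself

    def run(self):
        # Drain the input queue, pushing transformed packets downstream.
        while not self.inport.empty():
            packet = self.inport.get()
            result = self.transform(packet)
            if self.outport is not None:
                self.outport.put(result)

def connect(upstream, downstream):
    """The network definition: wire upstream's outport to downstream's inport."""
    upstream.outport = downstream.inport

# Build a two-node network; neither component knows about the other.
upper = Component(str.upper)
sink = Component(lambda p: p)
connect(upper, sink)

upper.inport.put("hello")
upper.run()
print(sink.inport.get())  # "HELLO"
```

Swapping components or rewiring the network never touches the component code - that is the whole point of FBP.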

Improvements and status quo

Initially I worked with phpflo, adapted it for symfony to use dependency injection instead of a factory-like process and was kind of happy. After a short while, the first problems arose:

Having several long-running processes introduced the problem of "state" within components and also within the network. Already-initialized networks could not be reused and had to be destroyed. Using a compiler-pass approach with a registry of components also introduced port states within the process.

Several ideas came to mind: just restart the processes after every message from the queue, or even fork a single ETL process per message - but both just led to more problems:

  • Restarting processes means framework initialization overhead
  • Forking processes needs some kind of low-level process management

Overall, the best approach was to integrate some state management into phpflo, split the library into several components and implement a parser for the (more convenient) FBP domain-specific language (DSL). You can find the implementation here. The split into several libraries was necessary due to separation of concerns, maintenance and possible future contributions of generic components.


Added to our technology stack, phpflo integrated nicely with symfony, and all components are loaded via the DIC. This allows for easy configuration of processes:

CategoryCreator() out -> in MdbPersister()
CategoryCreator() route -> in RouteCreator()
CategoryCreator() media -> in MediaIterator()
MediaIterator(MediaIterator) out -> in MediaHandler()
CategoryCreator() bannerset -> in BannersetHandler()
BannersetHandler() out -> bannerset CategoryCreator()
CategoryCreator() tag -> tags TagCreator(TagCreator)
TagCreator() tags -> tag CategoryCreator()
CategoryCreator() hierarchy -> hierarchy TagCreator()
TagCreator(TagCreator) hierarchy -> hierarchy CategoryCreator()
CategoryCreator() sidebar -> in SidebarHandler()
SidebarHandler() out -> sidebar CategoryCreator()
SidebarHandler() build -> in JsonFieldFetcher()
JsonFieldFetcher() sidebar -> in SidebarCreator()
RouteCreator() out -> in MdbPersister()

This replaces a 450+ line JSON file!
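For the curious: each line of the DSL follows the pattern "Source(Class) outport -> inport Target(Class)", with the class in parentheses being optional after the first declaration. A hypothetical mini-parser in Python (not the actual phpflo parser) shows how little there is to it:

```python
import re

# One DSL connection line: "Source(Class) outport -> inport Target(Class)";
# the class inside the parentheses may be omitted.
LINE = re.compile(
    r"^(?P<src>\w+)\((?:\w+)?\)\s+(?P<out>\w+)"
    r"\s*->\s*"
    r"(?P<in>\w+)\s+(?P<tgt>\w+)\((?:\w+)?\)$"
)

def parse_connection(line):
    """Return (source, outport, inport, target) for one DSL line."""
    m = LINE.match(line.strip())
    if not m:
        raise ValueError(f"not a valid connection: {line!r}")
    return (m.group("src"), m.group("out"), m.group("in"), m.group("tgt"))

print(parse_connection("CategoryCreator() out -> in MdbPersister()"))
# ('CategoryCreator', 'out', 'in', 'MdbPersister')
```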

So, given all processes are defined as symfony (private) services, they can use all dependencies they need and are even easier to test.

Thanks to the data type checks I've introduced into phpflo, connections are checked for compatibility. For us this means: every component with compatible ports can be stitched together and just works. That removed a lot of inheritance, type checks and so on.

If you need a similar solution, I suggest you continue reading here: phpflo on GitHub

And last but not least: big thanks to James (@aretecode) for his code reviews and support concerning architectural decisions!

Tags: php, symfony2