Wednesday, April 16, 2014

A Contractor’s Guide To Recruitment Agencies

I haven’t contracted through an agency for a long time, but I thought I’d write up my experiences from almost ten years of working as an IT contractor for anyone considering it as a career choice.

IT recruitment agencies provide a valuable service. Like any middle-man, their job is to bring buyers and sellers together. In this case the buyer is the end client, the company that needs a short-term resource to fill a current skills gap. The seller is you, the contractor offering the skill. The agency needs to do two things well: market intelligence - finding clients in need of resources and contractors looking to sell their skills; and negotiation - negotiating the highest price that the client will pay, and the lowest price that the contractor will work for. The agency’s income is a simple formula:

(client rate – contractor rate) * number of contractors placed.

Minimize the contractor rate, maximize the client rate, and place as many contractors as possible. That’s success.

Anyone with a phone can set themselves up as a recruitment agency. There are zero or low startup costs. The greatest difficulty most agencies face is finding clients. Finding contractors is a little easier, as I’ll explain presently. Having a good relationship with a large corporate or government client is the gold standard for any agency, even better if that relationship is exclusive. Getting a foot in the door with one of these clients is very difficult; usually some large, long-established agency stitched up a deal with someone high-up long ago. But any company or organization in need of a contractor is a potential client, and agencies spend inordinate amounts of time in the search for names they can approach with potential candidates.

As I said before, finding contractors is somewhat easier. There are a number of well known websites - Jobserve is the most common one in the UK - so it’s merely a case of putting up a job description and waiting for the CVs to roll in. The agent will try to make the job sound as good as possible to maximize the chances of getting applications, within the limits of the client’s job spec.

An ideal contractor for an agency is someone who the client wants to hire, who is willing to work for the lowest possible rate, and who will keep the client happy by turning up every day and doing the work that the client expects. Since agencies take an on-going percentage of the daily rate, the longer the contract lasts the better. The agency will attempt to do some filtering to ‘add value’, but since few agencies have any real technology knowledge, this mainly consists of matching keywords and years-of-experience. Anyone with any experience of talking to agencies will know how frustrating it can be: “Do you know any ASPs?” “No, they don’t want .NET, they want C#.” I’m not making those quotes up. Ideally they will want to persuade the client that they have some kind of exclusive arrangement with ‘their’ contractors and that the client would not be able to hire them through anyone else. It can be very embarrassing for them if the client receives your CV through a competing agency as well as theirs.

The job hunt. How you should approach it.

Let’s say you’re a competent C# developer. How should you approach landing your dream contract role? The obvious first place to look is the popular jobsites. Do a search for C# contracts in your local area, or further afield if you’re willing to travel. Scan the job listings looking for anything that looks like it vaguely fits. Don’t be too fussy at this stage; you want to increase your chances by applying for as many jobs as possible. Once you’ve got a list of jobs it’s worth trying to see if you can work out who the client company is. If you can make direct contact with the client, so much the better. Don’t worry about feeling underhand, agencies do this to each other all the time; it’s part of the game.

Failing a direct contact, the next step is to email your CV to the agency. Remember they’ll be trying to match keywords, so it’s worth customizing your CV to the job advert. Make sure as many keywords as possible match those in the advert, remembering of course that you might have to back up your claims in an interview.

The next step is usually a short telephone conversation with the recruiter. This call is the beginning of the negotiations with the recruiter. Negotiating is their full-time job and they are usually very good at it, so be very wary. Your attitude is that you are a highly qualified professional who is somewhat interested in the role, but for whom it’s by no means the only possibility at this stage. Whatever you do, don’t appear desperate. Remember, at this stage you are an unknown quantity. Most contractors a recruiter comes into contact with will be duds (there are no barriers to entry in our profession either), and they will initially be suspicious of you. Confidently assert that you have all the experience you mention in your CV, and that, of course, you can do the job. There is no point in getting into any technical discussion with the recruiter; they simply won’t understand. Remember: match keywords and experience. At this stage, even if you’ve got doubts about the job, don’t express them, just appear keen and confident.

Sometimes there’s a rate mentioned on the advert; at other times it will just say ‘market rates’, which is meaningless. If the agent doesn’t bring up rates at this point, there’s no need to mention them. At this stage you are still an unknown quantity. Once the client has decided that they really want you, you are gold, and in a much stronger bargaining position. If there’s a rate conversation at the pre-interview stage, try to stay non-committal. If there’s a range, say you’ll only work for the top number.

They may ask you for references. Your reply should be to politely say that you only give references after an interview. It’s a common trick to put an imaginary job on a jobsite then ask applicants for references. Remember, an agency’s main difficulty is finding clients and the references are used as leads. If you give them references you will never hear from them again, but your previous clients will be hounded with phone calls.

Another common trick is to ask you where else you are applying. They are looking for leads again. Be very non-committal. They may also ask you for the names of people you worked for at previous jobs; this is just like asking for references, and you don’t need to tell them. Sometimes it’s worth having a list of made-up names to give out if they’re very persistent.

Next you will either hear back from the agent with an offer of an interview, or you won’t hear from them at all. No agency I’ve ever had contact with has bothered to call me with a reason why an interview hadn’t materialized. If you don’t hear from them, move on and apply for the next job. Constantly calling the agency smacks of desperation and won’t get you anywhere. There are multiple possible reasons why the interview didn’t materialize, the most common being that the job didn’t exist in the first place (see above).

At all times be polite and professional with the agent even if you’re convinced they’re being liberal with the truth.

If you get an interview, that’s good. This isn’t a post about interviewing, so let’s just assume that you were wonderful and the client really wants you. You’ll know this because you’ll get a phone call from the agent congratulating you on getting the role. You are now a totally different quantity in the agent’s eyes: a successful candidate, a valuable commodity, a guaranteed income stream for as long as the contract lasts. Their main job now is to get you to work for as little as possible while the client pays as much as possible. If you agreed a rate before the interview, now is their chance to try and lower it. You may well have a conversation like this: “I’m very sorry John, but the client is not able to offer the rate we agreed, I’m afraid it will have to be XXX instead.” Call their bluff. Your answer should be: “Oh that’s such a shame, I was really looking forward to working with them, but my minimum rate is <whatever you initially agreed>. Never mind, it was nice doing business with you.” I guarantee they will call you back the next day telling you how hard they have been working on your behalf to persuade the client to increase your rate.

If you haven’t already agreed a rate, now is the time to have a good idea of the minimum you want to work for. Add 30% to it. That’s your opening rate with the agent. They will choke and tell you there’s no way that you’ll get that. Ask them for their maximum and choke in return. Haggle back and forth until you discover what their maximum is. If it’s lower than your minimum, walk away. You may have to walk away and wait for them to phone you. Of course you’ve got to be somewhere in the ballpark of the market rate or you won’t get the role. Knowing the market rate is tricky, but a few conversations with your contractor mates should give you some idea.

Once the rate has been agreed and you start work, your interests are aligned with the agent’s. You both want the contract to last and you both want to maintain a good relationship with the client. The agency should pay you promptly. Don’t put up with late or missing payments, just leave. Usually a threat to walk off site can work wonders with outstanding invoices. Beware though: at their worst, some agents can be downright nasty and bullying. I’ve been told that I would never work in IT again by at least two different characters. It’s nice to see how that turned out. Just ignore bullies, except to make a note that you will never work through their agency again.

Agencies are a necessary evil until you have built up a good enough network and reputation that you don’t need to use them any more. Some are professional and honest, many aren’t, but if you understand their motivations and treat anything they say with a pinch of salt, you should be fine.

Thursday, April 03, 2014

A Docker ‘Hello World’ With Mono

Docker is a lightweight virtualization technology for Linux that promises to revolutionize the deployment and management of distributed applications. Rather than requiring a complete operating system like a traditional virtual machine, Docker is built on top of Linux containers, a feature of the Linux kernel that allows lightweight Docker containers to share a common kernel while isolating applications and their dependencies.

There’s a very good Docker SlideShare presentation here that explains the philosophy behind Docker using the analogy of standardized shipping containers. It’s interesting that the standard shipping container has done more to create our global economy than all the free-trade treaties and international agreements put together.

A Docker image is built from a script called a ‘Dockerfile’. Each Dockerfile starts by declaring a parent image. This is very cool, because it means that you can build up your infrastructure from layers of images, starting with general platform images and then layering successively more application-specific images on top. I’m going to demonstrate this by first building an image that provides a Mono development environment, and then creating a simple ‘Hello World’ console application image that runs on top of it.

Because Dockerfiles are simple text files, you can keep them under source control and version your environment and dependencies alongside the actual source code of your software. This is a game changer for the deployment and management of distributed systems. Imagine developing an upgrade to your software that includes new versions of its dependencies, including pieces that we’ve traditionally considered the realm of the environment rather than something you would normally put in your source repository - the Mono version that the software runs on, for example. You can script all these changes in your Dockerfile, test the new container on your local machine, then simply move the image to test and then production. The possibilities for vastly simplified deployment workflows are obvious.

Docker takes concerns that were previously the responsibility of an organization’s operations department and makes them a first-class part of the software development lifecycle. Now your infrastructure can be maintained as source code, built as part of your CI cycle and continuously deployed, just like the software that runs inside it.

Docker also provides the Docker index, an online repository of Docker images. Anyone can create an image and add it to the index, and there are already images for almost any piece of infrastructure you can imagine. Say you want to use RabbitMQ: all you have to do is grab a handy RabbitMQ image, such as https://index.docker.io/u/tutum/rabbitmq/, and run it like this:

docker run -d -p 5672:5672 -p 55672:55672 tutum/rabbitmq

The -p flag maps ports between the container and the host.

Let’s look at an example. I’m going to show you how to create a Docker image for the Mono development environment and have it built and hosted on the Docker index. Then I’m going to build a local Docker image for a simple ‘hello world’ console application that I can run on my Ubuntu box.

First we need to create a Dockerfile for our Mono environment. I’m going to use the Mono Debian packages from directhex. These are maintained by the official Debian/Ubuntu Mono team and are the recommended way of installing the latest Mono versions on Ubuntu.

Here’s the Dockerfile:

#DOCKER-VERSION 0.9.1
#
#VERSION 0.1
#
# monoxide mono-devel package on Ubuntu 13.10

FROM ubuntu:13.10
MAINTAINER Mike Hadlow <mike@suteki.co.uk>

RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q software-properties-common
RUN sudo add-apt-repository ppa:directhex/monoxide -y
RUN sudo apt-get update
RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q mono-devel

Notice the first line (after the comments) that reads ‘FROM ubuntu:13.10’. This specifies the parent image for this Dockerfile. This is the official Docker Ubuntu image from the index. When I build this Dockerfile, that image will be automatically downloaded and used as the starting point for my image.

But I don’t want to build this image locally. Docker provides a build server linked to the Docker index. All you have to do is create a public GitHub repository containing your Dockerfile, then link the repository to your profile on the Docker index. You can read the documentation for the details.

The GitHub repository for my Mono image is at https://github.com/mikehadlow/ubuntu-monoxide-mono-devel. Notice how the Dockerfile is in the root of the repository. That’s the default location, but you can have multiple Dockerfiles in sub-directories if you want to build several images from a single repository.

Now any time I push a change to my Dockerfile to GitHub, the Docker build system will automatically build the image and update the Docker index. You can see the image listed here: https://index.docker.io/u/mikehadlow/ubuntu-monoxide-mono-devel/

I can now grab my image and run it interactively like this:

$ sudo docker pull mikehadlow/ubuntu-monoxide-mono-devel
Pulling repository mikehadlow/ubuntu-monoxide-mono-devel
f259e029fcdd: Download complete
511136ea3c5a: Download complete
1c7f181e78b9: Download complete
9f676bd305a4: Download complete
ce647670fde1: Download complete
d6c54574173f: Download complete
6bcad8583de3: Download complete
e82d34a742ff: Download complete

$ sudo docker run -i mikehadlow/ubuntu-monoxide-mono-devel /bin/bash
mono --version
Mono JIT compiler version 3.2.8 (Debian 3.2.8+dfsg-1~pre1)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
TLS: __thread
SIGSEGV: altstack
Notifications: epoll
Architecture: amd64
Disabled: none
Misc: softdebug
LLVM: supported, not enabled.
GC: sgen
exit

Next let’s create a new local Dockerfile that compiles a simple ‘hello world’ program, and then runs it when we run the image. You can follow along with these steps. All you need is an Ubuntu machine with Docker installed.

First, here’s our ‘hello world’. Save this code in a file named hello.cs:

using System;

namespace Mike.MonoTest
{
    public class Program
    {
        public static void Main()
        {
            Console.WriteLine("Hello World");
        }
    }
}

Next we’ll create our Dockerfile. Copy this code into a file called ‘Dockerfile’:

#DOCKER-VERSION 0.9.1

FROM mikehadlow/ubuntu-monoxide-mono-devel

ADD . /src

RUN mcs /src/hello.cs
CMD ["mono", "/src/hello.exe"]

Once again, notice the ‘FROM’ line. This time we’re telling Docker to start with our Mono image. The next line, ‘ADD . /src’, tells Docker to copy the contents of the current directory (the one containing our Dockerfile) into a root directory named ‘src’ in the container. Now our hello.cs file is at /src/hello.cs in the container, so we can compile it with the Mono C# compiler, mcs, which is the line ‘RUN mcs /src/hello.cs’. This gives us the executable, hello.exe, in the src directory. The line ‘CMD ["mono", "/src/hello.exe"]’ tells Docker what we want to happen when the container is run: just execute our hello.exe program.

As an aside, this exercise highlights some questions around what best practice should be with Docker. We could have done this in several different ways. Should we build our software independently of the Docker build in some CI environment, or does it make sense to do it this way, with the Docker build as a step in our CI process? Do we want to rebuild our container for every commit to our software, or do we want the running container to pull the latest from our build output? Initially I’m quite attracted to the idea of building the image as part of the CI but I expect that we’ll have to wait a while for best practice to evolve.

Anyway, for now let’s manually build our image:

$ sudo docker build -t hello .
Uploading context 1.684 MB
Uploading context
Step 0 : FROM mikehadlow/ubuntu-monoxide-mono-devel
---> f259e029fcdd
Step 1 : ADD . /src
---> 6075dee41003
Step 2 : RUN mcs /src/hello.cs
---> Running in 60a3582ab6a3
---> 0e102c1e4f26
Step 3 : CMD ["mono", "/src/hello.exe"]
---> Running in 3f75e540219a
---> 1150949428b2
Successfully built 1150949428b2
Removing intermediate container 88d2d28f12ab
Removing intermediate container 60a3582ab6a3
Removing intermediate container 3f75e540219a

You can see Docker executing each build step in turn and storing the intermediate result until the final image is created. Because we used the tag (-t) option and named our image ‘hello’, we can see it when we list all the docker images:

$ sudo docker images
REPOSITORY                              TAG      IMAGE ID       CREATED          VIRTUAL SIZE
hello                                   latest   1150949428b2   10 seconds ago   396.4 MB
mikehadlow/ubuntu-monoxide-mono-devel   latest   f259e029fcdd   24 hours ago     394.7 MB
ubuntu                                  13.10    9f676bd305a4   8 weeks ago      178 MB
ubuntu                                  saucy    9f676bd305a4   8 weeks ago      178 MB
...

Now let’s run our image. Each time we do this, Docker creates a new container from the image and runs our command inside it:

$ sudo docker run hello
Hello World

And that’s it.

Imagine that instead of our little hello.exe, this image contained our web application, or maybe a service in some distributed software. In order to deploy it, we’d simply ask Docker to run it on any server we like; development, test, production, or on many servers in a web farm. This is an incredibly powerful way of doing consistent repeatable deployments.
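For instance, if the image held a web application listening on port 8080 inside the container, deployment to any Docker host could be a single command. This is just a sketch with a hypothetical image name, using the same -d and -p flags as the RabbitMQ example above:

sudo docker run -d -p 80:8080 mycompany/webapp

The -d flag runs the container in the background, and -p 80:8080 maps the host’s port 80 to the container’s port 8080.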

To reiterate, I think Docker is a game changer for large server side software. It’s one of the most exciting developments to have emerged this year and definitely worth your time to check out.

Tuesday, April 01, 2014

Docker: Bulk Remove Images and Containers

I’ve just started looking at Docker. It’s a cool new technology that has the potential to make the management and deployment of distributed applications a great deal easier. I’d very much recommend checking it out. I’m especially interested in using it to deploy Mono applications because it promises to remove the hassle of deploying and maintaining the mono runtime on a multitude of Linux servers.

I’ve been playing around creating new images and containers and debugging my Dockerfile, and I’ve wound up with lots of temporary containers and images. It’s really tedious repeatedly running ‘docker rm’ and ‘docker rmi’, so I’ve knocked up a couple of bash commands to bulk delete images and containers.

Delete all containers:

sudo docker ps -a -q | xargs -n 1 -I {} sudo docker rm {}

Delete all un-tagged (or intermediate) images:

sudo docker rmi $( sudo docker images | grep '<none>' | tr -s ' ' | cut -d ' ' -f 3)
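If you find yourself doing this a lot, both commands can live in a small cleanup script. A minimal sketch, assuming GNU xargs (--no-run-if-empty skips the call when there’s nothing to delete):

#!/bin/bash
# remove all containers (running containers will error; stop them first)
sudo docker ps -a -q | xargs --no-run-if-empty sudo docker rm
# then remove all un-tagged (intermediate) images
sudo docker images | grep '<none>' | awk '{print $3}' | xargs --no-run-if-empty sudo docker rmi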

Thursday, March 20, 2014

How To Add Images To A GitHub Wiki

Every GitHub repository comes with its own wiki. This is a great place to put the documentation for your project. What isn’t clear from the wiki documentation is how to add images to your wiki. Here’s my step-by-step guide. I’m going to add a logo to the main page of my WikiDemo repository’s wiki:

https://github.com/mikehadlow/WikiDemo/wiki/Main-Page

First clone the wiki. You grab the clone URL from the button at the top of the wiki page.

[Screenshot: the clone URL button at the top of the wiki page]

$ git clone git@github.com:mikehadlow/WikiDemo.wiki.git
Cloning into 'WikiDemo.wiki'...
Enter passphrase for key '/home/mike.hadlow/.ssh/id_rsa':
remote: Counting objects: 6, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 6 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (6/6), done.

If you look in the cloned wiki’s repository you’ll see your pages as markdown files:

$ cd WikiDemo.wiki/

$ ls -l
total 2
-rw-r--r--+ 1 mike.hadlow Domain Users 29 Mar 20 10:29 Home.md
-rw-r--r--+ 1 mike.hadlow Domain Users 27 Mar 20 10:29 Main-Page.md

$ cat Main-Page.md
Hello this is the main page
$ cat Home.md
Welcome to the WikiDemo wiki!


Create a new directory called ‘images’ (it doesn’t matter what you call it, this is just a convention I use):

$ mkdir images

Then copy your picture(s) into the images directory (I’ve copied my logo_design.png file to my images directory).

$ ls -l
-rwxr-xr-x 1 mike.hadlow Domain Users 12971 Sep 5 2013 logo_design.png

Commit your changes and push back to GitHub:

$ git add -A

$ git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# new file: images/logo_design.png
#

$ git commit -m "Added logo_design.png"
[master 23a1b4a] Added logo_design.png
1 files changed, 0 insertions(+), 0 deletions(-)
create mode 100755 images/logo_design.png

$ git push
Enter passphrase for key '/home/mike.hadlow/.ssh/id_rsa':
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 9.05 KiB, done.
Total 4 (delta 0), reused 0 (delta 0)
To git@github.com:mikehadlow/WikiDemo.wiki.git
333a516..23a1b4a master -> master

Now we can put a link to our image in ‘Main Page’:
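Wiki pages are just Markdown, so a standard Markdown image reference pointing at the file we just pushed should do the job. A sketch using the file from above (GitHub wikis also accept their own [[images/logo_design.png]] link syntax):

![logo](images/logo_design.png)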

[Screenshot: editing ‘Main Page’ to add the image link]

Save and there’s your image for all to see:

[Screenshot: the wiki page with the image displayed]

Wednesday, March 12, 2014

Coconut Headphones: Why Agile Has Failed

The 2001 agile manifesto was an attempt to replace rigid, process- and management-heavy development methodologies with a more human and software-centric approach. Its authors identified that the programmer is the central actor in the creation of software, and that the best software grows and evolves organically in contact with its users.

My first real contact with the ideas of agile software development came from reading Bob Martin’s book ‘Agile Software Development’. I still think it’s one of the best books about software I’ve read. It’s a tour-de-force survey of modern (at the time) techniques; a recipe book of how to create flexible but robust systems. What might surprise people familiar with how agile is currently understood, is that the majority of the book is about software engineering, not management practices.

So what happened? Why is agile now about stand-ups, retrospectives, two-week iterations and planning poker?

Somehow, over the decade or so since the original agile manifesto, agile has come to mean ‘management agile’. It’s been captured by management consultants and distilled as a small set of non-technical rituals that emerged from the much larger, richer, but often deeply technical set of agile practices.

It’s often said that ‘bad agile’ resembles a cargo cult. James Shore has an excellent post, Cargo Cult Agile, that describes how rigid adherence to the ritualistic forms of agile methodologies closely resembles South Pacific cargo cults:

“The tragedy of the cargo cult is its adherence to the superficial, outward signs of some idea combined with ignorance of how that idea actually works. In the story, the islanders replicated all the elements of cargo drops--the airstrip, the controller, the headphones--but didn't understand where the airplanes actually came from.

I see the same tragedy occurring with Agile.”

Current non-technical agile practitioners still don’t understand where the airplanes come from. They stand in their bamboo control towers with their coconut headphones on and wonder why their software projects still fail.

Agile has indeed become a cargo cult. Stripped of actual software engineering practices and conducted by ‘agile practitioners’ with no understanding of software engineering, it merely becomes a set of meaningless rituals that are mostly impediments and distractions to creating successful software.

[Image: well-ask-them-for-estimates]

The core problem is that non-technical managers of software projects will always fail, or at best be counterproductive, whatever the methodology. Developing software is a deeply technical endeavour. Sending your managers on an agile course to learn how to beat developers over the head with planning poker, two-week iterations and stand-ups will do nothing to save spaghetti code and incompetent teams. You might have software projects that succeed despite the agile nonsense, but that would be coincidence, not causation.

Because creating good software is so much about technical decisions and so little about management process, I believe that there is very little place for non-technical managers in any software development organisation. If your role is simply asking for estimates and enforcing the agile rituals - stand-ups, fortnightly sprints, retrospectives - then you are an impediment rather than an asset to delivery.

Please don’t put non-technical managers in charge of software developers.

I don’t have an answer, or an alternative methodology to offer you, but here are some things that any software development organisation must address:

  • The skills and talents of individual programmers are the main determinant of software quality. No amount of management, methodology, or high-level architecture astronautism can compensate for a poor quality team.
  • The motivation and empowerment of programmers has a direct and strong relationship to the quality of the software.
  • Hard deadlines, especially micro-deadlines, will result in poor quality software that will take longer to deliver.
  • The consequences of poor design decisions multiply rapidly.
  • It will usually take multiple attempts to arrive at a viable design.
  • You should make it easy to throw away code and start again.
  • Latency kills. Short feedback loops to measurable outcomes create good software.
  • Estimates are guess-timates; they are mostly useless. There is a geometric relationship between the length of an estimate and its inaccuracy.
  • Software does not scale. Software teams do not scale. Architecture should be as much about enabling small teams to work on small components as the technical requirements of the software.

Because the technical and motivational aspects of software development are so key, I’m very intrigued by the zero-management approaches of organisations such as Valve and GitHub. I thoroughly recommend reading the Valve employee handbook and Michael Abrash’s blog. Maybe that’s the way forward? The original agile manifesto was very much about self-organizing teams; it would be great if we could get back to that. In the meantime, the word ‘agile’ has become so abused that we should stop using it.

[Image: bellware-bury-agile]

Monday, March 03, 2014

Git Tips: Revert with a new commit

Sometimes you want to set the state of your project back to a previous commit, but keep the history of all the preceding changes. You want to make a commit that reverses all the changes between your previous commit and the current HEAD.

First let’s create a new branch, ‘revert-branch’, from the commit we want to revert to. In this example we’re just reverting to the previous commit (I’m assuming that we’re currently in branch ‘master’), but this can be any commit:

git branch revert-branch HEAD^

Next checkout your new branch:

git checkout revert-branch

Now the neat trick: soft reset the HEAD of the new branch to master. A soft reset changes the state of HEAD, but doesn’t affect the working tree or index:

git reset --soft master

Now if we do a git status, we’ll see that the index reports the reverse of the commit(s) that we want to revert. In this case I want to back out of the addition of ‘second.txt’, but this could be a far more complex set of changes:

$ git status
# On branch revert-branch
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# deleted: second.txt
#

Now I can commit this ‘reversal’:

git commit -m "reverted to initial state."

Test and merge revert-branch into master. Nice.
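Putting it all together, the whole operation is only a handful of commands. A minimal sketch, assuming we start on master and want to revert to the previous commit:

git branch revert-branch HEAD^
git checkout revert-branch
git reset --soft master
git commit -m "reverted to initial state."
git checkout master
git merge revert-branch

Because the reversal commit was made on top of master’s HEAD, the final merge is a simple fast-forward.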

Tuesday, February 25, 2014

EasyNetQ: Client Details in Connection String

From version 0.27.3 of EasyNetQ, you can set your client product name and platform in the connection string:

var bus = RabbitHutch.CreateBus("host=localhost;product=pdf.render;platform=snowball");

These will then appear in the RabbitMQ Management UI connection list under the Client column:

[Screenshot: the RabbitMQ Management UI connection list showing the Client column]

Underneath is the EasyNetQ version number.

If you don’t specify product or platform, the product is shown as the name of your executable, and the platform is the host name.

Tuesday, February 04, 2014

EasyNetQ: A Layered API

I had a great discussion today on the EasyNetQ mailing list about a pull request. It forced me to articulate how I view the EasyNetQ API as being made up of distinct layers, each with a different purpose.

[Diagram: the layers of the EasyNetQ API]

EasyNetQ is a collection of components that provide services on top of the RabbitMQ.Client library. These do things like serialization, error handling, thread marshalling, connection management, etc. They are composed by a mini-IoC container. You can replace any component with your own implementation quite easily. So if you’d like XML serialization instead of the built in JSON, just write an implementation of ISerializer and register it with the container.
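For example, swapping the serializer is just a registration at bus-creation time. A sketch, assuming a hypothetical MyXmlSerializer class that implements ISerializer (check your EasyNetQ version for the exact registration overload):

var bus = RabbitHutch.CreateBus("host=localhost",
    register => register.Register<ISerializer>(_ => new MyXmlSerializer()));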

These components are fronted by the IAdvancedBus API. This looks a lot like the AMQP specification, and indeed you can run most AMQP methods from this API. The only AMQP concept that this API hides from you is channels. This is because channels are a confusing low-level concept that should never have been part of the AMQP specification in the first place. ‘Advanced’ is not a very good name for this API to be honest; ‘Iamqp’ would be much better.

Layered on top of the advanced API are a set of messaging patterns: Publish/Subscribe, Request/Response, and Send/Receive. This is the ‘opinionated’ part of EasyNetQ. It is our take on how such patterns should be implemented. There is very little flexibility; either you accept our way of doing things, or you don’t use it. The intention is that you, the user, don’t have to expend mental bandwidth re-inventing the same patterns; you don’t have to make choices every time you simply want to publish a message and subscribe to it. It’s designed to achieve EasyNetQ’s core goal of making working with RabbitMQ as easy as possible.

The patterns sit behind the IBus API. Once again, this is a poor name; it’s got very little to do with the concept of a message bus. A better name would be IPackagedMessagePatterns.

IBus is intended to work for 80% of users, 80% of the time. It’s not exhaustive. If the pattern you want to implement is not provided by IBus, then you should use IAdvancedBus. There’s no problem with doing this, and it’s how EasyNetQ is designed to be used.
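For example, publish/subscribe through IBus is just a couple of lines. A sketch, with a hypothetical message class along the lines of public class TextMessage { public string Text { get; set; } }:

bus.Subscribe<TextMessage>("my_subscription_id", message => Console.WriteLine(message.Text));
bus.Publish(new TextMessage { Text = "Hello" });

All the exchange, queue and binding decisions are made for you; that’s the opinionated part.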

I hope this explains the design philosophy behind EasyNetQ and why I push back against pull requests that add complexity to the IBus API. I see the ease-of-use aspect of EasyNetQ as its most important attribute. RabbitMQ is a superb piece of infrastructure and I want as many people in the .NET community to use it as possible.

Thursday, January 30, 2014

EasyNetQ: Publishing Non-Persistent Messages

In AMQP, buried in the basic.properties object that gets sent along with each published message, there is a delivery_mode setting. You can set it to either ‘non-persistent’ (1) or ‘persistent’ (2). It controls whether a message is persisted to disk or not. In the AMQP spec:

“The server SHOULD respect the persistent property of basic messages and SHOULD make a best-effort to hold persistent basic messages on a reliable storage mechanism.”

Of course it’s pointless setting delivery_mode to ‘persistent’ if you’re not publishing to a durable queue.

By default EasyNetQ sets delivery_mode to persistent (2) when calling IBus.Publish. We make the assumption that people would want this safe behaviour out-of-the-box. However, it does introduce a performance hit, so if you don’t care about losing messages in the case of a server restart, you should be able to change this behaviour.

From version 0.26.3, EasyNetQ has a new boolean connection string parameter, ‘persistentMessages’. By default it is set to true; if you don’t need persistent messages but do need high performance, set it to false:

var bus = RabbitHutch.CreateBus("host=localhost;persistentMessages=false");

This setting has no effect on the advanced API (IAdvancedBus) where you have access to basic.properties and are free to set delivery_mode on a message by message basis.

Tuesday, January 21, 2014

In Praise of TestDriven.NET

I’ve been using TestDriven.NET by Jamie Cansdale for quite a few years now. Ostensibly it’s a unit test runner, but that is not the real reason why you should use it. The killer feature, the one that will give you developer super powers, is the ability to run any arbitrary code under the cursor.

TestDriven allows you to run individual unit tests simply by placing the cursor within the test method and running the command ‘RunTests’ (I have this mapped to the F8 key - who uses bookmarks anyway?) The really cool thing is that the method doesn’t need to be attributed as a unit test, it can be any arbitrary method. Any return value from the method is written to the Visual Studio output console, as are any Console.WriteLine() or Console.Write() statements. This gives you immediate feedback on any code with a single keystroke.
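To make that concrete, here’s a minimal sketch of an arbitrary scratch method, with no test framework attributes anywhere. Place the cursor inside it, hit RunTests, and both the console output and the return value appear in the output window:

using System;

public class Scratchpad
{
    public static string Experiment()
    {
        Console.WriteLine("console output shows up in the Visual Studio output window");
        return DateTime.UtcNow.ToString("O"); // so does the return value
    }
}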

[Screenshot: TestDriven.NET output in Visual Studio]

Why is this awesome? The key to productive software development is reducing latency: the cycle time between an action and its results. That’s why continuous integration and continuous delivery are such huge wins. When you’re coding, the quicker you can try an experiment and see the results, the more productive you will be. The big problem with compiled languages like C, C++, Java, and C# is that the compilation cycle acts as a huge barrier to iteration. I still remember with horror how I used to write some code, launch my application, navigate to where the feature would be exercised (taking care to enter the correct parameters of course) and then watch it fail. And then I’d repeat the same tedious cycle over and over again. That’s one of the main reasons why .NET developers spend so much time stepping through code in the (admittedly excellent) debugger: it’s hard otherwise to know how the last few lines of code you wrote are executing. Now I run code continuously without launching anything. Simply write a function, F8, iterate. It’s so much more productive.

I know fellow developers who use other tools as a scratchpad for iterative experiments. LinqPad is very popular and ScriptCS is deservedly getting a lot of attention. But TestDriven has a huge advantage over these because it works inside your Visual Studio environment. There’s no need to copy and paste your experiment into your application; you iterate in place and in the context of your existing code.

Give it a try. I’ve found it a game changer for C# development. It’s a pretty good test runner too.