Monday, October 01, 2018

Visual Programming - Why it’s a Bad Idea

Note. This post had a great response on Reddit with over 300 comments. I’ve added an update section to the end of this post to address some of the main criticisms.

A visual programming language is one that allows the programmer to create programs by manipulating graphical elements rather than typing textual commands. A well-known example is Scratch, a visual programming language from MIT that’s used to teach children. The advantages claimed are that such languages make programming more accessible to novices and non-programmers. There was a very popular movement in the 1990s to bring these kinds of tools into the enterprise with so-called CASE tools, where enterprise systems could be defined with UML and generated without the need for trained software developers. This involved the concept of ‘round tripping’, where a system could be modelled visually, the program code would be generated from the models, and any changes to the code could be pushed back to the model. These tools failed to deliver on their promises and most of these attempts have now been largely abandoned.

So visual programming has failed to catch on, except in some very limited domains. This is fundamentally attributable to the following misconceptions about programming:

  • Textual programming languages obfuscate what is essentially a simple process.
  • Abstraction and decoupling play a small and peripheral part in programming.
  • The tools that have been developed to support programming are unimportant.

The first misconception holds that software development has significant barriers to entry because textual programming languages obfuscate the true nature of programming. The popularity of Scratch among educationalists plays to this misconception. The idea is that programming is actually quite simple and if we could only present it in a clear graphical format it would dramatically lower the learning curve and mental effort required to create and read software. I expect this misconception comes from a failure to actually read a typical program written in a standard textual programming language and imagine it transformed into graphical elements of boxes and arrows. If you do this it soon becomes apparent that a single line of code often maps to several boxes, and since it’s not untypical for even a simple program to contain hundreds of lines of code, this translates into hundreds or even thousands of graphical elements. Mentally parsing such a complex picture is often far harder than reading the equivalent text.

The solution for most visual programming languages is to make the ‘blocks’ represent more complex operations so that each visual element is equivalent to a large block of textual code. Visual workflow tools are a particular culprit here. The problem is that this code needs to be defined somewhere. It becomes ‘property dialogue programming’. The visual elements themselves only represent the very highest level of program flow and the majority of the work is now done in standard textual code hidden in the boxes. Now we have the worst of both worlds: textual programming unsupported by modern tooling. The property dialogues are usually sub-standard development environments and enforce a particular choice of language, usually a scripting language of some kind. Visual elements can’t be created except by experienced programmers, or understood except by reading their underlying code, so most of the supposed advantages of the visual representation are lost. There’s an impedance mismatch between the visual ‘code’ and the textual code, and the programmer has to navigate the interface between the two, often spending more effort on conforming to the needs of the graphical programming tool than solving the problem at hand.

Which brings us to the second misconception, that abstraction and decoupling are peripheral concerns. Visual programming makes the assumption that most programs are simple procedural sequences, somewhat like a flowchart. Indeed, this is how most novice programmers imagine that software works. However, once a program gets larger than a quite trivial example, the complexity soon overwhelms the novice programmer. They find that it’s very hard to reason about a large procedural code base and often struggle to produce stable and efficient software at scale. Most of the innovation in programming languages is an attempt to manage complexity, most commonly via abstraction, encapsulation and decoupling. All the type systems and apparatus of object-oriented and functional programming are really just an effort to get this complexity under control. Most professional programmers will be continually abstracting and decoupling code. Indeed, the difference between good and bad code is essentially how well this has been done. Visual programming tools rarely have efficient mechanisms to do this and essentially trap the developer in an equivalent of 1970s BASIC.

The final misconception is that visual programmers can do without all the tools that have been developed over the decades to support programming. Consider the long evolution of code editors and IDEs. Visual Studio, for example, supports efficient IntelliSense allowing the look-up of thousands of APIs available in the base class library alone. The lack of good source control is another major disadvantage of most visual programming tools. Even if they persist their layout to a textual format, the diffs make little or no sense. It’s very hard to do a ‘blame’ on a large lump of XML or JSON. Things that make no difference to the functional execution of the program, such as the position and size of the graphical elements, still lead to changes in the metadata, which make it harder still to parse a diff. Textual programming languages have learnt to separate units of code into separate source files, so a change in one part of the system is easy to merge with a change in another. Visual tools will usually persist a diagram per file, which means that merges become problematic, made harder still when the semantic meaning of the diff is difficult to parse.

In conclusion, the advantages given for visual programming tools, that they make programs easier to create and understand, are almost always a mirage. They can only succeed in the simplest of cases and at best result in the suboptimal situation where the visual elements are simply obfuscating containers for textual code.

Update…

I was probably wrong to use a screenshot of Scratch as the primary example in my first paragraph. I’m not an educator and I don’t really have an opinion about Scratch’s effectiveness as a teaching tool. Many people say that they find it enormously useful in teaching programming, especially to children. Anything that introduces more people to the wonderful and exciting world of programming is only to be celebrated. I really didn’t intend this post as a criticism of Scratch specifically; it was simply the visual programming system that I thought the largest number of people would have heard of.

Other counter-examples cited on Reddit were static structure tools, such as UI designers, database schema designers, or class designers. I agree that they can be very useful. Anything that helps to visualise the structure of data or the large scale structure of a program is a bonus. These are never enough on their own though. The ultimate failure of 90s tools such as PowerBuilder, which attempted to build on graphical visualisations to create a fully code-free development environment, attests to this.

Friday, September 14, 2018

What I Learnt Creating Guitar Dashboard: SVG, TypeScript and Music Theory.

[screenshot: Guitar Dashboard]

Guitar Dashboard is a side project I’ve been working on occasionally over the past two years. It’s an open source web application (you can find it at http://guitardashboard.com/ and the code at https://github.com/mikehadlow/gtr-cof). It’s intended as an interactive music theory explorer for guitarists that graphically links theoretical concepts, such as scales, modes and chords, to the guitar fretboard. It evolved out of my own attempts, as an amateur guitarist, to get a better understanding of music theory. It includes an algorithmic music theory engine that allows arbitrarily complex scales and chords to be generated from first principles. This gives it far more flexibility than most comparable tools. Coming at music theory from the point of view of a software developer, and implementing a music theory rules engine, has given me a perspective that’s somewhat different from most traditional approaches. This post outlines what I’ve learnt, technically and musically, while building Guitar Dashboard. There are probably things here that are only interesting to software developers, and others only of interest to musicians, but I expect there’s a sizable group of people, like me, who fit in the intersection of that Venn diagram and who will find it interesting.

Why Guitar Dashboard?

Guitar Dashboard’s core mission is to graphically and interactively integrate music theory diagrams, the chromatic-circle and circle-of-fifths, with a graphical representation of the fretboard of a stringed instrument. It emerged from my own study of scales, modes and chords over the past three or four years.

I expect, like many self-taught guitarists, my main aim when I first learnt to play at the age of 15 was to imitate my guitar heroes, Jimmy Page, Jimi Hendrix, Steve Howe, Alex Lifeson and others. A combination of tips from fellow guitarists, close listening to the 60s and 70s rock canon, and a ‘learn rock guitar’ book was enough to get me to a reasonable imitation. I learnt how to play major and minor bar chords and a pentatonic scale for solos and riffs. This took me happily through several bands in my 20s and 30s. Here’s me on stage in the 1980s with The Decadent Herbs.

[photo: on stage with The Decadent Herbs]

I was aware that there was a whole school of classical music theory, but it didn’t at first appear to be relevant to my rock ambitions, and any initial attempts at finding out more soon came to grief on the impenetrable standard music notation and vocabulary, and the very difficult mapping of stave to fretboard. I just couldn’t be bothered with it. I knew there were major and minor scales, I could even play C major on my guitar, and I’d vaguely heard of modes and chord inversions, but that was about it. In the intervening years I’ve continued to enjoy playing guitar, except these days it’s purely for my own amusement, but I’d become somewhat bored with my limited range of musical expression. It wasn’t until around four years ago, on a train ride, that a question popped into my head: “what is a ‘mode’ anyway?”

In the intervening decades since my teenage guitar beginnings the internet had happened, so while then I was frustrated by fusty music textbooks, now Wikipedia, immediately to hand on my phone, provided a clear initial answer to my ‘what is a mode?’ question, followed soon after by a brilliant set of blog posts by Ethan Hein, a music professor at NYU. His clear explanations of how scales are constructed from the 12 chromatic tones by selecting certain intervals, and how chords are then constructed from scales, and especially how he relates modes to different well known songs, opened up a whole new musical world for me. I was also intrigued by his use of the circle-of-fifths, which led me to look for interactive online versions. I found Rand Scullard’s excellent visualisation a great inspiration. At the same time, in my professional work as a software developer, I’d become very excited by the possibilities of SVG for interactive browser based visualisations and realised that Rand’s circle-of-fifths, which he’d created by showing and hiding various pre-created PNG images, would be very easy to reproduce with SVG, and that I could drive it from an algorithmic music engine implemented from the theory that Ethan Hein had taught me. The flexibility offered by factoring out the music generation from the display also meant that I could easily add new visualisations, the obvious one being a guitar fretboard.

My first version was pretty awful. Driven by the hubris of the novice, I’d not really understood the subtleties of note or interval naming and my scales sometimes had duplicate note names amongst other horrors. I had to revisit the music algorithm a few times before I realised that intervals are the core of the matter and the note names come out quite easily once the intervals are correct. The algorithmic approach paid off though; it was very easy to add alternative tunings and instruments to the fretboard since it was simply a case of specifying a different set of starting notes for each string, and any number of strings. Flipping the nut and providing a left-handed fretboard were similarly straightforward. I more recently added non-diatonic scales (access them via the ‘Scale’ menu). This also came out quite easily since the interval specification for the original diatonic scale is simply a twelve-element Boolean array. Unfortunately the note naming issue appears again, especially for non-seven-note scales. Moving forward, it should be relatively easy to add a piano keyboard display, or perhaps, to slay an old demon, a musical stave that would also display the selected notes.

For an introduction to Guitar Dashboard, I’ve created a video tour:

So that’s Guitar Dashboard and my motivation for creating it. Now a brief discussion of some of the things I’ve learnt. First some technical notes about SVG and TypeScript, and then some reflections on music theory.

The awesome power of SVG.

The visual display of Guitar Dashboard is implemented using SVG.

SVG (Scalable Vector Graphics) is an “XML-based vector image format for two-dimensional graphics with support for interactivity and animation.” (Wikipedia). All modern browsers support it. You can think of it as the HTML of vector graphics. The most common use case for SVG is simple graphics and graphs, but it really shines when you introduce animation and interactivity. Have a look at these blog posts to see some excellent examples.

I was already a big fan of SVG before I started work on Guitar Dashboard and the experience of creating it has only made me even more enamoured. The ability to programmatically build graphical interactive UIs or dashboards is SVG’s strongest, but most underappreciated asset. It gives the programmer, or designer, far more flexibility than image based manipulation or HTML and CSS. The most fine grained graphical elements can respond to mouse events and be animated. I used the excellent D3js library as an interface to the SVG elements but I do wonder sometimes whether it was an appropriate choice. As a way of mapping data sets to graphical elements, it’s wonderful, but I did find myself fighting it to a certain extent. Guitar Dashboard is effectively a data generator (the music algorithm) and some graphs (the circles and the fretboard), but the graphs are so unlike most D3js applications that it’s possible I would have been better off just manipulating the raw SVG or developing my own targeted library.

Another strength of SVG is the tooling available to manipulate it. Not only is it browser native, which also means that it’s easy to print and screen-shot, but there are also powerful tools, such as the open source vector drawing tool Inkscape, that make it easy to create and modify SVG documents. One enhancement that I’m keen to include in Guitar Dashboard is a ‘download’ facility that will allow the user to download the currently rendered SVG as a file that can be opened and modified in Inkscape or similar tools. Imagine if you want to illustrate a music theory article, or guitar lesson: it would be easy to select what you want to see in Guitar Dashboard, download the SVG and then edit it at will. You could easily just cut out the fretboard, or the circle-of-fifths, if that’s all you needed. You could colour and annotate the diagrams in any way you wanted. Because SVG is a vector graphics format, you can blow up an SVG diagram to any size without any loss of sharpness. You could print a billboard with a Guitar Dashboard graphic and it would be completely sharp. This makes it an excellent choice for printed materials such as textbooks.

TypeScript makes large browser based applications easy.

Creating Guitar Dashboard was my first experience of writing anything serious in TypeScript. I’ve written plenty of JavaScript during my career, but I’ve always found it a rather unhappy experience and I’ve always been relieved to return to the powerful static type system of my main professional language C#. I’ve experimented with Haskell and Rust, which both have even stronger type systems, and the experience with Haskell of “if it compiles it will run” is enough to make anyone who might have doubted the power of types a convert. I’ve never understood the love for dynamic languages. Maybe for a beginner the learning curve of an explicit type system seems quite daunting, but for anything but the simplest application, its lack means introducing a whole class of bugs and confusion that simply don’t exist for a statically typed language. Sure, you can write a million unit tests to ensure you get what you think you should get, but why have that overhead?

TypeScript allows you to confidently create large scale browser based applications. I found it excellent for making Guitar Dashboard. I’m not sure I am writing particularly good TypeScript code though. I soon settled into basing everything around interfaces, enjoying the notion of structural rather than nominal typing. I didn’t use much in the way of composition and there’s no dependency injection. Decoupling is achieved with a little home-made event bus:

    export class Bus<T> {
        private listeners: Array<(x:T)=>void> = [];
        private name: string;

        constructor(name: string) {
            this.name = name;
        }

        public subscribe(listener: (x:T)=>void): void {
            this.listeners.push(listener);
        }

        public publish(event: T): void {
            //console.log("Published event: '" + this.name + "'")
            for (let listener of this.listeners) {
                listener(event);
            }
        }
    }

A simple event bus is just a device to decouple code that wants to announce that something has happened from code that wants to know when it does. It’s a simple collection of functions that get invoked every time an event is published. The core motivation is to prevent event producers and consumers from having to know about each other. There’s one instance of Bus for each event type.

Each of the main graphical elements is its own namespace, which I treated like a stand-alone module. Each subscribes to and raises typed events via a Bus instance. I only created classes when there was an obvious need, such as the Bus class above and the NoteCircle class, which has two instances: the chromatic-circle and the circle-of-fifths. I didn’t write any unit tests either, although now I think the music module algorithm is complex enough that it’s really crying out for them. Guitar Dashboard is open source, so you can see for yourself what you think of my TypeScript by checking it out on GitHub.

Another advantage of TypeScript is the excellent tooling available. I used VS Code which itself is written in TypeScript and which supports it out-of-the-box. The fact that VS Code has been widely adopted outside of the Microsoft ecosystem is a testament to its quality as a code editor. It came top in the most recent Stack Overflow developer survey. I’ve even started experimenting with using it for writing C# and it’s a pretty good experience.

What I learnt about music.

Music is weird. Our ears are like a serial port into our brain. With sound waves we can reach into our cerebral cortex and tweak our emotions or tickle our pleasure senses. A piece of music can take you on a journey, but one which bears no resemblance to concrete reality. Music defines human cultures and can make and break friendships; people feel that strongly about it. But fundamentally it’s just sound waves. It greatly confuses evolutionary psychologists. What possible survival advantage does it confer? Maybe it’s the human equivalent of the peacock’s tail; a form of impressive display; a marker of attendant mental agility and fitness? Who knows. What is true is that we devote huge resources to the production and consumption of music: the hundreds of thousands of performers; the huge marketing operations of the record companies; the global business of producing and selling musical instruments and the kit to record it and play it back. The biggest company in the world, Apple, got its second wind from a music playback device, and musical performers are amongst the most popular celebrities.

But why do our brains favour some forms of sound over others? What makes a melody, a harmony, a rhythm, more or less attractive to us? I recently read a very good book on this subject, The Music Instinct by Philip Ball. The bottom line is that we have no idea why music affects us like it does, but that’s unsurprising given that the human brain is still very much a black box to science. It does show, however, that across human cultures there are some commonalities: rhythm, the recognition of the octave, where we perceive two notes an octave apart as being the same note, and also something close to the fifth and the third. It’s also true that music is about ratios between frequencies rather than the frequencies themselves, with perhaps the exception of people with perfect pitch. The more finely grained the intervals become, the more cultures diverge, and it’s probably safe to say that the western twelve tone chromatic scale with its ‘twelfth root of two’ ratio is very much a technical innovation to aid modulation rather than something innate to the human brain. Regardless of how much is cultural or innate, the western musical tradition is very much globally dominant. Indeed, it’s hard to buy a musical instrument that isn’t locked down to the twelve note chromatic scale.

However, despite having evolved a very neat, mathematical and logical theory, western music suffers from a common problem that bedevils any school of thought that’s evolved over centuries: a complex and difficult vocabulary and a notation that obfuscates rather than reveals the structure of what it represents. Using traditional notation to understand music theory is like doing maths with Roman numerals. In writing the music engine of Guitar Dashboard, by far the most difficult challenges have been outputting the correct names for notes and intervals.

This is a shame, because the fundamentals are really simple. I will now explain western music theory in four steps:

  1. Our brains interpret frequencies an octave apart as the same ‘note’, so we only need to care about the space between n and 2n frequencies.
  2. Construct a ratio such that applying the ratio to n twelve times gives 2n. Maths tells you that this must be the 12th root of 2 (first described by Simon Stevin in 1580). Each step is called a semitone.
  3. Start at any of the twelve resulting notes and jump up or down in steps of 7 semitones (traditionally called a 5th) until you have a total of 7 tones/notes. Note that we only care about n to 2n, so going up two sets of 7 semitones (or two 5ths) is the same as going up 2 semitones (a tone) (2 x 7 – 12 = 2. In music all calculations are mod 12). This is a diatonic scale. If you choose the frequency 440hz, jump down one 7-semitone step and up 5, you have an A major scale. Up two 7-semitone steps and down four gives you A minor. The other five modes (Lydian, Mixolydian, Dorian, Phrygian and Locrian) are just different numbers of up and down 7-semitone steps.
  4. Having constructed a scale, choose any note. Count 3 and 5 steps of the scale (the diatonic scale you just constructed, not the original 12 step chromatic scale) to give you three notes. This is a triad, a chord. Play these rhythmically in sequence while adding melody notes from the scale until you stumble across something pleasing.

That, in four simple steps, is how you make western music.
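
For the programmers, here’s a minimal sketch of steps 1 to 3 in C# (my own illustration for this post; Guitar Dashboard itself implements this more generally, and in TypeScript):

    using System;
    using System.Linq;

    static class ScaleSketch
    {
        // The twelve chromatic note names, starting from A.
        static readonly string[] NoteNames =
            { "A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#" };

        static void Main()
        {
            // Step 3 for A major: from A (offset 0), jump down one 7-semitone
            // step and up five more, giving seven notes in total.
            var offsets = Enumerable.Range(-1, 7)               // jumps -1, 0, 1 ... 5
                .Select(jump => ((jump * 7) % 12 + 12) % 12)    // mod 12, kept positive
                .OrderBy(offset => offset);

            // Prints: A B C# D E F# G# - the A major scale.
            Console.WriteLine(string.Join(" ", offsets.Select(o => NoteNames[o])));
        }
    }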

OK, that’s a simplification, and the most interesting music breaks the rules, but this simple system is the core of everything else you will learn. But try to find this in any music textbook and it simply isn’t there. Instead there is arcane language and confusing notation. I really believe that music education could be far simpler with a better language, notation and tools. Guitar Dashboard is an attempt to help people visualise this simplicity. Everything but the fretboard display is common to all musical instruments. It’s only aimed at guitarists because that’s what I play and it also helps that guitar is the second most popular musical instrument. The most popular, piano, would be easy to add. Piano Dashboard anyone?

Wednesday, September 05, 2018

The Possibilities of Web MIDI With TypeScript

If you’ve ever had any experience with music technology, or more specifically sequencers, keyboards or synthesisers, you will have come across MIDI (Musical Instrument Digital Interface). It’s used to send note and controller messages from musical devices, such as keyboards or sequencers which are used to play and record music, to devices that produce sounds, such as samplers or synthesizers. It’s pure control information, for example, “play a C# in the 3rd octave with a velocity of 85”; there’s no actual audio involved. It dates back to the early 1980s, when a group of musical instrument manufacturers such as Roland, Sequential Circuits, Oberheim, Yamaha and Korg got together to define the standard. It soon led to a huge boom in low cost music production and the genesis of new musical styles. It’s no accident that rap and electronic dance music date from the mid to late 80s.

Web MIDI is a new W3C specification for an API to allow browser applications to access MIDI input and output devices on the host machine. You can enumerate the devices, then choose to listen for MIDI messages, or format and send your own messages. It’s designed to allow applications to consume and emit MIDI information at the protocol level, so you receive and send the actual raw message bytes rather than the API providing the means to play MIDI files using General MIDI, for example. Don’t let this put you off though, the protocol is very simple to interpret as I’ll demonstrate later.

The potential for a large new class of browser based musical applications is huge. The obvious examples are things like browser based sequencers and drum machines emitting MIDI messages, and synthesizers and samplers on the consuming side using Web Audio, another interesting new standard. But it goes much wider than that: the MIDI protocol is ideally suited to any real-time parameter control. It’s already widely used for lighting rigs and special effects in theatrical productions, for example. Also, because it’s such an established standard, there are all kinds of cheaply available hardware controller interfaces full of knobs and buttons. If you’ve got any application that requires physical control outside the range of keyboard/mouse/trackpad, it might be a solution. Imagine a browser based application that allowed you to turn knobs on a cheap MIDI controller to tweak the parameters of a mathematical visualisation, or some network based industrial controller, or even as new input for browser based games. The possibilities are endless.


I’m going to show a simple TypeScript example. I’m currently working on a TypeScript application that consumes MIDI and I couldn’t find much good example code so I’m hoping this might help. I’m using the type definitions from here: https://www.npmjs.com/package/@types/webmidi.

The entry point into the new API is a new method on navigator, requestMIDIAccess. This returns a Promise<MIDIAccess> that you can use to enumerate the input and output devices on the system. Here I’m just looking for input devices:

window.navigator.requestMIDIAccess()
    .then((midiAccess) => {
        console.log("MIDI Ready!");
        for(let entry of midiAccess.inputs) {
            console.log("MIDI input device: " + entry[1].id)
            entry[1].onmidimessage = onMidiMessage;
        }
    })
    .catch((error) => {
        console.log("Error accessing MIDI devices: " + error);
    });

I’ve bound my onMidiMessage function to the onmidimessage event on every input device. This is the simplest possible scenario; it would be better to provide an option for your user to choose the device they want to use. This allows us to process MIDI events as they arrive from MIDI devices.

MIDI events arrive as byte arrays with a length of 1 to 3 bytes. The first byte is always the ‘status’ byte. The four most significant bits are the status type. Here we’re only concerned with note on (9) and off (8) messages. The four least significant bits tell us the MIDI channel. This allows up to 16 different devices, or voices, to be controlled by a single controller device. If you ignore the channel, as we’re doing here, it’s known as OMNI mode. For note on/off messages, the second byte is the note number and the third is the velocity, or how loud we want the note to sound. The note number describes the frequency of the note using the classical western chromatic scale; good luck if you want to make Gamelan dance music! The notes go from C0 (around 8Hz) to G10 (approx 12543Hz). This is much wider than a grand piano keyboard and sufficient for the vast majority of applications. See the code for how to convert the note number to name and octave. See this page and the Wikipedia page for more details.

In this example we filter for on/off messages, then write the channel, note name, command type and velocity to the console:

let noteNames: string[] = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

function onMidiMessage(midiEvent: WebMidi.MIDIMessageEvent): void {
    let data: Uint8Array = midiEvent.data;
    if(data.length === 3) {
        // status is the first byte.
        let status = data[0];
        // command is the four most significant bits of the status byte.
        let command = status >>> 4;
        // channel 0-15 is the lower four bits.
        let channel = status & 0xF;

        console.log(`Command: ${command.toString(16)}, Channel: ${channel.toString(16)}`);

        // just look at note on and note off messages.
        if(command === 0x9 || command === 0x8) {
            // note number is the second byte.
            let note = data[1];
            // velocity is the third byte.
            let velocity = data[2];

            let commandName = command === 0x9 ? "Note On " : "Note Off";

            // calculate octave and note name.
            let octave = Math.trunc(note / 12);
            let noteName = noteNames[note % 12];

            console.log(`${commandName} ${noteName}${octave} ${velocity}`);
        }
    }
}

Here’s the output. I’m using VMPK (Virtual MIDI Piano Keyboard) to play the notes. You’ll also need a MIDI loopback device such as loopMIDI if you want to connect software devices, but it should be plug and play with a hardware controller:

[screenshot: console output]

So there we have it. MIDI is now very easy to integrate into a browser based application. I’ve demonstrated this with just a few lines of code. It opens up possibilities for a new class of software, and not just for musical applications. It’s going to be very interesting to see what people do with it.

Thursday, January 18, 2018

Configure AsmSpy as an external tool in Visual Studio

AsmSpy is a tool I wrote a few years ago to view assembly version conflicts. Despite the fact that it started as a command line application of a single page of code, it’s been one of my more successful open source efforts. I still use it all the time, especially now with the ‘forking’ of .NET into Framework and Core and the spreading use of .NET Standard, both good things IMHO, but not without the occasional assembly version head scratcher.

Today I want to show how easy it is to integrate AsmSpy into Visual Studio as an ‘external tool’.

First download AsmSpy from the GitHub repository. If you download the zip file, you’ll see that it’s merely a stand-alone exe that you can run from the command line.

In VS select External Tools from the ‘Tools’ menu.

[screenshot: the External Tools menu item]

Now configure AsmSpy as follows:
  • Title: AsmSpy
  • Command: the path to where you’ve put the AsmSpy.exe file.
  • Arguments: $(BinDir) - this points AsmSpy at the output directory of the currently selected project.
  • Initial Directory: $(ProjectDir)
  • Use Output Window: checked - this ensures that the output from AsmSpy will go to Visual Studio’s output window.

[screenshot: the External Tools dialogue]

Now you can select a project in Solution Explorer and go to Tools –> AsmSpy. AsmSpy will run against the build output of your project and you can view assembly version conflicts in the Visual Studio output window.

Thursday, January 14, 2016

Running The KestrelHttpServer On Linux With CoreCLR

Being both a long-time .NET developer and Linux hobbyist, I was very excited about the recent ‘go live’ announcement for CoreCLR on Linux (and Windows and Mac). I thought I’d have a play with a little web server experiment on an Amazon EC2 instance. To start with I tried to get the KestrelHttpServer sample application working, which wasn’t as easy as I’d hoped, so this post is a note of the steps you currently need.

So first create a new Ubuntu Server 14.04 AMI:

[screenshot]

With a t2.micro instance type:

[screenshot]

Next log in and update:

ssh -i .ssh/mykey.pem ubuntu@-the-ip-address
...
sudo apt-get update
sudo apt-get upgrade

Currently there are two different sets of instructions for installing CoreCLR on Linux. The first one I found (I think linked from Scott Hanselman’s blog) shows how to use the standard Debian package manager to install the new ‘dotnet’ command line tool. Apparently Kestrel will not currently work with this. The second set of instructions use the existing ‘dnvm’, ‘dnu’ and ‘dnx’ tools. These do work, but you need to get the latest unstable RC2 version of CoreCLR, like this:

First install the Dot Net Version Manager tool:

sudo apt-get install unzip curl
curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
source /home/ubuntu/.dnx/dnvm/dnvm.sh

Next install the latest unstable (thus the '-u' flag) CoreCLR:

sudo apt-get install libunwind8 gettext libssl-dev libcurl4-openssl-dev zlib1g libicu-dev uuid-dev
dnvm upgrade -u -r coreclr

At the time of writing this installed 1.0.0-rc2-16357:

[screenshot]

You also need to follow the instructions to build the libuv library from source. This sounds hairy, but it worked fine for me:

sudo apt-get install make automake libtool curl
curl -sSL https://github.com/libuv/libuv/archive/v1.8.0.tar.gz | sudo tar zxfv - -C /usr/local/src
cd /usr/local/src/libuv-1.8.0
sudo sh autogen.sh
sudo ./configure
sudo make
sudo make install
sudo rm -rf /usr/local/src/libuv-1.8.0 && cd ~/
sudo ldconfig

Next get the KestrelHttpServer source code:

git clone https://github.com/aspnet/KestrelHttpServer.git

Restore the Kestrel packages by running dnu restore in the root of the repository:

cd KestrelHttpServer
dnu restore

Next navigate to the sample app and restore the packages there too:

cd samples/SampleApp/
dnu restore

Now you should be able to run the sample app by typing:

dnx web

[screenshot]

VoilĂ !

There’s obviously some way to go before this is a straightforward out-of-the-box experience. The team should also try and unify their getting started instructions because there are various conflicting pages floating around. The Kestrel team were very helpful though in getting this working. Now to do something with my new-found Linux web server.

Friday, December 04, 2015

Learn To Code, It’s Harder Than You Think

TL;DR: All the evidence shows that programming requires a high level of aptitude that only a small percentage of the population possess. The current fad for short learn-to-code courses is selling people a lie and will do nothing to help the skills shortage for professional programmers.

This post is written from a UK perspective. I recognise that things may be very different elsewhere, especially concerning the social standing of software developers.

It’s a common theme in the media that there is a shortage of skilled programmers (‘programmers’, ‘coders’, ‘software developers’, all these terms mean the same thing and I shall use them interchangeably). There is much hand-wringing over this coding skills gap. The narrative is that we are failing to produce candidates for the “high quality jobs of tomorrow”. For example, this from The Telegraph:

“Estimates from the Science Council suggest that the ICT workforce will grow by 39 per cent by 2030, and a 2013 report from O2 stated that around 745,000 additional workers with digital skills would be needed to meet demand between now and 2017.

Furthermore, research by City & Guilds conducted last year revealed that three quarters of employers in the IT, Digital and Information Services Sector said that their industry was facing a skills gap, while 47 per cent of employers surveyed said that the education system wasn’t meeting the needs of business.”

Most commentators see the problem as being a lack of suitable training. Not enough programmers are being produced from our educational institutions. For example, here is Yvette Cooper, a senior Labour party politician, in The Guardian:

“The sons and daughters of miners should all be learning coding. We have such huge advantages because of the world wide web being invented as a result of British ingenuity. We also have the English language but what are we doing as a country to make sure we are at the heart of the next technology revolution? Why are we not doing more to have coding colleges and technical, vocational education alongside university education?”

There is also a common belief in the media that there are high barriers to entry to learning to code. This from the Guardian is typical:

“It’s the must-have skill-set of the 21st century, yet unless you’re rich enough to afford the training, or fortunate enough to be attending the right school, the barriers to learning can be high.”

So the consensus seems to be that high barriers to entry and a lack of accessible training mean that only a rich and well educated elite have access to these highly paid jobs. The implication is that there is a large population of people for whom programming would be a suitable career if only they could access the education and training that is currently closed to them.

In response, there are now a number of initiatives to encourage people to take up programming. The UK government created ‘Year of Code’ in 2014:

[screenshot: the Year of Code campaign]

The message is “start coding this year, it’s easier than you think.” Indeed the executive director of Year of Code, Lottie Dexter, said in a Newsnight interview that people can “pick it up in a day”. Code.org, a “non-profit dedicated to expanding participation in computer science education”, says on its website, “Code.org aims to help demystify that coding is difficult”.

[screenshot: Code.org’s Hour of Code]

So is it really that easy to learn how to code and get these high paying jobs? Is it really true that anyone can learn to code? Is it possible to take people off the streets, give them a quick course, and produce professional programmers?

What about more traditional formal education? Can we learn anything about training programmers from universities?

Given the skills shortage one would expect graduates from computer science courses to have very high employment rates. However, it seems that is not the case. The Higher Education Statistics Agency found that computer science graduates have “the unwelcome honour of the lowest employment rate of all graduates.” Why is this? Anecdotally there seems to be a mismatch between the skills the students graduate with and those that employers expect them to have. Or more bluntly, after three years of computer science education they can’t code. A comment on this article by an anonymous university lecturer has some interesting insights:

“Every year it's the same - no more than a third of them [CS students] are showing the sort of ability I would want in anyone doing a coding job. One-third of them are so poor at programming that one would be surprised to hear they had spent more than a couple of weeks supposedly learning about it, never mind half-way through a degree in it. If you really test them on decent programming skills, you get a huge failure rate. In this country it's thought bad to fail students, so mostly we find ways of getting them through even though they don't really have the skills.”

Other research points to similar results. There seems to be a ‘double hump’ in the outcome of any programming course between those who can code and those who can’t.

“In particular, most people can't learn to program: between 30% and 60% of every university computer science department's intake fail the first programming course.”

Remember we are talking about degree level computing courses. These are students who have been accepted by universities to study computer science. They must be self selecting to a certain extent. If the failure rate for programming courses is so high amongst undergraduates it would surely be even higher amongst the general population - the kinds of candidates that the short ‘learn to code’ courses are attempting to attract.

Let’s look at the problem from the other end of the pipeline. Let’s take successful professional software developers and ask them how they learnt to code. One would expect from the headlines above that they had all been to expensive, exclusive coding schools. But here again that seems not to be the case. Here are the results of the 2015 Stack Overflow developers survey. Note that this was a global survey, but I think the results are relevant to the UK too:

[chart: Stack Overflow 2015 developer survey results]

Only a third have a computer science or related degree and nearly 42%, the largest group, are self taught. I have done my own small and highly unscientific research on this matter. I run a monthly meet-up for .NET developers here in Brighton, and a quick run around the table produced an even more pronounced majority for the self-taught. For fun, I also did a quick Twitter poll:

[screenshot: my Twitter poll]

76% say they are self taught. Also interesting were the comments around the poll. This was typical:

[screenshot: a typical reply]

Even programmers with CS degrees insist that they are largely self taught. Others complained that it was a hard question to answer since the rate of change in the industry means that you never stop learning. So even if you did at some point have formal training, you can’t rely on that for a successful career. Any formal course will be just a small element of the continual learning that defines the career of a programmer.

We are left with a very strange and unexpected situation. Formal education for programmers seems not to work very well, and yet the majority of successful programmers are self taught. On the one hand we seem to have people who don’t need any guided education to give them a successful career; they are perfectly capable of learning their trade from the vast sea of online resources available to anyone who wants to use it. On the other hand we have people who seem unable to learn to code even with years of formal training.

This rather puts the lie to the barriers to entry argument. If the majority of current professional software developers are self taught, how can there be barriers to entry? Anyone with access to the internet can learn to code if they have the aptitude for it.

The evidence points to a very obvious conclusion: there are two populations: one that finds programming a relatively painless and indeed enjoyable thing to learn and another that can’t learn no matter how good the teaching. The elephant in the room, the thing that Yvette Cooper and the ‘Year of Code’ or ‘Hour of Code’ people seem unwilling to admit, is that programming is a very high-aptitude task. It is not one that ‘anyone can learn’, and it is not easy, or rather it is easy, but only if you have the aptitude for it. The harsh fact is that most people will find it impossible to get to any significant standard.

If we accept that programming requires a high level of aptitude, it’s fun to compare some of the hype around the ‘learn to code’ movement with more established high-aptitude professions. Just replace ‘coder’ or ‘coding’ with ‘doctor’, ‘engineer’, ‘architect’ or ‘mathematician’.

  • “You can pick up Maths in a day.”
  • Start surgery this year, it’s easier than you think!
  • skyscraper.org aims to help demystify that architecture is difficult.
  • “The sons and daughters of miners should all be learning to be lawyers.”

My friend Andrew Cherry put it very well:

[screenshot: Andrew Cherry’s tweet]

Answer: only one, software development. You want to be a doctor? Go to medical school for seven years.

Accepting that aptitude is important for a successful career in programming, we can approach the ‘shortage’ problem from a different angle. We can ask how we can persuade talented people to choose programming rather than other high-aptitude professions. The problem is that these individuals have a great deal of choice in their career path and, as I’m going to explain, programming has a number of negative social and career attributes which make them unlikely to choose it.

There’s no doubt that software development is a very attractive career. It’s well paid, mobile, and the work itself is challenging and rewarding. But it has an image problem. I first encountered this at university in the 1990s. I did a social science degree (yes, I’m one of those self-taught programmers). Socially, we arts students looked down on people studying computer science; they were the least cool students on the campus - mostly guys, with poor dress sense. If anyone considered them at all it was with a sense of pity and loathing. When, towards the end of my degree, I told my then girlfriend, another social science student, that I might choose a career in programming, she exclaimed, “oh no, what a waste. Why would you want to do that?” If you did a pop-quiz at any middle-class gathering in the UK and asked people to compare, say, medicine, law, architecture or even something like accountancy, with software development, I can guarantee that they would rate it as having a lower social status. Even within business, or at least more traditional businesses, software development is seen as a relatively menial middle-brow occupation suitable for juniors and those ill-qualified for middle management. Perversely, all these courses saying ‘learn to code, it’s easy’ just reinforce the perception that software development is not a serious career.

There’s another problem with software development that’s the flip side of the low barriers to entry mentioned above, and that is that there is no well-established entry route into the profession. Try Googling for ‘how to become a doctor’, or ‘how to become a lawyer’ for example:

[screenshot: Google results for ‘how to become a doctor’]

There is a well-established series of steps to a recognised professional qualification. If you complete the steps, you become a recognised member of one of these professions. I’m not saying it’s easy to qualify as a doctor, but there’s little doubt about how to go about it. Now Google for ‘how to become a software developer’; the results, like this one for example, are full of vague platitudes like ‘learn a programming language’, ‘contribute to an open source project’, ‘go to a local programming group’. No clear career path, no guarantees about when and if you will be considered a professional and get access to those high-paying jobs of the future.

Yes, I made this up, but it makes the point. :)

Now take a high-aptitude individual who has done well at school and finds demanding intellectual tasks relatively straightforward, and offer them a choice: on the one hand, here is a career, let’s take medicine for example: you follow these clearly enumerated steps, which are demanding, but you are good at passing exams, and at the end you will have a high-status, high-paying job. Or, how about this career: go away, learn some stuff by yourself, we’re not sure exactly what; try and get a junior, low status job, and just learn more stuff – which you can work out somehow – and work your way up. No guarantees that there’s a well-paying job at the end of it. Oh, and, by the way, the whole world will think you are a bit of a social pariah while you are about it. Which would you choose?

So could software development follow the example of older professions and establish a professional qualification with high barriers to entry? There are attempts to do this. The British Computer Society (BCS) calls itself ‘the chartered institute for IT’ and seeks to establish professional qualifications and standards. The problem is that it’s comprehensively ignored by the software industry. Even if you could get the industry to take a professional body seriously, how would you test people to see if they qualified? What would be on the exam? There are very few established practices in programming and as soon as one seems to gain some traction it gets undermined by the incredibly rapid pace of change. Take object-oriented programming for example. In the 2000s, it seemed to be establishing itself as the default technique for enterprise programming, but now many people, including myself, see it as a twenty-year diversion and largely a mistake. How quickly could programming standards and qualifications keep up with current practice? Not quickly enough, I suspect.

However, my main point in this post has been to establish that programming is a high-aptitude task, one that only some people are capable of doing with any degree of success. If the main point of a professional qualification is to filter out people who can’t code, does it really matter if what is being tested for is out of date, or irrelevant to current industry practices? Maybe our tentative qualification would involve the completion of a reasonably serious program in LISP? A kind of Glass Bead Game for programmers? The point would be to find out if they can code. They can learn what the current fads are later. The problem still remains how to get industry to recognise the qualification.

In the meantime we should stop selling people a lie. Programming is not easy, it is hard. You can’t learn to code, certainly not to a standard to get a well-paid-job-of-the-future, in just a few weeks. The majority of the population cannot learn to code at all, no matter how much training they receive. I doubt very much if the plethora of quick learn-to-code courses will have any impact at all on the skills shortage, or on the problems of low pay and unemployment among the unskilled. Let’s stop pretending that there are artificial barriers to entry and accept that the main barrier to anyone taking it up is their natural aptitude for it. Instead let’s work on improving the social status of the software industry – I think this is in any case happening slowly – and also work on encouraging talented young people to consider it as a viable alternative to some of the other top professions.

Thursday, September 10, 2015

Partial Application in C#

My recent post, C#: Program Entirely With Static Methods, got lots of great comments. Indeed, as is often the case, the comments are in many ways a better read than the original post. However, there were several commenters who claimed that C# does not have partial application. I take issue with this. Any language that supports higher order functions, that is, functions that can take functions as arguments and can return functions, by definition supports partial application. C# supports higher order functions, so it also supports partial application.

Let me explain.

Let’s start by looking at partial application in F#. Here’s a simple function that adds two numbers (you can type this into F# interactive):

> let add a b = a + b;;

Now we can use our ‘add’ function to add two numbers, just as we’d expect:

> add 3 4;;
val it : int = 7

But because F# supports partial application we can also do this:

> let add3 = add 3;;
> add3 4;;
val it : int = 7

We call add with a single argument and it returns a function that takes a single argument which we can then use to add three to any number.

That’s partial application. Of course, if I try this in C# it doesn’t work:
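
A sketch of the attempt:

    Func<int, int, int> add = (a, b) => a + b;
    // var add3 = add(3); // does not compile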


You get a red squiggly line saying “delegate Func has two parameters but is invoked with one argument”.

Case proven you say: C# does not support partial application!

But wait!

Let’s look again at the F# add function. This time I’ll include the response from F# interactive:

> let add a b = a + b;;
val add : a:int -> b:int -> int

This shows us the type of the add function. The important bit is: “a:int -> b:int -> int”. This tells us that ‘add’ is a function that takes an int and returns a function that takes an int and returns an int. It is not a function with two arguments. F# is a restrictive language: it only has functions with single arguments. That is a good thing. See Mark Seemann’s post Less is More: Language Features for an in-depth discussion of how taking features away from a language can make it better. When people say “F# supports partial application” what they really mean is that “F# functions can only have one argument.” The F# compiler understands the syntax ‘let add a b = …’ to mean “I want a function that takes a single argument and returns a function that takes another single argument.”

There’s nothing to stop us from defining our C# function with the same signature as our F# example. Then we can partially apply it in the same way:
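
Something like this:

    Func<int, Func<int, int>> add = a => b => a + b;

    var add3 = add(3);
    Console.WriteLine(add3(4)); // prints 7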


There you are: partial application in C#. No problem at all.

“But!” You cry, “That’s weird and unusual C#. I don’t want to define all my functions in such a strange way.” In that case, let me introduce you to my friend Curry. It’s not a spicy dish of South Asian origin but the process of turning a function with multiple arguments into a chain of functions that each take a single argument. We can define a series of overloaded Curry extension methods:
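
A sketch of the two and three argument overloads (the pattern extends mechanically to more arguments):

    public static class CurryExtensions
    {
        public static Func<TA, Func<TB, TR>> Curry<TA, TB, TR>(this Func<TA, TB, TR> f)
        {
            return a => b => f(a, b);
        }

        public static Func<TA, Func<TB, Func<TC, TR>>> Curry<TA, TB, TC, TR>(this Func<TA, TB, TC, TR> f)
        {
            return a => b => c => f(a, b, c);
        }
    }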


We can then use them to turn ‘ordinary’ C# functions with multiple arguments into higher-order functions which we can partially apply:
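
For example:

    Func<int, int, int> add = (a, b) => a + b;

    var add3 = add.Curry()(3);
    Console.WriteLine(add3(4)); // prints 7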


Thinking more about Mark Seemann’s blog post, it would be an interesting exercise to start to take features away from C# whilst keeping syntactic changes to a minimum. If we took away multiple function arguments, classes, interfaces, nullable types, default mutability etc., would we end up with a subset language that would be perfect for functional programming, but still familiar to C# developers? You would of course lose backward compatibility with existing C# code, so the incentive to do it isn’t that great, but it’s a fascinating thought experiment.

Friday, August 07, 2015

C#: Program Entirely With Static Methods

OK, that’s a provocative title to get your attention. This post is really about how one can move to a more functional programming style and remove the need for much of the apparatus of object-oriented programming, including interfaces and classes. In this post, I’m going to take some typical object-oriented C# code and refactor it in a more functional style. I’ll show that the result is more concise and easier to test.

Over the past couple of years I’ve noticed that my C# coding style has changed drastically under the influence of functional programming. Gone are interfaces and instance classes, replaced by static methods, higher-order functions and closures. It’s somewhat ironic since I spent many years as a cheerleader for object-oriented programming and I considered static methods a code smell.

I guess if I look at my programming career, it has the following progression:

Procedural -> Object-Oriented -> Functional

The OO phase now looks like something of a detour.

C# has all the essential features you need for functional programming – higher-order functions, closures, lambda expressions – that allow you to entirely ditch the OO programming model. This results in more concise, readable and maintainable code. It also has a huge impact on unit testing, allowing one to do away with complex mocking frameworks, and write far simpler tests.

Introducing our object oriented example

Let’s look at an example. First I’ll introduce a highly simplified OO example, a simple service that grabs some customer records from a data-store, creates some reports and then emails them. Then I’ll show the same code refactored in a more functional style using delegates and higher-order static methods.

Let’s look at the object-oriented example first:

Well written object-oriented code is compositional. Concrete classes depend on abstractions (interfaces). These interfaces are consumed as dependencies by classes that rely on them and are usually injected as constructor arguments. This is called Dependency Injection. It’s good practice to compose object instances in a single place in the application - the composition root - usually when the application starts up, or on a significant event, such as an HTTP request. The composition can be hand coded or handed off to an IoC container. The constructed graph is then executed by invoking a method on the root object. This often occurs via an application framework (such as MVC or WebApi) rather than being explicitly invoked by user code.

We are going to get some customer records, create some reports and then email them to our customers. So first we need three interfaces: a data access abstraction, a report building abstraction, and an emailing abstraction:
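A minimal sketch of what these might look like, together with the Customer and Report types they reference (the member names here are illustrative; the full code is in the GitHub repository linked at the end of this post):

    public class Customer
    {
        public int Id { get; set; }
        public string Email { get; set; }
    }

    public class Report
    {
        public Customer Customer { get; set; }
        public string Body { get; set; }
    }

    public interface ICustomerData
    {
        IEnumerable<Customer> GetCustomers();
    }

    public interface IReportBuilder
    {
        Report CreateReport(Customer customer);
    }

    public interface IEmailer
    {
        void Send(string address, string body);
    }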

And here are the implementations. This is not a real program of course, I’ve just coded some dummy customers and the emailer simply writes to the console.
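Sketched implementations, building on the shapes above:

    public class CustomerData : ICustomerData
    {
        public IEnumerable<Customer> GetCustomers()
        {
            // Dummy data standing in for a real data-store.
            yield return new Customer { Id = 1, Email = "mike@example.com" };
            yield return new Customer { Id = 2, Email = "leo@example.com" };
        }
    }

    public class ReportBuilder : IReportBuilder
    {
        public Report CreateReport(Customer customer)
        {
            return new Report { Customer = customer, Body = "Report for customer " + customer.Id };
        }
    }

    public class Emailer : IEmailer
    {
        public void Send(string address, string body)
        {
            // The 'emailer' simply writes to the console.
            Console.WriteLine("To: {0}\n{1}", address, body);
        }
    }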

Now we have our service class that depends on the three abstractions and orchestrates the reporting process:
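A sketch of the service:

    public class ReportingService
    {
        private ICustomerData CustomerData { get; set; }
        private IReportBuilder ReportBuilder { get; set; }
        private IEmailer Emailer { get; set; }

        public ReportingService(
            ICustomerData customerData,
            IReportBuilder reportBuilder,
            IEmailer emailer)
        {
            CustomerData = customerData;
            ReportBuilder = reportBuilder;
            Emailer = emailer;
        }

        public void RunCustomerReportBatch()
        {
            foreach (var customer in CustomerData.GetCustomers())
            {
                var report = ReportBuilder.CreateReport(customer);
                Emailer.Send(customer.Email, report.Body);
            }
        }
    }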

As you can see, we inject the dependencies as constructor arguments, store them in class properties, and then invoke methods on them in the RunCustomerReportBatch method. Some people like to store the dependencies in class fields instead; that’s a matter of choice.

Our composition root composes the ReportingService with its dependencies and then returns it for the program to invoke. Don’t forget this is a highly simplified example. Composition is usually far more complex:
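Something like this:

    public static class CompositionRoot
    {
        public static ReportingService Compose()
        {
            // Hand-coded composition; in a real application this might be
            // handed off to an IoC container.
            return new ReportingService(
                new CustomerData(),
                new ReportBuilder(),
                new Emailer());
        }
    }

The program then executes the composed graph by calling CompositionRoot.Compose().RunCustomerReportBatch().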

To write a unit test for the reporting service we would typically use either hand-crafted mocks, or some kind of mocking framework. Here’s an example unit test using XUnit and Moq:
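A sketch of such a test, assuming the shapes above:

    public class ReportingServiceTests
    {
        [Fact]
        public void RunCustomerReportBatchShouldEmailCustomerReport()
        {
            var customer = new Customer { Id = 1, Email = "mike@example.com" };
            var report = new Report { Customer = customer, Body = "Report body" };

            // Stub the dependencies with Moq.
            var customerData = new Mock<ICustomerData>();
            customerData.Setup(d => d.GetCustomers()).Returns(new[] { customer });

            var reportBuilder = new Mock<IReportBuilder>();
            reportBuilder.Setup(b => b.CreateReport(customer)).Returns(report);

            var emailer = new Mock<IEmailer>();

            var sut = new ReportingService(
                customerData.Object, reportBuilder.Object, emailer.Object);

            sut.RunCustomerReportBatch();

            // Verify that the emailer was invoked as expected.
            emailer.Verify(e => e.Send(customer.Email, report.Body));
        }
    }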

We first create mocks for ReportingService’s dependencies with the relevant methods stubbed, which we inject as constructor arguments. We then invoke ReportingService and verify that the emailer was invoked as expected.

So that’s our object-oriented example. It’s typical of much well constructed C# code that you will find in the wild. It’s the way I’ve been building software for many years now with much success.

However, this object-oriented code is verbose. About a third of it is simply OO stuff that we have to write repeatedly and mechanically rather than code that actually solves our problem. This boilerplate includes: the class’s properties (or fields) that hold the dependencies; the assignment of constructor arguments to those properties; the class and constructor definitions themselves. We also need complex mocking frameworks simply to test this code. Surely that’s a smell telling us something is wrong?

Enlightenment

Enlightenment begins when you realise that the dependencies and method arguments can actually just be seen as arguments that are applied at different times in the application’s lifecycle. Consider a class with a single method and a single dependency:
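Something like this (IDependency and its single member are illustrative):

    public interface IDependency
    {
        void DoSomething(string arg);
    }

    public class Thing
    {
        private readonly IDependency dependency;

        public Thing(IDependency dependency)
        {
            this.dependency = dependency;
        }

        public void Do(string arg)
        {
            dependency.DoSomething(arg);
        }
    }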

We could equally represent this as a static method with two arguments:
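Sketched like this:

    public static class ThingFunctions
    {
        // The dependency and the method argument are now just
        // two arguments of a static method.
        public static void DoThing(IDependency dependency, string arg)
        {
            dependency.DoSomething(arg);
        }
    }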

But how do we partially apply these arguments? How do we give ‘DoThing’ its IDependency argument at composition time and its ‘string arg’ at the point where it’s required by the application logic? Simple: we use a closure. Anything taking a dependency on ‘DoThing’ will ask for an Action<string>, because that is the signature of the ‘Do’ method in our ‘Thing’ class. So in our composition root, we ‘close over’ our previously created IDependency instance in a lambda expression with the signature Action<string> that invokes our DoThing static method. Like this:
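A sketch, with ‘Dependency’ standing in for some concrete implementation:

    // In the composition root: close over the dependency instance in a
    // lambda, exposing only an Action<string> to the rest of the application.
    IDependency dependency = new Dependency();
    Action<string> doThing = arg => ThingFunctions.DoThing(dependency, arg);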

So the interface is replaced with the built-in Action<T> delegate, and the closure is effectively doing the job of our ‘Thing’ class, the interface’s implementation, but with far fewer lines of code.

Refactoring to functional

OK. Let’s go back to our example and change it to use this new insight. We don’t need the interface definitions. They are replaced by built in delegate types:

ICustomerData becomes Func<IEnumerable<Customer>>

IEmailer becomes Action<string, string>

IReportBuilder becomes Func<Customer, Report>

The classes are replaced with static methods:
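Sketched here reusing the names from the OO version, now as static classes, and keeping the dummy behaviour from before:

    public static class CustomerData
    {
        public static IEnumerable<Customer> GetCustomers()
        {
            yield return new Customer { Id = 1, Email = "mike@example.com" };
            yield return new Customer { Id = 2, Email = "leo@example.com" };
        }
    }

    public static class ReportBuilder
    {
        public static Report CreateReport(Customer customer)
        {
            return new Report { Customer = customer, Body = "Report for customer " + customer.Id };
        }
    }

    public static class Emailer
    {
        public static void Send(string address, string body)
        {
            Console.WriteLine("To: {0}\n{1}", address, body);
        }
    }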

Our ReportingService is also replaced with a single static method that takes its dependencies as delegate arguments:
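Along these lines:

    public static class ReportingService
    {
        public static void RunCustomerReportBatch(
            Func<IEnumerable<Customer>> getCustomers,
            Func<Customer, Report> createReport,
            Action<string, string> sendEmail)
        {
            foreach (var customer in getCustomers())
            {
                var report = createReport(customer);
                sendEmail(customer.Email, report.Body);
            }
        }
    }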

Composition looks like this:
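A sketch: the dependencies are supplied as method groups (or lambdas) in the composition root, leaving a simple Action for the program to invoke:

    Action runReportBatch = () => ReportingService.RunCustomerReportBatch(
        CustomerData.GetCustomers,
        ReportBuilder.CreateReport,
        Emailer.Send);

    runReportBatch();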

This is functionally equivalent to the object-oriented code above, but it has 57 lines of code as opposed to 95; exactly 60% of the original code.

There’s also a marked simplification of the unit test:
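A sketch of the functional-style test:

    public class ReportingServiceTests
    {
        [Fact]
        public void RunCustomerReportBatchShouldEmailCustomerReport()
        {
            var customer = new Customer { Id = 1, Email = "mike@example.com" };

            // Simple lambda expressions replace the mock objects.
            Func<IEnumerable<Customer>> getCustomers = () => new[] { customer };
            Func<Customer, Report> createReport =
                c => new Report { Customer = c, Body = "Report body" };

            // Expectations are captured in closed-over local variables.
            string sentAddress = null;
            string sentBody = null;
            Action<string, string> sendEmail = (address, body) =>
            {
                sentAddress = address;
                sentBody = body;
            };

            ReportingService.RunCustomerReportBatch(getCustomers, createReport, sendEmail);

            Assert.Equal("mike@example.com", sentAddress);
            Assert.Equal("Report body", sentBody);
        }
    }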

The requirement for a complex mocking framework vanishes. Instead we merely have to set up simple lambda expressions for our stubs. Expectations can be validated with closed over local variables. It’s much easier to read and maintain.

Moving to a functional style of programming is certainly a huge departure from most C# code that you find in the wild and can initially look a little odd to the uninitiated. But it has many benefits, making your code more concise and easier to test and reason about. C# is, surprisingly, a perfectly adequate functional programming language, so don’t despair if for practical reasons you can’t use F#.

The complete code example for this post is on GitHub here: https://github.com/mikehadlow/FunctionalDemo

Friday, June 05, 2015

C#: How to Record What Gets Written to or Read From a Stream

Streams are a very nice abstraction over a read/write loop. We can use them to represent the contents of a file, or a stream of bytes to or from a network socket. They make it easy to read and write large amounts of data without consuming large amounts of memory. Take this little code snippet:
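Something like this, copying one file to another:

    using (var input = File.OpenRead("Example.txt"))
    using (var output = File.OpenWrite("Output.txt"))
    {
        // Reads and writes in buffer-sized chunks.
        input.CopyTo(output);
    }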

Example.txt may be many GB in size, but this operation will only ever use the amount of memory configured for the buffer. As an aside, the .NET framework’s Stream class’s default copy buffer size is the largest multiple of 4096 that is still smaller than the large object heap threshold (85K). This means the buffer is likely to be collected at gen zero by the garbage collector, while still giving good performance.

But what if we want to log or view the contents of Example.txt as it’s copied to the output file? Let me introduce my new invention: InterceptionStream. This is a simple class that inherits from Stream, decorates another stream, and takes an additional output stream. Each time the wrapped stream is read from or written to, the same bytes are written to the additional output stream. You can use it like this:
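A sketch of its use, wrapping the output stream (the constructor here is assumed to take the stream to wrap followed by the stream to copy to):

    using (var input = File.OpenRead("Example.txt"))
    using (var output = File.OpenWrite("Output.txt"))
    using (var log = File.OpenWrite("Log.txt"))
    using (var interceptor = new InterceptionStream(output, log))
    {
        input.CopyTo(interceptor);
    }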

I could just as well have wrapped the input stream with the InterceptionStream for the same result:
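Like this:

    using (var input = File.OpenRead("Example.txt"))
    using (var output = File.OpenWrite("Output.txt"))
    using (var log = File.OpenWrite("Log.txt"))
    using (var interceptor = new InterceptionStream(input, log))
    {
        interceptor.CopyTo(output);
    }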

You can use a MemoryStream if you want to capture the log in memory and assign it to a string variable, but of course this negates the memory advantages of the stream copy since we’re now buffering the entire contents of the stream in memory:
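For example:

    string logContents;
    using (var input = File.OpenRead("Example.txt"))
    using (var output = File.OpenWrite("Output.txt"))
    using (var log = new MemoryStream())
    using (var interceptor = new InterceptionStream(output, log))
    {
        input.CopyTo(interceptor);

        // The whole log is now buffered in memory.
        logContents = Encoding.UTF8.GetString(log.ToArray());
    }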

Here is the InterceptionStream implementation. As you can see it’s very simple. All the work happens in the Read and Write methods:
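A sketch of the implementation; the members other than Read and Write simply delegate to the wrapped stream:

    public class InterceptionStream : Stream
    {
        private readonly Stream innerStream;
        private readonly Stream copyStream;

        public InterceptionStream(Stream innerStream, Stream copyStream)
        {
            this.innerStream = innerStream;
            this.copyStream = copyStream;
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            // Read from the wrapped stream, then write whatever was
            // actually read to the copy stream.
            var bytesRead = innerStream.Read(buffer, offset, count);
            if (bytesRead > 0)
            {
                copyStream.Write(buffer, offset, bytesRead);
            }
            return bytesRead;
        }

        public override void Write(byte[] buffer, int offset, int count)
        {
            innerStream.Write(buffer, offset, count);
            copyStream.Write(buffer, offset, count);
        }

        public override void Flush()
        {
            innerStream.Flush();
        }

        public override long Seek(long offset, SeekOrigin origin)
        {
            return innerStream.Seek(offset, origin);
        }

        public override void SetLength(long value)
        {
            innerStream.SetLength(value);
        }

        public override bool CanRead { get { return innerStream.CanRead; } }
        public override bool CanSeek { get { return innerStream.CanSeek; } }
        public override bool CanWrite { get { return innerStream.CanWrite; } }
        public override long Length { get { return innerStream.Length; } }

        public override long Position
        {
            get { return innerStream.Position; }
            set { innerStream.Position = value; }
        }
    }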

Thursday, May 28, 2015

Inject DateTime.Now to Aid Unit Tests

If you have logic that relies on the current system date, it's often difficult to see how to unit test it. But by injecting a function that returns DateTime.Now we can stub the current date to be anything we want it to be.
Let's look at an example. Here we have a simple service that creates a new user instance and saves it in a database:
    public class UserService : IUserService
    {
        private readonly IUserData userData;

        public UserService(IUserData userData)
        {
            this.userData = userData;
        }

        public void CreateUser(string username)
        {
            var user = new User(username, createdDateTime: DateTime.UtcNow);
            userData.SaveUser(user);
        }
    }
Now if I want to write a unit test that checks that the correct created date is set, I have to rely on the assumption that the system date won't change between the creation of the User instance and the test assertions.
    [TestFixture]
    public class UserServiceTests
    {
        private IUserService sut;
        private IUserData userData;

        [SetUp]
        public void SetUp()
        {
            userData = MockRepository.GenerateStub<IUserData>();
            sut = new UserService(userData);
        }

        [Test]
        public void UserServiceShouldCreateUserWithCorrectCreatedDate()
        {
            User user = null;

            // using Rhino Mocks to grab the User instance passed to the IUserData stub
            userData.Stub(x => x.SaveUser(null)).IgnoreArguments().Callback<User>(x =>
            {
                user = x;
                return true;
            });

            sut.CreateUser("mike");

            Assert.AreEqual(DateTime.UtcNow, user.CreatedDateTime);
        }
    }
But in this case, probably because Rhino Mocks is doing some pretty intensive proxying, a few milliseconds pass between the user being created and my assertions running.
Test 'Mike.Spikes.InjectingDateTime.UserServiceTests.UserServiceShouldCreateUserWithCorrectCreatedDate' failed: 
  Expected: 2015-05-28 09:08:18.824
  But was:  2015-05-28 09:08:18.819
 InjectingDateTime\InjectDateTimeDemo.cs(75,0): at Mike.Spikes.InjectingDateTime.UserServiceTests.UserServiceShouldCreateUserWithCorrectCreatedDate()
The solution is to inject a function that returns a DateTime:
    public class UserService : IUserService
    {
        private readonly IUserData userData;
        private readonly Func<DateTime> now;

        public UserService(IUserData userData, Func<DateTime> now)
        {
            this.userData = userData;
            this.now = now;
        }

        public void CreateUser(string username)
        {
            var user = new User(username, createdDateTime: now());
            userData.SaveUser(user);
        }
    }
Now our unit test can rely on a fixed DateTime value rather than one that is changing as the test runs:
    [TestFixture]
    public class UserServiceTests
    {
        private IUserService sut;
        private IUserData userData;

        // stub the system date as some arbitrary date
        private readonly DateTime now = new DateTime(2015, 5, 28, 10, 46, 33);

        [SetUp]
        public void SetUp()
        {
            userData = MockRepository.GenerateStub<IUserData>();
            sut = new UserService(userData, () => now);
        }

        [Test]
        public void UserServiceShouldCreateUserWithCorrectCreatedDate()
        {
            User user = null;
            userData.Stub(x => x.SaveUser(null)).IgnoreArguments().Callback<User>(x =>
            {
                user = x;
                return true;
            });

            sut.CreateUser("mike");

            Assert.AreEqual(now, user.CreatedDateTime);
        }
    }
And the test passes as expected.
In our composition root we inject the current system time (here as UTC):
    var userService = new UserService(userData, () => DateTime.UtcNow);
This pattern can be especially useful when we want to test business logic that relies on time passing. For example, say we want to check whether an offer has expired; we can write unit tests for the cases where the current (stubbed) time is both before and after the expiry time, just by injecting different values into the system under test. Because we can stub the system time to be anything we want, time-based business logic becomes easy to test, as the sketch below shows.
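A sketch of that kind of test, with an illustrative Offer class:

    public class Offer
    {
        private readonly DateTime expiryDateTime;
        private readonly Func<DateTime> now;

        public Offer(DateTime expiryDateTime, Func<DateTime> now)
        {
            this.expiryDateTime = expiryDateTime;
            this.now = now;
        }

        public bool HasExpired()
        {
            return now() > expiryDateTime;
        }
    }

    [TestFixture]
    public class OfferTests
    {
        private readonly DateTime expiry = new DateTime(2015, 5, 28, 12, 0, 0);

        [Test]
        public void OfferShouldNotBeExpiredBeforeExpiryTime()
        {
            // Stub the current time to just before the expiry time.
            var offer = new Offer(expiry, () => expiry.AddMinutes(-1));
            Assert.IsFalse(offer.HasExpired());
        }

        [Test]
        public void OfferShouldBeExpiredAfterExpiryTime()
        {
            // Stub the current time to just after the expiry time.
            var offer = new Offer(expiry, () => expiry.AddMinutes(1));
            Assert.IsTrue(offer.HasExpired());
        }
    }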