This is the second part of an ongoing series about Inversion of Control. The first was:
What is Inversion of Control?
And I plan to do a few more, digging deeper into the wonderful world of IoC. Hmm, nice.
OK, so now I want to talk about how IoC can help you test your classes/components. As an aside, I'm really not sure whether to say 'class' or 'component' these days. We are essentially talking about individual classes, but by leveraging IoC you can make your classes more component-like. I'll try and expand on this train of thought in one of my future posts. But, getting back to the point, I want to show how IoC is almost essential if you are going to do meaningful unit testing, and how you can use mock objects to completely isolate the class under test.
Do you remember this code from the last post? A little Reporter class that gets some reports from a ReportBuilder and then sends them with a ReportSender:
public class Reporter
{
    public void Send()
    {
        ReportBuilder reportBuilder = new ReportBuilder();
        List<Report> reports = reportBuilder.GetReports();
        ReportSender reportSender = new ReportSender();

        // send by email
        reportSender.Send(reports, ReportSendType.Email);

        // send by SMS
        reportSender.Send(reports, ReportSendType.Sms);
    }
}
Now what if we want to write a unit test for Reporter? How about this:
[Test]
public void ReporterTest()
{
    Reporter reporter = new Reporter();
    reporter.Send();
}
What happens when we run this test? In my little demo application it outputs some stuff to the console. In a real world scenario the ReportBuilder class might get its list of reports from a relational database via a data access layer. The data access layer might need configuration. The ReportSender might rely on an email library and an SMS library. They might also need configuration. The logging might rely on a logging library, also configurable, which might write to the machine's application log. There might be helper functions from other places in our application.
If we run this test, unless we do some very sophisticated work hooking up email and SMS clients and poking into the application log, we are probably going to rely on manually checking our email inbox and our mobile phone's text messages, and visually inspecting the application log, to see if our little Reporter class has done its job. All that work just for a class method five lines long.
We might do this once or twice while we're writing Reporter, but we're unlikely to bother every time we change some other part of the application. But say we change something in the data access layer: who's to say that won't break Reporter? Or what about when some other class wants to use ReportSender in a slightly different way, and the change it requires breaks Reporter? We won't find out that Reporter is broken until some time later in system testing, and that's if we're lucky.
And there's more craziness. When we test Reporter, we're not just testing Reporter, we're testing the data access layer, the relational database, the email library, the SMS library, the configuration, the logging... the list goes on. What if it doesn't work? What do we do? That's right, we fire up the debugger and spend the next half an hour stepping line by line through our application, scratching our heads and wondering why the hell it doesn't work.
All for five lines of code.
Now remember our other Reporter that used Inversion of Control. It took a ReportBuilder, a ReportSender and a ReportSendLogger (yuck, did I really make up that name?) in its constructor.
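As a reminder, the constructor-injected Reporter looks roughly like this. This is just a sketch reconstructed from how the tests construct it (an IReportBuilder plus an IReportSender, with ReportSendLogger supplied as a decorating IReportSender); the exact code is in the previous post:

public class Reporter
{
    private readonly IReportBuilder reportBuilder;
    private readonly IReportSender reportSender;

    // the dependencies are supplied from outside rather than created internally
    public Reporter(IReportBuilder reportBuilder, IReportSender reportSender)
    {
        this.reportBuilder = reportBuilder;
        this.reportSender = reportSender;
    }

    public void Send()
    {
        List<Report> reports = reportBuilder.GetReports();
        foreach (Report report in reports)
        {
            reportSender.Send(report);
        }
    }
}

And here's a test that wires it all up by hand, logging decorator and all: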
[Test]
public void ReporterTestWithLogging()
{
    IReportBuilder reportBuilder = new ReportBuilder();
    ILogger logger = new Logger();

    // send by email
    IReportSender emailReportSender = new ReportSendLogger(new EmailReportSender(), logger);
    Reporter reporter = new Reporter(reportBuilder, emailReportSender);
    reporter.Send();

    // send by SMS
    IReportSender smsReportSender = new ReportSendLogger(new SmsReportSender(), logger);
    reporter = new Reporter(reportBuilder, smsReportSender);
    reporter.Send();
}
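By the way, ReportSendLogger is just a logging decorator: an IReportSender that wraps another IReportSender and logs each send. The real version is in the previous post; here's a rough sketch for illustration (the ILogger.Write method and the Report.Name property are assumed names, yours may well differ):

public class ReportSendLogger : IReportSender
{
    private readonly IReportSender reportSender;
    private readonly ILogger logger;

    public ReportSendLogger(IReportSender reportSender, ILogger logger)
    {
        this.reportSender = reportSender;
        this.logger = logger;
    }

    public void Send(Report report)
    {
        // delegate to the wrapped sender, then log that the send happened
        // (ILogger.Write and Report.Name are assumed for this sketch)
        reportSender.Send(report);
        logger.Write(string.Format("Sent report: {0}", report.Name));
    }
}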
Now, in a real world scenario, the way this test is currently written gives us exactly the same problems as the simpler non-IoC Reporter I was complaining about above. If we run the test we're testing a large chunk of our application, and there's no way for the test itself to know whether it's passed or not; we'd have to check manually.
This is where mock objects come to the rescue. I've written about them before here and here. They are basically replacement instances of our dependencies that can tell us what has happened to them. When I first started to do unit testing I used to write my own, but in the last couple of years I've become a convert to using a mock object library; my favorite now is Rhino Mocks.
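To give a flavour of what 'writing your own' means, a hand-rolled mock of IReportSender might look something like this (purely illustrative, not code from the real application):

public class FakeReportSender : IReportSender
{
    // records every report passed to Send so the test can assert on it afterwards
    public List<Report> SentReports = new List<Report>();

    public void Send(Report report)
    {
        SentReports.Add(report);
    }
}

It works, but you end up writing and maintaining one of these for every interface you want to fake, which is exactly the drudgery a mocking library takes away. Here's the same test as above but using mock objects: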
[Test]
public void ReporterTestWithMocks()
{
    // create mock objects
    IReportBuilder reportBuilder = mocks.CreateMock<IReportBuilder>();
    IReportSender reportSender = mocks.CreateMock<IReportSender>();

    // create the reports
    Report report1 = new Report("Report 1");
    Report report2 = new Report("Report 2");
    List<Report> reports = new List<Report>();
    reports.Add(report1);
    reports.Add(report2);

    // record expectations
    Expect.Call(reportBuilder.GetReports()).Return(reports);
    reportSender.Send(report1);
    reportSender.Send(report2);

    // run the test
    mocks.ReplayAll();
    Reporter reporter = new Reporter(reportBuilder, reportSender);
    reporter.Send();
    mocks.VerifyAll();
}
First we create mock versions of IReportBuilder and IReportSender. 'mocks' is an instance of the Rhino Mocks MockRepository. Then we create some reports for our IReportBuilder to return. The next three lines of code are the really interesting ones: we record what we expect to happen to our reportBuilder and reportSender inside our Reporter instance. When we use our mock objects before the call to mocks.ReplayAll(), the mock object framework simply records what happened. Last of all we create our Reporter and call the Send() method. When we call VerifyAll(), everything that happened to the mock objects after ReplayAll() is compared to what happened before. If the before and after series of events are different, an exception is thrown which causes the test to fail.
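For completeness, 'mocks' would typically be a field on the test fixture, created afresh for each test in the setup method. Something like this, assuming NUnit (the field name is just the convention I'm using in this post):

private MockRepository mocks;

[SetUp]
public void SetUp()
{
    // a fresh repository per test so expectations don't leak between tests
    mocks = new MockRepository();
}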
The really cool thing about this test is that it only tests Reporter. Nothing else. If the test fails the reason will be obvious. The test is entirely automated. We can run it alongside hundreds, even thousands, of other tests quickly enough that there's no overhead to executing all our unit tests frequently; certainly quickly enough to make sure we never check in any code that causes any of the tests to fail.
If you adopt Test Driven Development, you'll notice all kinds of unexpected side effects. One that I didn't expect was that I've almost stopped using the debugger.
Another really interesting side effect is that we don't actually need concrete versions of IReportBuilder or IReportSender to run this test. We can write Reporter before we write our ReportBuilder or ReportSender. This is counter to the usual bottom-up way of writing applications, where you have to have your lower-level code in place before you can write the high-level stuff. It lets you adopt a top-down style: thinking about high-level concerns first and then filling in the details.
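Concretely, all that Reporter and the test above need in order to compile are the interfaces themselves, something along these lines (signatures inferred from the test; the originals may differ slightly):

public interface IReportBuilder
{
    List<Report> GetReports();
}

public interface IReportSender
{
    void Send(Report report);
}

The concrete ReportBuilder, EmailReportSender and SmsReportSender can come along later.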
In my next post on the amazing world of IoC, I'm going to talk about IoC patterns other than the constructor dependency-injection I've shown so far.