Thursday, July 31, 2008

Playing with ADO.NET Data Services

I've recently been asked to architect and build my first major Silverlight-based application. The requirement is for an online accommodation booking service for a large multi-site educational institution. As a fully paid-up member of the ORM mafia I really wanted the same persistence ignorance on the client that I enjoy on the server. The only way I could see to achieve this was to build some kind of generic web service front end on top of my repository model, along with a client-side proxy that behaved in much the same way. It looked like a lot of work.

Enter ADO.NET Data Services. The first thing I'll say about it is that the name is really crap: I can't see how this has anything remotely to do with ADO.NET. Fundamentally it's a combination of a REST-style service over anything that provides an IQueryable implementation, plus client-side proxies that expose an IQueryable API. Here's a diagram:

[Image: ado_net_architecture]

The Data Context can be any class that has a set of properties that return IQueryable<T> and that implements System.Data.Services.IUpdatable and System.Data.Services.IExpandProvider. The IQueryable<T> properties give the entities that are exposed by the REST API, IUpdatable supports (as you'd expect) posting updates back to the Data Context and IExpandProvider supports eagerly loading your object graph. The REST API has lazy load semantics by default, with all entity references being expressed as REST URLs.
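To make that concrete, here's a minimal sketch of a read-only Data Context (the class and entity are my own invention, not part of any shipped sample): the IQueryable<T> properties are all DataService<T> needs for read access; you'd layer IUpdatable and IExpandProvider on top for updates and eager loading.

using System.Linq;

public class BookingDataContext
{
    // in a real application these would come from an ORM, not an in-memory array
    private static readonly Room[] rooms =
    {
        new Room { RoomID = 1, Name = "Room 101" },
        new Room { RoomID = 2, Name = "Room 102" }
    };

    // each IQueryable<T> property becomes an entity set in the REST API
    public IQueryable<Room> Rooms
    {
        get { return rooms.AsQueryable(); }
    }
}

public class Room
{
    public int RoomID { get; set; }
    public string Name { get; set; }
}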

It's very nicely thought out and easy to use. You simply create a new DataService<T>, where T is your Data Context, and the service auto-magically exposes your data as a RESTful web service. I've been playing with the Northwind example. You just create a new Web Application in Visual Studio, add a "LINQ to SQL classes" item to your project and drag all the Northwind tables onto the LINQ-to-SQL designer. Next, add an "ADO.NET Data Service" to the project; then it's just a question of entering the name of the NorthwindDataContext class as the type parameter of the DataService:

[Image: ado_net_DataService]
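In code, that screenshot amounts to something like this (a sketch; the service name and the wide-open access rule are my choices, not necessarily what you'd want in production):

using System.Data.Services;
using System.ServiceModel;

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class NorthwindDataService : DataService<NorthwindDataContext>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        // expose every entity set with full read/write access
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
    }
}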

Note I've also added a ServiceBehavior attribute to show exception details in faults. Now when I hit F5 I get this error:

[Image: ado_net_ds_request_error]

The way ADO.NET Data Services works out an entity's primary key is pretty simplistic: it just looks for properties that end with 'ID', and it gets confused by CustomerCustomerDemos. Shawn Wildermuth explains this issue here. So I'll just remove everything from the designer except the tables with an obvious Id, and now I get this:

[Image: ado_net_toplevel]
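(An alternative I haven't tried here: if I understand Shawn's post correctly, you can declare the key explicitly on the partial class with the DataServiceKey attribute; treat this as an unverified sketch:)

using System.Data.Services.Common;

// explicitly naming CustomerCustomerDemo's composite key
[DataServiceKey("CustomerID", "CustomerTypeID")]
public partial class CustomerCustomerDemo
{
}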

It's easy to explore the model by playing with the URL...
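For example (these URIs assume the service file is called Northwind.svc; the query options are the standard Data Services ones):

/Northwind.svc/Products                              -- all products
/Northwind.svc/Products(1)                           -- product with key 1
/Northwind.svc/Products(1)/Category                  -- navigate to its category
/Northwind.svc/Products?$orderby=ProductName&$top=10 -- query options
/Northwind.svc/Orders(10248)?$expand=Order_Details   -- eager loading via IExpandProvider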

Currently only the much loved Entity Framework and LINQ to SQL provide a Data Context that you can use with this. David Hayden shows how to set up a LINQ to SQL project here. However, Shawn Wildermuth has been doing some great work exposing LINQ to NHibernate as a Data Service friendly Data Context. You can read about it here, here, here and here. His NHibernateContext is now part of the LINQ to NHibernate project. My project is using NHibernate as its ORM, so I'll be posting more about this soon.

You can download my sample using LINQ to SQL and Entity Framework here:

http://static.mikehadlow.com/Mike.DataServices.zip

Tuesday, July 29, 2008

There's usually an easier way

My last post, 'More fun with yield return', described a simple method I'd added to a Contact class to iterate its properties (all strings) and return only those that were not null or empty. It was a nice use of the 'yield return' statement to build a custom iterator. I got a good comment from Ken Egozi, who thought that I was putting UI code in my entity. He's right of course; the entity is not the right place for stuff like this. So I started thinking about creating a UI method to build the list, and then a generic UI method for building a list of any entity's properties. It was getting pretty funky, with LineBuilder<T> and all kinds of crazy functional stuff, but a couple of refactorings later all my code evaporated, leaving just an array initializer and a LINQ Where clause. How about this...

using System;
using System.Linq;

namespace Mike.PropertyRenderer
{
   class Program
   {
       static void Main(string[] args)
       {
           var contact = new Contact
           {
               Firstname = "Mike",
               Lastname = "Hadlow",
               Address1 = "3 Nicely Avenue",
               Country = new Country { Name = "UK" }
           };

            // array initializer + LINQ Where: keep only the non-empty lines
            foreach (var s in new[]{
               contact.Firstname,
               contact.Lastname,
               contact.Address1,
               contact.Address2,
               contact.Address3,
               contact.Town,
               contact.County,
               contact.Postcode,
               contact.Country.Name}
               .Where(line => !string.IsNullOrEmpty(line)))
           {
               Console.WriteLine(s);
           }

           Console.ReadLine();
       }
   }

   public class Contact
   {
       public string Firstname { get; set; }
       public string Lastname { get; set; }
       public string Address1 { get; set; }
       public string Address2 { get; set; }
       public string Address3 { get; set; }
       public string Town { get; set; }
       public string County { get; set; }
       public string Postcode { get; set; }
       public Country Country { get; set; }
   }

   public class Country
   {
       public string Name { get; set; }
   }
}

gives this:

[Image: PropertyRenderer]

I've definitely spent enough time on this now :P

Sunday, July 27, 2008

More fun with 'yield return'

I'm a big fan of custom iterators that were introduced with C# 2.0. I keep on finding new uses for them. Today I was thinking about rendering contact addresses in Suteki Shop. I have a contact that looks like this:

[Image: sutekishopContact]

Only Firstname, Lastname, Address1 and CountryId are required fields; all the others are optional. Previously I was rendering a contact like this:

[Image: sutekishopContactOldRender]

If, say, Address1, Address2 and Town were missing, the contact would render with gaps in it. Also, it's a bore writing out each line like that. With 'yield return' I was able to add a custom iterator, GetAddressLines, to my contact class so that I can simply render out the lines that are available:

[Image: sutekishopContactRender]

Here's the code:

public IEnumerable<string> GetAddressLines()
{
    yield return Fullname;
    yield return Address1;

    if (!string.IsNullOrEmpty(Address2))
    {
        yield return Address2;
    }

    if (!string.IsNullOrEmpty(Address3))
    {
        yield return Address3;
    }

    if (!string.IsNullOrEmpty(Town))
    {
        yield return Town;
    }

    if (!string.IsNullOrEmpty(County))
    {
        yield return County;
    }

    if (!string.IsNullOrEmpty(Postcode))
    {
        yield return Postcode;
    }

    yield return Country.Name;
}
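Rendering then collapses to a single loop; a minimal sketch (in Suteki Shop this would live in the view rather than a console app):

foreach (var line in contact.GetAddressLines())
{
    Console.WriteLine(line);
}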

Monday, July 21, 2008

MSTest is sapping my will to live

The team I'm currently working with has spent a lot of money buying Microsoft's Team Suite, so naturally we wanted to use MSTest (AKA Team System Unit Testing). In theory using MSTest allows you to integrate tests seamlessly into your build process, generate coverage reports and so on. Unfortunately the grim reality is somewhat different.

Unit test tools have been around for several years. I've been using NUnit on a daily basis for around four years. I've never really paid much attention to it because it's one of those tools that just works; you write your tests, attribute them as [Test] and they either pass or fail. I keep hearing good things about MbUnit, but I haven't tried it yet because I feel no pain with NUnit and its constant companion, the excellent TestDriven.net. In simple terms, this is how NUnit/TestDriven.net works:

  1. I attribute any class with [TestFixture] and any parameterless void method with [Test], and use Asserts to check my expectations.
  2. I have F8 mapped to run tests, so I hit F8 and TestDriven.net points NUnit at the test assembly in the bin/Debug folder (or wherever the project's output path points to).
  3. NUnit loads the assembly, executes any method attributed with [Test] and writes the results to the output window.

And that's it. If my tests need any attendant configuration or need to load any assemblies dynamically I just make sure they are copied to the output path.
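For anyone who hasn't seen it, a complete NUnit test really is this minimal (the Contact class and its Fullname property are just for illustration):

using NUnit.Framework;

[TestFixture]
public class ContactTests
{
    [Test]
    public void Fullname_should_combine_first_and_last_names()
    {
        var contact = new Contact { Firstname = "Mike", Lastname = "Hadlow" };
        Assert.AreEqual("Mike Hadlow", contact.Fullname);
    }
}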

As I said before NUnit has been around for a while and MbUnit does the same stuff but with some extra bells and whistles. Any unit testing framework should work in a similar way and should be as simple to use as possible.

Enter MSTest. Now you would have thought that with the great success of NUnit to copy, even Microsoft couldn't make too great a hash of it. But they have. Two major faults make it an unsuitable tool for running unit tests:

  1. The test runner does not simply reflect over an assembly and run any method attributed with [TestMethod]; instead you have to create a special test project, and your tests have to be listed in an XML file for the test runner to find them. Why? Why should we need a special test project? There's no conceivable reason I can see. It means you have to keep your tests in a separate assembly from the code under test. Now it's my practice to do this, but you shouldn't be forced to do it. Worse, it's just too easy for the XML file to get out of sync with the tests, so that your tests mysteriously don't get executed. That's not to mention the source control nightmare it causes.
  2. The test runner runs the test assemblies in some other place that's different each time, so you can't rely on an assembly manifest to tell you everything that some code might need to run. I'm a big fan of IoC containers, which means that much of my application's object graph gets loaded at runtime. I get frequent test failures because my tests can't find assemblies they need to load. Configuration is a similar issue: miscellaneous files don't get copied to the test run location. Now I know that you can add attributes to every single test, or add entries to the test configuration file, to get the test runner to copy certain files to its run directory (see the sketch below), but why should I have to? Also, it doesn't clean up after itself, and since I run tests every minute of every day it builds up a not insignificant amount of crud on my disk.
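For the record, the attribute in question is MSTest's [DeploymentItem]; here's a sketch of the kind of thing you end up repeating on every test class (the file names are hypothetical):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[DeploymentItem("Windsor.config")]     // copy the container config to the run directory
[DeploymentItem("MyApp.Services.dll")] // ...and an assembly that's only loaded at runtime
public class OrderServiceTests
{
    [TestMethod]
    public void Can_resolve_order_service()
    {
        // test body elided
    }
}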

I'm just going to mention in passing that the MSTest Asserts are pre-historic and look a bit like NUnit version-a-long-time-ago, that the default unit test project is full of unnecessary files, and that the unit test template generates a page of crud that you have to delete before you can start work. That's not so important, and I could live with it if the core design wasn't so FUBAR.

MSTest looks like a tool that was designed by someone who'd never really done much unit testing. They'd certainly not worked with NUnit to any degree. I really can't understand the thought process that resulted in such bad design decisions, but it looks like it's designed to be run once a week with a great deal of ceremony. In other words, MS have totally misunderstood TDD.

So now I'm lobbying hard to get us to drop MSTest and use NUnit or MbUnit instead. In fact, my experience of the whole Team System is that it's not worth spending the money; you get far better results and much less stress from a combination of open source and third party tools. But that's another post...

Friday, July 18, 2008

Extension Method + LINQ + Interface = Reusable Queries

One of the requirements of Suteki Shop is that admin users should be able to create various new entities (categories, products, content etc) before they are published on the public web site. To this end I created a simple interface, IActivatable:

public interface IActivatable
{
   bool IsActive { get; set; }
}

As you can see, it's got a single property, IsActive. Any entity that needs to be active/inactive implements it. So my Product class's partial class definition looks like this:

public partial class Product : IOrderable, IActivatable, IUrlNamed
{
 ....
}

I'll explain IOrderable and IUrlNamed later, but they do similar things. Each entity has a matching table in the database, and each entity that implements IActivatable also has a bit column, IsActive. Here are Category and Product:

[Image: Product_Category_DB]

When I drag these tables onto the LINQ-to-SQL design surface, Product and Category classes are created with IsActive properties. I can make their matching partial classes implement IActivatable, and the designer-generated IsActive property satisfies the implementation requirement.
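So the partial class declarations are effectively empty; they just add the interfaces. Category, as a sketch, looks like this:

public partial class Category : IActivatable
{
    // nothing needed here: the designer-generated IsActive
    // property already satisfies IActivatable
}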

Now I can write a reusable LINQ query extension method for any IActivatable to filter only active items:

public static IQueryable<T> Active<T>(this IQueryable<T> items) where T : IActivatable
{
   return items.Where(item => item.IsActive);
}

Now every time I need a list of active items I can just chain the query with Active():

var products = productRepository.GetAll().Active();
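Because Active() both takes and returns IQueryable<T>, it composes with other query operators and the filter is translated into the final SQL rather than being applied in memory. A hypothetical chained query (the ProductId property name is assumed):

var latestActiveProducts = productRepository.GetAll()
    .Active()
    .OrderByDescending(p => p.ProductId)
    .Take(10);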

Remember the IOrderable and IUrlNamed interfaces that the Product partial class implements? They work in exactly the same way. I explained IOrderable in a previous post; it's used to create a generic service to order and re-order lists of entities. IUrlNamed provides a generic way of querying entities by a unique string from the URL. For example, Suteki Shop has nice URLs for products: http://sutekishop.co.uk/product/East_End_Trilby. So there's an extension method:

public static T WithUrlName<T>(this IQueryable<T> items, string urlName) where T : IUrlNamed
{
   T item = items
       .SingleOrDefault(i => i.UrlName.ToLower() == urlName.ToLower());

    // With() is a string.Format style extension method
    if (item == null) throw new ApplicationException("Unknown UrlName '{0}'".With(urlName));
   return item;
}

That allows us to look up products like this:

public ActionResult Item(string urlName)
{
   Product product = productRepository.GetAll().WithUrlName(urlName);
   return View("Item", ShopView.Data.WithProduct(product));
}
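For completeness, IUrlNamed itself isn't shown above, but given the query it can only be something like this sketch:

public interface IUrlNamed
{
    string UrlName { get; set; }
}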

Friday, July 11, 2008

ALT.NET UK (un)conference

Yeah! I'm the first person to sign up for this year's ALT.NET UK (un)conference; I even got a congratulations email from Ben (thanks!). After I missed last year's sign-up, I thought I'd better get up early :P We should give a big thank you to Ben, Alan and Ian for organizing it. See you there dear reader!

Friday, July 04, 2008

Why isn't Microsoft ALT.NET?

My last post, 'The ALT.NET virtuous network', has a comment from Ken Egozi: "imo this is all stuff that Mainstream.NET people should do anyway ... I mean - High cohesion/Low coupling? that's software engineering 101". I started writing a reply in the comments, but it soon became a full scale rant, so I've promoted it to a post.

Of course, Ken is absolutely right: why should all this be ALT.NET? Well, by ALT.NET I guess we mean doing things not necessarily as recommended by MSDN, and using non-Microsoft tools to do it. If you're doing Domain Driven Design with MonoRail and NHibernate and using Rhino Mocks in your tests, then you are ALT.NET. It's recognition that while the core .NET framework is a powerful beast, easily the equal of Java for example, much of what Microsoft has built on top of it is poorly conceived.

So why does Microsoft not encourage good development practice? Partly it's a symptom of being a monopoly: they commit early and poorly to particular technologies. But partly it's because they are trying to satisfy developers with a very wide range of skills. I'm an itinerant consultant and have worked in a wide range of development shops, many of them in blue chip companies with household names. I've found that a majority of developers don't know "software engineering 101". Microsoft's market research tells them that creating developer tools that assume that knowledge would just confuse many of their customers. It's well known that the move from classic ASP to ASP.NET was not a popular one with many developers; I know at least two who are still using classic ASP after a frustrating and fruitless attempt at learning ASP.NET.

Sometimes Microsoft seems to be listening to the ALT.NET crowd, with developments like the MVC Framework. But then Entity Framework comes along, which seems to suggest the opposite.

So here's my suggestion for Microsoft: I would like to see them recognize that they have at least two (probably more) very distinct developer markets, and clearly mark different tools and guidance for each. One group is what we describe as ALT.NET: software developers who clearly understand the core principles of software engineering. Microsoft should continue as they have been doing with the MVC Framework for this group: enter into dialogue about the development of the tools and encourage integration with third party and open source frameworks. Another group could be described as the 'we'd probably defect to PHP' group, or 'Morts' in Microsoft parlance. These are developers who don't fully grok software engineering 101. It should be clear that the advice to this group is based on that presumption. The core point here is to avoid the situation that many in the ALT.NET group often find themselves in: being blocked from pursuing good software development principles, or from using the best tools, because the MSDN documentation is aimed at the other group.

A majority of developers never look further than the MSDN home page, and much of the guidance there doesn't encourage basic principles like high cohesion/low coupling. Just look at any of the official Microsoft training courses: none of them encourage the trainee to learn core programming skills, or even make them aware that such skills exist. If Microsoft made it clear in their documentation that there is a better way of doing things, but that it requires some study to understand, I think it would raise the game across the whole developer spectrum.

Wednesday, July 02, 2008

The ALT.NET virtuous network

This is an attempt at visualising the virtuous relationship between various ALT.NET development practices. Any comments?

Tuesday, July 01, 2008

Eagerly Loading Northwind: playing with DataLoadOptions

An email today from Hrvoje Hudoletnjak set me off experimenting with LINQ-to-SQL DataLoadOptions. By default LINQ-to-SQL lazy loads related entities. I've mapped a chunk of the Northwind database with the LINQ-to-SQL designer. If we do some simple data access like this:
var dataContext = new NorthwindDataContext();

var theOrder = dataContext.GetTable<Order>().Single(order => order.OrderID == 10248);

Console.WriteLine("Order Date: {0}\r\nShipper: {1}\r\nCustomer Name: {2}\r\n", 
    theOrder.OrderDate, 
    theOrder.Shipper.CompanyName,
    theOrder.Customer.ContactName);

Console.WriteLine("Customer Demographic:");
foreach (var customerCustomerDemo in theOrder.Customer.CustomerCustomerDemos)
{
    Console.Write("{0}, ", customerCustomerDemo.CustomerDemographic.CustomerDesc);
}
Console.WriteLine("\r\n");

foreach (var orderDetail in theOrder.Order_Details)
{
    Console.WriteLine("Product Name: {0},\r\nSuppler Name: {1},\r\nCategory {2},\r\nQuantity {3}\r\n\r\n",
        orderDetail.Product.ProductName,
        orderDetail.Product.Supplier.CompanyName,
        orderDetail.Product.Category.CategoryName,
        orderDetail.Quantity);
}
This gives the expected result. But when we use SQL Profiler to see what SQL gets thrown at the database, we find 17 separate select statements: one for the initial order, plus one for each lazy load as it fires. If we then use DataLoadOptions to eagerly load our order like this:
using System;
using System.Data.Linq;
using System.Linq;

namespace Mike.Northwind
{
    class Program
    {
        static void Main(string[] args)
        {
            var dataContext = new NorthwindDataContext();

            var options = new DataLoadOptions();

            options.LoadWith<Order>(order => order.Shipper);
            options.LoadWith<Order>(order => order.Customer);
            options.LoadWith<Customer>(customer => customer.CustomerCustomerDemos);
            options.LoadWith<CustomerCustomerDemo>(ccd => ccd.CustomerDemographic);

            options.LoadWith<Order>(order => order.Order_Details);
            options.LoadWith<Order_Detail>(orderDetail => orderDetail.Product);
            options.LoadWith<Product>(product => product.Supplier);
            options.LoadWith<Product>(product => product.Category);

            dataContext.LoadOptions = options;

            var theOrder = dataContext.GetTable<Order>().Single(order => order.OrderID == 10248);

            Console.WriteLine("Order Date: {0}\r\nShipper: {1}\r\nCustomer Name: {2}\r\n", 
                theOrder.OrderDate, 
                theOrder.Shipper.CompanyName,
                theOrder.Customer.ContactName);

            Console.WriteLine("Customer Demographic:");
            foreach (var customerCustomerDemo in theOrder.Customer.CustomerCustomerDemos)
            {
                Console.Write("{0}, ", customerCustomerDemo.CustomerDemographic.CustomerDesc);
            }
            Console.WriteLine("\r\n");

            foreach (var orderDetail in theOrder.Order_Details)
            {
                Console.WriteLine("Product Name: {0},\r\nSuppler Name: {1},\r\nCategory {2},\r\nQuantity {3}\r\n\r\n",
                    orderDetail.Product.ProductName,
                    orderDetail.Product.Supplier.CompanyName,
                    orderDetail.Product.Category.CategoryName,
                    orderDetail.Quantity);
            }
        }
    }
}
Now we only get two hits. The first is one large select that fetches pretty much everything; the second gets CustomerDemographic. I wondered why CustomerDemographic has to be fetched separately, and I guess it's either that it's at the other end of a many-to-many relationship with a join table, or that it's a second collection in addition to the order details and it only makes sense to eagerly load one collection at a time. What is plain is that you can dramatically reduce the round trips to the database by judicious use of DataLoadOptions.

The source for this is here: http://static.mikehadlow.com.s3.amazonaws.com/Mike.Northwind.zip