Monday, December 15, 2008

Open Source .NET Exchange

Gojko Adzic and Skills Matter are organising an open source .NET exchange 'mini-conference' for January 22nd. The format is an evening of short talks covering some cool stuff currently emerging in the .NET community, but not yet mainstream.

I'm lucky enough to have been asked to contribute. I'll be talking about the rise of the repository pattern. My plan is to scour the open source world and hunt down some prime examples of wild repository to serve up to the attendees.

The rest of the evening looks even better...

Dylan Beattie talking about jQuery.

David Ross talking about Postsharp.

Sebastien Lambla on Fluent NHibernate.

David de Florinier on Active MQ and NMS.

Russ Miles on Spring.NET

See you there!

Monday, December 08, 2008

OpenRasta

OpenRasta is a REST-based framework for building web applications. The source has just been released here. RASTA stands for REST Architecture Solution Targeting Asp.net (phew). The man behind OpenRasta is Sebastien Lambla, who's well known for organising the popular Alt.net beers in London and speaking at community events. Here's Seb's description of it:

"Rasta takes a radically different approach. Every Url is mapped to a resource. In other words, http://example.org/Customer is mapped to a resource of type CustomerEntity. Each resource type can have many resource handlers that are responsible for acting upon a resource in a certain way. This could mean retrieving a resource, updating it, etc.

When you access a Url to get to a resource (called dereferencing), the first step Rasta is going to take is to try and locate which handler can be used to access the resource, based on the request you made. For example, http://example.org/Customers/{name}  is a Url that would associate a meaning of name with a value to the handler. From that name, the handler would return an instance of the resource (an instance of CustomerEntity). In other words, a handler is responsible for dereferencing a Uri based on a request to get to a resource, and to act upon it if required.

And that's it. The handler is not involved at any point in how the request is processed or how the response is sent to the client. This is the responsibility of the codecs.

A codec is a component that, to simplify, convert a Content-Type sent on the wire to an object and back. In essence, if your handler returns an instance of CustomerEntity, the codec will be able to convert it into an xml stream representation, or whatever wire format the codec supports. Out of the box, the current Rasta svn repository has support for webforms (for html rendering), json and xml, and I have some working code supporting AtomPub, atom, rss and even support for some WebDav, although all this is mostly prototype quality.

In this model, the resource is the resource, and its representations are handled by codecs that are loosely coupled from the resource itself. You could see it as an application of the semantic web, but based on types and objects. In other words, it works and is good enough."
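To make Seb's description concrete, a handler for a customer resource might look something like this. This is a hypothetical sketch only: the class shape, method convention and repository dependency are my assumptions, not OpenRasta's actual API.

```csharp
// Hypothetical sketch illustrating the handler idea described above.
// Names and conventions here are assumptions, not OpenRasta's real API.
public class CustomerHandler
{
    private readonly ICustomerRepository repository; // assumed dependency

    public CustomerHandler(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    // Dereferences http://example.org/Customers/{name} to a resource instance.
    // Note the handler only returns the resource; the framework picks a codec
    // (html, xml, json...) to render the CustomerEntity onto the wire.
    public CustomerEntity Get(string name)
    {
        return repository.GetByName(name);
    }
}
```

The point of the sketch is the separation Seb describes: the handler knows nothing about Content-Types or response formatting.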

After hearing Seb speak about it and chatting with him over a couple of beers, it does appear to be a very compelling way of thinking about web applications: a single alternative to the plethora of frameworks (MVC Framework, WCF, ADO.NET Data Services) coming out of Microsoft. He has promised an OpenRasta version of Suteki Shop which I'm really looking forward to seeing. No pressure there Seb :)

Friday, November 28, 2008

Multi-tenancy part 2: Components and Context

In my previous post, Multi-tenancy part 1: Strategy, I talked about some of the high level decisions we have to make when building a single software product for multiple users with diverse requirements. Today I'm going to look at implementing basic multi-tenancy with Suteki Shop. I'm going to assume that my customers have identical functional requirements, but will obviously need to have separate styles and databases. Some other simple configurable items will also be different, such as the name of their company and their contact email address.

But first I want to suggest something that's become quite clear as I've started to think more about implementing multi-tenancy. I call it:

Hadlow's first law of multi-tenancy: A multi-tenanted application should not look like a multi-tenanted application.

What do I mean by this? The idea is that you should not have to change your existing single-tenant application in any way in order to have it serve multiple clients. If you build your application using SOLID principles, if you architect it as a collection of components using Dependency Injection and an IoC container, then you should be able to compose the components at runtime based on some kind of user context without changing the components themselves.

I am going to get Suteki Shop to serve two different tenants without changing a single existing line of (component) code.

We are going to be serving two clients. The first one is our existing client, the mod-tastic Jump the Gun. I've invented the new client zanywear.com. I just made the name up; it's actually a registered domain name, but it's not being used. We're going to serve our clients from the same application instance, so we create a new web site and point it to an instance of Suteki Shop. Now we configure two host headers (AKA Site Bindings) for the web application:

test.jumpthegun.co.uk
zanywear.com

multitenanted-iis

For testing purposes (and because I don't own zanywear.com :) I've added the two domains to my C:\WINDOWS\system32\drivers\etc\hosts file so that it looks like this:

# Copyright (c) 1993-1999 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host
127.0.0.1     localhost
127.0.0.1     test.jumpthegun.co.uk
127.0.0.1     zanywear.com

Now when I browse to test.jumpthegun.co.uk or zanywear.com I see the Jump the Gun website.

The task now is to choose a different database, style-sheet and some basic configuration settings when the HTTP request's host name is zanywear.com. Conveniently, Suteki Shop has two services that define these items. The first is IConnectionStringProvider which provides (you guessed it) the database connection string:

namespace Suteki.Common.Repositories
{
    public interface IConnectionStringProvider
    {
        string ConnectionString { get; }
    }
}

And the other is the somewhat badly named IBaseControllerService that supplies some repositories and config values to be consumed by the master view:

using Suteki.Common.Repositories;
namespace Suteki.Shop.Services
{
    public interface IBaseControllerService
    {
        IRepository<Category> CategoryRepository { get; }
        IRepository<Content> ContentRepository { get; }
        string GoogleTrackingCode { get; set; }
        string ShopName { get; set; }
        string EmailAddress { get; set; }
        string SiteUrl { get; }
        string MetaDescription { get; set; }
        string Copyright { get; set; }
        string PhoneNumber { get; set; }
        string SiteCss { get; set; }
    }
}

Note that this allows us to set the name of the style-sheet and some basic information about the shop.

In order to choose which component is used to satisfy a service at runtime we use an IHandlerSelector. This interface was recently introduced to the Windsor IoC container by Oren Eini (AKA Ayende Rahien) specifically to satisfy the requirements of multi-tenanted applications. You need to compile the trunk if you want to use it; it's not in the release candidate. It looks like this:

using System;
namespace Castle.MicroKernel
{
    /// <summary>
    /// Implementors of this interface can extend the way the container performs
    /// component resolution based on some application-specific business logic.
    /// </summary>
    /// <remarks>
    /// This is the sibling interface to <seealso cref="ISubDependencyResolver"/>.
    /// This is dealing strictly with root components, while the <seealso cref="ISubDependencyResolver"/> is dealing with
    /// dependent components.
    /// </remarks>
    public interface IHandlerSelector
    {
        /// <summary>
        /// Whether the selector has an opinion about resolving a component with the
        /// specified service and key.
        /// </summary>
        /// <param name="key">The service key - can be null</param>
        /// <param name="service">The service interface that we want to resolve</param>
        bool HasOpinionAbout(string key, Type service);
        /// <summary>
        /// Select the appropriate handler from the list of defined handlers.
        /// The returned handler should be a member from the <paramref name="handlers"/> array.
        /// </summary>
        /// <param name="key">The service key - can be null</param>
        /// <param name="service">The service interface that we want to resolve</param>
        /// <param name="handlers">The defined handlers</param>
        /// <returns>The selected handler, or null</returns>
        IHandler SelectHandler(string key, Type service, IHandler[] handlers);
    }
}

The comments are self-explanatory. I've implemented the interface as a HostBasedComponentSelector that can choose components based on the HTTP request's SERVER_NAME value:

using System;
using System.Linq;
using System.Web;
using Castle.MicroKernel;
using Suteki.Common.Extensions;
namespace Suteki.Common.Windsor
{
    public class HostBasedComponentSelector : IHandlerSelector
    {
        private readonly Type[] selectableTypes;
        public HostBasedComponentSelector(params Type[] selectableTypes)
        {
            this.selectableTypes = selectableTypes;
        }
        public bool HasOpinionAbout(string key, Type service)
        {
            foreach (var type in selectableTypes)
            {
                if(service == type) return true;
            }
            return false;
        }
        public IHandler SelectHandler(string key, Type service, IHandler[] handlers)
        {
            var id = string.Format("{0}:{1}", service.Name, GetHostname());
            var selectedHandler = handlers.Where(h => h.ComponentModel.Name == id).FirstOrDefault() ??
                                  GetDefaultHandler(service, handlers);
            return selectedHandler;
        }
        private IHandler GetDefaultHandler(Type service, IHandler[] handlers)
        {
            if (handlers.Length == 0)
            {
                throw new ApplicationException("No components registered for service {0}".With(service.Name));
            }
            return handlers[0];
        }
        protected string GetHostname()
        {
            return HttpContext.Current.Request.ServerVariables["SERVER_NAME"];
        }
    }
}

It works like this: it expects an array of types to be supplied as constructor arguments. These are the service types that we want to choose based on the host name. The HasOpinionAbout method simply checks the supplied service type against the array of types and returns true if there are any matches. If we have an opinion about a service type, the container will ask the IHandlerSelector to supply a handler by calling the SelectHandler method. We create an id by concatenating the service name with the host name and then return the component that's configured with that id. So the configuration for Jump the Gun's IConnectionStringProvider will look like this:

<component
  id="IConnectionStringProvider:test.jumpthegun.co.uk"
  service="Suteki.Common.Repositories.IConnectionStringProvider, Suteki.Common"
  type="Suteki.Common.Repositories.ConnectionStringProvider, Suteki.Common"
  lifestyle="transient">
  <parameters>
    <ConnectionString>Data Source=.\SQLEXPRESS;Initial Catalog=JumpTheGun;Integrated Security=True</ConnectionString>
  </parameters>
</component>

Note the id is <name of service>:<host name>.

The configuration for Zanywear looks like this:

<component
  id="IConnectionStringProvider:zanywear.com"
  service="Suteki.Common.Repositories.IConnectionStringProvider, Suteki.Common"
  type="Suteki.Common.Repositories.ConnectionStringProvider, Suteki.Common"
  lifestyle="transient">
  <parameters>
    <ConnectionString>Data Source=.\SQLEXPRESS;Initial Catalog=Zanywear;Integrated Security=True</ConnectionString>
  </parameters>
</component>

Note that you can have multiple configurations for the same service/component in Windsor so long as the ids are different.

When the host name is test.jumpthegun.co.uk the HostBasedComponentSelector will create a new instance of ConnectionStringProvider with a connection string that points to the JumpTheGun database. When the host name is zanywear.com it will create a new instance of ConnectionStringProvider with a connection string that points to the Zanywear database. We configure our IBaseControllerService in a similar way.
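From the consumer's point of view nothing changes: a component that depends on IConnectionStringProvider is written exactly as before, and the selector does its work invisibly at resolution time. A sketch (the repository class here is illustrative, not Suteki Shop's actual code):

```csharp
// Illustrative only: a consuming component stays completely tenant-ignorant.
public class OrderRepository
{
    private readonly IConnectionStringProvider connectionStringProvider;

    public OrderRepository(IConnectionStringProvider connectionStringProvider)
    {
        // Which ConnectionStringProvider arrives here is decided per request
        // by the HostBasedComponentSelector, based on SERVER_NAME.
        this.connectionStringProvider = connectionStringProvider;
    }

    public string CurrentConnectionString
    {
        get { return connectionStringProvider.ConnectionString; }
    }
}
```

This is the first law in action: the component has no idea it is living in a multi-tenanted application.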

The only thing left to do is register our IHandlerSelector with the container. When I said I didn't have to change a single line of code I was telling a fib: we do have to change the Windsor initialisation to include this:

protected virtual void InitializeWindsor()
{
    if (container == null)
    {
        // create a new Windsor Container
        container = new WindsorContainer(new XmlInterpreter("Configuration\\Windsor.config"));
        // register handler selectors
        RegisterHandlerSelectors(container);
        // register WCF integration
        RegisterWCFIntegration(container);
        // automatically register controllers
        container.Register(AllTypes
            .Of<Controller>()
            .FromAssembly(Assembly.GetExecutingAssembly())
            .Configure(c => c.LifeStyle.Transient.Named(c.Implementation.Name.ToLower())));
        // set the controller factory to the Windsor controller factory (in MVC Contrib)
        System.Web.Mvc.ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(container));
    }
}
/// <summary>
/// Get any configured IHandlerSelectors and register them.
/// </summary>
/// <param name="windsorContainer"></param>
protected virtual void RegisterHandlerSelectors(IWindsorContainer windsorContainer)
{
    var handlerSelectors = windsorContainer.ResolveAll<IHandlerSelector>();
    foreach (var handlerSelector in handlerSelectors)
    {
        windsorContainer.Kernel.AddHandlerSelector(handlerSelector);
    }
}

The handler selector setup occurs in the RegisterHandlerSelectors method. We simply ask the container to resolve any configured IHandlerSelectors and add them in. The configuration for our HostBasedComponentSelector looks like this:

<component
  id="urlbased.handlerselector"
  service="Castle.MicroKernel.IHandlerSelector, Castle.MicroKernel"
  type="Suteki.Common.Windsor.HostBasedComponentSelector, Suteki.Common"
  lifestyle="transient">
  <parameters>
    <selectableTypes>
      <array>
        <item>Suteki.Shop.Services.IBaseControllerService, Suteki.Shop</item>
        <item>Suteki.Common.Repositories.IConnectionStringProvider, Suteki.Common</item>
      </array>
    </selectableTypes>
  </parameters>
</component>

Note that we are configuring the list of services that we want to be selected by the HostBasedComponentSelector by using the array parameter configuration syntax.

And that's it. We now have a single instance of Suteki Shop serving two different clients: Jump the Gun and Zanywear.

multitenanted-websites

Today I've demonstrated the simplest case of multi-tenanting. It hardly qualifies as such because our two tenants both have identical requirements. The core message here is that we didn't need to change a single line of code in any of our existing components. You can still install Suteki Shop and run it as a single-tenant application by default.

In the next installment I want to show how we can provide different implementations of components using this approach. Later I'll deal with the more complex problem of variable domain models. Watch this space!

Tuesday, November 25, 2008

Windsor WCF Integration

I've been playing with the Windsor WCF Integration facility today. I've been using a recent trunk build of the Castle project, and since there's been a lot of work recently on the WCF facility, the docs on the Castle site are somewhat out of date. I had to work out how to use the facility by reading the unit tests and a post by Craig Neuwirt to the Castle Project Development list.

So why would you want to integrate WCF with your IoC container? When you use an IoC container in your application it should be the repository of any service that you consume. The service consumer should not care about how the service is implemented or where it comes from. A service should also not be concerned about how it is consumed. The Windsor WCF facility allows you to use your WCF services and proxies as if they were any other service provided by the container.

Let's think about how this might help us when writing enterprise applications. My eCommerce platform, Suteki Shop, sends the customer an email confirming that they've made an order. It sends them another email when the order is dispatched. I have an interface IEmailSender that defines the contract for sending an email. Currently my Windsor configuration maps the IEmailSender service to a class called EmailSender. The OrderController expects to be given an instance of IEmailSender in its constructor, so it gets an actual instance of EmailSender that wraps the .NET API for sending emails. When the customer clicks the 'order' button the request blocks while an email is sent.

This isn't a very scalable solution, but I'm not worried about it now because the only client I have for Suteki Shop is low volume. If their shop takes off and starts to get a lot more traffic, I will want a more scalable architecture for sending emails. With the WCF integration I could configure IEmailSender to provide a WCF proxy instead of a concrete class. I could then install EmailSender (the IEmailSender implementation) on another server and configure the local container to expose it as a WCF service. The protocol I use can be configured using WCF, so I could use TCP, SOAP, MSMQ, or whatever fitted the purpose. The key point is that by using the WCF facility I've been able to take a part of my application and move it to a new process or machine without touching a single line of code.
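As a sketch of what that migration might involve (the interface shape here is illustrative; check the Suteki Shop source for the real IEmailSender), the only code change is decorating the contract. The OrderController, which just takes an IEmailSender in its constructor, is untouched:

```csharp
using System.ServiceModel;

// Illustrative sketch: the real IEmailSender in Suteki Shop may differ.
// Adding the WCF attributes lets the same contract be served remotely;
// consumers that take an IEmailSender in their constructor don't change.
[ServiceContract]
public interface IEmailSender
{
    [OperationContract]
    void Send(string to, string subject, string body);
}
```

Whether the container hands the controller an in-process EmailSender or a WCF proxy then becomes purely a configuration decision.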

So how do you use the WCF facility?

Service

Let's assume we have a simple service interface:

[ServiceContract(Namespace = "Mike.WindsorWCFIntegration")]
public interface ICustomerService
{
    [OperationContract]
    Customer GetCustomer(int id);        
}
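Since Customer travels across the service boundary, WCF also needs to know how to serialise it. A minimal hypothetical sketch (the real entity's properties will differ):

```csharp
using System.Runtime.Serialization;

// Hypothetical sketch of the Customer entity; the real type may differ.
// DataContract/DataMember tell WCF which members to serialise on the wire.
[DataContract]
public class Customer
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}
```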

Customer is just some entity in my application. In order to serve this via WCF we have to decorate the service interface with the WCF attributes ServiceContract and OperationContract. We don't have to do anything to the existing Windsor.config file; it looks the same as before:

<component id="customerService"
   service="Mike.WindsorWCFIntegration.ICustomerService, Mike.WindsorWCFIntegration"
   type="Mike.WindsorWCFIntegration.DefaultCustomerService, Mike.WindsorWCFIntegration">
</component>

We're telling the container that when a component asks for an ICustomerService (or 'customerService' by name) it will be given DefaultCustomerService. Since I'm going to host the service as part of a web application, we can use the WCF IIS integration and create a CustomerService.svc file for the customer service that defines the service host:

<%@ ServiceHost Language="C#" Service="customerService" 
Factory="Castle.Facilities.WcfIntegration.DefaultServiceHostFactory, Castle.Facilities.WcfIntegration"  %>

Note that the Service attribute specifies the same name as the component id in the Windsor.config. The other important point is that we're asking WCF to use the Castle DefaultServiceHostFactory. This service host factory wraps the Windsor container, which it uses to resolve any service requests.

We configure WCF as normal:

<system.serviceModel>
  <services>
    <service name="customerService">
      <endpoint contract="Mike.WindsorWCFIntegration.ICustomerService" binding="basicHttpBinding" />
    </service>
  </services>
</system.serviceModel>

Finally we have to register the WCF Facility with the container on application start:

protected void Application_Start(object sender, EventArgs e)
{
    Container = new WindsorContainer()
        .AddFacility<WcfFacility>()
        .Install(Configuration.FromXmlFile("Windsor.config"));
}

The nice thing is that we haven't had to alter our Windsor configuration or our existing service, although we did have to attribute our service interface with WCF-specific concerns, which is a bit of a shame.

Client

The client side story is also pretty simple. Here's a client service that uses the customerService:

using System;
namespace Mike.WindsorWCFIntegration.Client
{
    public class DoSomethingWithCustomers : IDoSomethingWithCustomers
    {
        private readonly ICustomerService customerService;
        public DoSomethingWithCustomers(ICustomerService customerService)
        {
            this.customerService = customerService;
        }
        public void DoIt()
        {
            WriteClientDetails(customerService);
            WriteClientDetails(customerService);
        }
        private static void WriteClientDetails(ICustomerService customerService)
        {
            var customer = customerService.GetCustomer(24);
            ...
        }
    }
}

It's expecting the customerService to be injected by the container. The Windsor.config has to specify that ICustomerService is provided by WCF:

<component id="customerService"
    type="Mike.WindsorWCFIntegration.ICustomerService, Mike.WindsorWCFIntegration"
    wcfEndpointConfiguration="customerClient">
</component>

The wcfEndpointConfiguration attribute references the WCF client configuration in App.config; note that it matches the endpoint name, 'customerClient':

<configuration>
  <system.serviceModel>
    <client>
      <endpoint address="http://localhost:2730/CustomerService.svc"
        binding="basicHttpBinding"
        contract="Mike.WindsorWCFIntegration.ICustomerService"
        name="customerClient">
      </endpoint>
    </client>
  </system.serviceModel>
</configuration>

Once again we have to make sure that the WCF Facility is registered with the container (you can do this in the XML configuration as well):

var container = new WindsorContainer()
 .AddFacility<WcfFacility>()
 .Install(Configuration.FromXmlFile("Windsor.config"));

I hope I've been able to show that using WCF integration with Windsor can be a big win in extending component oriented application design to include serving and consuming remote services. Note that in none of the code above did I have to change the implementation of any of my services in order to make them work with WCF. You can retain your Dependency Injected, testable components and simply have them served onto the web. WCF allows you to configure the wire format (SOAP, REST, POX, binary) and transport (HTTP, named pipes, TCP-IP) independently of your component design. It's very easy to use and very nifty.

You can download the complete demo solution here:

http://static.mikehadlow.com/Mike.WindsorWCFIntegration.zip

Friday, November 21, 2008

Resources for tomorrow’s Developer Day talk

Tomorrow I’m going to be talking at the Developer Day at Microsoft’s UK headquarters near Reading. My talk is titled “Using an inversion of control container in a real world application.” I’m going to be showing how I architected Suteki Shop and talking about some of the cool things that you can do with an IoC container. I’ve got too much material for an hour, but hopefully I’ll be able to get through most of it.

You can download the slides here.

http://static.mikehadlow.com/Using an Inversion of Control Container in a.pptx

And the Suteki Shop code is available as always on Google Code:

http://code.google.com/p/sutekishop/

I'll be hanging out between sessions and going to other talks, so please come and say hello if you see me.

Wednesday, November 19, 2008

Multi-tenancy part 1: Strategy.

I want my eCommerce application Suteki Shop to be able to flexibly meet the requirements of any customer (within reason). How can we have a single product but enable our customers to have diverse implementations? I think the solution depends on the level of diversity and the number of customers. The right solution for a product with thousands of customers with little customisation is different from a solution for a handful of customers with very diverse requirements.

There are several axes of change to consider:

  1. Codebase. Do I have one codebase, or do I maintain a separate codebase for each customer?
  2. Application instance. Do I have one application instance to service all my customers, or does each customer have a separate one?
  3. Database schemas. Do I have one database schema, or do I have a different schema for each customer?
  4. Database instances. Do I have one database instance or separate ones for each customer?

Lots of customers, little customisation

Let's consider the two extremes. First, say I've got a product that I expect to sell to thousands of customers. My business model is premised on selling a cheap service to very many people. It's worth my while not to allow too much customisation because the effort to supply it simply wouldn't be worth it. If someone wants something significantly different from my thousands of other customers, I'll just tell them to look elsewhere. In this case I'd have a single code base, application instance, database schema and database. Effectively a single application will service all my customers.

In this case the main technical challenge will be making sure that each user's data is keyed properly, so that each customer perceives it as a single application servicing their needs only. It would be a major failure if one customer could see another's data. Think of an on-line email service such as Hotmail. Sometimes we want to allow customers to see each other's data, think Facebook, but that interaction needs to be tightly controlled.
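In practice that keying usually means a tenant id column on every row, with every query filtered by the current tenant. A minimal illustrative sketch (the types and names here are invented for illustration, not from Suteki Shop):

```csharp
using System.Linq;

// Illustrative only: single-schema multi-tenancy keys every row by tenant.
public class Order
{
    public int TenantId { get; set; }
    public decimal Total { get; set; }
}

public static class OrderQueries
{
    // Every query is filtered by the current tenant's id; forgetting this
    // filter is exactly the data-leak failure mode described above.
    public static IQueryable<Order> ForTenant(IQueryable<Order> orders, int tenantId)
    {
        return orders.Where(o => o.TenantId == tenantId);
    }
}
```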

The scope for providing divergent customer requirements is very limited in this case. Effectively every customer gets exactly the same features, and all you can do is allow them to switch features off and on. The cost of developing a customisable experience with such a setup is high.

The great thing about this single application approach is its scalability. Each extra customer requires no extra development effort. It's the way billion dollar on-line businesses are made.

Few customers, Deep customisation

At the other extreme we might be selling an application to just a handful of clients. Each client has diverse requirements and is willing to spend the money to have them implemented. Do we really have a single application in this case or are we instead delivering bespoke solutions? Here the answer might be to have a common application library but separate codebases for each client. It would then follow that each client needs separate database schemas, databases and application instances.

In this scheme, the cost of developing divergent features is no greater than developing them for bespoke software. The pain comes in deciding what to code in your shared library and what to code in your bespoke applications. Getting that wrong can mean repeating the same code for each customer and seriously violating the DRY principle.

Because we are effectively building bespoke software, each extra customer requires development effort. As our business grows we have to hire more and more developers to service our customers. The problem is that development teams don't scale well and there are high organisational costs involved. Our profit on each instance of our product is likely to decline rather than rise. For this reason there's a limit to the size a company based on this development model can grow to.

So which one is Suteki Shop?

Suteki Shop only has one customer at present, so it's a single bespoke application based on that customer's requirements. In the new year I hope to add a second customer, so I'll have to start making decisions about my multi-tenancy strategy. I want to make the right decisions now so that I can provision my third, fourth, tenth or one-hundredth customer as easily as possible. My customers are likely to have divergent requirements within the constraints of an eCommerce system. The easiest thing for my second customer would be to branch the code, make any changes they want and deploy a second database (with possibly a customised schema) and application instance. But that strategy will not scale to one hundred customers. Let's make the assumption that I want to service maybe a hundred customers easily, but I'm unlikely to have more than a thousand. Consider the four axes of change above...

  1. Codebase. I really don't want to fork my codebase. I'm a firm believer in keeping things DRY. There will be no branching.
  2. Application Instance. This is less clear cut. There are obvious benefits from maintaining separate application instances for each customer and this was my initial intention. I can configure and deploy them as required. However, after looking at techniques for provisioning multiple divergent customers on one instance, I believe that the costs can be minimal. Having one instance is much easier to administer and monitor, so I am changing my mind on this point and will aim for one application instance. One application instance can of course be duplicated to scale over a server farm. I think this approach could scale to thousands of customers.
  3. Database schemas. In the old days, I would have said that having multiple schemas would cause intense scale pain, but that is changing with modern tools. Using an ORM allows us to defer schema creation until deployment. If I can automate the management of schema creation and change, then having multiple versions is less of a problem. Of course that all depends on having multiple databases....
  4. Database instances. If I chose to have one database instance I would have to have one schema, which limits the amount of customisation I can do. I would also have to carefully key each customer's data, which is a development overhead. Automating database deployment and maintenance is relatively straightforward, so I think the choice here is clear cut: one database per customer. This also gives me extremely simple database scalability; I can simply farm out my customers to multiple DBMS's. This approach will scale to hundreds of customers; thousands would be problematic but not impossible.

So now I've got my multi-tenancy strategy, how am I going to implement it? The core technology I'm going to use is component oriented software design using Dependency Injection and an IoC Container. I'm going to follow some of the recommendations that Oren Eini makes in these posts:

Multi Tenancy - Approaches and Applicability

Adaptive Domain Models with Rhino Commons

Components, Implementations and Contextual Decisions

Windsor - IHandlerSelector

Windsor - IModelInterceptersSelector

In part 2, I'm going to show my first faltering steps to making a single instance of Suteki Shop host more than one customer. Stay tuned!

Upgrading Suteki Shop to the MVC Framework Beta

suteki_shop

I've just gone through the process of upgrading Suteki Shop, my open source eCommerce application from MVC Framework Preview 4 to the Beta. Yes, I know, I missed out Preview 5 all together, so my experiences are going to be different from someone just moving from Preview 5 to the Beta.

Here's a brief rundown of what was involved:

  1. Download and install the MVC Beta
  2. Get the latest MvcContrib trunk and build it. It already references the MVC Beta assemblies, so there's no need to copy them in.
  3. Copy all the MVC Beta, MvcContrib and Castle assemblies into my 'Dependencies' folder.
  4. Make sure all the MVC Beta assemblies are marked as 'Copy Local'. I don't want to have to install them in the GAC on my server.

Now, as expected, when I built the solution I got a ton of compile errors. Here's what I needed to do to get it to build:

  1. Most HtmlHelper extension methods have been moved to the System.Web.Mvc.Html namespace, so I had to add a using statement to any file which referenced ActionLink.
  2. ActionLink<T> now lives in the futures assembly: 'Microsoft.Web.Mvc' so I had to add using statements where appropriate.
  3. I had to add my own implementation of 'ReadFromRequest' to my ControllerBase class, as it has been removed from Controller.
  4. I had to fully qualify the namespace of my ControllerBase class because there is now a ControllerBase in the MVC Framework.

I also had to make the following changes to the Web.config file:

  1. Change the version number of System.Web.Abstractions to 3.5.0.0
  2. Change the version number of System.Web.Routing to 3.5.0.0
  3. Add the namespace 'System.Web.Mvc.Html'
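For reference, the Web.config changes above end up looking something like this. This is a sketch based on the standard ASP.NET config sections; the public key token shown is the usual one for the System.Web.* assemblies, so verify the exact entries against your own project:

```xml
<compilation debug="false">
  <assemblies>
    <!-- Versions bumped to 3.5.0.0 for the MVC Beta -->
    <add assembly="System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    <add assembly="System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </assemblies>
</compilation>
<pages>
  <namespaces>
    <!-- Makes the moved HtmlHelper extension methods visible in views -->
    <add namespace="System.Web.Mvc.Html" />
  </namespaces>
</pages>
```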

There were also some changes to the views:

  1. Html.Form has been renamed to Html.BeginForm.
  2. Html.Checkbox's signature has been changed.

With all the changes made, the software now builds and runs. But that's not the end of the story. It works, but it's coded in a very Preview 1 style. I'm planning to do a major refactoring over the Christmas holidays to use all the Beta goodness that's been introduced, and that will mean changing the way most of the controllers work and redoing most of the views. Should be fun :P

Friday, November 14, 2008

Digital Media Awards South

dimas

Wow, this is very cool. Code Rant has for some unknown reason been nominated for the Digital Media Awards: South. Well, not really unknown, I did submit it of course :P There are three nominated blogs. I've never heard of either of my competitors but they are both pretty awesome.

First is Matt Pearson, a Flash guru. Just check out some of his work. His blog is called zenbullets. You just know it's going to be good with a name like that, and it is: a fantastic, wide-ranging discussion of subjects from The Anthropic Principle to The Mathematics of Clouds. He also has a harrowing account of being falsely accused of a serious crime that would make anyone very, very angry with the way Sussex Police operate. Sounds like a great guy. I really hope I get to meet him at the awards ceremony.

Next up is the blog of NixonMcInnes, a company that delivers social media solutions. It looks like an incredibly creative and fun place to work. Their blog is a great read on the subtleties of social media and running a small Brighton company.

If it were up to me I would vote for Matt's blog. NixonMcInnes is a great example of company blogging, but Matt's blog is interesting beyond his talents as a Flash developer and could have a very wide audience. As for Code Rant, I would be very surprised if it gets many votes. For one thing, most of my posts are long code listings. Unless you are a true .NET junkie (like you, dear reader) it's going to be mostly meaningless. But getting nominated is great. It should mean that a few more companies in Brighton become aware that there's a long haired .NET obsessive that they can hire for very reasonable rates ;)

The awards evening should be fun. I've been given three tickets, so if anyone would like to go along on the 27th November, I've got two to spare. First come first served :) Unfortunately you have to pay for your own food and drink :(

Wednesday, November 05, 2008

Dependency injection with Castle Windsor: Video

Gojko has just posted the video of the talk we did last month at Skills Matter. In the first half I talk about Inversion of Control basics and show some design patterns. In the second half of the talk, Gojko covers the hard stuff: managing component configuration and facilities. BTW, you'll be relieved to know that my wife brought an end to "experiment beard" last week :)

Friday, October 31, 2008

MVC Framework and jQuery = Ajax heaven

I've got an admission to make: I've never used any of the Microsoft Ajax Toolkit. But recently I've been adding some mapping functionality to the project I'm working on. We wanted users to be able to drag a marker to a position on a map and have the new position updated on the server. Obviously we were going to have to use Ajax of some kind to do that. What I want to show today is how trivially easy it's proved to be to use the MVC Framework on the server and jQuery in the browser to do Ajax requests and updates. jQuery is now included in the default project for the MVC Framework, so there's no excuse not to use it.

Here's a very simple scenario. I've got a page where I want to show a list of people when I click 'Get People'. I also want to add a person to the database when I type their name into a text box and click 'Add Person'. Here's what it looks like:

AjaxDemo

The first thing we do is create the button for 'Get People' and an unordered list to put the people in:

    <input type="button" id="getPeople" value="Get People" />
    <ul id="people_list"></ul>

Next we have a jQuery ready handler (which fires when the page loads) that sets up the event handler for the getPeople button:

$(function() {
    $("#getPeople").click(function() {
        $.getJSON("/Home/People", null, getPeople);
    });
});

When the button is clicked we fire off a JSON request to /Home/People. This maps to a People action in our Home controller:

[AcceptVerbs(HttpVerbs.Get)]
public ActionResult People()
{
    var db = new DataClasses1DataContext();
    return Json(db.Persons);
}

All we do here is get all our Person records (using LINQ to SQL) and return them as Json. When the response is returned to the browser the getPeople callback function is called. Remember we set this in the getJSON jQuery method above. Here's the getPeople function:

function getPeople(people) {
    $("#people_list").text("");
    $.each(people, function(i) {
        $("#people_list").append("<li>" + this.Name + "</li>");
    });
}

The callback provides the data returned from the JSON request as its argument. All we have to do is use the handy jQuery each function to iterate through our people collection and append a new li element to our list for each one. The really nice thing here is how well the MVC Framework and jQuery interact. I didn't have to do any conversions between them. Our data just automagically converts from C# objects to Javascript.
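To see why no conversion code is needed, it helps to look at what actually travels over the wire: the Json() result is just an array of objects whose property names mirror the C# Person class. The names below are invented for illustration, but the shape is the point:

```javascript
// Hypothetical JSON payload as the browser sees it: property names
// match the C# Person class, so this.Name works with no mapping code.
const people = [
  { Name: "Alice" },
  { Name: "Bob" }
];

// The same iteration the getPeople callback performs, minus the DOM:
const listItems = people.map(function(p) {
  return "<li>" + p.Name + "</li>";
});
```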

How about the update? This is just as simple. First we need some more HTML: a text box to type the name into and a button to fire the update:

    <label>Name <input type="text" id="name" /></label><br />
    <input type="button" id="addPerson" value="Add Person" />

Next another click event handler is set up to handle the addPerson click event:

$("#addPerson").click(function() {
    var name = $("#name")[0].value;
    if (name == "") {
        alert("You must provide a name");
        return;
    }
    var person = { Name: name };
    $.post("/Home/People", person, null, "json");
});

This time we use the useful 'post' function to post our newly created person JSON object to the /Home/People URL. Because this is a POST request it's handled by our overloaded People action, this time attributed with AcceptVerbs(HttpVerbs.Post):

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult People(string Name)
{
    var db = new DataClasses1DataContext();
    var person = new Person {Name = Name};
    db.Persons.InsertOnSubmit(person);
    db.SubmitChanges();
    return null;
}

All we have to do to retrieve the Name of the person is to name the argument of the action 'Name'. The MVC Framework automatically pulls apart the request, finds a value called 'Name' and supplies it to the action. If you have a complex JSON structure you can deserialize it to a C# object using the built in data binding. All that's left to do is create a new Person instance with the given name and save it to the database. Nice!

I'm a latecomer to the Ajax party, but with jQuery and the MVC Framework I'm finding it a breeze to wire up some pretty nice functionality.

You can download the demo solution here:

http://static.mikehadlow.com/JQueryAjax.zip

Thursday, October 30, 2008

Dynamic Dispatch in C# 4.0

So C# 4.0 is finally with us (in CTP form anyway). I've just been watching Anders' PDC presentation, The Future of C#. The main focus is providing interop with dynamic languages and COM, so in order to do that they've added some dynamic goodness to C# itself. There's a new static type called dynamic :) Yes, it's the number one joke at PDC it seems. The dynamic keyword allows us to say to the compiler, "don't worry what I'm doing with this type, we're going to dispatch it at runtime".

Take a look at this code:

using System;
using Microsoft.CSharp.RuntimeBinder;
using System.Scripting.Actions;
using System.Linq.Expressions;
namespace ConsoleApplication1
{
    public class Program
    {
        static void Main(string[] args)
        {
            DoSomethingDynamic(7);
            DoSomethingDynamic(new Actor());
            DoSomethingDynamic(new DynamicThing());
            Console.ReadLine();
        }
        static void DoSomethingDynamic(dynamic thing)
        {
            try
            {
                thing.Act();
            }
            catch (RuntimeBinderException)
            {
                Console.WriteLine("thing does not implement Act");
            }
        }
    }
    public class Actor
    {
        public void Act()
        {
            Console.WriteLine("Actor.Act() was called");
        }
    }
    public class DynamicThing : IDynamicObject
    {
        public MetaObject GetMetaObject(System.Linq.Expressions.Expression parameter)
        {
            return new CustomMetaObject(parameter);
        }
    }
    public class CustomMetaObject : MetaObject
    {
        public CustomMetaObject(Expression parameter) : base(parameter, Restrictions.Empty){ }
        public override MetaObject Call(CallAction action, MetaObject[] args)
        {
            Console.WriteLine("A method named: '{0}' was called", action.Name);
            return this;
        }
    }
}

OK, so we've got a little console application. I've defined a method, DoSomethingDynamic, which has a parameter, 'thing', of type dynamic. We call the Act() method of thing. This is duck typing. The compiler can't check that thing has an Act method until runtime, so we wrap the call in a try-catch block just in case it doesn't. A side effect of duck typing is that there's no intellisense for the thing variable: we can write whatever we like against it and it will still compile. Any of these would compile: thing.Foo(), thing + 56, thing.X = "hello", var y = thing[12].
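This behaviour will be familiar from JavaScript, where every call is dispatched at runtime. As a point of comparison (not C#, and the function names are mine), the same try-it-and-see dispatch looks like this:

```javascript
// In JavaScript all dispatch is late-bound, which is the behaviour
// C# 4.0's 'dynamic' opts into for a single variable.
function doSomethingDynamic(thing) {
  if (typeof thing.act === "function") {
    return thing.act();
  }
  return "thing does not implement act";
}

const actor = { act: function() { return "Actor.act() was called"; } };
```

Passing an object with an act method works; passing a number falls through to the "does not implement" branch, just as the RuntimeBinderException catch does in the C# version above.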

Next, in the Main method, we call DoSomethingDynamic a few times, passing in different kinds of arguments. First we pass in the literal 7. Int32 doesn't have an Act method, so a RuntimeBinderException is thrown. Next we pass an instance of a normal C# class, Actor. Actor has an Act method, so Act is called normally as expected.

The last invocation of DoSomethingDynamic shows off how you can do dynamic dispatch in C# 4.0. We define a new class called DynamicThing and have it implement IDynamicObject. IDynamicObject has a single method you must implement: GetMetaObject. GetMetaObject returns a MetaObject, and all you have to do is implement a CustomMetaObject that knows what to do with any method (or property, or indexer, etc.) invocation. Our CustomMetaObject overrides Call and simply writes the name of the method to the console. Chris Burrows from the C# compiler team has a series of three blog posts showing off these techniques here, here and here. Anders' PDC presentation, The Future of C#, is here.

C# is primarily a statically typed language, and Anders obviously still believes that static typing is a better paradigm for large scale software. He sees the dynamic features as a way of adding capabilities to C# that have previously been the prerogative of VB and dynamic languages. He's especially contrite about the failure of C# to interoperate with COM efficiently in the past. However, there are going to be lots of cases where using dynamic will be a short cut to features that are hard to implement statically. I can see the C# forums humming with complaints that intellisense doesn't work anymore, or hard-to-diagnose runtime errors as a result of overzealous dynamism.

The last ten minutes of the talk, when he showed us some of the post-4.0 features, was very cool. They are rewriting the C# compiler in C#. This means that the compiler API will be just another library in the framework. Applications will be able to call compiler services at runtime, giving us lots of Ruby-style meta-programming goodness. Tools will be able to read the C# AST, giving us incredible power for refactoring or post-compilation-style tweaks.

Jim Hugunin has a related presentation, Dynamic Languages in Microsoft .NET that goes deeper into the dynamic features that Anders talks about. Also well worth watching.