David Duffett

Software Architect with a particular focus on user experience, .NET and e-commerce.

If you’re an ASP.NET developer and you haven’t checked out Nancy, do yourself a favour.  It’s awesome.

Nancy 0.8 was released a couple of days ago and included in this release was a new exception handling hook.

I use ELMAH for exception logging in all my ASP.NET MVC applications, and when I wrote an application in Nancy, I really wanted to use ELMAH there too.  Unfortunately there is no out-of-the-box support, and as Nancy 0.8 is very new, there is not a lot of documentation on how the exception handling hook works.

So I thought I’d share how I got it working - hope it helps someone!

Add ELMAH to your project

First of all, add ELMAH to your project using NuGet.  It should add the necessary elements to your web.config so you can configure logging to a database, email, etc.  More details can be found on the ELMAH website.

Handling exceptions

Handling exceptions in Nancy 0.8 is easy - just modify your Nancy Bootstrapper like so:

public class MyBootstrapper : DefaultNancyBootstrapper
{
    protected override void InitialiseInternal(TinyIoC.TinyIoCContainer container)
    {
        OnError.AddItemToEndOfPipeline((context, exception) =>
        {
            Elmah.ErrorSignal.FromCurrentContext().Raise(exception);
            return null;
        });
    }
}

This will log any exceptions that occur within your application.  You can test it by forcing an exception to occur, like this:

public class MyModule : NancyModule
{
    public MyModule() : base("/error")
    {
        Get["/"] = parameters => 
        { 
            throw new Exception("Exception was caught by Elmah"); 
        };
    }
}

Then navigate to the “/error” path of your application.  The exception will be logged by ELMAH.

Handling HTTP status codes

Another thing you may want to log is when you deliberately return a specific HTTP status code, for example 404 Not Found or 400 Bad Request.  Your code might look like this:

public class MyModule : NancyModule
{
    private readonly IRepository _repository;

    public MyModule(IRepository repository) : base("/find")
    {
        _repository = repository;
        Get["/{Id}"] = parameters => getRecord(parameters.Id);
    }

    private Response getRecord(int id)
    {
        var record = _repository.GetById(id);
        if (record == null)
            return HttpStatusCode.NotFound;

        // do other things with the record, then return a response
        return HttpStatusCode.OK;  // placeholder so the snippet compiles
    }
}

Your application has returned a 404 status code, but no exception has been raised, so ELMAH won’t know anything about it.  If you do want to log these, you will need to implement your own IErrorHandler class.

The DefaultErrorHandler class in Nancy currently handles 404 Not Found and 500 Internal Server Error codes.  It returns a nice error page in each case by modifying the response in the Nancy context.

In my version, however, I’m not too concerned about an error page (as this is a service application), so I’m simply going to log an exception with ELMAH.  I’m going to handle 400 Bad Request and 404 Not Found status codes.

public class ElmahErrorHandler : IErrorHandler
{
    private readonly HttpStatusCode[] _supportedStatusCodes = new[]
    {
        HttpStatusCode.BadRequest,
        HttpStatusCode.NotFound
    };

    public bool HandlesStatusCode(HttpStatusCode statusCode)
    {
        return _supportedStatusCodes.Any(s => s == statusCode);
    }

    public void Handle(HttpStatusCode statusCode, NancyContext context)
    {
        var message = string.Format("Error occurred for requested URL: {0}.", getUrl(context));
        ErrorSignal.FromCurrentContext()
            .Raise(new HttpException((int)statusCode, message));
    }

    private static string getUrl(NancyContext context)
    {
        return string.Concat(context.Request.Path, context.Request.Url.Query);
    }
}

The error handler in Nancy is called by the NancyEngine any time a status code is returned.  It will call the HandlesStatusCode method to determine whether the error handler can be used for the status code.  If this method returns true, then it calls the Handle method.
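For reference, the contract being implemented looks roughly like this - reconstructed from the implementation above, so check the Nancy 0.8 source for the exact definition:

public interface IErrorHandler
{
    // Return true if this handler wants to deal with the given status code
    bool HandlesStatusCode(HttpStatusCode statusCode);

    // Called by Nancy when HandlesStatusCode returns true for the response's status code
    void Handle(HttpStatusCode statusCode, NancyContext context);
}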

I couldn’t find a very clean way of getting Nancy to use this error handler, as the default error handler is kind of “baked in”.  One way you can do it is to override the InternalConfiguration and set the ErrorHandler type property in your bootstrapper class.  So, you end up with this:

public class MyBootstrapper : DefaultNancyBootstrapper
{
    private NancyInternalConfiguration _internalConfiguration;

    protected override NancyInternalConfiguration InternalConfiguration
    {
        get
        {
            if (_internalConfiguration == null)
            {
                _internalConfiguration = base.InternalConfiguration;
                _internalConfiguration.ErrorHandler = typeof(ElmahErrorHandler);
            }
            return _internalConfiguration;
        }
    }

    protected override void InitialiseInternal(TinyIoC.TinyIoCContainer container)
    {
        OnError.AddItemToEndOfPipeline((context, exception) =>
        {
            Elmah.ErrorSignal.FromCurrentContext().Raise(exception);
            return null;
        });
    }
}

Hope this helps someone looking to hook up ELMAH in their Nancy application!

The English version of this post is here - this one is for my Brazilian friends and family!

My colleagues at the company where I work, eSpares, and I have decided to run the Royal Parks half-marathon for charity.  Normally I would never do anything like this - I’m no athlete!  But I decided it’s a good challenge for a good cause.  Now I’m feeling the pressure, because it takes place on 9 October!

The most important part of the story

Our charity is The Children’s Trust in Tadworth, in the United Kingdom.  They care for children with severe disabilities, especially brain injuries.  Every year they help hundreds of children, and their families, with medical care, therapy and education.

The aim is to give them as much independence as possible.  You can read more about the charity on their website, www.thechildrenstrust.org.uk (in English!).

So, the most important part is that we raise as much money as possible for the children.  Any donation will be appreciated by me, and above all by the children!

You can donate here: http://www.justgiving.com/espares

You’re probably thinking: but, only a HALF-marathon?

Yes, it’s called a half-marathon, but believe me, it’s a long way!  A half-marathon is 21 kilometres: that’s more than 2 hours of running!

It’s a big challenge for me, because before June of this year I had never run more than 1.5km in my life.  Now I’m running 3 times a week, averaging between 6.5km and 10km each time.  I use a good iPhone app called Nike+ GPS.  The app records all my runs and shows me the distance covered, speed, and so on.

Here is an image of the map of my 6.5km run here in London:

And when I run 10km, I need to extend the run, like the route below:

I’m lucky because I can run near the River Thames, which helps make my runs more enjoyable.

One thing I’ve learned: I have to run very regularly, because if I take a holiday, it’s very hard to get back to the same pace when I start running again!  In August I went to Italy and ate a lot (remember??).  When I came back, it was like starting all over again…

So, please donate if you can, and wish me luck!

Myself and some of my colleagues at eSpares decided some months ago to run in the Royal Parks Half Marathon for charity.  This is not the sort of thing I would normally do - I’m certainly no athlete!  Nevertheless, I decided that it was both a good personal challenge and is definitely for a good cause.  It’s crunch time now, as the run is on Sunday October 9th.

The really important part

We chose The Children’s Trust in Tadworth as our charity.  The Children’s Trust provide care and services for children with profound disabilities and complex health needs, particularly involving brain injuries.  Every year they help hundreds of children in the UK with not only medical care and therapy, but also education and important information and training for families.

The aim is to help these children and their families achieve as much independence and quality of life as possible.

The money we’re raising will be used to benefit the children in the Trust’s care, helping them lead lives that are as full as possible, no matter how serious their disability. More information is available at www.thechildrenstrust.org.uk.

So the most important part is that we raise as much as we can for this worthy charity.  We’ve got a big goal so any donation you can give will be much appreciated, by me and by the children.

You can sponsor me by donating here: http://www.justgiving.com/espares

If you’re a UK taxpayer, please don’t forget to tick the Gift Aid box as well. That gets us an extra 25% on top of whatever is being donated, courtesy of Mr Osborne and the nice people at The Treasury.

But it’s only a HALF-marathon!

It may be called a half-marathon, but don’t let that fool you: it’s a bloody long way.  A half-marathon is 13.1 miles or about 21 kilometres if you’re that way inclined.  That’s a bit over 2 hours of running (jogging really) at my pace of about 10 minutes per mile.

Now the reason why this is such a challenge is that for the last 31 years of my life I’ve never run more than a mile in one go!  So, for the last 3 months I’ve been in training.  My regimen has been 3 runs per week, each around 4 or 5 miles (6 - 8km).  I use the brilliant Nike+ GPS app on my iPhone to record each run.  Here is a picture of my standard 4 mile run in West London:

And here is how I stretch the same run out to just over 6 miles:

I’m lucky enough to live quite close to the River, so at least part of my run is right alongside it.

One thing I have learned is that you have to keep running, very regularly!  A little break of 3 or 4 days really affects how you feel on the next run.  In August I took a 3 week break, went to Italy and pigged out, and believe me, when I started running again, it was like going back to square one!

Anyway, please do donate if you can, read a bit more about the Royal Parks Half, and wish me luck!

I’m about to undertake building a personal website in Ruby on Rails.  Being a complete n00b, I did some research and found that a lot of people recommend going straight to developing with Ruby in Linux, rather than the Windows route.  Various reasons have been stated, from performance to reliability and so on.  Not having experienced Linux before, I thought I’d give Ubuntu a go.  Setting up an Ubuntu virtual machine and getting the Ruby on Rails development environment up and running took quite a bit of reading for a n00b like me, as there are a few tricks to the process.  I thought I’d bring them all together here in case anyone else can benefit from it.

1. Install VirtualBox

VirtualBox is free, lightweight virtualization software which I’ll be using to host my Ubuntu virtual machine.  I’ve used VMware Workstation before, and trust me, VirtualBox seems lighter, faster and much easier to use.  So, the first step is go to www.virtualbox.org, download and install VirtualBox.

2. Download Ubuntu ISO

Don’t worry - you don’t have to burn any CDs or have a USB stick handy - the ISO will be attached to the virtual machine to act as a boot CD to install the operating system.  Download the latest Ubuntu ISO from here and save to anywhere on your hard drive: releases.ubuntu.com

3. Create a virtual machine

Time to get the virtual machine up and running.  I found this article very useful, and I’ll repeat the steps here.  Fire up VirtualBox, and click New.  When choosing the VM Name and OS Type, you can enter any name, but ensure you choose Linux and Ubuntu for the operating system and version.

The next screen asks you to choose the memory size.  A good guide is to choose a quarter of the size of RAM on your machine (for example, 512MB if you have 2GB RAM).

Following that, I set up a virtual hard disk with a size of 8GB.  Bear in mind that Ubuntu itself requires 4GB of space, so allow yourself enough for whatever development or usage you think you’ll require.  This sits on your hard drive as one large 8GB file.

The hard drive creation process can take a few minutes.

4. Boot the virtual machine from the Ubuntu ISO

To boot the virtual machine from the ISO, choose the virtual machine in VirtualBox and click Settings.  Go to the Storage tab, choose the CD/DVD drive (“Empty” under IDE Controller), click the CD icon next to the CD/DVD Drive dropdown list and click Choose a virtual CD/DVD disk file:

Find the ISO file you downloaded; selecting it attaches it to the CD/DVD drive of the virtual machine.  Click OK to close the Settings dialog, then Start the virtual machine.

5. Install Ubuntu and update software

The process to install Ubuntu is fairly self-explanatory.  All I would say is that the startup screen gives you 2 options: to Try Ubuntu and to Install Ubuntu.  Trying Ubuntu will simply run the operating system without installing it on the machine.  You want to choose to Install Ubuntu.  It is largely an unattended process, just asking you a few simple questions about your location, language and so on.  You can find more details on the process here.

When you get to the Installation Complete dialog, instead of clicking Restart Now, click on the VirtualBox Machine menu, and Close.  Choose to send the machine the shutdown signal.  This is important, as you need to detach the Ubuntu ISO before the machine boots again.

Go back into the Settings of the machine, then Storage, select the Ubuntu ISO and click the same CD icon you used to attach it, but this time click Remove disk from virtual drive.  Now you can start the virtual machine again.

Now that Ubuntu is installed, the first thing you want to do is ensure it is up to date.  To do this, in the top navigation panel, click on System, Administration and Update Manager.  Choose to install all recommended updates.

6. Install prerequisites: Curl and Git

Curl and Git are both required before installing Ruby.  This is handy, as you will want to set up Git and your GitHub account anyway.  The full process of setting up Git under Linux is explained well on the GitHub site.

Click on System, Administration and Synaptic Package Manager.  Find the packages curl, git-core, git-doc and git-gui and mark them all for installation.  Click Apply to install them and their dependencies.

7. Setup SSH Keys

Exactly as explained at GitHub:

Go to Applications, Accessories and Terminal, then type in the following:

$ ssh-keygen -t rsa -C "your_email@youremail.com"

You can hit Enter to choose the default directory, and it’s up to you as to whether you enter a passphrase or not.  Now you need to add your SSH key to GitHub.  First copy the contents of the id_rsa.pub file by opening it in gedit:

$ gedit ~/.ssh/id_rsa.pub

Select all and copy.  Go to GitHub, login and click Account Settings, SSH Public Keys and Add another public key.  Paste in the Key and click Add key.

Now, set up your personal Git information.  In GitHub, click on Account Settings and Account Admin.  Select your API token and copy it.  Back in the terminal window:

$ git config --global user.name "Firstname Lastname"
$ git config --global user.email "your_email@youremail.com"
$ git config --global github.user [username]
$ git config --global github.token [apitoken]

Next, we can move on to RVM.

8. Install RVM

RVM stands for Ruby Version Manager, and is becoming the de facto way of managing your Ruby installation.  The process for installing RVM is:

Open up a Terminal window by clicking on Applications, Accessories and Terminal.  Enter the following command: 

$ bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)

After RVM is installed, you need to modify your shell startup script in order to make RVM commands available to any terminal session.  You can do this by simply entering the following command: 

$ echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm" # Load RVM function' >> ~/.bashrc

Unless you log out of Ubuntu and back in, you need to reload your bash profile to get access to RVM:

$ source ~/.bashrc

9. Install RVM notes prerequisites

Yes, more prerequisites.  By entering the following command, RVM will give you a list of packages that you need to install before installing Ruby:

$ rvm notes

Quite a lot of information is shown to you, but the important one is for Ruby.  You can copy the command given, but will need to change it slightly so it is run as root, rather than your restricted user account.

Grab this and replace /usr/bin/apt-get with sudo apt-get.  sudo is a Unix command that allows you to execute another command with root privileges.  Your command will look something like this:

$ sudo apt-get install build-essential bison openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-0 libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev

Now, we can finally install Ruby.

10. Install Ruby and RubyGems using RVM

This is made very simple with RVM, with just 2 commands.  The first installs Ruby (and takes a while):

$ rvm install 1.9.2

And the second ensures that Ruby 1.9.2 is the default Ruby interpreter used whenever you open a terminal window:

$ rvm use 1.9.2 --default

Ruby and RubyGems are now installed on your virtual machine.

11. Install Rails and create your application

You now want to install Rails.  This is trivial, as you now have RubyGems installed:

$ gem install rails

After this is done, you can create your application, for example, an application called blog:

$ rails new blog

UPDATE: Pablo in the comments has notified me that the following procedure is no longer needed as the bug has been fixed.

Once this is done, there is an important step to take, to overcome a bug in Rake 0.9.  Rake is a general-purpose command runner used by Rails.  The latest version of Rake has a bug which causes errors to occur when performing actions such as creating databases.  What we need to do is pin your application to using Rake 0.8.7 until this bug is fixed.  To do that, you need to edit the Gemfile file in your application.

$ cd blog
$ gedit Gemfile

Add in the following line and save the file:

gem 'rake', '~> 0.8.7'

More information on why we do this can be found on StackOverflow here: http://stackoverflow.com/questions/5287121/undefined-method-task-using-rake-0-9-0-beta-4

Now, to update all gems in your application, but keep Rake at 0.8.7, run the following:

$ bundle install

12. Add your application to GitHub

I found this great article on easily getting up and running with Git and GitHub: http://agilewarrior.wordpress.com/2010/07/24/git-github-rubyonrails-how-to/

First you need to create your Git repository and add your application files to it.  

Create a repository: 

$ git init

Add all current application files to the repository: 

$ git add .

Do your initial commit:

$ git commit -m "Initial commit"

Create a new repository in GitHub and copy the Read+Write access git file path.  Then in Terminal, enter:

$ git remote add origin git@github.com:username/app_name.git
$ git push origin master

Now your app is on GitHub!

13. Optional: Install RubyMine

I decided to try out RubyMine from JetBrains as I’ve been impressed by other software from them such as TeamCity and Resharper.  I haven’t yet decided whether I’ll continue with it, but thought I’d put down the steps required to install it anyway, as it’s not straightforward!

First we need to install the Sun Java JRE, and remove the already installed OpenJDK, as it is incompatible with RubyMine. 

UPDATE: Pablo Salomon informed me in the comments that Java is no longer available from Canonical Partners, so I’ve updated this procedure.  Thanks Pablo!

One of the issues is that the Sun JRE packages cannot be found by default, so we need to add a new software repository:

$ sudo add-apt-repository ppa:ferramroberto/java

Now go back to the Synaptic Package Manager.

Click on the Reload button, so that the list of packages is updated to include those from this new repository.

Select the sun-java6-jre package, mark it for installation, then click Apply to install it and its dependencies.

Now, we need to remove the OpenJDK.  This is easiest done from the Terminal:

$ sudo apt-get remove openjdk*

Now, when you run the following command to find out the Java version:

$ java -version

It should read like this:

java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) Client VM (build 19.1-b02, mixed mode, sharing)

Now we need to download and extract RubyMine:

wget http://download.jetbrains.com/ruby/RubyMine-3.1.1.tar.gz
tar xf RubyMine-3.1.1.tar.gz

Now move it to the /opt/rubymine directory:

sudo mv RubyMine-3.1.1/ /opt/rubymine 

Now, create a custom application launcher by right-clicking in the top menu panel, and clicking Add to Panel.  Choose Custom Application Launcher.  

Click on the icon button and choose /opt/rubymine/bin/RMlogo.svg.  Enter RubyMine for the name, and /opt/rubymine/bin/rubymine.sh for the Command and click OK.

You now have RubyMine installed and are ready to develop your application in Ruby on Rails.  

Geez this is a long post!

This morning I came across a tweet from Rob Conery pointing to a rant from a developer disappointed with the performance of SubSonic (an open-source ORM Rob wrote) as compared to the so-called “micro-ORMs”: Dapper, Massive, Petapoco and so on.

The proper link to the thread is here: http://groups.google.com/group/subsonicproject/browse_thread/thread/ee2d5f16dfed6766?hl=en&pli=1

Even though I’m a web developer, I consider SQL (particularly using SQL Server) to be one of my strongest skills.  I know how to write complex queries efficiently, design a good data model, choose the right indexes and so on.  It pains me somewhat to read about and meet developers who have no interest whatsoever in learning how to extract data from a database!  After all, SQL has not disappeared.  It isn’t dead.  Even if you hide it, it’s there, and if you don’t know it, I’m convinced you’re not going to be able to produce quality, scalable software over a relational database.

Reading through the thread about SubSonic, and Rob’s perfectly reasonable replies reminded me of the journey I (and various teams I’ve worked with) have gone through in relation to data access.  It’s been quite a rollercoaster, and, like a rollercoaster, we seem to have almost come full circle.

1999: An Inline SQL Odyssey

One of my first very large projects was working with a team on software to manage an online casino.  I was working with some seriously talented people, very organised, great software (and infrastructure) practitioners.  

But, these were the days of Visual Basic 5, Goto statements, and feeling the need to append ridiculously large lists of modifications as comments above every subroutine.

These were also the days of inline SQL.

'*********************************************************************
'Created:
'   28-10-1998    John Smith 
'
'Modifications:
'   01-04-1999    David Duffett    Just love tweaking that inline SQL.
'
'Parameters:
'   psCustNo - The value of the criteria to search for (customer no)
'
'Return Value:
'   If success, then the customer name is returned, otherwise null
'   will be returned.
'
'Description:
'   To get customer name using customer no.
'*********************************************************************
Function GetCustName(psCustNo As String) As String
    On Error GoTo GetCustName_Err

    Dim adoRst As New Recordset
    Dim strSQL As String

    GetCustName = ""
    If psCustNo <> "" Then
        strSQL = "SELECT ngcusfname, ngcuslname From ngcus WHERE ngcus# = " & psCustNo
        adoRst.Open strSQL, adoCon
        If Not adoRst.EOF Then
            GetCustName = Trim$(adoRst!ngcusfname) & " " & Trim$(adoRst!ngcuslname)
        End If
    End If

GetCustName_Exit:
    If adoRst.State = adStateOpen Then adoRst.Close
    Exit Function

GetCustName_Err:
    Call ErrorMessage_Display(Err.Description, Err.Number, "GetCustName", mcstrModule)
    Resume GetCustName_Exit

End Function

Life was good, inline SQL was great.  But then, the web was just starting to flourish, performance was just a matter of having beefy servers, and SQL injection just a minor threat marked as “low priority” in the backlog.

Stored Procedures save lives!

Now I may have been a little late to the party, but something happened halfway through working on the casino project.  People started touting the benefits of using stored procedures for all data access, instead of inline SQL.  This provided many benefits:

  • Performance.  Although not a problem today, back then stored procedures were pre-compiled, whereas inline SQL was not, which meant calling a stored procedure was faster.
  • Parametrization.  We now had some level of protection against SQL injection attacks, as values were safely passed to SQL Server in stored procedure parameters.
  • Security.  Security could now be very fine-grained.  We ditched applying any permissions to tables, and controlled access through each interaction by applying permissions to the stored procedures themselves.

Life was good, inline SQL was bad.  In fact, if a developer caught sight of inline SQL over your shoulder, you were likely to get openly taunted in front of your colleagues.  Inline SQL?  A sin!
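For contrast with the VB function above, here is a rough C# sketch of what the stored procedure era looked like - the procedure and parameter names are made up, but the point is that the customer number travels as a parameter rather than being concatenated into the SQL:

using System.Data;
using System.Data.SqlClient;

public static string GetCustomerName(string connectionString, string customerNo)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("usp_GetCustomerName", connection))
    {
        // The query lives in a stored procedure on the server...
        command.CommandType = CommandType.StoredProcedure;
        // ...and the value is passed as a parameter, not concatenated into the SQL
        command.Parameters.AddWithValue("@CustomerNo", customerNo);

        connection.Open();
        return (command.ExecuteScalar() as string) ?? string.Empty;
    }
}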

Master of my own domain

After leaving the casino I picked up a book on C# and decided to find a job in .NET.  I’d progressed to Visual Basic 6, and we were now up to SQL Server 2000.  It was around this time that I was introduced to the concept of a “domain model”.  What a revolution.  No longer should I think of this event executing this SQL, but now I must think along the lines of:

Product product = new Product();
product.LoadFromId(id);
product.Name = strNewName;
product.Categories.Add(newCategory);
product.Save();

My brother and I even developed our own ActiveRecord-style ORM using a nice combination of .NET Reflection and stored procedures. Once again, life was good. Domain model is great, stored procedures are wonderful.
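That home-grown ORM is long gone, but the core trick was nothing more exotic than this kind of reflection-based mapping (a simplified, hypothetical sketch - not the actual code - which assumes each writable property has a matching column returned by the stored procedure):

using System.Data;
using System.Reflection;

public static T MapRecord<T>(IDataRecord record) where T : new()
{
    var item = new T();
    foreach (PropertyInfo property in typeof(T).GetProperties())
    {
        if (!property.CanWrite) continue;

        // Copy the column with the same name as the property onto the new object
        int ordinal = record.GetOrdinal(property.Name);
        if (!record.IsDBNull(ordinal))
            property.SetValue(item, record.GetValue(ordinal), null);
    }
    return item;
}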

LINQ me up, Scotty

Test-Driven Development piqued my interest, and the buzz in developer-land was around the frustration of domain logic creeping into stored procedures.  Deployment of the database was tricky as well, and version control of it was almost non-existent.  The more stored procedures we had, the more logic was creeping into that unsafe world - the database.  Along comes LINQ.  Finally we had the means of writing these sorts of queries in code!  This logic could be tested - and a wonderful time had, through trial and error, figuring out LINQ’s oddities and getting your queries to select the right data.
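For example, instead of a stored procedure, a query could now live in testable code against the domain model - something along these lines, with db standing in for a hypothetical LINQ to SQL or Entity Framework context:

var saleProducts = from product in db.Products
                   where product.Price < 10m
                         && product.Categories.Any(c => c.Name == "Sale")
                   orderby product.Name
                   select new { product.Name, product.Price };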

LINQ was fantastic.  Here’s where things started to get ugly, though.  As with stored procedures vs inline SQL before, developers started forming very strong views, like those seen in the SubSonic thread, that every query must be written in LINQ.  You’re outed if you don’t embrace this - why on earth are you writing old-fashioned SQL?

var confusion = from inline in db.inlineSql
                join sp in db.storedProcs on sp.sql equals inline.sql
                join linq in db.now on linq.sql equals inline.sql
                into confusion
                where linq.sql == inline.sql
                select new GiveMeABreak()

2011: Grow Up.

Inline SQL.  Stored Procedures.  LINQ.  NHibernate, SubSonic, Entity Framework, Dapper, Petapoco, Massive.  All of these can be useful, it just depends on what you are doing.

Even with the best architects, the best developers, with any large system you will come to a point where your client asks you to implement a piece of functionality that requires a complicated query, or something that lies outside the duties of your chosen ORM or LINQ.  ORMs (particularly the LINQ materializers in them) are general-purpose tools.  They are designed to work for a variety of use cases.  This means that when you have an edge case with very specific requirements, they probably won’t do exactly what you want.

Need a million records returned to your app?  Write some SQL.  Use a good old-fashioned data reader.  Better yet, use Massive, Dapper or Petapoco.
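For instance, with Dapper the hand-written SQL stays front and centre and the mapping is one line (assuming an open IDbConnection called connection, and a plain Product class with ProductId and Name properties):

var products = connection.Query<Product>(
    "SELECT ProductId, Name FROM Product WHERE Name LIKE @Search",
    new { Search = "%pump%" });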

Client asked you to implement a screen that displays all customer orders for a product that has a Chinese language description, has been shipped to Australia, costs 3 euros when selling in Europe and has exactly 4 images?  What are you going to do?  Re-write your domain model for this one use case?

If your systems are sufficiently de-coupled, there’s no reason why you can’t have some parts using NHibernate and some using a micro-ORM.  Whatever suits, whatever is most productive for you.

One size does NOT fit all when it comes to data access.  You need a healthy dose of pragmatism, especially in the real world, where delivering value to your business is most important.  Don’t forget SQL - it’s still a trusted friend after all these years.

var calm = db.Query("SELECT * FROM Problems WHERE Status=@0", "Solved");

Here’s a hidden trick I found recently in SQL Server, using a combination of its built-in XML capabilities and the STUFF function.

I’ve had some cases where I’ve needed to concatenate values of a particular column in a table into a comma-separated string - for example, the models that a part fits.


As this came up a lot, we wrote a CLR function in C#, imported this into SQL Server and created a User Defined Function (UDF) to ease concatenation of string values from rows in a table. I had a few major problems with this approach:

  1. The code is unnecessarily complex for such a simple requirement, mainly due to the plumbing code required to make SQL CLR functions
  2. We now have to add another process to our deployment, installing this CLR function in SQL Server
  3. People now have to know how to do this. This is a problem - how often do you actually write CLR functions and reference them in SQL??

I was overjoyed to find a way of achieving this that is more efficient and involves pure SQL!  It can be achieved using the STUFF function and SQL Server’s built-in XML capabilities.

Say I have the following tables (very simplistic):

CREATE TABLE [dbo].[Product](
    [ProductId] [bigint] NOT NULL,
    [Name] [varchar](255) NOT NULL
)

CREATE TABLE [dbo].[ModelProductFits](
    [ProductId] [bigint] NOT NULL,
    [ModelThisFits] [varchar](50) NOT NULL
)

With the following values:

-----------------------------
| ProductId | ModelThisFits |
|---------------------------|
| 12345     | A3666         |
| 12345     | A3667         |
| 12345     | A8999         |
-----------------------------

And I want to output the following:

-----------------------------------
| ProductId | Models              |
|---------------------------------|
| 12345     | A3666,A3667,A8999   |
-----------------------------------

How can we achieve this just in T-SQL?

1. Get XML element string with FOR XML

Adding FOR XML PATH to the end of a query allows you to output the results of the query as XML elements, with the element name contained in the PATH argument. For example, if we were to run the following statement:

SELECT ',' + ModelThisFits
FROM ModelProductFits
ORDER BY ModelThisFits
FOR XML PATH('Model')

We would receive the following XML string:

<Model>,A3666</Model><Model>,A3667</Model><Model>,A8999</Model>

By passing in a blank string (FOR XML PATH('')), we get the following instead:

,A3666,A3667,A8999

2. Remove leading comma with STUFF

The STUFF function literally “stuffs” one string into another, replacing characters within the first string. We, however, are using it simply to remove the first character of the resultant list of values. The parameters of STUFF are:

  1. The string to be “stuffed” (in our case the full list of models with a leading comma)
  2. The location at which to start deleting and inserting characters (1, the position of the leading comma)
  3. The number of characters to delete (1, being the leading comma)
  4. The string to insert in its place (an empty string)

SELECT STUFF
(
    (
        SELECT ',' + ModelThisFits
        FROM ModelProductFits
        ORDER BY ModelThisFits
        FOR XML PATH('')
    ), 1, 1, ''
)

So we end up with:

A3666,A3667,A8999

3. Join on Product to get full list

Next we just join this on the list of products in the Product table, to get a list of product IDs with models they fit!

SELECT
    P.ProductId,
    STUFF
    (
        (
            SELECT ',' + ModelThisFits
            FROM ModelProductFits M
            WHERE M.ProductId = P.ProductId
            ORDER BY ModelThisFits
            FOR XML PATH('')
        ), 1, 1, ''
    ) AS Models
FROM
    Product P

And we have our result:

-----------------------------------
| ProductId | Models              |
|---------------------------------|
| 12345     | A3666,A3667,A8999   |
-----------------------------------

Simple.

UPDATE

Thanks to denisvaleev's comments on this post 3 years ago, I've remembered an issue that can be encountered by using this technique - truncation of the resulting value.  I'd recommend you cast the resulting XML to varchar(max), by doing the following:

SELECT
    P.ProductId,
    STUFF
    (
        (
            SELECT ',' + ModelThisFits
            FROM ModelProductFits M
            WHERE M.ProductId = P.ProductId
            ORDER BY ModelThisFits
            FOR XML PATH(''), type
        ).value('.', 'varchar(max)'), 1, 1, ''
    ) AS Models
FROM
    Product P

Thanks for continuing to read this post!

Quite often I will have a screen with a list of items, each with a checkbox so you can select multiple items and perform actions on them, like so:

In these screens you often need a checkbox at the top to select all of the checkboxes easily.  To achieve this I created a reusable script which uses jQuery to do the following:

  1. If the select all checkbox is ticked, ensure all individual checkboxes are ticked
  2. If the select all checkbox is unticked, ensure all individual checkboxes are unticked
  3. If an item checkbox is ticked or unticked, ensure that the select all checkbox is ticked if all items are ticked, and unticked otherwise.

There are a lot of examples of select all checkboxes on the net, but most of them ignore the 3rd point above, leaving the select-all checkbox in the incorrect state when individual checkboxes are clicked.

Here is the script:

/* Initialize the select all checkbox */
$(document).ready(function() {
    selectAllCheckbox.init();
});

selectAllCheckbox = function () {
    /* When select all is clicked, ensure checked state 
       of all items equals the select all state */
    var selectAllClicked = function() {
        $(".select-checkbox input").attr("checked", $(".select-all-checkbox input").is(":checked"));
    },
    /* When an item checkbox is clicked, ensure the 
       select all checkbox is ticked only if all items are ticked */
    individualClicked = function() {
        var all;
        $(".select-checkbox input").each(function() {
            all = $(this).is(":checked");
            if (!all) return false;
        });
        $(".select-all-checkbox input").attr("checked", all);
    };
    return {
        init: function() {
            $(".select-all-checkbox input").click(selectAllClicked);
            $(".select-checkbox input").click(individualClicked);
        }
    }
}();

And an example of its usage:

<table>
    <tr>
        <th class="select-all-checkbox"><input type="checkbox" /></th>
        <th>Name</th>
    </tr>
    @foreach (var item in Model) {
        <tr>
            <td class="select-checkbox"><input type="checkbox" /></td>
            <td>@item.Name</td>
        </tr>
    }
</table>

So, all you need to do is:

  1. Reference a JavaScript file containing the script above
  2. Add a CSS class select-all-checkbox to the container around the select all checkbox
  3. Add a CSS class select-checkbox to the container around each individual item checkbox
  4. Done!

It’s a good idea (essential, really) to apply HTTP caching rules to your CSS and JavaScript files.  This ensures that when users visit your site, they do not have to download these files every time, and that in fact they are cached on the client, and periodically checked to see if they have been changed.

The problem is that when you change your JavaScript or CSS, the user doesn’t receive the update straight away, which can cause your site to break, look bad, and so on.  The user then needs to fully refresh the page (F5) to force the browser to check whether the referenced file has changed, and download the new version.

How can we cache our JavaScript and CSS, but also force it to update whenever we change it?  I’m going to provide 2 solutions for you, and they simply involve changing the URL to the referenced file slightly.

1. Add the application version to the reference

In the first solution, we will dynamically add the application version number to the reference, as in the following examples:

<link rel="stylesheet" type="text/css" href="/content/css/site.css?v=2.34.0.1255" />
<script src="/scripts/product.js?v=2.34.0.1255"></script>

What does this do?  Well, adding a query parameter to your CSS or JavaScript reference does nothing to change the actual CSS or JavaScript.  What it does do, however, is tell the browser that it needs to download a different file.  

In this solution, every time you release your application, browsers will be forced to download new versions of your CSS and JavaScript, which will be cached as normal from then on.

Here is my MVC UrlHelper extension to achieve this:

public static string VersionedContent(this UrlHelper urlHelper, string contentPath)
{
    var versionedPath = new StringBuilder(contentPath);
    versionedPath.AppendFormat("{0}v={1}", 
                               contentPath.IndexOf('?') >= 0 ? '&' : '?', 
                               getApplicationVersion());
    return urlHelper.Content(versionedPath.ToString());
}

private static string getApplicationVersion()
{
    return Assembly.GetExecutingAssembly().GetName().Version.ToString();
}

And to use it:

<link rel="stylesheet" type="text/css" href="@Url.VersionedContent("~/content/css/site.css")" />

One disadvantage to this approach is that you might not have actually changed some of your CSS or JavaScript, and you may wish those unchanged files to continue to be cached, and not force users to download new versions.  So, we have a second solution.

2. Add the file modified date to the reference

Here, instead of adding the application version to the reference, we will add the file modified date:

<link rel="stylesheet" type="text/css" href="/content/css/site.css?m=20110502123507" />
<script src="/scripts/product.js?m=20110301145518"></script>

From above, you can see that the site.css file was last updated on 2nd May 2011 at 12:35:07.  The product.js file, however, was last updated on 1st March 2011, at 14:55:18.  So, we now have granularity over which files will be forced to be downloaded as new!

Here is the Url extension for this:

public static string DatedContent(this UrlHelper urlHelper, string contentPath)
{
    var datedPath = new StringBuilder(contentPath);
    datedPath.AppendFormat("{0}m={1}",
                           contentPath.IndexOf('?') >= 0 ? '&' : '?',
                           getModifiedDate(contentPath));
    return urlHelper.Content(datedPath.ToString());
}

private static string getModifiedDate(string contentPath)
{
    return System.IO.File.GetLastWriteTime(HostingEnvironment.MapPath(contentPath)).ToString("yyyyMMddHHmmss");
}

And usage:

<script src="@Url.DatedContent("~/scripts/product.js")"></script>

So, now, whenever you update a CSS or JavaScript file, you can be sure your users will always be up to date with the latest version of the file, but also will be benefiting from your cache directives.

If you’ve ever had the pleasure of writing HTML specifically for emails, you’ll know that it’s a dirty and ugly business.  Desktop email clients such as Microsoft Outlook do not play nice with modern HTML, and online email such as Gmail and Hotmail also have their own quirks.

Here are my golden rules for writing HTML for emails:

1. Use TABLEs, not DIVs

Pains me to say it, but your life will be much easier if you go back to living in the 1990s and use TABLEs for your layout.  Email clients don’t handle floating elements very well, and you can ensure your layout works by simply breaking it down into exact rows and columns.

<table cellpadding="0" cellspacing="0" width="613"></table>

2. Use inline styles, not CSS stylesheets or style elements

Most email clients ignore or strip out altogether everything except the inner contents of the <body> tag in your HTML, so there’s no use putting in links to stylesheets, as they won’t work!  Add your styles in the old-fashioned way, right in the element itself.

<p style="font-family:Verdana;font-size:11px"></p>

Also - don’t get all fancy and try to use any CSS 2 or CSS 3 styling - stick with basic CSS 1.

3. Hard-code your dimensions

This goes for images as well as table cells where you need the dimensions to work perfectly.  Hard-code those dimensions in, using the good old-fashioned width and height attributes.  Note that when using these attributes, do not enter “px” after the pixel amount.

<td width="10" height="10">

When using the style attribute, however, do include the “px”.

<td style="width:10px; height:10px;">

4. Don’t forget the Gmail img gap fix!

Gmail inserts a mysterious gap around your images unless you include style="display:block" on all of your img elements.

Also, if your image is within a hyperlink, be sure to add border="0" too, otherwise a big ugly border will appear around it.

<img style="display:block" border="0" alt="something" src="something.gif"
width="10" height="10" />

Hard truths, but following them will save you hours of frustration and trial and error getting your email working in the major clients.

Please, however do not use these recommendations for HTML on your website.  I’ll be posting another blog about writing semantic HTML (including HTML5) soon…

A good way to deploy heavily used websites without worrying that files are in use is a technique called Blue-Green Deployment.  I won’t go into all the details (Martin Fowler’s article explains it thoroughly), but the basic premise is that you have 2 production environments, with one being live (say “Blue”) and one inactive (“Green”).

When deploying, you simply update the inactive slice (“Green”), then once this slice is fully operational, switch your traffic to be routed to the new slice.

I’ve done this simply using Powershell and IIS 6 home directory paths.  The script below does the following:

  1. Queries IIS on the remote server to find out what the “active” home directory is (maybe the “Green” folder).
  2. Unzips the package file into the inactive folder (“Blue”).
  3. Changes the IIS home directory on the website to point to the updated slice (“Blue”).

Another advantage to this approach is you now have a really easy rollback, as the other folder contains the previous version of your website.

The script below optionally accepts a $virtualDirName, if the application you are updating is actually a virtual directory under a site, and not the site itself.

param
(
    $packageFile,
    $server,
    $siteName,
    $blueIISPath,
    $greenIISPath,
    $blueNetworkPath,
    $greenNetworkPath,
    $virtualDirName = $null
)

# Default paths to the "Blue" slice
$newIISPath = $blueIISPath
$newNetworkPath = $blueNetworkPath

# Check package file exists
if (!(Test-Path $packageFile)) {
	Write-Error "Package file [$packageFile] could not be found or accessed."
	exit 1
}

# Access IIS on remote machine
$packetPrivacy = 6
$query = "ServerComment = '" + $siteName + "'"
$webServer = Get-WMIObject -Class IIsWebServerSetting -Namespace "root\microsoftiisv2" -Filter $query -ComputerName $server -Authentication $packetPrivacy
if ($webServer -eq $null) {
	Write-Error "Could not find or access IIS on $server"
	exit 1
}

# Query current home directory path in IIS
$webDir = $null
$nameQuery = "Name = '" + $webServer.Name + "/root'"

# Modify query for virtual directories under site
if ($virtualDirName -ne $null -and $virtualDirName -ne "") {
	$nameQuery = "Name = '" + $webServer.Name + "/root/" + $virtualDirName + "'"
}

$webDir = Get-WMIObject -Class IIsWebVirtualDirSetting -Namespace "root\microsoftiisv2" -Filter $nameQuery -ComputerName $server -Authentication $packetPrivacy
if ($webDir -eq $null) {
	Write-Error "Could not find or access home directory"
	exit 1
}
Write-Host "Current site path is" $webDir.Path

# Are we on the blue slice? If so, switch to green
if ($webDir.Path -eq $blueIISPath) {
    $newIISPath = $greenIISPath
    $newNetworkPath = $greenNetworkPath
}

try {
	$error.Clear()

	# Clean folder
	Write-Host "Cleaning $newNetworkPath"
	Remove-Item "$newNetworkPath\*" -recurse -force

	# Unzip build file
	Write-Host "Updating $newNetworkPath to new version"
	$shell = New-Object -com shell.application 
	$zipFile = $shell.namespace($packageFile) 
	$destination = $shell.namespace($newNetworkPath) 
	$destination.Copyhere($zipFile.items(), 0x4)

	# Switch IIS path
	Write-Host "Switching path to $newIISPath"
	$webDir.Path = $newIISPath
	Set-WmiInstance -InputObject $webDir

	Write-Host "Successfully deployed."
} catch {
	Write-Error $error[0]
	exit 1
}

In order to use this powershell script, you can call it from a command or batch file, passing in parameters for the network locations to update (blue and green), the local IIS paths (blue and green), the IIS server and site name.  For example:

powershell -ExecutionPolicy unrestricted -command .\blue_green_deploy.ps1 
	-packageFile package.zip -server WEBSERVER -siteName WebSite 
	-blueIISPath D:\Web_Blue\ -blueNetworkPath \\WEBSERVER\Web_Blue\ 
	-greenIISPath D:\Web_Green\ -greenNetworkPath \\WEBSERVER\Web_Green\

As this updates over the network, I use this in a deployment build in our Continuous Integration system to easily update different environments.