Thursday 8 October 2009

On Duct Tape Programmers

To: Duct tape programmers

Enjoy your skin-of-your-teeth deliveries, your fingers-crossed deployments, your days lost firefighting the same issues over and over again. Have fun wading through incomprehensible spaghetti code, and cutting 'n' pasting your "one off" solutions all over the shop. Smile sweetly as you work late into the night figuring out the latest screw-up.

By all means, cling on to your outdated methods like it's still 1988. But please, allow me to work the way I want to work. And do me a favour: stay outta my codebase.

Wednesday 7 October 2009

Mind The Gap

Agile, in whatever form, has a set of core practices - iterative development, unit testing, continuous integration, etc. You can, by all means, choose not to include one or more of these core practices; but you'd better make damn sure you fill the gap with something else or you're headed for big trouble.

For instance, one core practice is the creation of acceptance tests for each story (use case) at the same time as, or before, the story is worked on. These acceptance tests are often created by test professionals who form part of the team. If you don't do this, but instead see testing as separate from development, then you need to do something to plug the knowledge gap. The testers are not going to know what to test. How can they? They're not involved in planning or daily stand-ups. The usual response then is that "Agile doesn't work for testing because it is document-lite." Wrong: agile is document-lite because everyone is involved in the development process. Take that away and you have a gap. A gap you must fill. In this case, by writing up your stories in detail for the testers to test against.

You see, the agile core practices work together, so that the whole is greater than the sum of the parts. Take one of the parts away and you may very well find the whole is broken. At this point you may decide agile doesn't work. Not true: you broke it!

Thursday 17 September 2009

Waiting for Visual Studio

I find this funny and ironic...

Tuesday 14 July 2009

Collapsing foreach Loops using LINQ

Much of the business logic we have traditionally written has been buried in a heap of nested foreach loops and if statements. All this additional ceremony obscures the meaning of the code, making it harder to work out what a method actually does.

One of the key strengths of LINQ, as I see it, is its ability to put your application's business logic center stage. By separating your query from your command within each method, the clarity of those methods is greatly increased.

Time for an example, I think. Take the following traditional business logic:

public void FulfillOrders()
{
    foreach(Customer customer in Customers)
    {
        if(customer.CanPlaceOrders())
        {
            foreach(Order order in customer.Orders)
            {
                if(!order.IsShipped())
                {
                    foreach(OrderLine line in order.Lines)
                    {
                        if(line.IsInStock())
                        {
                            customer.Ship(order, line);
                        }
                        else
                        {
                            customer.BackOrder(order, line);
                        }
                    }
                }
            }
        }
    }
}

OK, I admit, not a great bit of DDD - but I'll live with that; it's not the point of this post. However, this code is fairly difficult to read, and it's quickly going to become unmaintainable as more business logic is heaped upon it. LINQ allows us to collapse those foreach loops and if statements down into a query, thereby achieving command-query separation within the method.

public void FulfillOrders()
{
    var allOrders = from customer in Customers
                    from order in customer.Orders
                    from line in order.Lines
                    where customer.CanPlaceOrders()
                       && !order.IsShipped()
                    select new { Customer = customer,
                                 Order = order,
                                 Line = line };

    var shippable = from item in allOrders
                    where item.Line.IsInStock()
                    select item;

    var outOfStock = from item in allOrders
                     where !item.Line.IsInStock()
                     select item;

    foreach(var item in shippable)
    {
        item.Customer.Ship(item.Order, item.Line);
    }

    foreach(var item in outOfStock)
    {
        item.Customer.BackOrder(item.Order, item.Line);
    }
}

Note how in the allOrders query I am stacking up the from clauses with no joins. With independent sequences this would produce a cross join, pairing every customer with every order and every order line regardless of who owns what. Here, though, each inner sequence is drawn from the current element of the outer one - the Orders belong to the Customer and the OrderLines belong to the Orders - so the compiler translates the stacked from clauses into nested SelectMany calls and only related objects are ever combined.
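For the curious, the stacked from clauses compile down to something like the following method syntax (simplified here for clarity; the intermediate anonymous types are illustrative):

var allOrders = Customers
    .SelectMany(customer => customer.Orders,
                (customer, order) => new { customer, order })
    .SelectMany(co => co.order.Lines,
                (co, line) => new { Customer = co.customer,
                                    Order = co.order,
                                    Line = line })
    .Where(item => item.Customer.CanPlaceOrders()
                && !item.Order.IsShipped());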

In this new "LINQed-up" version we have clearly separated the queries (at the top) from the commands (at the bottom). This improves readability, which in turn improves maintainability. Getting to grips with the syntax of LINQ can take some time, but the increase in code maintainability is well worth the effort.
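One caveat worth noting: LINQ queries are deferred, so allOrders is re-evaluated by both the shippable and outOfStock queries. If that double evaluation matters for your data source, materialising the results once is a small change:

// Materialise the query once so the two subsequent
// queries do not each re-run the joins.
var allOrders = (from customer in Customers
                 from order in customer.Orders
                 from line in order.Lines
                 where customer.CanPlaceOrders()
                    && !order.IsShipped()
                 select new { Customer = customer,
                              Order = order,
                              Line = line }).ToList();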

Thursday 14 May 2009

Redundant Layers

There has been an interesting difference of opinion on the blogs of Ayende and Greg Young regarding the use of the Repository pattern. Whilst some great points were made on both sides, I tend to agree with Ayende's view that advances in tooling can render patterns we have become accustomed to using redundant. In this case he argues that NHibernate has left little reason to implement Repository any more.
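To illustrate the point, here is a sketch of the sort of code that makes a dedicated repository feel redundant. The ISession calls are standard NHibernate; the OrderService and Customer types are purely illustrative:

using System.Collections.Generic;
using NHibernate;
using NHibernate.Criterion;

public class OrderService
{
    private readonly ISession session;

    public OrderService(ISession session)
    {
        this.session = session;
    }

    public Customer FindCustomer(int id)
    {
        // The session already provides identity-mapped, cached access
        // to entities; wrapping this call in a CustomerRepository adds
        // a layer without adding behaviour.
        return session.Get<Customer>(id);
    }

    public IList<Customer> FindByName(string name)
    {
        return session.CreateCriteria(typeof(Customer))
            .Add(Restrictions.Eq("Name", name))
            .List<Customer>();
    }
}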

We segregate our applications into separate layers to help us manage the complexity of differing concerns. But we should accept that as new layers are added to an application, the overall complexity of that application increases. If I were able to succinctly express my entire application in a single text file, without recourse to layers, I would. That is not (yet) possible, so I use a layered approach.

As advances are made in tooling we need to constantly re-evaluate our use of layers and patterns to see if they have been superseded by technological advances. Take MVC as an example; had I been a Smalltalk programmer fifteen years ago I would have needed to create a controller for every widget. As tooling has advanced this has become unnecessary and we now have a different kind of MVC.

All too often I see applications where the architecture has been treated like a checklist of layers. DTOs - check, domain - check, web services - check, repository - check; you get the idea. For me there has to be a very good reason for creating a new layer in my application. Do I really need that abstraction? If not, I don't feel compelled to include it. 90% of the applications I work with use a domain model, but that doesn't mean I always include that layer. A simple CRUD application with a few validation rules probably only requires a DAL, a few DTOs, and a presentation layer, for example.

We need to constantly question ourselves and the decisions we make about architecture, and should only implement layers where we feel there is a need; not because that's the way we've always done it.

Wednesday 22 April 2009

Unity: Accessing the Creating Container

Here's something I learned today about Microsoft's Unity.

Often when an object is created through a container, you want to be able to access that container so that you can perform further resolutions. In the past my solution to this was to create a static class that held onto a single global instance of the container, essentially a Service Locator. The Service Locator could then be used by any class to access the same container.

I have never been 100% happy with this because, as we all know, static global data is a *bad* thing that introduces hard-to-test dependencies. Today I discovered that when a container resolves a class which has a constructor parameter of type IUnityContainer, the container will pass itself to that parameter. This means that the container can be passed down the chain of constructors without resorting to any global static nonsense.

Here is my test:


[Test]
public void ContainerPassesItselfToObjectsItCreates()
{
    var container = new UnityContainer();
    container.RegisterType<ClassCreatedByIoC>();

    var objectCreatedByIoC =
        container.Resolve<ClassCreatedByIoC>();
    Assert.That(objectCreatedByIoC.Container,
                Is.SameAs(container));
}

public class ClassCreatedByIoC
{
    public ClassCreatedByIoC(IUnityContainer container)
    {
        Container = container;
    }

    public IUnityContainer Container { get; private set; }
}



I assume this works with injection methods and properties as well, but I haven't tested it.
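For what it's worth, the property-injection version would presumably look something like this, using Unity's [Dependency] attribute; treat it as an untested sketch:

using Microsoft.Practices.Unity;

public class ClassWithInjectedProperty
{
    // Untested assumption: Unity should populate this property
    // during Resolve, just as it does the constructor parameter above.
    [Dependency]
    public IUnityContainer Container { get; set; }
}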

Thursday 19 February 2009

Edward Woodward

Yesterday, I created a class called TheEqualizer. This has made me far happier than it should.

Now I am worried that I'm more of a geek than I thought I was.

Sunday 15 February 2009

Scrum is Only a Recipe

I like Scrum. It offers a great template for getting started with agile project management. But the important word in the previous sentence is template. I see too many teams worrying about whether they are doing Scrum properly. That's not the point; doing Scrum properly is not the point. The point is to shape a process that suits your current situation and is as agile as possible. Scrum is just a useful first stepping stone in getting there.

The first time you cooked a recipe from a cookery book, you would follow it blindly. What else could you do? When you eat the food it may be too bland, so the next time you cook it you add a little salt; or it may be dry, so you adjust the cooking time. You use your previous experience to improve the outcome.

Agile is like that: execute, reflect, improve, repeat. Sprint planning not working for you? Find another way to do it. Users cannot go three weeks without changing the scope? Shorten the iteration length. Don't feel constrained by the Scrum process; remember it's only a template. Change the bits that aren't working for you.

Don't put up with crap food, improve your recipe.

Thursday 29 January 2009

The x => Factor

I may be swimming against the tide here, but I don't understand why it has become an idiom of C# to use x as the parameter name in lambda expressions, as in:
var matching = list.Where(x => x.Version > 1);
You wouldn't expect to see
foreach(var x in list)
{
    if(x.Version > 1)
    {
        yield return x;
    }
}
I suspect it has a lot to do with conciseness. Whilst I highly value conciseness, I value readability over conciseness.
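A descriptive name costs only a few characters. Assuming the list holds release objects (the type is illustrative), compare:

var matching = list.Where(release => release.Version > 1);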

Sunday 25 January 2009

News Feeds Changed to Feedburner

I have changed the news feeds for this blog to use FeedBurner. You may want to update your links.

Thursday 22 January 2009

Explicit Exception Messages

What Happened

I was adding code, deep in the bowels of the data persistence layer, to update auditing flags when entities were saved. The code received the UserName of the logged-in user, looked it up in the User table, and placed the UserId in the CreatedBy or ModifiedBy field as appropriate. If the user could not be found, a SecurityException was raised containing the message
User with a login of [user name] cannot be found in the User table
I tested it, everything worked fine, so I committed my changes and moved on to the next task, which was some changes to the Log-In page. Several hours later, I was ready to test my changes to the Log-In page. Whenever I hit the [OK] button, I got a SecurityException with the message
User with a login of [user name] cannot be found in the User table
Having already forgotten my earlier changes in the persistence layer, I took the message at face value and assumed I had messed up my changes to the Log-In page. I then wasted a couple of hours undoing and redoing my work, but to no avail; the exception stubbornly remained.

Eventually I was "smart" enough to follow the exception's stack trace, which led me to the code I had changed earlier in the data persistence layer. It turned out the membership provider was trying to update the LastLoggedIn date on the user; when my code tried to apply the ModifiedBy flag, it couldn't get the logged-in user because the log-in process wasn't complete yet.

Lessons Learnt

  • When you get an exception, don't just look at the message; use the stack trace (duh?)

  • Make sure you throw appropriate exception types. The fact that I was getting a SecurityException made me think this was a security issue, when in fact it was a data persistence issue. Something like DatabaseRecordNotFoundException would have been better.

  • Write exception messages that provide as much contextual information as possible. When writing the message, don't assume it will be read in the same context as it is thrown. What do I mean by this? The error message I wrote made perfect sense in the context in which I was working, the data persistence layer. But once it escaped the DP layer and bubbled up to the presentation layer, it made much less sense and sent me off in the wrong direction, wasting time. Imagine how much worse this would be if it occurred some months later in production, and the error message arrived via the help desk with no stack trace.

The error message I finally decided upon read:
During a data persistence operation for [entity name] an attempt was made to update the auditing field [CreatedBy/ModifiedBy]. This failed because a user with a login of [user name] cannot be found in the User table
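Putting the two lessons together, a sketch of the exception type and message might look like this (the class and its message format are illustrative, not the exact code from the project):

using System;

public class DatabaseRecordNotFoundException : Exception
{
    public DatabaseRecordNotFoundException(
        string entityName, string auditField, string userName)
        : base(string.Format(
            "During a data persistence operation for {0} an attempt " +
            "was made to update the auditing field {1}. This failed " +
            "because a user with a login of {2} cannot be found in " +
            "the User table",
            entityName, auditField, userName))
    {
    }
}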

A Pattern for Fluent Syntax

I've always been interested in making code as readable as possible and recently have become increasingly interested in fluent APIs. I have noticed that most of my fluent work has followed a similar pattern. I present this pattern here in the hope it helps others deliver their own fluent interfaces.

Fluent Interface

Typically, APIs provide methods with multiple parameters, many of which are not required for most uses. To manage this, many programming languages offer features such as method overloading or optional parameters. Whilst this often helps, it can also result in bloated and hard-to-use APIs.
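To picture the problem, here is the kind of overload explosion a fluent API aims to avoid (the Configure method and its parameters are purely hypothetical):

// Each new optional setting forces yet another overload.
public void Configure(string name)
{
    Configure(name, 30);
}

public void Configure(string name, int timeoutSeconds)
{
    Configure(name, timeoutSeconds, false);
}

public void Configure(string name, int timeoutSeconds, bool enableLogging)
{
    // apply the settings
}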

Fluent APIs take a different approach, breaking the API down into its constituent parts and guiding the programmer through its usage. A fluent API takes advantage of the features of modern development environments, particularly code completion, to provide this guidance. The aim of a fluent API is that it should be easy to write and easy to read.


How it Works

The fluent syntax of your API is provided by the public methods of a class; this class is named the lexicon. The key to the pattern is that each method called builds up some state in the lexicon and then returns a reference to a lexicon, so that another method can be called on it, and so on, building up a meaningful statement.


Consider the following fluent syntax for counting files in a folder:

int files = FileSystem.GetDrive("C:")
                      .RootDirectory()
                      .OpenFolder("Program Files")
                      .GetFileCount();

FileSystem.GetDrive is the entry-point into this fluent API. The entry-point will often be a constructor or static creation method. In all cases it will return a reference to a lexicon instance.

OpenFolder is a continuation. Continuations are methods that set some state within the lexicon and then return a reference to a lexicon.

GetFileCount is an end-point for this API. An end-point is a method that takes some action based on the state of the lexicon. It will return some value that is the result of the fluent API call, or void, but will not return an instance of a lexicon.

A lexicon can have multiple entry-points, continuations, and end-points. Where the fluent API is large, it is common to break it into more than one lexicon. Control is passed from one lexicon to another by a continuation that returns an instance of a different lexicon type. The new lexicon will probably need to know the context of the original; this is normally done by passing the original lexicon to the new one in its constructor.


When to use it

Paradoxically, whilst a fluent API makes reading and writing code that uses it easier, the code making up the API itself is often complicated by the addition of the fluent syntax. Consequently, Fluent Interface should only be used in areas where the effort of building the syntax is rewarded by frequent reuse. Typically, this applies to areas of cross-cutting concern such as configuration, logging, or auditing.

It is worth considering the complexity of the API before implementing it as fluent. For a simple interface with only a handful of methods it is probably not worth the effort.


Example (C#)

For this example we are going to consider a fluent syntax for obtaining information about the file system. I have split the API into multiple lexicons. This would possibly be considered overkill in the real world, but I have done so here to make the example more complete.

Firstly, let's look at the FileSystem lexicon, which provides the fluent syntax for dealing with drives on your system:

using System;
using System.IO;
using System.Linq;

public class FileSystem
{
    internal string DriveSpec { get; private set; }

    private FileSystem(string driveSpec)
    {
        DriveSpec = driveSpec;
    }

    // 1st entry point
    public static FileSystem GetDrive(string driveSpec)
    {
        if(!IsValidDrive(driveSpec))
        {
            throw new ArgumentException("Not a valid drive");
        }
        return new FileSystem(driveSpec);
    }

    // 2nd entry point
    public static FileSystem MapDrive(string location, char assignedLetter)
    {
        string driveSpec = assignedLetter + ":";

        if(IsValidDrive(driveSpec))
        {
            throw new ArgumentException("Drive is already assigned");
        }

        // NetworkUtilities is a helper class assumed to exist elsewhere
        NetworkUtilities.MapDrive(location, assignedLetter);

        return GetDrive(driveSpec);
    }

    // continuation transferring control to another lexicon
    public FolderLexicon RootDirectory()
    {
        return new FolderLexicon(this, "\\");
    }

    // 1st end point
    public long AvailableBytes
    {
        get
        {
            DriveInfo info = new DriveInfo(DriveSpec);
            return info.AvailableFreeSpace;
        }
    }

    // 2nd end point
    public string Name
    {
        get
        {
            DriveInfo info = new DriveInfo(DriveSpec);
            return info.VolumeLabel;
        }
    }

    // one possible implementation of the drive check used by the entry points
    private static bool IsValidDrive(string driveSpec)
    {
        return DriveInfo.GetDrives().Any(
            d => d.Name.StartsWith(driveSpec, StringComparison.OrdinalIgnoreCase));
    }
}

And now the FolderLexicon, used to hold the syntax for accessing information about file system folders:

using System;
using System.IO;
using System.Text;

public class FolderLexicon
{
    // holds a reference to the FileSystem
    // so the entire context can be discovered
    private readonly FileSystem drive;

    private readonly StringBuilder path = new StringBuilder();

    // entry point - note that this lexicon can
    // only be entered through a FileSystem
    internal FolderLexicon(FileSystem drive, string root)
    {
        this.drive = drive;
        path.Append(drive.DriveSpec);
        path.Append(root);
    }

    // continuation
    public FolderLexicon OpenFolder(string dirSpec)
    {
        path.Append(dirSpec);
        path.Append("\\");

        if(!Directory.Exists(path.ToString()))
        {
            throw new ArgumentException("Folder does not exist");
        }

        return this;
    }

    // end point
    public int GetFileCount()
    {
        DirectoryInfo dInfo = new DirectoryInfo(path.ToString());
        return dInfo.GetFiles().Length;
    }
}